
Releases: rstudio/keras3

keras 2.8.0

10 Feb 13:01
  • Breaking change: The semantics of passing a named list to keras_model() have changed.

    Previously, keras_model() would unname() supplied inputs and outputs.
    Then, if a named list was passed to subsequent fit()/evaluate()/call()/predict() invocations, matching of x and y was done against the model's input and output tensor$name values.
    Now, matching is done to names() of inputs and/or outputs supplied to keras_model().
    Call unname() on inputs and outputs to restore the old behavior, e.g.:

    keras_model(unname(inputs), unname(outputs))
    

    keras_model() can now accept a named list for multi-input and/or multi-output
    models. The named list is converted to a dict in Python.
    (Requires Tensorflow >= 2.4, Python >= 3.7). A sketch follows after this list.

    If inputs is a named list:

    • call(), fit(), evaluate(), and predict() methods can also
      accept a named list for x, with names matching to the
      names of inputs when the model was constructed.
      Positional matching of x is still also supported (requires python 3.7+).

    If outputs is a named list:

    • fit() and evaluate() methods can only
      accept a named list for y, with names matching to the
      names of outputs when the model was constructed.
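
    A hedged sketch of the new name-based matching (the layer shapes and data
    below are illustrative, not from the release notes):

    library(keras)

    inputs <- list(a = layer_input(shape = 1, name = "a"),
                   b = layer_input(shape = 1, name = "b"))
    outputs <- list(out = layer_concatenate(list(inputs$a, inputs$b)) %>%
                      layer_dense(units = 1))
    model <- keras_model(inputs, outputs)

    # x is matched to names(inputs); y is matched to names(outputs)
    model %>% compile(optimizer = "adam", loss = "mse")
    model %>% fit(
      x = list(a = matrix(runif(10)), b = matrix(runif(10))),
      y = list(out = matrix(runif(10)))
    )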
  • New layer layer_depthwise_conv_1d().

  • Models gain format() and print() S3 methods for compatibility
    with the latest reticulate. Both are powered by model$summary().

  • summary() method for Models gains arguments expand_nested and show_trainable,
    both defaulting to FALSE.
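
    A hedged one-line sketch (model here is assumed to be an already-defined
    Keras model):

    summary(model, expand_nested = TRUE, show_trainable = TRUE)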

  • keras_model_custom() is soft deprecated. Please define custom models by
    subclassing keras$Model directly using %py_class% or R6::R6Class().

  • Fixed warning issued by k_random_binomial().

  • Fixed error raised when k_random_binomial() was passed a non-floating dtype.

  • Added k_random_bernoulli() as an alias for k_random_binomial().

  • image_load() gains a color_mode argument.

  • Fixed issue where create_layer_wrapper() would not include arguments
    with a NULL default value in the returned wrapper.

  • Fixed issue in r_to_py.R6ClassGenerator (and %py_class%) where
    single-expression initialize functions defined without { would error.

  • Deprecated functions are no longer included in the package documentation index.

keras 2.7.0

09 Nov 19:56
  • Default Tensorflow + Keras version is now 2.7.

  • New API for constructing RNN (Recurrent Neural Network) layers. This is a
    flexible interface that complements the existing RNN layers. It is primarily
    intended for advanced / research applications, e.g., prototyping novel
    architectures. It allows you to compose an RNN with a custom "cell", a Keras layer that
    processes one step of a sequence.
    New symbols:

    • layer_rnn(), which can compose with builtin cells:
    • layer_gru_cell()
    • layer_lstm_cell()
    • layer_simple_rnn_cell()
    • layer_stacked_rnn_cells()
      To learn more, including how to make a custom cell layer, see the new vignette:
      "Working with RNNs". A minimal sketch of the new API follows below.
  • New dataset functions:

    • text_dataset_from_directory()
    • timeseries_dataset_from_array()
  • New layers:

    • layer_additive_attention()
    • layer_conv_lstm_1d()
    • layer_conv_lstm_3d()
  • layer_cudnn_gru() and layer_cudnn_lstm() are deprecated.
    layer_gru() and layer_lstm() will automatically use CuDNN if it is available.

  • layer_lstm() and layer_gru():
    default value for recurrent_activation changed
    from "hard_sigmoid" to "sigmoid".

  • layer_gru(): default value of reset_after changed from FALSE to TRUE.

  • New vignette: "Transfer learning and fine-tuning".

  • New applications:

    • MobileNet V3: application_mobilenet_v3_large(), application_mobilenet_v3_small()
    • ResNet: application_resnet101(), application_resnet152(), resnet_preprocess_input()
    • ResNet V2: application_resnet50_v2(), application_resnet101_v2(),
      application_resnet152_v2() and resnet_v2_preprocess_input()
    • EfficientNet: application_efficientnet_b{0,1,2,3,4,5,6,7}()
  • Many existing application_*() functions gain a classifier_activation argument,
    with default 'softmax'.
    Affected: application_{xception, inception_resnet_v2, inception_v3, mobilenet, vgg16, vgg19}()

  • New function %<-active%, an ergonomic wrapper around makeActiveBinding()
    for constructing Python @property decorated methods in %py_class%.

  • bidirectional() sequence processing layer wrapper gains a backward_layer argument.

  • Global pooling layers layer_global_{max,average}_pooling_{1,2,3}d() gain a
    keepdims argument with default value FALSE.

  • Signatures for layer functions are in the process of being simplified.
    Standard layer arguments are moving to ... where appropriate
    (and will need to be provided as named arguments).
    Standard layer arguments include:
    input_shape, batch_input_shape, batch_size, dtype,
    name, trainable, weights.
    Layers updated:
    layer_global_{max,average}_pooling_{1,2,3}d(),
    time_distributed(), bidirectional(),
    layer_gru(), layer_lstm(), layer_simple_rnn()

  • All backend functions with a shape argument (k_*(shape = )) now accept
    a mix of integer tensors and R numerics in the supplied list.

  • All layer functions now accept NA as a synonym for NULL in arguments
    that specify shape as a vector of dimension values,
    e.g., input_shape, batch_input_shape.

  • k_random_uniform() now automatically casts minval and maxval to the output dtype.

  • install_keras() gains a pip_ignore_installed argument, with default TRUE.

keras 2.6.1

30 Sep 19:20
  • New family of preprocessing layers. These are the spiritual successor to the tfdatasets::step_* family of data transformers (to be deprecated in a future release). See the new vignette "Working with Preprocessing Layers" for details.
    New functions:

    Image preprocessing:

    • layer_resizing()
    • layer_rescaling()
    • layer_center_crop()

    Image augmentation:

    • layer_random_crop()
    • layer_random_flip()
    • layer_random_translation()
    • layer_random_rotation()
    • layer_random_zoom()
    • layer_random_contrast()
    • layer_random_height()
    • layer_random_width()

    Categorical features preprocessing:

    • layer_category_encoding()
    • layer_hashing()
    • layer_integer_lookup()
    • layer_string_lookup()

    Numerical features preprocessing:

    • layer_normalization()
    • layer_discretization()

    These join the previous set of text preprocessing functions, each of which has some minor changes:

    • layer_text_vectorization() (changed arguments)
    • get_vocabulary()
    • set_vocabulary()
    • adapt()
  • adapt() changes (see the sketch after this list):

    • Now accepts all features preprocessing layers, previously
      only layer_text_vectorization() instances were valid.
    • reset_state argument is removed. It only ever accepted the default value of TRUE.
    • New arguments batch_size and steps.
    • Now returns the adapted layer invisibly for composability with %>% (previously returned NULL)
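
    A hedged sketch of adapting a preprocessing layer and composing it with %>%
    (the data and layer choice below are illustrative):

    library(keras)

    x_train <- matrix(rnorm(1000), ncol = 10)

    normalizer <- layer_normalization() %>%
      adapt(x_train)                      # returns the adapted layer invisibly

    model <- keras_model_sequential(input_shape = 10) %>%
      normalizer() %>%
      layer_dense(units = 1)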
  • get_vocabulary() gains an include_special_tokens argument.

  • set_vocabulary():

    • Now returns the adapted layer invisibly for composability with %>% (previously returned NULL)
    • Signature simplified. Deprecated arguments (df_data, oov_df_value) are now subsumed in ....
  • layer_text_vectorization():

    • valid values for argument output_mode change: "binary" is renamed to "multi_hot" and
      "tf-idf" is renamed to "tf_idf" (backwards compatibility is preserved).
    • Fixed an issue where output_mode = "int" would incorrectly
      return a ragged tensor output shape.
  • Existing layer instances gain the ability to be added to sequential models via a call. E.g.:

    layer <- layer_dense(units = 10)
    model <- keras_model_sequential(input_shape = c(1,2,3)) %>%
      layer()
  • Functions in the merging layer family gain the ability to return a layer instance if
    the first argument inputs is missing; see the sketch below. (Affected: layer_concatenate(), layer_add(),
    layer_subtract(), layer_multiply(), layer_average(), layer_maximum(),
    layer_minimum(), layer_dot())
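
    A hedged sketch (the tensors below are illustrative): a merging layer created
    without inputs is a layer instance that can be called later.

    library(keras)

    add <- layer_add()            # returns a layer instance
    a <- layer_input(shape = 4)
    b <- layer_input(shape = 4)
    out <- add(list(a, b))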

  • %py_class% gains the ability to delay initializing the Python session until first use.
    It is now safe to implement and export %py_class% objects in an R package.

  • Fixed an issue in layer_input() where passing a tensorflow DType object to argument dtype would throw an error.

  • Fixed an issue in compile() where passing an R function via an in-line
    call would result in an error from subsequent fit() calls.
    (e.g., compile(loss = function(y_true, y_pred) my_loss(y_true, y_pred))
    now succeeds)

  • clone_model() gains a clone_function argument that allows you to customize each layer as it is cloned.

  • Bumped minimum R version to 3.4. Expanded CI to test on all supported R versions. Fixed a regression that prevented package installation on R <= 3.4.

keras 2.6.0

23 Aug 16:43

Breaking changes (Tensorflow 2.6):

  • Note: The following breaking changes are specific to Tensorflow version 2.6.0.
    However, the keras R package maintains compatibility with multiple versions of Tensorflow/Keras.
    You can upgrade the R package and still preserve the previous behavior by
    installing a specific version of Tensorflow: keras::install_keras(tensorflow="2.4.0")

  • predict_proba() and predict_classes() were removed.

  • model_to_yaml() and model_from_yaml() were removed.

  • default changed: layer_text_vectorization(pad_to_max_tokens=FALSE)

  • set_vocabulary() arguments df_data and oov_df_value are removed. They are replaced by the new argument idf_weights.

New Features:

  • Default Tensorflow/Keras version is now 2.6

  • Introduced %py_class%, an R-language constructor for Python classes.
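
    A hedged sketch (the class and attribute names below are illustrative, not
    from the release notes): defining a small custom layer class with %py_class%.

    library(keras)

    ScaleLayer(keras$layers$Layer) %py_class% {
      initialize <- function(scale = 2, ...) {
        super$initialize(...)
        self$scale <- scale
      }
      call <- function(inputs, ...) {
        inputs * self$scale
      }
    }

    layer <- ScaleLayer(scale = 3)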

  • New vignettes:

    • Subclassing Python classes: How to use %py_class%.
    • Making new layers and models via subclassing.
    • Customizing what happens in fit (example of how to define a model, like a GAN, with a custom train step).
    • Writing your own callbacks.
  • The keras Python module is exported.

  • Major changes to the underlying handling of custom R6 layer classes.

    • A new r_to_py() method is provided for R6ClassGenerator objects.
    • R6 custom layers can now inherit directly from Python layer classes
      or other R6 custom layer classes.
    • Custom R6 layers can now be instantiated directly after conversion of the class generator with r_to_py(), without going through create_layer().
    • KerasLayer is deprecated (new classes should inherit directly from keras$layers$Layer).
    • KerasWrapper is deprecated (new classes should inherit directly from keras$layers$Wrapper).
    • create_wrapper() is deprecated (no longer needed, use create_layer() directly).
    • All layer class methods provided as R functions now have a super in scope that resolves to the Python super class object.
    • Methods of super can be accessed in the 3 common ways:
      • (Python 3 style): super()$"__init__"()
      • (Python 2 style): super(ClassName, self)$"__init__"()
      • (R6 style): super$initialize()
    • User defined custom classes that inherit from a Python type are responsible for calling super()$`__init__`(...) if appropriate.
    • Custom layers can now properly handle masks (#1225)
      • supports_masking = TRUE attribute is now supported
      • compute_mask() user defined method is now supported
    • call() methods now support a training argument, as well as any additional arbitrary user-defined arguments
  • Layer() custom layer constructor is now lazy about initializing the Python session and safe to use on the top level of an R package (#1229).

  • New function create_layer_wrapper() that can create a composing R function wrapper around a custom layer class.

  • Refactored install_keras() (along with tensorflow::install_tensorflow()).
    Installation should be more reliable for more users now.
    If you encounter installation issues, please file an issue: https://github.com/rstudio/keras/issues/new

    • Potentially breaking change: numeric versions supplied without a patchlevel now automatically pull the latest patch release.
      (e.g. install_keras(tensorflow="2.4") will install tensorflow version "2.4.2". Previously it would install "2.4.0")

    • pandas is now a default extra package installed by install_keras().

    • pyyaml is no longer installed by install_keras() if TF >= 2.6.

  • Loss functions:

    • All the loss functions gain the ability to return a callable
      (a keras$losses$Loss instance) if the y_true and y_pred arguments are missing
      (see the sketch after this list).

    • New builtin loss functions:

      • loss_huber()
      • loss_kl_divergence()
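
    A hedged sketch (the model below is illustrative): passing a Loss instance
    returned by loss_huber() to compile() instead of a bare function.

    library(keras)

    model <- keras_model_sequential(input_shape = 4) %>%
      layer_dense(units = 1)

    model %>% compile(
      optimizer = "rmsprop",
      loss = loss_huber(delta = 1.5)
    )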
  • Metric functions:

    • All the metric functions gain the ability to return a keras$metrics$Metric instance if called without y_true and y_pred (see the sketch after this list).

    • Each metric function is now documented separately, with a common ?Metric topic demonstrating example usage.

    • New built-in metrics:

      • metric_true_negatives()
      • metric_true_positives()
      • metric_false_negatives()
      • metric_false_positives()
      • metric_specificity_at_sensitivity()
      • metric_sensitivity_at_specificity()
      • metric_precision()
      • metric_precision_at_recall()
      • metric_sum()
      • metric_recall()
      • metric_recall_at_precision()
      • metric_root_mean_squared_error()
      • metric_sparse_categorical_accuracy()
      • metric_mean_tensor()
      • metric_mean_wrapper()
      • metric_mean_iou()
      • metric_mean_relative_error()
      • metric_logcosh_error()
      • metric_mean()
      • metric_cosine_similarity()
      • metric_categorical_hinge()
      • metric_accuracy()
      • metric_auc()
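
    A hedged sketch (the model object here is assumed to be an already-defined
    Keras model): metric functions called without y_true/y_pred return Metric
    instances that can be passed to compile().

    model %>% compile(
      optimizer = "rmsprop",
      loss = "binary_crossentropy",
      metrics = list(metric_auc(), metric_precision())
    )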
  • keras_model_sequential() gains the ability to accept arguments that
    define the input layer like input_shape and dtype.
    See ?keras_model_sequential for details and examples.
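
    A hedged sketch (the shapes below are illustrative): defining the input layer
    directly through keras_model_sequential() arguments.

    library(keras)

    model <- keras_model_sequential(input_shape = c(28, 28, 1), dtype = "float32") %>%
      layer_flatten() %>%
      layer_dense(units = 10, activation = "softmax")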

  • Many layers gained new arguments, coming to parity with the interface
    available in the latest Python version:

    layer name                    new argument
    --------------------------    --------------
    layer_gru                     time_major
    layer_lstm                    time_major
    layer_max_pooling_1d          data_format
    layer_conv_lstm_2d            return_state
    layer_depthwise_conv_2d       dilation_rate
    layer_conv_3d_transpose       dilation_rate
    layer_conv_1d                 groups
    layer_conv_2d                 groups
    layer_conv_3d                 groups
    layer_locally_connected_1d    implementation
    layer_locally_connected_2d    implementation
    layer_text_vectorization      vocabulary
  • The compile() method for keras models has been updated:

    • optimizer is now an optional argument. It defaults to "rmsprop" for regular keras models.
      Custom models can specify their own default optimizer.
    • loss is now an optional argument.
    • New optional arguments: run_eagerly, steps_per_execution.
    • target_tensors and sample_weight_mode must now be supplied as named arguments.
  • Added activation functions swish and gelu. (#1226)

  • set_vocabulary() gains a idf_weights argument.

  • All optimizers had the lr argument renamed to learning_rate
    (backwards compatibility is preserved; an R warning is now issued).

  • The glue package was added to Imports.

  • Refactored automated tests to more closely match the default installation procedure
    and compute environment of most users.

  • Expanded CI test coverage to include R devel, oldrel and 3.6.

keras 2.4.0

29 Mar 19:07
  • Use compat module when using set_session and get_session. (#1046)
  • Allows passing other arguments to keras_model eg name. (#1045)
  • Fixed bug when serializing models with the PlaidML backend. (#1084)
  • install_keras() no longer tries to install scipy because it's already installed by tensorflow (#1081)
  • Fixed bug with layer_text_vectorization with TensorFlow >= 2.3 (#1131)
  • Handle renamed argument text to input_text in text_one_hot (#1133)
  • Added TensorFlow 2.3 to the CI (#1102)
  • Fix C stack error when using Image Data Generators and Time Series generators with TensorFlow <= 2.0.1 (#1135)
  • Fixed warning raised in the initial epoch (@gsteinbu #1130)
  • Consistent result when using text_hashing_trick with missing values (@topepo #1048)
  • Added a custom error message for k_logsumexp as it was removed from Keras (#1137)
  • Fixed bug when printing models that are not built yet. (#1138)
  • Fix drop_duplicates DeprecationWarning with tf 2.3 (@gsteinbu #1139 #1141)
  • Fixed bug when plotting the model history if the model used an early stopping callback (#1140)
  • install_keras now installs a fixed version of h5py, because newer versions are backward incompatible. (#1142)
  • Simplify testing utilities by using a helper-* file. (#1173)
  • Deprecated hdf5_matrix if using TF >= 2.4 (#1175)
  • Fixed TensorFlow nightly installation on CI (#1176)
  • Support for TensorFlow v2.4: just small fixes for custom classes. (#1177)
  • Added untar argument to get_file as it seems to be slightly different from extract (#1179)
  • Warn when not using the tensorflow implementation of Keras (#1181)
  • Added layer_layer_normalization (#1183)
  • Added layer_multihead_attention (#1184)
  • Added image_dataset_from_directory (#1185)
  • Fixed bug when using a custom layer with the time_distributed() wrapper. (#1188)
  • Added the ragged argument to layer_input. (#1193)
  • Fixed *_generator deadlocks with recent versions of TensorFlow (#1197)

CRAN Release

21 May 13:28
Merge pull request #1041 from dfalbel/v2.3.0.0-rc0

Prepare for the 2.3.0.0 release

CRAN Release

08 Oct 18:10
Merge pull request #889 from dfalbel/prepare-release

Prepare 2.2.5.0 release

CRAN Release

06 Apr 01:54
Merge pull request #727 from dfalbel/cran/2.2.4.1

CRAN release