Releases: rstudio/keras3
keras 2.8.0
- Breaking change: The semantics of passing a named list to `keras_model()`
  have changed. Previously, `keras_model()` would `unname()` supplied
  `inputs` and `outputs`. Then, if a named list was passed to subsequent
  `fit()` / `evaluate()` / `call()` / `predict()` invocations, matching of
  `x` and `y` was done against the model's input and output `tensor$name`'s.
  Now, matching is done against the `names()` of the `inputs` and/or
  `outputs` supplied to `keras_model()`. Call `unname()` on `inputs` and
  `outputs` to restore the old behavior, e.g.:
  `keras_model(unname(inputs), unname(outputs))`.
- `keras_model()` can now accept a named list for multi-input and/or
  multi-output models. The named list is converted to a `dict` in Python.
  (Requires TensorFlow >= 2.4, Python >= 3.7.)
  - If `inputs` is a named list: the `call()`, `fit()`, `evaluate()`, and
    `predict()` methods can also accept a named list for `x`, with names
    matching the names of `inputs` when the model was constructed.
    Positional matching of `x` is still supported (requires Python 3.7+).
  - If `outputs` is a named list: the `fit()` and `evaluate()` methods can
    only accept a named list for `y`, with names matching the names of
    `outputs` when the model was constructed.
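A minimal sketch of the new name-based matching (the layer shapes and
made-up data here are illustrative, not from the changelog):

```r
library(keras)

inputs <- list(
  a = layer_input(shape = 1),
  b = layer_input(shape = 1)
)
output <- layer_concatenate(list(inputs$a, inputs$b)) %>%
  layer_dense(units = 1)

model <- keras_model(inputs, output)
model %>% compile(optimizer = "rmsprop", loss = "mse")

# `x` is matched by names() of `inputs`, so order does not matter:
model %>% fit(
  x = list(b = matrix(rnorm(8)), a = matrix(rnorm(8))),
  y = matrix(rnorm(8)),
  epochs = 1, verbose = 0
)
```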
- New layer: `layer_depthwise_conv_1d()`.
- Models gain `format()` and `print()` S3 methods for compatibility with the
  latest reticulate. Both are powered by `model$summary()`.
- The `summary()` method for models gains arguments `expand_nested` and
  `show_trainable`, both defaulting to `FALSE`.
- `keras_model_custom()` is soft deprecated. Please define custom models by
  subclassing `keras$Model` directly, using `%py_class%` or
  `R6::R6Class()`.
- Fixed warning issued by `k_random_binomial()`.
- Fixed error raised when `k_random_binomial()` was passed a non-floating
  dtype.
- Added `k_random_bernoulli()` as an alias for `k_random_binomial()`.
- `image_load()` gains a `color_mode` argument.
- Fixed issue where `create_layer_wrapper()` would not include arguments
  with a `NULL` default value in the returned wrapper.
- Fixed issue in `r_to_py.R6ClassGenerator` (and `%py_class%`) where
  single-expression `initialize` functions defined without `{` would error.
- Deprecated functions are no longer included in the package documentation
  index.
keras 2.7.0
- Default TensorFlow + Keras version is now 2.7.
- New API for constructing RNN (Recurrent Neural Network) layers. This is a
  flexible interface that complements the existing RNN layers. It is
  primarily intended for advanced / research applications, e.g., prototyping
  novel architectures. It allows you to compose an RNN with a custom "cell",
  a Keras layer that processes one step of a sequence.
  New symbol: `layer_rnn()`, which can compose with the builtin cells:
  - `layer_gru_cell()`
  - `layer_lstm_cell()`
  - `layer_simple_rnn_cell()`
  - `layer_stacked_rnn_cells()`
  To learn more, including how to make a custom cell layer, see the new
  vignette: "Working with RNNs".
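For instance, `layer_rnn()` can wrap a builtin cell like this (a sketch;
the input shape and unit counts are illustrative):

```r
library(keras)

# An LSTM built by composing layer_rnn() with a cell,
# roughly equivalent to layer_lstm(units = 32):
model <- keras_model_sequential(input_shape = c(10, 8)) %>%
  layer_rnn(cell = layer_lstm_cell(units = 32)) %>%
  layer_dense(units = 1)
```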
- New dataset functions:
  - `text_dataset_from_directory()`
  - `timeseries_dataset_from_array()`
- New layers:
  - `layer_additive_attention()`
  - `layer_conv_lstm_1d()`
  - `layer_conv_lstm_3d()`
- `layer_cudnn_gru()` and `layer_cudnn_lstm()` are deprecated.
  `layer_gru()` and `layer_lstm()` will automatically use CuDNN if it is
  available.
- `layer_lstm()` and `layer_gru()`: default value for
  `recurrent_activation` changed from `"hard_sigmoid"` to `"sigmoid"`.
- `layer_gru()`: default value for `reset_after` changed from `FALSE` to
  `TRUE`.
- New vignette: "Transfer learning and fine-tuning".
- New applications:
  - MobileNet V3: `application_mobilenet_v3_large()`,
    `application_mobilenet_v3_small()`
  - ResNet: `application_resnet101()`, `application_resnet152()`,
    `resnet_preprocess_input()`
  - ResNet V2: `application_resnet50_v2()`, `application_resnet101_v2()`,
    `application_resnet152_v2()`, and `resnet_v2_preprocess_input()`
  - EfficientNet: `application_efficientnet_b{0,1,2,3,4,5,6,7}()`
- Many existing `application_*()`'s gain argument `classifier_activation`,
  with default `'softmax'`. Affected:
  `application_{xception, inception_resnet_v2, inception_v3, mobilenet, vgg16, vgg19}()`
- New function `%<-active%`, an ergonomic wrapper around
  `makeActiveBinding()` for constructing Python `@property`-decorated
  methods in `%py_class%`.
- The `bidirectional()` sequence processing layer wrapper gains a
  `backwards_layer` argument.
- Global pooling layers `layer_global_{max,average}_pooling_{1,2,3}d()`
  gain a `keepdims` argument with default value `FALSE`.
- Signatures for layer functions are in the process of being simplified.
  Standard layer arguments are moving to `...` where appropriate (and will
  need to be provided as named arguments). Standard layer arguments
  include: `input_shape`, `batch_input_shape`, `batch_size`, `dtype`,
  `name`, `trainable`, `weights`. Layers updated:
  `layer_global_{max,average}_pooling_{1,2,3}d()`, `time_distributed()`,
  `bidirectional()`, `layer_gru()`, `layer_lstm()`, `layer_simple_rnn()`.
- All backend functions with a shape argument (`k_*(shape = )`) now accept
  a mix of integer tensors and R numerics in the supplied list.
- All layer functions now accept `NA` as a synonym for `NULL` in arguments
  that specify shape as a vector of dimension values, e.g., `input_shape`,
  `batch_input_shape`.
- `k_random_uniform()` now automatically casts `minval` and `maxval` to the
  output dtype.
- `install_keras()` gains an argument with default
  `pip_ignore_installed = TRUE`.
keras 2.6.1
- New family of preprocessing layers. These are the spiritual successor to
  the `tfdatasets::step_*` family of data transformers (to be deprecated in
  a future release). See the new vignette "Working with Preprocessing
  Layers" for details. New functions:
  - Image preprocessing:
    - `layer_resizing()`
    - `layer_rescaling()`
    - `layer_center_crop()`
  - Image augmentation:
    - `layer_random_crop()`
    - `layer_random_flip()`
    - `layer_random_translation()`
    - `layer_random_rotation()`
    - `layer_random_zoom()`
    - `layer_random_contrast()`
    - `layer_random_height()`
    - `layer_random_width()`
  - Categorical features preprocessing:
    - `layer_category_encoding()`
    - `layer_hashing()`
    - `layer_integer_lookup()`
    - `layer_string_lookup()`
  - Numerical features preprocessing:
    - `layer_normalization()`
    - `layer_discretization()`

  These join the previous set of text preprocessing functions, each of
  which has some minor changes:
  - `layer_text_vectorization()` (changed arguments)
  - `get_vocabulary()`
  - `set_vocabulary()`
  - `adapt()`
- `adapt()` changes:
  - Now accepts all feature preprocessing layers; previously only
    `layer_text_vectorization()` instances were valid.
  - The `reset_state` argument is removed. It only ever accepted the
    default value of `TRUE`.
  - New arguments: `batch_size` and `steps`.
  - Now returns the adapted layer invisibly for composability with `%>%`
    (previously returned `NULL`).
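Because `adapt()` now accepts any feature preprocessing layer and returns
it invisibly, it composes with `%>%` (a sketch with made-up data):

```r
library(keras)

x <- matrix(rnorm(200), ncol = 2)

# Fit the normalization statistics, then apply the adapted layer:
normalizer <- layer_normalization() %>% adapt(x)
normalized <- normalizer(x)
```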
- `get_vocabulary()` gains an `include_special_tokens` argument.
- `set_vocabulary()`:
  - Now returns the adapted layer invisibly for composability with `%>%`
    (previously returned `NULL`).
  - Signature simplified. Deprecated arguments (`df_data`, `oov_df_value`)
    are now subsumed in `...`.
- `layer_text_vectorization()`:
  - Valid values for argument `output_mode` change: `"binary"` is renamed
    to `"multi_hot"` and `"tf-idf"` is renamed to `"tf_idf"` (backwards
    compatibility is preserved).
  - Fixed an issue where valid values of `output_mode = "int"` would
    incorrectly return a ragged tensor output shape.
- Existing layer instances gain the ability to be added to sequential
  models via a call. E.g.:

      layer <- layer_dense(units = 10)
      model <- keras_model_sequential(input_shape = c(1, 2, 3)) %>% layer()
- Functions in the merging layer family gain the ability to return a layer
  instance if the first argument `inputs` is missing. (Affected:
  `layer_concatenate()`, `layer_add()`, `layer_subtract()`,
  `layer_multiply()`, `layer_average()`, `layer_maximum()`,
  `layer_minimum()`, `layer_dot()`.)
- `%py_class%` gains the ability to delay initializing the Python session
  until first use. It is now safe to implement and export `%py_class%`
  objects in an R package.
- Fixed an issue in `layer_input()` where passing a tensorflow `DType`
  object to argument `dtype` would throw an error.
- Fixed an issue in `compile()` where passing an R function via an in-line
  call would result in an error from subsequent `fit()` calls. (E.g.,
  `compile(loss = function(y_true, y_pred) my_loss(y_true, y_pred))` now
  succeeds.)
- `clone_model()` gains a `clone_function` argument that allows you to
  customize each layer as it is cloned.
- Bumped minimum R version to 3.4. Expanded CI to test on all supported R
  versions. Fixed regression that prevented package installation on
  R <= 3.4.
keras 2.6.0
Breaking changes (Tensorflow 2.6):
- Note: The following breaking changes are specific to Tensorflow version
  2.6.0. However, the keras R package maintains compatibility with multiple
  versions of Tensorflow/Keras. You can upgrade the R package and still
  preserve the previous behavior by installing a specific version of
  Tensorflow: `keras::install_keras(tensorflow = "2.4.0")`
- `predict_proba()` and `predict_classes()` were removed.
- `model_to_yaml()` and `model_from_yaml()` were removed.
- Default changed: `layer_text_vectorization(pad_to_max_tokens = FALSE)`.
- `set_vocabulary()` arguments `df_data` and `oov_df_value` are removed.
  They are replaced by the new argument `idf_weights`.
New Features:
- Default Tensorflow/Keras version is now 2.6.
- Introduced `%py_class%`, an R-language constructor for Python classes.
- New vignettes:
  - Subclassing Python classes: how to use `%py_class%`.
  - Making new layers and models via subclassing.
  - Customizing what happens in fit (an example of how to define a model,
    like a GAN, with a custom train step).
  - Writing your own callbacks.
- The `keras` Python module is exported.
- Major changes to the underlying handling of custom R6 layer classes:
  - A new `r_to_py()` method is provided for `R6ClassGenerator` objects.
  - R6 custom layers can now inherit directly from Python layer classes or
    other R6 custom layer classes.
  - Custom R6 layers can now be instantiated directly after conversion of
    the class generator with `r_to_py()`, without going through
    `create_layer()`.
  - `KerasLayer` is deprecated (new classes should inherit directly from
    `keras$layers$Layer`).
  - `KerasWrapper` is deprecated (new classes should inherit directly from
    `keras$layers$Wrapper`).
  - `create_wrapper()` is deprecated (no longer needed; use
    `create_layer()` directly).
  - All layer class methods provided as R functions now have a `super` in
    scope that resolves to the Python super class object.
  - Methods of `super` can be accessed in the 3 common ways:
    - (Python 3 style): ``super()$`__init__`()``
    - (Python 2 style): ``super(ClassName, self)$`__init__`()``
    - (R6 style): `super$initialize()`
  - User-defined custom classes that inherit from a Python type are
    responsible for calling ``super()$`__init__`(...)`` if appropriate.
  - Custom layers can now properly handle masks (#1225):
    - The `supports_masking = TRUE` attribute is now supported.
    - A user-defined `compute_mask()` method is now supported.
  - `call()` methods now support a `training` argument, as well as any
    additional arbitrary user-defined arguments.
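A sketch of a custom layer inheriting directly from `keras$layers$Layer`
via `R6::R6Class()` (the class name and scaling behavior are illustrative;
see the subclassing vignettes for the authoritative pattern):

```r
library(keras)

LayerScale <- R6::R6Class(
  "LayerScale",
  inherit = keras$layers$Layer,
  public = list(
    scale = NULL,
    initialize = function(scale = 2, ...) {
      # Classes inheriting from a Python type call the Python super:
      super$initialize(...)
      self$scale <- scale
    },
    call = function(inputs, ...) {
      inputs * self$scale
    }
  )
)

# Convert the generator and instantiate directly,
# without going through create_layer():
LayerScale <- r_to_py(LayerScale)
layer <- LayerScale(scale = 3)
```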
- The `Layer()` custom layer constructor is now lazy about initializing the
  Python session and safe to use on the top level of an R package (#1229).
- New function `create_layer_wrapper()` that can create a composing R
  function wrapper around a custom layer class.
- Refactored `install_keras()` (along with
  `tensorflow::install_tensorflow()`). Installation should be more reliable
  for more users now. If you encounter installation issues, please file an
  issue: https://github.com/rstudio/keras/issues/new
  - Potentially breaking change: numeric versions supplied without a
    patchlevel now automatically pull the latest patch release. (E.g.,
    `install_keras(tensorflow = "2.4")` will install tensorflow version
    "2.4.2"; previously it would install "2.4.0".)
  - pandas is now a default extra package installed by `install_keras()`.
  - pyyaml is no longer installed by `install_keras()` if TF >= 2.6.
- Loss functions:
  - All the loss functions gain the ability to return a callable (a
    `keras$losses$Loss` instance) if the `y_true` and `y_pred` arguments
    are missing.
  - New builtin loss functions:
    - `loss_huber()`
    - `loss_kl_divergence()`
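For example, calling a loss function without `y_true`/`y_pred` now yields a
`Loss` instance that can be handed to `compile()` (the `delta` argument and
model are illustrative):

```r
library(keras)

# Returns a keras$losses$Loss instance rather than computing a value:
loss_obj <- loss_huber(delta = 1.0)

model <- keras_model_sequential(input_shape = 1) %>%
  layer_dense(units = 1)
model %>% compile(optimizer = "rmsprop", loss = loss_obj)
```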
- Metric functions:
  - All the metric functions gain the ability to return a
    `keras$metrics$Metric` instance if called without `y_true` and
    `y_pred`.
  - Each metric function is now documented separately, with a common
    `?Metric` topic demonstrating example usage.
  - New built-in metrics:
    - `metric_true_negatives()`
    - `metric_true_positives()`
    - `metric_false_negatives()`
    - `metric_false_positives()`
    - `metric_specificity_at_sensitivity()`
    - `metric_sensitivity_at_specificity()`
    - `metric_precision()`
    - `metric_precision_at_recall()`
    - `metric_sum()`
    - `metric_recall()`
    - `metric_recall_at_precision()`
    - `metric_root_mean_squared_error()`
    - `metric_sparse_categorical_accuracy()`
    - `metric_mean_tensor()`
    - `metric_mean_wrapper()`
    - `metric_mean_iou()`
    - `metric_mean_relative_error()`
    - `metric_logcosh_error()`
    - `metric_mean()`
    - `metric_cosine_similarity()`
    - `metric_categorical_hinge()`
    - `metric_accuracy()`
    - `metric_auc()`
- `keras_model_sequential()` gains the ability to accept arguments that
  define the input layer, like `input_shape` and `dtype`. See
  `?keras_model_sequential` for details and examples.
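A sketch of defining the input layer directly in
`keras_model_sequential()` (the shape and units are illustrative):

```r
library(keras)

model <- keras_model_sequential(input_shape = c(784), dtype = "float32") %>%
  layer_dense(units = 10, activation = "softmax")
```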
- Many layers gained new arguments, coming to parity with the interface
  available in the latest Python version:

  | layer name                   | new argument     |
  |------------------------------|------------------|
  | `layer_gru`                  | `time_major`     |
  | `layer_lstm`                 | `time_major`     |
  | `layer_max_pooling_1d`       | `data_format`    |
  | `layer_conv_lstm_2d`         | `return_state`   |
  | `layer_depthwise_conv_2d`    | `dilation_rate`  |
  | `layer_conv_3d_transpose`    | `dilation_rate`  |
  | `layer_conv_1d`              | `groups`         |
  | `layer_conv_2d`              | `groups`         |
  | `layer_conv_3d`              | `groups`         |
  | `layer_locally_connected_1d` | `implementation` |
  | `layer_locally_connected_2d` | `implementation` |
  | `layer_text_vectorization`   | `vocabulary`     |
- The `compile()` method for keras models has been updated:
  - `optimizer` is now an optional argument. It defaults to `"rmsprop"` for
    regular keras models. Custom models can specify their own default
    optimizer.
  - `loss` is now an optional argument.
  - New optional arguments: `run_eagerly`, `steps_per_execution`.
  - `target_tensors` and `sample_weight_mode` must now be supplied as named
    arguments.
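Since `optimizer` is now optional, a model can be compiled with only a loss
(a sketch; the model is illustrative):

```r
library(keras)

model <- keras_model_sequential(input_shape = 4) %>%
  layer_dense(units = 1)

# Falls back to the "rmsprop" default optimizer:
model %>% compile(loss = "mse")
```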
- Added activation functions swish and gelu. (#1226)
- `set_vocabulary()` gains an `idf_weights` argument.
- All optimizers had argument `lr` renamed to `learning_rate`. (Backwards
  compatibility is preserved; an R warning is now issued.)
- The glue package was added to Imports.
- Refactored automated tests to more closely match the default installation
  procedure and compute environment of most users.
- Expanded CI test coverage to include R devel, oldrel, and 3.6.
keras 2.4.0
- Use compat module when using `set_session` and `get_session`. (#1046)
- Allow passing other arguments to `keras_model`, e.g. `name`. (#1045)
- Fixed bug when serializing models with the plaidml backend. (#1084)
- `install_keras` no longer tries to install scipy, because it's already
  installed by tensorflow. (#1081)
- Fixed bug with `layer_text_vectorization` with TensorFlow >= 2.3. (#1131)
- Handle renamed argument `text` to `input_text` in `text_one_hot`. (#1133)
- Added TensorFlow 2.3 to the CI. (#1102)
- Fix C stack error when using Image Data Generators and Time Series
  generators with TensorFlow <= 2.0.1. (#1135)
- Fixed warning raised in the initial epoch. (@gsteinbu #1130)
- Consistent result when using `text_hashing_trick` with missing values.
  (@topepo #1048)
- Added a custom error message for `k_logsumexp`, as it was removed from
  Keras. (#1137)
- Fixed bug when printing models that are not built yet. (#1138)
- Fix drop_duplicates DeprecationWarning with tf 2.3. (@gsteinbu #1139
  #1141)
- Fixed bug when plotting the model history if the model used an early
  stopping callback. (#1140)
- `install_keras` now installs a fixed version of h5py, because newer
  versions are backward incompatible. (#1142)
- Simplify testing utilities by using a `helper-*` file. (#1173)
- Deprecated `hdf5_matrix` if using TF >= 2.4. (#1175)
- Fixed TensorFlow nightly installation on CI. (#1176)
- Support for TensorFlow v2.4: just small fixes for custom classes. (#1177)
- Added `untar` argument to `get_file`, as it seems to be slightly
  different from `extract`. (#1179)
- Warn when not using the tensorflow implementation of Keras. (#1181)
- Added `layer_layer_normalization`. (#1183)
- Added `layer_multihead_attention`. (#1184)
- Added `image_dataset_from_directory`. (#1185)
- Fixed bug when using a custom layer with a time distributed adverb.
  (#1188)
- Added the `ragged` argument to `layer_input`. (#1193)
- Fixed `*_generator` deadlocks with recent versions of TensorFlow. (#1197)
CRAN Release
- Merge pull request #1041 from dfalbel/v2.3.0.0-rc0: Prepare for the
  2.3.0.0 release

CRAN Release
- Merge pull request #889 from dfalbel/prepare-release: Prepare 2.2.5.0
  release

CRAN Release
- Merge pull request #727 from dfalbel/cran/2.2.4.1: CRAN release