Created by: dependabot[bot]
Bumps tensorflow-gpu from 1.14.0 to 1.15.0.
Release notes
Sourced from tensorflow-gpu's releases.
TensorFlow 1.15.0
Release 1.15.0
This is the last 1.x release for TensorFlow. We do not expect to update the 1.x branch with features, although we will issue patch releases to fix vulnerabilities for at least one year.
Major Features and Improvements
- As announced, the `tensorflow` pip package will by default include GPU support (same as `tensorflow-gpu` now) for the platforms we currently have GPU support for (Linux and Windows). It will work on machines with and without Nvidia GPUs. `tensorflow-gpu` will still be available, and CPU-only packages can be downloaded as `tensorflow-cpu` for users who are concerned about package size.
- TensorFlow 1.15 contains a complete implementation of the 2.0 API in its `compat.v2` module. It contains a copy of the 1.15 main module (without `contrib`) in the `compat.v1` module. TensorFlow 1.15 is able to emulate 2.0 behavior using the `enable_v2_behavior()` function. This enables writing forward-compatible code: by explicitly importing either `tensorflow.compat.v1` or `tensorflow.compat.v2`, you can ensure that your code works without modification against an installation of 1.15 or 2.0 (a minimal import sketch follows this list).
- `EagerTensor` now supports the numpy buffer interface for tensors.
- Add toggles `tf.enable_control_flow_v2()` and `tf.disable_control_flow_v2()` for enabling/disabling v2 control flow.
- Enable v2 control flow as part of `tf.enable_v2_behavior()` and `TF2_BEHAVIOR=1`.
- AutoGraph translates Python control flow into TensorFlow expressions, allowing users to write regular Python inside `tf.function`-decorated functions. AutoGraph is also applied in functions used with the `tf.data`, `tf.distribute` and `tf.keras` APIs.
- Adds `enable_tensor_equality()`, which switches the behavior such that:
  - Tensors are no longer hashable.
  - Tensors can be compared with `==` and `!=`, yielding a Boolean Tensor with element-wise comparison results. This will be the default behavior in 2.0.
- Auto Mixed-Precision graph optimizer simplifies converting models to `float16` for acceleration on Volta and Turing Tensor Cores. This feature can be enabled by wrapping an optimizer class with `tf.train.experimental.enable_mixed_precision_graph_rewrite()` (see the sketch after this list).
- Add environment variable `TF_CUDNN_DETERMINISTIC`. Setting it to "true" or "1" forces the selection of deterministic cuDNN convolution and max-pooling algorithms. When this is enabled, the algorithm selection procedure itself is also deterministic (a one-line snippet follows this list).
- TensorRT
  - Migrate TensorRT conversion sources from contrib to the compiler directory in preparation for TF 2.0.
  - Add an additional, user-friendly `TrtGraphConverter` API for TensorRT conversion.
  - Expand support for TensorFlow operators in TensorRT conversion (e.g. `Gather`, `Slice`, `Pack`, `Unpack`, `ArgMin`, `ArgMax`, `DepthSpaceShuffle`).
  - Support the TensorFlow operator `CombinedNonMaxSuppression` in TensorRT conversion, which significantly accelerates object detection models.
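To illustrate the forward-compatibility note above, here is a minimal sketch (not part of the upstream notes) of code written against the explicit `compat.v1` module so the same file runs on a 1.15 or a 2.0 installation; the placeholder shapes and values are made up.

```python
import tensorflow.compat.v1 as tf  # explicit 1.x API, present in both 1.15 and 2.0

tf.disable_v2_behavior()  # keeps 1.x semantics; only has an effect when running under 2.0

# A tiny graph-mode program using only compat.v1 symbols.
x = tf.placeholder(tf.float32, shape=[None, 3])
y = tf.reduce_sum(x, axis=1)

with tf.Session() as sess:
    print(sess.run(y, feed_dict={x: [[1.0, 2.0, 3.0]]}))  # -> [6.]
```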
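The automatic mixed-precision rewrite is enabled by wrapping an existing optimizer, as described above; the optimizer choice and learning rate in this sketch are illustrative only.

```python
import tensorflow as tf

opt = tf.train.AdamOptimizer(learning_rate=1e-3)
# Wrap the optimizer so the graph rewrite inserts float16 casts and loss scaling;
# the speedup applies on GPUs with Tensor Cores (Volta/Turing).
opt = tf.train.experimental.enable_mixed_precision_graph_rewrite(opt)
```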
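For `TF_CUDNN_DETERMINISTIC`, any way of setting the environment variable before TensorFlow initializes cuDNN works; the Python form below is just one option.

```python
import os

# Must be set before cuDNN is initialized (i.e. before any GPU ops run).
os.environ['TF_CUDNN_DETERMINISTIC'] = '1'

import tensorflow as tf  # imported after the variable is set
```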
Breaking Changes
- TensorFlow code now produces two different pip packages: `tensorflow_core`, containing all the code (in the future it will contain only the private implementation), and `tensorflow`, which is a virtual pip package that forwards to `tensorflow_core` (and in the future will contain only the public API of TensorFlow). We don't expect this to be breaking unless you were importing directly from the implementation.
- TensorFlow 1.15 is built using devtoolset7 (GCC7) on Ubuntu 16. This may lead to ABI incompatibilities with extensions built against earlier versions of TensorFlow.
- Deprecated the use of `constraint=` and `.constraint` with ResourceVariable.
- `tf.keras`:
  - `OMP_NUM_THREADS` is no longer used by the default Keras config. To configure the number of threads, use the `tf.config.threading` APIs.
  - `tf.keras.model.save_model` and `model.save` now default to saving a TensorFlow SavedModel.
  - `keras.backend.resize_images` (and consequently `keras.layers.Upsampling2D`) behavior has changed: a bug in the resizing implementation was fixed.
  - Layers now default to `float32` and automatically cast their inputs to the layer's dtype. If you had a model that used `float64`, it will probably silently use `float32` in TensorFlow 2, and a warning will be issued that starts with "Layer 'layer-name' is casting an input tensor from dtype float64 to the layer's dtype of float32." To fix this, either set the default dtype to float64 with `tf.keras.backend.set_floatx('float64')`, or pass `dtype='float64'` to each of the Layer constructors (a short sketch follows this section). See `tf.keras.layers.Layer` for more information.
  - Some `tf.assert_*` methods now raise assertions at operation creation time (i.e. when the Python line executes) if the input tensors' values are known at that time, rather than during `session.run()`. When this happens, a no-op is returned and the input tensors are marked non-feedable; in other words, if they are used as keys in the `feed_dict` argument to `session.run()`, an error will be raised. Also, because some assert ops don't make it into the graph, the graph structure changes. A different graph can result in different per-op random seeds when they are not given explicitly (most often).
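A short sketch of the two workarounds suggested in the `float32` casting note above; the layer here is a stand-in for a real model.

```python
import tensorflow as tf

# Option 1: make float64 the global default dtype for Keras layers.
tf.keras.backend.set_floatx('float64')

# Option 2: keep the float32 default but request float64 per layer.
layer = tf.keras.layers.Dense(4, dtype='float64')
```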
Bug Fixes and Other Changes
... (truncated)
- `tf.estimator`:
  - `tf.keras.estimator.model_to_estimator` now supports exporting to `tf.train.Checkpoint` format, which allows the saved checkpoints to be compatible with `model.load_weights`.
  - Fix tests in canned estimators.
  - Expose Head as public API.
  - Fixes critical bugs that help with `DenseFeatures` usability in TF2.
- `tf.data`:
  - Promoting `unbatch` from experimental to core API (a small example follows this list).
  - Adding support for datasets as inputs to `from_tensors` and `from_tensor_slices`, and for batching and unbatching of nested datasets.
- `tf.keras`:
  - `tf.keras.estimator.model_to_estimator` now supports exporting to `tf.train.Checkpoint` format, which allows the saved checkpoints to be compatible with `model.load_weights`.
  - Saving a Keras Model using `tf.saved_model.save` now saves the list of variables, trainable variables, regularization losses, and the call function.
  - Deprecated `tf.keras.experimental.export_saved_model` and `tf.keras.experimental.function`. Please use `tf.keras.models.save_model(..., save_format='tf')` and `tf.keras.models.load_model` instead (see the sketch after this list).
  - Add an `implementation=3` mode for `tf.keras.layers.LocallyConnected2D` and `tf.keras.layers.LocallyConnected1D` layers, using `tf.SparseTensor` to store weights, allowing a dramatic speedup for large sparse models.
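As a usage note for the `export_saved_model` deprecation above, here is a minimal sketch of the suggested replacement; the toy model and the `/tmp/saved_model` path are placeholders.

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(3,))])
model.compile(optimizer='adam', loss='mse')

# Save as a TensorFlow SavedModel (instead of tf.keras.experimental.export_saved_model).
tf.keras.models.save_model(model, '/tmp/saved_model', save_format='tf')
reloaded = tf.keras.models.load_model('/tmp/saved_model')
```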
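And a small example of the now-core `unbatch` transformation from the `tf.data` items above; the input values are made up.

```python
import tensorflow as tf

ds = tf.data.Dataset.from_tensor_slices([[1, 2], [3, 4]])  # two vector elements
flat = ds.unbatch()  # yields the scalars 1, 2, 3, 4 as separate elements
```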
Changelog
Sourced from tensorflow-gpu's changelog.
Release 1.15.0
This is the last 1.x release for TensorFlow. We do not expect to update the 1.x branch with features, although we will issue patch releases to fix vulnerabilities for at least one year.
Major Features and Improvements
- As announced, the `tensorflow` pip package will by default include GPU support (same as `tensorflow-gpu` now) for the platforms we currently have GPU support for (Linux and Windows). It will work on machines with and without Nvidia GPUs. `tensorflow-gpu` will still be available, and CPU-only packages can be downloaded as `tensorflow-cpu` for users who are concerned about package size.
- TensorFlow 1.15 contains a complete implementation of the 2.0 API in its `compat.v2` module. It contains a copy of the 1.15 main module (without `contrib`) in the `compat.v1` module. TensorFlow 1.15 is able to emulate 2.0 behavior using the `enable_v2_behavior()` function. This enables writing forward-compatible code: by explicitly importing either `tensorflow.compat.v1` or `tensorflow.compat.v2`, you can ensure that your code works without modification against an installation of 1.15 or 2.0.
- EagerTensor now supports the numpy buffer interface for tensors.
- Add toggles `tf.enable_control_flow_v2()` and `tf.disable_control_flow_v2()` for enabling/disabling v2 control flow.
- Enable v2 control flow as part of `tf.enable_v2_behavior()` and `TF2_BEHAVIOR=1`.
- AutoGraph translates Python control flow into TensorFlow expressions, allowing users to write regular Python inside `tf.function`-decorated functions. AutoGraph is also applied in functions used with the `tf.data`, `tf.distribute` and `tf.keras` APIs.
- Adds `enable_tensor_equality()`, which switches the behavior such that:
  - Tensors are no longer hashable.
  - Tensors can be compared with `==` and `!=`, yielding a Boolean Tensor with element-wise comparison results. This will be the default behavior in 2.0.
Breaking Changes
- TensorFlow code now produces two different pip packages: `tensorflow_core`, containing all the code (in the future it will contain only the private implementation), and `tensorflow`, which is a virtual pip package that forwards to `tensorflow_core` (and in the future will contain only the public API of TensorFlow). We don't expect this to be breaking unless you were importing directly from the implementation.
- TensorFlow 1.15 is built using devtoolset7 (GCC7) on Ubuntu 16. This may lead to ABI incompatibilities with extensions built against earlier versions of TensorFlow.
- Deprecated the use of `constraint=` and `.constraint` with ResourceVariable.
- `tf.keras`:
  - `OMP_NUM_THREADS` is no longer used by the default Keras config. To configure the number of threads, use the `tf.config.threading` APIs.
  - `tf.keras.model.save_model` and `model.save` now default to saving a TensorFlow SavedModel.
  - `keras.backend.resize_images` (and consequently `keras.layers.Upsampling2D`) behavior has changed: a bug in the resizing implementation was fixed.
  - Layers now default to `float32` and automatically cast their inputs to the layer's dtype. If you had a model that used `float64`, it will probably silently use `float32` in TensorFlow 2, and a warning will be issued that starts with "Layer 'layer-name' is casting an input tensor from dtype float64 to the layer's dtype of float32." To fix this, either set the default dtype to float64 with `tf.keras.backend.set_floatx('float64')`, or pass `dtype='float64'` to each of the Layer constructors. See `tf.keras.layers.Layer` for more information.
  - Some `tf.assert_*` methods now raise assertions at operation creation time (i.e. when the Python line executes) if the input tensors' values are known at that time, rather than during `session.run()`. When this happens, a no-op is returned and the input tensors are marked non-feedable; in other words, if they are used as keys in the `feed_dict` argument to `session.run()`, an error will be raised. Also, because some assert ops don't make it into the graph, the graph structure changes. A different graph can result in different per-op random seeds when they are not given explicitly (most often).
Bug Fixes and Other Changes
... (truncated)
- `tf.estimator`:
  - `tf.keras.estimator.model_to_estimator` now supports exporting to `tf.train.Checkpoint` format, which allows the saved checkpoints to be compatible with `model.load_weights`.
  - Fix tests in canned estimators.
  - Expose Head as public API.
  - Fixes critical bugs that help with `DenseFeatures` usability in TF2.
- `tf.data`:
  - Promoting `unbatch` from experimental to core API.
  - Adding support for datasets as inputs to `from_tensors` and `from_tensor_slices`, and for batching and unbatching of nested datasets.
- `tf.keras`:
  - `tf.keras.estimator.model_to_estimator` now supports exporting to `tf.train.Checkpoint` format, which allows the saved checkpoints to be compatible with `model.load_weights`.
  - Saving a Keras Model using `tf.saved_model.save` now saves the list of variables, trainable variables, regularization losses, and the call function.
  - Deprecated `tf.keras.experimental.export_saved_model` and `tf.keras.experimental.function`. Please use `tf.keras.models.save_model(..., save_format='tf')` and `tf.keras.models.load_model` instead.
  - Add an `implementation=3` mode for `tf.keras.layers.LocallyConnected2D` and `tf.keras.layers.LocallyConnected1D` layers, using `tf.SparseTensor` to store weights, allowing a dramatic speedup for large sparse models.
  - Enable the Keras compile API `experimental_run_tf_function` flag by default. This flag enables a single training/eval/predict execution path. With this: 1. All input types are converted to `Dataset`. 2. When a distribution strategy is not specified, this goes through the no-op distribution strategy path. 3. Execution is wrapped in `tf.function` unless `run_eagerly=True` is set in compile.
  - Raise an error if the `batch_size` argument is used when the input is a dataset/generator/Keras sequence.
- `tf.lite`:
  - Add `GATHER` support to NN API delegate.
  - tflite object detection script has a debug mode.
  - Add delegate support for `QUANTIZE`.
  - Added evaluation script for COCO minival.
  - Add delegate support for `QUANTIZED_16BIT_LSTM`.
  - Converts hardswish subgraphs into atomic ops.
- Add support for defaulting the value of the `cycle_length` argument of `tf.data.Dataset.interleave` to the number of schedulable CPU cores (a minimal sketch follows this list).
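A minimal sketch of the new `cycle_length` default mentioned above; the TFRecord filenames are placeholders.

```python
import tensorflow as tf

files = tf.data.Dataset.from_tensor_slices(['train-0.tfrecord', 'train-1.tfrecord'])
# cycle_length can now be omitted; it defaults to the number of schedulable CPU cores.
ds = files.interleave(tf.data.TFRecordDataset)
```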
Commits
- `590d6ee` Merge pull request #31861 from tensorflow-jenkins/relnotes-1.15.0rc0-16184
- `b27ac43` Update RELEASE.md
- `07bf663` Merge pull request #33213 from Intel-tensorflow/mkl-dnn-0.20.6
- `46f50ff` Merge pull request #33262 from tensorflow/ggadde-1-15-cp2
- `49c154e` Merge pull request #33263 from tensorflow/ggadde-1-15-final-version
- `a16adeb` Update TensorFlow version to 1.15.0 in preparation for final relase.
- `8d71a87` Add saving of loaded/trained compatibility models in test and fix a compatibi...
- `8c48aff` [Intel Mkl] Upgrading MKL-DNN to 0.20.6 to fix SGEMM regression
- `38ea9bb` Merge pull request #33120 from tensorflow/perf
- `a8ef0f5` Automated rollback of commit db7e43192d405973c6c50f6e60e831a198bb4a49
- Additional commits viewable in compare view
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot ignore this [patch|minor|major] version` will close this PR and stop Dependabot creating any more for this minor/major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language
You can disable automated security fix PRs for this repo from the Security Alerts page.