Releases: google-ai-edge/LiteRT

v2.1.0rc1

21 Nov 23:35

Pre-release

Release 2.1.0rc1

Major Features and Improvements

  • NPU: Added support for Qualcomm Snapdragon Gen5
  • NPU: Added support for MediaTek Dimensity 9500
  • NPU: Added support for NPU JIT mode on Qualcomm and MediaTek

Bug Fixes and Other Changes

  • Fixed the Android minimum SDK version to 23.
  • NPU: Fixed the partitioning algorithm used when the full model cannot be offloaded to the NPU.

Breaking Changes

  • Removed direct usage of the C headers; users no longer need to include them.
  • TensorBuffer::CreateManaged() now always requires an Environment.
  • All TensorBuffer creation now requires an Environment, except for HostMemory types.
  • LiteRT C++ constructors are hidden; all LiteRT C++ objects should be created via Create() methods.
  • Moved internal-only C++ APIs (such as litert_logging.h) to litert/cc/internal.
  • Removed Tensor, Subgraph, and Signature access from litert::Model; users can instead access SimpleTensor and SimpleSignature from CompiledModel.
  • The CompiledModel::Create() API no longer requires a litert::Model; compiled models can be created directly from a filename or a model buffer.
  • Removed the Annotation and Metrics APIs from CompiledModel.
  • Removed individual OpaqueOptions creation; these OpaqueOptions objects are now obtained directly from Options:
    • Options::GetCpuOptions()
    • Options::GetGpuOptions()
    • Options::GetRuntimeOptions()
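Taken together, the new creation flow might look roughly like the sketch below. It is illustrative only: it uses just the names mentioned in the notes above (Environment, Options, CompiledModel, TensorBuffer, the Get*Options() accessors); the exact headers, signatures, argument lists, and error handling are assumptions and may differ from the released API.

```
// Sketch only -- signatures are assumptions, not the exact 2.1 API.
// Objects are created via Create() factories; constructors are hidden.
auto env = litert::Environment::Create({});

litert::Options options;                      // hypothetical construction
auto gpu_options = options.GetGpuOptions();   // OpaqueOptions now come from Options

// CompiledModel::Create() no longer takes a litert::Model;
// it can compile directly from a filename (or a model buffer).
auto compiled = litert::CompiledModel::Create(*env, "model.tflite", options);

// Managed TensorBuffers now always require the Environment.
auto input = litert::TensorBuffer::CreateManaged(*env, /* tensor type, size */);
```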

v1.4.1

19 Nov 18:59

Release 1.4.1

Bug Fixes and Other Changes

  • Fixed the Android minimum SDK version to 21.

v2.0.3

13 Nov 18:31
b85cdce

Release 2.0.3

Major Features and Improvements

  • Added a Python backend for Google Tensor. The backend doesn't yet register itself, so it is not available by default.
  • Changed the manufacturer to Google and the SoC model names to include the Tensor_ prefix for Google Tensor.
  • Made minor naming changes to some flags for the Google Tensor compiler plugin.

Bug Fixes and Other Changes

  • N/A

v2.0.2

17 Sep 18:34

Release 2.0.2

Major Features and Improvements

LiteRT GPU Accelerator

  • Added an option to control GPU inference priority.

LiteRT API Refactoring

  • Introduced the target litert/cc:litert_api_with_dynamic_runtime, a convenience Bazel target containing the LiteRT C++ and C APIs. Users of this library are responsible for bundling the LiteRT C API runtime, libLiteRtRuntimeCApi.so.
  • C++ APIs that need the LiteRT C API runtime moved to litert/cc/dynamic_runtime/.
    Note: this directory is for internal usage. If you want to use the dynamic API, use litert/cc:litert_api_with_dynamic_runtime.
  • All static public C++ APIs (including litert/cc/internal) moved to litert/cc/.
    Note: you shouldn't mix static API targets with dynamic API targets.
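For illustration, a target depending on the dynamic API might look like the fragment below. Only the litert/cc:litert_api_with_dynamic_runtime label comes from the notes above; the target name and source file are hypothetical.

```bzl
cc_library(
    name = "my_inference_lib",   # hypothetical target
    srcs = ["my_inference.cc"],  # hypothetical source
    deps = [
        "//litert/cc:litert_api_with_dynamic_runtime",
    ],
)
```

Per the notes above, you still have to package libLiteRtRuntimeCApi.so with your application yourself, and this dynamic target must not be mixed with the static API targets.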

Bug Fixes and Other Changes

  • Fixed a segmentation fault in //litert/tools:apply_plugin_test.
  • Refactored example backend compiler plugin and dispatch implementation.
  • Improved LiteRT op coverage for Qualcomm and MediaTek backends.

v2.0.2a1

02 Sep 15:59

Pre-release

Release 2.0.2a1

LiteRT

Breaking Changes

  • com.google.ai.edge.litert.TensorBufferRequirements
    • It is now a data class, so all fields can be accessed directly without getter methods.
    • The type of the strides field changed from IntArray to List<Int> to make it immutable.
  • com.google.ai.edge.litert.Layout
    • The types of the dimensions and strides fields changed from IntArray to List<Int> to make them immutable.
  • Renamed the GPU option NoImmutableExternalTensorsMode to NoExternalTensorsMode.

Major Features and Improvements

  • [tflite] Added error detection in TfLiteRegistration::init(). When a delegate
    kernel returns TfLiteKernelInitFailed(), it is treated as a critical failure
    of the delegate. The error is detected in
    SubGraph::ReplaceNodeSubsetsWithDelegateKernels() and causes
    Delegate::Prepare() to fail, ultimately leading
    InterpreterBuilder::operator() or Interpreter::ModifyGraphWithDelegate() to
    return an error.
  • Added a Profiler API to CompiledModel.
  • Added an error reporter API to CompiledModel.
  • Added a resize-input-tensor API to CompiledModel.
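The delegate-failure path in the first bullet can be sketched as below. This is illustrative only: it reuses the names from the note (TfLiteRegistration::init, TfLiteKernelInitFailed()); the header path and the kernel state type are assumptions and may not match the release exactly.

```
#include "tensorflow/lite/c/common.h"  // assumed header for TfLiteContext

struct MyDelegateKernel { /* backend resources would live here */ };

// An init implementation for a delegate kernel (TfLiteRegistration::init).
void* DelegateKernelInit(TfLiteContext* context, const char* buffer,
                         size_t length) {
  MyDelegateKernel* kernel = new MyDelegateKernel();
  bool acquired = false;  // pretend backend resource acquisition failed
  if (!acquired) {
    delete kernel;
    // Critical failure: detected in ReplaceNodeSubsetsWithDelegateKernels(),
    // fails Delegate::Prepare(), and ultimately makes
    // InterpreterBuilder::operator() / ModifyGraphWithDelegate() return an error.
    return TfLiteKernelInitFailed();
  }
  return kernel;  // normal path: opaque kernel state handed back to TFLite
}
```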

Bug Fixes and Other Changes

  • The Android minSdkVersion has increased to 23.
  • Updated tests to provide kLiteRtHwAcceleratorNpu for fully AOT-compiled models.

LiteRT v1.4.0 release

07 Nov 22:36

Release 1.4.0

Bug Fixes and Other Changes

  • Fixed support for 16 KB page sizes.

v2.0.0-alpha

20 May 00:27

v1.2.0

13 Mar 23:11

v1.0.1

04 Sep 04:29

This is the first release of LiteRT, the new name for TensorFlow Lite. Please see this blog post for more details.

In its current state, the LiteRT repository is not intended for open source development because it is pulling in existing TensorFlow code via a git submodule. We intend to evolve this repo to a point where developers can directly build and contribute here, at which time we will make a separate announcement.

This LiteRT release is pinned to TF commit 2adc36c and is compatible with the following packages:

Prebuilt artifacts for this release: