Milestones

  • Overdue by 5 months
    Due by January 24, 2025
    7/7 issues closed
  • Building, packaging and monitoring in place

    No due date
    2/2 issues closed
  • Plan the next API SDK to provide us with a good base for going to 1.0.

    No due date
    4/4 issues closed
  • A working Delta Diff backend according to the design specified [here](https://github.com/treeverse/lakeFS/blob/469e446c475f1b49a7524ac7b08a3335f7f59e2b/design/open/delta-diff.md), plus a dummy GUI client implementation (not yet working with the backend).

    No due date
    15/15 issues closed
  • Add support for overwrite mode and the Parquet file format.

    No due date
    4/4 issues closed
  • Add multi-writer support, first for the overwrite save mode and then for the other save modes.

    No due date
    2/2 issues closed
  • Writing all file formats, in all modes, and implementing the relevant remaining class methods. File formats to add: CSV, ORC, JSON. Save modes to add: ErrorIfExists (default), Ignore. (A hedged PySpark sketch of these modes and formats appears after this list.)

    No due date
    3/3 issues closed
  • LakeFSOutputCommitter running, supporting some output formats (possibly not all, esp. not Parquet) and some SaveModes (possibly not all). Milestone 1 - deadline 1/12/22. Definition of done:
    * Write a text file in append mode (illustrated in the PySpark sketch after this list)
    * A detailed design of the write flow agreed with VE
    * Measure the performance of the write operation
    * Component tests
    * Merge the branch to master (and note the beta status in the docs)

    No due date
    13/13 issues closed
  • Discovery and Execution Plan - #3566
    * graveler KV implementation
    * Lock-free commits are excluded from this milestone
    * Postponed tasks from the previous milestone:
      * Remove unnecessary IDs from KV Auth - #3521
      * `auth.MetadataManager` KV support - #3520

    Overdue by 2 years
    Due by July 28, 2022
    57/57 issues closed
  • Definition of done: an end-to-end wizard that imports data and generates Spark configurations. A README file should describe the steps taken by the wizard, in addition to the steps needed to configure the Hive Metastore. BI and operational monitoring should be applied. (A hedged example of the kind of Spark configuration involved appears after this list.)

    No due date
    25/25 issues closed
  • No due date
    3/3 issues closed
  • No due date
    2/2 issues closed
  • Deliverables:
    * Auth KV implementation
    * Actions KV implementation
    * Performance and benchmark tests for pkgs
    * Migration benchmark
    * Additional KV implementation on KV DB
    Start a lakeFS container with KV-enabled DynamoDB in production, with multiparts, actions and auth.

    Overdue by 3 years
    Due by June 16, 2022
    30/30 issues closed
  • Multipart upload package (see the KV interface sketch after this list):
    - Testing discovery: package-specific, plus anything we require to feel trust in using KV in lakeFS
    - KV interface and pg implementation: set/get bytes
    - Feature flag db/kv
    - KV support library for working with models (protobuf marshal)
    - Multipart upload model implementation
    - Migrate the model from db to KV

    Overdue by 3 years
    Due by May 19, 2022
    25/25 issues closed
  • No due date
    9/10 issues closed
  • Design the lakeFS next generation metastore

    Overdue by 3 years
    Due by December 27, 2021
    3/3 issues closed
  • No due date
    12/12 issues closed
  • No due date
    28/29 issues closed
  • lakeFS filesystem

    No due date
    62/62 issues closed
  • No due date
    4/5 issues closed
  • First deliverable to realize the CI/CD-for-data vision (see the webhook sketch after this list):
    - Basic webhook support for commits and merges
    - Protected branches (mergeable but not writable/committable)
    - Boilerplate and examples for basic webhooks
    - Documentation on how to configure and deploy webhooks

    No due date
    51/52 issues closed
  • The second milestone for lakeFS on the Rocks:
    - Separation of metadata from data services
    - HadoopFilesystem on top of the metadata service
    - Management services out of core: data retention, data export from lakeFS to the object store

    No due date
    7/7 issues closed
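
The sketches below illustrate a few of the milestones listed above. They are hedged illustrations, not code from the lakeFS repository; every endpoint, credential, repository name and payload field in them is a placeholder.

First, a minimal PySpark sketch of the save modes and file formats named in the Spark writer milestones (overwrite, append, Ignore, ErrorIfExists; Parquet, CSV, ORC, JSON), including the "write a text file in append mode" item. It assumes a lakeFS installation reachable through its S3 gateway, so paths take the form s3a://<repository>/<branch>/<path>; the endpoint, keys, repository name ("example-repo") and branch ("main") are placeholders.

```python
from pyspark.sql import SparkSession

# Placeholder endpoint and credentials for a lakeFS S3 gateway.
spark = (
    SparkSession.builder.appName("lakefs-write-modes")
    .config("spark.hadoop.fs.s3a.endpoint", "https://lakefs.example.com")
    .config("spark.hadoop.fs.s3a.access.key", "<lakefs-access-key-id>")
    .config("spark.hadoop.fs.s3a.secret.key", "<lakefs-secret-access-key>")
    .config("spark.hadoop.fs.s3a.path.style.access", "true")
    .getOrCreate()
)

df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "value"])
base = "s3a://example-repo/main/tables"  # s3a://<repository>/<branch>/<path>

# The save modes and file formats listed in the writer milestones.
df.write.mode("overwrite").parquet(f"{base}/parquet")
df.write.mode("append").option("header", "true").csv(f"{base}/csv")
df.write.mode("ignore").orc(f"{base}/orc")
df.write.mode("errorifexists").json(f"{base}/json")

# DOD item from the LakeFSOutputCommitter milestone: write a text file in append mode.
# The text writer expects a single string column.
df.select("value").write.mode("append").text(f"{base}/text")
```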
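
The wizard milestone's "generates Spark configurations" step, and the lakeFS filesystem milestones, both come down to pointing Spark at lakeFS. The sketch below shows the two usual routes: the S3 gateway (s3a:// paths) and the lakeFS Hadoop FileSystem (lakefs:// paths). The property names follow the lakeFS documentation, but the endpoints and keys are placeholders, and the exact names should be verified against the docs for the lakeFS and client versions actually in use.

```python
from pyspark.sql import SparkSession

conf = {
    # Route s3a:// paths through the lakeFS S3 gateway.
    "spark.hadoop.fs.s3a.endpoint": "https://lakefs.example.com",
    "spark.hadoop.fs.s3a.path.style.access": "true",
    "spark.hadoop.fs.s3a.access.key": "<lakefs-access-key-id>",
    "spark.hadoop.fs.s3a.secret.key": "<lakefs-secret-access-key>",
    # Route lakefs:// paths through the lakeFS Hadoop FileSystem (metadata via the
    # lakeFS API, data read and written directly against the object store).
    # Requires the lakeFS Hadoop FileSystem jar (e.g. io.lakefs:hadoop-lakefs-assembly)
    # on the classpath.
    "spark.hadoop.fs.lakefs.impl": "io.lakefs.LakeFSFileSystem",
    "spark.hadoop.fs.lakefs.endpoint": "https://lakefs.example.com/api/v1",
    "spark.hadoop.fs.lakefs.access.key": "<lakefs-access-key-id>",
    "spark.hadoop.fs.lakefs.secret.key": "<lakefs-secret-access-key>",
}

builder = SparkSession.builder.appName("lakefs-config")
for key, value in conf.items():
    builder = builder.config(key, value)
spark = builder.getOrCreate()

spark.read.parquet("lakefs://example-repo/main/tables/parquet").show()
```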
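
The KV milestones describe a minimal key-value interface (set/get of raw bytes), backend implementations (Postgres, DynamoDB), and a db/kv feature flag. lakeFS implements this in Go; the Python sketch below only illustrates the shape of that interface and flag, and none of the names in it are the actual package API.

```python
from abc import ABC, abstractmethod
from typing import Dict, Optional


class KVStore(ABC):
    """Minimal set/get-bytes interface, as described in the KV milestones."""

    @abstractmethod
    def set(self, key: bytes, value: bytes) -> None: ...

    @abstractmethod
    def get(self, key: bytes) -> Optional[bytes]: ...


class InMemoryKV(KVStore):
    """Stand-in backend; the milestones list Postgres and DynamoDB implementations."""

    def __init__(self) -> None:
        self._data: Dict[bytes, bytes] = {}

    def set(self, key: bytes, value: bytes) -> None:
        self._data[key] = value

    def get(self, key: bytes) -> Optional[bytes]:
        return self._data.get(key)


def open_store(use_kv: bool) -> KVStore:
    """Feature flag 'db/kv': choose the KV path or the legacy DB-backed path."""
    if use_kv:
        return InMemoryKV()  # placeholder for a real KV backend
    raise NotImplementedError("legacy DB-backed path not sketched here")


# Models (e.g. multipart upload state) would be marshaled to bytes -- the milestones
# mention protobuf -- before being stored.
store = open_store(use_kv=True)
store.set(b"multipart/upload-id", b"<serialized model bytes>")
print(store.get(b"multipart/upload-id"))
```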
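
Finally, the CI/CD-for-data milestone introduces webhooks that run on commits and merges. Below is a minimal sketch of a webhook receiver: an HTTP endpoint that inspects the event payload and returns a non-2xx status to block the action. The payload field names are illustrative; the actual schema is defined by the lakeFS hooks documentation.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


class HookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length) or b"{}")

        # Illustrative field names; consult the lakeFS hooks docs for the real payload.
        event_type = event.get("event_type", "")
        branch = event.get("branch_id", "")

        # Example policy: block pre-merge events targeting the "main" branch.
        if event_type == "pre-merge" and branch == "main":
            self.send_response(403)  # non-2xx blocks the commit/merge
        else:
            self.send_response(200)  # 2xx lets it proceed
        self.end_headers()


if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), HookHandler).serve_forever()
```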