treeverse / lakefs

lakeFS - Data version control for your data lake | Git for data

Home Page: https://docs.lakefs.io

License: Apache License 2.0

Topics: data-engineering, data-versioning, go, object-storage, data-lake, aws-s3, data-quality, azure-blob-storage, google-cloud-storage, golang

Introduction


lakeFS is Data Version Control (Git for Data)

lakeFS is an open-source tool that transforms your object storage into a Git-like repository. It enables you to manage your data lake the way you manage your code.

With lakeFS you can build repeatable, atomic, and versioned data lake operations - from complex ETL jobs to data science and analytics.

lakeFS supports AWS S3, Azure Blob Storage, and Google Cloud Storage as its underlying storage service. It is API compatible with S3 and works seamlessly with all modern data frameworks such as Spark, Hive, AWS Athena, DuckDB, and Presto.

For more information, see the documentation.

Getting Started

You can spin up a standalone sandbox instance of lakeFS using Docker:

docker run --pull always \
		   --name lakefs \
		   -p 8000:8000 \
		   treeverse/lakefs:latest \
		   run --quickstart

Once you've got lakeFS running, open http://127.0.0.1:8000/ in your web browser.

Quickstart

👉🏻 For a hands-on walk-through of the core functionality in lakeFS, head over to the quickstart to jump right in!

Make sure to also have a look at the lakeFS samples, a rich resource of end-to-end example applications that you can build with lakeFS.

Why Do I Need lakeFS?

ETL Testing with Isolated Dev/Test Environment

When working with a data lake, it’s useful to have replicas of your production environment. These replicas allow you to test changes to your ETLs and understand their effect on your data without impacting downstream data consumers.

Running ETL and transformation jobs directly in production without proper ETL testing is a guaranteed way to have data issues flow into dashboards, ML models, and other consumers sooner or later. The most common way to avoid making changes directly in production is to create and maintain multiple data environments and perform ETL testing on them: a dev environment in which to develop the data pipelines, and a test environment in which pipeline changes are tested before being pushed to production. With lakeFS you can create a branch and get a copy of the full production data without copying anything, making ETL testing faster and easier.
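
For example, creating and discarding an isolated test branch is a sketch of two lakectl commands (the repository and branch names are illustrative):

    # create a zero-copy branch of production data to test against
    lakectl branch create lakefs://example-repo/etl-test \
        --source lakefs://example-repo/main

    # run the ETL against the new branch, then discard it when done
    lakectl branch delete lakefs://example-repo/etl-test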

Reproducibility

Data changes frequently. This makes the task of keeping track of its exact state over time difficult. Oftentimes, people maintain only one state of their data: its current state.

This has a negative impact on the work, as it becomes hard to:

  • Debug a data issue.
  • Validate machine learning training accuracy (re-running a model over different data gives different results).
  • Comply with data audits.

In comparison, lakeFS exposes a Git-like interface to data that allows keeping track of more than just the current state of data. This makes reproducing its state at any point in time straightforward.
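
Because any commit can be addressed directly by its ID, re-reading the exact data a model was trained on is a single command. An illustrative sketch (repository, commit ID, and path are placeholders):

    # read an object exactly as it was at a given commit
    lakectl fs cat lakefs://example-repo/a1b2c3d4/datasets/train.csv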

CI/CD for Data

Data pipelines feed processed data from data lakes to downstream consumers like business dashboards and machine learning models. As more and more organizations rely on data to enable business-critical decisions, data reliability and trust are of paramount concern. Thus, it’s important to ensure that production data adheres to the organization’s data governance policies. These requirements can be as simple as a file format validation or a schema check, or as exhaustive as removing PII (Personally Identifiable Information) from all of an organization’s data.

Thus, to ensure quality and reliability at each stage of the data lifecycle, data quality gates need to be implemented. That is, we need to run Continuous Integration (CI) tests on the data, and only if the data governance requirements are met can the data be promoted to production for business use.

Every time production data is updated, the best practice is to run CI tests and only then promote (deploy) the data to production. With lakeFS you can create hooks that make sure only data that passed these tests becomes part of production.
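
For example, a pre-merge hook on the main branch can block a merge unless a validation webhook succeeds. A minimal sketch of an actions file (the hook id and webhook URL are illustrative; see the lakeFS hooks documentation for the full format):

    # _lakefs_actions/pre-merge-format-check.yaml
    name: pre merge format check
    on:
      pre-merge:
        branches:
          - main
    hooks:
      - id: format_validator
        type: webhook
        properties:
          url: "http://example.com/webhooks/validate-format"

With this in place, merges into main fail unless the webhook returns success.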

Rollback

A rollback operation is used to fix critical data errors immediately.

What is a critical data error? Think of a situation where erroneous or misformatted data causes a significant issue with an important service or function. In such situations, the first thing to do is stop the bleeding.

Rolling back returns data to a state in the past, before the error was present. You might not be showing all the latest data after a rollback, but at least you aren’t showing incorrect data or raising errors. Since lakeFS provides versions of the data without making copies of it, you can time travel between versions and roll back to the version of the data from before the error was introduced.
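
As a sketch, a rollback can be a single revert that creates a new commit undoing the bad one (repository name and commit ref are illustrative):

    # revert the offending commit on main
    lakectl branch revert lakefs://example-repo/main a1b2c3d4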

Community

Stay up to date and get lakeFS support via the project’s community channels (see the documentation for current links).


Licensing

lakeFS is completely free and open-source and licensed under the Apache 2.0 License.

Who Uses lakeFS?

lakeFS is used by numerous companies, including those below. If you use lakeFS and would like to be included here please open a PR.

  • AirAsia
  • APEX Global
  • AppsFlyer
  • Auburn University
  • BAE Systems
  • Bureau of Labor Statistics
  • Cambridge Consultants
  • Connor, Clark & Lunn Financial Group
  • Context Labs Bv
  • Daimler Truck
  • Enigma
  • EPCOR
  • Ford Motor Company
  • Generali
  • Giesecke+Devrient
  • greehill
  • Karius
  • Lockheed Martin
  • Luxonis
  • Mixpeek
  • Netflix
  • Paige
  • PETRONAS
  • Pollinate
  • Proton Technologies AG
  • ProtonMail
  • Renaissance Computing Institute
  • RHEA Group
  • RMS
  • Sensum
  • Similarweb
  • State Street Global Advisors
  • Terramera
  • Tredence
  • Volvo Cars
  • Webiks
  • Windward
  • Woven by Toyota

Contributors

adunuthulan, arielshaqed, arouyan, dependabot[bot], eden-ohana, eladlachmi, guy-har, idanovo, iddoavn, isan-rivkin, itaiad200, itaidavid, johnnyaug, jonathan-rosenberg, karentamrazyan, lynnro314, michalwosk, n-o-z, nicholasjng, nopcoder, ortz, ozkatz, peacing, rohansahana, shamikakumar, shimi9276, talsofer, tzahij, vinodhini-sd, yaelriv


Issues

Improve "lakefs diagnose" to verify AWS IAM role used in retention batch tagging operation

After #405, there are numerous "interesting" failure modes for the role used for retention batch tagging:

  1. Must be assume-able by batch tagging.
  2. Must have put-object-tagging permission.
  3. Must be allowed to list-bucket.
    This one is odd; here's why: without list-bucket permission, when an object does not exist we receive AccessDenied instead of NoSuchKey, which makes reports useless both for users and for (future) object_dedup cleanups.

Diagnose and report all of these (assuming of course that the running user has sufficient permissions...).
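
For reference, a minimal sketch of the S3 permissions such a role would need (the bucket name is illustrative and this is not an exhaustive policy; the assume-role trust relationship is configured separately on the role itself):

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": ["s3:PutObjectTagging"],
          "Resource": "arn:aws:s3:::example-bucket/*"
        },
        {
          "Effect": "Allow",
          "Action": ["s3:ListBucket"],
          "Resource": "arn:aws:s3:::example-bucket"
        }
      ]
    }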

Fix printing in "lakefs auth policies"

Detected in #256. Policies are printed with lots of %v, which ends up throwing addresses at the user.

E.g. policies create:

ariels@ariels:~/Dev/lakeFS$ echo '{"id": "foo", "statement": [{"action": ["auth:*"], "effect": "Allow", "resource": "arn:::::"}]}' | ./lakectl auth policies create --policy-document -
Policy created successfully.
ID: 0xc00043c020
Creation Date: 2020-07-08 12:16:26 +0300 IDT
Statements:

+--------------+-------------------------------+-------------+--------------+--------------+---------+
| POLICY ID    | CREATION DATE                 | STATEMENT # | RESOURCE     | EFFECT       | ACTIONS |
+--------------+-------------------------------+-------------+--------------+--------------+---------+
| 0xc0004d2590 | 1970-01-01 02:00:00 +0200 IST |           0 | 0xc00043c040 | 0xc00043c030 | auth:*  |
+--------------+-------------------------------+-------------+--------------+--------------+---------+

E.g. policies list:

ariels@ariels:~/Dev/lakeFS$ ./lakectl auth policies list
+--------------+-------------------------------+-------------+--------------+--------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| POLICY ID    | CREATION DATE                 | STATEMENT # | RESOURCE     | EFFECT       | ACTIONS                                                                                                                                                                                                                   |
+--------------+-------------------------------+-------------+--------------+--------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| 0xc0004e3d60 | 2020-07-07 18:28:24 +0300 IDT |           0 | 0xc0004e3d80 | 0xc0004e3d70 | auth:*                                                                                                                                                                                                                    |
| 0xc0004e3d90 | 2020-07-07 18:28:24 +0300 IDT |           0 | 0xc0004e3dc0 | 0xc0004e3db0 | auth:CreateCredentials, auth:DeleteCredentials, auth:ListCredentials, auth:ReadCredentials                                                                                                                                |
| 0xc0004e3dd0 | 2020-07-07 18:28:24 +0300 IDT |           0 | 0xc0004e3df0 | 0xc0004e3de0 | fs:*                                                                                                                                                                                                                      |
| 0xc0004e3e00 | 2020-07-07 18:28:24 +0300 IDT |           0 | 0xc0004e3e20 | 0xc0004e3e10 | fs:List*, fs:Read*                                                                                                                                                                                                        |
| 0xc0004e3e30 | 2020-07-07 18:28:24 +0300 IDT |           0 | 0xc0004e3e50 | 0xc0004e3e40 | fs:ListRepositories, fs:ReadRepository, fs:ReadCommit, fs:ListBranches, fs:ListObjects, fs:ReadObject, fs:WriteObject, fs:DeleteObject, fs:RevertBranch, fs:ReadBranch, fs:CreateBranch, fs:DeleteBranch, fs:CreateCommit |
| 0xc0004e3e60 | 2020-07-07 18:28:24 +0300 IDT |           0 | 0xc0004e3e80 | 0xc0004e3e70 | retention:*                                                                                                                                                                                                               |
| 0xc0004e3e90 | 2020-07-07 18:28:24 +0300 IDT |           0 | 0xc0004e3eb0 | 0xc0004e3ea0 | retention:Get*                                                                                                                                                                                                            |
| 0xc0004e3ec0 | 2020-07-08 12:16:26 +0300 IDT |           0 | 0xc0004e3ee0 | 0xc0004e3ed0 | auth:*                                                                                                                                                                                                                    |
+--------------+-------------------------------+-------------+--------------+--------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
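
The likely mechanism: the generated model structs hold pointer fields, and Go's %v verb prints a pointer's address rather than the value it points to. A minimal sketch of the behavior and the fix (the type and field are illustrative, not the actual lakeFS model):

    package main

    import "fmt"

    // Policy mimics a swagger-generated model with a pointer field.
    type Policy struct {
        ID *string
    }

    func main() {
        id := "foo"
        p := Policy{ID: &id}
        fmt.Printf("%v\n", p.ID)  // prints an address, e.g. 0xc000010250
        fmt.Printf("%v\n", *p.ID) // prints "foo"
    }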

Serve swagger-ui from CDN

We include a full copy of swagger-ui in docs/assets/js. This bloats the docs and hurts page-load performance; serving swagger-ui from a CDN would avoid both.

Repository creation first commit should include the user/author that performed the action

Sometimes the lakeFS system performs commits as a result of another user action. Examples include branch creation, repository creation, merge commits and import API commits.

In the case of branch and repository creation, an empty committer name is used (look for the constant CatalogerCommitter). This should be changed to the user that initiated the action.

Interactive tool for generating a lakeFS configuration file

The lakefs binary uses a YAML configuration file.

As a lakeFS user, I would like a command line utility to generate this config file interactively.

Upon running "lakefs config", the user should be asked to fill in the basic information for a lakefs server to run.
A minimal configuration file example can be found here in the docs.

Example of information to get from the user:

  1. Database connection string
  2. Blockstore type.
  3. If the blockstore type is S3, choose the region and authentication method (AWS profile, AWS key pair, or the instance role)
  4. S3 gateway domain name

The output should be a YAML file containing the configuration. The user should be able to specify an output destination for the file; if not specified, it should be saved to the default location, $HOME/.lakefs.yaml (with overwrite protection).
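
For illustration, a sketch of what the generated file might contain (values are placeholders and exact keys may vary between versions; consult the configuration reference for the authoritative schema):

    database:
      connection_string: "postgres://user:pass@localhost:5432/lakefs?sslmode=disable"
    blockstore:
      type: s3
      s3:
        region: us-east-1
    gateways:
      s3:
        domain_name: s3.lakefs.example.com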

SQL error is shown to the user

When trying to create a repository with a name of an existing repository, the following error is printed:

Error executing command: error creating repository: insert repository: ERROR: duplicate key value violates unique constraint "catalog_repositories_name_uindex" (SQLSTATE 23505)

This should be changed to a user-friendly error.

Example command to reproduce:
lakectl repo create lakefs://existing-repo s3://example-bucket
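
A sketch of one way to surface a friendly error instead, assuming the pgx/pgconn driver (the function and sentinel names are hypothetical):

    package catalog // illustrative placement

    import (
        "errors"

        "github.com/jackc/pgconn"
    )

    // ErrRepoExists is an illustrative user-facing sentinel error.
    var ErrRepoExists = errors.New("repository already exists")

    // mapCreateRepoError translates a unique-violation insert error
    // into a friendly message instead of surfacing the raw SQL error.
    func mapCreateRepoError(err error) error {
        var pgErr *pgconn.PgError
        // 23505 is PostgreSQL's unique_violation SQLSTATE
        if errors.As(err, &pgErr) && pgErr.Code == "23505" {
            return ErrRepoExists
        }
        return err
    }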

Improve server response for uncommitted changes

DiffUncommitted should support a prefix parameter. This will enable fetching the diff one level at a time.
The response should include a flag indicating whether there was any change on the branch and, under the prefix, return the diff by level.
It should account for adds/deletes at the folder level.
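
As a sketch of a response shape that could satisfy this (type and field names are hypothetical, not the actual lakeFS API):

    // Hypothetical shapes only; not the actual lakeFS API types.
    package diff

    type UncommittedDiffResponse struct {
        BranchHasChanges bool        // true if any uncommitted change exists on the branch
        Results          []DiffEntry // entries directly under the requested prefix
    }

    type DiffEntry struct {
        Path string // object path, or a common prefix for folder-level entries
        Type string // "added", "removed", or "changed"
    }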

Perform retention queries in smaller chunks

Retention uses single large queries. These can hurt DB performance during retention. Break queries up into smaller chunks to improve performance and let other transactions continue. (It is a sound transformation!)
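
A sketch of the chunking pattern in PostgreSQL (table and column names are illustrative, not the actual schema):

    -- Repeat until zero rows are affected, committing between iterations,
    -- so each transaction stays short and other work can interleave.
    DELETE FROM catalog_object_dedup
    WHERE ctid IN (
        SELECT ctid
        FROM catalog_object_dedup
        WHERE expired
        LIMIT 10000
    );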

In the UI, make it easily possible to copy the path to an object

In the S3 web interface, the path is part of the URL, so the slash separator is preserved, making the path easy to copy.

In our UI, the path is given as a query param, so slashes are escaped.
Perhaps add a copy button for the path. Need to decide which path type to copy - s3 or lakeFS, and whether it will include the repo and branch names.
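
A minimal sketch of such a button in the React UI (component name and wiring are hypothetical):

    // CopyPathButton.tsx -- copies the unescaped object path to the clipboard
    import React from "react";

    export function CopyPathButton({ path }: { path: string }) {
      return (
        <button onClick={() => navigator.clipboard.writeText(path)}>
          Copy path
        </button>
      );
    }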


Monitor repo retention

I would suggest tracking skipped repos due to errors - if one of them failed, return a non-zero return code. Otherwise it'll get logged into the void and probably won't ever be noticed by anyone until the S3 bills start racking up...

Originally posted by @ozkatz in #309 (comment)

MVCC rollback branch changes

The capability to roll back committed changes on a branch by applying the reverse changes, optionally for a specific range of changes/commits.

Client/server upgrade story

At the very least:

  • Detect errors when client and server Swagger definitions don't match for a query
  • Decide on behavior for both forward compatibility (FC) and backward compatibility (BC)
  • Can initially be as simple as allowing extra props on client parsing but disallowing them all on server parsing

Possibly required for release: these clients or servers will be involved in users' next upgrades.
