Lazydata: Scalable data dependencies for Python projects

Overview

lazydata is a minimalist library for including data dependencies into Python projects.

Problem: Keeping all data files in git (e.g. via git-lfs) results in a bloated repository copy that takes ages to pull. Keeping code and data out of sync is a disaster waiting to happen.

Solution: lazydata only stores references to data files in git, and syncs data files on-demand when they are needed.

Why: The semantics of code and data are different - code needs to be versioned to merge it, and data just needs to be kept in sync. lazydata achieves exactly this in a minimal way.

Benefits:

  • Keeps your git repository clean with just code, while enabling seamless access to any number of linked data files
  • Data consistency assured using file hashes and automatic versioning
  • Choose your own remote storage backend: AWS S3, or (coming soon) a directory over SSH

lazydata is primarily designed for machine learning and data science projects. See this medium post for more.

Getting started

In this section we'll show how to use lazydata on an example project.

Installation

Install with pip (requires Python 3.5+):

$ pip install lazydata

Add to your project

To enable lazydata, run in project root:

$ lazydata init 

This will initialise lazydata.yml which will hold the list of files managed by lazydata.

Tracking a file

To start tracking a file use track("<path_to_file>") in your code:

my_script.py

from lazydata import track

# store the file when loading  
import pandas as pd
df = pd.read_csv(track("data/my_big_table.csv"))

print("Data shape:" + str(df.shape))

Running the script the first time will start tracking the file:

$ python my_script.py
## lazydata: Tracking a new file data/my_big_table.csv
## Data shape: (10000,100)

The file is now tracked and has been backed up to your local lazydata cache in ~/.lazydata and added to lazydata.yml:

files:
  - path: data/my_big_table.csv
    hash: 2C94697198875B6E...
    usage: my_script.py

If you re-run the script without modifying the data file, lazydata will just quickly check that the data file hasn't changed and won't do anything else.
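
For intuition, the hash recorded in lazydata.yml is a content hash of the file, so the quick check is just a re-hash and compare. Here is a sketch of such a check using hashlib (not necessarily the exact algorithm lazydata uses):

import hashlib

def file_hash(path, chunk_size=1 << 20):
    """Hash the file contents in chunks so large files don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# compare against the hash stored in lazydata.yml to detect changes
print(file_hash("data/my_big_table.csv"))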

If you modify the data file and re-run the script, this will add another entry to the yml file with the new hash of the data file, i.e. data files are automatically versioned. If you don't want to keep past versions, simply remove them from the yml.

And you are done! This data file is now tracked and linked to your local repository.

Sharing your tracked files

To access your tracked files from multiple machines, add a remote storage backend where they can be uploaded. To use S3 as a remote storage backend, run:

$ lazydata add-remote s3://mybucket/lazydata

This will configure the S3 backend and also add it to lazydata.yml for future reference.

You can now git commit and push your my_script.py and lazydata.yml files as you normally would.

To copy the stored data files to S3 use:

$ lazydata push

When your collaborator pulls the latest version of the git repository, they will get the script and the lazydata.yml file as usual.

Data files will be downloaded when your collaborator runs my_script.py and the track("data/my_big_table.csv") call is executed:

$ python my_script.py
## lazydata: Downloading stored file my_big_table.csv ...
## Data shape: (10000,100)

To get the data files without running the code, you can also use the command line utility:

# download just this file
$ lazydata pull my_big_table.csv

# download everything used in this script
$ lazydata pull my_script.py

# download everything stored in the data/ directory and subdirs
$ lazydata pull data/

# download the latest version of all data files
$ lazydata pull

Because lazydata.yml is tracked by git you can safely make and switch git branches.

Data dependency scenarios

You can achieve multiple data dependency scenarios by putting lazydata.track() into different parts of the code:

  • Jupyter notebook data dependencies by using tracking in notebooks
  • Data pipeline output tracking by tracking saved files
  • Class-level data dependencies by tracking files in __init__(self) (see the sketch after this list)
  • Module-level data dependencies by tracking files in __init__.py
  • Package-level data dependencies by tracking files in setup.py
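
For example, a class-level data dependency could look like this (the model class and file path are hypothetical; track() is the only real lazydata API used):

from lazydata import track

class SentimentModel:
    def __init__(self):
        # the weights file is fetched on demand the first time the class is instantiated
        weights_path = track("data/sentiment_weights.pkl")
        with open(weights_path, "rb") as f:
            self.weights = f.read()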

Coming soon...

  • Examine stored file provenance and properties
  • Faceting multiple files into portable datasets
  • Storing data coming from databases and APIs
  • More remote storage options

Stay in touch

This is an early beta release. To find out about new releases, subscribe to our new releases mailing list.

Contributing

The library is licensed under the Apache 2.0 licence. All contributions are welcome!

Comments
  • lazydata command not recognizable on Windows

    Add to your project To enable lazydata, run in project root:

    $ lazydata init

    This resulted in:

    'lazydata' is not recognized as an internal or external command, operable program or batch file.

    on Windows 10.

    opened by lastmeta 6
  • Adding support for custom endpoint

    Useful for users that do not want to rely on Amazon S3 while using this package.

    I'm running a Minio Server for storage, which mimics an S3 container.

    boto3 (api doc here) supports custom endpoints that it's going to hit via the S3 API.
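
    For illustration, boto3 can be pointed at an S3-compatible server like Minio by passing endpoint_url when creating the client (the endpoint and credentials below are placeholders):

    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="http://localhost:9000",   # Minio or any S3-compatible server
        aws_access_key_id="minio-access-key",
        aws_secret_access_key="minio-secret-key",
    )
    s3.upload_file("data/my_big_table.csv", "mybucket", "lazydata/my_big_table.csv")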

    It would be good to add tests for this behaviour, along with testing pulls and pushes for normal behaviour. Using mocks, maybe?

    EDIT: I also corrected a line which caused the package not to work for python 3.5, see commit 0d4f8fc

    opened by zbitouzakaria 6
  • corrupted lazydata.yml if application crashes

    I noticed that when the python application that is tracking some data crashes at some point, it can leave behind a corrupted yaml file. It is not that uncommon, when you are in an exploratory phase of building an ML model, to write code that can crash, for example because of memory issues etc. It would be great if the yaml file handle were closed after each track call to ensure the file does not get corrupted! Thanks! I really like this project!
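
    One common way to avoid a partially-written file (a general pattern, not necessarily how lazydata should implement it) is to write the yaml to a temporary file and atomically replace the original:

    import os
    import tempfile

    def write_atomically(path, text):
        """Write text to path so that a crash never leaves a half-written file behind."""
        fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
        try:
            with os.fdopen(fd, "w") as f:
                f.write(text)
            os.replace(tmp_path, path)  # atomic rename over the old file
        except BaseException:
            os.unlink(tmp_path)
            raise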

    opened by rmanak 4
  • All local file revisions hardlink to the latest revision

    I tested this out by creating and tracking a single file through multiple revisions.

    Let's say we have a big_file.csv whose content looks like this:

    a, b, c
    1, 2, 3
    

    We first track it using this script:

    from lazydata import track
    
    # store the file when loading  
    import pandas as pd
    df = pd.read_csv(track("big_file.csv"))
    
    print("Data shape:" + str(df.shape))
    

    Change the file content multiple times, for example:

    a, b, c
    1, 2, 3
    4, 5, 6
    

    And keep executing the script between the multiple revisions:

    (dev3.5)  ~/test_lazydata > python my_script.py 
    LAZYDATA: Tracking new file `big_file.csv`
    Data shape:(1, 3)
    (dev3.5)  ~/test_lazydata > vim big_file.csv  # changing file
    (dev3.5)  ~/test_lazydata > python my_script.py
    LAZYDATA: Tracked file `big_file.csv` changed, recording a new version...
    Data shape:(2, 3)
    (dev3.5)  ~/test_lazydata > vim big_file.csv  # changing file
    (dev3.5)  ~/test_lazydata > python my_script.py
    LAZYDATA: Tracked file `big_file.csv` changed, recording a new version...
    Data shape:(3, 3)
    (dev3.5)  ~/test_lazydata > vim big_file.csv  # changing file
    (dev3.5)  ~/test_lazydata > python my_script.py
    LAZYDATA: Tracked file `big_file.csv` changed, recording a new version...
    Data shape:(4, 3)
    

    A simple ls afterwards points to the mistake:

    (dev3.5)  ~/test_lazydata > ls -lah
    total 20
    drwxrwxr-x  2 zakaria zakaria 4096 sept.  5 16:14 .
    drwxr-xr-x 56 zakaria zakaria 4096 sept.  5 16:14 ..
    -rw-rw-r--  5 zakaria zakaria   44 sept.  5 16:14 big_file.csv
    -rw-rw-r--  1 zakaria zakaria  482 sept.  5 16:14 lazydata.yml
    -rw-rw-r--  1 zakaria zakaria  158 sept.  5 16:12 my_script.py
    

    Notice the number of hardlinks to big_file.csv. There should only be one. What is happening is that all the revisions point to the same file.

    You can also check ~/.lazydata/data directly for the content of the different files. It's all the same.
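
    For reference, the hard link count can also be checked from Python, which would make this easy to catch in a test (using the same file as above):

    import os

    # st_nlink is the number of hard links pointing at the same inode;
    # independent revisions of the working copy should leave this at 1
    print(os.stat("big_file.csv").st_nlink)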

    opened by zbitouzakaria 3
  • SyntaxError: invalid syntax

    Using Python 2.7.6 and the example script you provide, with a file outside of the github repo (the file exists and the file path is correct, I checked):

    from lazydata import track
    
    with open(track("/home/lg390/tmp/data/some_data_file.txt"), "r") as f:
        print(f.read())
    

    I get

    -> % python sample_script.py
    Traceback (most recent call last):
      File "sample_script.py", line 1, in <module>
        from lazydata import track
      File "/usr/local/lib/python2.7/dist-packages/lazydata/__init__.py", line 1, in <module>
        from .tracker import track
      File "/usr/local/lib/python2.7/dist-packages/lazydata/tracker.py", line 11
        def track(path:str) -> str:
                      ^
    SyntaxError: invalid syntax
    
    opened by LaurentGatto 3
  • Add http link remote backend

    I have a project https://github.com/rominf/profanityfilter that could benefit from lazydata. I think it would be cool to move the badword dictionaries out of the repository and track them alongside hunspell dictionaries with lazydata. The problem is that I want these files to be accessible by end users, which means I don't want them to be stored in AWS. Instead, I would like them to be downloaded via an http link.

    opened by rominf 2
  • Comparison with DVC

    Hello!

    First of all thank you for your contribution to the community! I’ve just found out about this and it seems to be a nice project that is growing!

    You are probably familiar with dvc (https://github.com/iterative/dvc).

    I’ve been investigating it in order to include it in my ML pipeline. Can you explain briefly how/if Lazydata differs from dvc? And any advantages and disadvantages? I understand that there may be some functionalities that maybe are not yet implemented purely due to time constraints or similar. I’m more interested in knowing if there are any differences in terms of paradigm.

    PS: if you have a different channel for this kind of question, please let me know.

    Thank you very much!

    opened by silverdna 2
  • Publish a release on PyPI

    I've made a PR #13 that you've accepted. Unfortunately, I cannot use the library the easy way because you didn't upload the latest changes to PyPI.

    I propose to implement #20 first.

    opened by rominf 0
  • Move backends requirements to extras_require

    If the package has optional features that require their own dependencies, you can use extras_require.

    I propose to make use of extras_require for all backends that require dependencies, to minimize the number of installed packages. For example, I do not use s3, but all 11 packages are installed, 6 of which are needed only for s3.
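
    For illustration, the optional backend dependencies could be declared as extras in setup.py roughly like this (the package names are illustrative, not lazydata's actual dependency list):

    from setuptools import setup, find_packages

    setup(
        name="lazydata",
        packages=find_packages(),
        install_requires=["pyyaml"],   # core dependencies only
        extras_require={
            "s3": ["boto3"],           # installed with: pip install lazydata[s3]
        },
    )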

    opened by rominf 0
  • Azure integration

    Here's a start on the azure integration. I haven't written tests yet, but let me know what you think. Also, sorry for some of the style changes, I have an autoformatter on (black). Let me know if you want me to turn that off.

    Ref #18

    opened by avril-affine 3
  • Implementing multiple backends by re-using snakemake.remote or pyfilesystem2

    Would it be possible to wrap the classes implementing snakemake.remote.AbstractRemoteObject into the lazydata.remote.RemoteStorage class?

    This would allow implementing the following remote storage providers in one go (https://snakemake.readthedocs.io/en/stable/snakefiles/remote_files.html):

    • Amazon Simple Storage Service (AWS S3): snakemake.remote.S3
    • Google Cloud Storage (GS): snakemake.remote.GS
    • File transfer over SSH (SFTP): snakemake.remote.SFTP
    • Read-only web (HTTP[S]): snakemake.remote.HTTP
    • File transfer protocol (FTP): snakemake.remote.FTP
    • Dropbox: snakemake.remote.dropbox
    • XRootD: snakemake.remote.XRootD
    • GenBank / NCBI Entrez: snakemake.remote.NCBI
    • WebDAV: snakemake.remote.webdav
    • GFAL: snakemake.remote.gfal
    • GridFTP: snakemake.remote.gridftp
    • iRODS: snakemake.remote.iRODS
    • EGA: snakemake.remote.EGA

    Pyfilesystem2

    An alternative would be to write a wrapper around pyfilesystem2: https://github.com/PyFilesystem/pyfilesystem2. It supports the following filesystems: https://www.pyfilesystem.org/page/index-of-filesystems/

    Builtin

    • FTPFS File Transfer Protocol.
    • ...

    Official

    Filesystems in the PyFilesystem organisation on GitHub.

    • S3FS Amazon S3 Filesystem.
    • WebDavFS WebDav Filesystem.

    Third Party

    • fs.archive Enhanced archive filesystems.
    • fs.dropboxfs Dropbox Filesystem.
    • fs-gcsfs Google Cloud Storage Filesystem.
    • fs.googledrivefs Google Drive Filesystem.
    • fs.onedrivefs Microsoft OneDrive Filesystem.
    • fs.smbfs A filesystem running over the SMB protocol.
    • fs.sshfs A filesystem running over the SSH protocol.
    • fs.youtube A filesystem for accessing YouTube Videos and Playlists.
    • fs.dnla A filesystem for accessing DLNA Servers.
    opened by Avsecz 3
  • lazydata track - tracking files produced by other CLI tools

    First, thanks for the amazing package. Exactly what I was looking for!

    It would be great to also have a command lazydata track <file1> <file2> ..., which would run lazydata.track() on the specified files. That way, the user can use CLI tools outside of python while still easily tracking the produced files.
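
    Until such a command exists, a small standalone script can do the same thing by calling the documented track() function on each argument (a user-side sketch, not part of lazydata):

    # track_files.py -- usage: python track_files.py <file1> <file2> ...
    import sys

    from lazydata import track

    for path in sys.argv[1:]:
        track(path)
        print("tracked:", path)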

    opened by Avsecz 2