Google-drive-to-sqlite - Create a SQLite database containing metadata from Google Drive

Overview

google-drive-to-sqlite


Create a SQLite database containing metadata from Google Drive

If you use Google Drive, and especially if you have shared drives with other people, there's a good chance you have hundreds or even thousands of files that you may not be fully aware of.

This tool can download metadata about those files - their names, sizes, folders, content types, permissions, creation dates and more - and store them in a SQLite database.

This lets you use SQL to analyze your Google Drive contents, using Datasette or the SQLite command-line tool or any other SQLite database browsing software.
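
For example, here is a rough sketch of querying that database from Python with the standard library sqlite3 module. The files table is created by the files command described below; the column names used here (mimeType, size) are assumed to follow the Google Drive metadata field names, so adjust them to match your actual schema:

    import sqlite3

    # Summarize a files.db database created by "google-drive-to-sqlite files files.db".
    # Column names are assumptions based on the Drive metadata fields.
    conn = sqlite3.connect("files.db")
    rows = conn.execute(
        """
        select mimeType, count(*) as n, sum(size) as total_bytes
        from files
        group by mimeType
        order by n desc
        limit 10
        """
    ).fetchall()
    for mime_type, n, total_bytes in rows:
        print(mime_type, n, total_bytes or 0)
    conn.close()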

Installation

Install this tool using pip:

$ pip install google-drive-to-sqlite

Authentication

⚠️ This application has not yet been verified by Google - you may find you are unable to authenticate until that verification is complete. #10

First, authenticate with Google Drive using the auth command:

% google-drive-to-sqlite auth
Visit the following URL to authenticate with Google Drive

https://accounts.google.com/o/oauth2/v2/auth?...

Then return here and paste in the resulting code:
Paste code here: 

Follow the link, sign in with Google Drive and then copy and paste the resulting code back into the tool.

This will save an authentication token to a file called auth.json in the current directory.

To specify a different location for that file, use the --auth option:

google-drive-to-sqlite auth --auth ~/google-drive-auth.json

The auth command also provides options for using a different scope, Google client ID and Google client secret. You can use these to create your own custom authentication tokens that can work with other Google APIs; see issue #5 for details.

Full --help:

Usage: google-drive-to-sqlite auth [OPTIONS]

  Authenticate user and save credentials

Options:
  -a, --auth FILE              Path to save token, defaults to auth.json
  --google-client-id TEXT      Custom Google client ID
  --google-client-secret TEXT  Custom Google client secret
  --scope TEXT                 Custom token scope
  --help                       Show this message and exit.

To revoke the token that is stored in auth.json, run the revoke command:

google-drive-to-sqlite revoke

Or if your token is stored in another location:

google-drive-to-sqlite revoke -a ~/google-drive-auth.json

google-drive-to-sqlite files

To retrieve metadata about the files in your Google Drive, or a folder or search within it, use the google-drive-to-sqlite files command.

This will default to writing details about every file in your Google Drive to a SQLite database:

google-drive-to-sqlite files files.db

Files will be written to a files table, which will be created if it does not yet exist.

If a file already exists in that table, based on a matching id, it will be replaced with fresh data.
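
This replace-on-matching-id behaviour can be reproduced with the sqlite-utils Python library. The following is an illustrative sketch of the pattern, not this tool's actual implementation:

    import sqlite_utils

    db = sqlite_utils.Database("files.db")

    # Each row is a dictionary of Google Drive metadata; "id" is the primary key.
    rows = [
        {"id": "1YEsITp_X8PtDUJWHGM0osT-TXAU1nr0e7RSWRM2Jpyg", "name": "Title of a spreadsheet"},
    ]

    # replace=True overwrites any existing row with the same id with the fresh data.
    db["files"].insert_all(rows, pk="id", replace=True)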

Instead of writing to SQLite you can use --json to output as JSON, or --nl to output as newline-delimited JSON:

google-drive-to-sqlite files --nl

Use --folder ID to retrieve everything in a specified folder and its sub-folders:

google-drive-to-sqlite files files.db --folder 1E6Zg2X2bjjtPzVfX8YqdXZDCoB3AVA7i
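
Under the hood this kind of folder traversal can be expressed with the Drive API search operator 'FOLDER_ID' in parents, applied recursively to any sub-folders (see the design notes later in this document). A rough sketch of that idea, assuming you already have an access token; this is an illustration, not this tool's actual code:

    import httpx

    FOLDER_MIME = "application/vnd.google-apps.folder"

    def files_in_folder(folder_id, access_token):
        # Yield metadata for every file in folder_id and, recursively, its sub-folders.
        # Pagination via nextPageToken is omitted here for brevity.
        response = httpx.get(
            "https://www.googleapis.com/drive/v3/files",
            params={"q": "'{}' in parents".format(folder_id)},
            headers={"Authorization": "Bearer {}".format(access_token)},
        )
        response.raise_for_status()
        for file in response.json()["files"]:
            yield file
            if file["mimeType"] == FOLDER_MIME:
                yield from files_in_folder(file["id"], access_token)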

Use --q QUERY to use a custom search query:

google-drive-to-sqlite files files.db -q 'starred = true'

Use --full-text TEXT to search for files where the full text matches a search term:

google-drive-to-sqlite files files.db --full-text 'datasette'

Use --stop-after X to stop after retrieving X files.

Full --help:

Usage: google-drive-to-sqlite files [OPTIONS] [DATABASE]

  Retrieve metadata for files in Google Drive, and write to a SQLite database or
  output as JSON.

      google-drive-to-sqlite files files.db

  Use --json to output JSON, --nl for newline-delimited JSON:

      google-drive-to-sqlite files files.db --json

  Use a folder ID to recursively fetch every file in that folder and its sub-
  folders:

      google-drive-to-sqlite files files.db --folder
      1E6Zg2X2bjjtPzVfX8YqdXZDCoB3AVA7i

Options:
  -a, --auth FILE       Path to auth.json token file
  --folder TEXT         Files in this folder ID and its sub-folders
  -q TEXT               Files matching this query
  --full-text TEXT      Search for files with text match
  --json                Output JSON rather than write to DB
  --nl                  Output newline-delimited JSON rather than write to DB
  --stop-after INTEGER  Stop paginating after X results
  --help                Show this message and exit.

google-drive-to-sqlite download FILE_ID

The download command can be used to download files from Google Drive.

You'll need one or more file IDs, which look something like 0B32uDVNZfiEKLUtIT1gzYWN2NDI4SzVQYTFWWWxCWUtvVGNB.

To download the file, run this:

google-drive-to-sqlite download 0B32uDVNZfiEKLUtIT1gzYWN2NDI4SzVQYTFWWWxCWUtvVGNB

This will detect the content type of the file and use that to pick the file extension, so if this file is a JPEG it will be downloaded as:

0B32uDVNZfiEKLUtIT1gzYWN2NDI4SzVQYTFWWWxCWUtvVGNB.jpeg
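
One way to implement that extension detection is to take the subtype from the Content-Type header returned by the API. A small illustrative sketch, not necessarily this tool's exact logic:

    def filename_for(file_id, content_type):
        # "image/jpeg; charset=..." -> "jpeg"
        extension = content_type.split(";")[0].split("/")[-1]
        return "{}.{}".format(file_id, extension)

    print(filename_for("0B32uDVNZfiEKLUtIT1gzYWN2NDI4SzVQYTFWWWxCWUtvVGNB", "image/jpeg"))
    # 0B32uDVNZfiEKLUtIT1gzYWN2NDI4SzVQYTFWWWxCWUtvVGNB.jpeg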

You can pass multiple file IDs to the command at once.

To hide the progress bar and filename output, use -s or --silent.

If you are downloading a single file you can use the -o option to specify a filename and location:

google-drive-to-sqlite download 0B32uDVNZfiEKLUtIT1gzYWN2NDI4SzVQYTFWWWxCWUtvVGNB \
  -o my-image.jpeg

Use -o - to write the file contents to standard output:

google-drive-to-sqlite download 0B32uDVNZfiEKLUtIT1gzYWN2NDI4SzVQYTFWWWxCWUtvVGNB \
  -o - > my-image.jpeg

Full --help:

Usage: google-drive-to-sqlite download [OPTIONS] FILE_IDS...

  Download one or more file IDs to disk

Options:
  -a, --auth FILE    Path to auth.json token file
  -o, --output FILE  File to write to, or - for standard output
  -s, --silent       Hide progress bar and filename
  --help             Show this message and exit.

google-drive-to-sqlite get URL

The get command makes authenticated requests to the specified URL, using credentials derived from the auth.json file.

For example:

% google-drive-to-sqlite get 'https://www.googleapis.com/drive/v3/about?fields=*'
{
    "kind": "drive#about",
    "user": {
        "kind": "drive#user",
        "displayName": "Simon Willison",
# ...

If the resource you are fetching supports pagination you can use --paginate key to paginate through all of the rows in a specified key. For example, the following API has a nextPageToken key and a files list, suggesting it supports pagination:

% google-drive-to-sqlite get https://www.googleapis.com/drive/v3/files
{
    "kind": "drive#fileList",
    "nextPageToken": "~!!~AI9...wogHHYlc=",
    "incompleteSearch": false,
    "files": [
        {
            "kind": "drive#file",
            "id": "1YEsITp_X8PtDUJWHGM0osT-TXAU1nr0e7RSWRM2Jpyg",
            "name": "Title of a spreadsheet",
            "mimeType": "application/vnd.google-apps.spreadsheet"
        },

To paginate through everything in the files list you would use --paginate files like this:

% google-drive-to-sqlite get https://www.googleapis.com/drive/v3/files --paginate files
[
  {
    "kind": "drive#file",
    "id": "1YEsITp_X8PtDUJWHGM0osT-TXAU1nr0e7RSWRM2Jpyg",
    "name": "Title of a spreadsheet",
    "mimeType": "application/vnd.google-apps.spreadsheet"
  },
  # ...

Add --nl to stream paginated data as newline-delimited JSON:

% google-drive-to-sqlite get https://www.googleapis.com/drive/v3/files --paginate files --nl
{"kind": "drive#file", "id": "1YEsITp_X8PtDUJWHGM0osT-TXAU1nr0e7RSWRM2Jpyg", "name": "Title of a spreadsheet", "mimeType": "application/vnd.google-apps.spreadsheet"}
{"kind": "drive#file", "id": "1E6Zg2X2bjjtPzVfX8YqdXZDCoB3AVA7i", "name": "Subfolder", "mimeType": "application/vnd.google-apps.folder"}

Add --stop-after 5 to stop after 5 records - useful for testing.
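
The pagination pattern here is the standard Google API one: request a page, read nextPageToken from the response, and pass it back as pageToken until it disappears. A rough sketch of that loop, assuming an access token; an illustration rather than this tool's actual code:

    import httpx

    def paginate(url, key, access_token, stop_after=None):
        # Yield items from the named key, following nextPageToken until it runs out.
        count = 0
        page_token = None
        while True:
            params = {"pageToken": page_token} if page_token else {}
            response = httpx.get(
                url,
                params=params,
                headers={"Authorization": "Bearer {}".format(access_token)},
            )
            response.raise_for_status()
            data = response.json()
            for item in data.get(key, []):
                yield item
                count += 1
                if stop_after and count >= stop_after:
                    return
            page_token = data.get("nextPageToken")
            if not page_token:
                return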

Full --help:

Usage: google-drive-to-sqlite get [OPTIONS] URL

  Make an authenticated HTTP GET to the specified URL

Options:
  -a, --auth FILE       Path to auth.json token file
  --paginate TEXT       Paginate through all results in this key
  --nl                  Output paginated data as newline-delimited JSON
  --stop-after INTEGER  Stop paginating after X results
  --help                Show this message and exit.

Privacy policy

This tool requests access to your Google Drive account in order to retrieve metadata about your files there. It also offers a feature that can download the content of those files.

The credentials used to access your account are stored in the auth.json file on your computer. The metadata and content retrieved from Google Drive is also stored only on your own personal computer.

At no point do the developers of this tool gain access to any of your data.

Development

To contribute to this tool, first check out the code. Then create a new virtual environment:

cd google-drive-to-sqlite
python -m venv venv
source venv/bin/activate

Or if you are using pipenv:

pipenv shell

Now install the dependencies and test dependencies:

pip install -e '.[test]'

To run the tests:

pytest

Comments
  • google-drive-to-sqlite initial release

    Basic design:

    google-drive-to-sqlite files google.db FOLDER_ID
    

    Looks for auth.json with credentials in the current directory - it wants a refresh_token or an access_token of some sort.

    Features for the initial release:

    • [x] #2
    • [x] #3
    • [x] #7
    • [x] #5
    design 
    opened by simonw 13
  • Retry once (or more?) on any TransportError

    Got this exception:

      File "/Users/simon/Dropbox/Development/google-drive-to-sqlite/google_drive_to_sqlite/utils.py", line 79, in get
        response = httpx.get(url, params=params, headers=headers, timeout=self.timeout)
      File "/Users/simon/.local/share/virtualenvs/google-drive-to-sqlite-Wr1nXkpK/lib/python3.10/site-packages/httpx/_api.py", line 189, in get
        return request(
      File "/Users/simon/.local/share/virtualenvs/google-drive-to-sqlite-Wr1nXkpK/lib/python3.10/site-packages/httpx/_api.py", line 100, in request
        return client.request(
      File "/Users/simon/.local/share/virtualenvs/google-drive-to-sqlite-Wr1nXkpK/lib/python3.10/site-packages/httpx/_client.py", line 802, in request
        return self.send(request, auth=auth, follow_redirects=follow_redirects)
      File "/Users/simon/.local/share/virtualenvs/google-drive-to-sqlite-Wr1nXkpK/lib/python3.10/site-packages/httpx/_client.py", line 889, in send
        response = self._send_handling_auth(
      File "/Users/simon/.local/share/virtualenvs/google-drive-to-sqlite-Wr1nXkpK/lib/python3.10/site-packages/httpx/_client.py", line 917, in _send_handling_auth
        response = self._send_handling_redirects(
      File "/Users/simon/.local/share/virtualenvs/google-drive-to-sqlite-Wr1nXkpK/lib/python3.10/site-packages/httpx/_client.py", line 954, in _send_handling_redirects
        response = self._send_single_request(request)
      File "/Users/simon/.local/share/virtualenvs/google-drive-to-sqlite-Wr1nXkpK/lib/python3.10/site-packages/httpx/_client.py", line 990, in _send_single_request
        response = transport.handle_request(request)
      File "/Users/simon/.local/share/virtualenvs/google-drive-to-sqlite-Wr1nXkpK/lib/python3.10/site-packages/httpx/_transports/default.py", line 217, in handle_request
        with map_httpcore_exceptions():
      File "/Users/simon/.pyenv/versions/3.10.0/lib/python3.10/contextlib.py", line 153, in __exit__
        self.gen.throw(typ, value, traceback)
      File "/Users/simon/.local/share/virtualenvs/google-drive-to-sqlite-Wr1nXkpK/lib/python3.10/site-packages/httpx/_transports/default.py", line 77, in map_httpcore_exceptions
        raise mapped_exc(message) from exc
    httpx.RemoteProtocolError: Server disconnected without sending a response.
    

    Would be good to retry once if this happens.
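
    A rough sketch of the kind of retry wrapper this suggests - the 0.4 release notes below describe retrying up to twice with a two second delay, but this snippet is only an illustration, not the code that shipped:

    import time

    import httpx

    def get_with_retries(url, headers=None, retries=2, delay=2):
        # GET a URL, retrying on httpx.TransportError (covers server disconnects and timeouts).
        for attempt in range(retries + 1):
            try:
                return httpx.get(url, headers=headers)
            except httpx.TransportError:
                if attempt == retries:
                    raise
                time.sleep(delay)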

    bug 
    opened by simonw 12
  • `google-drive-to-sqlite download FILE_ID` command

    Here's the recipe that worked for retrieving the binary contents of a file - the trick is the alt=media parameter:

    import httpx

    def get_binary(file_id, access_token):
        # alt=media asks the Drive API for the file contents rather than its metadata
        return httpx.get(
            "https://www.googleapis.com/drive/v3/files/{}?alt=media".format(file_id),
            headers={"Authorization": "Bearer {}".format(access_token)},
        ).content
    

    Originally posted by @simonw in https://github.com/simonw/google-drive-to-sqlite/issues/1#issuecomment-1041026849

    enhancement 
    opened by simonw 11
  • Add `auth` options for setting client ID and secret and scope

    If auth could optionally take an alternative client ID and secret and scope (and write them to auth.json) then the get command could be used by power users (mainly me) to explore other Google APIs.

    • #3
    enhancement 
    opened by simonw 9
  • `google-drive-to-sqlite auth` command

    Authentication will be tricky. For the moment I'll go with the simplest thing possible, but I may need to build a google-drive-to-sqlite auth command just to get things up and running.

    Originally posted by @simonw in https://github.com/simonw/google-drive-to-sqlite/issues/1#issuecomment-1041023730

    research 
    opened by simonw 9
  • Schema change: a file/drive can only have one owner

    The numbers in this screenshot seem to indicate that each file and folder has a single owner only - even though the JSON would suggest the possibility of multiple owners:

    [screenshot omitted]

    So I may be able to get rid of those many-to-many tables entirely.

    enhancement 
    opened by simonw 6
  • `files` options for listing just Google Docs documents

    Would be neat to have options that can return:

    • All Google Docs documents of any type
    • Just docs
    • Just sheets
    • Just presentations
    • Maybe just drawings?

    Related:

    • #21
    enhancement blocked 
    opened by simonw 5
  • Invalid Credentials error if `access_token` expires

    I left a files job running which took over an hour, and came back to:

    google-drive-to-sqlite files big_folder.db --folder 1E6Zg2X2bjjtPzVfX8YqdXZDCoB3AVA7i
    

    google_drive_to_sqlite.utils.FilesError: {'error': {'errors': [{'domain': 'global', 'reason': 'authError', 'message': 'Invalid Credentials', 'locationType': 'header', 'location': 'Authorization'}], 'code': 401, 'message': 'Invalid Credentials'}}

    It looks like it took longer than an hour and the access_token expired!

    Since we have a refresh_token it would be feasible to catch this and try again with a new token.
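
    The refresh itself is a standard OAuth 2.0 token refresh: POST the refresh_token along with the client ID and secret to Google's token endpoint and use the returned access_token. A rough illustrative sketch, not this tool's actual code:

    import httpx

    def refresh_access_token(refresh_token, client_id, client_secret):
        # Exchange a refresh_token for a fresh access_token.
        response = httpx.post(
            "https://oauth2.googleapis.com/token",
            data={
                "grant_type": "refresh_token",
                "refresh_token": refresh_token,
                "client_id": client_id,
                "client_secret": client_secret,
            },
        )
        response.raise_for_status()
        return response.json()["access_token"]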

    bug 
    opened by simonw 5
  • httpx TimeoutError

    Got this exception while testing code from #7:

      File "/Users/simon/Dropbox/Development/google-drive-to-sqlite/google_drive_to_sqlite/utils.py", line 25, in paginate_files
        data = httpx.get(
      File "/Users/simon/.local/share/virtualenvs/google-drive-to-sqlite-Wr1nXkpK/lib/python3.10/site-packages/httpx/_api.py", line 189, in get
        return request(
      File "/Users/simon/.local/share/virtualenvs/google-drive-to-sqlite-Wr1nXkpK/lib/python3.10/site-packages/httpx/_api.py", line 100, in request
        return client.request(
      File "/Users/simon/.local/share/virtualenvs/google-drive-to-sqlite-Wr1nXkpK/lib/python3.10/site-packages/httpx/_client.py", line 802, in request
        return self.send(request, auth=auth, follow_redirects=follow_redirects)
      File "/Users/simon/.local/share/virtualenvs/google-drive-to-sqlite-Wr1nXkpK/lib/python3.10/site-packages/httpx/_client.py", line 889, in send
        response = self._send_handling_auth(
      File "/Users/simon/.local/share/virtualenvs/google-drive-to-sqlite-Wr1nXkpK/lib/python3.10/site-packages/httpx/_client.py", line 917, in _send_handling_auth
        response = self._send_handling_redirects(
      File "/Users/simon/.local/share/virtualenvs/google-drive-to-sqlite-Wr1nXkpK/lib/python3.10/site-packages/httpx/_client.py", line 954, in _send_handling_redirects
        response = self._send_single_request(request)
      File "/Users/simon/.local/share/virtualenvs/google-drive-to-sqlite-Wr1nXkpK/lib/python3.10/site-packages/httpx/_client.py", line 990, in _send_single_request
        response = transport.handle_request(request)
      File "/Users/simon/.local/share/virtualenvs/google-drive-to-sqlite-Wr1nXkpK/lib/python3.10/site-packages/httpx/_transports/default.py", line 217, in handle_request
        with map_httpcore_exceptions():
      File "/Users/simon/.pyenv/versions/3.10.0/lib/python3.10/contextlib.py", line 153, in __exit__
        self.gen.throw(typ, value, traceback)
      File "/Users/simon/.local/share/virtualenvs/google-drive-to-sqlite-Wr1nXkpK/lib/python3.10/site-packages/httpx/_transports/default.py", line 77, in map_httpcore_exceptions
        raise mapped_exc(message) from exc
    httpx.ReadTimeout: The read operation timed out
    
    bug 
    opened by simonw 5
  • `google-drive-to-sqlite files` command

    I'm going to have this:

    google-drive-to-sqlite files google.db
    

    Retrieve ALL files in the drive.

    google-drive-to-sqlite files google.db --folder FOLDER_ID
    

    Will do just the files in that folder - using ?q= and "folder_id" in parents - but applied recursively to all of the sub-folders.

    google-drive-to-sqlite files google.db --q search_term
    

    Will allow advanced search terms (passed to ?q=).

    Originally posted by @simonw in https://github.com/simonw/google-drive-to-sqlite/issues/1#issuecomment-1041172083

    enhancement 
    opened by simonw 5
  • `drive_users` table fills up with null rows

    Something went very wrong here:

    [screenshot omitted]

    88 rows where permissionId is not null, 14,012 rows where permissionId is null.

    Originally posted by @simonw in https://github.com/simonw/google-drive-to-sqlite/issues/18#issuecomment-1046167012

    bug 
    opened by simonw 4
  • Support authentication using service account keys

    Service account keys take the form of a JSON file on disk containing a private key.

    It's possible, albeit non-obvious, to make calls to the Google Drive API using these keys.

    They might represent a better authentication mechanism for many use-cases. See also:

    • #39
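
    A speculative sketch of how that could look using the google-auth library - this is not something the tool currently implements:

    from google.auth.transport.requests import Request
    from google.oauth2 import service_account

    # Path to a downloaded service account key file (hypothetical name).
    credentials = service_account.Credentials.from_service_account_file(
        "service-account-key.json",
        scopes=["https://www.googleapis.com/auth/drive.readonly"],
    )
    credentials.refresh(Request())
    # credentials.token is now a bearer token usable against the Drive API.
    print(credentials.token)
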
    enhancement 
    opened by simonw 4
  • OOB auth flow is scheduled for deprecation

    Just learned about this: https://developers.googleblog.com/2022/02/making-oauth-flows-safer.html?m=1#disallowed-oo

    OAuth out-of-band (OOB) is a legacy flow developed to support native clients which do not have a redirect URI like web apps to accept the credentials after a user approves an OAuth consent request. The OOB flow poses a remote phishing risk and clients must migrate to an alternative method to protect against this vulnerability. New clients will be unable to use this flow starting on Feb 28, 2022.

    ...

    • Feb 28, 2022 - new OAuth usage will be blocked for the OOB flow
    • Sep 5, 2022 - a user-facing warning message may be displayed to non-compliant OAuth requests
    • Oct 3, 2022 - the OOB flow is deprecated for existing clients

    From a comment on Hacker News: https://news.ycombinator.com/item?id=30417735

    I'm using that flow here: https://github.com/simonw/google-drive-to-sqlite/blob/1215097786c0ecdb12a766c9f2c8e53b2b0cd0f9/google_drive_to_sqlite/cli.py#L56-L65

    research 
    opened by simonw 8
  • `--sql` option for `download` and `export` commands

    twitter-to-sqlite has this: https://datasette.io/tools/twitter-to-sqlite#user-content-providing-input-from-a-sql-query-with---sql-and---attach

    $ twitter-to-sqlite users-lookup my.db \
        --sql="select followed_id from following where followed_id not in (
            select id from users)" --ids
    

    Some kind of equivalent for the download and export commands would be neat.

    enhancement research 
    opened by simonw 3
  • Can I use the Drive Activity API to find files recently added to a nested folder?

    https://developers.google.com/drive/activity/v2/reference/rest/v2/activity/query looks like it might be able to solve the problem of "show me files that were added to this deeply nested folder structure since the last time I looked".

    research 
    opened by simonw 0
  • Mechanism for creating a table based on csv export from a Google Sheet

    Related to:

    • #28

    Might not be feasible, depending on how good the CSV export is, especially for sheets that don't have any headers. But could be fun to explore!
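
    A speculative sketch of the idea, fetching the CSV export via the Drive API export endpoint and loading it with sqlite-utils - not something the tool implements:

    import csv
    import io

    import httpx
    import sqlite_utils

    def sheet_to_table(file_id, access_token, db_path="sheets.db", table="sheet_rows"):
        # Export the first sheet of a Google Sheet as CSV and load it into SQLite.
        # Assumes the first row contains headers - the open question in this issue.
        response = httpx.get(
            "https://www.googleapis.com/drive/v3/files/{}/export".format(file_id),
            params={"mimeType": "text/csv"},
            headers={"Authorization": "Bearer {}".format(access_token)},
        )
        response.raise_for_status()
        rows = list(csv.DictReader(io.StringIO(response.text)))
        sqlite_utils.Database(db_path)[table].insert_all(rows)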

    research 
    opened by simonw 0
Releases (0.4)
  • 0.4 (Feb 20, 2022)

    • Redesigned the schema to reflect that Google Drive files can only have one, not multiple owners. #34
      • Removed drive_folders_owners and drive_files_owners tables.
      • drive_files and drive_folders now have a _owner column that is a foreign key to drive_users.
    • New google-drive-to-sqlite export format file_id command for exporting Google Docs files as different formats such as pdf. #21
    • New options to files: --apps for all Google Docs files of all types, or --docs, --sheets, --presentations and --drawings for Google Docs files of specific types.
    • google-drive-to-sqlite files and get commands now have a --verbose option showing what the tool is doing in detail.
    • Automatically retries up to twice on HTTP transport errors, with a 2 second delay. #18
    • The auth.json file now defaults to 0600 permissions, meaning only the user can read that file on a shared system. #37
  • 0.3 (Feb 19, 2022)

    • New design for the database schema: #9
      • drive_files contains files, with a _parent foreign key to the parent folder
      • drive_folders contains folders, with a self-referential _parent foreign key
      • drive_users contains users who may own or have modified files
      • drive_folders_owners relates folders to their owners
      • drive_files_owners relates files to their owners
      • The full schema is now included in the documentation.
    • New --import-json and --import-nl options to the files command for creating a database using JSON data retrieved previously using the --nl and --json options. #20
    • --starred, --trashed and --shared-with-me options for files as shortcuts for constructing more advanced search queries. #25
    • Documentation now has a quickstart section. #24
  • 0.2a0 (Feb 18, 2022)

    • New google-drive-to-sqlite revoke command, for revoking stored credentials. #16
    • If an hour long access_token expires while a command is running, the failed request is retried with a new token automatically generated using the refresh_token. #11
  • 0.1a2 (Feb 17, 2022)

  • 0.1a1 (Feb 17, 2022)

  • 0.1a0 (Feb 16, 2022)

    This initial alpha release may not work for anyone other than me, as Google still need to verify my OAuth application.

    • google-drive-to-sqlite auth command for authenticating with Google and storing the resulting credentials. #2
    • google-drive-to-sqlite files command for fetching file metadata from Google Drive and storing it in a SQLite database file, or outputting it as JSON. #7
    • google-drive-to-sqlite get command for executing other authenticated API calls against Google APIs. #3