Overview

snapmap-archiver

A tool written in Python to download all Snapmaps content from a specific location.

Setup

pip3 install snapmap-archiver

View on PyPI

Install dependencies with pip3.

pip3 install -r requirements.txt

Install aria2c, which is used to download Snap media.

Usage

python3 -m snapmap_archiver -o [OUTPUT DIR] -l="[LATITUDE],[LONGITUDE]"

Unfortunately you have to use the arbitrary -l="lat,lon" form rather than just -l "lat,lon" when passing negative coordinates, because argparse interprets the leading dash of a negative number as the start of another argument.
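
For example (the coordinates here are purely illustrative):

python3 -m snapmap_archiver -o ./snaps -l="-33.87,151.21"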

Optional Arguments

Export JSON

You can export a JSON file with info about the downloaded Snaps using the --write-json argument; it will contain information such as the time each Snap was posted and the Snap's location.
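
Purely as an illustration of the kind of record this can produce (the field names below are hypothetical, not a documented schema; overlayText and filetype are mentioned in the issue list further down):

[
  {
    "id": "SNAP_ID",
    "create_time": "2021-04-01T12:00:00Z",
    "location": {"lat": -35.28, "lon": 149.13},
    "overlayText": "",
    "filetype": "mp4"
  }
]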

Snap Radius

The radius around the coordinates you provide within which Snaps will be downloaded. -r 20000 will download all Snaps within a 20 km radius of your coordinates.
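
For example, a hypothetical run that limits downloads to a 20 km radius (coordinates are illustrative):

python3 -m snapmap_archiver -o ./snaps -l="-35.28,149.13" -r 20000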

Comments
  • Added support for merging video and overlay file into one video file

    Added support for merging the video and the overlay file into one video file using ffmpeg. You can disable this with the --no-overlay argument. This solves #3. Also added overlayText and filetype fields to the --write-json output.

    opened by Gertje823 1
  • Merge overlay.png with media.mp4

    Snaps with text and stickers don't include those graphics in the video file; instead they're stored in an image called overlay.png and displayed over the top by the browser/app.

    We could have an option like --merge-overlay which would use something like ffmpeg or avconv to composite the image over the media.mp4 file and export the result as a new file. A rough sketch of such a command is below.
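
    As a rough sketch only (this uses ffmpeg's overlay filter as described above; it is not the project's actual implementation):

    ffmpeg -i media.mp4 -i overlay.png -filter_complex "[0:v][1:v]overlay=0:0" -c:a copy merged.mp4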

    opened by king-millez 1
  • [Question] Downloading the snaps

    I notice you're using aria2c for downloading the videos. Is there a significant enough time difference using the external tool over, say, a piece of code like

    import requests
    ...
    # req_headers from get_data.py; filename and snap come from the surrounding script
    with open(filename + '.mp4', 'wb') as f:
        f.write(requests.get(snap['media']['raw_url'], headers=req_headers).content)
    

    The above code most likely won't run out of the box, but I feel that removing the aria2c requirement would make the tool more attractive, since people could get it running straight from pip.

    Sorry for opening an issue to discuss this; I don't know where the best place to talk about the code is.

    opened by AlanTheBlank 0
  • Bump urllib3 from 1.26.3 to 1.26.4

    Bumps urllib3 from 1.26.3 to 1.26.4.

    Release notes

    Sourced from urllib3's releases.

    1.26.4

    :warning: IMPORTANT: urllib3 v2.0 will drop support for Python 2: Read more in the v2.0 Roadmap

    • Changed behavior of the default SSLContext when connecting to HTTPS proxy during HTTPS requests. The default SSLContext now sets check_hostname=True.

    If you or your organization rely on urllib3 consider supporting us via GitHub Sponsors

    Changelog

    Sourced from urllib3's changelog.

    1.26.4 (2021-03-15)

    • Changed behavior of the default SSLContext when connecting to HTTPS proxy during HTTPS requests. The default SSLContext now sets check_hostname=True.

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    dependencies 
    opened by dependabot[bot] 0
  • File naming

    A better way to name the downloaded Snaps would be nice, rather than [SNAP_ID].mp4, which isn't really searchable. The data returned by utils.organise_data() could be used, which includes the Snap location and the timestamp from when it was posted.

    The actual API also contains some extra data for certain snaps, like the raw text used for a video.

    At the moment I'm thinking a good naming convention could be [MINOR_LOCATION] - [TIMESTAMP] - [SNAP_ID].mp4...
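
    A rough illustration of that convention (the field names here are hypothetical and may not match what utils.organise_data() actually returns):

    snap = {
        "minor_location": "Canberra",
        "timestamp": "2021-04-01_12-00-00",
        "id": "SNAP_ID",
    }
    filename = f"{snap['minor_location']} - {snap['timestamp']} - {snap['id']}.mp4"
    # -> "Canberra - 2021-04-01_12-00-00 - SNAP_ID.mp4"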

    opened by king-millez 0
  • Dynamically change radius to get maximum amount of relevant content

    When querying the API, you can pass the parameter radiusMeters, which gives you more specific content the lower the number you provide is (and broader content the higher it is). Starting from 95000 and working down to 0, we could iterate through the maximum amount of content in a specific area. I'm thinking stepping by -2500 for values > 10000 would be good, then -1000 until 1000, then -100 to 100, then -10 to 0, arbitrarily ending with 1. A sketch of that schedule is below.
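
    A minimal sketch of that schedule (an interpretation of the proposal above, not the project's actual code):

    def radius_schedule(start=95_000):
        """Yield radiusMeters values from coarse to fine, ending with 1."""
        r = start
        while r > 10_000:
            yield r
            r -= 2_500
        while r > 1_000:
            yield r
            r -= 1_000
        while r > 100:
            yield r
            r -= 100
        while r > 0:
            yield r
            r -= 10
        yield 1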

    opened by king-millez 0
Releases (2.0)
  • 2.0 (Dec 23, 2022)

    Version 2.0

    Install with pip install snapmap-archiver

    Updated Codebase

    • Usable as a package
    • The SnapmapArchiver class can be used across projects for custom integration with other packages
    • More efficient: no more blank excepts!
    • Better API integration
    • Uses dict.get instead of countless except KeyError checks (see the short example below)
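
    A short illustration of the difference (the snap dict and key are illustrative only, not taken from the codebase):

    snap = {"id": "SNAP_ID"}

    # With dict.get, a missing key falls back to a default instead of raising:
    overlay_text = snap.get("overlayText", "")

    # The older pattern this replaces:
    try:
        overlay_text = snap["overlayText"]
    except KeyError:
        overlay_text = ""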

    Installable

    • Package is now (properly) installable through pip
    • Now buildable with setup.py
    Source code (tar.gz)
    Source code (zip)
  • 1.3.1 (Jan 6, 2022)

Owner
Canberra-based developer. 15yo
A downloader for Cave Story written in Python

Cave Story Downloader This is a downloader for Cave Story written in Python. Thi

Imsad2 2 Feb 16, 2022
Aline file downloader automator!

AlineDorker Aline is used for donwloading files with google dorking , dowloading specified links such as dorks. Dependences: python3 installed pip ins

27 Nov 16, 2022
Youtube videos and channels scraper python wrapper!

YouTubeCrawle Wrapper for python Why This wrapper? This is wrapper is not limited to videos only it can scrape both channel and videos seperately ;D

Kei 16 Aug 08, 2022
A simple kemono.party downloader using python.

kemono-dl This is a simple kemono.party downloader. How to use Install python Download source code from releases and extract it Then install requireme

318 Dec 27, 2022
Desktop utility to download images/videos/music/text from various websites, and more

Desktop utility to download images/videos/music/text from various websites, and more

Kurt Bestor 11.2k Jan 08, 2023
A Fast as F*** Downloader

FAFD A Fast as F*** Downloader Github Usages You'll want to use a URL like this: https://github.com/RPowell-C/FAFD/raw/main/FAFD.py It's easier DONT F

1 Jan 19, 2022
YouTube Video Search Engine For Python

YouTube-Video-Search-Engine Introduction With the increasing demand for electronic devices, it is hard for people to choose the best products from mul

1 Dec 21, 2021
This package helps you to directly download an APK from Google Play by providing the package id of the app

Apk Downloader About | Features | Technologies | Requirements | Starting | License | Author 🎯 About This package helps you to directly download an AP

Daniel Agyapong 9 Dec 11, 2022
music downloader written in python. (Uses jiosaavn API)

music downloader written in python. (Uses jiosaavn API)

Rohn Chatterjee 35 Jul 20, 2022
A Python package for downloading / archiving all available episodes from a podcast RSS feed.

allcasts 📻 🗃 A Python package for downloading all available episodes from a podcast RSS feed. Useful for making private archives of your favourite p

Lewis Gentle 5 Nov 20, 2022
Web Downloader With Python

Web Downloader Introduction This module will provide API to download the webpage components : html file, image file, css fil, javascript file, href li

3 Dec 28, 2022
Download all games from a public Itch.io Game Jam

Itch Jam Downloader Downloads all games from a public Itch.io Game Jam. What you'll need: Python 3.8+ pip install -r requirements.txt For site mirrori

Dragoon Aethis 19 Dec 07, 2022
Open Source application for downloading and playing music.

Musifre Greetings For HackHeist(Wartex) Judges: Synopsis, Promotion Video & Product Functioning Video are present in Documentation Folder. A Star woul

Yash Dhingra 9 Mar 22, 2022
ImageScraper is a cross-platform tool for downloading a specified count from xkcd, Astronomy Picture of the Day and Existential Comics

ImageScraper The ImageScraper is a cross-platform tool for downloading a specified count from xkcd, Astronomy Picture of the Day and Existential Comic

1amnobody 1 Jan 25, 2022
A-share tick data downloader; automatically determines the trading calendar and fetches full-market level-1 data

TickDown A-share tick downloader; automatically determines the trading calendar and fetches full-market level-1 data. Dependencies: func_timeout, requests, some_tool (in the repo), akshare. Usage: the scheduled task starts at 09:07 in the morning. Parameter tuning: max_num is the number of stocks submitted per batch, currently 800; you can try different values yourself

Demon Finch 7 Jul 06, 2022
Using Youtube downloader is the fast and easy way to download and save any YouTube video.

Youtube video downloader using Django Using Django as a backend along with pytube module to create Youtbue Video Downloader. https://yt-videos-downloa

Suman Raj Khanal 10 Jun 18, 2022
File Downloader

File Downloader Watches a file containing download links and runs a command to download them. The link file is in form of: # comment DOWNLOAD_LINK

Pouriya 1 Jan 08, 2022
Downloads .ksy files and their dependencies straight from the official kaitai-struct format gallery.

ksy-dl Downloads .ksy files and their dependencies straight from the official kaitai-struct format gallery. This tool will: Fetch any of the official

3 Jun 20, 2022
Youtube_dl_helper - A hacky python script meant to automate the process of downloading mp3 files from YouTube using youtube-dl library

youtube_dl_helper A helper program meant to automate the process of downloading mp3 files from YouTube using youtube-dl library Dependencies In order

Guilherme Bittencourt de Borba 1 Jan 04, 2022