Simply scrape / download all the media from a Fansly account.

Overview


Description

One-click code to scrape your favorite Fansly creators' media content. After you've run the code, it creates a folder named CreatorName_fansly in the same directory you launched the code from. That folder contains two sub-folders, Pictures & Videos, into which the downloaded content is sorted. This is useful if, for example, you dislike the website's theming and would rather view the media on your local machine. This code does not bypass any paywalls, and no end-user information is collected during usage.
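For example, scraping a creator shown here as CreatorName (a placeholder name) produces a layout like this:

    CreatorName_fansly/
        Pictures/
        Videos/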

How To

  1. If you have Python installed, download the GitHub repository; otherwise, use the Executable version.
  2. Make sure you have registered a Fansly account and are logged in with it in your browser, or you won't be able to get an authorization token from the developer console.
  3. Go to any creator's account page and open your browser's developer console (usually F12).
  4. Reload the page (F5, or the circular arrow to the left of your browser's address bar) while the developer console is open, then follow the steps shown in the picture below:
  5. (Screenshot showing where to find and copy the authorization: and User-Agent: values in the developer console.)
  6. Paste the two strings you just copied (the values shown to the right of authorization: and User-Agent: in the picture above) into the configuration file (config.ini), replacing the existing placeholders: under [MyAccount], set Authorization_Token= to the authorization: value and User_Agent= to the User-Agent: value.
  7. Replace the value of [TargetedCreator] > Username= with the username of whichever content creator you wish to scrape.
  8. Save your changes to config.ini, close it, and then start up Fansly Scraper.

From now on, you'll only need to redo step 7 for each future run.

Not enough content downloaded? Enable media previews by setting Download_Media_Previews to True in the configuration file.

You can set Open_Folder_When_Finished to False if you no longer want the download folder to open automatically once the code finishes.
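For orientation, a filled-in config.ini might look roughly like the sketch below. The [MyAccount] and [TargetedCreator] entries follow steps 6 and 7 above; the [Options] section name is an assumption, so the two option keys may live under a different section in your copy of the file.

    [MyAccount]
    ; step 6: values copied from the developer console
    Authorization_Token = paste-the-authorization-value-here
    User_Agent = paste-the-User-Agent-value-here

    [TargetedCreator]
    ; step 7: the creator whose media you want to download
    Username = ExampleCreator

    [Options]
    ; assumed section name; see the two notes above
    Download_Media_Previews = True
    Open_Folder_When_Finished = False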

Installation

You can simply install the Executable version. Otherwise, you'll need to install Python (ticking the pip option in the installer) and paste the line below into cmd.exe.

pip install requests loguru imagehash pillow
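Then start the scraper from the folder that contains the repository files and your edited config.ini, for example as below. The script name fansly_scraper.py is taken from the tracebacks further down this page; adjust it if your copy is named differently.

    python fansly_scraper.py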

Support

Depending on how many people show me that they like the code by starring this repo, I'll expand functionality & push more quality-of-life updates. Would you like to help out more? Any crypto donations are welcome!

BTC: bc1q82n68gmdxwp8vld524q5s7wzk2ns54yr27flst

ETH: 0x645a734Db104B3EDc1FBBA3604F2A2D77AD3BDc5

Disclaimer

"Fansly" or fansly.com is operated by Select Media LLC as stated on their "Contact" page. This code (Avnsx/fansly) isn't in any way affiliated with, sponsored by, or endorsed by Select Media LLC or "Fansly". The developer of this code is not responsible for the end users actions. Of course I've never even used this code myself ever before and haven't experienced its intended functionality on my local machine. This was written purely for educational purposes, in a entirely theoretical environment.

Written with Python 3.9.7 for Windows 10, version 21H1, build 19043.1237.

Comments
  • Scraper Not Downloading Media From Messages

    As the title suggests, I am having trouble downloading media from messages. The scraper provides the line "No scrapable media found in messages" even though there is plenty of media to download from messages. Please help.

    opened by DennisKaizer 10
  • I can't download the content in private messages - Fansly

    Hello Avnsx =) I want to thank you for your Fansly project, it is working well =) But it seems to me that the program is not working for the content that is in the messages; I hope you find a solution for this situation =) Keep up the good work

    added feature 
    opened by Milomamas 10
  • does not download entire media

    I've tried running this several times and it downloads maybe 1/6th of the total images and videos before closing/crashing (??). I don't have the knowledge necessary to offer anything more helpful than that to diagnose the problem, sorry.

    good first issue solved 
    opened by alfratrople 9
  • Incomplete download

    Hello,

    I noticed that the videos and photos older than a specific date (July 14 2022) were not downloaded.

    What could be the reason for this?

    Is there a way to specify media download for a specific date range?

    Thanks and congrats for this wonderful tool.

    builds 
    opened by mssm45 7
  • API Returned unauthorized

    This started happening after updating to 0.3.4. The error message would show "Used authorization token" but it's completely different from the actual cookie. Pretty much like this issue. https://github.com/Avnsx/fansly/issues/30

    bug solved 
    opened by ForcedToRock 7
  • API returned unauthorized

    Hey, firstly thank you for this, it's really a nice tool to use! I used it several times without issues. I didn't change anything, but now I get this message: [11]ERROR | 20:56 || API returned unauthorized. This is most likely because of a wrong authorization token, in the configuration file.

    But I can't change the authorization token.

    Can you tell me what to do there? Thank you very much!

    bug solved 
    opened by sashacorosk 7
  • EOF Error

    The application crashed and provided me with an error and instructions, as I'm sure you know because you programmed it. Here is the error; I'm not sure what extra information I can provide. I was using previously downloaded files from .2 and tried the new "update old download folder" option. Let me know if you need more information.

    Traceback (most recent call last):
      File "urllib3\connectionpool.py", line 703, in urlopen
      File "urllib3\connectionpool.py", line 398, in _make_request
      File "urllib3\connection.py", line 239, in request
      File "http\client.py", line 1282, in request
      File "http\client.py", line 1328, in _send_request
      File "http\client.py", line 1277, in endheaders
      File "http\client.py", line 1037, in _send_output
      File "http\client.py", line 998, in send
      File "ssl.py", line 1236, in sendall
      File "ssl.py", line 1205, in send
    ssl.SSLEOFError: EOF occurred in violation of protocol (_ssl.c:2384)
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "requests\adapters.py", line 440, in send
      File "urllib3\connectionpool.py", line 785, in urlopen
      File "urllib3\util\retry.py", line 592, in increment
    urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='cdn2.fansly.com', port=443): Max retries exceeded with url (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:2384)')))
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "fansly_scraper.py", line 304, in <module>
      File "requests\sessions.py", line 542, in get
      File "requests\sessions.py", line 529, in request
      File "requests\sessions.py", line 645, in send
      File "requests\adapters.py", line 517, in send
    requests.exceptions.SSLError: HTTPSConnectionPool(host='cdn2.fansly.com', port=443): Max retries exceeded with url  (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:2384)')))
    
    bug solved 
    opened by Nostang3 7
  • Did I miss a step

    I used the auto config, entered the name I wanted and ran the scraper, but the scraper couldn't find the folder to download into. It didn't create one, so I did, using the exact name and interior folder names. It ran, saying it was downloading a bunch of files, but once it fully finished only videos had been downloaded.

    I double checked the folder names and they are exactly like the prompt. At first I thought it was due to the folder, but since the videos are downloading I am very confused why the photos are not.

    invalid 
    opened by EricVodka 6
  • Download more than one model at a time

    Can I download all models consecutively, or list more than one manually? If I sub to 5 models and want to run one command to grab them all, is that possible?

    opened by ctrlcmdshft 4
  • Naming files by date posted feature?

    Hello! I'm just wondering if it's possible to have an option to add the post date to the file name? With all the ones I've downloaded so far, all of the files seem to be out of order according to the creator's video/image set.

    solved 
    opened by pamman2 4
  • Getting Error When Running

    When I run the scraper I get the error: "'TargetedCreator' is missing or malformed in the configuration file! Read the ReadMe file for assistance."

    I copied the authorization and User-Agent from the dev tools as explained in the ReadMe, but I still get the error. Should the values be in quotes, or should there be a space between the = and the information I am putting into the config file?

    invalid 
    opened by BDM96 4
  • Doesn't download all posts; Subscribed content is missing

    There are posts that aren't downloaded; it also says 218 duplicates declined! Those 218 could be mistaken for duplicates, but I'm pretty sure they aren't: those are photos from missing posts. Wouldn't it be good if there was an option in the config, "Put duplicates in separate folder", so it still downloads the duplicates?

    Either way, it's missing posts!

    bug help wanted investigating 
    opened by Joakimgreenday 8
  • Executable randomly closes; due to being Rate Limited

    I leave it running in the background, and after some downloads it simply closes (it's far from having downloaded everything). I'm using Windows 11 and set it to run as admin (without admin it would also close after a bit).

    bug investigating 
    opened by RobertoJKN 7
Releases(v0.3.5)
  • v0.3.5(Sep 1, 2022)

    This is the compiled release of Fansly Scraper. It provides an executable file with which you can launch the program without having Python installed. You'll have to unzip the "Fansly_Scraper.zip" file first with WinRAR or a similar program.

    Read the Quick Start to learn how to run Fansly Scraper.

    Changelog v0.3.5

    added configuration settings for:
    + separating messages or previews into subfolders
      > option "seperate _messages/_previews" can be set to "True" or "False"
    + naming files by date posted (issues id 28 - thanks @pawnstar81)
      > option "naming_convention" is now supported with values "Datepost" or "Standard"
    
    changes:
    + configuration "update_recent_download" can now also be set to "Auto"
    + added support for two-factor authentication with AC
    + adjusted scraper for fansly api version 3
    
    fixed a bug:
    + where api would return unauthorized (issues id 30 & 39)
    + where login request no longer returned auth token (issues id 18)
    
    module changes:
    + removed requirement for module selenium_wire
    + adjusted AC for undetected_chromedriver > 3.1.5 (please update!)
    + fixed compatibility for Opera GX (issues id 38)
    

    More information about the new configuration options
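    As a purely illustrative, unverified sketch, the new options could appear in config.ini roughly as below. The section name and the exact key spellings are assumptions; the changelog above is the authoritative wording.

      [Options]
      ; assumed section name and key spellings
      naming_convention = Datepost        ; or: Standard
      update_recent_download = Auto
      separate_messages = True            ; changelog wording: "seperate _messages/_previews"
      separate_previews = True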

    Full Changelog: https://github.com/Avnsx/fansly/compare/v0.3.3...v0.3.5

    VirusTotal (detections?): fansly scraper: https://rebrand.ly/scraper_035 automatic configurator: https://rebrand.ly/config_035 updater: https://rebrand.ly/updater_031

    Compiled using pyinstaller

    Source code(tar.gz)
    Source code(zip)
    Fansly_Scraper.zip(83.95 MB)
  • v0.3.3(Apr 29, 2022)

    This is the compiled release of Fansly Scraper. It provides an executable file with which you can launch the program without having Python installed. You'll have to unzip the "Fansly_Scraper.zip" file first with WinRAR or a similar program.

    Read the Quick Start to learn how to run Fansly Scraper.

    Changelog v0.3.3

    bug fixes:
    + adjusted scraper to recent fansly API changes (issues id 17)
    

    VirusTotal (detections?): fansly scraper: https://rebrand.ly/scraper_033 automatic configurator: https://rebrand.ly/config_031 updater: https://rebrand.ly/updater_031

    Compiled using pyinstaller

    Source code(tar.gz)
    Source code(zip)
    Fansly_Scraper.zip(89.00 MB)
  • v0.3.2(Feb 28, 2022)

    This is the compiled release of Fansly Scraper. It provides an executable file with which you can launch the program without having Python installed. You'll have to unzip the "Fansly_Scraper.zip" file first with WinRAR or a similar program.

    Read the Quick Start to learn how to run Fansly Scraper.

    Changelog v0.3.2

    changes:
    + low content warnings no longer print; if update_recent is enabled
    + errors now wait for user input; instead of a closing timer
    + now using github api, for version checks
    + improved error outputs, for config.ini issues
    + added rare stargazers reminder
    + added go to github shortcut into download folder
    
    bug fixes:
    + fixed issue with saving (issues id 14)
    

    VirusTotal (detections?): fansly scraper: https://rebrand.ly/scraper_032 automatic configurator: https://rebrand.ly/config_031 updater: https://rebrand.ly/updater_031

    Compiled using pyinstaller

    Source code(tar.gz)
    Source code(zip)
    Fansly_Scraper.zip(88.81 MB)
  • v0.3.1(Feb 4, 2022)

    This is the compiled release of Fansly Scraper. It provides an executable file with which you can launch the program without having Python installed. You'll have to unzip the "Fansly_Scraper_v0.3.1.zip" file first with WinRAR or a similar program.

    Read the Quick Start to learn how to run Fansly Scraper.

    Changelog v0.3.1

    changed:
    + if the OS is unsupported, the configurator now automatically redirects to Get Started
    + sleep timers increased for errors
    
    fixed a bug:
    + with message scraping (issues id 11)
    + with profile scraping (issues id 9)
    

    Q&A: Your opinion is wanted! Test the software and let me know here

    VirusTotal (detections?): fansly scraper: https://rebrand.ly/scraper_031 automatic configurator: https://rebrand.ly/config_031 updater: https://rebrand.ly/updater_031

    Compiled using pyinstaller 4.8

    Source code(tar.gz)
    Source code(zip)
    Fansly_Scraper_v0.3.1.zip(87.22 MB)
  • v0.3(Feb 2, 2022)

    This is the compiled release of Fansly Scraper. It provides an executable file with which you can launch the program without having Python installed. You'll have to unzip the "Fansly_Scraper_v0.3.zip" file first with WinRAR or a similar program.

    Changelog v0.3

    changed:
    + added support for scraping media from messages
    + added option to update old download folders
    + added option to show file names during downloads
    + added configurator to now automatically get required token & user agent
    + changed configurator user agent to chrome v97
    + configurator checks if running compiled/IDE & version of required module
    + added updater to update all compiled repository files by just clicking on it
    + scraper now forces you to update it, if it has booted up on an old version
    + scraper version is now stored in config.ini
    
    fixed a bug:
    + with scraping media from messages, if the creator had not DMed you
    + where config parser would throw interpolation errors
    + where pyinstaller binary would crash while exiting
    + with the open_when_finished function
    + where creator id would not be found, if no avatar
    + where filenames with over 175 characters might cause a crash
    + where open when finished did not work due to OS
    + where imagehash created lists with numpy arrays in them
    + where save_changes() wouldn't save user_agent
    + causing configurator to fail logging required data
    

    Q&A: Your opinion is wanted! Test the software and let me know here

    VirusTotal: Fansly Scraper: https://www.virustotal.com/gui/file/f60aabcc26a2bf6c63cf96fe7d382a283925832f1cbbd936d2ba5f32c3ba1090 Automatic Configurator: https://www.virustotal.com/gui/file/264b02cafb20d215abb39da7d5dcf31fd9eeed11da57aee97fd1f93d9cb9d1c2 updater: https://www.virustotal.com/gui/file/d7244d3eba12320b6e4ead0bf5a136314add49586acb41a6f34d23e4a57d4ec3

    Obviously, any detections are false positives.

    Compiled using pyinstaller 4.8

    Source code(tar.gz)
    Source code(zip)
    Fansly_Scraper_v0.3.zip(87.22 MB)
  • v0.2(Oct 17, 2021)

    This is the compiled release of Fansly Scraper. It provides an executable file with which you can launch the program without having Python installed. You'll have to unzip the "FanslyScraperExecutableV0.2.zip" file first with WinRAR or a similar program.

    Changelog v0.2

    + now using image fingerprinting & hashing to greatly reduce duplicate videos / images (see the illustrative sketch at the end of this release entry)
    + visual updates: now using loguru module for colored output
    + introduced logic to self check:
        - for github repo updates
        - internet connection
        - creator's subscribers, followers & available content
    + changed functionality:
        - user is now warned if at least 20% of available content wasn't successfully archived
        - adjusted & added more output texts for even more clarification
        - errors now print the full error traceback
    + bug fixes:
        - fixed a bug where an error would be caused if filenames had characters incompatible with Windows
    

    Compiled using pyinstaller.

    VirusTotal: https://www.virustotal.com/gui/file/60c358e606658ea8a329cc25b8368f4293246097cde40cbd8fbe54851b9d215c

    Source code(tar.gz)
    Source code(zip)
    FanslyScraperExecutableV0.2.zip(50.27 MB)
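    Aside on the duplicate detection mentioned in this release's changelog: the sketch below shows how perceptual hashing with the imagehash and pillow modules (both listed in the Installation section above) can skip near-identical images. It is illustrative only, not the scraper's actual implementation; the folder path and the .jpg filter are assumptions.

    from pathlib import Path

    import imagehash
    from PIL import Image

    def unique_images(folder):
        """Return image paths from folder, skipping perceptual near-duplicates."""
        seen = set()
        unique = []
        for path in sorted(Path(folder).glob("*.jpg")):  # extension filter is illustrative
            with Image.open(path) as img:
                h = imagehash.phash(img)  # 64-bit perceptual hash of the image content
            if h in seen:
                continue  # a visually identical file was already kept
            seen.add(h)
            unique.append(path)
        return unique

    print(unique_images("CreatorName_fansly/Pictures"))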
  • v0.1(Oct 9, 2021)

    This is the compiled release of Fansly Scraper. It provides an executable file with which you can launch the program without having Python installed. You'll have to unzip the "FanslyScraperExecutable.zip" file first with WinRAR or a similar program.

    VirusTotal: https://www.virustotal.com/gui/file/05ea28312775f9e4c5c28bb1d946468b58e67cfcf561a28f0c08c1e6014f4a5d Compiled using: https://github.com/brentvollebregt/auto-py-to-exe

    Source code(tar.gz)
    Source code(zip)
    FanslyScraperExecutable.zip(6.79 MB)
Owner
Mika C.
Passionate programmer and freelancer for various languages, currently focused on Python. I only publish a small portion of the code I write on here.