Simply scrape / download all the media from a Fansly account.

Overview

Description

A one-click tool to scrape your favorite Fansly creators' media content. After you run it, it creates a folder named CreatorName_fansly in the directory you launched it from. That folder contains two sub-folders, Pictures and Videos, into which the downloaded content is sorted. This is useful if, for example, you dislike the website's theming and would rather view the media on your local machine. This tool does not bypass any paywalls, and no end-user information is collected during usage.

How To

  1. If you have Python installed, download the GitHub repository; otherwise, use the Executable version.
  2. Make sure you have registered a Fansly account and are logged into it in your browser, or you will not be able to get an authorization token from the developer console.
  3. Go to any creator's account page and open your browser's developer console (usually the F12 key).
  4. With the developer console open, reload the page using the rotating-arrow symbol to the left of your browser's address bar (F5). Then follow the steps shown in the following picture:
  5. (Screenshot showing where to copy the authorization: and User-Agent: values.)
  6. Paste the two strings you just copied (the values to the right of authorization: and User-Agent:, as shown in the picture above) into the configuration file (config.ini), replacing the two placeholder values: 1. [MyAccount] > Authorization_Token= takes the value of authorization:; 2. [MyAccount] > User_Agent= takes the value of User-Agent:.
  7. Replace the value of [TargetedCreator] > Username= with whichever content creator you wish.
  8. Save your changes to config.ini, close it, and then start Fansly Scraper.

From now on, you'll only need to repeat step 7 for each future use.
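For reference, after steps 6 and 7 your config.ini should look roughly like the sketch below, and the scraper can read it with Python's standard configparser module (the values here are placeholders; the real strings come from your developer console):

```python
import configparser

# Placeholder config.ini contents; replace the values with your own strings.
EXAMPLE_CONFIG = """
[MyAccount]
Authorization_Token = paste_authorization_value_here
User_Agent = paste_user_agent_value_here

[TargetedCreator]
Username = CreatorName
"""

config = configparser.ConfigParser()
config.read_string(EXAMPLE_CONFIG)

token = config["MyAccount"]["Authorization_Token"]
user_agent = config["MyAccount"]["User_Agent"]
username = config["TargetedCreator"]["Username"]
```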

Not enough content downloaded? Enable media previews by setting Download_Media_Previews to True in the configuration file.

You can set Open_Folder_When_Finished to False if you no longer want the download folder to open automatically after the program finishes.

Installation

You can simply install the Executable version. Otherwise, you'll need to install Python (ticking the pip checkbox in the installer) and paste the line below into cmd.exe:

pip install requests loguru imagehash pillow
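To check that the install worked, you can ask Python whether the modules are importable; a small standard-library sketch (note that pillow installs under the module name PIL):

```python
import importlib.util

# Modules the scraper needs, per the pip command above; pillow imports as PIL.
REQUIRED = ["requests", "loguru", "imagehash", "PIL"]

def missing_modules(names):
    """Return the names from the list that are not importable in this environment."""
    return [name for name in names if importlib.util.find_spec(name) is None]

# An empty list from missing_modules(REQUIRED) means you're good to go.
```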

Support

Depending on how many people show that they like this tool by starring this repo, I'll expand functionality and push more quality-of-life updates. Would you like to help out more? Any crypto donations are welcome!

BTC: bc1q82n68gmdxwp8vld524q5s7wzk2ns54yr27flst

ETH: 0x645a734Db104B3EDc1FBBA3604F2A2D77AD3BDc5

Disclaimer

"Fansly" or fansly.com is operated by Select Media LLC as stated on their "Contact" page. This code (Avnsx/fansly) isn't in any way affiliated with, sponsored by, or endorsed by Select Media LLC or "Fansly". The developer of this code is not responsible for the end users actions. Of course I've never even used this code myself ever before and haven't experienced its intended functionality on my local machine. This was written purely for educational purposes, in a entirely theoretical environment.

Written with Python 3.9.7 for Windows 10, Version 21H1, Build 19043.1237.

Comments
  • Scraper Not Downloading Media From Messages

    As title suggests, I am having trouble downloading media from messages. The scraper provides the line "No scrapable media found in messages" even though there is plenty of media to download from messages. Please help.

    opened by DennisKaizer 10
  • I can't download the content in private messages - Fansly

    Hello Avnsx=) I Want Thank You For Your project about Fansly, is working well=) But it seems to me that the program is not working for the content that is in the messages, i hope/wish you get a solution for this situation=) Continuation of a good work

    added feature 
    opened by Milomamas 10
  • does not download entire media

    ive tried running this several times and it downloads maybe 1/6th of the total images and videos before closing/crashing (??) i dont have the knowledge necessary to offer anything more helpful than that to diagnose the problem, sorry.

    good first issue solved 
    opened by alfratrople 9
  • Incomplete download

    Hello,

    I noticed that the videos and photos older than a specific date (July 14 2022) were not downloaded.

    What could be the reason for this?

    Is there a way to specify media download for a specific date range?

    Thanks and congrats for this wonderful tool.

    builds 
    opened by mssm45 7
  • API Returned unauthorized

    This started happening after updating to 0.3.4. The error message would show "Used authorization token" but it's completely different from the actual cookie. Pretty much like this issue. https://github.com/Avnsx/fansly/issues/30

    bug solved 
    opened by ForcedToRock 7
  • API returned unauthorized

    Hey, firstly thank you for this, it's really a nice tool to use ! I used it several times without issues. I didnt change anything but now I have this message : [11]ERROR | 20:56 || API returned unauthorized. This is most likely because of a wrong authorization token, in the configuration file.

    But I cant change the authorization token.

    Can you tell me what to do there ? Thank you very much !

    bug solved 
    opened by sashacorosk 7
  • EOF Error

    The application crashed and provided me with an error and instructions as I'm sure you know because you programmed it. Here is the error. I'm not sure what extra information I can provide. I was using previously downloaded files from .2 and tried the new update old download folder option. Let me know if you need more information.

    Traceback (most recent call last):
      File "urllib3\connectionpool.py", line 703, in urlopen
      File "urllib3\connectionpool.py", line 398, in _make_request
      File "urllib3\connection.py", line 239, in request
      File "http\client.py", line 1282, in request
      File "http\client.py", line 1328, in _send_request
      File "http\client.py", line 1277, in endheaders
      File "http\client.py", line 1037, in _send_output
      File "http\client.py", line 998, in send
      File "ssl.py", line 1236, in sendall
      File "ssl.py", line 1205, in send
    ssl.SSLEOFError: EOF occurred in violation of protocol (_ssl.c:2384)
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "requests\adapters.py", line 440, in send
      File "urllib3\connectionpool.py", line 785, in urlopen
      File "urllib3\util\retry.py", line 592, in increment
    urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='cdn2.fansly.com', port=443): Max retries exceeded with url (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:2384)')))
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "fansly_scraper.py", line 304, in <module>
      File "requests\sessions.py", line 542, in get
      File "requests\sessions.py", line 529, in request
      File "requests\sessions.py", line 645, in send
      File "requests\adapters.py", line 517, in send
    requests.exceptions.SSLError: HTTPSConnectionPool(host='cdn2.fansly.com', port=443): Max retries exceeded with url  (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:2384)')))
    
    bug solved 
    opened by Nostang3 7
  • Did I miss a step

    I used the auto config. Entered the name I wanted. Ran the scraper, the scraper couldn't find the folder where to download. It didn't create one so I did using the exact name and interior folder names. It ran saying it was downloading a bunch of files, but once fully finished only videos were downloaded.

    I double checked the folder names and they are exactly like the prompt. At first I thought I was due to the folder, but since the videos are downloading I am very confused why the photos are not.

    invalid 
    opened by EricVodka 6
  • Download more then one model at a time

    can i just download all models consecutively or list more then one manually. If i sub to 5 models and want to run one command to grab them all is that possible

    opened by ctrlcmdshft 4
  • Naming files by date posted feature?

    Hello! I'm just wondering if it's possible to have an option to add the post date to the file name? With all the ones I've downloaded so far, all of the files seem to be out of order according to the creator's video/image set.

    solved 
    opened by pamman2 4
  • Getting Error When Running

    When I run the scraper I get the error " "'TargetedCreator'" is missing or malformed in the configuration file! Read the ReadMe file for assistance."

    I copied the authorization and User-Agent from the dev tools as explained in the ReadMe but I still get the error. Should they be in quotes or a space between the = and the information I am putting into the config file?

    invalid 
    opened by BDM96 4
  • Doesn't download all posts; Subscribed content is missing

    There are posts that aren't downloaded, it also says 218 duplicates declined! Those 218 could be mistaken for a duplicate but they aren't those are missing posts photos and I'm pretty sure. Wouldn't it be good if there was an option in the config "Put duplicates in separate folder", but it still downloads the duplicates?

    Either way it's missing posts!

    bug help wanted investigating 
    opened by Joakimgreenday 8
  • Executable randomly closes; due to being Rate Limited

    I leave it running in the background and after some downloads it simply closes (it's far from having downloaded everything). I'm using Windows 11 and set it to run as admin (without admin it also would close after a bit)

    bug investigating 
    opened by RobertoJKN 7
Releases (v0.3.5)
  • v0.3.5(Sep 1, 2022)

    This is the compiled release of Fansly Scraper. It provides an executable file, with which you can launch the program without having python installed. You'll have to unzip the "Fansly_Scraper.zip" file first with winrar or similar programs.

    Read Quick Start to know how to run fansly scraper

    Changelog v0.3.5

    added configuration settings for:
    + separating messages or previews into subfolders
      > option "seperate_messages/_previews" can be set to "True" or "False"
    + naming files by date posted (issues id 28 - thanks @pawnstar81)
      > option "naming_convention" is now supported with values "Datepost" or "Standard"
    
    changes:
    + configuration "update_recent_download" can now also be set to "Auto"
    + added support for two factor authentication with AC
    + adjusted scraper for fansly api version 3
    
    fixed a bug:
    + where api would return unauthorized (issues id 30 & 39)
    + where login request no longer returned auth token (issues id 18)
    
    module changes:
    + removed requirement for module selenium_wire
    + adjusted AC for undetected_chromedriver > 3.1.5 (please update!)
    + fixed compatibility for Opera GX (issues id 38)
    

    More information about the new configuration options
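The new "Datepost" naming convention can be sketched like this (a hypothetical helper for illustration only; it assumes the post date arrives as a Unix timestamp, and the real implementation may differ):

```python
from datetime import datetime, timezone

def datepost_filename(created_at: int, media_id: str, extension: str) -> str:
    """Prefix a file name with the post date so files sort chronologically on disk."""
    posted = datetime.fromtimestamp(created_at, tz=timezone.utc)
    return f"{posted:%Y-%m-%d}_{media_id}{extension}"
```

For example, datepost_filename(0, "12345", ".jpg") yields "1970-01-01_12345.jpg", so an alphabetical sort matches post order.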

    Full Changelog: https://github.com/Avnsx/fansly/compare/v0.3.3...v0.3.5

    VirusTotal (detections?): fansly scraper: https://rebrand.ly/scraper_035 automatic configurator: https://rebrand.ly/config_035 updater: https://rebrand.ly/updater_031

    Compiled using pyinstaller

    Source code(tar.gz)
    Source code(zip)
    Fansly_Scraper.zip(83.95 MB)
  • v0.3.3(Apr 29, 2022)

    This is the compiled release of Fansly Scraper. It provides an executable file, with which you can launch the program without having python installed. You'll have to unzip the "Fansly_Scraper.zip" file first with winrar or similar programs.

    Read Quick Start to know how to run fansly scraper

    Changelog v0.3.3

    bug fixes:
    + adjusted scraper to recent fansly API changes (issues id 17)
    

    VirusTotal (detections?): fansly scraper: https://rebrand.ly/scraper_033 automatic configurator: https://rebrand.ly/config_031 updater: https://rebrand.ly/updater_031

    Compiled using pyinstaller

    Source code(tar.gz)
    Source code(zip)
    Fansly_Scraper.zip(89.00 MB)
  • v0.3.2(Feb 28, 2022)

    This is the compiled release of Fansly Scraper. It provides an executable file, with which you can launch the program without having python installed. You'll have to unzip the "Fansly_Scraper.zip" file first with winrar or similar programs.

    Read Quick Start to know how to run fansly scraper

    Changelog v0.3.2

    changes:
    + low content warnings no longer print; if update_recent is enabled
    + errors now wait for user input; instead of a closing timer
    + now using github api, for version checks
    + improved error outputs, for config.ini issues
    + added rare stargazers reminder
    + added go to github shortcut into download folder
    
    bug fixes:
    + fixed issue with saving (issues id 14)
    

    VirusTotal (detections?): fansly scraper: https://rebrand.ly/scraper_032 automatic configurator: https://rebrand.ly/config_031 updater: https://rebrand.ly/updater_031

    Compiled using pyinstaller

    Source code(tar.gz)
    Source code(zip)
    Fansly_Scraper.zip(88.81 MB)
  • v0.3.1(Feb 4, 2022)

    This is the compiled release of Fansly Scraper. It provides an executable file, with which you can launch the program without having python installed. You'll have to unzip the "Fansly_Scraper_v0.3.1.zip" file first with winrar or similar programs.

    Read Quick Start to know how to run fansly scraper

    Changelog v0.3.1

    changed:
    + if OS unsupported; configurator now automatically redirects to Get Started
    + increased sleep timers for errors
    
    fixed a bug:
    + with message scraping (issues id 11)
    + with profile scraping (issues id 9)
    

    Q&A: Your opinion is asked! Test the software and let me know here

    VirusTotal (detections?): fansly scraper: https://rebrand.ly/scraper_031 automatic configurator: https://rebrand.ly/config_031 updater: https://rebrand.ly/updater_031

    Compiled using pyinstaller 4.8

    Source code(tar.gz)
    Source code(zip)
    Fansly_Scraper_v0.3.1.zip(87.22 MB)
  • v0.3(Feb 2, 2022)

    This is the compiled release of Fansly Scraper. It provides an executable file, with which you can launch the program without having python installed. You'll have to unzip the "Fansly_Scraper_v0.3.zip" file first with winrar or similar programs.

    Changelog v0.3

    changed:
    + added support for scraping media from messages
    + added option to update old download folders
    + added option to show file names during downloads
    + added configurator to now automatically get required token & user agent
    + changed configurator user agent to chrome v97
    + configurator checks if running compiled/IDE & version of required module
    + added updater to update all compiled repository files by just clicking on it
    + scraper now forces you to update it, if it has booted up on an old version
    + scraper version now is stored in config.ini
    
    fixed a bug:
    + with scraping media from messages, if creator had not DMed you
    + where config parser would throw interpolation errors
    + where pyinstaller binary would crash while exiting
    + with the open_when_finished function
    + where creator id would not be found, if no avatar
    + where filenames with over 175 characters might cause a crash
    + where open when finished did not work due to OS
    + where imagehash created lists with numpy arrays in them
    + where save_changes() wouldn't save user_agent
    + causing configurator to fail logging required data
    

    Q&A: Your opinion is asked! Test the software and let me know here

    VirusTotal: Fansly Scraper: https://www.virustotal.com/gui/file/f60aabcc26a2bf6c63cf96fe7d382a283925832f1cbbd936d2ba5f32c3ba1090 Automatic Configurator: https://www.virustotal.com/gui/file/264b02cafb20d215abb39da7d5dcf31fd9eeed11da57aee97fd1f93d9cb9d1c2 updater: https://www.virustotal.com/gui/file/d7244d3eba12320b6e4ead0bf5a136314add49586acb41a6f34d23e4a57d4ec3

Obviously, any detections are false positives.

    Compiled using pyinstaller 4.8

    Source code(tar.gz)
    Source code(zip)
    Fansly_Scraper_v0.3.zip(87.22 MB)
  • v0.2(Oct 17, 2021)

    This is the compiled release of Fansly Scraper. It provides an executable file, with which you can launch the program without having python installed. You'll have to unzip the "FanslyScraperExecutableV0.2.zip" file first with winrar or similar programs.

    Changelog v0.2

    + now using image fingerprinting & hashing to greatly reduce duplicate videos / images
    + visual updates: now using loguru module for colored output
    + introduced logic to self check:
        - for github repo updates
        - internet connection
        - creators subscribers, followers & available content
    + changed functionality:
        - user is now warned if at least 20% of available content wasn't successfully archived
        - adjusted & added more output texts for even more clarification
        - errors now print the full error traceback
    + bug fixes:
        - fixed a bug, where an error would be caused if filenames had characters incompatible with Windows
    
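The duplicate-reduction approach mentioned in this changelog can be illustrated with a simplified sketch. The release notes name the imagehash module (perceptual hashing); the sketch below uses an exact content hash from the standard library instead, purely to show the bookkeeping:

```python
import hashlib

def is_duplicate(content: bytes, seen_hashes: set) -> bool:
    """Return True if this file content was already downloaded, else record its hash.
    Exact hashing only; perceptual hashing (as with imagehash) would also catch
    resized or re-encoded copies of the same picture."""
    digest = hashlib.sha256(content).hexdigest()
    if digest in seen_hashes:
        return True
    seen_hashes.add(digest)
    return False
```

A downloader would call this on each file's bytes and skip saving when it returns True.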

    Compiled using pyinstaller.

    VirusTotal: https://www.virustotal.com/gui/file/60c358e606658ea8a329cc25b8368f4293246097cde40cbd8fbe54851b9d215c

    Source code(tar.gz)
    Source code(zip)
    FanslyScraperExecutableV0.2.zip(50.27 MB)
  • v0.1(Oct 9, 2021)

    This is the compiled release of Fansly Scraper. It provides an executable file, with which you can launch the program without having python installed. You'll have to unzip the "FanslyScraperExecutable.zip" file first with winrar or similar programs.

    VirusTotal: https://www.virustotal.com/gui/file/05ea28312775f9e4c5c28bb1d946468b58e67cfcf561a28f0c08c1e6014f4a5d Compiled using: https://github.com/brentvollebregt/auto-py-to-exe

    Source code(tar.gz)
    Source code(zip)
    FanslyScraperExecutable.zip(6.79 MB)
Owner
Mika C.
Passionate programmer and freelancer for various languages, currently focused on Python. I only publish a small portion of the code I write on here.