Crawl BookCorpus

Overview

Homemade BookCorpus


Crawling can be difficult due to issues with the website. Please also consider other options, such as using publicly available files, at your own risk.



These are scripts to reproduce BookCorpus by yourself.

BookCorpus is a popular large-scale text corpus, especially for unsupervised learning of sentence encoders/decoders. However, BookCorpus is no longer distributed...

This repository includes a crawler that collects data from smashwords.com, the original source of BookCorpus. The collected sentences may partially differ from the original, but their number will be larger or almost the same. If you use the new corpus in your work, please specify that it is a replica.

How to use

Prepare URLs of available books. This repository already includes such a list, url_list.jsonl, a snapshot I (@soskek) collected on Jan 19-20, 2019. You can use it if you'd like; otherwise, regenerate it with:

python -u download_list.py > url_list.jsonl &
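url_list.jsonl is a JSON Lines file with one record per book. A minimal sketch for inspecting it (it assumes nothing about the record fields beyond the JSONL format itself):

import json

# Peek at url_list.jsonl: one JSON record per line, one book per record.
# No specific field names are assumed here; just print the first record.
with open("url_list.jsonl", encoding="utf-8") as f:
    records = [json.loads(line) for line in f if line.strip()]

print(len(records), "books listed")
print(json.dumps(records[0], indent=2, ensure_ascii=False))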

Download the books' files. The txt file is downloaded when available; otherwise the script tries to extract text from the epub. The optional argument --trash-bad-count filters out epub files whose word count differs greatly from the official count on the book page (which may indicate an extraction failure).

python download_files.py --list url_list.jsonl --out out_txts --trash-bad-count

The results are saved into the directory given by --out (here, out_txts).
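
As a quick sanity check after downloading, you can count the saved files and their total size, for example like this (a rough sketch, assuming the out_txts directory above):

import os

# Quick sanity check: count the saved .txt files and their total size.
out_dir = "out_txts"  # the --out directory used above
txt_files = [name for name in os.listdir(out_dir) if name.endswith(".txt")]
total_bytes = sum(os.path.getsize(os.path.join(out_dir, name)) for name in txt_files)
print(len(txt_files), "txt files,", round(total_bytes / 1e6, 1), "MB in total")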

Postprocessing

Concatenate the downloaded books into a single file in sentence-per-line format.

python make_sentlines.py out_txts > all.txt
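
For reference, the conversion can be approximated as below. This is only a sketch of the idea using BlingFire's sentence splitter, not the actual make_sentlines.py, whose output may differ in detail:

import os
import sys
from blingfire import text_to_sentences

# Read every downloaded book and print one sentence per line
# (a simplified sketch of what make_sentlines.py does).
out_dir = sys.argv[1] if len(sys.argv) > 1 else "out_txts"
for name in sorted(os.listdir(out_dir)):
    if not name.endswith(".txt"):
        continue
    with open(os.path.join(out_dir, name), encoding="utf-8") as f:
        for paragraph in f.read().split("\n\n"):
            paragraph = " ".join(paragraph.split())
            if paragraph:
                # text_to_sentences returns the sentences joined by newlines.
                print(text_to_sentences(paragraph))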

If you want to tokenize the sentences into words with Microsoft's BlingFire, run the command below. You can substitute another tokenizer of your choice.

python make_sentlines.py out_txts | python tokenize_sentlines.py > all.tokenized.txt
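
If you prefer to plug in your own tokenizer stage, it only needs to read one sentence per line from stdin and write space-separated tokens to stdout. A minimal BlingFire-based sketch (not the bundled tokenize_sentlines.py) could look like this:

import sys
from blingfire import text_to_words

# Tokenize one sentence per input line into space-separated words.
for line in sys.stdin:
    line = line.rstrip("\n")
    print(text_to_words(line) if line else "")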

Disclaimer

Please refer to, for example, the terms of service of smashwords.com. Use the code responsibly and adhere to the respective copyright and related laws. I am not responsible for any plagiarism or legal implications that arise as a result of this repository.

Requirements

  • python3 is recommended
  • beautifulsoup4
  • progressbar2
  • blingfire
  • html2text
  • lxml

pip install -r requirements.txt

Note on Errors

  • Some error messages are expected, e.g., Failed: epub and txt, File is not a zip file, or Failed to open. The number of failures will be much smaller than the number of successes, so don't worry.

Acknowledgement

epub2txt.py is derived and modified from https://github.com/kevinxiong/epub2txt/blob/master/epub2txt.py

Citation

If you found this code useful, please cite it with the URL.

@misc{soskkobayashi2018bookcorpus,
    author = {Sosuke Kobayashi},
    title = {Homemade BookCorpus},
    howpublished = {\url{https://github.com/soskek/bookcorpus}},
    year = {2018}
}

Also, the original papers that created BookCorpus are as follows:

Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, Sanja Fidler. "Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books." arXiv preprint arXiv:1506.06724, ICCV 2015.

@InProceedings{Zhu_2015_ICCV,
    title = {Aligning Books and Movies: Towards Story-Like Visual Explanations by Watching Movies and Reading Books},
    author = {Zhu, Yukun and Kiros, Ryan and Zemel, Rich and Salakhutdinov, Ruslan and Urtasun, Raquel and Torralba, Antonio and Fidler, Sanja},
    booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
    month = {December},
    year = {2015}
}
@inproceedings{moviebook,
    title = {Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books},
    author = {Yukun Zhu and Ryan Kiros and Richard Zemel and Ruslan Salakhutdinov and Raquel Urtasun and Antonio Torralba and Sanja Fidler},
    booktitle = {arXiv preprint arXiv:1506.06724},
    year = {2015}
}

Ryan Kiros, Yukun Zhu, Ruslan Salakhutdinov, Richard S. Zemel, Antonio Torralba, Raquel Urtasun, and Sanja Fidler. "Skip-Thought Vectors." arXiv preprint arXiv:1506.06726, NIPS 2015.

@article{kiros2015skip,
    title={Skip-Thought Vectors},
    author={Kiros, Ryan and Zhu, Yukun and Salakhutdinov, Ruslan and Zemel, Richard S and Torralba, Antonio and Urtasun, Raquel and Fidler, Sanja},
    journal={arXiv preprint arXiv:1506.06726},
    year={2015}
}
Comments
  • Could you share the processed all.txt?

    Hi Sosuke,

    Thanks a lot for the wonderful work! I hoped to build the BookCorpus dataset with your crawler, but I failed to crawl the articles owing to some network errors, so I am afraid I cannot obtain a complete dataset. Could you please share the dataset you have, e.g. the all.txt? My email address is [email protected]. Thanks!

    Zhijie

    opened by thudzj 9
  • Fix merging sentences in one paragraph

    This PR simply merges the sentences in the stack whenever an empty line is met. I am not sure why the blank line was necessary in the first place, so let's discuss it if I'm missing something here.

    Consider one example from the opening section of out_txts/100021__three-plays.txt. The current implementation outputs:

    Three Plays Published by Mike Suttons at Smashwords Copyright 2011 Mike Sutton ISBN 978-1-4659-8486-9 Tripping on Nothing
    

    It obviously merges the paragraph title Tripping on Nothing into the stack incorrectly. With this PR, the output is:

    Three Plays Published by Mike Suttons at Smashwords Copyright 2011 Mike Sutton ISBN 978-1-4659-8486-9
    
    
    Tripping on Nothing
    
    opened by yoquankara 4
  • intermittent issues with connections and file names

    Example: python3.6 download_files.py --list url_list.jsonl --out out_txts --trash-bad-count

    0 files had already been saved in out_txts.
    File is not a zip file |
    File is not a zip file
    File is not a zip file
    File is not a zip file
    File is not a zip file
    Failed to open https://www.smashwords.com/books/download/490185/8/latest/0/0/existence.epub HTTPError: HTTP Error 503: Service Temporarily Unavailable
    Succeeded in opening https://www.smashwords.com/books/download/490185/8/latest/0/0/existence.epub
    File is not a zip file
    File is not a zip file |
    File is not a zip file
    File is not a zip file
    File is not a zip file |
    File is not a zip file
    File is not a zip file
    "There is no item named '' in the archive"
    File is not a zip file
    File is not a zip file
    "There is no item named 'OPS/' in the archive"
    File is not a zip file
    File is not a zip file |
    File is not a zip file
    Failed to open https://www.smashwords.com/books/download/793264/8/latest/0/0/jaynells-wolf.epub HTTPError: HTTP Error 503: Service Temporarily Unavailable
    Succeeded in opening https://www.smashwords.com/books/download/793264/8/latest/0/0/jaynells-wolf.epub
    Failed to open https://www.smashwords.com/books/download/479710/6/latest/0/0/tainted-ava-delaney-lost-souls-1.txt HTTPError: HTTP Error 503: Service Temporarily Unavailable
    Succeeded in opening https://www.smashwords.com/books/download/479710/6/latest/0/0/tainted-ava-delaney-lost-souls-1.txt
    File is not a zip file
    "There is no item named 'OPS/' in the archive"
    File is not a zip file
    Failed to open https://www.smashwords.com/books/download/496160/8/latest/0/0/royal-blood-royal-blood-1.epub HTTPError: HTTP Error 404: Not Found
    Failed to open https://www.smashwords.com/books/download/496160/8/latest/0/0/royal-blood-royal-blood-1.epub HTTPError: HTTP Error 404: Not Found
    Gave up to open https://www.smashwords.com/books/download/496160/8/latest/0/0/royal-blood-royal-blood-1.epub
    [Errno 2] No such file or directory: 'out_txts/royal-blood-royal-blood-1.epub'

    opened by David-Levinthal 3
  • Network Error

    Hi, thanks for your code, it's really useful for NLP researchers. Thank you again.

    When I run this code, it is often interrupted by network errors after downloading a few files; I think this may be caused by my network. So, could you please send me an email with the crawled BookCorpus dataset attached, if you have it?

    My email is: [email protected]. Thank you very much.

    Best,

    opened by SummmerSnow 3
  • HTTPError: HTTP Error 401: Authorization Required

    Thanks for your code, but I ran into some network trouble when running the download_list script. The full error message is: Failed to open https://www.smashwords.com/books/category/1/downloads/0/free/medium/0 HTTPError: HTTP Error 401: Authorization Required

    What's more, when I use your url_list.jsonl to download files, the download_files script gives the same error message: Failed to open https://www.smashwords.com/books/download/246580/6/latest/0/0/silence.txt HTTPError: HTTP Error 401: Authorization Required

    I also tried to open the URL in Chrome and I can see that page without error 401. Could you help me find a solution? Thanks a lot~

    opened by NotToday 2
  • smashwords.com forbids this; readme should tell people to get permission first

    The code in this repo violates both the robots.txt of smashwords.com:

    $ curl -s https://www.smashwords.com/robots.txt | tail -4
    User-agent: *
    Disallow: /books/search?
    Disallow: /books/download/
    Crawl-delay: 4
    

    and their terms of service, as far as I can see: “Third parties are not authorized to download, host and otherwise redistribute Smashwords books without prior written agreement from Smashwords” (you could imagine that this only prohibits downloading for subsequent hosting or redistribution, but I think that would be an opportunistic interpretation :) ).

    The readme should tell people very clearly that they must get permission from smashwords.com before running this stuff against their site.

    opened by gthb 1
  • How to resolve URLError SSL: CERTIFICATE_VERIFY_FAILED

    If you get the following error:

    URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:748)>
    

    Adding this block of code at the top of download_files.py will resolve it.

    import os, ssl

    # Disable SSL certificate verification globally (use at your own risk).
    if (not os.environ.get('PYTHONHTTPSVERIFY', '') and
        getattr(ssl, '_create_unverified_context', None)):
        ssl._create_default_https_context = ssl._create_unverified_context
    
    opened by delzac 1
  • add: utf8 encoding for all file opens

    First of all, Thank you for sharing your work.

    There were some errors related to file encoding (error screenshot omitted).

    They were resolved by adding encoding='utf8' to every open() call.

    Have a beautiful day.

    opened by YongWookHa 1
  • download_list.py not working due to title change.

    Apparently the titles on smashwords changed. txt is now found under "Plain text; contains no formatting" and epub under "Supported by many apps and devices (e.g., Apple Books, Barnes and Noble Nook, Kobo, Google Play, etc.)"

    opened by 1227505 1
  • add strip for genre scraping

    The scraped genre strings were dirty, so I added a strip().

    "genres": ["\n                            Category: Fiction \u00bb Mystery & detective \u00bb Women Sleuths ", "\n                            Category: Fiction \u00bb Fantasy \u00bb Paranormal "]
    

    will be

    "genres": ["Category: Fiction \u00bb Mystery & detective \u00bb Women Sleuths", "Category: Fiction \u00bb Fantasy \u00bb Paranormal"]
    
    opened by soskek 0
  • Update on the `url_list.jsonl`

    Hello, on 2022-12-17 I ran the script download_list.py with the page number increased to 31430, which covered the last search page. Here is the updated url_list.jsonl.zip

    Compared to the original file, 4544 entries were lost and 8475 entries were added.

    Hope this helps.

    opened by thipokKub 0
  • Here’s a download link for all of bookcorpus as of Sept 2020

    You can download it here: https://twitter.com/theshawwn/status/1301852133319294976?s=21

    It contains 18k plain text files, and the results are very high quality. I spent about a week fixing the epub2txt script, which you can find at https://github.com/shawwn/scrap under the name “epub2txt-all” (not epub2txt).

    The new script:

    1. Correctly preserves structure, matching the table of contents very closely;

    2. Correctly renders tables of data (by default html2txt produces mostly garbage-looking results for tables),

    3. Correctly preserves code structure, so that source code and similar things are visually coherent,

    4. Converts numbered lists from “1\.” to “1.”

    5. Runs the full text through ftfy.fix_text() (which is what OpenAI does for GPT), replacing Unicode apostrophes with ascii apostrophes;

    6. Expands Unicode ellipses to “...” (three separate ascii characters).

    The tarball download link (see tweet above) also includes the original ePub URLs, updated for September 2020, which ended up being about 2k more than the URLs in this repo. But they’re hard to crawl. I do have the epub files, but I’m reluctant to distribute them for obvious reasons.

    opened by shawwn 13
  • epub2txt.py produces incorrect results for many epubs

    Specifically this line: https://github.com/soskek/bookcorpus/blob/05a3f227d9748c2ee7ccaf93819d0e0236b6f424/epub2txt.py#L149

    When I tried to convert a book on Tensorflow to text using this script, I noticed chapter 1 was being repeated multiple times.

    The reason is that the Table of Contents looks similar to this:

    ch1.html#section1
    ch1.html#section2
    ch1.html#section3
    ...
    ch2.html#section1
    ch2.html#section2
    ...

    The epub2txt script iterates over this table of contents, splits "ch1.html#section1" down to "ch1.html", and converts that file to text. It then repeats this for "ch1.html#section2", which converts the same chapter to text again.

    I have a fixed version here: https://github.com/shawwn/scrap/blob/afb699ee9c8181b3728b81fc410a31b66311f0d8/epub2txt#L158-L206

    opened by shawwn 1
  • Can anyone download all the files in the url list file?

    I tried to download the BookCorpus data. So far I have only downloaded around 5000 books. Has anyone managed to get all of them? I hit a lot of HTTP Error: 403 Forbidden. How can I fix this? Or can I get the full BookCorpus data from somewhere?

    Thanks

    opened by wxp16 13