Web Content Retrieval for Humans™

Overview

Lassie

[Badges: PyPI version · Travis CI build · Coveralls coverage · Say Thanks]

Lassie is a Python library for retrieving basic content from websites.

(Demo GIF: https://i.imgur.com/QrvNfAX.gif)

Usage

>>> import lassie
>>> lassie.fetch('http://www.youtube.com/watch?v=dQw4w9WgXcQ')
{
    'description': u'Music video by Rick Astley performing Never Gonna Give You Up. YouTube view counts pre-VEVO: 2,573,462 (C) 1987 PWL',
    'videos': [{
        'src': u'http://www.youtube.com/v/dQw4w9WgXcQ?autohide=1&version=3',
        'height': 480,
        'type': u'application/x-shockwave-flash',
        'width': 640
    }, {
        'src': u'https://www.youtube.com/embed/dQw4w9WgXcQ',
        'height': 480,
        'width': 640
    }],
    'title': u'Rick Astley - Never Gonna Give You Up',
    'url': u'http://www.youtube.com/watch?v=dQw4w9WgXcQ',
    'keywords': [u'Rick', u'Astley', u'Sony', u'BMG', u'Music', u'UK', u'Pop'],
    'images': [{
        'src': u'http://i1.ytimg.com/vi/dQw4w9WgXcQ/hqdefault.jpg?feature=og',
        'type': u'og:image'
    }, {
        'src': u'http://i1.ytimg.com/vi/dQw4w9WgXcQ/hqdefault.jpg',
        'type': u'twitter:image'
    }, {
        'src': u'http://s.ytimg.com/yts/img/favicon-vfldLzJxy.ico',
        'type': u'favicon'
    }, {
        'src': u'http://s.ytimg.com/yts/img/favicon_32-vflWoMFGx.png',
        'type': u'favicon'
    }],
    'locale': u'en_US'
}
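
lassie.fetch also accepts keyword options that control what gets collected. For example, the all_images flag (discussed in the comments below) asks Lassie to return every image found on the page rather than only meta-tag images and favicons. A minimal sketch, assuming the flag behaves as described in that thread:

>>> import lassie
>>> # all_images: collect every <img> on the page, not just og:/twitter: images
>>> data = lassie.fetch('http://www.youtube.com/watch?v=dQw4w9WgXcQ', all_images=True)
>>> [image['src'] for image in data['images']]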

Install

Install Lassie via pip

$ pip install lassie

or, with easy_install

$ easy_install lassie

But, hey... that's up to you.

Documentation

Documentation can be found here: https://lassie.readthedocs.org/

Comments
  • Fix possible ValueError in convert_to_int caused by values like 1px

    When trying to parse http://www.wired.com/wiredscience/2013/09/rim-fire-map-color-scale/ a ValueError was raised in convert_to_int, because the page has image width and height values ending in px.

    I changed the function to be more liberal regarding dimension values, by extracting the digits before casting to int. I added a test for this.

    Not sure though if the value should be converted to int at all or kept as a string.
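
    A minimal sketch of the digit-extraction approach described above (the helper name convert_to_int comes from the issue; the body here is illustrative, not the merged patch):

    import re

    def convert_to_int(value):
        # Pull the first run of digits out of a value like '640px' -> 640;
        # return None when there are no digits at all (e.g. 'auto').
        match = re.search(r'\d+', str(value))
        return int(match.group()) if match else None

    convert_to_int('1px')  # -> 1, instead of raising ValueError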

    opened by yaph 14
  • Import fails on Python3.5

    It appears something is seriously broken when trying to install lassie with Python 3.5. The install goes fine, but on import I get this:

    Python 3.5.0 (default, Sep 23 2015, 04:41:38)
    [GCC 4.2.1 Compatible Apple LLVM 7.0.0 (clang-700.0.72)] on darwin
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import lassie
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/Users/ben/dev/beavy/venv/src/lassie/lassie/__init__.py", line 19, in <module>
        from .api import fetch
      File "/Users/ben/dev/beavy/venv/src/lassie/lassie/api.py", line 11, in <module>
        from .core import Lassie
      File "/Users/ben/dev/beavy/venv/src/lassie/lassie/core.py", line 13, in <module>
        from bs4 import BeautifulSoup
      File "/Users/ben/dev/beavy/venv/lib/python3.5/site-packages/bs4/__init__.py", line 30, in <module>
        from .builder import builder_registry, ParserRejectedMarkup
      File "/Users/ben/dev/beavy/venv/lib/python3.5/site-packages/bs4/builder/__init__.py", line 308, in <module>
        from . import _htmlparser
      File "/Users/ben/dev/beavy/venv/lib/python3.5/site-packages/bs4/builder/_htmlparser.py", line 7, in <module>
        from html.parser import (
    ImportError: cannot import name 'HTMLParseError'

    (html.parser.HTMLParseError was removed in Python 3.5; newer beautifulsoup4 releases no longer import it, so upgrading beautifulsoup4 resolves the ImportError.)

    opened by gnunicorn 6
  • Add optional structured properties for og:image and og:video

    From http://ogp.me/#structured.

    The og:video tag has the same structured properties as og:image.

    • og:image:url - Identical to og:image.
    • og:image:secure_url - An alternate url to use if the webpage requires HTTPS.
    • og:image:type - A MIME type for this image.
    • og:image:width - The number of pixels wide.
    • og:image:height - The number of pixels high.
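
    A rough sketch of how these structured properties could be gathered with BeautifulSoup (the grouping logic is illustrative, not Lassie's actual implementation):

    from bs4 import BeautifulSoup

    html = '''
    <meta property="og:image" content="http://example.com/a.jpg">
    <meta property="og:image:width" content="640">
    <meta property="og:image:height" content="480">
    '''

    soup = BeautifulSoup(html, 'html.parser')
    image = {}
    for tag in soup.find_all('meta'):
        prop = tag.get('property', '')
        if prop == 'og:image':
            image['src'] = tag['content']                     # base property
        elif prop.startswith('og:image:'):
            image[prop[len('og:image:'):]] = tag['content']   # structured property
    # image -> {'src': 'http://example.com/a.jpg', 'width': '640', 'height': '480'}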

    opened by jpadilla 6
  • Optional support for canonical URL meta tag.

    This is very roughed in, but it adds support for returning the URL as provided by the canonical link element.

    There isn't anything to determine precedence with og:url.

    Has passing tests, and is disabled by default.

    Needed this for a project, not sure if it would be useful upstream.
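
    For reference, pulling the canonical link element out with BeautifulSoup amounts to something like this (a sketch of the idea, not this PR's actual code):

    from bs4 import BeautifulSoup

    html = '<link rel="canonical" href="http://example.com/real-page">'
    soup = BeautifulSoup(html, 'html.parser')

    link = soup.find('link', rel='canonical')
    canonical_url = link['href'] if link else None  # None when the page has no canonical tag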

    enhancement 
    opened by jmhobbs 5
  • Possible relative URL in og:image

    I just came across a page with a relative path value for the og:image. Adding a call to urljoin on the src attribute in line 186 of core.py would be a possibility, but maybe it's better to check for the src prop (possibly the href prop too) in _filter_meta_data and do it there. What do you think about that?
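
    The suggested urljoin call would look roughly like this (Python 3 import shown; this is just the call itself, not a patch against core.py):

    from urllib.parse import urljoin

    page_url = 'http://www.example.com/articles/post.html'
    src = '/images/cover.jpg'  # relative og:image value from the page

    absolute_src = urljoin(page_url, src)
    # -> 'http://www.example.com/images/cover.jpg'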

    opened by yaph 5
  • Can't get the full article.

    Hi, I want to extract the article from the source URL, but I got only the title of the article and small parts of it under the "description" key.

    opened by yaseenox 4
  • Update requests==2.8 in setup.py, too

    The changelog for the last release states that requests is now pinned at version 2.8, yet when installing the latest version of lassie, it requires (and installs) version 2.6; the setup.py hasn't been updated to reflect that change, which breaks the installation. This PR corrects that.
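
    The corresponding change is a one-liner in setup.py's install_requires; roughly (a sketch, the exact specifier in the PR may differ):

    # setup.py (excerpt)
    from setuptools import setup

    setup(
        name='lassie',
        # keep this in sync with requirements.txt, which pins requests at 2.8
        install_requires=[
            'requests==2.8.0',
            'beautifulsoup4',
        ],
    )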

    opened by gnunicorn 4
  • Please allow to configure the requests session

    It would be useful to be able to configure the requests session used to retrieve the requested URL.

    You could perhaps initialize a default session object in the Lassie constructor, which the user could then configure, and/or add a parameter to Lassie.fetch() to override the default session.
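
    One possible shape for the proposed API (hypothetical; neither the session attribute nor the session= parameter exists in Lassie at the time of this issue):

    import requests
    from lassie.core import Lassie

    l = Lassie()
    # hypothetical: the constructor creates a default requests.Session
    # that callers can configure before fetching
    l.session.headers.update({'User-Agent': 'my-crawler/1.0'})
    l.fetch('http://example.com')

    # hypothetical: a per-call session override
    custom = requests.Session()
    custom.proxies = {'http': 'http://proxy.example:8080'}
    l.fetch('http://example.com', session=custom)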

    opened by tawmas 4
  • Bump requests from 2.18.4 to 2.20.0

    Bumps requests from 2.18.4 to 2.20.0.

    Changelog

    Sourced from requests's changelog.

    2.20.0 (2018-10-18)

    Bugfixes

    • Content-Type header parsing is now case-insensitive (e.g. charset=utf8 vs. Charset=utf8).
    • Fixed exception leak where certain redirect urls would raise uncaught urllib3 exceptions.
    • Requests removes Authorization header from requests redirected from https to http on the same hostname. (CVE-2018-18074)
    • should_bypass_proxies now handles URIs without hostnames (e.g. files).

    Dependencies

    • Requests now supports urllib3 v1.24.

    Deprecations

    • Requests has officially stopped support for Python 2.6.

    2.19.1 (2018-06-14)

    Bugfixes

    • Fixed issue where status_codes.py's init function failed trying to append to a __doc__ value of None.

    2.19.0 (2018-06-12)

    Improvements

    • Warn user about possible slowdown when using cryptography version < 1.3.4
    • Check for invalid host in proxy URL, before forwarding request to adapter.
    • Fragments are now properly maintained across redirects. (RFC7231 7.1.2)
    • Removed use of cgi module to expedite library load time.
    • Added support for SHA-256 and SHA-512 digest auth algorithms.
    • Minor performance improvement to Request.content.
    • Migrate to using collections.abc for 3.7 compatibility.

    Bugfixes

    • Parsing empty Link headers with parse_header_links() no longer returns a bogus entry.
    ... (truncated)
    Commits
    • bd84045 v2.20.0
    • 7fd9267 remove final remnants from 2.6
    • 6ae8a21 Add myself to AUTHORS
    • 89ab030 Use comprehensions whenever possible
    • 2c6a842 Merge pull request #4827 from webmaven/patch-1
    • 30be889 CVE URLs update: www sub-subdomain no longer valid
    • a6cd380 Merge pull request #4765 from requests/encapsulate_urllib3_exc
    • bbdbcc8 wrap url parsing exceptions from urllib3's PoolManager
    • ff0c325 Merge pull request #4805 from jdufresne/https
    • b0ad249 Prefer https:// for URLs throughout project
    • Additional commits viewable in compare view

    dependencies 
    opened by dependabot[bot] 3
  • Added support for open graph optional property `site_name`.

    Hi, I added support for the open graph site_name property.

    This parses the following tag... <meta property="og:site_name" content="IMDb" /> into {"site_name": "IMDb"}
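
    The parsing boils down to something like this (an illustrative sketch, not the PR's exact code):

    from bs4 import BeautifulSoup

    html = '<meta property="og:site_name" content="IMDb" />'
    soup = BeautifulSoup(html, 'html.parser')

    tag = soup.find('meta', property='og:site_name')
    data = {'site_name': tag['content']} if tag else {}
    # -> {'site_name': 'IMDb'}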

    opened by cameronmaske 3
  • make image urls absolute and added mock to test_requirements

    I made a change so that when lassie.fetch is called with all_images=True, the images' src attributes contain absolute URLs. Since lassie already comes with a function that makes relative URLs absolute, I think this is better done inside lassie than in the application which imports it.

    When trying to run the tests after the changes the mock package was missing, so I added it to the test_requirements.txt file.

    opened by yaph 2
  • docs: Fix a few typos

    There are small typos in:

    • docs/usage/advanced_usage.rst

    Fixes:

    • Should read attributes rather than attibutes.
    • Should read actual rather than acutal.

    Semi-automated pull request generated by https://github.com/timgates42/meticulous/blob/master/docs/NOTE.md

    opened by timgates42 0
  • Any reason to pin down upper versions in requirements.txt

    Hi,

    Since lassie is a library, limiting upper versions for dependencies as in

    requests>=2.18.4,<3.0.0
    beautifulsoup4>=4.9.0,<4.10.0
    

    can lead to conflicts for software using it, e.g. on pip install:

    The conflict is caused by:
        The user requested beautifulsoup4==4.10.0
        lassie 0.11.11 depends on beautifulsoup4<4.10.0 and >=4.9.0
    

    Is there any reason for pinning these down?

    opened by idlesign 1
  • Encoding issues with german umlauts

    Hi,

    when getting the description from a German website, umlauts like "ü" and "ä" end up as "Ã¼", "Ã¤", etc. Example: https://finanzguru.de/ Result:

    Finanzguru - Finanzen magisch einfach Finanzen magisch einfach. Verwalte deine Verträge, kündige per Fingertipp und spare Geld mit meinen Spartipps. Alles an einem Ort und komplett kostenfrei. Einfacher war es noch nie.

    I am using lassie within Django.
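
    This is the classic UTF-8-decoded-as-Latin-1 mojibake: requests falls back to ISO-8859-1 for text/* responses that declare no charset. A possible caller-side workaround with plain requests (Lassie may not expose this hook directly):

    import requests

    response = requests.get('https://finanzguru.de/')
    # Without an explicit charset header, requests decodes text/html as ISO-8859-1;
    # apparent_encoding sniffs the body and usually detects UTF-8 here.
    response.encoding = response.apparent_encoding
    html = response.text  # umlauts now decode as intended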

    opened by leugh 0
  • Add new filters for embeddable items

    The idea is to return as much data as we can in the API so users can possibly embed media (e.g. Spotify tracks).

    We'll probably add a new embed.py and return a new embed key in the lassie API response.
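
    Hypothetically, the response could gain a key along these lines (purely illustrative; no embed key exists yet and every field name here is invented):

    >>> lassie.fetch('https://open.spotify.com/track/...')
    {
        ...
        'embed': {
            'src': 'https://open.spotify.com/embed/track/...',
            'type': 'text/html',
            'width': 300,
            'height': 380
        }
    }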

    enhancement 
    opened by michaelhelmick 0
Releases

Latest release: 0.11.11

Related projects

Web Scraping Practica With Python

Web-Scraping-Practica Members: Guillem Vidal Pallarols, Lídia Bandrés Solé. Files: This document is the first one we find. Next we find u

2 Nov 08, 2021
Webservice wrapper for hhursev/recipe-scrapers (python library to scrape recipes from websites)

recipe-scrapers-webservice This is a wrapper for hhursev/recipe-scrapers which provides the api as a webservice, to be consumed as a microservice by o

1 Jul 09, 2022
A crawler of doubamovie

豆瓣电影 (Douban Movies): a crawler of doubanmovie. A small, entry-level application of the Scrapy framework that crawls data for the top 1000 movies on the Douban rankings. In spider.py, start_requests is a Scrapy method which we override. def start_requests(self):

Cats without dried fish 1 Oct 05, 2021
Danbooru scraper with python

Danbooru Version: 0.0.1 License under: MIT License Dependencies Python >= 3.9.7 beautifulsoup4 cloudscraper Example of use Danbooru from danbooru imp

Sugarbell 2 Oct 27, 2022
Python script for crawling ResearchGate.net papers✨⭐️📎

ResearchGate Crawler Python script for crawling ResearchGate.net papers About the script This code starts the crawling process from the URLs in start.txt and giv

Mohammad Sadegh Salimi 4 Aug 30, 2022
Rottentomatoes, Goodreads and IMDB sites crawler. Semantic Web final project.

Crawler Rottentomatoes, Goodreads and IMDB sites crawler. A crawler written with beautifulsoup, selenium and lxml to gather book and film information an

Faeze Ghorbanpour 1 Dec 30, 2021
Here I provide the source code for doing web scraping using the Python library Selenium.

Here I provide the source code for doing web scraping using the Python library Selenium.

M Khaidar 1 Nov 13, 2021
iQIYI VIP, Tencent Video, Bilibili, Baidu, and assorted check-ins

My-Actions: a personal collection of assorted check-in scripts adapted for GitHub Actions. Don't fork it, just ⭐️ star it. Usage: create a new repository and sync the code, click Settings - Secrets - click the green button (if there is no green button, it is already activated; go straight to the next step), add a new secret and set Secr

280 Dec 30, 2022
Haphazard scripts for scraping bitcoin/bitcoin data from GitHub

This is a quick-and-dirty tool used to scrape bitcoin/bitcoin pull request and commentary data. Each output/pr number folder contains comments.json:

James O'Beirne 8 Oct 12, 2022
DaProfiler allows you to get emails, social media accounts, addresses, workplaces and more on your target using web scraping and Google dorking techniques

DaProfiler allows you to get emails, social media accounts, addresses, workplaces and more on your target using web scraping and Google dorking techniques, based in France only. The particularity of this program i

Dalunacrobate 347 Jan 07, 2023
A web scraping pipeline project that retrieves TV and movie data from two sources, then transforms and stores data in a MySQL database.

New to Streaming Scraper An in-progress web scraping project built with Python, R, and SQL. The scraped data are movie and TV show information. The go

Charles Dungy 1 Mar 28, 2022
Distributed Crawler Management Framework Based on Scrapy, Scrapyd, Django and Vue.js

Gerapy Distributed Crawler Management Framework Based on Scrapy, Scrapyd, Scrapyd-Client, Scrapyd-API, Django and Vue.js. Documentation Documentation

Gerapy 2.9k Jan 03, 2023
A python module to parse the Open Graph Protocol

OpenGraph is a Python module for parsing the Open Graph Protocol; you can read more about the specification at http://ogp.me/ Installation $ pip in

Erik Rivera 213 Nov 12, 2022
Binance harvester - A Python 3 script to harvest data from the Binance socket stream and calculate popular TA indicators and produce lists of top trending coins

Binance harvester - A Python 3 script to harvest data from the Binance socket stream and calculate popular TA indicators and produce lists of top trending coins

68 Oct 08, 2022
fork huanghyw/jd_seckill

Jd_Seckill special statement: all scripts in the jd_seckill project published in this repository are for testing, learning, and research only; commercial use is prohibited. Their legality, accuracy, completeness, and effectiveness cannot be guaranteed; please use your own judgment. No public account or self-media may repost or publish any of this project's resource files in any form.

512 Jan 03, 2023
Pythonic Crawling / Scraping Framework based on Non Blocking I/O operations.

Pythonic Crawling / Scraping Framework Built on Eventlet Features High Speed WebCrawler built on Eventlet. Supports relational databases engines like

Juan Manuel Garcia 173 Dec 05, 2022
Current Antarctic large iceberg positions derived from ASCAT and OSCAT-2

Iceberg Locations Antarctic large iceberg positions derived from ASCAT and OSCAT-2. All data collected here are from the NASA SCP website Overview Thi

Joel Hanson 5 Jul 27, 2022
Linkedin webscraping - Linkedin web scraping with python

linkedin_webscraping This is the first step of a full project called "LinkedIn J

Pedro Dib 4 Apr 24, 2022
Web Crawlers for Data Labelling of Malicious Domain Detection & IP Reputation Evaluation

Web Crawlers for Data Labelling of Malicious Domain Detection & IP Reputation Evaluation This repository provides two web crawlers to label domain nam

1 Nov 05, 2021
Parse feeds in Python

feedparser - Parse Atom and RSS feeds in Python. Copyright 2010-2020 Kurt McKee

Kurt McKee 1.5k Dec 30, 2022