Visual scraping for Scrapy

Overview

Portia

Portia is a tool that allows you to visually scrape websites without any programming knowledge required. With Portia you can annotate a web page to identify the data you wish to extract, and Portia will understand based on these annotations how to scrape data from similar pages.

Running Portia

The easiest way to run Portia is with Docker, using the official Portia image:

docker run -v ~/portia_projects:/app/data/projects:rw -p 9001:9001 scrapinghub/portia

You can also set up a local instance with Docker Compose by cloning this repo and running the following from the root of the repository:

docker-compose up

For more detailed instructions, and alternatives to using Docker, see the Installation docs.

Documentation

Documentation can be found on Read the Docs. Source files can be found in the docs directory.

Comments
  • unable to deploy with scrapyd-deploy

    unable to deploy with scrapyd-deploy

    Hello,

    Could you please help me figure out what I'm doing wrong? Here are the steps:

    • I followed the Portia install manual - all OK.
    • I created a new project, entered a URL, tagged an item - all OK.
    • I clicked "continue browsing", browsed through the site, and items were being extracted as expected - all OK.

    Next I wanted to deploy my spider.

    1st try: I ran, as the docs specified, scrapyd-deploy your_scrapyd_target -p project_name - got an error: scrapyd wasn't installed. Fix: pip install scrapyd

    2nd try: I launched the scrapyd server (also missing from the docs) and accessed http://localhost:6800/ - all OK. After a brief reading of the scrapyd docs I found out I had to edit the file scrapy.cfg in my project (slyd/data/projects/new_project/scrapy.cfg) and add the following:

    [deploy:local]
    url = http://localhost:6800/
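
    For reference, a complete scrapy.cfg usually also names the project in the deploy target; a sketch of what the full file might look like here (the [settings] line and the project name are assumptions based on the path above):

    [settings]
    default = new_project.settings

    [deploy:local]
    url = http://localhost:6800/
    project = new_project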

    I went back to the console and checked that all is OK:

    $ scrapyd-deploy -l
    local    http://localhost:6800/

    $ scrapyd-deploy -L local
    default

    That seemed OK, so I gave it another try:

    $ scrapyd-deploy local -p default
    Packing version 1418722113
    Deploying to project "default" in http://localhost:6800/addversion.json
    Server response (200): {"status": "error", "message": "IOError: [Errno 21] Is a directory: '/Users/Mihai/Work/www/4ideas/MarketWatcher/portia_tryout/portia/slyd/data/projects/new_project'"}

    What am I missing?

    opened by MihaiCraciun 40
  • ImportError: No module named jsonschema.exceptions

    ImportError: No module named jsonschema.exceptions

    After correct installation, when I try to run

    twistd -n slyd

    I get

    Traceback (most recent call last):
      File "/usr/local/bin/twistd", line 14, in <module>
        run()
      File "/usr/local/lib/python2.7/dist-packages/twisted/scripts/twistd.py", line 27, in run
        app.run(runApp, ServerOptions)
      File "/usr/local/lib/python2.7/dist-packages/twisted/application/app.py", line 642, in run
        runApp(config)
      File "/usr/local/lib/python2.7/dist-packages/twisted/scripts/twistd.py", line 23, in runApp
        _SomeApplicationRunner(config).run()
      File "/usr/local/lib/python2.7/dist-packages/twisted/application/app.py", line 376, in run
        self.application = self.createOrGetApplication()
      File "/usr/local/lib/python2.7/dist-packages/twisted/application/app.py", line 436, in createOrGetApplication
        ser = plg.makeService(self.config.subOptions)
      File "/home/euphorbium/Projects/mtg/scraper/portia-master/slyd/slyd/tap.py", line 55, in makeService
        root = create_root(config)
      File "/home/euphorbium/Projects/mtg/scraper/portia-master/slyd/slyd/tap.py", line 27, in create_root
        from slyd.crawlerspec import (CrawlerSpecManager,
      File "/home/euphorbium/Projects/mtg/scraper/portia-master/slyd/slyd/crawlerspec.py", line 12, in <module>
        from jsonschema.exceptions import ValidationError
    ImportError: No module named jsonschema.exceptions
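
    This usually means the installed jsonschema predates the jsonschema.exceptions module. A likely fix (an assumption - the thread doesn't show the resolution) is upgrading it:

    pip install --upgrade jsonschema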

    opened by Euphorbium 28
  • How to install portia2.0 correctly by docker?

    How to install portia2.0 correctly by docker?

    When I installed Portia with Docker I had a problem. When I ran ember build -w, the result was:

    Could not start watchman; falling back to NodeWatcher for file system events.
    Visit http://ember-cli.com/user-guide/#watchman for more info.
    components/browser-iframe.js: line 175, col 39, 'TreeMirror' is not defined.
    components/browser-iframe.js: line 212, col 9, '$' is not defined.
    components/browser-iframe.js: line 227, col 23, '$' is not defined.
    components/browser-iframe.js: line 281, col 20, '$' is not defined.
    4 errors
    components/inspector-panel.js: line 92, col 16, '$' is not defined.
    components/inspector-panel.js: line 92, col 33, '$' is not defined.
    components/inspector-panel.js: line 102, col 16, '$' is not defined.
    3 errors
    components/save-status.js: line 48, col 16, 'moment' is not defined.
    1 error
    controllers/projects/project/conflicts/conflict.js: line 105, col 13, '$' is not defined.
    1 error
    controllers/projects/project/spider.js: line 10, col 25, 'URI' is not defined.
    1 error
    routes/projects/project/conflicts.js: line 5, col 16, '$' is not defined.
    1 error
    services/web-socket.js: line 10, col 15, 'URI' is not defined.
    services/web-socket.js: line 15, col 12, 'URI' is not defined.
    2 errors
    utils/browser-features.js: line 14, col 13, 'Modernizr' is not defined.
    1 error
    utils/tree-mirror-delegate.js: line 30, col 20, '$' is not defined.
    utils/tree-mirror-delegate.js: line 66, col 17, '$' is not defined.
    2 errors
    utils/utils.js: line 15, col 20, 'URI' is not defined.
    utils/utils.js: line 50, col 9, 'Raven' is not defined.
    utils/utils.js: line 56, col 9, 'Raven' is not defined.
    3 errors
    ===== 10 JSHint Errors

    Build successful - 1641ms.

    Slowest Trees                                  | Total
    -----------------------------------------------+---------------------
    broccoli-persistent-filter:Babel > [Babel:...  | 275ms
    JSHint app                                     | 149ms
    SourceMapConcat: Concat: Vendor /assets/ve...  | 89ms
    broccoli-persistent-filter:Babel               | 85ms
    broccoli-persistent-filter:Babel               | 82ms

    Slowest Trees (cumulative)                     | Total (avg)
    -----------------------------------------------+---------------------
    broccoli-persistent-filter:Babel > [Ba... (2)  | 294ms (147 ms)
    broccoli-persistent-filter:Babel (5)           | 257ms (51 ms)
    JSHint app (1)                                 | 149ms
    SourceMapConcat: Concat: Vendor /asset... (1)  | 89ms
    broccoli-persistent-filter:TemplateCom... (2)  | 88ms (44 ms)

    I want to know what is wrong. By the way, for this step I followed some issues; if I just follow the documentation to install Portia with Docker (docker build -t portia .), then when I run Portia, http://localhost:9001/static/index.html shows me nothing. So, how do I install Portia 2.0 correctly with Docker? Could someone help me?

    opened by ChoungJX 19
  • Support for JS

    Support for JS

    Add support for JS-based sites. It would be nice to have UI support for configuring this instead of having to do it manually at the Scrapy level.

    Perhaps we can allow users to enable or disable sending requests via Splash.
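
    For reference, wiring Splash in manually at the Scrapy level is done through the scrapy-splash package; a minimal settings.py sketch, assuming a Splash instance listening on localhost:8050:

    # settings.py - enable scrapy-splash (a sketch, not Portia's built-in config)
    SPLASH_URL = "http://localhost:8050"
    DOWNLOADER_MIDDLEWARES = {
        "scrapy_splash.SplashCookiesMiddleware": 723,
        "scrapy_splash.SplashMiddleware": 725,
        "scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware": 810,
    }
    SPIDER_MIDDLEWARES = {
        "scrapy_splash.SplashDeduplicateArgsMiddleware": 100,
    }
    DUPEFILTER_CLASS = "scrapy_splash.SplashAwareDupeFilter"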

    feature 
    opened by shaneaevans 19
  • Crawls duplicate/identical URLs

    Crawls duplicate/identical URLs

    If there are duplicate URLs on a page, they will be crawled and exported as many times as the link appears.

    It is only under very unusual circumstances that you would need to crawl the same URL more than once.

    I have two proposals:

    1. Add a checkbox so you can tick "Avoid visiting duplicate links".
    2. Alternatively, add filtering options to the link crawler to filter by HTML markup too, so that only links with certain classes are followed (see the Scrapy-level sketch after this list).
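
    In plain Scrapy terms, proposal 2 roughly corresponds to LinkExtractor's restrict_css argument, and its unique=True default already collapses duplicate links within a page; a sketch of the Scrapy-level equivalent (the spider name and CSS class are hypothetical):

    from scrapy.linkextractors import LinkExtractor
    from scrapy.spiders import CrawlSpider, Rule

    class FilteredSpider(CrawlSpider):
        name = "filtered"  # hypothetical
        start_urls = ["https://example.com/"]
        rules = (
            # unique=True (the default) de-duplicates identical links per page;
            # restrict_css only extracts links matching the given selector.
            Rule(LinkExtractor(restrict_css="a.listing-link", unique=True),
                 callback="parse_item", follow=True),
        )

        def parse_item(self, response):
            yield {"url": response.url}
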
    opened by jpswade 18
  • Question: how to store data in json

    Question: how to store data in json

    Hi, I recently installed Portia and it's quite good. But since I'm not very familiar with it, I have run into a problem. In my template I used variants to scrape the website, and it worked well when I clicked "continue browsing" or moved to a similar page. However, when I ran the Portia spider from the command line, it successfully scraped data yet failed to store all of it in the JSON file: only the data from the page I annotated was stored.

    I guess the problem is "WARNING: Dropped: Duplicate product scraped at http://bbs.chinadaily.com.cn/forum-83-206.html, first one was scraped at http://bbs.chinadaily.com.cn/forum-83-2.html" (it appeared in the console while the spider was running), but I don't know how to solve it. I hope someone can help me as soon as possible. Thanks very much!
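
    For reference, a Portia project's spider can be run from the command line with portiacrawl, writing items to JSON (the paths and names here are hypothetical):

    portiacrawl ~/portia_projects/my_project my_spider -o items.json

    The "Dropped: Duplicate product" warning appears to come from a deduplication step that discards items whose extracted fields are identical to an earlier item, so pages that yield identical field values will be collapsed into one.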

    opened by SophiaCY 17
  • Next page navigation

    Next page navigation

    Does the new Portia scrape next-page data? I built a spider in Portia with the next-page option recorded, running in Docker, but when I deployed it to scrapyd no data was scraped. I got output like:

    2016-03-29 15:57:55 [scrapy] INFO: Spider opened
    2016-03-29 15:57:55 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
    2016-03-29 15:57:55 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
    2016-03-29 15:58:32 [scrapy] DEBUG: Retrying <POST http://localhost:8050/render.html?job_id=dfe9afdef59811e5b9ad000c29120335> (failed 1 times): 504 Gateway Time-out
    2016-03-29 15:58:55 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
    2016-03-29 15:59:07 [scrapy] DEBUG: Retrying <POST http://localhost:8050/render.html?job_id=dfe9afdef59811e5b9ad000c29120335> (failed 2 times): 504 Gateway Time-out
    2016-03-29 15:59:38 [scrapy] DEBUG: Crawled (200) <POST http://localhost:8050/render.html?job_id=dfe9afdef59811e5b9ad000c29120335> (referer: None)
    2016-03-29 15:59:39 [scrapy] DEBUG: Filtered offsite request to 'www.successfactors.com': <GET http://www.successfactors.com/>
    2016-03-29 15:59:51 [scrapy] DEBUG: Crawled (200) <POST http://localhost:8050/render.html?job_id=dfe9afdef59811e5b9ad000c29120335> (referer: None)
    2016-03-29 15:59:51 [scrapy] DEBUG: Crawled (200) <POST http://localhost:8050/render.html?job_id=dfe9afdef59811e5b9ad000c29120335> (referer: None)
    2016-03-29 15:59:52 [scrapy] DEBUG: Crawled (200) <POST http://localhost:8050/render.html?job_id=dfe9afdef59811e5b9ad000c29120335> (referer: None)
    2016-03-29 15:59:52 [scrapy] DEBUG: Crawled (200) <POST http://localhost:8050/render.html?job_id=dfe9afdef59811e5b9ad000c29120335> (referer: None)
    2016-03-29 15:59:52 [scrapy] DEBUG: Crawled (200) <POST http://localhost:8050/render.html?job_id=dfe9afdef59811e5b9ad000c29120335> (referer: None)
    2016-03-29 15:59:52 [scrapy] DEBUG: Filtered offsite request to 'www.cpchem.com': <GET http://www.cpchem.com/>
    2016-03-29 15:59:52 [scrapy] DEBUG: Crawled (200) <POST http://localhost:8050/render.html?job_id=dfe9afdef59811e5b9ad000c29120335> (referer: None)
    2016-03-29 15:59:55 [scrapy] INFO: Crawled 7 pages (at 7 pages/min), scraped 0 items (at 0 items/min)
    2016-03-29 15:59:57 [scrapy] DEBUG: Crawled (200) <POST http://localhost:8050/render.html?job_id=dfe9afdef59811e5b9ad000c29120335> (referer: None)
    2016-03-29 15:59:58 [scrapy] DEBUG: Crawled (200) <POST http://localhost:8050/render.html?job_id=dfe9afdef59811e5b9ad000c29120335> (referer: None)
    2016-03-29 15:59:59 [scrapy] DEBUG: Crawled (200) <POST http://localhost:8050/render.html?job_id=dfe9afdef59811e5b9ad000c29120335> (referer: None)
    2016-03-29 16:00:00 [scrapy] DEBUG: Crawled (200) <POST http://localhost:8050/render.html?job_id=dfe9afdef59811e5b9ad000c29120335> (referer: None)
    2016-03-29 16:00:00 [scrapy] DEBUG: Crawled (200) <POST http://localhost:8050/render.html?job_id=dfe9afdef59811e5b9ad000c29120335> (referer: None)
    2016-03-29 16:00:03 [scrapy] DEBUG: Crawled (200) <POST http://localhost:8050/render.html?job_id=dfe9afdef59811e5b9ad000c29120335> (referer: None)
    2016-03-29 16:00:04 [scrapy] DEBUG: Crawled (200) <POST http://localhost:8050/render.html?job_id=dfe9afdef59811e5b9ad000c29120335> (referer: None)
    2016-03-29 16:00:04 [scrapy] DEBUG: Crawled (200) <POST http://localhost:8050/render.html?job_id=dfe9afdef59811e5b9ad000c29120335> (referer: None)

    Both scrapyd and Splash are running locally. Does anyone have an idea about this issue?
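
    The repeated 504 Gateway Time-out responses from render.html suggest Splash is timing out before pages finish rendering. One thing to try (a sketch, not a confirmed fix for this issue) is starting Splash with a higher --max-timeout and, in a hand-written scrapy-splash spider, raising the render timeout:

    docker run -p 8050:8050 scrapinghub/splash --max-timeout 300

    import scrapy
    from scrapy_splash import SplashRequest

    class RenderSpider(scrapy.Spider):
        name = "render"  # hypothetical

        def start_requests(self):
            # 'timeout' and 'wait' are Splash render arguments; 'timeout'
            # must stay below the server's --max-timeout
            yield SplashRequest("http://example.com/", self.parse,
                                args={"timeout": 90, "wait": 2})

        def parse(self, response):
            yield {"url": response.url}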

    Thanks in advance.

    opened by agramesh 16
  • Running Portia with Docker

    Running Portia with Docker

    On a Windows machine I installed Docker, and the command "docker build -t portia ." also ran successfully. After that I received an error when entering this command:

    docker run -i -t --rm -v <PROJECT_FOLDER>/data:/app/slyd/data:rw \
        -p 9001:9001 \
        --name portia \
        portia

    How do I configure this command, and what is <PROJECT_FOLDER>? Could you please help with this so I can test?
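
    For what it's worth, <PROJECT_FOLDER> is just a host directory where Portia stores its project files; a hypothetical invocation using ~/portia_projects as that folder (on Windows, Docker expects an absolute path such as C:\Users\you\portia_projects, or //c/Users/you/portia_projects with Docker Toolbox):

    docker run -i -t --rm -v ~/portia_projects/data:/app/slyd/data:rw -p 9001:9001 --name portia portia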

    opened by agramesh 15
  • Cant visit http://localhost:8000/

    Cant visit http://localhost:8000/

    After installing Vagrant and VirtualBox on my Win7 amd64 PC, I successfully launched the Ubuntu virtual machine. [screenshot]

    However, when I visited http://localhost:8000/, I got this: [screenshot]

    What am I missing?

    opened by fakegit 14
  • Try to resolve the "No module named slyd.tap" error

    Try to resolve the "No module named slyd.tap error"?

    When I start bin/slyd I receive the following error on CentOS:

    slyd]$ bin/slyd
    2015-08-27 17:18:29+0530 [-] Log opened.
    2015-08-27 17:18:29.318689 [-] Splash version: 1.6
    2015-08-27 17:18:29.319315 [-] Qt 4.8.5, PyQt 4.10.1, WebKit 537.21, sip 4.14.6, Twisted 15.2.1, Lua 5.1
    2015-08-27 17:18:29.319412 [-] Open files limit: 1024
    2015-08-27 17:18:29.319473 [-] Open files limit increased from 1024 to 4096
    2015-08-27 17:18:29.534741 [-] Xvfb is started: ['Xvfb', ':2431', '-screen', '0', '1024x768x24']
    Xlib: extension "RANDR" missing on display ":2431".
    2015-08-27 17:18:29.637987 [-] Traceback (most recent call last):
    2015-08-27 17:18:29.638202 [-] File "bin/slyd", line 41, in <module>
    2015-08-27 17:18:29.638312 [-] splash.server.main()
    2015-08-27 17:18:29.638408 [-] File "/usr/lib/python2.7/site-packages/splash/server.py", line 373, in main
    2015-08-27 17:18:29.638572 [-] max_timeout=opts.max_timeout
    2015-08-27 17:18:29.638665 [-] File "/usr/lib/python2.7/site-packages/splash/server.py", line 279, in default_splash_server
    2015-08-27 17:18:29.638801 [-] max_timeout=max_timeout
    2015-08-27 17:18:29.638887 [-] File "bin/slyd", line 15, in make_server
    2015-08-27 17:18:29.638981 [-] from slyd.tap import makeService
    2015-08-27 17:18:29.639099 [-] ImportError: No module named slyd.tap

    How can I resolve this?
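
    A common cause (an assumption - the thread doesn't show the resolution) is that the slyd package isn't on the Python path when bin/slyd is launched from another directory; running it from the slyd directory with PYTHONPATH set is one thing to try:

    cd portia/slyd
    PYTHONPATH=. bin/slyd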

    opened by agramesh 14
  • error unexpected error

    error unexpected error

    Hi all,

    I don't know why this happens or how to solve it. I got an error when I clicked "New sample": the GUI for entering a new sample did not appear, and instead this error appeared. [screenshot: portia error]

    Can anyone help me solve this?

    opened by jemjov 13
  • How to get fields exported in correct order?

    How to get fields exported in correct order?

    I've got this working on Docker, and can run a spider with something along the lines of:

    docker exec -t portialatest portiacrawl "/app/data/projects/nproject" "nproject.com" -o "/mnt/dc.csv"

    I've got spiders set up for a few sites which all scrape the same data. My problem is that the exported fields come out in different orders. I've tried inserting FIELDS_TO_EXPORT and also CSV_EXPORT_FIELDS into settings.py, but they seem to have no effect. I want the fields output in the same order for every site - the names of the fields (annotations) are Ref, Beds, Baths, Price, Location. Could some kind soul point me to the correct solution please? Many thanks.
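
    In stock Scrapy, CSV column order is controlled by the FEED_EXPORT_FIELDS setting (FIELDS_TO_EXPORT belongs to the CsvItemExporter API, not settings.py); a one-line settings.py sketch, assuming the Portia-generated project honours standard Scrapy feed settings:

    FEED_EXPORT_FIELDS = ["Ref", "Beds", "Baths", "Price", "Location"]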

    opened by buttonsbond 0
  • fix(sec): upgrade scrapy-splash to 0.8.0

    fix(sec): upgrade scrapy-splash to 0.8.0

    What happened?

    There is 1 security vulnerability found in scrapy-splash 0.7.2.

    What did I do?

    Upgrade scrapy-splash from 0.7.2 to 0.8.0 for vulnerability fix
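
    A sketch of the corresponding pin change, assuming the dependency is declared in a requirements file (the exact file is not shown in the PR):

    scrapy-splash==0.8.0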

    What did you expect to happen?

    Ideally, no insecure libs should be used.

    The specification of the pull request

    PR Specification from OSCS

    opened by chncaption 0
  • Fix portia_server

    Fix portia_server

    This PR is abandoned in favor of Gerapy.

    fix #883 #920 #913 #907 #903 #902 #895 #877 #842 #812 #811 #790 #760 #742 ...

    todo

    • [x] fix portia_server
      • [x] bump version from 2.0.8 to 2.1.0
    • [x] fix slybot (at least the build is working, may produce runtime errors)
      • [ ] fix some snapshot tests
    • [x] fix portiaui
    • [x] fix slyd
    • [ ] support frames #494

    Related: https://github.com/scrapinghub/portia2code/pull/12 and https://github.com/scrapy/scrapely/pull/122

    opened by milahu 6
  • Correct typo in docker setup docs

    Correct typo in docker setup docs

    Minor typo in the docs - the name of the environment variable used in the documentation info box (PROJECT_FOLDER) didn't match the name used in the example shell commands (PROJECTS_FOLDER).

    opened by mz8i 0
  • Browser version error message

    Browser version error message

    I am running Portia using Docker on Windows. Even though I just installed and updated Chrome today to Version 95.0.4638.54 (Official Build) (64-bit), I get an error saying I need to use an up-to-date browser as soon as I open Portia (localhost:9001).

    opened by fptmark 2
Owner
Scrapinghub
Turn web content into useful data
SearchifyX, predecessor to Searchify, is a fast Quizlet, Quizizz, and Brainly webscraper with various stealth features.

SearchifyX SearchifyX, predecessor to Searchify, is a fast Quizlet, Quizizz, and Brainly webscraper with various stealth features. SearchifyX lets you

28 Dec 20, 2022
Web-Scrapper using Python and Flask

Web-Scrapper - from the "[Beginner] Building a Web Scraper with Python" course by NomadCoders. Starting from basic Python syntax lectures, it scrapes the desired content from a website's HTML, prints it, saves it to a CSV file, and serves a simple web page using Flask.

윤성도 1 Nov 10, 2021
Python scraper to check for earlier appointments in Clalit Health Services

clalit-appt-checker Python scraper to check for earlier appointments in Clalit Health Services Some background If you ever needed to schedule a doctor

Dekel 16 Sep 17, 2022
Creating Scrapy scrapers via the Django admin interface

django-dynamic-scraper Django Dynamic Scraper (DDS) is an app for Django which builds on top of the scraping framework Scrapy and lets you create and

Holger Drewes 1.1k Dec 17, 2022
A simple code to fetch comments below an Instagram post and save them to a csv file

fetch_comments A simple code to fetch comments below an Instagram post and save them to a csv file usage First you have to enter your username and pas

2 Jul 14, 2022
WebScraper - A script that prints out a list of all EXTERNAL references in the HTML response to an HTTP/S request

Project A: WebScraper A script that prints out a list of all EXTERNAL references

2 Apr 26, 2022
Screen scraping and web crawling framework

Pomp Pomp is a screen scraping and web crawling framework. Pomp is inspired by and similar to Scrapy, but has a simpler implementation that lacks the

Evgeniy Tatarkin 61 Jun 21, 2021
Tencent Classroom: simulated login, fetching course information, video download, video decryption.

Tencent Classroom script. I wanted to study some material, but Tencent Classroom doesn't support custom playback speed, shows a watermark during playback, and some teachers' lessons need more than one viewing, so this script was born. Time is tight, so I will only fix major bugs occasionally. Features such as multi-threaded downloading won't be added in the short term; if you want to help improve this script, PRs are welcome. Tested working on 2020-05-22. Usage: very simple, three steps. Download the code,

163 Dec 30, 2022
Searching info from Google using Python Scrapy

Python-Search-Engine-Scrapy || Python crawler/indexing - using a crawler to fetch information from Google. Searching info from Google using Python Scrapy. Using a Python crawler to fetch weather information, as well as city information and data. translatio

HONGVVENG 1 Jan 06, 2022
Get paper names from dblp.org

scraper-dblp Get paper names from dblp.org and store them in a .txt file Useful for a related literature :) Install libraries pip3 install -r requirem

Daisy Lab 1 Dec 07, 2021
Scrapy uses Request and Response objects for crawling web sites.

Requests and Responses. Scrapy uses Request and Response objects for crawling web sites. Typically, Request objects are generated in the spiders and p

Md Rashidul Islam 1 Nov 03, 2021
This is a sport analytics project that combines the knowledge of OOP and Webscraping

This is a sport analytics project that combines the knowledge of Object Oriented Programming (OOP) and Webscraping, the weekly scraping of the English Premier league table is carried out to assess th

Dolamu Oludare 1 Nov 26, 2021
Web Scraping OLX with Python and Bsoup.

webScrap WebScraping first step. Authors: Paulo, Claudio M. First steps in Web Scraping. Project carried out for training in Web Scraping. The export

claudio paulo 5 Sep 25, 2022
A distributed crawler for weibo, building with celery and requests.

A distributed crawler for weibo, building with celery and requests.

SpiderClub 4.8k Jan 03, 2023
Here I provide the source code for doing web scraping using the python library, it is Selenium.

Here I provide the source code for doing web scraping using the python library, it is Selenium.

M Khaidar 1 Nov 13, 2021
robobrowser - A simple, Pythonic library for browsing the web without a standalone web browser.

RoboBrowser: Your friendly neighborhood web scraper Homepage: http://robobrowser.readthedocs.org/ RoboBrowser is a simple, Pythonic library for browsi

Joshua Carp 3.7k Dec 27, 2022
A dead simple crawler to get books information from Douban.

Introduction A dead simple crawler to get books information from Douban. Pre-requesites Python 3 Install dependencies from requirements.txt (Optional)

Yun Wang 1 Jan 10, 2022
Scrapping Connections' info on Linkedin

Scrapping Connections' info on Linkedin

MohammadReza Ardestani 1 Feb 11, 2022
Scraping and visualising India's real-time COVID-19 data from the MOHFW dataset.

COVID19-WEB-SCRAPER Open Source Tech Lab - Project [SEMESTER IV] OSTL Assignments OSTL Assignments - 1 OSTL Assignments - 2 Project COVID19 India Data

AMEY THAKUR 8 Apr 28, 2022
Python script to check if there is any differences in responses of an application when the request comes from a search engine's crawler.

crawlersuseragents This Python script can be used to check if there is any differences in responses of an application when the request comes from a se

Podalirius 13 Dec 27, 2022