Comment Webpage Screenshot

Comment Webpage Screenshot is a GitHub Action that captures screenshots of web pages and HTML files located in the repository, uploads them to an Image Upload Service and comments the screenshots on the pull request that triggered the action.

Note: This action only works on pull requests.

Workflow Inputs

These are the inputs that can be provided to the workflow.

Name                        Required  Default        Description
upload_to                   No        github_branch  Image upload service name (options: github_branch, imgur). See "Available Image Upload Services" below.
capture_changed_html_files  No        yes            Enable or disable screenshot capture for HTML files changed in the pull request (options: yes, no)
capture_html_file_paths     No        null           Comma-separated paths to the HTML files to capture (example: /pages/index.html, about.html)
capture_urls                No        null           Comma-separated URLs to capture (example: https://dev.example.com, https://dev.example.com/about.html)
github_token                No        github.token   GITHUB_TOKEN provided by the workflow run, or a Personal Access Token (PAT)

Example Workflow

name: Comment Webpage Screenshot

on:
  pull_request:
    types: [opened, reopened, synchronize]

jobs:
  build:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v2

      - name: Comment Webpage Screenshot
        uses: saadmk11/comment-webpage-screenshot@v0.5
        with:
          # Optional, the action will create a new branch and
          # upload the screenshots to that branch.
          upload_to: github_branch  # Or, imgur
          # Optional, the action will capture screenshots
          # of all the changed html files on the pull request.
          capture_changed_html_files: yes  # Or, no
          # Optional, the action will capture screenshots
          # of the html files provided in this input.
          # Comma-separated file paths are accepted.
          capture_html_file_paths: "/pages/index.html, about.html"
          # Optional, the action will capture screenshots
          # of the URLs provided in this input.
          # You can add URLs of your development server or
          # run the server in the previous step
          # and add that URL here (For Example: http://172.17.0.1:8000/).
          # Comma-separated URLs are accepted.
          capture_urls: "https://dev.example.com, https://dev.example.com/about.html"
          # Optional, defaults to the token provided by the workflow run
          github_token: ${{ secrets.MY_GITHUB_TOKEN }}
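
Since every input has a default, the step can also be reduced to a single line when the defaults suit you; a minimal sketch (pinned here to the v0.5 release mentioned below):

      - name: Comment Webpage Screenshot
        uses: saadmk11/comment-webpage-screenshot@v0.5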

Run Local Development Server Inside the Workflow and Capture Screenshots

If you want to run your application's development server inside the workflow and capture screenshots from the server running there, you can structure the workflow YAML file like this:

Using Docker to Run The Application:

name: Comment App Screenshot

on:
  pull_request:
    types: [opened, synchronize]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
    # Checkout your pull request code
    - uses: actions/checkout@v2
    
    # Build Development Docker Image
    - run: docker build -t local .
    # Run the Docker image
    # You need to run it detached (-d)
    # so that the action is not blocked
    # and can move on to the next step.
    # You also need to publish the port on the host (-p 8000:8000)
    # so that it is reachable outside the container.
    - run: docker run --name demo -d -p 8000:8000 local
    # Sleep for a few seconds to let the container start
    - run: sleep 10
    
    # Run Screenshot Comment Action
    - name: Run Screenshot Comment Action
      uses: saadmk11/comment-webpage-screenshot@v0.5
      with:
        upload_to: github_branch
        capture_changed_html_files: no
        # You must use `172.17.0.1` if you are running
        # the application locally inside the workflow.
        # Otherwise, the container that runs this action
        # will not be able to reach the application.
        capture_urls: 'http://172.17.0.1:8000/, http://172.17.0.1:8000/admin/login/'
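
A fixed sleep can be flaky if the container is slow to start. As an alternative, you could poll the server until it responds before running the action; a minimal sketch using curl (the URL and the 60-second timeout are assumptions, adjust them to your application):

    # Wait until the application answers instead of sleeping for a fixed time
    # (the URL and timeout are assumptions; adjust them to your application)
    - run: timeout 60 sh -c 'until curl -sf http://172.17.0.1:8000/; do sleep 2; done'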

Directly Running The Application:

name: Comment App Screenshot

on:
  pull_request:
    types: [opened, synchronize]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2

      - name: Use Node.js
        uses: actions/setup-node@v2
        with:
          node-version: '16.x'
      # Use `nohup` to run the node app
      # so that the execution of the next steps is not blocked
      - run: nohup node main.js &
      # Sleep for a few seconds to let the server start
      - run: sleep 5

      # Run Screenshot Comment Action
      - name: Run Screenshot Comment Action
        uses: saadmk11/comment-webpage-screenshot@v0.5
        with:
          upload_to: imgur
          capture_changed_html_files: no
          # You must use `172.17.0.1` if you are running
          # the application locally inside the workflow.
          # Otherwise, the container that runs this action
          # will not be able to reach the application.
          capture_urls: 'http://172.17.0.1:8081'

Important Note:

If you run the application server inside the GitHub Actions workflow:

  • You need to run it in the background or in detached mode.

  • If you are using Docker to run your application server, you need to publish the port to the host (for example: -p 8000:8000).

  • You cannot use a localhost URL in capture_urls. You need to use 172.17.0.1 so that the comment-webpage-screenshot action can send requests to the server running locally. So, http://localhost:8081 becomes http://172.17.0.1:8081. If you would rather not hard-code this address, see the sketch after this list.
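
The address 172.17.0.1 is the default gateway of Docker's bridge network on GitHub-hosted Linux runners. If you prefer not to hard-code it, one option is to look it up in an earlier step and reference it from the action step; a sketch, assuming the default bridge network (HOST_IP is a hypothetical variable name):

    # Resolve the Docker bridge gateway instead of hard-coding 172.17.0.1
    # (sketch; assumes the default "bridge" network on the runner)
    - run: echo "HOST_IP=$(docker network inspect bridge --format '{{(index .IPAM.Config 0).Gateway}}')" >> $GITHUB_ENV

    # Then, in the action step:
    #   capture_urls: "http://${{ env.HOST_IP }}:8000/"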

Examples including application code can be found here: Example Projects

Run External Development Server and Capture Screenshots

If your application has an external development server that deploys changes on every pull request, you can add the URLs of that server to the capture_urls input. This lets the action capture screenshots from the external development server after each deployment.

Example:

name: Comment App Screenshot

on:
  pull_request:
    types: [opened, synchronize]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
    # Run Screenshot Comment Action
    - name: Run Screenshot Comment Action
      uses: saadmk11/comment-webpage-screenshot@v0.5
      with:
        upload_to: github_branch
        capture_changed_html_files: no
        # Add your external development server URLs
        capture_urls: 'https://dev.example.com, https://dev.example.com/about.html'

Capture Screenshots for Static HTML Pages

If your repository contains only static files and does not require a server, you can simply provide the paths of the HTML files you want screenshots of.

Example:

name: Comment Static Site Screenshot

on:
  pull_request:
    types: [opened, synchronize]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
    # Run Screenshot Comment Action
    - name: Run Screenshot Comment Action
      uses: saadmk11/comment-webpage-screenshot@v0.5
      with:
        upload_to: imgur
        # Capture Screenshots of Changed HTML Files
        capture_changed_html_files: yes
        # Comma-separated paths to any other HTML files
        capture_html_file_paths: "/pages/index.html, about.html"

Available Image Upload Services

As GitHub does not allow uploading images to a comment through the API, the action relies on other services to host the screenshots.

These are the currently available image upload services.

Imgur

If the value of the upload_to input is imgur, the screenshots will be uploaded to Imgur. Keep in mind that the uploaded screenshots will be public and anyone can see them. Imgur also rate-limits how many images can be uploaded per hour; refer to Imgur's Rate Limits documentation for more details. This option is suitable for small open source repositories.

Please also review Imgur's Terms of Service.

GitHub Branch (Default)

If the value of the upload_to input is github_branch, the screenshots will be pushed to a GitHub branch that the action creates in your repository. The comments will reference the images pushed to this branch.

This is suitable for open source and private repositories.

If you want to add or use a different image upload service, feel free to open a new issue or pull request.

Examples

You can find some example use cases of this action here: Example Projects

Here are some comments created by this action on the Example Project:

[Screenshot: example comments posted by the action on saadmk11/django-demo pull request #11]

License

The code in this project is released under the GNU GENERAL PUBLIC LICENSE Version 3.
