A happy and lightweight Python package that searches the Google News RSS feed, returns a usable JSON response, and scrapes the complete article. No need to write scrapers for fetching articles anymore.

Overview


GNews

🚩 A happy and lightweight Python package that searches the Google News RSS feed and returns a usable JSON response
🚩 You can also fetch the full article (no need to write scrapers for fetching articles anymore)


Installation

pip install gnews

Usage

from gnews import GNews

google_news = GNews()
json_resp = google_news.get_news('Pakistan')
print(json_resp[0])
{'publisher': 'Aljazeera.com',
 'description': 'Pakistan accuses India of stoking conflict in Indian Ocean - Aljazeera.com',
 'published date': 'Tue, 16 Feb 2021 11:50:43 GMT',
 'title': 'Pakistan accuses India of stoking conflict in Indian Ocean - Aljazeera.com',
 'url': 'https://www.aljazeera.com/news/2021/2/16/pakistan-accuses-india-of-nuclearizing-indian-ocean'}

  • get_news() returns a list of dictionaries: [{'title': '...', 'published date': '...', 'description': '...', 'url': '...', 'publisher': '...'}]
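Because the response is a plain list of dictionaries, it can be post-processed with ordinary Python. The sketch below works on a hard-coded sample record mirroring the schema above (it is not live library output); with the real package you would instead start from `GNews().get_news('Pakistan')`.

```python
# Sample record mirroring the schema returned by get_news().
# With the real library: results = GNews().get_news('Pakistan')
results = [
    {
        'title': 'Pakistan accuses India of stoking conflict in Indian Ocean - Aljazeera.com',
        'published date': 'Tue, 16 Feb 2021 11:50:43 GMT',
        'description': 'Pakistan accuses India of stoking conflict in Indian Ocean',
        'url': 'https://www.aljazeera.com/news/2021/2/16/pakistan-accuses-india-of-nuclearizing-indian-ocean',
        'publisher': 'Aljazeera.com',
    },
]

# Collect just the headlines and their links.
headlines = [(item['title'], item['url']) for item in results]
for title, url in headlines:
    print(title, '->', url)
```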

Available locations and languages

print(google_news.countries)

'Australia', 'Botswana', 'Canada', 'Ethiopia', 'Ghana', 'India', 'Indonesia', 'Ireland', 'Israel', 'Kenya', 'Latvia',
'Malaysia', 'Namibia', 'New Zealand', 'Nigeria', 'Pakistan', 'Philippines', 'Singapore', 'South Africa', 'Tanzania',
'Uganda', 'United Kingdom', 'United States', 'Zimbabwe', 'Czech Republic', 'Germany', 'Austria', 'Switzerland', 'Argentina',
'Chile', 'Colombia', 'Cuba', 'Mexico', 'Peru', 'Venezuela', 'Belgium', 'France', 'Morocco', 'Senegal', 'Italy', 'Lithuania',
'Hungary', 'Netherlands', 'Norway', 'Poland', 'Brazil', 'Portugal', 'Romania', 'Slovakia', 'Slovenia', 'Sweden',
'Vietnam', 'Turkey', 'Greece', 'Bulgaria', 'Russia', 'Ukraine', 'Serbia', 'United Arab Emirates', 'Saudi Arabia', 'Lebanon',
'Egypt', 'Bangladesh', 'Thailand', 'China', 'Taiwan', 'Hong Kong', 'Japan', 'Republic of Korea'
print(google_news.languages)

'english', 'indonesian', 'czech', 'german', 'spanish', 'french', 'italian', 'latvian', 'lithuanian', 'hungarian',
'dutch', 'norwegian', 'polish', 'portuguese brasil', 'portuguese portugal', 'romanian', 'slovak', 'slovenian', 'swedish',
'vietnamese', 'turkish', 'greek', 'bulgarian', 'russian', 'serbian', 'ukrainian', 'hebrew', 'arabic', 'marathi', 'hindi', 'bengali',
'tamil', 'telugu', 'malayalam', 'thai', 'chinese simplified', 'chinese traditional', 'japanese', 'korean'

You can set the country, language, period, and maximum number of results during initialization:

google_news = GNews(language='english', country='United States', period='7d', max_results=10)

Other methods to set the country, language, period, and number of results:

set_period('7d') # news from the last 7 days
max_results(10) # maximum number of results for a keyword
set_country('United States') # news from a specific country
set_language('english') # news in a specific language

Google News covers 141+ countries and 41+ languages. At the bottom left of the Google News page you can find a Language & region section listing all supported combinations.
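Under the hood, packages like this query the public Google News RSS search feed, which encodes the language and region in its query string. The sketch below builds such a feed URL with the standard library only; the parameter names (`q`, `hl`, `gl`, `ceid`) are assumptions based on the publicly observable feed format, not GNews internals.

```python
from urllib.parse import urlencode

def build_rss_url(query, language='en', country='US'):
    """Build a Google News RSS search URL (assumed public feed format)."""
    base = 'https://news.google.com/rss/search'
    params = {
        'q': query,                       # search keyword(s)
        'hl': language,                   # interface language
        'gl': country,                    # geographic location
        'ceid': f'{country}:{language}',  # country/language edition id
    }
    return f'{base}?{urlencode(params)}'

url = build_rss_url('Pakistan')
print(url)
```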

Article Properties

| Property | Description | Example |
|---|---|---|
| title | Title of the article | IMF Staff and Pakistan Reach Staff-Level Agreement on the Pending Reviews Under the Extended Fund Facility |
| url | Google News link to the article | Article Link |
| published date | Published date | Wed, 07 Jun 2017 07:01:30 GMT |
| description | Short description of the article | IMF Staff and Pakistan Reach Staff-Level Agreement on the Pending Reviews Under the Extended Fund Facility ... |
| publisher | Publisher of the article | The Guardian |
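The published date field is an RFC 2822 timestamp, so it can be turned into a timezone-aware datetime with the standard library alone. The sample value below is the one from the table above.

```python
from email.utils import parsedate_to_datetime

# 'published date' values use RFC 2822 format, as in the example above.
published = 'Wed, 07 Jun 2017 07:01:30 GMT'
dt = parsedate_to_datetime(published)

print(dt.year, dt.month, dt.day)  # 2017 6 7
print(dt.tzinfo)                  # GMT parses as UTC
```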

Getting full article

You can use newspaper3k to scrape the full article; you can also fetch the full article with get_full_article by passing the URL.

Make sure newspaper3k is already installed:

pip3 install newspaper3k

from gnews import GNews

google_news = GNews()
json_resp = google_news.get_news('Pakistan')
article = google_news.get_full_article(json_resp[0]['url']) # returns a newspaper3k Article instance; all newspaper3k attributes are available on it
article.title

'IMF Staff and Pakistan Reach Staff-Level Agreement on the Pending Reviews Under the Extended Fund Facility'

article.text 

End-of-Mission press releases include statements of IMF staff teams that convey preliminary findings after a mission. The views expressed are those of the IMF staff and do not necessarily represent the views of the IMF’s Executive Board.\n\nIMF staff and the Pakistani authorities have reached an agreement on a package of measures to complete second to fifth reviews of the authorities’ reform program supported by the IMF Extended Fund Facility (EFF) ..... (full article)

article.images

{'https://www.imf.org/~/media/Images/IMF/Live-Page/imf-live-rgb-h.ashx?la=en', 'https://www.imf.org/-/media/Images/IMF/Data/imf-logo-eng-sep2019-update.ashx', 'https://www.imf.org/-/media/Images/IMF/Data/imf-seal-shadow-sep2019-update.ashx', 'https://www.imf.org/-/media/Images/IMF/Social/TW-Thumb/twitter-seal.ashx', 'https://www.imf.org/assets/imf/images/footer/IMF_seal.png'}

article.authors

[]

Read the full documentation for newspaper3k

Todo

  • Save to MongoDB
  • Save to MySQL

License

MIT ©

Comments
  • Google News URL format update


    Hello,

    Thanks for providing this piece of code.

I have recently come across weird behavior regarding the period parameter (e.g. with 7d you can get news from weeks prior). More importantly, the number of news items returned has dramatically decreased recently when combining countries and languages, or even when just providing a language and leaving the country parameter as None (for English).

Setting the language parameter to any other language (e.g. French ['fr']) systematically returns 0 articles, even for popular searches.

I suspect Google has changed/updated their URL format and/or available countries/languages!

    opened by sif-gondy 6
  • get_news stopped working


I've been working on some code for the past week and it had been working fine with get_news("topic"). It stopped working earlier; get_top_news() still works. I tried other keywords for the topic but it still returns nothing.

    Any debug help?

    opened by shorenewsbeacon 5
  • [Questions] Hello author. Is possible to make Gnews get news from multiple topics?


This is my test code. It works with a single keyword. Now I'm trying to make it work with multiple keywords. Is it possible to do that? Example:

    google_news = GNews(language='vi', country='Vietnam',
                        period='1h', max_results=20)
    json_resp = google_news.get_news('Covid', 'Apple')
    print(json_resp)
    
    opened by ghost 3
  • Feature/results in date range


    get_news('key') can search within a date range, if provided. Other functions return warnings if a date range has been provided as they do not support searching in this way. A workaround for each other function is suggested, but will provide slightly different results

    opened by tigsinthetrees 2
  • Top headlines?


    Nice!

    There doesn't currently appear to be a way to get news stories without specifying a topic (key). Could I modify the function so if the user uses get_news() without a key, or passing None or the empty string, it just grabs the top stories from the main feed for your locale?

    Do you support Category or Location based searches?

    Thanks!

    opened by aaronchantrill 2
  • Streamlined workflows by minimizing clutter.


    ⚔️ Things changed:

    This PR primarily focuses on .github/workflows/python-publish.yml.

    • The workflow now only triggers upon manual dispatch / a successful published release (which the comments at the beginning said that it did but actually didn't).

    • The PyPI publish workflow doesn't require multiple dependencies / commands to be set up anymore. The build job uses the build Python package and the publishing workflow has been changed to the official one provided by the Python Packaging Authority.

    • Bumped dependency version for checking out source code.

    • Bumped dependency version for setting up Python.

    🔖 To make this work:

    Since the publishing workflow has been changed, you will need to remove these secrets from the repository:

    • PYPI_USERNAME
    • PYPI_PASSWORD

    ... and replace them with PYPI_API_TOKEN. This secret will contain a token provided by PyPI itself, which you can get from the Manage page of your project by clicking on "Create a token for project-name".

    I hope this helps :D

    opened by hitblast 1
  • Stop unauthorized and redundant installs of the "newspaper" library


    The call to utils.import_or_install() introduced several issues:

    • It was called with the parameter "newspaper3k". Since the correct import name is "newspaper" ("newspaper3k" is only the installation name), the __import__ call always failed, which means every call to the get_full_article() method started a redundant "pip install" process.

    • Just to reiterate: Every user of this library right now sees a long and cryptic pip install output message WITH EACH AND EVERY CALL they make to get_full_article().

    • Installing pip packages and/or modifying a user's environment without permission or any indication of such behavior (nothing in the docs) is unacceptable.

    • Installing packages using direct calls to pip module internals is NOT the way to install packages and also yields warnings regarding the usage of incorrect pip wrappers.

    opened by valorien 1
  • Problems about get full article in a docker container


    Hello. I have some doubts about this line https://github.com/ranahaani/GNews/blob/master/gnews/gnews.py#L86. I need to run GNews in a docker container by using Airflow in order to get information about articles. I got the following message:

    WARNING: pip is being invoked by an old script wrapper. This will fail in a future version of pip.
    Please see https://github.com/pypa/pip/issues/5599 for advice on fixing the underlying issue.
    To avoid this problem you can invoke Python with '-m pip' instead of running pip directly.
    

    And then Airflow sent me a negative signal to fail my task.

    opened by aoelvp94 1
  • results are limited to 100


    Hi, it seems that I can't get more than 100 results despite changing max_results. Why is that, and would I get different results if I repeat the search?

    opened by Alloooshe 1
  • [Snyk] Security upgrade certifi from 2021.10.8 to 2022.12.7


    Snyk has created this PR to fix one or more vulnerable packages in the `pip` dependencies of this project.

    Changes included in this PR

    • Changes to the following files to upgrade the vulnerable dependencies to a fixed version:
      • requirements.txt
    ⚠️ Warning
    requests 2.26.0 requires certifi, which is not installed.
    
    

    Vulnerabilities that will be fixed

    By pinning:

    | Severity | Priority Score (*) | Issue | Upgrade | Breaking Change | Exploit Maturity |
    | :---: | :--- | :--- | :--- | :---: | :--- |
    | medium severity | 626/1000 (Why? Recently disclosed, has a fix available, CVSS 6.8) | Insufficient Verification of Data Authenticity SNYK-PYTHON-CERTIFI-3164749 | certifi: 2021.10.8 -> 2022.12.7 | No | No Known Exploit |

    (*) Note that the real score may have changed since the PR was raised.

    Some vulnerabilities couldn't be fully fixed and so Snyk will still find them when the project is tested again. This may be because the vulnerability existed within more than one direct dependency, but not all of the affected dependencies could be upgraded.

    Check the changes in this PR to ensure they won't cause issues with your project.


    Note: You are seeing this because you or someone else with access to this repository has authorized Snyk to open fix PRs.

    For more information: 🧐 View latest project report

    🛠 Adjust project settings

    📚 Read more about Snyk's upgrade and patch logic


    Learn how to fix vulnerabilities with free interactive lessons:

    🦉 Learn about vulnerability in an interactive lesson of Snyk Learn.

    opened by snyk-bot 0
  • [Snyk] Security upgrade python from 3.10.0 to 3.12.0a3


    This PR was automatically created by Snyk using the credentials of a real user.


    Keeping your Docker base image up-to-date means you’ll benefit from security fixes in the latest version of your chosen image.

    Changes included in this PR

    • Dockerfile

    We recommend upgrading to python:3.12.0a3, as this image has only 272 known vulnerabilities. To do this, merge this pull request, then verify your application still works as expected.

    Some of the most important vulnerabilities in your base image include:

    | Severity | Priority Score / 1000 | Issue | Exploit Maturity |
    | :------: | :-------------------- | :---- | :--------------- |
    | critical severity | 714 | Directory Traversal SNYK-DEBIAN11-DPKG-2847942 | No Known Exploit |
    | critical severity | 714 | Out-of-bounds Read SNYK-DEBIAN11-LIBTASN16-3061097 | No Known Exploit |
    | critical severity | 714 | OS Command Injection SNYK-DEBIAN11-OPENSSL-2807596 | No Known Exploit |
    | critical severity | 714 | OS Command Injection SNYK-DEBIAN11-OPENSSL-2933518 | No Known Exploit |
    | high severity | 614 | Improper Input Validation SNYK-DEBIAN11-XZUTILS-2444276 | No Known Exploit |


    Note: You are seeing this because you or someone else with access to this repository has authorized Snyk to open fix PRs.

    For more information: 🧐 View latest project report

    🛠 Adjust project settings


    Learn how to fix vulnerabilities with free interactive lessons:

    🦉 Learn about vulnerability in an interactive lesson of Snyk Learn.

    opened by ranahaani 0
  • Unable to obtain news Reports within a specified Date range


    I am trying to obtain news reports within a specific date range using the start_date and end_date parameters, but the range doesn't seem to work: it fetches the top news reports from the current date only. I have tried both the tuple approach and the datetime object approach, but neither works. I have also pointed to the particular piece of code that may fail to set the end date parameter.

    opened by AryanKapadia 0
  • Nothing is fetched anymore


    For some reason, I cannot seem to fetch any news anymore, not even with the README example. Could it be an IP issue? It seems to be working when using VPN.

    opened by rolandgvc 2
  • Allow config parameter in the gnews.get_full_article()


    I am using the GNews get_full_article() function to extract the top_image from the article. However, when I run this on my production server it throws the error below:

    ERROR: Article `download()` failed with HTTPSConnectionPool(host='indianexpress.com', port=443): Max retries exceeded with url: /article/idea-exchange/gautam-gambhir-idea-exchange-first-challenge-mcd-polls-change-narrative-bjp-doesnt-do-anything-8158944/ (Caused by ProxyError('Cannot connect to proxy.', OSError('Tunnel connection failed: 403 Forbidden'))) on URL https://indianexpress.com/article/idea-exchange/gautam-gambhir-idea-exchange-first-challenge-mcd-polls-change-narrative-bjp-doesnt-do-anything-8158944/

    I searched through Google and ended up with this solution:

    from newspaper import Article
    from newspaper import Config
    
    user_agent = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0) Gecko/20100101 Firefox/78.0'
    config = Config()
    
    config.browser_user_agent = user_agent
    
    url = "https://www.chicagotribune.com/nation-world/ct-florida-school-shooter-nikolas-cruz-20180217-story.html"
    
    page = Article(url, config=config)
    
    
    page.download()
    page.parse()
    print(page.text)
    

    As per the code above, I need to specify the user agent and assign it to config.browser_user_agent to prevent the server from banning requests. However, when using gnews.get_full_article() I am not able to pass the config parameter. Is there any provision for this parameter? Am I missing something?

    opened by sohaibrahman64 1
  • I got a lot less links than earlier in the last month / 2 months?


    Hi,

    I use GNews a lot and lately I've been having some trouble with it. The code I use is: google_news.get_news("News I want to get"). Normally I get several hundred links a day, but now only a few. It seems it can't find anything anymore: even when I run the code for a specific search term, it doesn't find all the news by far.

    I've been using the same code for a long time and nothing has changed in the code. I also did not change the version of GNews. I suddenly just got a lot less links?

    opened by Colder347 2
  • Non Issue - Just a Suggestion


    First off, thanks for creating and releasing this very helpful Package, it saved me a lot of time from coding it on my own for my quick project.

    The only suggestion I have is in reference to the formatting of the return value for 'description'. What gets returned to me is not a description of the article but a series of short titles from other news sites, without links. I know you run it through BeautifulSoup, which removes the links and the list structure, and what is left is a confusing mess.

    I modified the code so that I get back everything I want, but for other users you may want to add an option to switch that on and off. I added this to my base and control it during initialization of GNews; now I switch BeautifulSoup formatting on/off with a simple option I pass once, anytime it's needed. Considering most consumers of your API are technical, this will not be confusing.

    opened by RaulEstaka 1
Releases: 0.2.3

Owner: Muhammad Abdullah (Python/Django)