This script downloads stock market data for a wide range of companies specified by their tickers. It reads in the desired tickers and interacts with Yahoo Finance to download and save CSV files containing the following fields: Date, Open, High, Low, Close, Adjusted Close, and Volume. Once data for a ticker has been downloaded and stored, further requests simply append the most recent information onto the existing CSV file. Additionally, each time a user requests downloads, a list of the successful and failed requests is generated.

A few important notes:

- Most importantly, a HUGE shoutout to https://github.com/bradlucas/get-yahoo-quotes-python for the repo on downloading historic data from Yahoo Finance. My code is built on top of the work done there, which was a huge time saver.
- Make sure to set up the directories for your ticker_location and csv_location.
- The default behavior is to download as much data as Yahoo Finance can provide.
- The data is daily historic data.

There are 5 command line arguments to facilitate the download process. They may be used directly in the terminal, or their defaults may be set by modifying the download_data.py script. A sketch of how they might be wired up is shown after the setup steps below.

Command Line Arguments:

- --ticker_location (path): specifies the file containing the list of tickers to download data for. The list should be saved as a text file with each ticker on its own line.
- --csv_location (path): the directory where CSV files are saved. If this directory does not already exist, create it manually before running the script.
- --add_tickers (string): adds more tickers to the existing list and database. Pass in a string of tickers separated by commas (no spaces) to add them to the list and download their CSV files. The default list of tickers is updated to contain the newly specified tickers. If there is not already a default list of tickers, create one before running the script.
- --remove_tickers (string): removes tickers from the list and database. Pass in a string of tickers separated by commas (no spaces) to remove them from the list as well as from the database (csv_location). If there is not already a default list of tickers, create one before running the script.
- --verbose (bool): prints extra information while downloading data, useful for debugging. Set to false to only see the progress bar for the data being downloaded.

To use the script, follow these simple steps:

0. Install dependencies using pip install -r requirements.txt.
1. Set up a default list of tickers. This can be a blank text file, or a list of tickers each on their own line, saved as a text file.
2. Set up a directory to save CSV files to.
3. Optionally, change the default ticker_location and csv_location file paths in the script itself.
4. Run download_data.py from the command line, or from your favorite IDE.
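For reference, here is a minimal sketch of how the five flags above could be declared with argparse. The default paths, help strings, and the str2bool helper are illustrative assumptions, not the actual code in download_data.py:

```python
# Hypothetical sketch of the CLI described above; the real
# download_data.py may declare its flags differently.
import argparse

def str2bool(value):
    # Treat "false"/"0"/"no" (any case) as False, everything else as True.
    return str(value).lower() not in ("false", "0", "no")

def parse_args():
    parser = argparse.ArgumentParser(
        description="Download daily historic stock data from Yahoo Finance.")
    # Text file with one ticker per line (placeholder default path).
    parser.add_argument("--ticker_location", default="tickers.txt",
                        help="Text file listing tickers, one per line.")
    # Directory that will hold one CSV per ticker; create it beforehand.
    parser.add_argument("--csv_location", default="csv_files/",
                        help="Directory where CSV files are saved.")
    # Comma-separated tickers (no spaces) to add to the list and download.
    parser.add_argument("--add_tickers", default="",
                        help='Tickers to add, e.g. "GME,AMC,AAPL".')
    # Comma-separated tickers (no spaces) to drop from the list and CSVs.
    parser.add_argument("--remove_tickers", default="",
                        help='Tickers to remove, e.g. "GME,AMC".')
    parser.add_argument("--verbose", type=str2bool, default=True,
                        help="Print extra information while downloading.")
    return parser.parse_args()

if __name__ == "__main__":
    args = parse_args()
    # Comma-separated strings become lists; absent flags become empty lists.
    to_add = [t for t in args.add_tickers.split(",") if t]
    to_remove = [t for t in args.remove_tickers.split(",") if t]
    print(args.ticker_location, args.csv_location, to_add, to_remove, args.verbose)
```

With this layout, a call like python download_data.py --add_tickers "GME,AMC" --verbose false would parse to to_add == ["GME", "AMC"] and args.verbose == False.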
Examples:

Download using a pre-saved list of tickers:

```
python download_data.py --ticker_location /home/user/Desktop/tickers.txt --csv_location /home/user/Desktop/CSVFiles/
```

Download data using a string of tickers, without referencing a tickers.txt file:

```
python download_data.py --csv_location /home/user/Desktop/CSVFiles/ --add_tickers "GME,AMC,AAPL,TSLA,SPY"
```

Download data using a string of tickers, while also referencing a tickers.txt file:

```
python download_data.py --csv_location /home/user/Desktop/CSVFiles/ --ticker_location /home/user/Desktop/tickers.txt --add_tickers "GME,AMC,AAPL,TSLA,SPY"
```

From here, the rest is history (pun intended ;)). When downloading from a pre-saved list of tickers, the script opens as many threads as it can to speed up this highly parallelizable process and get you your data as quickly as possible. Once it's finished, you'll find all the data in your csv_location folder!

Now that you have data, you can easily update the files with the latest information at the end of each day, week, or whatever time frame you prefer. Simply run the script the same way as described above, and the newest data will be appended to the existing files. If there is a new ticker in your list, its full history will be downloaded. Happy downloading!
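To illustrate the threaded download-or-append flow described above, here is a minimal sketch. The helpers fetch_rows, update_ticker, and update_all are hypothetical stand-ins for the script's internals, and the query1.finance.yahoo.com endpoint (used by the get-yahoo-quotes-python repo this project builds on) may additionally require the cookie/crumb handling that repo implements:

```python
# Minimal sketch of the parallel download-or-append flow; helper names are
# illustrative, not the actual functions in download_data.py.
import csv
import io
import os
from concurrent.futures import ThreadPoolExecutor, as_completed

import requests

def fetch_rows(ticker, start_date=None):
    # Historical-data endpoint from get-yahoo-quotes-python; real requests
    # may need the cookie/crumb handling that repo implements.
    url = (f"https://query1.finance.yahoo.com/v7/finance/download/{ticker}"
           f"?period1=0&period2=9999999999&interval=1d&events=history")
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    rows = list(csv.reader(io.StringIO(resp.text)))[1:]  # drop the header row
    if start_date is not None:
        rows = [r for r in rows if r[0] > start_date]  # ISO dates sort as text
    return rows

def update_ticker(ticker, csv_location):
    path = os.path.join(csv_location, f"{ticker}.csv")
    if os.path.exists(path):
        # Existing file: append only rows newer than the last saved date.
        with open(path, newline="") as f:
            last_date = list(csv.reader(f))[-1][0]
        rows, mode, header = fetch_rows(ticker, start_date=last_date), "a", False
    else:
        # New ticker: download the full history.
        rows, mode, header = fetch_rows(ticker), "w", True
    with open(path, mode, newline="") as f:
        writer = csv.writer(f)
        if header:
            writer.writerow(["Date", "Open", "High", "Low",
                             "Close", "Adj Close", "Volume"])
        writer.writerows(rows)

def update_all(tickers, csv_location):
    succeeded, failed = [], []
    # Downloads are I/O-bound, so one thread per ticker parallelizes well.
    with ThreadPoolExecutor() as pool:
        futures = {pool.submit(update_ticker, t, csv_location): t
                   for t in tickers}
        for future in as_completed(futures):
            ok = future.exception() is None
            (succeeded if ok else failed).append(futures[future])
    return succeeded, failed  # the success/failure report mentioned above
```

Calling update_all(tickers, csv_location) returns the two lists behind the per-run success/failure report; ThreadPoolExecutor sizes its pool automatically when max_workers is omitted.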
Overview
Script used to download data for stocks.

Owner
Carmelo Gonzales