学习强国 automation: 100% correct, instant answers, worth 45 points

Overview

Project Introduction

An automation script for 学习强国 (Xuexi Qiangguo) that frees up your time!

Built with Selenium, requests, mitmproxy, and Baidu AI Cloud (百度智能云) text recognition (OCR).

Instructions

Requirement: Google Chrome (the driver is matched to your installed Chrome version).


The ChromeDriver is downloaded automatically.

On first run, a database file db.db is generated; it is used to speed up the article and video tasks.
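The schema of db.db is not documented in this README; as a rough, hypothetical sketch (the table and column names below are illustrative, not the actual schema), a small SQLite cache along these lines would let the script skip articles and videos it has already handled:

import sqlite3

# Illustrative only: a tiny cache of items that were already read/watched,
# so repeated runs do not waste time on them. The real db.db schema may differ.
conn = sqlite3.connect("db.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS history ("
    "url TEXT PRIMARY KEY, "
    "kind TEXT, "            # 'article' or 'video'
    "finished_at TEXT)"
)
conn.commit()

def already_done(url: str) -> bool:
    """Return True if this article/video URL was completed in an earlier run."""
    return conn.execute("SELECT 1 FROM history WHERE url = ?", (url,)).fetchone() is not None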

Installing Dependencies

pip install -r requirements.txt

If you don't have a VPN, you can use the domestic Aliyun mirror instead:

pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple
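The exact contents of requirements.txt are defined by the repository; based on the libraries named in the overview, it should contain at least the following (the package names for the Baidu SDK and the progress bar are my best guesses, and version pins may differ):

selenium
requests
mitmproxy
baidu-aip    # Baidu AI Cloud OCR SDK (assumed package name)
tqdm         # progress bar added in v0.31 (assumed)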

Usage

Make sure you are on a good network connection, otherwise errors may occur.

1. Run from the console: python main.py

2. Choose your options (unless you really need to watch it, choose not to display the automation process, so you don't accidentally interfere with the browser); see the sketch after this list for how this option typically maps onto Selenium.

Wait a moment: connecting to the 学习强国 servers takes time, and the wait depends heavily on your network speed.

3. Scan the QR code to log in.

4. Select the tasks. Only articles, videos, the daily quiz, the weekly quiz, and the special-topic quiz are supported for now; more features are in development, but these already cover 45 points. Multiple tasks can be selected, with the options separated by spaces. Selecting articles or videos means a slightly longer wait.

5. When the tasks finish, stop the program manually.
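For reference, the "don't display the automation process" choice in step 2 normally corresponds to running Chrome headless. A minimal sketch of how such a switch is commonly wired up with Selenium (the show_browser flag and this code are hypothetical, not the project's actual main.py; Selenium 4.6+ also fetches a matching ChromeDriver by itself, which lines up with the note above about the driver being downloaded automatically):

from selenium import webdriver
from selenium.webdriver.chrome.options import Options

def build_driver(show_browser: bool) -> webdriver.Chrome:
    opts = Options()
    if not show_browser:
        # Hide the automation process (step 2's recommended choice).
        opts.add_argument("--headless")
    # A common, basic anti-detection tweak; the project's own measures may differ.
    opts.add_argument("--disable-blink-features=AutomationControlled")
    # With Selenium >= 4.6, Selenium Manager downloads a matching ChromeDriver
    # automatically, so no explicit driver path is required here.
    return webdriver.Chrome(options=opts)

driver = build_driver(show_browser=False)
driver.get("https://www.xuexi.cn/")   # the 学习强国 web portal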

Examples

(screenshots of an example run)

Baidu AI Cloud (百度智能云) Setup

1. Log in to the console

Click → 百度智能云 (Baidu AI Cloud)


2. Create an application


3. Select the options


4. Obtain the API Key and Secret Key

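With the API Key and Secret Key in hand, the usual way to call Baidu OCR from Python is the official baidu-aip SDK. A minimal sketch (the placeholders are yours to fill in; whether this project calls the SDK or the raw HTTP API is not spelled out here):

from aip import AipOcr   # pip install baidu-aip

APP_ID = "your_app_id"
API_KEY = "your_api_key"
SECRET_KEY = "your_secret_key"

client = AipOcr(APP_ID, API_KEY, SECRET_KEY)

# General text recognition on a local image; the response holds a 'words_result' list.
with open("frame.png", "rb") as f:
    result = client.basicGeneral(f.read())

for item in result.get("words_result", []):
    print(item["words"])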

Changelog

  • v0.1: articles and videos; score: 25
  • v0.2: improved articles and videos, added the daily quiz (100% correct); score: 30
  • v0.3: added the weekly quiz and the special-topic quiz (also 100% correct); score: 45
  • v0.31: improved record storage, directory structure, and config-file structure; added a progress bar, automatic driver download, and cross-platform support (Linux, Windows, macOS)
  • v1.0: rewrote the whole project; added persistent login, driver detection matched against the installed Chrome, automatic driver download, faster login, adaptive handling of articles and videos, faster and more accurate answering, stronger anti-detection, and explanatory comments in every file (to make it easier for others to modify)
  • v1.1: because answers to special-topic video questions could not be matched reliably, Baidu AI Cloud OCR was added to extract candidate answers from the video; the answer still has to be filled in manually, since there is no good way to filter the extracted text yet (see the sketch below). At least you no longer have to watch the video.
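As the v1.1 note says, OCR only surfaces candidate text and the answer still has to be typed in by hand. A hedged sketch of how a video frame could be captured and fed to the OCR client from the earlier sketch, reusing that driver and client (the CSS selector is a guess, and screenshotting a <video> element does not work on every page):

from selenium.webdriver.common.by import By

# Illustrative only: grab the currently displayed video frame, OCR it,
# and print the recognized lines as answer candidates.
video = driver.find_element(By.CSS_SELECTOR, "video")   # selector is an assumption
frame_png = video.screenshot_as_png                     # PNG bytes of the element

candidates = client.basicGeneral(frame_png)
print("Text found in the current video frame (possible answers):")
for item in candidates.get("words_result", []):
    print(" -", item["words"])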

Closing Notes

I thought for a long time about whether persistent login should support batches of accounts. In the end I decided against it: the goal of this project is to save time for individuals who have no time for their 学习强国 tasks, and a batch mode would likely be turned into a money-making tool by some people. So only single-user persistent mode is supported. If you want to build a batch version, read the warning on the points page of the 学习强国 app carefully first.

If you find this useful, please give the repo a star; it motivates me to bring you more features!
