
UnicomTask

Introduction

👯 😄 📫

Automatically completes the daily tasks in the China Unicom mobile self-service app: claims data, signs in for points, and more, so you won't run short of data at the end of the month.

Features

  • 沃之树 (Wo Tree): claim data and water the tree (12 MB of daily data)
  • Daily sign-in (1 point + 4 doubled points + a 1 GB daily data package on day 7)
  • Daily lucky draw, three free chances per day (random rewards)
  • Game Center daily check-in (points increase with consecutive check-ins up to a maximum of 7, plus a 1 GB daily data package on day 7)
  • Game Center 100 MB treasure-chest task (100 MB of daily data + a random reward, doubled)
  • 4G data package video-watching and app-download tasks (90 MB + 150 MB of seven-day data)
  • Claim 100 directed points every day
  • Points lucky draw, up to 30 draws per day (slim chance of winning)
  • Winter Olympics points event (600 directed points on days 1 and 7, 300 directed points on other days; valid until the end of next month)
  • Email notification of the run results

GitHub Actions Deployment

1. Fork this repository

Repository: srcrs/UnicomTask

2. Prepare the required parameters

You will need your phone number, service password, and appId.

How to obtain the appId:

  • Android users can find it in the file manager, in the Unicom/appid file.

  • iOS users can obtain it by capturing the client's login request (the URL below).

https://m.client.10010.com/mobileService/login.htm
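If capturing the login request is unfamiliar, a small mitmproxy addon can dump its body so the appId can be read out of it. This is a minimal sketch for illustration only; the script name dump_login.py is hypothetical, and the exact field that carries the appId must be confirmed from the captured output.

```python
# Minimal mitmproxy addon (sketch): print the body of the Unicom login
# request so the appId field can be read from it.
# Run the phone's traffic through mitmproxy with:  mitmdump -s dump_login.py
from mitmproxy import http

LOGIN_URL = "https://m.client.10010.com/mobileService/login.htm"

def request(flow: http.HTTPFlow) -> None:
    # Whenever the client calls the login API, dump the full request body;
    # the field holding the appId has to be identified in this output.
    if flow.request.pretty_url.startswith(LOGIN_URL):
        print("login request captured:")
        print(flow.request.get_text())
```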

3. Fill the required parameters into Secrets

Add the following Name/Value pairs under Secrets:

Name             Value                Description
USERNAME_COVER   18566669999          Phone number (required)
PASSWORD_COVER   123456               Service password (required)
APPID_COVER      xxxxxxxxx            appId (required)
EMAIL_COVER      [email protected]      Email address (optional)
LOTTERY_NUM      a positive integer   Number of lucky draws (optional)
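How these Secrets reach the script depends on the workflow file. As a rough illustration only (an assumption, not the project's actual code), the script could read them as environment variables of the same names:

```python
# Sketch: read the Secrets, assuming the workflow exposes them to the
# Python process as same-named environment variables.
import os

username = os.environ.get("USERNAME_COVER")            # phone number (required)
password = os.environ.get("PASSWORD_COVER")            # service password (required)
appid = os.environ.get("APPID_COVER")                  # appId (required)
email = os.environ.get("EMAIL_COVER")                  # notification address (optional)
lottery_num = int(os.environ.get("LOTTERY_NUM", "0"))  # draw count (optional)

# Fail early if a required secret was not configured.
for name, value in (("USERNAME_COVER", username),
                    ("PASSWORD_COVER", password),
                    ("APPID_COVER", appid)):
    if not value:
        raise SystemExit(f"missing required secret: {name}")
```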

4. Enable Actions

Actions are disabled by default. Open the Actions tab and enable the feature by clicking the long green button. If a workflow in the left sidebar still shows a yellow exclamation mark, it needs to be enabled as well.

5. Perform a push

A push triggers the workflow run.

Simply delete the 😄 from README.md and push the change. Once that is done, the daily tasks will run automatically at 7:30 every morning.

Syncing upstream code

The latest code already includes an action that automatically syncs upstream code. It runs on a schedule every Friday at 16:00; the workflow file is .github/workflows/auto_merge.yml.
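For reference, the manual equivalent of such a sync boils down to fetching the upstream repository and merging it into your fork. The sketch below is an illustration only and does not reproduce auto_merge.yml; the default branch name master is an assumption.

```python
# Sketch: manually sync a fork with the upstream srcrs/UnicomTask repository.
import subprocess

def run(*cmd: str, check: bool = True) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=check)

# Adding the remote fails harmlessly if it already exists, so don't check it.
run("git", "remote", "add", "upstream",
    "https://github.com/srcrs/UnicomTask.git", check=False)
run("git", "fetch", "upstream")
run("git", "merge", "upstream/master")
run("git", "push", "origin", "master")
```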

Alternatively, you can install the Pull app, which also keeps the upstream code synced automatically.

Disclaimer

This project is for learning purposes only.

Reference projects

mixool/HiCnUnicom, with thanks for its approach to the login part.
