A powerful Annex BUBT, BUBT Soft, and BUBT website scraping script.

Overview

Annex Bubt Scraping Script

I believe this is the first public repository on GitHub that provides a free Annex-BUBT, BUBT-Soft, and BUBT website scraping API script. While I was working on my 3rd-year project, one of my friends, Abdullah Xayed, wrote a web scraping project for me. I now maintain it.

Important Note

Some of the API scripts can break BUBT's security system, so I am not sharing those scripts for security reasons. I also request that you do not use any of the provided hosted APIs in production. The API scripts themselves are already included here, so host them on your own web server and use that deployment for production.

API Response & Type

BUBT API:

| Name | Method | Description | Example |
| --- | --- | --- | --- |
| Student Verify | GET | Verify BUBT students | /global_file/getData.php?id=?&type=? |
| Faculty Verify | GET | Verify BUBT faculty | /global_file/getData.php?id=?&type=? |
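
For illustration, here is a minimal Python sketch (using the requests library) of calling the Student Verify endpoint. The base URL, the concrete ID, and the value of the type parameter are assumptions; only the path and query parameters are documented above.

```python
import requests

# Assumption: BASE_URL must point at the host that actually serves this endpoint.
BASE_URL = "https://example-bubt-host.example"

def verify_student(student_id: str, record_type: str) -> dict:
    """Call the Student Verify endpoint and return the parsed JSON body."""
    resp = requests.get(
        f"{BASE_URL}/global_file/getData.php",
        params={"id": student_id, "type": record_type},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

# Hypothetical usage; the valid values for "type" are not documented here.
student = verify_student("17181103084", "student")
print(student["sis_std_name"])
```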

Abdullah Xayed API: (v1)

| Name | Method | Description | Example |
| --- | --- | --- | --- |
| Annex Login | GET | Log in to Annex and get a PHP session ID | /bubt/v1/login?id=?&pass=? |
| Annex Result | GET | Get student results from Annex by session ID | /bubt/v1/prevCourses?phpsessid=? |
| Annex Fees | GET | Get student fees from Annex by session ID | /bubt/v1/fees?phpsessid=? |
| Annex Routine | GET | Get student routine by student ID (the routine has shifted from Annex to BUBT Soft) | /bubt/v1/routine?id=? |
| All Events | GET | Get all events from the BUBT website | /bubt/v1/allEvent? |
| Events Details | GET | Get an event's details by its event URL | /bubt/v1/eventDetails?url=? |
| All Notice | GET | Get all notices from the BUBT website | /bubt/v1/allNotice? |
| Notice Details | GET | Get a notice's details by its notice URL | /bubt/v1/noticeDetails?url=? |
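
As a usage sketch, the snippet below chains Annex Login and Annex Result: it logs in with a student ID and password, takes the PHPSESSID from the response (see the sample JSON further down), and passes it to the prevCourses endpoint. API_BASE and the credentials are placeholders for your own deployment.

```python
import requests

# Assumption: API_BASE points at your own deployment of the v1 API scripts.
API_BASE = "https://your-server.example"

def annex_login(student_id: str, password: str) -> str:
    """Log in to Annex and return the PHPSESSID on success."""
    resp = requests.get(
        f"{API_BASE}/bubt/v1/login",
        params={"id": student_id, "pass": password},
        timeout=30,
    )
    resp.raise_for_status()
    body = resp.json()
    if body.get("status") != "success":
        raise RuntimeError(f"Annex login failed: {body}")
    return body["PHPSESSID"]

def annex_results(phpsessid: str) -> dict:
    """Fetch previous course results using the Annex session ID."""
    resp = requests.get(
        f"{API_BASE}/bubt/v1/prevCourses",
        params={"phpsessid": phpsessid},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

# Hypothetical credentials, for illustration only.
session_id = annex_login("17181103084", "your-password")
print(annex_results(session_id)["status"])
```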

Sample JSON Data

BUBT API:

Student Verify:

{
  "sis_std_id": "17181103084",
  "sis_std_name": "Md. Imam Hossain",
  "sis_std_prgrm_sn": "B.Sc. Engg. in CSE",
  "sis_std_prgrm_id": "006",
  "sis_std_intk": "37",
  "sis_std_email": "[email protected]",
  "sis_std_father": "Mahbub Rashid",
  "sis_std_gender": "M",
  "sis_std_LocGuardian": "Mahbub Rashid",
  "sis_std_Bplace": "Vasantek, Dhaka",
  "sis_std_Status": "R",
  "sis_std_blood": "",
  "gazo": "data:image/jpeg;base64,"
}

Faculty Verify:

[
  {
    "EmpId": "18020331033",
    "DemoId": "18020331033",
    "EmpName": "Md. Ahsanul Haque",
    "DOB": "1996-06-21T00:00:00",
    "PermanentAddress": "South Atapara, Bogura Sadar-5800, Bogura",
    "FatherName": "Md. Abdul Awal",
    "ECName": "Md. Abdul Awal",
    "ECNo": "01711936404",
    "ECRelation": "Father",
    "Gender": "Male",
    "DeptName": "Department of Computer Science & Engineering",
    "PosName": "Lecturer",
    "BloodGroup": "A+",
    "StatusId": "1",
    "EmpImage": "data:image/jpeg;base64,"
  }
]

Abdullah Xayed API: (v1)

Annex Login:

{
  "PHPSESSID": "7d1755fe6c32b74d321fe3d3ba69a4ad",
  "status": "success"
}

Annex Result:

{
  "data": [
    {
      "cgpa": "3.22",
      "results": [
        {
          "code": "ENG 101",
          "credit": "3",
          "grade": "B-",
          "title": "English Language-I",
          "type": "Theory"
        }
      ],
      "semester": "Fall, 2017-18",
      "sgpa": "3.22"
    }
  ],
  "status": "success"
}
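
As a sketch of how the result payload above can be consumed, the helper below assumes the response has already been parsed into a dict named result and prints each semester's SGPA/CGPA and course grades.

```python
def print_results(result: dict) -> None:
    """Print each semester's SGPA/CGPA and course grades from the result payload."""
    if result.get("status") != "success":
        raise RuntimeError("Result request was not successful")
    for semester in result["data"]:
        print(f'{semester["semester"]}: SGPA {semester["sgpa"]}, CGPA {semester["cgpa"]}')
        for course in semester["results"]:
            print(f'  {course["code"]} {course["title"]} ({course["credit"]} cr): {course["grade"]}')
```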

Annex Fees:

{
  "data": [
    {
      "Demand": "44195",
      "Due": "0",
      "Paid": "44195",
      "Remarks": "Semester Charge+Tuition Fees+Others",
      "Semester": "Fall, 2017-18",
      "Waiver": "0",
      "payments": [
        {
          "Account_Code": "319",
          "Payment_Amount": "15600",
          "Payment_No": "1",
          "Reciept_No": "18888",
          "Waiver": "0"
        },
        {
          "Account_Code": "319",
          "Payment_Amount": "28595",
          "Payment_No": "2",
          "Reciept_No": "43019",
          "Waiver": "0"
        }
      ]
    }
  ],
  "result": {
    "Total_Demand": "384816",
    "Total_Due": "7442",
    "Total_Paid": "353923",
    "Total_Waiver": "23451"
  },
  "status": "success"
}
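
A small sketch of consuming this fees payload: it assumes the JSON above has already been parsed into a dict named fees, and that the amount fields are integer strings as in the sample.

```python
def summarize_fees(fees: dict) -> None:
    """Print per-semester payment totals and dues from the fees payload."""
    for semester in fees.get("data", []):
        paid = sum(int(p["Payment_Amount"]) for p in semester["payments"])
        print(f'{semester["Semester"]}: paid {paid}, due {semester["Due"]}')
    totals = fees.get("result", {})
    print(f'Overall: paid {totals.get("Total_Paid")}, due {totals.get("Total_Due")}')
```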

Annex Routine:

{
  "data": [
    {
      "Building": "",
      "Day": "Saturday",
      "Intake": "",
      "Room_No": "",
      "Schedule": "08:30 AM to 10:00 AM",
      "Section": "",
      "Subject_Code": "",
      "Teacher_Code": ""
    }
  ],
  "status": "success"
}

All Events:

{
  "data": [
    {
      "published_on": "5 Aug 2021",
      "title": "International Conference on Science and Contemporary Technologies (ICSCT) Opened at BUBT",
      "url": "https://www.bubt.edu.bd/home/event_details/200"
    }
  ],
  "status": "success"
}

All Notices:

{
  "data": [
      {
        "category": "Exam Related",
        "published_on": "8 Oct 2021",
        "title": "Defense Notice",
        "url": "https://www.bubt.edu.bd/home/notice_details/665"
      }
  ],
  "status": "success"
}

Events Details:

{
    "data": {
      "description": "Bangladesh University of  Business and Technology  (BUBT) organized a virtual Orientation  Program for Spring 2021 Students on April 22, 2021....",
      "downloads": [
        {
          "url": ""
        }
      ],
      "images": [
        {
          "url": "https://www.bubt.edu.bd/assets/frontend/media/1619504011BUBT_22_04__2021.jpg"
        }
      ],
      "pubDate": "25 Apr 2021",
      "title": "Virtual Orientation for Spring 2021 Students at BUBT"
    },
    "status": "success"
  }

Notice Details:

{
    "data": {
      "description": "Defense Notice\nThis is to notify the intern students that their Online Internship Defense will be held in Google Meet...",
      "downloads": [
        {
          "url": ""
        }
      ],
      "images": [
        {
          "url": ""
        }
      ],
      "pubDate": "8 Oct 2021",
      "title": "Defense Notice"
    },
    "status": "success"
}
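
To show how the list and detail endpoints fit together, here is a hedged sketch that fetches the notice list and then the details of the most recent notice; API_BASE is again a placeholder for your own deployment.

```python
import requests

# Assumption: API_BASE points at your own deployment of the v1 API scripts.
API_BASE = "https://your-server.example"

def latest_notice_details() -> dict:
    """Fetch the notice list, then the details of the first (most recent) notice."""
    notices = requests.get(f"{API_BASE}/bubt/v1/allNotice", timeout=30).json()
    first_url = notices["data"][0]["url"]
    details = requests.get(
        f"{API_BASE}/bubt/v1/noticeDetails",
        params={"url": first_url},
        timeout=30,
    ).json()
    return details["data"]

notice = latest_notice_details()
print(notice["title"], "-", notice["pubDate"])
```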

🧑 Author

Md. Imam Hossain

You can also follow my GitHub profile to stay updated about my latest projects.

If you liked this repo, kindly support it by giving it a star!

Copyright (c) 2020 MD. IMAM HOSSAIN
