Price forecasting of SGB and IRFC Bonds and comparing their returns

Overview

Project_Bonds

Project Title : Price forecasting of SGB and IRFC Bonds and comparing their returns.

Introduction of the Project

The 2008-09 global financial crisis and the 2020-21 pandemic have shown us the volatility of the market. Many people are looking for a way to invest money to secure their future; they are trying to find a secure investment with minimal financial risk and higher returns. It is also a fact that every investment comes with risk. There is a saying in the world of investing: "Do not put all your eggs in one basket." We need to diversify our portfolio, so that if one investment does not give enough yield due to fluctuations in market rates, another will give a higher yield. Bonds are one such investment that people prefer. The bonds we have selected are two government bonds – SGB (Sovereign Gold Bond) and IRFC (Indian Railway Finance Corporation). The objective was to forecast the prices of the SGB and IRFC bonds, calculate their returns, compare them, and recommend to the client which one to pick based on the input, i.e., the number of months to forecast.

Technologies Used

  • Python – ML model: auto_arima (stepwise grid search to find the p, d, q values) and ARIMA (for forecasting)
  • SQLite – Database
  • Flask – Front end for deployment
  • Python libraries – numpy, pandas, statsmodels, re, nsepy, matplotlib
  • HTML/CSS

General info

This project is a simple forecasting model. Taxes were not taken into account when calculating returns: the IRFC bond is tax-free, but for SGB we need to pay taxes if we sell it before the maturity period is over. Inflation rates and global pandemic situations are rare phenomena beyond anyone's control, so they have been treated as business restrictions.
Data has been collected from the National Stock Exchange of India (NSE). The two bonds selected from the NSE were SGBAUG24 (Sovereign Gold Bond, series GB) and IRFC N2 (Indian Railway Finance Corporation tax-free bond).

Requirement file (contains libraries and their versions)

Libraries Used

Project Architecture

(Project architecture diagram)

Explaining Project Architecture

Live data extraction

The data is collected from the NSE website (historical data); the nsepy library is used to pull the live daily data. The data then goes to Python, where two things happen: first, out of all the attributes we keep only the "Close" price; second, the daily data is converted into monthly data, using the mean to calculate the monthly average.
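A minimal sketch of this step, using the same nsepy call as the full script below but collapsing the month-averaging into a pandas resample (a shorter, equivalent route to the string-splitting used later):

import pandas as pd
import nsepy
from datetime import date

# Pull daily history for the gold bond from NSE (same call as the full script).
gold = nsepy.get_history(symbol="SGBAUG24", series="GB",
                        start=date(2016, 9, 1), end=date.today())

# Keep only the close price and average it per calendar month.
gold.index = pd.to_datetime(gold.index)
monthly_sgb = gold["Close"].resample("MS").mean().to_frame("Avg_price")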

Data storage in SQLite

We chose SQLite because it is very easy to use and one does not need knowledge of SQL to inspect the data. The database is created locally and is updated whenever the user uses the application. The user can easily take the database file and view the data in any of the SQLite viewers available online.
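A self-contained sketch of the storage round-trip; the toy data stands in for the real monthly averages, while the table and file names match the script further down:

import pandas as pd
from sqlalchemy import create_engine

# Toy monthly averages standing in for the real SGB data.
monthly = pd.DataFrame({"Avg_price": [4800.0, 4825.5, 4810.2]},
                       index=pd.date_range("2021-01-01", periods=3, freq="MS"))
monthly.index.name = "Dates"

# Local SQLite file; overwritten on every run so the app always sees fresh data.
engine = create_engine("sqlite:///gold_database.db", echo=False)
monthly.to_sql("SGB", con=engine, if_exists="replace")

# Anyone can open gold_database.db in an online SQLite viewer to inspect this table.
df_sgb = pd.read_sql("select * from SGB", engine, index_col="Dates", parse_dates=["Dates"])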

Data is then used by the model

When the data is called back into Python, differencing is performed to remove the trend and seasonality so that the series becomes stable. For successful forecasting, it is necessary that the time-series data be stationary.
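As a sketch, stationarity can be checked with the ADF test from statsmodels (the same test the script's test_stationarity function uses); the helper make_stationary below is an illustrative assumption, differencing once more until the unit-root null is rejected:

import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller

def make_stationary(series: pd.Series, alpha: float = 0.05):
    """Difference the series until the ADF test rejects the unit-root null."""
    d = 0
    while adfuller(series.dropna(), autolag="AIC")[1] >= alpha:
        series = series.diff()  # remove trend; repeat if still non-stationary
        d += 1
    return series.dropna(), d

# Toy random walk: one difference should make it stationary.
y = pd.Series(np.cumsum(np.random.randn(120)))
stationary, d = make_stationary(y)
print(f"Differenced {d} time(s)")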

p,d,q Hyperparameters

We use the auto_arima function to calculate the p, d, q values. We use re (regex) to store the summary of auto_arima in string format, then use the re.findall() function to extract the p, d, q values. The drawback of using auto_arima is that it runs twice each time the program is executed, since it calculates the hyperparameter values for both the SGB and the IRFC data.
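A sketch of the order search; note that pmdarima also exposes the chosen order directly as model.order, which would avoid regex-parsing the printed summary:

import numpy as np
from pmdarima.arima import auto_arima

y = np.cumsum(np.random.randn(120))  # toy stand-in for a monthly price series

# Stepwise search over (p, d, q); in the real script this runs once per bond.
model = auto_arima(y, seasonal=False, suppress_warnings=True, error_action="ignore")
p, d, q = model.order  # the fitted order is an attribute, no regex needed
print(p, d, q)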

ARIMA

This is the part where the data is taken and the model is fit and used to predict.
The plot below shows 12 months of actual data vs. predicted data.
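A minimal sketch of the fit-and-predict step with a 12-month holdout, assuming the (p, d, q) order found in the previous step:

import numpy as np
from statsmodels.tsa.arima.model import ARIMA

y = np.cumsum(np.random.randn(120)) + 100.0  # toy monthly prices
order = (1, 1, 1)                            # stands in for the auto_arima result

train, test = y[:-12], y[-12:]               # hold out the last 12 months
fit = ARIMA(train, order=order).fit()
pred = fit.forecast(steps=12)                # compare pred against test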

Model Evaluation

SGB

The RMSE: 93.27 Rs. & The MAPE: 0.0185

IRFC

The RMSE: 21.62 Rs. & The MAPE: 0.0139
(Pretty Good)
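The two metrics can be computed with the same libraries the script imports (sklearn's mean_squared_error for RMSE; MAPE as a plain ratio, reported as a fraction, so 0.0185 is about 1.85%):

import numpy as np
from sklearn.metrics import mean_squared_error

actual = np.array([4800.0, 4825.5, 4810.2])      # toy values
predicted = np.array([4790.0, 4830.0, 4805.0])

rmse = np.sqrt(mean_squared_error(actual, predicted))
mape = np.mean(np.abs((actual - predicted) / actual))  # fraction, not percent
print(f"RMSE: {rmse:.2f} Rs. | MAPE: {mape:.4f}")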

Forecasting (12 Months)

Forecasted Data (12 Months)

Returns

This is the part where the forecasted data for both SGB and IRFC is collected and the returns are calculated from it. If the SGB return is higher than the IRFC bond's, the application tells the customer the amount of return for the chosen time period, and vice versa.
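A sketch of the comparison logic, modelled on the output_ function at the end of the script; the helper name compare_returns and the example values are illustrative assumptions, since the return-calculation step itself is not shown here:

def compare_returns(gain_sgb: float, gain_bond: float, months: int) -> None:
    # Mirrors the script's output_(x, y, t): report whichever bond gained more.
    if gain_sgb > gain_bond:
        print(f"The return of SGB is {gain_sgb} and the return of "
              f"IRFC Bond is {gain_bond} after {months} months")
    else:
        print(f"The return of IRFC Bond is {gain_bond} and the return of "
              f"SGB is {gain_sgb} after {months} months")

compare_returns(12.4, 8.1, 12)  # illustrative values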

User Input

The user will be given 3 options as input and will select a specific time period from a drop-down list. The options are -

  1. 4 Months (Quarterly)
  2. 6 Months (Half-yearly)
  3. 12 Months (Annually)
    These options are the time periods to forecast. If the user picks 6, the output page will show 6 forecasted values, each with a range (Upper Price, Forecasted Price, Lower Price), for both bonds side by side. Below that, a line of text displays the returns the user would get if they decided to sell the bonds at that point; see the sketch after this list.
    (12 Months Forecasted Prices)
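A minimal Flask sketch of this input flow; the route name, the form field horizon, and the inline response string are illustrative assumptions (the real app renders HTML templates):

from flask import Flask, request

app = Flask(__name__)

@app.route("/predict", methods=["POST"])
def predict():
    # The drop-down offers 4, 6 or 12 months; default to 12 if nothing is sent.
    n = int(request.form.get("horizon", 12))
    # ... run the forecasts for n months and compute both returns here ...
    return f"Showing {n} forecasted values (Upper / Forecast / Lower) per bond"

if __name__ == "__main__":
    app.run(debug=True)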

Python code

import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from pylab import rcParams
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.seasonal import seasonal_decompose
from statsmodels.tsa.arima.model import ARIMA
from pmdarima.arima import auto_arima
from sklearn.metrics import mean_squared_error
import math
import re
from datetime import date
import nsepy 
import warnings
warnings.filterwarnings("ignore")
####################################          Live data extraction              ###################################################
##Extracting data from nsepy package
da=date.today()
gold= pd.DataFrame(nsepy.get_history(symbol="SGBAUG24",series="GB", start=date(2016,9,1), end=da))
bond= pd.DataFrame(nsepy.get_history(symbol="IRFC",series="N2", start=date(2012,1,1), end=da))

#############################                 Live data  extraction end                  ###############################################

# Heatmap - to check collinearity
def heatmap(x):
    plt.figure(figsize=(16,16))
    sns.heatmap(x.corr(),annot=True,cmap='Blues',linewidths=0.2) #data.corr()-->correlation matrix
    fig=plt.gcf()
    fig.set_size_inches(10,8)
    plt.show()
heatmap(gold)
heatmap(bond)
###############################                Live data to Feature engineering            ################################################

##Taking close price as our univariate variable
##For gold
gold=pd.DataFrame(gold["Close"])
gold["date"]=gold.index
gold["date"]=gold['date'].astype(str)
gold[["year", "month", "day"]] = gold["date"].str.split(pat="-", expand=True)
gold['Dates'] = gold['month'].str.cat(gold['year'], sep ="-")
gold.Dates=pd.to_datetime(gold.Dates)
gold.set_index('Dates',inplace=True)
col_sgb=pd.DataFrame(gold.groupby(gold.index).Close.mean())

##For bond
bond=pd.DataFrame(bond["Close"])
bond["date"]=bond.index
bond["date"]=bond['date'].astype(str)
bond[["year", "month", "day"]] = bond["date"].str.split(pat="-", expand=True)
bond['Dates'] = bond['month'].str.cat(bond['year'], sep ="-")
bond.Dates=pd.to_datetime(bond.Dates)
bond.set_index('Dates',inplace=True)
col_bond=pd.DataFrame(bond.groupby(bond.index).Close.mean())

col_sgb.columns = ["Avg_price"]
col_bond.columns = ["Avg_price"]

col_bond.isnull().sum()
col_sgb.isnull().sum()

############################                  SQL connection with monthly data           ################################################ 
###############################                SQL database is created                  ################################################

# Connect to the database
from sqlalchemy import create_engine
engine_sgb = create_engine('sqlite:///gold_database.db', echo=False)
col_sgb.to_sql('SGB', con=engine_sgb,if_exists='replace')
df_sgb = pd.read_sql('select * from SGB',engine_sgb )

df_sgb.Dates=pd.to_datetime(df_sgb.Dates)
df_sgb.set_index('Dates',inplace=True)


engine_irfcb = create_engine('sqlite:///irfcb_database.db', echo=False)
col_bond.to_sql('IRFCB', con=engine_irfcb,if_exists='replace')
df_bond = pd.read_sql('select * from IRFCB',engine_irfcb)

df_bond.Dates=pd.to_datetime(df_bond.Dates)
df_bond.set_index('Dates',inplace=True)
###############################                SQL data to python                 ################################################



# Plotting
def plotting_bond(y):
    fig, ax = plt.subplots(figsize=(20, 6))
    ax.plot(y,marker='.', linestyle='-', linewidth=0.5, label='Monthly Average')
    ax.plot(y.resample('Y').mean(),marker='o', markersize=8, linestyle='-', label='Yearly Mean Resample')
    ax.set_ylabel('Avg_price')
    ax.legend();
plotting_bond(df_sgb)
plotting_bond(df_bond)

#univariate analysis of Average Price
df_sgb.hist(bins = 50)
df_bond.hist(bins = 50)

# check Stationary and adf test
def test_stationarity(timeseries):
    #Determing rolling statistics
    rolmean = timeseries.rolling(12).mean()
    rolstd = timeseries.rolling(12).std()
    #Plot rolling statistics:
    fig, ax = plt.subplots(figsize=(16, 4))
    ax.plot(timeseries, label = "Original Price")
    ax.plot(rolmean, label='rolling mean');
    ax.plot(rolstd, label='rolling std');
    plt.legend(loc='best')
    plt.title('Rolling Mean and Standard Deviation - Removed Trend and Seasonality')
    plt.show(block=False)
    
    print("Results of dickey fuller test")
    adft = adfuller(timeseries,autolag='AIC')
    print('Test statistic = {:.3f}'.format(adft[0]))
    print('P-value = {:.3f}'.format(adft[1]))
    print('Critical values :')
    for k, v in adft[4].items():
        print('\t{}: {} - The data is {} stationary with {}% confidence'.format(
            k, v, 'not' if v < adft[0] else '', 100 - int(k[:-1])))

# ... (the differencing, auto_arima order search, ARIMA fit/evaluation,
# forecasting and return-calculation steps of the script follow here) ...

def output_(x, y, t):
    # x = SGB return, y = IRFC return, t = horizon in months
    if x > y:
        a = print("The return of SGB is {a} and the return of IRFC Bond is {b} after {c} months".format(a=x, b=y, c=t))
    else:
        a = print("The return of IRFC Bond is {a} and the return of SGB Bond is {b} after {c} months".format(a=y, b=x, c=t))
    return a

output_(gain_sgb, gain_bond, n)

    

Home Page (Used HTML and CSS)

(Screenshot: home page)

Predict Page

(Screenshot: predict page)

Output Page

(Screenshot: output page)

Project Completed.

Owner

Tishya S
Data Science aspirant