
📈 Statistical Quality Control 📉

This repo contains a simple but effective tool, written in Python, that can be used for statistical quality control.

What is Statistical Quality Control?

  • Statistical quality control is the use of statistical methods in monitoring and maintaining the quality of products and services. One method, referred to as acceptance sampling, can be used when a decision must be made to accept or reject a group of parts or items based on the quality found in a sample.

  • Statistical quality control can be simply defined as an economic & effective system of maintaining & improving the quality of outputs throughout the whole operating process of specification, production & inspection based on continuous testing with random samples.

Why Statistical Quality Control? What makes it important?

  • Statistical quality control techniques are extremely important for controlling the variation inherent in almost all manufacturing processes. Such variation arises from raw materials, consistency of product elements, processing machines, techniques deployed and packaging applications.

  • SQC serves as a medium that allows manufacturers to attain maximum benefit through controlled testing of manufactured products. Using this procedure, a manufacturing team can determine the range of values a product can be expected to fall within under the existing process conditions.

Statistical quality control can be easily implemented in Python in a few lines of code, and the resulting charts can be beautifully visualized and analysed using the matplotlib library.

For example, let's consider a real-life problem statement like this:

  • A quality control inspector at the Cocoa Fizz soft drink company has taken ten samples, with four observations each, of the volume of bottles filled. The data and the computed means are shown in the table; use this information to develop control limits of three standard deviations for the bottling operation.
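
A minimal sketch of that three-standard-deviation calculation, assuming the per-sample means have already been computed (the sample_means values below are hypothetical placeholders, not the Cocoa Fizz table):

from statistics import mean, pstdev

# Hypothetical per-sample means (placeholders, not the actual Cocoa Fizz data)
sample_means = [15.85, 16.02, 15.91, 16.10]

center_line = mean(sample_means)   # grand mean (x-double-bar)
sigma = pstdev(sample_means)       # population standard deviation of the sample means

ucl = center_line + 3 * sigma      # upper control limit
lcl = center_line - 3 * sigma      # lower control limit

print(f"CL={center_line:.3f}  UCL={ucl:.3f}  LCL={lcl:.3f}")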

The data can be entered into an Excel sheet like this:

After entering the data into the Excel sheet, just hit run; the statistical calculations are performed and you are greeted with two graphs, one the X-bar chart and the other the R-chart. The X-bar and R charts are quality control charts used to monitor the mean and variation of a process based on samples taken over a given time. The X-bar chart tracks the mean (average) of the process over time from the subgroup values; the control limits on the X-bar chart take the samples' mean and center into consideration. The R-chart tracks the range of the process over time from the subgroup values; it monitors the spread of the process over time.
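
A minimal sketch of how such a computation and plot might look with pandas and matplotlib, assuming the Excel file is named data.xlsx and laid out with one row per sample and one column per observation (both assumptions, not necessarily what this script expects); the limits here are plain three-sigma estimates from the sample statistics rather than the textbook A2/D3/D4 factor method:

import pandas as pd
import matplotlib.pyplot as plt

# Assumed layout: each row is one sample, each column one observation
df = pd.read_excel("data.xlsx")                   # hypothetical file name

sample_means = df.mean(axis=1)                    # x-bar for each sample
sample_ranges = df.max(axis=1) - df.min(axis=1)   # R for each sample

x_bar_bar = sample_means.mean()                   # grand mean (X-bar chart center line)
r_bar = sample_ranges.mean()                      # average range (R chart center line)

# 3-sigma limits estimated from the spread of the sample means and ranges
x_sigma = sample_means.std(ddof=0)
x_ucl, x_lcl = x_bar_bar + 3 * x_sigma, x_bar_bar - 3 * x_sigma
r_sigma = sample_ranges.std(ddof=0)
r_ucl, r_lcl = r_bar + 3 * r_sigma, max(r_bar - 3 * r_sigma, 0)

fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(8, 8))

ax1.plot(sample_means, marker="o")
ax1.axhline(x_bar_bar, color="green", label="CL")
ax1.axhline(x_ucl, color="red", linestyle="--", label="UCL")
ax1.axhline(x_lcl, color="red", linestyle="--", label="LCL")
ax1.set_title("X-bar chart")
ax1.legend()

ax2.plot(sample_ranges, marker="o")
ax2.axhline(r_bar, color="green", label="CL")
ax2.axhline(r_ucl, color="red", linestyle="--", label="UCL")
ax2.axhline(r_lcl, color="red", linestyle="--", label="LCL")
ax2.set_title("R chart")
ax2.legend()

plt.tight_layout()
plt.show()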

Depending on the data, the graphs look like this:

(X-bar control chart)

(R control chart)

From both the X-bar and R charts it is clearly evident that the process is almost stable. If the process is unstable, that is, many points lie outside the control region, you make the process stable by adjusting the control limits. After the process has stabilized, if any point still goes outside the control limits, it indicates that an assignable cause exists in the process and needs to be addressed. Monitoring process performance this way is an ongoing activity.
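
As a rough sketch of that check, points outside the limits can be flagged with a small helper like this (sample_means, x_ucl and x_lcl are the values computed in the sketch above):

def out_of_control_points(sample_means, ucl, lcl):
    # Return (sample number, mean) pairs that fall outside the control limits
    return [(i, m) for i, m in enumerate(sample_means, start=1)
            if m > ucl or m < lcl]

# e.g. print(out_of_control_points(sample_means, x_ucl, x_lcl))
# an empty list means every point lies within the control limits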

Note:

  • Update the data in the Excel sheet before running the script; any number of rows and columns can be given.
  • The imports used in this project are:
import pandas as pd 
import statistics
from statistics import mean, pstdev
import matplotlib.pyplot as plt
import numpy as np

Make sure to install the third-party packages (pandas, numpy, matplotlib) beforehand; the statistics module ships with the Python standard library.

  • The code and logic are explained in the Jupyter notebook, so do check that out.
  • If you're interested in learning more about this topic, you can refer to this PDF.

Peace ✌️.
