Fully Adaptive Bayesian Algorithm for Data Analysis (FABADA) is a new approach to noise reduction. This repository contains the package developed for this method, based on the paper referenced in the Cite section below.

Overview

Contributors Forks Stargazers Issues GNU License LinkedIn

Fully Adaptive Bayesian Algorithm for Data Analysis

FABADA

FABADA is a novel non-parametric noise reduction technique that arises from the point of view of Bayesian inference. It iteratively evaluates possible smoothed models of the data, obtaining an estimation of the underlying signal that is statistically compatible with the noisy measurements. Iterations stop based on the evidence $E$ and the $\chi^2$ statistic of the last smooth model, and the expected value of the signal is computed as a weighted average of the smooth models. You can find the entire paper describing the method at https://arxiv.org/abs/2201.05145.
Explore the docs »

View Demo · Report Bug · Request Feature

Table of Contents
  1. About The Method
  2. Getting Started
  3. Usage
  4. Results
  5. Contributing
  6. License
  7. Contact
  8. Cite

About The Method

This automatic method is focused on astronomical data, such as images (2D) or spectra (1D). Nevertheless, it can be treated as a general noise reduction algorithm and used on any kind of one- or two-dimensional data, producing reliable results. The only requisite of the input data is an estimation of its variance.
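
To make the idea above more concrete, here is a heavily simplified, one-dimensional sketch of the scheme described in the introduction: iteratively smooth the current estimate, update it against the data in a Bayesian way, and average the smooth models weighted by their evidence. It is only an illustration under simplifying assumptions (a fixed number of iterations and a simple running-mean smoother), not the implementation shipped in the package:

    import numpy as np

    def fabada_sketch(data, variance, n_iter=100):
        """Heavily simplified 1-D sketch of the FABADA idea (not the package code)."""
        data = np.asarray(data, dtype=float)
        posterior_mean = data.copy()
        posterior_var = np.full_like(data, float(variance))
        models, weights = [], []
        for _ in range(n_iter):
            # Prior: a smoother version of the previous estimate (running mean)
            prior_mean = np.convolve(posterior_mean, np.ones(3) / 3, mode="same")
            prior_var = posterior_var
            # Bayesian update of the smooth prior with the noisy measurements
            posterior_var = 1.0 / (1.0 / prior_var + 1.0 / variance)
            posterior_mean = posterior_var * (prior_mean / prior_var + data / variance)
            # Evidence of each smooth model given the data (Gaussian marginal likelihood)
            evidence = np.exp(-0.5 * (data - prior_mean) ** 2 / (prior_var + variance)) \
                / np.sqrt(2.0 * np.pi * (prior_var + variance))
            models.append(posterior_mean)
            weights.append(evidence)
            # The real algorithm stops from the evidence and the chi^2 of the last
            # model; here we simply run a fixed number of iterations.
        models, weights = np.array(models), np.array(weights)
        # Expected signal: evidence-weighted average of all the smooth models
        return (weights * models).sum(axis=0) / weights.sum(axis=0)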

(back to top)

Getting Started

We try to make the usage of FABADA as simple as possible. For that purpose, we have created a PyPI package to install the latest version of FABADA (a Conda package is still in progress).

Prerequisites

The first requirement is a version of Python greater than 3.5. FABADA has two dependencies, although pip installs them automatically.

Installation

To install fabada, we can use the Python Package Index (PyPI) or Conda.

Using pip

  pip install fabada

We are currently working on uploading the package to the Conda system.
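
As a quick sanity check that the installation worked, the import used throughout the examples below can be run directly (this is just a convenience check, not a required step):

    # Sanity check: this import should succeed after installing fabada
    from fabada import fabada
    print(callable(fabada))   # expected output: True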

(back to top)

Usage

Along with the package, two examples are given.

  • fabada_demo_image.py

Here we show how to use fabada on an astronomical greyscale image (two-dimensional). First of all, we have to import our previously installed library and some dependencies:

    from fabada import fabada
    import numpy as np
    from PIL import Image

Then we read the bubble image borrowed from the Hubble Space Telescope gallery. In our case we use the Pillow library for that. We also add some random Gaussian white noise using numpy.random.

    # IMPORTING IMAGE
    y = np.array(Image.open("bubble.png").convert('L'))

    # ADDING RANDOM GAUSSIAN NOISE
    np.random.seed(12431)
    sig      = 15             # Standard deviation of noise
    noise    = np.random.normal(0, sig ,y.shape)
    z        = y + noise
    variance = sig**2

Once the noisy image is generated, we can apply fabada to produce an estimation of the underlying image; we only have to call fabada with the noisy image and its variance:

    y_recover = fabada(z,variance)

And it's done 😉

As easy as one line of code.

The results obtained running this example would be:

Image Results

The left, middle, and right panels correspond to the true signal, the noisy measurements, and the fabada estimation, respectively. The Peak Signal to Noise Ratio (PSNR) in dB and the Structural Similarity Index Measure (SSIM) are also shown at the bottom of the middle and right panels (PSNR/SSIM).
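
To compute similar quality metrics for your own runs, a small sketch using scikit-image could look like the following (scikit-image is an assumption here; it is not a dependency of fabada):

    # Optional: PSNR and SSIM of the noisy and recovered images w.r.t. the true one,
    # using scikit-image (install separately with: pip install scikit-image)
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    y_true = y.astype(float)   # true image from the example above
    for name, img in [("noisy", z), ("fabada", y_recover)]:
        psnr = peak_signal_noise_ratio(y_true, img, data_range=255)
        ssim = structural_similarity(y_true, img, data_range=255)
        print(f"{name}: {psnr:.2f} dB / {ssim:.3f}")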

  • fabada_demo_spectra.py

Here we show how to use fabada on an astronomical spectrum (one-dimensional). It is basically the same as the example above, since fabada works identically for one- and two-dimensional data. First of all, we have to import our previously installed library and some dependencies:

    from fabada import fabada
    import pandas as pd
    import numpy as np

Then we read the spectrum of the interacting galaxy pair Arp 256, taken from the ASTROLIB PYSYNPHOT package and stored in arp256.csv. Again, we add some random Gaussian white noise:

    # IMPORTING SPECTRUM
    y = np.array(pd.read_csv('arp256.csv').flux)
    y = (y/y.max())*255  # Normalize to 255

    # ADDING RANDOM GAUSSIAN NOISE
    np.random.seed(12431)
    sig      = 10             # Standard deviation of noise
    noise    = np.random.normal(0, sig ,y.shape)
    z        = y + noise
    variance = sig**2

Once the noisy spectrum is generated, we can again apply fabada to produce an estimation of the underlying spectrum; we only have to call fabada with the noisy spectrum and its variance:

    y_recover = fabada(z,variance)

And done again 😉

This is exactly the same as for two-dimensional data.

The results obtained running this example would be:

Spectra Results

The red, grey, and black lines represent the true signal, the noisy measurements, and the fabada estimation, respectively. The Peak Signal to Noise Ratio (PSNR) in dB and the Structural Similarity Index Measure (SSIM) are also shown in the legend of the figure (PSNR/SSIM).
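
To reproduce a figure like this, a minimal matplotlib sketch could be (matplotlib is assumed to be available; it is not a dependency of fabada):

    # Optional: plot the true, noisy and recovered spectra
    import matplotlib.pyplot as plt

    plt.plot(z, color="grey", alpha=0.5, label="noisy measurements")
    plt.plot(y, color="red", label="true signal")
    plt.plot(y_recover, color="black", label="fabada estimation")
    plt.xlabel("Pixel")
    plt.ylabel("Normalized flux")
    plt.legend()
    plt.show()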

(back to top)

Results

All the results of the paper can be found in the results folder, along with a Jupyter notebook that allows you to explore them through an interactive interface. You can run the Jupyter notebook through Google Colab via this link --> Explore the results.

(back to top)

Contributing

Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.

If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement". Don't forget to give the project a star! Thanks again!

  1. Fork the Project
  2. Create your Feature Branch (git checkout -b feature/AmazingFeature)
  3. Commit your Changes (git commit -m 'Add some AmazingFeature')
  4. Push to the Branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

(back to top)

License

Distributed under the GNU General Public License. See LICENSE.txt for more information.

(back to top)

Contact

Pablo M Sánchez Alarcón - [email protected]

Yago Ascasibar Sequeiros - [email protected]

Project Link: https://github.com/PabloMSanAla/fabada

(back to top)

Cite

Thank you for using FABADA.

Citations and acknowledgements are vital for continued work on this kind of algorithm.

Please cite the following record if you used FABADA in any of your publications.

@ARTICLE{2022arXiv220105145S,
author = {{Sanchez-Alarcon}, Pablo M and {Ascasibar Sequeiros}, Yago},
title = "{Fully Adaptive Bayesian Algorithm for Data Analysis, FABADA}",
journal = {arXiv e-prints},
keywords = {Astrophysics - Instrumentation and Methods for Astrophysics, Astrophysics - Astrophysics of Galaxies, Astrophysics - Solar and Stellar Astrophysics, Computer Science - Computer Vision and Pattern Recognition, Physics - Data Analysis, Statistics and Probability},
year = 2022,
month = jan,
eid = {arXiv:2201.05145},
pages = {arXiv:2201.05145},
archivePrefix = {arXiv},
eprint = {2201.05145},
primaryClass = {astro-ph.IM},
adsurl = {https://ui.adsabs.harvard.edu/abs/2022arXiv220105145S}
}

Sanchez-Alarcon, P. M. and Ascasibar Sequeiros, Y., “Fully Adaptive Bayesian Algorithm for Data Analysis, FABADA”, arXiv e-prints, 2022.

https://arxiv.org/abs/2201.05145

(back to top)

Readme file taken from Best README Template.
