Machine learning for NeuroImaging in Python

Overview

nilearn

Nilearn enables approachable and versatile analyses of brain volumes. It provides statistical and machine-learning tools, with instructive documentation and a friendly community.

It supports general linear model (GLM) based analysis and leverages the scikit-learn Python toolbox for multivariate statistics with applications such as predictive modelling, classification, decoding, or connectivity analysis.

Important links

Dependencies

The required dependencies to use the software are:

  • Python >= 3.5
  • setuptools
  • Numpy >= 1.11
  • SciPy >= 0.19
  • Scikit-learn >= 0.19
  • Joblib >= 0.12
  • Nibabel >= 2.0.2

If you are using nilearn plotting functionalities or running the examples, matplotlib >= 1.5.1 is required.

If you want to run the tests, you need pytest >= 3.9 and pytest-cov for coverage reporting.

Install

First make sure you have installed all the dependencies listed above. Then you can install nilearn by running the following command in a command prompt:

pip install -U --user nilearn

More detailed instructions are available at http://nilearn.github.io/introduction.html#installation.

Development

Detailed instructions on how to contribute are available at http://nilearn.github.io/development.html

Comments
  • [ENH] Initial visual reports

    Closes #2022 .

    An initial implementation of visual reports for Nilearn. Adds:

    • [x] The templating library tempita as an external dependency
    • [x] A reorganization of HTMLDocument into a new reporting module
    • [x] A new reporting HTML template
    • [x] A super class Report to populate the report HTML template with tempita populated text
    • [x] Relevant CSS styling to improve report UX, using pure-css
    • [x] An ability to display reports directly in Jupyter Notebooks, without iframe rendering, thanks to @GaelVaroquaux
    • [x] Documentation of this functionality, with examples
    • [x] A new Sphinx Gallery image scraper to embed these example HTML reports

    For a current rendering of reports see: https://github.com/emdupre/nilearn/pull/4#issuecomment-527984327 and the plot_mask_computation example.
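    The templating flow can be sketched with the stdlib's string.Template standing in for tempita (the names below are illustrative, not the actual Report API):

```python
from string import Template

# Minimal sketch of the report idea: populate an HTML template with
# estimator-specific text. (The actual PR uses tempita, not string.Template,
# and a richer template with CSS styling.)
REPORT_TEMPLATE = Template(
    "<html><body><h1>$title</h1><p>$description</p>$image</body></html>"
)

def make_report(title, description, image_html=""):
    """Fill the HTML template; returns a plain HTML string that a notebook
    can render directly."""
    return REPORT_TEMPLATE.substitute(
        title=title, description=description, image=image_html
    )

html = make_report("NiftiMasker report", "Mask computed from EPI images.")
```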

    opened by emdupre 172
  • switch from papaya to brainsprite in plotting.view_stat_map

    I really love the new 3D interactive viewer (plotting.view_stat_map), but the notebooks it is producing are huge. In this PR, I am proposing to switch from papaya to brainsprite, which is a js library I developed for the exact purpose of embedding lightweight 3D viewers in html pages (http://github.com/simexp/brainsprite.js).

    The first difference from papaya is that brainsprite uses a jpg or png containing all sagittal slices of a volume, together with JSON metadata, to store the brain images. These tend to be much smaller than a NIfTI (depending on the numerical precision of the NIfTI). It also means that brainsprite can render brains with core HTML5 features and no dependencies. So the brainsprite library weighs 15 kB (500 lines...), as opposed to 2 MB for the current papaya HTML template. I have attached two brain viewers embedded in Jupyter notebooks. The papaya-based notebook is 12 MB, while the brainsprite-based notebook is 500 kB. Again, this reflects a core difference in design: papaya is a full brain viewer app, featuring NIfTI reading as well as a colorbar, etc. Brainsprite is a minimal, fast brain viewer working from a pre-generated sprite.

    Which makes a transition to the second point: all the action for generating the brain volume happens in Python. There is a new function called save_sprite that generates the brain sprite as well as the JSON metadata. It relies on matplotlib, as well as nilearn's own functions. In particular, thresholding and colormap generation are all done with nilearn's code, and resampling as well. This means that it will be easier to maintain and evolve for nilearn's developers. The current version replicates all the arguments of plot_stat_map, including draw_cross, annotate, cut_coords and a few others (with a few as a bonus, such as opacity).

    This PR is far from polished; there are a few outstanding issues here. I also need to look into the doc and testing. Finally, I dumped some functions in html_stat_map.py which should probably live elsewhere. But I think it is time to get feedback, and in particular I'd like to know if there is an interest in merging this PR at all...
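    The core sprite trick — tiling all sagittal slices into one 2D mosaic that a browser can load as a single image — can be sketched in numpy (illustrative only; the PR's save_sprite also handles thresholding, colormaps and resampling via matplotlib and nilearn):

```python
import numpy as np

def make_sprite(vol, n_cols=None):
    """Tile all sagittal (axis-0) slices of a 3D volume into one 2D mosaic.
    Unfilled mosaic cells are left at zero."""
    n_slices, h, w = vol.shape
    if n_cols is None:
        n_cols = int(np.ceil(np.sqrt(n_slices)))  # roughly square layout
    n_rows = int(np.ceil(n_slices / n_cols))
    sprite = np.zeros((n_rows * h, n_cols * w), dtype=vol.dtype)
    for i in range(n_slices):
        r, c = divmod(i, n_cols)
        sprite[r * h:(r + 1) * h, c * w:(c + 1) * w] = vol[i]
    return sprite

vol = np.random.rand(8, 4, 5)      # toy volume: 8 sagittal slices of 4x5
sprite = make_sprite(vol)          # 3x3 grid of 4x5 tiles -> (12, 15)
```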

    opened by pbellec 166
  • [MRG] Cortex surface projections

    Hello @GaelVaroquaux , @mrahim , @agramfort, @juhuntenburg and others, this is a PR about surface plotting.

    nilearn has some awesome functions to plot surface data in nilearn.plotting.surf_plotting. However, it doesn't offer a conversion from volumetric to surface data.

    It would be great to add a function to sample or project volumetric data on the nodes of a cortical mesh; this would allow users to look at surface plots of their 3d images (e.g. statistical maps).

    In this PR we will try to add this to nilearn.

    Most tools which offer this functionality (e.g. caret, freesurfer, pycortex) usually propose several projection and sampling strategies, offering different quality / speed tradeoffs. However, it seems to me that naive strategies are not so far behind more elaborate ones - see for example [Operto, Grégory, et al. "Projection of fMRI data onto the cortical surface using anatomically-informed convolution kernels." Neuroimage 39.1 (2008): 127-135]. For plotting and visualisation, the results of a simple strategy are probably accurate enough for most users.

    I therefore suggest starting by including a very simple and fast projection scheme; we can add more elaborate ones later if we want. I'm just getting started, but I think we can already start a discussion.

    The proposed strategy is simply to draw a sample from a 3mm sphere around each mesh node, and average the measures.

    The image below illustrates that strategy: each red circle is a mesh node. Samples are drawn at the blue crosses attached to it that fall inside the image, then averaged to compute the color inside the circle. (This image is produced by the show_sampling.py example, which is only there to clarify the strategy implemented in this PR and will be removed.)

    illustration_2d

    Here is an example surface plot for a brainpedia image (id 32015 on Neurovault, https://neurovault.org/media/images/1952/task007_face_vs_baseline_pycortex/index.html), produced by brainpedia_surface.py:

    brainpedia_inflated

    And here is the plot produced by pycortex for the same image, as shown on Neurovault:

    brainpedia_pycortex

    Note about performance: to choose the positions of the samples to draw from a unit ball, for now we cluster points drawn from a uniform distribution on the ball and keep the centroids (we can think of something better). This takes a few seconds, and the results are cached with joblib for the time being; but since it only needs to be done once, when we have decided how many samples we want, the positions will be hardcoded once and for all (no computing, no caching). With 100 samples per ball, projecting a full-brain stat map with 2 mm voxels onto an fsaverage hemisphere mesh takes around 60 ms.
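    The sample-and-average strategy described above can be sketched in numpy (a toy version: nearest-voxel lookup, no interpolation, no masking of out-of-image samples; the function and argument names are illustrative, not nilearn's API):

```python
import numpy as np

def project_to_mesh(img, affine_inv, nodes, offsets):
    """Average image values sampled inside a small ball around each mesh
    node. `offsets` are precomputed sample positions inside a 3 mm ball
    (the PR plans to hardcode them once chosen); `affine_inv` maps mm
    coordinates to voxel indices."""
    # sample positions in mm: (n_nodes, n_samples, 3)
    sample_mm = nodes[:, None, :] + offsets[None, :, :]
    # map mm coordinates to (nearest) voxel indices
    ijk = np.round(sample_mm @ affine_inv[:3, :3].T + affine_inv[:3, 3]).astype(int)
    ijk = np.clip(ijk, 0, np.array(img.shape) - 1)  # crude bounds handling
    vals = img[ijk[..., 0], ijk[..., 1], ijk[..., 2]]
    return vals.mean(axis=1)  # one value per mesh node

# toy demo: constant image, identity affine, one node, four samples
img = np.full((10, 10, 10), 2.0)
nodes = np.array([[5.0, 5.0, 5.0]])
offsets = 1.5 * np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0], [0, -1, 0]])
proj = project_to_mesh(img, np.eye(4), nodes, offsets)
```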

    opened by jeromedockes 79
  • (WIP) Sparse models: S-LASSO and TV-l1

    • Supports TV-l1 and S-LASSO priors
    • Supports logistic and squared losses
    • Has cross validation
    • Can automatically select alpha by CV (+ automatic computation of useful alpha ranges for the CV)
    • Warning: User must supply l1_ratio
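    The TV-l1 prior combines an l1 term with a total-variation term, weighted by the user-supplied l1_ratio. A minimal numpy sketch of the penalty on a 3D weight map (illustrative; the actual implementation differs in details such as gradient handling):

```python
import numpy as np

def tv_l1_penalty(w_img, l1_ratio):
    """TV-l1 penalty: l1_ratio * ||w||_1 + (1 - l1_ratio) * TV(w),
    where TV is the sum of gradient magnitudes over the 3D weight map."""
    l1 = np.abs(w_img).sum()
    grads = np.gradient(w_img)                      # one array per axis
    tv = np.sqrt(sum(g ** 2 for g in grads)).sum()  # gradient magnitude
    return l1_ratio * l1 + (1.0 - l1_ratio) * tv

penalty = tv_l1_penalty(np.zeros((4, 4, 4)), l1_ratio=0.5)  # zero map -> 0
```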
    opened by dohmatob 78
  • Add a Neurovault fetcher.

    This is based on PR #832 opened by @bcipolli in answer to issue #640. The contribution is to add a fetcher for downloading selected images from http://neurovault.org/. I have tried to address some of the remarks that were made in the discussion about #832. I have included the examples plot_ica_neurovault.py and plot_neurovault_meta_analysis.py from the previous PR; they remain almost identical.

    The interface to the fetcher is similar to that proposed in #832. I have kept the possibility to filter collections and images either with a function or with a dictionary of {field: desired_value} pairs (or both), but they are now separate arguments called respectively collection_filter (image_filter for images) and collection_terms (image_terms for images). Also, now the filters passed in a dictionary are only inserted in the query URL if they are actually available on the server (which is the case for the collection owner, the collection DOI, and the collection name); otherwise they are applied to the metadata once it has been downloaded.

    For users who want a specific set of image or collection ids, downloading all the Neurovault metadata and filtering on the ids is inefficient; so they can use the image_ids or collection_ids parameters to pass the lists of ids they want to download; in this case any filter is ignored and the server is queried directly for the required image and collection ids. Note: this is also done under the hood if the collection id is used as a filter term - either by specifying the collection 'id' field or the image 'collection_id' field - except that in this case the other filters are still applied.

    I have included a ResultFilter class and a bunch of special values such as NotNull which can be used to specify filters more easily. In some neurovault metadata jsons, some values are strings such as "", "null", "None"... instead of actual null; these (more precisely strings that match ($|n/?a$|none|null) in a case-insensitive way) are replaced by a true null value (null in the json, converted to None when loaded in a python dict) upon download, so that comparing to None or testing for truth should yield the expected results. In particular, dict.get(field) will give the same value whether the original value was null, "null", ..., or plain missing.

    This should make using fetch_neurovault somewhat easier. However, for users who are interested in a large subset of the neurovault data and are not too short on disk space, I would recommend using only very simple filters (e.g. the defaults) when calling fetch_neurovault to download (almost all) the data, and, once it is on disk, only access it through read_sql_query, local_database_connection or local_database_cursor. The metadata is stored in an sqlite database, so instead of having to read the docstring for fetch_neurovault, writing their own filters, etc., most users will probably prefer to download it all and then simply use SQL syntax to select the subset they're interested in. read_sql_query queries the database and returns the result as an OrderedDict of columns (the default) or as a list of rows. local_database_connection gives a connection to the sqlite file, so that pandas users can load e.g., all the images' metadata by typing:

    images = pandas.read_sql_query(
         "SELECT * FROM images", neurovault.local_database_connection())
    

    Of course if they prefer manipulating sqlite3 objects directly they can use the connection given by local_database_connection or the cursor given by local_database_cursor.
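    The {field: desired_value} filter terms with special values like NotNull can be sketched like this (a toy re-implementation to illustrate the idea; the names mirror the description above but this is not the actual nilearn code):

```python
class NotNull:
    """Sentinel: matches any value that is not None. Stands in for the
    special filter values described above (illustrative only)."""

def matches(metadata, terms):
    """Check one metadata dict against {field: desired_value} filter terms.
    Missing fields behave like None, matching the null normalization
    described above."""
    for field, wanted in terms.items():
        value = metadata.get(field)
        if wanted is NotNull:
            if value is None:
                return False
        elif value != wanted:
            return False
    return True

ok = matches({"map_type": "T map", "doi": "10.1/x"},
             {"map_type": "T map", "doi": NotNull})
```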

    @bcipolli, @chrisfilo, @GaelVaroquaux, or anyone else, please let me know what modifications need to be made! In particular:

    • What should be the default filters? The current behaviour is to exclude:
      • Collections from a list of known bad collections (found in PR #832, with one addition).
      • Empty collections.
      • Images from a list of known bad images (found in PR #832).
      • Images that are not in MNI space.
      • Images for which the metadata field 'is_valid' is cleared.
      • Images for which the metadata field 'is_thresholded' is set.
      • Images for which 'map_type' is 'ROI/mask', 'anatomical' or 'parcellation'
      • Images for which 'image_type' is 'atlas'
    • Should the fetcher, by default, download all the images matching the filters, or only a limited number (the current behaviour)?
    opened by jeromedockes 67
  • [MRG] Glass brain visualisation

    This pull request is WIP for now and was only opened to get some feedback, both visualisation-wise and code-wise.

    Here is what it looks like atm: glass_brain

    The code to generate the plots is there.

    New feature Plotting 
    opened by lesteve 64
  • Surface plots

    Hi again,

    This PR replaces #2454 which is a continuation of #1730 submitted by @dangom to resolve #1722. The intention is to check that everything other than the symmetric_cbar is working properly and merge the surface montages into master. I'll open an issue for the symmetric_cbar and fix that separately.

    Also, thank you so much @GaelVaroquaux and @effigies for your help creating the PR properly.

    opened by ZviBaratz 62
  • Brainomics/Localizer

    opened by DimitriPapadopoulos 60
  • SpaceNet (this PR succeeds PR #219)

    This PR succeeds PR #219 (aka unicorn factory). All discussions should be done here henceforth. #219 is now classified, and should be referred to solely for histological purposes.

    opened by dohmatob 54
  • [ENH, MRG] fREM

    Following the merge of #2000, here is the introduction of fREMClassifier and fREMRegressor objects which run pipelines with clustering, feature selection and ensembling of best models across a grid of parameters.

    • The file changes are minor.
    • The current implementation yields results as expected on an example (see attached below, on plot_haxby_tutorial).
    • But the tests of fREM accuracy on small datasets in test_decoder.py keep failing, just because the accuracy of fREM is not good enough in this setting; after trying various parameters, I don't know which tests would be better suited to this use case. Anybody?
    • If I remember correctly, you wanted to replace Decoder use by fREM in many examples to alleviate their computational cost @GaelVaroquaux. Which ones are the main targets? I will benchmark how much time we could gain, but on ROIs it doesn't seem very useful to cluster (e.g. it slows things down on the Haxby ROI example).
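    The ensembling step — averaging the best models found across the parameter grid — can be sketched in numpy (a sketch only; the real pipeline also includes clustering and feature selection):

```python
import numpy as np

def ensemble_best(scores, predictions, k=3):
    """Average the predictions of the k best-scoring models,
    the ensembling step fREM applies across its parameter grid."""
    best = np.argsort(scores)[::-1][:k]  # indices of the k highest scores
    return np.mean([predictions[i] for i in best], axis=0)

# toy demo: four models, two test samples each
scores = [0.62, 0.91, 0.88, 0.55]
predictions = [[0, 0], [1, 1], [0, 1], [1, 0]]
avg = ensemble_best(scores, predictions, k=2)  # averages models 1 and 2
```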
    (screenshot attached)
    opened by thomasbazeille 52
  • [MRG] Dictionary learning + nilearn.decomposition refactoring

    Decomposition estimators (DictLearning / MultiPCA) now inherit a DecompositionEstimator.

    Loading of data is done through a PCAMultiNiftiMasker, which loads data from files and compresses it.

    Potentially, the function check_masker could solve issue #688, as it factorizes the input checking of estimators to which you provide either a masker or parameters for a mask. It is tuned to be able to use PCAMultiNiftiMasker.

    opened by arthurmensch 51
  • Collaboration with NiiVue and possibilities for integration

    The NiiVue project uses WebGL 2.0 to provide interactive web-based visualization capabilities for viewing medical imaging. We have started to meet with NiiVue developers to discuss possible avenues for integration of NiiVue into Nilearn. They have started working on a Python interface so one path to take would be to work on building this with them. We can also start by providing examples of how Nilearn users can make use of NiiVue functionality. Some relevant repositories include https://github.com/niivue/niivue and https://github.com/niivue/ipyniivue. NiiVue demos can be found here: https://niivue.github.io/niivue/. We can keep track of progress and decisions made to advance this collaboration here as well as have a general discussion on benefits of the integration.

    Enhancement Discussion 
    opened by ymzayek 1
  • Remove ci-skip action from GitHub Actions workflows

    GitHub Actions now supports skipping workflows out of the box: https://docs.github.com/en/actions/managing-workflow-runs/skipping-workflow-runs and so the Action mstachniuk/ci-skip is deprecated and should be removed from all workflows before it starts failing.
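    With native support, nothing extra is needed in the workflow files once the action is removed; per the linked docs, a push is skipped when the head commit message contains one of the documented markers:

```shell
# GitHub natively skips workflow runs for pushes whose head commit message
# contains one of: [skip ci], [ci skip], [no ci], [skip actions], [actions skip]
# For example:
#   git commit -m "DOC fix typo in README [skip ci]"
```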

    Infrastructure 
    opened by ymzayek 4
  • [ENH] Flat maps for all fsaverage resolutions

    Closes #3171 .

    If we're happy with the flat maps generated in #3171 for all fsaverage resolutions, I suggest the following roadmap to integrate them to nilearn:

    • [x] clean up my current script so that it generates flat maps for both hemispheres and all fsaverage resolutions
    • [ ] publish it somewhere? (as a gist maybe? I'm happy to hear suggestions here)
    • [x] run the script I used in #2815 to generate our fsaverage tarballs, and add flat maps to it
    • [ ] update our OSF datasets for fsaverage 3-7
    • [ ] have at least one example's thumbnail display this feature (which is why this PR leverages unmerged changes from #3173, as I want this thumbnail to show curvature sign as well)

    As a reminder, here is what the current flat maps look like for fsaverage 3 to 7:

    (five screenshots: flat maps for fsaverage 3 to 7)

    opened by alexisthual 7
  • `filter` option in `signal.clean` is not exposed to `nilearn.maskers.NiftiMasker` and potentially other masker objects

    However, I don't think a filter option is provided for nilearn.maskers.NiftiMasker, so it seems to be using the default, i.e. butterworth.

    Originally posted by @DasDominus in https://github.com/nilearn/nilearn/issues/3434#issuecomment-1333064573
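    The requested plumbing amounts to accepting a filter argument on the masker and forwarding it to signal.clean. A hypothetical sketch (the class and helper below are illustrative, not nilearn code; only the `filter` parameter name comes from the issue):

```python
# Hypothetical sketch of the requested plumbing: a masker-like object that
# stores a `filter` choice and forwards it (with other cleaning kwargs) to
# the cleaning function, instead of silently using the default.
class MaskerSketch:
    def __init__(self, filter="butterworth", **clean_kwargs):
        self.filter = filter
        self.clean_kwargs = clean_kwargs

    def transform(self, signals, clean_fn):
        # forward the stored filter choice to the cleaning function
        return clean_fn(signals, filter=self.filter, **self.clean_kwargs)

# toy "clean" function that just echoes what it received
out = MaskerSketch(filter="cosine").transform(
    [1.0, 2.0], lambda s, **kw: (s, kw["filter"]))
```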

    Good first issue effort: low 
    opened by htwangtw 6
  • All CI failing because flake8 --diff option has been removed

    All CI are currently failing because flake8==6.0.0 (released on Nov 23) no longer has the --diff flag.

    The reason has been explained in this issue.

    A solution, adapted from this comment, is to replace the flake8 --diff call in build_tools/flake8_diff.sh#L79 with the following command:

    git diff --name-only $COMMIT | grep '\.py$' | xargs --delimiter='\n' --no-run-if-empty flake8 --show-source
    

    Opened PR #3432

    Bug 
    opened by RaphaelMeudec 2