MNE: Magnetoencephalography (MEG) and Electroencephalography (EEG) in Python

Overview


MNE-Python

MNE-Python is an open-source Python package for exploring, visualizing, and analyzing human neurophysiological data such as MEG, EEG, sEEG, ECoG, and more. It includes modules for data input/output, preprocessing, visualization, source estimation, time-frequency analysis, connectivity analysis, machine learning, and statistics.
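
For orientation, here is a minimal sketch of a typical workflow (assuming a recent MNE-Python release where data_path() returns a pathlib.Path, and that the bundled sample dataset can be downloaded):

import mne

# download (if needed) and locate the sample dataset bundled with MNE
sample_dir = mne.datasets.sample.data_path()
raw_fname = sample_dir / 'MEG' / 'sample' / 'sample_audvis_raw.fif'

# read the raw recording and band-pass filter it
raw = mne.io.read_raw_fif(raw_fname, preload=True)
raw.filter(l_freq=1.0, h_freq=40.0)

# find stimulus events and epoch the data around them
events = mne.find_events(raw, stim_channel='STI 014')
epochs = mne.Epochs(raw, events, event_id={'auditory/left': 1},
                    tmin=-0.2, tmax=0.5, baseline=(None, 0))
epochs.average().plot()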

Documentation

Documentation for MNE-Python is available online.

Installing MNE-Python

To install the latest stable version of MNE-Python, you can use pip in a terminal:

pip install -U mne
  • MNE-Python 0.17 was the last release to support Python 2.7
  • MNE-Python 0.18 requires Python 3.5 or higher
  • MNE-Python 0.21 requires Python 3.6 or higher
  • MNE-Python 0.24 requires Python 3.7 or higher

For more complete instructions and more advanced installation methods (e.g. for the latest development version), see the installation guide.
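
After installation, a quick sanity check (not an official requirement, just a convenient habit) is to import the package and print version and dependency information:

python -c "import mne; mne.sys_info()"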

Get the latest code

To install the latest version of the code using pip, open a terminal and type:

pip install -U https://github.com/mne-tools/mne-python/archive/main.zip

To get the latest code using git, open a terminal and type:

git clone https://github.com/mne-tools/mne-python.git

Alternatively, you can download a zip file of the latest development version.

Dependencies

The minimum required dependencies to run MNE-Python are:

  • Python >= 3.7
  • NumPy >= 1.18.1
  • SciPy >= 1.4.1

For full functionality, some functions require:

  • Matplotlib >= 3.1.0
  • Scikit-learn >= 0.22.0
  • Numba >= 0.48.0
  • NiBabel >= 2.5.0
  • Pandas >= 1.0.0
  • Picard >= 0.3
  • CuPy >= 7.1.1 (for NVIDIA CUDA acceleration)
  • DIPY >= 1.1.0
  • Imageio >= 2.6.1
  • PyVista >= 0.32
  • pyvistaqt >= 0.4
  • mffpy >= 0.5.7

Contributing to MNE-Python

Please see the documentation on the MNE-Python homepage:

https://mne.tools/dev/install/contributing.html

Forum

https://mne.discourse.group

Licensing

MNE-Python is BSD-licensed (3 clause):

This software is OSI Certified Open Source Software. OSI Certified is a certification mark of the Open Source Initiative.

Copyright (c) 2011-2021, authors of MNE-Python. All rights reserved.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

  • Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
  • Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
  • Neither the names of MNE-Python authors nor the names of any contributors may be used to endorse or promote products derived from this software without specific prior written permission.

This software is provided by the copyright holders and contributors "as is" and any express or implied warranties, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose are disclaimed. In no event shall the copyright owner or contributors be liable for any direct, indirect, incidental, special, exemplary, or consequential damages (including, but not limited to, procurement of substitute goods or services; loss of use, data, or profits; or business interruption) however caused and on any theory of liability, whether in contract, strict liability, or tort (including negligence or otherwise) arising in any way out of the use of this software, even if advised of the possibility of such damage.

Comments
  • ENH Concatenated epoch plot

    ENH Concatenated epoch plot

    Sorry for the delay. I'll make up for it over the weekend. This works, at least on my computer, for displaying the first few epochs. The scrollbars are under construction. (A usage sketch follows the lists below.)

    TODOs

    • [x] keyboard shortcuts
    • [x] dropping bad epochs
    • [x] deprecate old plot function
    • [x] scale data using pageup and pagedown keys (for both raw and epochs)
    • [ ] make epochs dropping work for preload=False
    • [x] mark bad epochs in scrollbar
    • [x] update example
    • [ ] mark bad channels ?
    • [x] red border for bad epochs instead of shading
    • [x] add button for projector
    • [x] add vertical lines to show event color in epochs

    BUGS

    • [x] tight_layout on macosx
    • [x] projs
    • [x] no legend
    • [x] vertical line at t=0 won't work if epoch is from tmin > 0
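
    For context, a minimal sketch of how the epochs browser described here is invoked (parameter names taken from the standard Epochs.plot() API; the epochs object is assumed to exist):

    # interactive browser; click an epoch to mark/unmark it as bad,
    # use page up/page down to rescale the traces
    epochs.plot(n_epochs=10, n_channels=20)
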
    opened by jaeilepp 303
  • MRG+4: Epochs metadata

    MRG+4: Epochs metadata

    This is a rough draft of the metadata attribute for the Epochs object. The code is mostly there, and I put together a tutorial plus some tests that still need tweaking. Let's see what the rendered CircleCI output looks like and then decide whether we like it or not :-)

    The main thing this does is:

    • Lets you add a metadata attribute to Epochs objects. This is a dataframe that can be stored w/ the object.
    • Lets you do pandas query-style selection via the __getattr__ method in Epochs (see the sketch below).

    Todo

    • [x] Add I/O stuff
    • [x] tutorial

    ping @agramfort @Eric89GXL @jona-sassenhagen @kingjr

    Follow-up PRs

    • Deal with groupby functionality for making Evoked (or Epochs?) instances
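
    A minimal sketch of the intended usage (given an existing epochs object; the column names are hypothetical, and selection follows pandas query-string syntax):

    import numpy as np
    import pandas as pd

    # one row of metadata per epoch (hypothetical columns)
    epochs.metadata = pd.DataFrame({
        'condition': np.random.choice(['a', 'b'], size=len(epochs)),
        'reaction_time': np.random.uniform(0.2, 0.8, size=len(epochs)),
    })

    # pandas query-style selection of a subset of epochs
    fast = epochs['reaction_time < 0.3']
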
    opened by choldgraf 209
  • [MRG+2] adding receptive field module

    [MRG+2] adding receptive field module

    Just when you thought it was safe, it returns! Unnecessarily complicated API changes make a comeback in "Gitastrophe 2: Attack of the clone(d repository)"

    This is a new branch off of master and a greatly simplified version of the long discussion in #2796. The basic idea is that we decided tackling the general encoding model problem is probably too much to bite off in one PR, especially when the sklearn API might change somewhat. This is a PR to add a receptive field module. It includes some (unfinished) tests, a new class, a few new functions, and an example.

    LMK

    @agramfort @Eric89GXL @kingjr @jona-sassenhagen

    Closes #2796.
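
    For orientation, a minimal sketch of what using such a receptive field estimator could look like (class name and parameters assumed from mne.decoding; data shapes are illustrative):

    import numpy as np
    from mne.decoding import ReceptiveField  # assumed final location

    rng = np.random.RandomState(0)
    X = rng.randn(1000, 3)  # continuous stimulus: (n_times, n_features)
    y = rng.randn(1000, 1)  # neural response:     (n_times, n_outputs)

    # ridge-regularized time-lagged regression from -200 ms to +400 ms
    rf = ReceptiveField(tmin=-0.2, tmax=0.4, sfreq=100.0, estimator=1.0)
    rf.fit(X, y)
    coefs = rf.coef_  # (n_outputs, n_features, n_delays)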

    opened by choldgraf 191
  • Trans GUI

    Trans GUI

    [not ready for review, replaces https://github.com/mne-tools/mne-python/pull/379]

    Open issues:

    1. [x] FIX: For scaling, allow the bem file to have a different name from fsaverage
    2. [x] TESTS: try to add tests for the GUI through traits
    3. [ ] DOC: Add documentation to the Python page
    4. [ ] DOC: Document traitsui backend issue and possibly handle with automatic default selection
    5. [x] scale source spaces
    6. [x] wait for and use https://github.com/mne-tools/mne-python/pull/739
    7. [ ] add trans-fname parameter
    8. [x] load existing trans file
    9. [ ] automatically load trans when selecting raw and MRI?
    ENH 
    opened by christianbrodbeck 174
  • MRG eeglab .set reader

    MRG eeglab .set reader

    closes https://github.com/mne-tools/mne-python/issues/2672

    Very much WIP; this works only for the sample data provided by EEGLAB at the moment. If anyone has files to share, I'd be happy to try them.

    TODOs

    • [x] Fix scaling issue when plotting raw
    • [x] Check if chanlocs already exists in the set file
    • [x] Check if data already exists in the set file
    • [x] Handle epochs data
    • [x] Make it work with .dat file
    • [x] eog topoplot
    opened by jasmainak 172
  • MRG+1: Elekta averager

    MRG+1: Elekta averager

    Discussed in #3097

    Summary: Elekta/Neuromag DACQ (data acquisition) supports rather flexible event and averaging logic that is currently not implemented in mne-python. It also stores all averaging parameters in the fiff file, so raw data can easily be re-averaged in postprocessing. The purpose of this PR is to:

    1. extract the relevant info from the fiff file
    2. implement support for averaging according to DACQ categories (or to modify the categories first)

    Implementation: a class that takes all the relevant info from raw.info['acq_pars'].

    API: get_condition() gives averaging info for a given condition, which can be fed to mne.Epochs

    Technical details: DACQ supports defining 32 different events which correspond to trigger transitions. Events support pre- and post-transition bit masking. Based on the events, 32 averaging categories can be defined. Each category defines a reference event that is the zero-time point for collecting the corresponding epochs. Epoch length is defined by the start and end times (given relative to the reference event). A conditional ('required') event is also supported; if defined, it must appear in a specified time window (before or after the reference event) for the epoch to be valid.
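
    A minimal sketch of the intended usage (class and file names here are assumptions for illustration; the flow follows the API described above):

    import mne
    from mne.event import AcqParserFIF  # assumed location of the parser class

    raw = mne.io.read_raw_fif('elekta_raw.fif')  # hypothetical file with acq_pars
    ap = AcqParserFIF(raw.info)                  # parses raw.info['acq_pars']

    # averaging parameters for one DACQ category, ready to feed to mne.Epochs
    cond = ap.get_condition(raw, condition='Auditory left')  # hypothetical category
    epochs = mne.Epochs(raw, **cond)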

    opened by jjnurminen 164
  • WIP/ENH: add repeated measures twoway anova function

    WIP/ENH: add repeated measures twoway anova function

    This addresses #226 and some aspects of #535

    Adapted from my gist: https://gist.github.com/dengemann/5427106

    which is a translation of MATLAB code by Rik Henson: http://www.mrc-cbu.cam.ac.uk/people/rik.henson/personal/repanova.m

    and, to a lesser extent, of Python code from pyvttbl by Roger Lew: http://code.google.com/p/pyvttbl/

    While there is a new WIP PR in statsmodels related to this (https://github.com/statsmodels/statsmodels/pull/786), this minimal version aims at supporting our (mass-univariate) use case.

    Some features:

    • supports joblib (on my 2011 MacBook Air it took 7 minutes to compute 1,000,000 repeated measures ANOVAs for 18 subjects and all three effects from a 2 x 2 design using 2 jobs). I might find a more efficient way to do this, but I think this is definitely a start.
    • supports sphericity correction for factor levels > 2

    Current limitations are:

    • to keep things simple, I constrained this function to only estimate models with 2 factors. This should make sense in an MEG context.
    • both factors are expected to be repeated, so no 'between-subject' effects.

    Tests and examples are coming, so please hold off on strict reviewing. For the API, however, I could already use some feedback. I was wondering whether to add iterator support for retrieving the data matrices for the desired effects sequentially (assuming you don't want all three 20484 * e.g. 150 matrices at once when applied in a source space analysis). Another option would be to add a mini formula language allowing one to request a single effect using R conventions: 'A' or 'B' for the main effects, 'A:B' for just the interaction, or 'A*B' for the full model. I think both iteration and a formula will be too much.
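
    For orientation, a minimal sketch of the mass-univariate call this is aiming at (function name and signature assumed from the eventual mne.stats API; data is shaped subjects x conditions x observations):

    import numpy as np
    from mne.stats import f_mway_rm  # assumed final name of the function

    rng = np.random.RandomState(0)
    # 18 subjects, 4 cells of a 2 x 2 repeated-measures design, 100 tests
    data = rng.randn(18, 4, 100)

    # F and p values for both main effects and the interaction
    fvals, pvals = f_mway_rm(data, factor_levels=[2, 2], effects='A*B')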

    opened by dengemann 151
  • MRG: refactor PSD functions

    MRG: refactor PSD functions

    This is a quick update that adds information to the docstrings so people know where to look for the under-the-hood PSD estimation. It also adds code to keep the estimated PSD/frequencies with the object after the transform method is called.
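
    For reference, a minimal sketch of the kind of under-the-hood PSD helper the docstrings point to (function name assumed from the time-frequency module of that era; raw is an existing Raw object):

    from mne.time_frequency import psd_welch  # assumed helper name

    # Welch power spectral density per channel between 1 and 40 Hz
    psds, freqs = psd_welch(raw, fmin=1.0, fmax=40.0)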

    opened by choldgraf 148
  • ENH: Forward solution in python

    ENH: Forward solution in python

    This code now produces similar results in Python as in C, with faster computation times for oct-6 source spaces on my machine. It should be ready for review, and hopefully extensive testing to make sure there aren't issues across different machines, installs, etc.
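
    For orientation, a minimal sketch of computing a forward solution in Python (file names are hypothetical; the call follows the mne.make_forward_solution API):

    import mne

    # a forward model needs measurement info, a head<->MRI transform,
    # a source space, and a BEM solution (hypothetical file names)
    fwd = mne.make_forward_solution(
        'sample_audvis_raw.fif', trans='sample-trans.fif',
        src='sample-oct-6-src.fif', bem='sample-bem-sol.fif')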

    ENH 
    opened by larsoner 143
  • ENH: draft of mne report command

    ENH: draft of mne report command

    Closes https://github.com/mne-tools/mne-python/issues/1056

    Try

     mne report -p MNE-sample-data/ -i MNE-sample-data/MEG/sample/sample_audvis-ave.fif -d MNE-sample-data/subjects/ -s sample -x -v
    

    and it should generate report.html in MNE-sample-data/

    Sample output here: https://dl.dropboxusercontent.com/u/3915954/report.html

    TODOS

    • [x] recursive exploration
    • [x] rebase and use read_evokeds
    • [x] extend support to as many fif file types as possible
      • [x] cov
      • [x] fwd
      • [x] inv
      • [x] raw
      • [x] trans (display head in helmet to check coregistration quality in mne.viz.plot_trans())
      • [x] epo : plot_drop_log
    • [ ] check dpi settings
    • [x] Slicer coordinates
    • [x] Table of contents linking to different parts of html?
    • [x] the bootstrap/JS theme should allow to select which type of fif file to display. See jquery toggle
    • [x] also, bad fif files should appear in red, for example if the fif fname is not standard (evoked should end with -ave.fif, cov with -cov.fif, raw with raw.fif or sss.fif, etc.)
    • [x] banner + footer
    • [x] open the report in a browser when generated
    opened by mainakjas 142
  • WIP: Prototype of notebook viz (screencast)

    WIP: Prototype of notebook viz (screencast)

    This PR acts as a new prototype for mne 3d viz in the notebook. The goal here is to provide a lightweight, easy to maintain integration. That's why the design is very simple for now:

    A class WebInteractor (this is not a good name; suggestions are welcome) manages the rendering by 'streaming' a screenshot image from the server-side _Renderer to the notebook through matplotlib. This class also manages the ipywidgets responsible for the interaction with the renderer. The interactions available at the moment are limited to camera settings (with set_camera(azimuth, elevation)), but more will come.

    This design provides integration 'for cheap', since the client does not hold any rendering loop, for example, and virtually anything that can be rendered in 'standalone' mne with the pyvista backend is cast upon request and should work out of the box.

    Please note that only the first cell is modified compared to the original example.

    More details:

    • Configuration on the user side is limited to just defining the environment variable MNE_3D_NOTEBOOK=True and using the %matplotlib widget backend
    • This feature is only available with the pyvista 3d backend
    • The default plotter used to achieve this does not use PyQt
    • _TimeViewer is not supported, but we can imagine that a frontend/backend separation would translate very well to this situation
    • Requires (at least) matplotlib, ipywidgets and IPython

    ToDo

    • [x] Add support for headless remote server (suggested in https://github.com/mne-tools/mne-python/pull/7758#issuecomment-627812174)
    • [x] Create a conda YAML file to configure the server environment
    • [x] Hide the subplot controller when it is not needed (suggested in https://github.com/mne-tools/mne-python/pull/7758#issuecomment-635595420)
    • [x] Add a DPI controller
    • [x] Reduce data transfer with continuous_update=False (suggested in https://github.com/mne-tools/mne-python/pull/7758#issuecomment-627812174)
    • [ ] Add an option for client-side only data exploration (suggested in https://github.com/mne-tools/mne-python/pull/7758#issuecomment-627812174)
    • [ ] Add support for click-n-drag (suggested in https://github.com/mne-tools/mne-python/pull/7758#issuecomment-627812174)
    • [ ] Check compatibility with Jupyter Lab (suggested in https://github.com/mne-tools/mne-python/pull/7758#issuecomment-634679364)

    Bug

    • [x] Reset interactivity mode (reported in https://github.com/mne-tools/mne-python/pull/7758#discussion_r424352219)
    • [x] Issue with window resize (reported in https://github.com/mne-tools/mne-python/pull/7758#issuecomment-627812174)
    • [x] Jumpiness and flickering are expected with ipywidgets and my very naive implementation of the rendering.

    Ideas

    • Dockerfile for the server setup (suggested in https://github.com/mne-tools/mne-python/pull/7758#issuecomment-633605246, prototype in https://github.com/mne-tools/mne-python/pull/7758#issuecomment-634653808)
    Original proof of concept

    Here is how this could work for plot_parcellation for example:

    %env MNE_3D_NOTEBOOK=True  # enable notebook support here
    
    import mne
    Brain = mne.viz.get_brain_class()
    
    subjects_dir = mne.datasets.sample.data_path() + '/subjects'
    mne.datasets.fetch_hcp_mmp_parcellation(subjects_dir=subjects_dir,
                                            verbose=True)
    
    mne.datasets.fetch_aparc_sub_parcellation(subjects_dir=subjects_dir,
                                              verbose=True)
    
    labels = mne.read_labels_from_annot(
        'fsaverage', 'HCPMMP1', 'lh', subjects_dir=subjects_dir)
    
    brain = Brain('fsaverage', 'lh', 'inflated', subjects_dir=subjects_dir,
                  cortex='low_contrast', background='white', size=(800, 600))
    brain.add_annotation('HCPMMP1')
    aud_label = [label for label in labels if label.name == 'L_A1_ROI-lh'][0]
    brain.add_label(aud_label, borders=False)
    
    brain = Brain('fsaverage', 'lh', 'inflated', subjects_dir=subjects_dir,
                  cortex='low_contrast', background='white', size=(800, 600))
    brain.add_annotation('HCPMMP1_combined')
    
    brain = Brain('fsaverage', 'lh', 'inflated', subjects_dir=subjects_dir,
                  cortex='low_contrast', background='white', size=(800, 600))
    brain.add_annotation('aparc_sub')
    

    output

    Related to https://github.com/mne-tools/mne-python/pull/7056, https://github.com/mne-tools/mne-python/pull/6232 This is an item of #7162

    VIZ 
    opened by GuillaumeFavelier 133
  • [ENH] Read annotation duration from SNIRF files.

    [ENH] Read annotation duration from SNIRF files.

    Reference issue

    Before this PR, when reading SNIRF files, the annotation durations were all set to 1.0.

    What does this implement/fix?

    This PR changes the read_raw_snirf function to read annotation durations from the file.

    Additional information

    Relevant file type definition is here: https://github.com/fNIRS/snirf/blob/master/snirf_specification.md#nirsistimjdata

    A PR has been opened to add correct stimulus duration to SNIRF writing at https://github.com/mne-tools/mne-nirs/pull/497
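
    A minimal sketch of checking the effect of this change (file name hypothetical; read_raw_snirf is the reader modified here):

    import mne

    raw = mne.io.read_raw_snirf('recording.snirf')  # hypothetical file
    # durations are now read from the file instead of defaulting to 1.0
    print(raw.annotations.duration)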

    opened by rob-luke 1
  • Missing API documentation for sample dataset's data_path() (and possibly others)

    Missing API documentation for sample dataset's data_path() (and possibly others)

    Proposed documentation enhancement

    To figure out what the parameters of datasets.sample.data_path() do, I had to dig into the source code. This is … not great, especially since we constantly ask users to provide MWEs based on sample. I assume this affects other datasets as well, but I only checked sample.

    (screenshot of the rendered documentation page omitted)
    DOC
    opened by hoechenberger 2
  • coregistration GUI does not exit cleanly

    coregistration GUI does not exit cleanly

    Describe the bug

    If I open the coregistration GUI and it crashes for some reason (e.g., a missing subjects_dir), a dangling welcome banner is left behind, which becomes distracting the next time the GUI is opened. One can exit ipython and restart the script, but this might require redoing lengthy computations.

    Steps to reproduce

    $ unset SUBJECTS_DIR
    

    then in ipython:

    >>> from mne.gui import coregistration
    >>> coregistration()
    

    Expected results

    The welcome banner closes

    Actual results

    (screenshot: the dangling welcome banner remains on screen)

    Additional information

    In [2]: mne.sys_info()
    Platform:         Linux-3.10.0-1160.76.1.el7.x86_64-x86_64-with-glibc2.10
    Python:           3.8.6 | packaged by conda-forge | (default, Oct  7 2020, 19:08:05)  [GCC 7.5.0]
    Executable:       /autofs/space/meghnn_001/users/mjas/anaconda3/envs/mne/bin/python3.8
    CPU:              x86_64: 64 cores
    Memory:           125.4 GB

    Exception ignored on calling ctypes callback function: <function _ThreadpoolInfo._find_modules_with_dl_iterate_phdr.<locals>.match_module_callback at 0x7f8f64346940>
    Traceback (most recent call last):
      File "/autofs/space/meghnn_001/users/mjas/anaconda3/envs/mne/lib/python3.8/site-packages/threadpoolctl.py", line 400, in match_module_callback
        self._make_module_from_path(filepath)
      File "/autofs/space/meghnn_001/users/mjas/anaconda3/envs/mne/lib/python3.8/site-packages/threadpoolctl.py", line 515, in _make_module_from_path
        module = module_class(filepath, prefix, user_api, internal_api)
      File "/autofs/space/meghnn_001/users/mjas/anaconda3/envs/mne/lib/python3.8/site-packages/threadpoolctl.py", line 606, in __init__
        self.version = self.get_version()
      File "/autofs/space/meghnn_001/users/mjas/anaconda3/envs/mne/lib/python3.8/site-packages/threadpoolctl.py", line 646, in get_version
        config = get_config().split()
    AttributeError: 'NoneType' object has no attribute 'split'
    mne:              1.4.dev0
    numpy:            1.23.0 {unknown linalg bindings}
    scipy:            1.5.3
    matplotlib:       3.3.3 {backend=Qt5Agg}

    sklearn:          0.23.2
    numba:            Not found
    nibabel:          3.2.1
    nilearn:          0.7.0
    dipy:             1.3.0
    openmeeg:         Not found
    cupy:             Not found
    pandas:           1.1.5
    pyvista:          0.36.1 {OpenGL 4.5.0 NVIDIA 455.45.01 via Quadro P5000/PCIe/SSE2}
    pyvistaqt:        0.9.0
    ipyvtklink:       Not found
    vtk:              9.0.1
    qtpy:             2.0.1 {PyQt5=5.12.9}
    ipympl:           Not found
    /autofs/space/meghnn_001/users/mjas/anaconda3/envs/mne/lib/python3.8/site-packages/pyqtgraph/colors/palette.py:1: RuntimeWarning: PyQtGraph supports Qt version >= 5.15, but 5.12.9 detected.
      from ..Qt import QtGui
    pyqtgraph:        0.13.1
    pooch:            v1.5.2

    mne_bids:         Not found
    mne_nirs:         Not found
    mne_features:     Not found
    mne_qt_browser:   0.4.0
    mne_connectivity: Not found
    mne_icalabel:     Not found

    opened by jasmainak 3
  • Evoked IO cannot handle channel names with colons

    Evoked IO cannot handle channel names with colons

    Description of the problem

    I have data where the channel names contain :. While MNE is able to write this data successfully, it throws a cryptic error when reading it back. I can fix the channel names, but I think the error (if raised at all) should occur when writing the data, not when reading.

    Steps to reproduce

    import mne
    
    root = mne.datasets.sample.data_path() / 'MEG' / 'sample'
    evk_file = root / 'sample_audvis-ave.fif'
    evoked = mne.read_evokeds(evk_file, baseline=(None, 0), proj=True,
                              verbose=False)[0]
    # comment out the following line to see no error
    evoked.rename_channels(lambda x: x.replace(' ', ':'))
    
    evoked.info['bads'] = [evoked.info['ch_names'][0]]
    mne.write_evokeds('test_ave.fif', evoked, overwrite=True)
    
    evoked2 = mne.read_evokeds('test_ave.fif')[0]
    

    Link to data

    No response

    Expected results

    No error

    Actual results

    ---------------------------------------------------------------------------
    ValueError                                Traceback (most recent call last)
    File /autofs/space/meghnn_001/users/mjas/github_repos/mne-opm/mne_bug.py:14
         11 evoked.info['bads'] = [evoked.info['ch_names'][0]]
         12 mne.write_evokeds('test_ave.fif', evoked, overwrite=True)
    ---> 14 evoked2 = mne.read_evokeds('test_ave.fif')[0]
    
    File <decorator-gen-255>:12, in read_evokeds(fname, condition, baseline, kind, proj, allow_maxshield, verbose)
    
    File /autofs/space/meghnn_001/users/mjas/github_repos/mne-python/mne/evoked.py:1186, in read_evokeds(fname, condition, baseline, kind, proj, allow_maxshield, verbose)
       1184 return_list = True
       1185 if condition is None:
    -> 1186     evoked_node = _get_evoked_node(fname)
       1187     condition = range(len(evoked_node))
       1188 elif not isinstance(condition, list):
    
    File /autofs/space/meghnn_001/users/mjas/github_repos/mne-python/mne/evoked.py:1005, in _get_evoked_node(fname)
       1003 f, tree, _ = fiff_open(fname)
       1004 with f as fid:
    -> 1005     _, meas = read_meas_info(fid, tree, verbose=False)
       1006     evoked_node = dir_tree_find(meas, FIFF.FIFFB_EVOKED)
       1007 return evoked_node
    
    File <decorator-gen-35>:10, in read_meas_info(fid, tree, clean_bads, verbose)
    
    File /autofs/space/meghnn_001/users/mjas/github_repos/mne-python/mne/io/meas_info.py:1552, in read_meas_info(fid, tree, clean_bads, verbose)
       1549             acq_stim = tag.data
       1551 #   Load the SSP data
    -> 1552 projs = _read_proj(
       1553     fid, meas_info, ch_names_mapping=ch_names_mapping)
       1555 #   Load the CTF compensation data
       1556 comps = _read_ctf_comp(
       1557     fid, meas_info, chs, ch_names_mapping=ch_names_mapping)
    
    File <decorator-gen-15>:12, in _read_proj(fid, node, ch_names_mapping, verbose)
    
    File /autofs/space/meghnn_001/users/mjas/github_repos/mne-python/mne/io/proj.py:528, in _read_proj(fid, node, ch_names_mapping, verbose)
        525     data = data.T
        527 if data.shape[1] != len(names):
    --> 528     raise ValueError('Number of channel names does not match the '
        529                      'size of data matrix')
        531 # just always use this, we used to have bugs with writing the
        532 # number correctly...
        533 nchan = len(names)
    
    ValueError: Number of channel names does not match the size of data matrix
    

    Additional information

    Platform:         Linux-3.10.0-1160.76.1.el7.x86_64-x86_64-with-glibc2.10
    Python:           3.8.6 | packaged by conda-forge | (default, Oct  7 2020, 19:08:05)  [GCC 7.5.0]
    Executable:       /autofs/space/meghnn_001/users/mjas/anaconda3/envs/mne/bin/python3.8
    CPU:              x86_64: 64 cores
    Memory:           125.4 GB
    
    Exception ignored on calling ctypes callback function: <function _ThreadpoolInfo._find_modules_with_dl_iterate_phdr.<locals>.match_module_callback at 0x7f921e1c2790>
    Traceback (most recent call last):
      File "/autofs/space/meghnn_001/users/mjas/anaconda3/envs/mne/lib/python3.8/site-packages/threadpoolctl.py", line 400, in match_module_callback
        self._make_module_from_path(filepath)
      File "/autofs/space/meghnn_001/users/mjas/anaconda3/envs/mne/lib/python3.8/site-packages/threadpoolctl.py", line 515, in _make_module_from_path
        module = module_class(filepath, prefix, user_api, internal_api)
      File "/autofs/space/meghnn_001/users/mjas/anaconda3/envs/mne/lib/python3.8/site-packages/threadpoolctl.py", line 606, in __init__
        self.version = self.get_version()
      File "/autofs/space/meghnn_001/users/mjas/anaconda3/envs/mne/lib/python3.8/site-packages/threadpoolctl.py", line 646, in get_version
        config = get_config().split()
    AttributeError: 'NoneType' object has no attribute 'split'
    mne:              1.4.dev0
    numpy:            1.23.0 {unknown linalg bindings}
    scipy:            1.5.3
    matplotlib:       3.3.3 {backend=Qt5Agg}
    
    sklearn:          0.23.2
    numba:            Not found
    nibabel:          3.2.1
    nilearn:          0.7.0
    dipy:             1.3.0
    openmeeg:         Not found
    cupy:             Not found
    pandas:           1.1.5
    pyvista:          0.36.1 {OpenGL 4.5.0 NVIDIA 455.45.01 via Quadro P5000/PCIe/SSE2}
    pyvistaqt:        0.9.0
    ipyvtklink:       Not found
    vtk:              9.0.1
    qtpy:             2.0.1 {PyQt5=5.12.9}
    ipympl:           Not found
    /autofs/space/meghnn_001/users/mjas/anaconda3/envs/mne/lib/python3.8/site-packages/pyqtgraph/colors/palette.py:1: RuntimeWarning: PyQtGraph supports Qt version >= 5.15, but 5.12.9 detected.
      from ..Qt import QtGui
    pyqtgraph:        0.13.1
    pooch:            v1.5.2
    
    mne_bids:         Not found
    mne_nirs:         Not found
    mne_features:     Not found
    mne_qt_browser:   0.4.0
    mne_connectivity: Not found
    mne_icalabel:     Not found
    
    
    BUG 
    opened by jasmainak 0
  • allow hat correction for f-tests

    allow hat correction for f-tests

    Describe the new feature or enhancement

    It would be nice to allow low-variance correction for F-tests (not just t-tests) in our cluster permutation code.

    Describe your proposed implementation

    add a sigma (and method='relative'|'absolute') option to mne.stats.f_oneway. This may require changing how we compute F, as it wasn't obvious to me at a quick glance where I would insert sigma into the current implementation.

    Describe possible alternatives

    Currently, users must write their own stat_fun to do this. I suppose leaving it that way is an option.

    Additional context

    See equation 8 of the "hat" paper, in the section "implications for F-contrasts" https://doi.org/10.1016/j.neuroimage.2011.10.027
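
    For comparison, this is roughly how the existing t-test variant is used today (a sketch; ttest_1samp_no_p with sigma is the current hat-corrected statistic, passed in through the stat_fun hook that an F-test version would also use):

    from functools import partial
    from mne.stats import permutation_cluster_1samp_test, ttest_1samp_no_p

    # hat variance correction for a one-sample t-test cluster statistic;
    # X is hypothetical data of shape (n_subjects, n_tests)
    stat_fun = partial(ttest_1samp_no_p, sigma=1e-3)
    T_obs, clusters, cluster_pv, H0 = permutation_cluster_1samp_test(
        X, stat_fun=stat_fun)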

    ENH 
    opened by drammock 1