Lucid library adapted for PyTorch

Overview

Lucent


PyTorch + Lucid = Lucent

The wonderful Lucid library adapted for the wonderful PyTorch!

Lucent is not affiliated with Lucid or OpenAI's Clarity team, although we would love to be! Credit is due to the original Lucid authors; we merely adapted the code for PyTorch, and we take the blame for any issues and bugs found here.

Usage

Lucent is still in a pre-alpha phase and can be installed with the following command:

pip install torch-lucent

In the spirit of Lucid, get up and running with Lucent immediately, thanks to Google's Colab!

You can also clone this repository and run the notebooks locally with Jupyter.

Quickstart

import torch

from lucent.optvis import render
from lucent.modelzoo import inceptionv1

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = inceptionv1(pretrained=True)
model.to(device).eval()

render.render_vis(model, "mixed4a:476")
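
To visualize other layers or channels, you need the layer names as Lucent exposes them. A minimal sketch, run after the snippet above, using the get_model_layers utility that the issues below also reference:

from lucent.modelzoo.util import get_model_layers

# List the layer names that can be used in objective strings, e.g. "mixed4a", "mixed4b", ...
print(get_model_layers(model))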

Tutorials

Other Notebooks

Here, we have tried to recreate some of the Lucid notebooks! You can also check out the lucent-notebooks repo to clone all the notebooks.

Recommended Readings

Related Talks

Slack

Check out #proj-lucid and #circuits on the Distill slack!

Additional Information

License and Disclaimer

You may use this software under the Apache 2.0 License. See LICENSE.

Comments
  • use custom model?

    use custom model?

    Hi, I see it's possible to use models from the modelzoo; is it possible to use a custom trained model? Any documentation or direction would be appreciated.
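
    A rough sketch of the usual pattern, in case it is useful (not an official answer): render_vis accepts any nn.Module in eval mode, and the objective string uses a layer name as reported by get_model_layers. A torchvision ResNet stands in for the custom model here, and the exact layer string should be taken from the printed list.

    import torch
    import torchvision
    from lucent.optvis import render
    from lucent.modelzoo.util import get_model_layers

    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

    # Any ordinary nn.Module works; a pretrained torchvision model stands in for a custom one
    model = torchvision.models.resnet18(pretrained=True)
    model.to(device).eval()

    print(get_model_layers(model))                # find the exact layer names Lucent's hooks expose
    render.render_vis(model, "layer4_1_conv2:0")  # "<layer_name>:<channel_index>" (name assumed from the list)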

    opened by dvschultz 5
  • Add activation grids notebook

    Add activation grids notebook

    Issue #3, reproducing activation grids (https://github.com/tensorflow/lucid/blob/master/notebooks/building-blocks/ActivationGrid.ipynb)

    It's possible to try it here: https://colab.research.google.com/drive/1pEe-KmXeDJcWQYLOHwcMubS69wVVCHLe#scrollTo=xidm-QrXvL2X

    Here are the results so far with inceptionv1 and layer mixed4d:

    • reproduced: https://imgur.com/twmizR4
    • original: https://imgur.com/eaYwEWR

    Some remarks and a question:

    • I added channel_reducer as is from the original repo
    • default transforms in transform.py produce a different size (due to random scaling) each time they are called, and the result is then resampled to 224 to get a fixed size. Is that the same in Lucid? I need to debug the original repo a little to be sure, but please let me know if you already know the answer. The reason I ask is that in this specific notebook the cells in the grid are much smaller than 224, so I added an argument "fixed_image_size" to handle this specific case where we want a fixed image size (after resampling) that is not 224 (see the sketch after this list).
    • Since all layers are computed, and with this commit we can accept smaller images, it is possible to hit an exception on higher layers because the image is not big enough. That should be fine as long as the layer we are interested in is computed, so I handled this exception.
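
    A rough, generic-PyTorch sketch of the resizing behavior described in the first bullet above (not Lucent's actual transform.py code): random scaling changes the spatial size on every call, so a final resample to a fixed target size keeps the model input constant; the fixed_image_size argument would control that target.

    import random
    import torch.nn.functional as F

    def random_scale_then_resize(x, scales=(0.95, 1.05), out_size=224):
        # x: [B, C, H, W]; apply a random scale, then resample back to a fixed size
        scale = random.uniform(*scales)
        h, w = x.shape[-2:]
        x = F.interpolate(x, size=(int(h * scale), int(w * scale)),
                          mode="bilinear", align_corners=False)
        return F.interpolate(x, size=(out_size, out_size),
                             mode="bilinear", align_corners=False)
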
    opened by mehdidc 5
  • ValueError in Render

    ValueError in Render

    Hi there,

    I am trying to run the tutorial and am running into the following error:

    >>> import torch
    >>> from lucent.optvis import render, param, transform, objectives
    >>> from lucent.modelzoo import inceptionv1
    >>> device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
    >>> model = inceptionv1(pretrained=True)
    >>> _ = model.to(device).eval()
    >>> _ = render.render_vis(model, "mixed4a:476", show_inline=True)
    
      0%|                                                                                       | 0/512 [00:00<?, ?it/s]
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/Users/tatekeller/opt/anaconda3/envs/spec/lib/python3.8/site-packages/lucent/optvis/render.py", line 113, in render_vis
        optimizer.step(closure)
      File "/Users/tatekeller/.local/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 26, in decorate_context
        return func(*args, **kwargs)
      File "/Users/tatekeller/.local/lib/python3.8/site-packages/torch/optim/adam.py", line 66, in step
        loss = closure()
      File "/Users/tatekeller/opt/anaconda3/envs/spec/lib/python3.8/site-packages/lucent/optvis/render.py", line 97, in closure
        model(transform_f(image_f()))
      File "/Users/tatekeller/opt/anaconda3/envs/spec/lib/python3.8/site-packages/lucent/optvis/transform.py", line 85, in inner
        x = transform(x)
      File "/Users/tatekeller/opt/anaconda3/envs/spec/lib/python3.8/site-packages/lucent/optvis/transform.py", line 75, in inner
        M = kornia.get_rotation_matrix2d(center, angle, scale).to(device)
      File "/Users/tatekeller/opt/anaconda3/envs/spec/lib/python3.8/site-packages/kornia/geometry/transform/imgwarp.py", line 347, in get_rotation_matrix2d
        raise ValueError("Input scale must be a B tensor. Got {}"
    ValueError: Input scale must be a B tensor. Got torch.Size([1, 2])
    

    I am using a conda environment with Python 3.8.5 and pytorch=1.7.0.

    Any help regarding this error would be much appreciated!

    opened by tatkeller 4
  • Utils to show modulename with its repr(); Add Linear weighted activations as objective; Add pretrained GAN as parametrization

    Utils to show modulename with its repr(); Add Linear weighted activations as objective; Add pretrained GAN as parametrization

    Dear author,

    Thanks so much for implementing Lucid in PyTorch! I have really enjoyed using it in my projects, which leverage deep neural networks as a way to understand real neurons in visual cortices. In my usage, I want to activate multiple channels together to match the selectivity of a biological neuron or of units in other networks. We can achieve this by adding up the original channel or neuron objectives, but that becomes very inefficient in backprop.

    So here are my 2 cents, in this commit I

    • Add linearly weighted activations of a channel, neuron, or neuron group as an objective, using tensor operations (see the sketch after this list).
    • Add a function in util.py to output the module names, to ease usage with custom networks.
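
    A minimal sketch of the first point (assuming Lucent exposes Lucid's wrap_objective decorator; the helper name weighted_channels is illustrative, not the PR's code): a single dot product between a weight vector and the per-channel mean activations replaces a long sum of individual channel objectives.

    from lucent.optvis.objectives import wrap_objective

    @wrap_objective()
    def weighted_channels(layer, weights):
        # weights: 1-D tensor with one entry per channel of `layer`;
        # maximizes the weighted sum of mean channel activations in one objective
        def inner(model):
            acts = model(layer)                       # [B, C, H, W]
            mean_per_channel = acts.mean(dim=(2, 3))  # [B, C]
            return -(mean_per_channel * weights.to(acts.device)).sum()
        return inner
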
    opened by Animadversio 4
  • Generating a batch of optimal stimuli, one for each unit in a layer

    Generating a batch of optimal stimuli, one for each unit in a layer

    Hi, I was trying to use Lucent to generate optimal stimuli for several units/neurons of a layer in parallel, so I figured I would leverage batch processing. As illustrated in the neuron interaction tutorial notebook, I was passing a sum of objectives to the render.render_vis() function. Here is a toy example of what I want and my approach, for units [10, 20, 30] of the layer 'readout_fc':

    tot_objective = (objectives.channel("readout_fc", 10, batch=0)
                     + objectives.channel("readout_fc", 20, batch=1)
                     + objectives.channel("readout_fc", 30, batch=2))
    param_f = lambda: param.image(135, batch=3)
    imgs = render.render_vis(model, tot_objective, param_f=param_f, preprocess=False, fixed_input_image_size=135)

    The parameter setting works beautifully when I try one unit. 😄 However, I wasn't sure if this is the correct way to approach multiple units in parallel (it does give me separate images for each unit). Also, when the number of units is larger, I was hoping to avoid writing the objective out term by term or running an explicit for loop. I tried using reduce as below:

    neurons = [10, 20, 30]
    tot_objective = reduce(lambda x, y: x + objectives.channel("readout_fc", y[0], batch=y[1]),
                           list(zip(neurons, np.arange(len(neurons)))), 0)

    Doing so gives me the same image 3 times, so I was wondering if there is something wrong in how I am using the objective function to generate optimal stimuli for multiple units in parallel. Thanks in advance.
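
    For what it's worth, a loop-free construction under the same toy setup (readout_fc, units 10/20/30, 135-pixel images; model assumed loaded and in eval mode) can be sketched as below; whether it avoids the repeated-image behavior described above has not been verified here.

    from lucent.optvis import render, param, objectives

    neurons = [10, 20, 30]
    # One batch slot per unit, so each image in the batch optimizes a different channel
    tot_objective = sum(
        objectives.channel("readout_fc", n, batch=i) for i, n in enumerate(neurons)
    )
    param_f = lambda: param.image(135, batch=len(neurons))
    imgs = render.render_vis(model, tot_objective, param_f=param_f,
                             preprocess=False, fixed_input_image_size=135)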

    opened by arnaghosh 3
  • Q: Do you use the same architecture and weights as Clarity does?

    Q: Do you use the same architecture and weights as Clarity does?

    Hi,

    I am looking for a trainable InceptionV1 model which shares the same weights with the ones Clarity team uses. Reading your code, I've found these lines:

    model_urls = {
        # InceptionV1 model used in Lucid examples, converted by ProGamerGov
        'inceptionv1': 'https://github.com/ProGamerGov/pytorch-old-tensorflow-models/raw/master/inception5h.pth',
    }
    

    Does it mean you're using exactly the same architecture and weights, so your render_vis function can reproduce the same pictures that Clarity has published?

    Thanks!

    opened by gergopool 3
  • Add direction and direction_neuron objectives

    Add direction and direction_neuron objectives

    Hi @greentfrapp,

    Thanks so much for making this repo! It has been a great help for me. For my use-case I needed the direction and direction_neuron objectives so I added them into lucent. I also included two demo files but let me know if they should be rolled into a docstring instead. This PR also lays the groundwork to reproduce the activation atlas notebooks from lucid. Would love to hear your thoughts :)
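
    For readers landing here, a minimal usage sketch of the added objectives, assuming they follow Lucid's signatures (a layer name plus a per-channel direction vector); the random vector and layer choice below are illustrative only.

    import torch
    from lucent.optvis import render, objectives

    # mixed4c has 512 channels in InceptionV1, so the direction vector has 512 entries
    direction = torch.rand(512)
    obj = objectives.direction("mixed4c", direction)
    render.render_vis(model, obj)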

    opened by ndey96 3
  • Activation Grid Notebook

    Activation Grid Notebook

    Reproduce Lucid's Activation Grid Notebook with PyTorch and Lucent.

    The only new function required seems to be ChannelReducer, which doesn't rely on Tensorflow so it should be relatively simple to port over.

    Help wanted for this!
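
    For anyone picking this up: Lucid's ChannelReducer is essentially a thin wrapper around scikit-learn's decompositions (NMF by default), so a stand-in sketch of the reduction step looks roughly like the following (plain scikit-learn, not Lucent's API).

    import numpy as np
    from sklearn.decomposition import NMF

    def reduce_channels(acts, n_components=6):
        # acts: activations for one image, shaped [H, W, C] (channels last)
        h, w, c = acts.shape
        flat = np.maximum(acts.reshape(-1, c), 0)   # NMF needs non-negative inputs
        reducer = NMF(n_components=n_components, init="nndsvda", max_iter=500)
        reduced = reducer.fit_transform(flat)       # [H*W, n_components]
        return reduced.reshape(h, w, n_components)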

    help wanted good first issue 
    opened by greentfrapp 3
  • get raw_activations

    get raw_activations

    Hi, thanks for this great library! I'm trying to reproduce the Activation Atlas notebook using lucent, creating grid cells in the end.

    In the notebook, the raw activations are available as a numpy.ndarray via model.layers[7].activations and are used in the subsequent dimensionality reduction section. How can I get these raw activations using Lucent?

    I did create visualised images using lucent render_vis first, and then flattened them to use UMAP fit method, but I'm not sure this is correct. Any suggestion would be appreciated.
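
    One way to get raw activations with plain PyTorch rather than a Lucent API (a sketch; the layer name and the images tensor are assumptions): register a forward hook on the module of interest and run the images through the model.

    import torch

    activations = {}

    def save_activation(name):
        def hook(module, inputs, output):
            activations[name] = output.detach().cpu().numpy()
        return hook

    # "mixed4d" assumed to be one of the names listed by get_model_layers(model)
    layer = dict(model.named_modules())["mixed4d"]
    handle = layer.register_forward_hook(save_activation("mixed4d"))

    with torch.no_grad():
        model(images.to(device))        # images: a [B, 3, H, W] float tensor

    handle.remove()
    raw_acts = activations["mixed4d"]   # numpy array to feed into UMAP or other reduction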

    opened by 2nayk 2
  • Temporarily freeze kornia at 0.4.0 to prevent breaking change

    Temporarily freeze kornia at 0.4.0 to prevent breaking change

    kornia 0.4.1 was released recently and includes a breaking change to get_rotation_matrix2d.

    I've opened an Issue to hopefully address this soon: https://github.com/kornia/kornia/issues/742 but in the meantime, I suggest freezing kornia at 0.4.0 so that the random_rotate transform continues working as before
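
    Until the upstream fix lands, the pin is simply:

    pip install kornia==0.4.0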

    opened by ivanzvonkov 2
  • Suggestion for `lucent.optvis.render.hook_model`

    Suggestion for `lucent.optvis.render.hook_model`

    First, thanks for making this. Lifesaver. Two thoughts (Fwiw, the nested functions, higher-order functions and decorators make things a biiiiit hard to follow when debugging):

    1. I initially goofed and didn't put the model in eval mode (even though the very example notebook I'm using from Lucent does). Maybe the hook_model function could check for None and tell the user to call eval() if no saved feature maps are found?
    2. PyTorch module names usually use dot notation. Maybe use dots instead of underscores? Or just tell the user which feature map names are available and they'll figure it out quickly enough.

    Suggested replacement for this function: https://github.com/greentfrapp/lucent/blob/a2b015ce95f29460a329f750428077bcde5e4e94/lucent/optvis/render.py#L194

    def hook(layer):
            if layer == "input":
                out = image_f()
            elif layer == "labels":
                out = list(features.values())[-1].features
            else:
                assert layer in features, f"Invalid layer {layer}. Pick from one of {features.keys()}"  # suggestion 2 ish
                out = features[layer].features
            assert out is not None, "There are no saved feature maps. Make sure to put the model in eval mode, like so: `model.to(device).eval()`. See Lucent notebook for example."   # suggestion 1, tell user to eval
            return out
    

    I ran it on resnet18. Gorgeous and worked out of the box, btw.


    opened by alvinwan 2
  • Code Breaks as GPU Index > 0

    Code Breaks as GPU Index > 0

    When using GPU, this codebase only works for torch.device('cuda:0') -- the GPU index has to be 0.

    For example, if you choose torch.device('cuda:1'), then when you run the demo code

    import torch
    
    from lucent.optvis import render
    from lucent.modelzoo import inceptionv1
    
    # Let's use cuda:1
    device = torch.device("cuda:1")
    model = inceptionv1(pretrained=True)
    model.to(device).eval()
    
    render.render_vis(model, "mixed4a:476")
    

    you will see an error like

    ..........
    File .....lucent/optvis/render.py:206, in hook_model.<locals>.hook(layer)
        204     assert layer in features, f"Invalid layer {layer}. Retrieve the list of layers with `lucent.modelzoo.util.get_model_layers(model)`."
        205     out = features[layer].features
    --> 206 assert out is not None, "There are no saved feature maps. Make sure to put the model in eval mode, like so: `model.to(device).eval()`. See README for example."
        207 return out
    
    AssertionError: There are no saved feature maps. Make sure to put the model in eval mode, like so: `model.to(device).eval()`. See README for example.
    
    opened by Haoxiang-Wang 0
  • Lucent handles greyscale images in function view incorrectly

    Lucent handles greyscale images in function view incorrectly

    When rendering and visualizing greyscale images not inline, i.e., with show_inline=False, PIL throws the following error: TypeError: Cannot handle this data type: (1, 1, 1), |u1. The problem is that Lucent passes a tensor of shape [H, W, C] with C=1 and values in the 0-255 range to PIL, but PIL can only handle two-dimensional arrays with integer values. This Stack Overflow answer provides more information.

    Solution: Lucent should check if the shape is [H, W, C=1] and reduce it to [H, W]. Alternatively, introduce a parameter, e.g., greyscale=True, in the view function.
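
    A sketch of the proposed check at the point where the array is handed to PIL (the function name is illustrative, not Lucent's actual view code):

    from PIL import Image

    def to_pil(img):
        # img: uint8 array of shape [H, W, C]; PIL cannot build an image from a trailing channel of size 1
        if img.ndim == 3 and img.shape[-1] == 1:
            img = img[:, :, 0]   # reduce [H, W, 1] -> [H, W] for greyscale
        return Image.fromarray(img)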

    opened by neoglez 0
  • activation grid for hierarchical custom model

    activation grid for hierarchical custom model

    Hi, is there a way to visualize the activation grid for a custom model with nested modules that are not explicitly named as attributes of the model? You can see this nesting in the output of get_model_layers() for my custom model.

    I followed your notebook on the activation grid (https://colab.research.google.com/github/greentfrapp/lucent-notebooks/blob/master/notebooks/activation_grids.ipynb#scrollTo=BDH9cXnSuu5Q). For example, I chose layer = "net_down1_maxpool_conv" (is there some kind of syntax for specifying the layers?). I also rewrote the get_layer() helper function to parse the network's layer from the string, because that layer is not a direct attribute of the network class. But when I then try to use the rendering function, there is an error in the first line of the objective function: depending on how I choose the layer string, one or the other of the assertions in lines 203-206 of render.py is thrown. Can you help me with this problem? Many thanks!
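
    In case it helps others hitting the same assertions: the layer strings appear to be the PyTorch module names with the dots replaced by underscores (as discussed in the hook_model suggestion above), so one quick check is to print the mapping directly (plain PyTorch, an assumption about the naming scheme):

    # Print each nested module name next to the underscore form Lucent's hooks appear to use
    for name, _ in model.named_modules():
        if name:  # skip the root module itself
            print(name, "->", name.replace(".", "_"))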

    opened by An-nay-marks 1
  • Support batches for CPPN image representation

    Support batches for CPPN image representation

    When representing the optimized image with a CPPN network, the current implementation only allows optimizing a single image per run. This limitation prevents the use of, e.g., "diversity" objectives during optimization. This PR adds batching support for the CPPN image representation by creating a batch of networks.

    Here's an example of generating a diverse batch of 2 images for the objective "mixed4d_3x3_bottleneck_pre_relu_conv:139" of the Inception network.
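
    A usage sketch of what this enables, assuming the batched CPPN parametrization is exposed as param.cppn with a batch argument and that a Lucid-style diversity objective is available (both are assumptions about the post-PR API; the layer and coefficient are illustrative):

    from lucent.optvis import render, param, objectives

    # Two CPPN-parameterized images optimized jointly, pushed apart by a diversity term
    obj = (objectives.channel("mixed4d_3x3_bottleneck_pre_relu_conv", 139)
           - 1e2 * objectives.diversity("mixed4d_3x3_bottleneck_pre_relu_conv"))
    param_f = lambda: param.cppn(224, batch=2)
    render.render_vis(model, obj, param_f=param_f)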


    opened by shaibagon 0
  • Low GPU utilization

    Low GPU utilization

    I am trying to use Lucent to visualize deep neurons, but whatever I do the GPU seems under-utilized: examining utilization via nvidia-smi, I see low utilization (~10%) with occasional peaks at ~50%, but never above that. This happens both for the CPPN prior and for the Fourier image representation.

    Any suggestions?

    opened by shaibagon 0
Releases

v0.1.8

Owner

Lim Swee Kiat