Ecco is a Python library for exploring and explaining Natural Language Processing models using interactive visualizations.

Overview




Ecco provides multiple interfaces that aid in explaining and building intuition about Transformer-based language models. Read: Interfaces for Explaining Transformer Language Models.

Ecco runs inside Jupyter notebooks. It is built on top of PyTorch and Hugging Face transformers.

Ecco is not concerned with training or fine-tuning models; it focuses only on exploring and understanding existing pre-trained models. The library is currently an alpha release of a research project. You're welcome to contribute to make it better!

Documentation: ecco.readthedocs.io
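
A minimal usage sketch (the model name and prompt are illustrative; the generate and saliency calls mirror snippets that appear in the comments further down):

    import ecco

    # Load a pre-trained Hugging Face model wrapped by Ecco.
    lm = ecco.from_pretrained('distilgpt2')

    # Generate 20 tokens while capturing the data Ecco's visualizations need.
    output = lm.generate("The countries of the European Union are:\n1. Austria\n2.", generate=20, do_sample=True)

    # Interactive input-attribution view rendered inside the notebook.
    output.saliency()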

Features

  • Support for a wide variety of language models (GPT2, BERT, RoBERTa, T5, T0, and others).
  • Ability to add your own local models (if they're based on Hugging Face PyTorch models).
  • Feature attribution (IntegratedGradients, Saliency, InputXGradient, DeepLift, DeepLiftShap, GuidedBackprop, GuidedGradCam, Deconvolution, and LRP via Captum)
  • Capture neuron activations in the FFNN layer in the Transformer block
  • Identify and visualize neuron activation patterns (via Non-negative Matrix Factorization)
  • Examine neuron activations via comparisons of activation spaces using SVCCA, PWCCA, and CKA
  • Visualizations for:
    • Evolution of processing a token through the layers of the model (Logit lens)
    • Candidate output tokens and their probabilities (at each layer in the model)

Examples:

What is the sentiment of this film review?

Use a large language model (T5 in this case) to detect text sentiment. In addition to the sentiment, see the tokens the model broke the text into (which can help debug some edge cases).

Which words in this review lead the model to classify its sentiment as "negative"?

Feature attribution using Integrated Gradients helps you explore model decisions. In this case, switching "weakness" to "inclination" allows the model to correctly switch the prediction to positive.
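
A hedged sketch of this sentiment workflow, assuming a T5 variant is configured as listed in the features above (the model id and prompt format are illustrative):

    import ecco

    # Assumption: a T5 model id that appears in Ecco's model configs.
    lm = ecco.from_pretrained('t5-small')

    # T5-style sentiment prompt; the model generates "positive" or "negative".
    output = lm.generate("sst2 sentence: The film's only weakness is its predictable plot.", generate=3)

    # Which input tokens pushed the model toward its prediction?
    output.saliency()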

Explore the world knowledge of GPT models by posing fill-in-the-blank questions.

Asking GPT2 where Heathrow Airport is

Does GPT2 know where Heathrow Airport is? Yes. It does.

What other cities/words did the model consider in addition to London?

The model also considered Birmingham and Manchester

Visualize the candidate output tokens and their probability scores.
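
A hedged sketch of producing this view (the layer_predictions name and its position/layer parameters are assumptions based on Ecco's documented API):

    # Candidate output tokens and their probabilities at a given position and layer.
    output.layer_predictions(position=6, layer=5)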

Which input words lead it to think of London?

Input saliency: asking GPT2 where Heathrow Airport is

At which layers did the model gather confidence that London is the right answer?

The rank of the London token at each layer; layer 11 ranks it #1

The model chose London by making it the highest-probability token (ranking it #1) after the last layer in the model. How much did each layer contribute to increasing the ranking of London? This is a logit lens visualization that helps explore the activity of the model's different layers.
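
A hedged sketch of producing this ranking view (the rankings method name is an assumption based on the "Rankings across layers" gallery entry below, applied to a generate output like the one sketched earlier):

    # For each generated token, how every layer ranked it on its way to being chosen.
    output.rankings()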

What are the patterns in BERT neuron activation when it processes a piece of text?

Colored line graphs on the left, a piece of text on the right. The line graphs indicate the activation of BERT neuron groups in response to the text

Groups of neurons in BERT tend to fire in response to commas and other punctuation; other groups tend to fire in response to pronouns. Use this visualization to factorize neuron activity in individual FFNN layers or across the entire model.
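
A sketch roughly following the factorization pattern in Ecco's docs (the activations flag, the lm(inputs) call, and the run_nmf/explore helpers are assumptions based on that pattern):

    import ecco

    # Capture FFNN neuron activations while running the model.
    lm = ecco.from_pretrained('bert-base-uncased', activations=True)

    text = "We shape our buildings, and afterwards our buildings shape us."
    inputs = lm.tokenizer([text], return_tensors="pt")
    output = lm(inputs)

    # Factorize the neuron activations into 8 groups and explore them interactively.
    nmf = output.run_nmf(n_components=8)
    nmf.explore()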

Read the paper:

Ecco: An Open Source Library for the Explainability of Transformer Language Models. Association for Computational Linguistics (ACL) System Demonstrations, 2021.

Tutorials

How-to Guides

API Reference

The API reference and the architecture page explain Ecco's components and how they work together.

Gallery & Examples

Predicted Tokens: View the model's prediction for the next token (with probability scores). See how the predictions evolved through the model's layers. [Notebook] [Colab]


Rankings across layers: After the model picks an output token, look back at how each layer ranked that token. [Notebook] [Colab]


Layer Predictions: Compare the rankings of multiple tokens as candidates for a certain position in the sequence. [Notebook] [Colab]


Primary Attributions: How much did each input token contribute to producing the output token? [Notebook] [Colab]


Detailed Primary Attributions: See more precise input attribution values using the detailed view. [Notebook] [Colab]


Neuron Activation Analysis: Examine underlying patterns in neuron activations using non-negative matrix factorization. [Notebook] [Colab]

Getting Help

Having trouble?

  • The Discussion board might have some relevant information. If not, you can post your questions there.
  • Report bugs at Ecco's issue tracker

BibTeX for citations:

@inproceedings{alammar-2021-ecco,
    title = "Ecco: An Open Source Library for the Explainability of Transformer Language Models",
    author = "Alammar, J",
    booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: System Demonstrations",
    year = "2021",
    publisher = "Association for Computational Linguistics",
}
Comments
  • Support for T5-like Seq2SeqLM

    Hello, I was wondering if there are any plans to support explicit encoder-decoder models like T5. Although T5 was not pre-trained with an auto-regressive LM objective, it is a pretty good candidate for ecco's generate method. I tried running T5 as listed in model-config.yaml but soon ran into issues because the current implementation is very much suited to GPT-like models.

    I made some changes on a fork to get attribution working, but not sure if I did it correctly https://colab.research.google.com/drive/1zahIWgOCySoQXQkAaEAORZ5DID11qpkH?usp=sharing https://github.com/chiragjn/ecco/tree/t5_exp

    I would love to help add support, with some guidance, especially on the overall implementation design.

    opened by chiragjn 8
  • Adds a model config field use_causal_lm and config entries for gpt-neo

    Adding gpt-neo models to model-config.yaml failed because the model needs to be loaded using AutoModelForCausalLM, but init identified such models by looking for gpt2 in the name. A TODO comment in init mentioned using config instead. I refactored config loading slightly to enable this - not sure if that is the direction you intended or not.

    opened by stprior 8
  • Add a `conda` install option for `ecco`

    A conda install option for ecco could be helpful for two reasons:

    1. Easy installation and version management with conda.
    2. If another library that depends on ecco is to be published on the conda-forge channel, ecco must be available on conda-forge as well.

    :bulb: I have already started work on this. PR: https://github.com/conda-forge/staged-recipes/pull/17388

    Once the PR gets merged, you will be able to install ecco as:

    conda install -c conda-forge ecco
    

    I will send a PR to update your documentation once the conda-forge PR gets merged.

    opened by sugatoray 7
  • Add support for PEGASUS model

    I would like to add support for PEGASUS in model-config.yaml.

    The PEGASUS model is an encoder-decoder type and its implementation is inherited from BartForConditionalGeneration, so the config is similar to the BART model's.

    Notes: This is my first time making a pull request on an open-source project, but I hope this helps!

    opened by thomas-chong 6
  • Add support for Integrated Gradients explainability method

    In this PR, @SSamDav and I add support for the IG algorithm, reusing the same visualization plots used for input saliency. We also fix a saliency visualization bug for enc-dec models that was not addressed in the previous PR.

    Notes:

    • The generate method became even slower with the IG method. We added an option to choose which attribution method to calculate, but it can be further improved. Maybe the visualization could be coupled with the generation itself.
    • The IG score has a convergence delta error that could be shown in the plot or, for example, be used to change the IG default parameters when a minimum error is not met.
    opened by JoaoLages 5
  • attention head

    Hi @jalammar, I tested some examples with Ecco, and I wanted to know: is it possible to change the attention head, to view the activations for each head and each layer?

    opened by afcarvallo 5
  • Add support for more attribution methods

    Hi, currently the project seems to rely on grad-norm and grad-x-input to obtain attributions. However, there are other, arguably better (as discussed in recent work) methods to obtain saliency maps. Integrating them into this project would also provide a good way to compare them on the same input examples.

    Some of these methods, off the top of my head, are: integrated gradients, gradient Shapley, and LIME. Perhaps support for visualizing the attention map from the model being interpreted could also be added. Methods based on feature ablation are also possible, but they might need more work to integrate.

    There is support for these aforementioned methods in Captum, but it takes effort to get them working for NLP tasks, especially those based on language modeling. Thus, I feel this would be a useful addition here.
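
    For reference, a minimal sketch of wiring one of these Captum methods (Integrated Gradients over the embedding layer) to a Hugging Face classifier; the model id, baseline choice, and target label are illustrative:

    import torch
    from captum.attr import LayerIntegratedGradients
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    name = "distilbert-base-uncased-finetuned-sst-2-english"
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForSequenceClassification.from_pretrained(name).eval()

    def forward_func(input_ids):
        return model(input_ids).logits

    # Attribute the chosen class logit to the embedding layer.
    lig = LayerIntegratedGradients(forward_func, model.distilbert.embeddings)

    input_ids = tokenizer("A film of remarkable weakness.", return_tensors="pt").input_ids
    baseline = torch.full_like(input_ids, tokenizer.pad_token_id)  # all-[PAD] baseline

    attributions, delta = lig.attribute(
        input_ids, baselines=baseline, target=1, return_convergence_delta=True
    )
    token_scores = attributions.sum(dim=-1).squeeze(0)  # one score per input token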

    enhancement help wanted 
    opened by RachitBansal 5
  • token prefix in roberta model?

    I'm trying to use a custom-trained RoBERTa model by loading the config file, but I get an error that the token prefix is not present in the config. Any idea how to fix it?

    opened by sarthusarth 4
  • output.saliency() displays nothing

    I am trying to visualize saliency maps from a custom GPT model. Since I am concerned only about saliency maps, I just do the following:

    # assuming OutputSeq can be imported from ecco.output
    from ecco.output import OutputSeq

    out = OutputSeq(token_ids=input_ids, n_input_tokens=n_input_tokens, tokens=tokens, attribution=attr)
    out.saliency()
    

    I get no errors and nothing is displayed in the Jupyter notebook, but when I open Chrome's JavaScript console, I see the following:

    
    (unknown) Ecco initialize.

    l                @ storage.googleapis.c…ust=1610606118793:1
    (anonymous)      @ storage.googleapis.c…ust=1610606118793:1
    autoTextColor    @ storage.googleapis.c…ust=1610606118793:1
    (anonymous)      @ storage.googleapis.c…ust=1610606118793:1
    (anonymous)      @ d3js.org/d3.v5.min.j…ust=1610606118793:2
    each             @ d3js.org/d3.v5.min.j…ust=1610606118793:2
    style            @ d3js.org/d3.v5.min.j…ust=1610606118793:2
    enter            @ storage.googleapis.c…ust=1610606118793:1
    (anonymous)      @ storage.googleapis.c…ust=1610606118793:1
    join             @ d3js.org/d3.v5.min.j…ust=1610606118793:2
    setupTokenBoxes  @ storage.googleapis.c…ust=1610606118793:1
    init             @ storage.googleapis.c…ust=1610606118793:1
    eval
    execCb           @ require.js:1693
    check            @ require.js:881
    enable           @ require.js:1173
    init             @ require.js:786
    (anonymous)      @ require.js:1457
    
    DevTools failed to load SourceMap: Could not load content for http://localhost:8888/static/notebook/js/main.min.js.map: HTTP error: status code 404, net::ERR_HTTP_RESPONSE_CODE_FAILURE
    DevTools failed to load SourceMap: Could not load content for https://storage.googleapis.com/wandb-cdn/production/d4e2434e6/raven.min.js.map: HTTP error: status code 404, net::ERR_HTTP_RESPONSE_CODE_FAILURE
    

    How do I resolve this issue? Btw, I am running this notebook by SSHing into my institute's remote machine.

    opened by VirajBagal 4
  • Tell pip to install from setup.py

    Forces pip install -r requirements.txt to install the same package versions specified in setup.py.

    For details, see this comment.

    Confirmed that tests pass locally after merging this and #13 . (Since #13 fixes tests, they won't pass until it is merged.)

    opened by nostalgebraist 4
  • Memory management and tweaks

    Hello Jay, thanks for all your work on GPT interpretation!

    This PR contains changes I made in a personal fork while attempting to use ecco with a 1.5B-parameter GPT-2 model. There are 3 kinds of changes:

    1. Attempts to plug memory leaks / otherwise reduce memory footprint
    2. Bug fixes
    3. Usability tweaks and new features

    In retrospect, I wish I had made distinct branches for these 3 types of change, as together they now make up a pretty large PR. I can still go back and do that, if (say) you want to merge the bug fixes without the other ones.


    Context: I am using ecco on a 1.5B-parameter GPT-2 model, using a Tesla T4 GPU (~15GB memory) on Colab.

    I am using version 3.4.0 of transformers, which is the max version consistent with ecco's setup.py and hence the one I got on installation.

    1. Memory management

    Running lm.generate with this large model, I ran out of GPU memory. This surprised me, because memory has not been an issue for me using the same model in tensorflow.

    After looking into it, I found a few places where use of GPU memory could be lowered:

    • past, which we don't use here, was still being computed on each step.
      • More importantly, python garbage collection was not (as far as I could tell) freeing the values of past produced on previous steps, so generating N tokens required enough memory to store the N pasts emitted from steps 1, 2, ..., N.
      • Mitigation: pass use_cache=False to the model's forward pass, so it doesn't return pasts
    • Saliency calculations all used retain_graph=True, so the backward graphs were never cleared.
      • Mitigation: when we do several gradient calculations per step, pass retain_graph=False to the last one
    • hidden_states were stored on the GPU during generation.
      • They don't need to be on the GPU at that time (because they aren't used in generation).
      • And, since we have a low CPU memory footprint otherwise, we have plenty of CPU memory to store them in.
      • Mitigation: call .cpu() on hidden states emitted from each step. If we want to calculate with them later on, move them back to self.device.
    • (Minor) Memory allocated for logit matrices from each step was not freed after sampling
      • Mitigation: output['logits']=None after rolling a sample

    With these changes, I can run lm.generate for many hundreds of steps, where previously I could only manage a small number, maybe ~10.
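
    A rough sketch of the use_cache, hidden-state, and logits mitigations above in plain transformers code (illustrative, not ecco's actual internals):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    device = "cuda" if torch.cuda.is_available() else "cpu"
    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2").to(device).eval()

    input_ids = tokenizer("Hello", return_tensors="pt").input_ids.to(device)
    hidden_states_cpu = []
    with torch.no_grad():
        for _ in range(20):
            # use_cache=False: don't build up `past` tensors we never read.
            out = model(input_ids, use_cache=False, output_hidden_states=True)
            # Store hidden states on the CPU; move back to `device` only if needed later.
            hidden_states_cpu.append(tuple(h.cpu() for h in out.hidden_states))
            next_id = out.logits[0, -1].argmax().view(1, 1)
            out.logits = None  # free the logit matrix once we've sampled from it
            input_ids = torch.cat([input_ids, next_id], dim=-1)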


    2. Bug fixes

    • activations_dict_to_array would fail in the edge case where we only have a single token in the prompt.
      • Issue: np.squeeze would wrongly eliminate the position axis (because its size was 1).
      • Mitigation: use np.concatenate, which doesn't add an unwanted singleton dimension, so we don't have to squeeze (see the sketch after this list)
    • top-p sampling did not work
      • Issue: top_k_top_p_filtering apparently expects a position axis in its input, even if that axis only has length 1
      • Mitigation: replace [-1, :] with [-1:, :] and then squeeze after rolling a sample
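
    The squeeze edge case is easy to reproduce in isolation (shapes are illustrative):

    import numpy as np

    # One prompt token -> the position axis has size 1.
    acts = [np.zeros((1, 768))]          # list of (positions, neurons) arrays

    np.squeeze(np.stack(acts)).shape     # (768,)  -> position axis wrongly removed
    np.concatenate(acts).shape           # (1, 768) -> position axis preserved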

    3. Usability tweaks and new features

    • Added an option to not track hidden_states. This feels consistent with the way you can choose whether or not to track other things (activations, attn).
      • To help this work properly, switched from position-based indexing into the CausalLMOutputWithPast objects to key lookup, so we're robust to changes in the length/order of these objects.
    • Added the option to only track hidden states for a user-defined subset of layers, through the new kwarg collect_activations_layer_nums.
      • This is valuable with a large model where you may be only interested in a specific layer, and storing activations from all layers has high memory cost.
      • NMF now takes this kwarg and (if not None) uses it to map between row indices in activations and actual layer numbers. For example, if we are tracking layers 7 and 23, we will have an activation matrix with 2 rows. If passed from_layer=7, to_layer=8, we should retrieve the row slice [:1, :], not [7:8, :] (sketched below).
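
    The index mapping in the last point can be sketched as follows (names are illustrative, not the PR's actual code):

    # Tracked layers and a requested layer range.
    collect_activations_layer_nums = [7, 23]
    from_layer, to_layer = 7, 8

    # Map requested layer numbers to row indices of the activation matrix.
    rows = [i for i, n in enumerate(collect_activations_layer_nums)
            if from_layer <= n < to_layer]
    # rows == [0]  -> equivalent to the slice [:1, :], not [7:8, :]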

    I realize this PR is unwieldy -- I just wanted to get my changes up in some form, since at least some of them seemed unambiguously helpful (bug fixes).

    Let me know if you want me to break it down into smaller pieces, or if it needs other work, or if it is generally unhelpful for your goals, or whatever.

    Did not run tox tests because I could not get them to run properly on my machine, even after downloading the tox.ini from one of the CI-related branches.

    opened by nostalgebraist 4
  • AttributeError: 'OutputSeq' object has no attribute 'saliency'

    captum 0.5.0 torch 1.13.0+cu117

    Language_Models_and_Ecco_PyData_Khobar.ipynb

    text= "The countries of the European Union are:\n1. Austria\n2. Belgium\n3. Bulgaria\n4."
    output_3 = lm.generate(text, generate=20, do_sample=True)
    output_3.saliency()
    

    AttributeError                            Traceback (most recent call last)
    Cell In [13], line 1
    ----> 1 output_3.saliency()

    AttributeError: 'OutputSeq' object has no attribute 'saliency'

    opened by Claus1 1
  • Rankings_watch displaying wrong sequence

    Hello, I have a problem with the rankings_watch() function. I used a predefined GPT2 model and gave it the input "Today, the weather is". However, in the visualization, only the first token is shown, although the model creates the output correctly.

    Thank you for your help :D

    bug 
    opened by MiriUll 1
  • Running Eccomap for Pre Trained BertForMaskedLM

    Hi, I was trying to run my pre-trained model, for which I had used the BertForMaskedLM model class from Hugging Face, but it's giving me this error. Please help me in resolving it. Thanks in advance.

    opened by iamakshay1 1
  • Remove `tokenizer_config` usage from the library

    This config parameter was made to easily package config to send to the JavaScript components. Ecco now handles all tokenization on the Python side to separate the concerns between the Python and JS components. Consequently, this parameter needs to be removed.

    opened by jalammar 0
  • Tokenizer has partial token suffix instead of prefix

    Following your guide for identifying model configuration:

    MODEL_ID = "vinai/bertweet-base"
    
    from transformers import AutoModelForSequenceClassification, AutoTokenizer
    model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, normalization=True, use_fast=False)
    
    ids= tokenizer('tokenization')
    ids
    

    returns:

    {'input_ids': [0, 969, 6186, 6680, 2], 'token_type_ids': [0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1]}
    

    Then

    tokenizer.convert_ids_to_tokens(ids['input_ids'])
    

    returns:

    ['<s>', 'to@@', 'ken@@', 'ization', '</s>']
    

    Here I noticed that the tokenizer adds a partial-token suffix instead of a partial-token prefix. Having a suffix instead of a prefix is not configurable in the config.

    opened by guustfranssensEY 1