Overview

private-transformers

This codebase facilitates fast experimentation with differentially private training of Hugging Face transformers.


What is this? Why an extra codebase?

  • This codebase provides a privacy engine that builds off Opacus but integrates much more smoothly with Hugging Face's transformers library.
  • Additionally, we support the ghost clipping technique (see Section 4 of this preprint for how it works), which allows privately training large transformers at considerably reduced memory cost -- in many cases almost as light as non-private training -- with a modest run-time overhead.
  • With this codebase, we have fine-tuned very large pretrained models, yielding some of the best-performing differentially private NLP models to date. Some of these models match strong non-private baselines in performance. We see strong empirical evidence that highly performant DP NLP models can be built on modest datasets.

Installation

Make sure you have python>=3.8; run the following command:

pip install git+https://github.com/lxuechen/private-transformers.git

To check that the package is installed properly, run the test suite (requires pytest and a GPU) with the following command:

pytest -s tests

Usage

Basic usage

Privately training Hugging Face transformers with our codebase simply consists of 4 steps:

  1. Create your favourite transformer model and optimizer; attach this optimizer to a PrivacyEngine
  2. Compute a per-example loss (1-D tensor) for a mini-batch of data
  3. Pass the loss to optimizer.step or optimizer.virtual_step as a keyword argument
  4. Repeat from step 2

Below is a quick example:

import transformers, torch
from private_transformers import PrivacyEngine
import torch.nn.functional as F

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = transformers.GPT2LMHeadModel.from_pretrained('distilgpt2').to(device)
optimizer = torch.optim.Adam(params=model.parameters(), lr=1e-4)
privacy_engine = PrivacyEngine(
    model,
    batch_size=10,
    sample_size=50000,
    epochs=3,
    max_grad_norm=0.1,
    target_epsilon=3,
)
privacy_engine.attach(optimizer)

batch_size, seq_len = 10, 20
# Inputs are in batch-first format, i.e., the first dimension of tensors must be the batch dimension.
input_ids = torch.randint(size=[batch_size, seq_len], low=0, high=100, device=device)
# Calling `.train()` is very important; otherwise the underlying forward and backward hooks won't run.
model.train()
outputs = model(input_ids=input_ids, return_dict=True)
labels = input_ids[:, 1:]
logits = outputs.logits[:, :-1, :].permute(0, 2, 1)
# `loss` is a 1-D tensor of shape (batch_size,).
loss = F.cross_entropy(logits, labels, reduction="none").mean(dim=1)
# This step is different from existing workflows: 
#   Don't call `loss.backward`; leave it to `optimizer.step` to handle backward.
optimizer.step(loss=loss)

The biggest differences compared to Opacus are:

  • We require the per-example loss (a 1-D tensor) to be passed into optimizer.step (or optimizer.virtual_step; see the gradient-accumulation sketch after this list).
  • The per-example loss must be passed in as a keyword argument.
  • loss.backward() shouldn't be called on the user end; it's called internally in optimizer.step (or optimizer.virtual_step).
  • Inputs should be in batch-first format; there isn't a toggle to switch between different formats in the engine.
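
Since loss.backward is handled inside optimizer.step, gradient accumulation goes through optimizer.virtual_step: accumulate on each micro-batch and call optimizer.step on the last one. Below is a minimal sketch continuing the toy example above; the micro-batch size of 2 is arbitrary, and it assumes the micro-batches together form the logical batch of 10 examples that batch_size in the PrivacyEngine refers to.

# A sketch of gradient accumulation with `virtual_step`, continuing the toy example above.
# Assumption: the micro-batches together form the logical batch of 10 examples
# that `batch_size=10` in the PrivacyEngine refers to.
micro_batches = input_ids.split(2, dim=0)
for i, micro_input_ids in enumerate(micro_batches):
    outputs = model(input_ids=micro_input_ids, return_dict=True)
    labels = micro_input_ids[:, 1:]
    logits = outputs.logits[:, :-1, :].permute(0, 2, 1)
    loss = F.cross_entropy(logits, labels, reduction="none").mean(dim=1)
    if i + 1 < len(micro_batches):
        optimizer.virtual_step(loss=loss)  # Accumulate clipped per-example gradients; no update.
    else:
        optimizer.step(loss=loss)  # The final micro-batch triggers noising and the parameter update.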

Ghost clipping: memory-saving differentially private learning

Turning on ghost clipping requires changing only 1 line. You should notice a drastic reduction in peak GPU memory usage once it's turned on, at the potential cost of slower training. This is especially useful when you are constrained to older GPUs with little VRAM or are fitting very large models.

import transformers, torch
from private_transformers import PrivacyEngine

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = transformers.GPT2LMHeadModel.from_pretrained('distilgpt2').to(device)
optimizer = torch.optim.Adam(params=model.parameters(), lr=1e-4)
privacy_engine = PrivacyEngine(
    model,
    batch_size=10,
    sample_size=50000,
    epochs=3,
    max_grad_norm=0.1,
    target_epsilon=3,
    ghost_clipping=True,  # The only change you need to make!
)
privacy_engine.attach(optimizer)
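
To gauge the savings on your own hardware, you can compare peak GPU memory for one training step with ghost_clipping=True versus ghost_clipping=False. Below is a rough sketch using PyTorch's built-in CUDA memory statistics; it reuses the toy inputs and per-example loss from the basic-usage example, and the exact numbers will depend on the model, batch size, and sequence length.

# A rough sketch for comparing peak GPU memory with and without ghost clipping.
# Assumes a CUDA device and the `input_ids`, `F`, and loss computation from the basic-usage example.
torch.cuda.reset_peak_memory_stats(device)

model.train()
outputs = model(input_ids=input_ids, return_dict=True)
labels = input_ids[:, 1:]
logits = outputs.logits[:, :-1, :].permute(0, 2, 1)
loss = F.cross_entropy(logits, labels, reduction="none").mean(dim=1)
optimizer.step(loss=loss)

peak_mib = torch.cuda.max_memory_allocated(device) / 2 ** 20
print(f"peak GPU memory: {peak_mib:.0f} MiB")  # Run once with and once without `ghost_clipping=True`.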

We ran stringent numerical tests to ensure the double-backward implementation is correct. Check out files in the tests folder for more on this.

Examples

Code in the examples folder roughly reproduces our results for the table-to-text and classification tasks. There may be some minor discrepancies, since the hyperparameters there aren't exactly those used in the paper. Nevertheless, it should be sufficient to get things started. Detailed instructions are in the readme file of each subfolder.

Currently supported Hugging Face models

Not all models in the Hugging Face library are supported. The main additional work here is to

  1. support per-example gradients for bespoke modules (e.g., T5LayerNorm), and
  2. ensure position_ids are repeated.

We plan to support more models in the future if there's a need. Feel free to open an issue if you want to try specific models that aren't in the current list.

FAQ

I wrote some answers to potential questions here.

Acknowledgements

It would have been impossible to develop this codebase without cool past works and existing codebases. We roughly follow the PrivacyEngine design in Opacus==0.13.0. We directly use an off-the-shelf package for tightly tracking tradeoff functions while composing multiple private mechanisms.

Disclaimer

  • This codebase is not yet production-grade, e.g., cryptographically secure PRNGs are required for sampling noise -- our codebase currently does not use these strong PRNGs.
  • This codebase was born out of the need to experiment with various ideas for differentially private NLP in rapid succession. I've tried my best to write clean code, though parts of this codebase may be less tidy than I had hoped given the extremely tight timeline.

Citation

If you found this codebase useful in your research, please consider citing:

@misc{li2021large,
      title={Large Language Models Can Be Strong Differentially Private Learners}, 
      author={Xuechen Li and Florian Tramèr and Percy Liang and Tatsunori Hashimoto},
      year={2021},
      eprint={2110.05679},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}
Comments
  • Support BART model

    Hi, I'm trying to apply your code to the BART model, but I got the error below:

    ValueError: Ghost clipping does not support parameter sharing. Parameter sharing may be due to default parameter sharing between lm_head and embedding.Please use a model without parameter sharing for ghost clipping.
    

    Does it not support the BART model yet?

    opened by SeolhwaLee 7
  • Set another seed won't change the result

    Hi Xuechen,

    I have another issue, this time with the training seed. I would like to vary the random seed so that I can get some statistical results. I've tried many different ways, but even after commenting out the set_seed() function, the eval accuracy is identical down to the last digit. May I ask how to vary the random seed? I'm doing experiments on examples/classification.

    Thanks!

    opened by JunyiZhu-AI 6
  • Customize loss function / adding regularizer under privacy setting?

    Hi, thanks again for the great work and the codebase!

    I have a question -- how would I customize the loss function in this codebase? I've been trying to do that, e.g., adding a per-example L1 regularization term to vector_loss in the trainer, but I didn't manage to get it running after several attempts.

    There's a related discussion/PR in Opacus codebase https://github.com/pytorch/opacus/issues/249.

    However, there are a few tricky things I can see:
    • In private-transformers, backward() behavior is not managed on the user end.
    • Also, a 1-D vector_loss is required for the private gradient update (optimizer.step or optimizer.virtual_step).

    My intuition is that I can add to vector_loss (per-example loss) at this line before the loss gets passed to the privacy engine.

    However, I am afraid privacy is also a concern. I am aware that private-transformers overrides compute_loss() in the HF trainer to exclude regularization terms that might mess up privacy accounting.

    Sorry my question is not super detailed, but I hope this makes sense; I'd really appreciate any comments.

    Thank you!

    opened by shi-kejian 4
  • Using dataloader with fixed batch size

    Hi, thanks for providing this codebase!

    So for a while I've been using Opacus to experiment with DP-SGD and RoBERTa, but I wanted to check out your PrivacyEngine, mainly because of the training speed and memory optimizations. With Opacus, I always trained with their UniformWithReplacementSampler for accurate RDP accounting, and as far as I can tell, you're training with fixed-size batches in your examples. I'm wondering if there's a reason the UniformWithReplacementSampler isn't needed in your codebase anymore, and whether the uniform sampler is compatible with your modified PrivacyEngine, given that the optimizer would need to handle variations in batch size.

    opened by xplip 4
  • How to set max_compositions

    Hi Chen, do you know how to set the max_compositions/steps param? The default at https://github.com/lxuechen/private-transformers/blob/684e27fcd9978539fbabc357c7ea506c0353c771/private_transformers/privacy_utils/privacy_engine.py#L148 is 0, but it raises an error:

    private-transformers/private_transformers/privacy_utils/accounting/gdp_accounting.py:33: RuntimeWarning: invalid value encountered in double_scalars
      return norm.cdf(-eps / mu + mu / 2) - np.exp(eps) * norm.cdf(-eps / mu - mu / 2)
    rv_accountant/accountant.py:55: RuntimeWarning: divide by zero encountered in double_scalars
      mesh_size = 2*eps_error / np.sqrt(2*max_compositions*np.log(2/eta0))
    
    opened by hlzhang109 3
  • Setting small target epsilon like 0.1 fails training

    Hi @lxuechen, I tried to set epsilon to 0.1 on SST-2, but this results in a large noise_multiplier (20853.95) and the training fails, with accuracy near 0.5. However, setting epsilon to 1 works well. Any idea about this?

    opened by LinkToPast1900 3
  • Private gradient seemingly has been overwritten by non-private gradient.

    Hi Xuechen, thanks for providing this codebase!

    I tried tweaking the code in examples/classification, but the network does not behave as expected. In particular, I tried zeroing out all gradients with this command in the _step() and _ghost_step() functions in privacy_engine.py:

    param.grad /= self.batch_size
    param.grad.mul_(0)
    

    After adding this multiplication, the network still trains as normal, and with the same seed it even reaches the same eval accuracy. Could you reproduce this result in your private repo? If it behaves like this, then I suppose the private gradient has been overwritten by the non-private one.

    opened by JunyiZhu-AI 3
  • Questions about sigma search and epsilon from composed tradeoff functions

    (Making a new issue for this because you probably weren't notified of my comment in the closed original issue)

    Sorry for having to reopen this, but I do have two more (perhaps related) questions after all and would really appreciate it if you could help clarify them.

    1. When using the automated sigma search (based on a specified target epsilon and N epochs), the final epsilon computed by the PrivacyEngine after training for N epochs is always much higher than the target epsilon, so it seems that the sigma chosen by get_sigma_from_rdp is too high. This also happens when I run the sentence classification and the table2text examples in the repo. E.g., instead of my target epsilon of 8, I will end up with something like epsilon 10-11. How did you get your final epsilon to match the target epsilon in the experiments in your paper?

    2. How do you compute the converted epsilon from composed tradeoff functions when, say, training SST-2 with the default hyperparameters from the examples? Do you reduce num_compositions=1000 in _eps_from_glw to something way lower than 1000 because the script only runs for ~400 optimization steps and would otherwise always throw the "Numerical composition of tradeoff functions failed! Double check privacy parameters." error?

    Originally posted by @xplip in https://github.com/lxuechen/private-transformers/issues/7#issuecomment-987020758

    opened by xplip 3
  • What is the best way to handle large models?

    Hi all, I was trying to fine-tune GPT-J 6B, but I encounter out-of-memory errors if I use a single GPU. For non-private training I managed to solve them by using DeepSpeed, but it seems that I cannot use that with Opacus or with this codebase. Do you know how I could solve this problem? Thank you in advance :)

    opened by Pier297 2
  • No such file or directory

    I want to fine-tune on QQP, and I get this error:

    File "/private-transformers-main/examples/classification/run_classification.py", line 545, in main
      train_dataset = FewShotDataset(data_args, tokenizer=tokenizer, mode="train", use_demo=use_demo)
    File "/private-transformers-main/examples/classification/src/dataset.py", line 377, in init
      with FileLock(lock_path):
    File "/home/anaconda3/envs/fuck/lib/python3.8/site-packages/filelock/_api.py", line 214, in enter
      self.acquire()
    File "/home/anaconda3/envs/fuck/lib/python3.8/site-packages/filelock/_api.py", line 170, in acquire
      self._acquire()
    File "/home/anaconda3/envs/fuck/lib/python3.8/site-packages/filelock/_unix.py", line 35, in _acquire
      fd = os.open(self._lock_file, open_mode)
    FileNotFoundError: [Errno 2] No such file or directory: 'classification/data/original/QQP/cached_train_RobertaTokenizer_256_qqp_few_shot.lock'

    How can I get this file? Thanks.

    opened by trestad 2
  • [DistilBERT] RuntimeError: stack expects each tensor to be equal size

    Hi, @lxuechen, thanks for your repo.

    I ran into the following problem when I tried to fine-tune DistilBERT. Both BERT and RoBERTa work well. Any idea about this? Thanks!

    Traceback (most recent call last):
      ...
      File "/opt/conda/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
        return func(*args, **kwargs)
      File "/opt/conda/lib/python3.8/site-packages/private_transformers/privacy_utils/privacy_engine.py", line 360, in step
        self._ghost_step(loss=kwargs.pop("loss"))
      File "/opt/conda/lib/python3.8/site-packages/private_transformers/privacy_utils/privacy_engine.py", line 261, in _ghost_step
        self._ghost_helper(loss)
      File "/opt/conda/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
        return func(*args, **kwargs)
      File "/opt/conda/lib/python3.8/site-packages/private_transformers/privacy_utils/privacy_engine.py", line 334, in _ghost_helper
        coef_sample = self.get_coef_sample()
      File "/opt/conda/lib/python3.8/site-packages/private_transformers/privacy_utils/privacy_engine.py", line 348, in get_coef_sample
        norm_sample = self.get_norm_sample()
      File "/opt/conda/lib/python3.8/site-packages/private_transformers/privacy_utils/privacy_engine.py", line 343, in get_norm_sample
        norm_sample = torch.stack([param.norm_sample for name, param in self.named_params], dim=0).norm(2, dim=0)
    RuntimeError: stack expects each tensor to be equal size, but got [50] at entry 0 and [1] at entry 1

    (50 is my batch size)

    opened by LinkToPast1900 1
  • v0.3.0 fixes

    Non-structural fixes.

    • [ ] Convert to make_private style to avoid bad syntax highlighting during static analysis
    • [ ] Improve the cleanliness of examples
    • [ ] Refactor test file and use functorch to simplify ground truth gradients' logic
    • [ ] Don't compute per-sample gradients for weights which don't require gradients
    • [ ] Use the new smart resizer for tokenizer and model
    • [ ] Refactor decoding to use new left padding based construction
    opened by lxuechen 0