A fast Python implementation of the SimHash algorithm.

Overview

FLoC SimHash

This Python package provides hashing algorithms for computing cohort ids of users based on their browsing history. As such, it may be used to compute cohort ids of users following Google's Federated Learning of Cohorts (FLoC) proposal.

The FLoC proposal is an important part of The Privacy Sandbox, which is Google's replacement for third-party cookies. FLoC will enable interest-based advertising, thus preserving an important source of monetization for today's web.

The main idea, as outlined in the FLoC whitepaper, is to replace user cookie ids, which enable user targeting across multiple sites, with cohort ids. A cohort would consist of a set of users sharing similar browsing behaviour. By targeting a given cohort, advertisers can ensure that relevant ads are shown while user privacy is preserved by a hiding-in-the-pack mechanism.

The FLoC whitepaper mentions several mechanisms to map users to cohorts, with varying amounts of centralized information. The algorithms currently being implemented in Google Chrome as a POC are methods based on SimHash, which is a type of locality-sensitive hashing initially introduced for detecting near-duplicate documents.

Contents

Installation

The floc-simhash package is available on PyPI. Install it using pip as follows.

pip install floc-simhash

The package requires python>=3.7 and will install scikit-learn as a dependency.

Usage

The package provides two main classes.

  • SimHash, applying the SimHash algorithm on the md5 hashes of tokens in the given document.

  • SimHashTransformer, applying the SimHash algorithm to a document vectorization as part of a scikit-learn pipeline.

Finally, there is a third class available:

  • SortingSimHash, which performs the SortingLSH algorithm by first applying SimHash and then clipping the resulting hashes to a given precision (see the short sketch after this list).
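For illustration, the clipping step can be thought of as keeping only the leading bits of the full SimHash. The standalone sketch below is plain Python, not the SortingSimHash API, and clips a 128-bit hash down to 13 bits:

full_hash = int("cf48b038108e698418650807001800c5", 16)  # a 128-bit SimHash

precision = 13  # number of leading bits to keep
clipped = full_hash >> (128 - precision)

print(format(clipped, "013b"))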

Individual document-based SimHash

The SimHash class provides a way to calculate the SimHash of any given document, without using any information coming from other documents.

In this case, the document hash is computed from the md5 hashes of the individual tokens; a minimal sketch of this computation follows the list below. We use:

  • The implementation of the md5 hashing algorithm available in the hashlib module in the Python standard library.

  • Bitwise arithmetic for fast computations of the document hash from the individual hashed tokens.
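As a rough illustration of this mechanism, the standalone sketch below computes a 128-bit SimHash from md5 token hashes using only the standard library. The helper name is made up for the example, and details such as bit ordering may differ from the package, so its output is not expected to match SimHash exactly:

import hashlib


def simhash_md5_sketch(tokens, n_bits=128):
    # One counter per output bit: +1 when a token's md5 has that bit set, -1 otherwise.
    counts = [0] * n_bits
    for token in tokens:
        digest = int(hashlib.md5(token.encode("utf-8")).hexdigest(), 16)
        for i in range(n_bits):
            counts[i] += 1 if (digest >> i) & 1 else -1
    # The final hash sets bit i exactly when the i-th counter is positive.
    result = sum(1 << i for i in range(n_bits) if counts[i] > 0)
    return f"{result:0{n_bits // 4}x}"


print(simhash_md5_sketch("Lorem ipsum dolor sit amet".split()))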

The program below, for example, will print the following hexadecimal string: cf48b038108e698418650807001800c5.

from floc_simhash import SimHash

document = "Lorem ipsum dolor sit amet consectetur adipiscing elit"
hashed_document = SimHash(n_bits=128).hash(document)

print(hashed_document)

For an example closer to computing cohort ids, the following program computes the cohort id of a user by applying SimHash to the document formed by the pipe-separated list of domains in the user's browsing history.

from floc_simhash import SimHash

document = "google.com|hybridtheory.com|youtube.com|reddit.com"
hasher = SimHash(n_bits=128, tokenizer=lambda x: x.split("|"))
hashed_document = hasher.hash(document)

print(hashed_document)

The code above will print the hexadecimal string: 14dd1064800880b40025764cd0014715.

Providing your own tokenizer

The SimHash constructor will split the given document on whitespace by default. However, any callable that parses a string into a list of strings can be passed in the tokenizer parameter. We have provided an example above where we pass tokenizer=lambda x: x.split("|").

A good example of more complex tokenization is passing NLTK's word tokenizer, which would be a natural choice when computing hashes of natural-language text documents.
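For instance, assuming NLTK is installed and its tokenizer data has been downloaded, this could look as follows:

import nltk
from floc_simhash import SimHash

nltk.download("punkt")  # tokenizer models used by nltk.word_tokenize, downloaded once

hasher = SimHash(n_bits=128, tokenizer=nltk.word_tokenize)
print(hasher.hash("Lorem ipsum dolor sit amet, consectetur adipiscing elit."))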

Using the SimHashTransformer in scikit-learn pipelines

The approach to SimHash outlined in the FLoC Whitepaper consists of choosing random unit vectors and working on already vectorized data.

The choice of a random unit vector is equivalent to choosing a random hyperplane in feature space. Choosing p random hyperplanes partitions the feature space into 2^p regions. Then, a p-bit SimHash of a vector encodes the region to which it belongs.

It is reasonable to expect similar documents to have the same hash, provided the vectorization respects the given notion of similarity.
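As a toy illustration of this hyperplane view (the names, dimensions, and random seed below are made up for the example and are unrelated to the package internals):

import numpy as np

rng = np.random.default_rng(0)
p, d = 8, 16  # p random hyperplanes, d-dimensional vectorized documents

# Each row is a random unit vector; its orthogonal hyperplane splits the space in two.
planes = rng.normal(size=(p, d))
planes /= np.linalg.norm(planes, axis=1, keepdims=True)


def simhash_bits(x):
    # Bit i is 1 exactly when x lies on the positive side of hyperplane i,
    # so the p bits together encode one of the 2**p regions.
    return (planes @ x > 0).astype(int)


print(simhash_bits(rng.normal(size=d)))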

Two vectorizations are discussed in the aforementioned whitepaper: one-hot and tf-idf; they are available in scikit-learn.

The SimHashTransformer supplies a transformer (implementing the fit and transform methods) that can be used directly on the output of either of these two vectorizers in order to obtain hashes.

For example, given a 1d-array X containing strings, each of them corresponding to a concatenation of the domains visited by a given user and separated by "|", the following code will store in y the cohort id of each user, using one-hot encoding and a 32-bit SimHash.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import Pipeline

from floc_simhash import SimHashTransformer


X = [
    "google.com|hybridtheory.com|youtube.com|reddit.com",
    "google.com|youtube.com|reddit.com",
    "github.com",
    "google.com|github.com",
]

one_hot_simhash = Pipeline(
    [
        ("vect", CountVectorizer(tokenizer=lambda x: x.split("|"), binary=True)),
        ("simhash", SimHashTransformer(n_bits=32)),
    ]
)

y = one_hot_simhash.fit_transform(X)

After running this code, the value of y will look similar to the following (the lengths will be the same, but the actual hash values depend on the random vectors chosen during fit):

['0xd98c7e93' '0xd10b79b3' '0x1085154d' '0x59cd150d']

Caveats

  • The implementation works on the sparse matrices output by CountVectorizer and TfidfTransformer, in order to manage memory efficiently.

  • At the moment, the integer precision used in the numpy arrays results in overflow errors for p >= 64 (see the illustration after this list). While we wait for implementation details of the FLoC POCs, the first indications hint at choices around p = 50.
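The limit at p >= 64 presumably comes from packing the p hash bits into a fixed-width numpy integer: a 64-bit integer cannot hold values of 2^64 or above. The unrelated snippet below simply illustrates that general limitation, not the package internals:

import numpy as np

np.array([2**63 - 1], dtype=np.int64)  # largest value an int64 can hold

try:
    np.array([2**63], dtype=np.int64)  # one more no longer fits
except OverflowError as exc:
    print(exc)  # exact message depends on the numpy version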

Development

This project uses poetry for managing dependencies.

In order to clone the repository and run the unit tests, run the following steps in an environment with python>=3.7.

git clone https://github.com/hybridtheory/floc-simhash.git
cd floc-simhash
poetry install
pytest

The unit tests are property-based, using the hypothesis library. This allows for algorithm verification against hundreds or thousands of randomly generated inputs.

Since running many examples may lengthen the test suite runtime, we also use pytest-xdist in order to parallelize the tests. For example, the following call will run up to 1000 examples for each test with parallelism 4.

pytest -n 4 --hypothesis-profile=ci