Overview


Topik

A Topic Modeling toolbox.

Introduction

The aim of topik is to provide a full suite of tools and a high-level interface for anyone interested in applying topic modeling. To that end, topik includes many utilities beyond the statistical modeling algorithms themselves, and wraps all of its features into an easily callable function and a command-line interface.
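
As a quick illustration, the whole pipeline can be kicked off in one call. A minimal sketch, based on the run_model invocation that appears in the issue reports below (the exact signature changed across versions, and later releases renamed it run_pipeline):

    from topik.run import run_model

    # One call: read the "reviews" data source, model its "text" field,
    # and write outputs under ./topic_model.
    run_model("reviews", content_field="text", dir_path="./topic_model")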

Topik is built on top of existing natural-language and topic-modeling libraries and primarily provides a wrapper around them for quick, easy exploratory analysis of your text data sets.

Please see our complete documentation at ReadTheDocs.

LICENSE

New BSD. See License File.

Comments
  • Error in `/home/usr/anaconda2/bin/python': free(): invalid pointer:

    Hi, I installed topik 0.3.0 on Ubuntu 15.0; however, I got this error when running topik. Does anyone have an idea why, and how to fix it?

    Error in `/home/usr/anaconda2/bin/python': free(): invalid pointer:

    Thanks

    opened by kenyeung128 10
  • Problem running tutorial code

    Hello, I was trying out the topik package and ran into some problems with the basic examples in the tutorial (http://topik.readthedocs.org/en/latest/example.html). Specifically, I was trying to get an LDAvis visualization using a variation of your basic code:

    from topik.run import run_model
    run_model("reviews", content_field="text", r_ldavis=True, dir_path="./topic_model")

    The parameters don't seem to match what's in the documentation, so I'm going by trial and error. With the present code, I get the error below. Could you kindly let me know how to properly invoke the LDAvis services? Thanks and best regards,

    Alex

    ----> 1 run_model("reviews", content_field="text", r_ldavis=True, dir_path="./topic_model")

    /Users/alexmckenzie/anaconda/lib/python2.7/site-packages/topik-0.1.0-py2.7.egg/topik/run.pyc in run_model(data_source, source_type, year_field, start_year, stop_year, content_field, clear_es_index, tokenizer, n_topics, dir_path, model, termite_plot, output_file, r_ldavis, json_prefix, seed, **kwargs)
        116
        117     if r_ldavis:
    --> 118         to_r_ldavis(processed_data, dir_name=os.path.join(dir_path, 'ldavis'), lda=lda)
        119         os.environ["LDAVIS_DIR"] = os.path.join(dir_path, 'ldavis')
        120         try:

    /Users/alexmckenzie/anaconda/lib/python2.7/site-packages/topik-0.1.0-py2.7.egg/topik/utils.pyc in to_r_ldavis(corpus_bow, lda, dir_name)
         40     np.savetxt(os.path.join(dir_name, 'topicTermDist'), tt_dist, delimiter=',', newline='\n',)
         41
    ---> 42     corpus_file = corpus_bow.filename
         43     corpus = gensim.corpora.MmCorpus(corpus_file)
         44     docTopicProbMat = lda.model[corpus]

    AttributeError: 'DigestedDocumentCollection' object has no attribute 'filename'

    bug 
    opened by AHMcKenzie 8
  • conda installation of 0.3.0 is not working -> "ImportError: No module named cli"

    Hi, I just installed the 0.3.0 update with conda; however, I get an error message even when executing a simple command-line "help". This is the error:

    $ topik --help
    Traceback (most recent call last):
      File "/Users/alexmckenzie/anaconda/bin/topik", line 4, in <module>
        from topik.cli import run
    ImportError: No module named cli

    I'd rather keep using conda than download the source zip. Thanks for your help, Alex

    bug 
    opened by AHMcKenzie 6
  • Various fixes + logging + refactoring.

    • Added numpy 1.9.4 as a requirement (the argpartition bug was showing up in the termite parsing code; it was fixed in numpy 1.9.4, numpy issue 5524)
    • Added requirements for nose and stop_words
    • In fileio/in_document_folder.py, added support to ignore invalid UTF but progress normally, and log the fact that we encountered an error
    • Added suitable test data (_junk) and a test case to test_in_document_folder
    • Added ConnectionError handling for the Elasticsearch tests; if Elasticsearch is not running, simply skip them
    • Corrected tokenizer names in simple_run/cli.py
    • Added stopword support to simple_run/run.py
    • Corrected tokenizer names in simple_run/run.py
    • Added logging in simple_run/run.py
    • Tee the generator in entities.py to avoid exhaustion (see the sketch after this list)
    • Support quadgrams and refactored code in ngrams.py
    • Tee the generator in ngrams.py and added some logging
    • Added an appropriate test case for quadgrams and tweaked test data in test_ngrams.py
    • Added a test case using a generator that demonstrates the exhaustion problem

    All tests are now succeeding. (NB: the Elasticsearch ones were not tested; there were no changes there aside from the exception handling in tests.)
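
    The generator-exhaustion fix above relies on Python's itertools.tee; a minimal sketch of the idea (not the PR's exact code):

        from itertools import tee

        def tokens():
            # A generator: once consumed, it is exhausted.
            yield from ["topic", "model", "toolbox"]

        gen = tokens()
        print(list(gen))  # ['topic', 'model', 'toolbox']
        print(list(gen))  # [] -- the naive second pass sees nothing

        # tee() returns independent iterators over the same stream,
        # so one consumer no longer exhausts the data for the next.
        first, second = tee(tokens())
        print(list(first))   # ['topic', 'model', 'toolbox']
        print(list(second))  # ['topic', 'model', 'toolbox']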

    opened by brianrusso 3
  • ValueError: could not convert string to float: s

    I get this identical error on both Ubuntu 15 + pip install (or directly from the latest GitHub) and on Ubuntu 14 LTS + conda2, so I am pretty sure this is not an issue with my environment.

    Following the tutorial on the movie reviews data (not sure if that matters), I get:

    [email protected]:[~]$ topik -d reviews -c text
    2016-04-18 14:29:00,880 : WARNING : too few updates, training might not converge; consider increasing the number of passes or iterations to improve accuracy
    Traceback (most recent call last):
      File "/home/brian/anaconda2/bin/topik", line 6, in <module>
        sys.exit(run())
      File "/home/brian/anaconda2/lib/python2.7/site-packages/click/core.py", line 716, in __call__
        return self.main(*args, **kwargs)
      File "/home/brian/anaconda2/lib/python2.7/site-packages/click/core.py", line 696, in main
        rv = self.invoke(ctx)
      File "/home/brian/anaconda2/lib/python2.7/site-packages/click/core.py", line 889, in invoke
        return ctx.invoke(self.callback, **ctx.params)
      File "/home/brian/anaconda2/lib/python2.7/site-packages/click/core.py", line 534, in invoke
        return callback(*args, **kwargs)
      File "/home/brian/anaconda2/lib/python2.7/site-packages/topik/simple_run/cli.py", line 27, in run
        termite_plot=termite)
      File "/home/brian/anaconda2/lib/python2.7/site-packages/topik/simple_run/run.py", line 67, in run_pipeline
        model = models.registered_models[model](vectorized_data, ntopics=ntopics, **kwargs)
      File "/home/brian/anaconda2/lib/python2.7/site-packages/topik/models/lda.py", line 82, in lda
        return ModelOutput(vectorized_corpus=vectorized_output, model_func=_LDA, ntopics=ntopics, **kwargs)
      File "/home/brian/anaconda2/lib/python2.7/site-packages/topik/models/base_model_output.py", line 20, in __init__
        vectorized_corpus, **kwargs)
      File "/home/brian/anaconda2/lib/python2.7/site-packages/topik/models/lda.py", line 72, in _LDA
        for topic_no in range(ntopics)}
      File "/home/brian/anaconda2/lib/python2.7/site-packages/topik/models/lda.py", line 72, in <dictcomp>
        for topic_no in range(ntopics)}
      File "/home/brian/anaconda2/lib/python2.7/site-packages/topik/models/lda.py", line 12, in _topic_term_to_array
        term_scores = {term: float(score) for score, term in topic}
      File "/home/brian/anaconda2/lib/python2.7/site-packages/topik/models/lda.py", line 12, in <dictcomp>
        term_scores = {term: float(score) for score, term in topic}
    ValueError: could not convert string to float: s
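
    One plausible explanation, judging by the last frame: lda.py assumes gensim's show_topic yields (score, term) pairs, while the installed gensim appears to return (term, score), so float() receives the start of a word instead of a number. A hedged sketch of the likely one-line fix (an assumption about the gensim version, not a confirmed patch):

        # Swap the unpacking order for gensim versions that yield (term, score):
        term_scores = {term: float(score) for term, score in topic}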

    opened by brianrusso 2
  • Add wait mechanism in preprocess between append and subsequent get_field

    I consistently encounter a KeyError for the "token_..." field when using the 'elastic' output_type. I can see that the field exists if I manually view an individual document in the browser, but there appears to be some lag between appending the tokenized document and actually being able to retrieve it. I added a 1-second wait after the append loop, and that appears to have solved the problem.
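
    A fixed sleep works, but an explicit index refresh is more deterministic. A minimal sketch, assuming the elasticsearch-py client (the index name is a placeholder):

        from elasticsearch import Elasticsearch

        es = Elasticsearch()
        # ... append/index the tokenized documents here ...
        # Force a refresh so the appended fields are visible to reads
        # before the subsequent get_field call.
        es.indices.refresh(index="topik_corpus")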

    bug 
    opened by youngblood 2
  • Need way of saving corpus

    Obviously, this will be per-class specific. I envision the dictionary storage doing some serialization, but the Elasticsearch backend should store a file with only connection details and the current field selections.
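
    For the Elasticsearch case, something like the following could work. This is a hypothetical sketch (the function, filename, and keys are illustrative, not topik's API):

        import json

        def save_corpus_metadata(path, host, index, content_field):
            # Persist only connection details and field selections,
            # not the document data itself.
            with open(path, "w") as f:
                json.dump({"host": host, "index": index,
                           "content_field": content_field}, f)

        save_corpus_metadata("corpus.json", "localhost:9200", "reviews", "text")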

    enhancement 
    opened by msarahan 2
  • Avoid use of variables that are commonly used for other purposes, like `np`

    Example: https://github.com/ContinuumIO/topik/blob/master/topik/tokenizers/entities.py#L87

    np is commonly used to point at NumPy, via import numpy as np.

    Eliminate all such occurrences (as well as other common ones, like sp for scipy, pd for pandas, etc.).
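
    A contrived sketch of why this matters (not topik's actual code): rebinding np as a loop variable silently shadows the NumPy alias, and the failure only surfaces later:

        import numpy as np

        phrases = ["topic model", "text mining"]
        for np in phrases:  # rebinds np, shadowing the numpy alias
            print(np)

        np.zeros(3)  # AttributeError: 'str' object has no attribute 'zeros'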

    opened by gpfreitas 1
  • Youngblood/store param strings

    • adds vectorization to the CLI
    • changed the project run_model default to lda
    • changed the datatype of the individual weight values in the lda matrices from numpy.float64 to float, in order to match plsa and, more importantly, to decode successfully from file using jsonpickle
    • minor documentation updates

    opened by youngblood 1
  • Youngblood/cli fixes

    • renames run.run_model to run.run_pipeline; updates imports and function calls accordingly
    • changes the default visualization for run_pipeline to lda_vis
    • fixes some default parameters in models.run_model
    • minor updates to documentation code examples
    • prevents the TFIDF/LDA combination when using projects
    • (full fix, including storage of corpus parameter strings, coming in a separate PR)

    opened by youngblood 1
  • Youngblood/add viz to docs

    Added plots to documentation. This is a workaround to keep using readthedocs for now, and I am intentionally not closing the associated issue because it will need to be solved again once we switch doc hosting platforms.

    opened by youngblood 1
  • pyLDAvis ValidationError: Not all rows (distributions) in doc_topic_dists sum to 1

    I am getting the error below when trying to visualize an HDP model trained with gensim:

    ---------------------------------------------------------------------------
    ValidationError                           Traceback (most recent call last)
    <ipython-input> in <module>()
    ----> 1 vis_data_hdp = gensimvis.prepare(hdpmodel, corpus, dictionary)
          2 #pyLDAvis.display(vis_data_hdp)

    C:\Anaconda2\lib\site-packages\pyLDAvis\gensim.pyc in prepare(topic_model, corpus, dictionary, doc_topic_dist, **kwargs)
        110     """
        111     opts = fp.merge(_extract_data(topic_model, corpus, dictionary, doc_topic_dist), kwargs)
    --> 112     return vis_prepare(**opts)

    C:\Anaconda2\lib\site-packages\pyLDAvis\_prepare.pyc in prepare(topic_term_dists, doc_topic_dists, doc_lengths, vocab, term_frequency, R, lambda_step, mds, n_jobs, plot_opts, sort_topics)
        372     doc_lengths = _series_with_name(doc_lengths, 'doc_length')
        373     vocab = _series_with_name(vocab, 'vocab')
    --> 374     _input_validate(topic_term_dists, doc_topic_dists, doc_lengths, vocab, term_frequency)
        375     R = min(R, len(vocab))
        376

    C:\Anaconda2\lib\site-packages\pyLDAvis\_prepare.pyc in _input_validate(*args)
         63     res = _input_check(*args)
         64     if res:
    ---> 65         raise ValidationError('\n' + '\n'.join([' * ' + s for s in res]))
         66
         67

    ValidationError:
     * Not all rows (distributions) in doc_topic_dists sum to 1.

    To train the HDP model I used the following syntax:

        hdpmodel = models.hdpmodel.HdpModel(corpus, dictionary)

    The corpus looks like this:

        [[(0, 1), (1, 1), (2, 1), (3, 1), (4, 1), (5, 1), (6, 1), (7, 1), (8, 1), (9, 1), (10, 1), (11, 2), (12, 1), (13, 2), (14, 1), (15, 1), (16, 1), (17, 1), (18, 1), (19, 1), (20, 1), (21, 1), (22, 1), (23, 1), (24, 1), (25, 1), (26, 1), (27, 1), (28, 1), (29, 1), (30, 1), (31, 1), (32, 1), (33, 4), (34, 1), (35, 1), (36, 1), (37, 1), (38, 1), (39, 2), (40, 1), (41, 2), (42, 1), (43, 2), (44, 1), (45, 1), (46, 1), (47, 3), (48, 1), (49, 1), (50, 2), (51, 1), (52, 1), (53, 1), (54, 1), (55, 1), (56, 1), (57, 1), (58, 1), (59, 1), (60, 1), (61, 1), (62, 1), (63, 1), (64, 1), (65, 1)]

    The dictionary looks like this:

        [u'', u'dacteur', u'reallocations', u'advcompliance', u'resolveboth............
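
    A common workaround for this error (not from this thread): gensim's HdpModel can produce truncated per-document topic distributions whose rows do not sum exactly to 1, so renormalizing the document-topic matrix before calling prepare often helps. A hedged sketch:

        import numpy as np

        # Hypothetical (n_docs, n_topics) matrix of unnormalized topic weights.
        doc_topic = np.array([[0.2, 0.3], [0.5, 0.4]])
        # Renormalize each row so every distribution sums to 1,
        # which is what pyLDAvis validates.
        doc_topic = doc_topic / doc_topic.sum(axis=1, keepdims=True)
        print(doc_topic.sum(axis=1))  # [1. 1.]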

    opened by imranshaikmuma 2
  • Intel MKL FATAL ERROR: Cannot load libmkl_avx.so or libmkl_def.so.

    I encountered the error "Intel MKL FATAL ERROR: Cannot load libmkl_avx.so or libmkl_def.so." when running topik --help.

    I installed topik using conda install -c memex topik and I am running Python 2.7.11 :: Anaconda 2.5.0.

    The two files in question are in the /home/user/anaconda2/lib directory and they look intact, 36M and 30M respectively in size, and the directory path is in my LD_LIBRARY_PATH and DYLD_LIBRARY_PATH environment variables.

    Is there anything I am missing here? Any help?

    opened by geledek 0
  • pyLDAvis Plotting Data Structures Issues

    pyLDAvis Plotting Data Structures Issues

    There are several issues with the various data structures that need fixing. These fixes will make them much more coherent. I'll list them here:

    • [ ] prepared_model_vis_data.token_table uses a non-unique index, namely the unique id for each term. This needs to be a proper index, as it causes attempts to serialize the DataFrame to fail.
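
    A plausible fix (an assumption, not the project's committed approach) is to promote the term id into a regular column and let pandas assign a unique default index before serializing:

        import pandas as pd

        # Hypothetical token_table with a non-unique term-id index.
        token_table = pd.DataFrame({"term": ["model", "model"]}, index=[7, 7])
        # reset_index() moves the duplicated ids into a column; the new
        # RangeIndex is unique, so the DataFrame serializes cleanly.
        token_table = token_table.reset_index().rename(columns={"index": "term_id"})
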
    enhancement 
    opened by brittainhard 0
  • Exclude empty documents and log their occurrence.

    Exclude empty documents and log their occurrence.

    We should exclude empty documents because they generate useless output at best, and crashes at worst.

    However, we must not silently drop the document, as it may be useful for the user to know that there is an empty document in the database.
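
    A minimal sketch of the intended behavior (the function and message are illustrative, not topik's API):

        import logging

        logger = logging.getLogger(__name__)

        def filter_empty(documents):
            # Yield non-empty documents; log each one that gets dropped
            # so the user still learns about empty entries.
            for i, doc in enumerate(documents):
                if not doc or not doc.strip():
                    logger.warning("Skipping empty document at position %d", i)
                    continue
                yield doc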

    opened by gpfreitas 0
  • Add list of phrases to look for in the simple parser.

    Add list of phrases to look for in the simple parser.

    In some domains, certain expressions (phrases, compound words) are very common and meaningful. Having the simple tokenizer recognize such expressions would be very useful, and it could be done simply by passing all tokens through a transformation that recognizes those expressions and replaces the corresponding token sequences with them. Reference:

    http://www.mimno.org/articles/phrases/

    That would improve the performance of models using tokenizers.simple, especially in certain domains.
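
    One way to sketch that transformation (purely illustrative; topik's tokenizer API may differ) is a pass that greedily merges known two-token phrases into single tokens:

        def merge_phrases(tokens, phrases):
            # Replace known two-token phrases with one joined token.
            out, i = [], 0
            while i < len(tokens):
                pair = tuple(tokens[i:i + 2])
                if pair in phrases:
                    out.append("_".join(pair))  # e.g. "topic_model"
                    i += 2
                else:
                    out.append(tokens[i])
                    i += 1
            return out

        tokens = ["topic", "model", "toolbox"]
        print(merge_phrases(tokens, {("topic", "model")}))
        # ['topic_model', 'toolbox']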

    enhancement 
    opened by gpfreitas 0
Releases(v0.3.1)
  • v0.3.1(Apr 21, 2016)

  • v0.3.0(Nov 30, 2015)

    This version is a major update of the API to make it consistent across all modules. Each step is now expected to be a function that returns either an iterator of content or some more complicated object that aids in the presentation of results. Each step is registered with a borg-pattern dictionary (sketched below), which should facilitate future integration with GUIs.

    Source code(tar.gz)
    Source code(zip)
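
    For readers unfamiliar with the pattern mentioned above, a borg-pattern registry is an object whose instances all share one state, so every module registers steps into the same dictionary. A minimal sketch (not topik's actual implementation):

        class BorgRegistry(object):
            # Borg pattern: every instance shares the same state.
            _shared_state = {"registered": {}}

            def __init__(self):
                self.__dict__ = self._shared_state

            def register(self, name):
                def decorator(func):
                    self.registered[name] = func
                    return func
                return decorator

        registry = BorgRegistry()

        @registry.register("lda")
        def lda(vectorized_data, ntopics=10):
            return vectorized_data  # placeholder step

        # A second instance sees the same registrations.
        assert "lda" in BorgRegistry().registered
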
  • v0.2.2(Oct 15, 2015)

    • update documentation to show (interactive!) plots
    • fix LDA model issue where word weights did not sum to 1, causing an LDAvis validation error
    Source code(tar.gz)
    Source code(zip)
  • v0.2.1(Oct 10, 2015)

  • v0.2.0(Oct 9, 2015)

    • Refactor with aim towards modularity at each step
    • add elasticsearch input source
    • add elasticsearch as output backend option
    • add initial PLSA model algorithm
    • expand documentation; add examples of using Topik with Python API
    • add API docs from docstrings
    • add continuous integration with Travis CI
    • add code coverage monitoring with Coveralls
    • add code analysis with Scrutinizer
    • replace R-LDAvis with PyLDAvis to eliminate R dependency for simplicity
    • multitudinous bug fixes guided by Travis + doctests
    Source code(tar.gz)
    Source code(zip)
Owner
Anaconda, Inc. (formerly Continuum Analytics, Inc.)
Advanced data processing, analysis, and visualization tools for Python & R.