Lightweight NLP library in pure Python - currently implements a text classifier

GPTC

General-purpose text classifier in Python

GPTC provides both a CLI tool and a Python library.

CLI Tool

Classifying text

python -m gptc classify [-n <max_ngram_length>] <compiled model file>

This will prompt for a string, classify it, and print a JSON dict of the form {category: probability, ...} to stdout. (For information about -n <max_ngram_length>, see the section "Ngrams.")
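For example, with a model containing two categories, the output might look like this (the category names and probabilities here are illustrative, not actual GPTC output):

```json
{"twain": 0.92, "shakespeare": 0.08}
```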

Alternatively, if you only need the most likely category, you can use this:

python -m gptc classify [-n <max_ngram_length>] <-c|--category> <compiled model file>

This will prompt for a string and classify it, printing only the most likely category to stdout (or "None" if it cannot classify the text).

Compiling models

python -m gptc compile [-n <max_ngram_length>] <raw model file>

This will print the compiled model in JSON to stdout.

Library

gptc.Classifier(model, max_ngram_length=1)

Create a Classifier object using the given compiled model (as a dict, not JSON).

For information about max_ngram_length, see section "Ngrams."

Classifier.confidence(text)

Classify text. Returns a dict of the format {category: probability, ...}

Classifier.classify(text)

Classify text. Returns the category into which the text is placed (as a string), or None when it cannot classify the text.

gptc.compile(raw_model, max_ngram_length=1)

Compile a raw model (as a list, not JSON) and return the compiled model (as a dict).

For information about max_ngram_length, see section "Ngrams."

Ngrams

GPTC optionally supports using ngrams to improve classification accuracy. They are disabled by default (maximum length set to 1) for performance and compatibility reasons. Enabling them significantly increases the time required both for compilation and classification. The effect seems more significant for compilation than for classification. Compiled models are also much larger when ngrams are enabled. Larger maximum ngram lengths will result in slower performance and larger files. It is a good idea to experiment with different values and use the highest one at which GPTC is fast enough and models are small enough for your needs.

Once a model is compiled at a certain maximum ngram length, it cannot be used for classification with a higher value. If you instantiate a Classifier with a max_ngram_length greater than the one the model was compiled with, the value will be silently reduced to the one used when compiling the model.

Models compiled with older versions of GPTC which did not support ngrams are handled the same way as models compiled with max_ngram_length=1.

Model format

This section explains the raw model format, which is how you should create and edit models.

Raw models are formatted as a list of dicts. See below for the format:

[
    {
        "text": "<text in the category>",
        "category": "<the category>"
    }
]

GPTC handles raw models as Python lists of dicts of strs, and compiled models as dicts of strs and floats; models can be stored in any way these Python objects can be. However, storing them as JSON is recommended for compatibility with the command-line tool.
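As a sketch of that recommendation, a raw model can be written to and read back from JSON using only the standard library (the structure below matches the format above; the texts and category names are placeholders):

```python
import json

# A minimal raw model: a list of dicts with "text" and "category" keys.
raw_model = [
    {"text": "some text in the first category", "category": "first"},
    {"text": "some text in the second category", "category": "second"},
]

# Serialize to JSON, as the command-line tool expects for raw model files.
serialized = json.dumps(raw_model, indent=4)

# Reading it back yields the same list of dicts the library works with.
assert json.loads(serialized) == raw_model
```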

Example model

An example model, designed to distinguish between texts written by Mark Twain and texts written by William Shakespeare, is available in the models directory. The raw model is in models/raw.json; the compiled model is in models/compiled.json.

The example model was compiled with max_ngram_length=10.

Benchmark

A benchmark script is available for comparing performance of GPTC between different Python versions. To use it, run benchmark.py with all of the Python installations you want to test. It tests both compilation and classification. It uses the default Twain/Shakespeare model for both, and for classification it uses Mark Antony's "Friends, Romans, countrymen" speech from Shakespeare's Julius Caesar.