# GPTC

General-purpose text classifier in Python

GPTC provides both a CLI tool and a Python library.

## Installation

    pip install gptc

## CLI Tool

### Classifying text

    gptc classify [-n <max_ngram_length>] <compiled model file>

This will prompt for a string and classify it, then print (in JSON) a dict of the format `{category: probability, category: probability, ...}` to stdout. (For information about `-n <max_ngram_length>`, see section "Ngrams.")

Alternatively, if you only need the most likely category, you can use this:

    gptc classify [-n <max_ngram_length>] <-c|--category> <compiled model file>

This will prompt for a string and classify it, outputting the category on stdout (or "None" if it cannot determine anything).

### Compiling models

    gptc compile [-n <max_ngram_length>] [-c <min_count>] <raw model file>

This will print the compiled model, encoded in binary format, to stdout.

If `-c` is specified, words and ngrams used fewer than `min_count` times will be excluded from the compiled model.

### Packing models

    gptc pack <directory>

This will print the raw model in JSON to stdout. See `models/unpacked/` for an example of the format. Any exceptions will be printed to stderr.

## Library

### `Model.serialize()`

Returns a `bytes` representing the model.

### `gptc.deserialize(encoded_model)`

Deserialize a `Model` from a `bytes` returned by `Model.serialize()`.

### `Model.confidence(text, max_ngram_length)`

Classify `text`. Returns a dict of the format `{category: probability, category: probability, ...}`.

Note that this may not include values for all categories. If there are no common words between the input and the training data (likely, for example, with input in a different language from the training data), an empty dict will be returned.

For information about `max_ngram_length`, see section "Ngrams."

### `Model.get(token)`

Return a confidence dict for the given token or ngram. This function is very similar to `Model.confidence()`, except it treats the input as a single token or ngram.

### `gptc.compile(raw_model, max_ngram_length=1, min_count=1)`

Compile a raw model (as a list, not JSON) and return the compiled model (as a `gptc.Model` object).

For information about `max_ngram_length`, see section "Ngrams."

Words or ngrams used fewer than `min_count` times throughout the input text are excluded from the model.

### `gptc.pack(directory, print_exceptions=False)`

Pack the model in `directory` and return a tuple of the format:

    (raw_model, [(exception,), (exception,), ...])

Note that the exceptions are contained in single-item tuples. This is to allow more information to be provided without breaking the API in future versions of GPTC.

See `models/unpacked/` for an example of the format.

### `gptc.Classifier(model, max_ngram_length=1)`

`Classifier` objects are deprecated starting with GPTC 3.1.0 and will be removed in 4.0.0. See [the README from 3.0.1](https://git.kj7rrv.com/kj7rrv/gptc/src/tag/v3.0.1/README.md) if you need documentation.

## Ngrams

GPTC optionally supports using ngrams to improve classification accuracy. They are disabled by default (maximum length set to 1) for performance reasons. Enabling them significantly increases the time required for both compilation and classification; the effect seems more significant for compilation than for classification. Compiled models are also much larger when ngrams are enabled.

Larger maximum ngram lengths result in slower performance and larger files, so it is a good idea to experiment with different values and use the highest one at which GPTC is fast enough and models are small enough for your needs.
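As a starting point for such experiments, here is a minimal sketch (hypothetical code, not part of GPTC) that uses only the documented `gptc.compile()` and `Model.serialize()` calls to measure compilation time and compiled size; the ngram lengths tried are arbitrary, and the raw model path refers to the example model described under "Example model" below:

    import json
    import time

    import gptc

    # Load the example raw model (a list of dicts; see "Model format").
    with open("models/raw.json") as f:
        raw_model = json.load(f)

    # Try a few maximum ngram lengths and report time and size for each.
    for length in (1, 2, 3, 5, 10):
        start = time.perf_counter()
        model = gptc.compile(raw_model, max_ngram_length=length)
        elapsed = time.perf_counter() - start
        size = len(model.serialize())  # serialize() returns bytes
        print(f"max_ngram_length={length}: {elapsed:.2f} s, {size} bytes")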
Once a model is compiled at a certain maximum ngram length, it cannot be used for classification with a higher value. If you instantiate a `Classifier` with a model compiled with a lower `max_ngram_length`, the value will be silently reduced to the one used when compiling the model.

## Model format

This section explains the raw model format, which is how models are created and edited.

Raw models are formatted as a list of dicts. See below for the format:

    [
        {
            "text": "<text of sample>",
            "category": "<category of sample>"
        }
    ]

GPTC handles raw models as `list`s of `dict`s of `str`s (`List[Dict[str, str]]`), and they can be stored in any way these Python objects can be. However, it is recommended to store them in JSON format for compatibility with the command-line tool.

## Emoji

GPTC treats individual emoji as words.

## Example model

An example model, which is designed to distinguish between texts written by Mark Twain and those written by William Shakespeare, is available in `models`. The raw model is in `models/raw.json`; the compiled model is in `models/compiled.json`.

The example model was compiled with `max_ngram_length=10`.

## Benchmark

A benchmark script is available for comparing the performance of GPTC across different Python versions. To use it, run `benchmark.py` with each of the Python installations you want to test.

It tests both compilation and classification. It uses the default Twain/Shakespeare model for both, and for classification it uses [Mark Antony's "Friends, Romans, countrymen" speech](https://en.wikipedia.org/wiki/Friends,_Romans,_countrymen,_lend_me_your_ears) from Shakespeare's *Julius Caesar*.
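Putting the pieces together, here is a short end-to-end sketch of the library API documented above (hypothetical code; the output file name `twain_shakespeare.gptc` is illustrative). It compiles the example model, round-trips it through `serialize()` and `deserialize()`, and classifies the opening line of the benchmark speech:

    import json

    import gptc

    # Compile the example raw model with ngrams up to length 3.
    with open("models/raw.json") as f:
        raw_model = json.load(f)
    model = gptc.compile(raw_model, max_ngram_length=3)

    # Save the compiled model to disk, then load it back.
    with open("twain_shakespeare.gptc", "wb") as f:
        f.write(model.serialize())
    with open("twain_shakespeare.gptc", "rb") as f:
        model = gptc.deserialize(f.read())

    # Classify some text. The max_ngram_length used here must not exceed
    # the value used at compile time (see "Ngrams").
    confidences = model.confidence(
        "Friends, Romans, countrymen, lend me your ears", 3
    )
    print(confidences)  # {category: probability, ...}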