Compare commits

...

80 Commits

Author SHA1 Message Date
71e9249ff4 Classifier objects will be removed in 5.0 2023-05-31 13:42:42 -07:00
97c4eef086
Move deserialize to Model object 2023-04-17 21:35:38 -07:00
457b569741
Update README 2023-04-17 21:33:03 -07:00
4546c4cffa
Fix profiler and benchmark 2023-04-17 21:28:24 -07:00
7b7ef39d0b
Merge compiler into model.py 2023-04-17 21:15:18 -07:00
a252a15e9d
Clean up code 2023-04-17 21:06:47 -07:00
9513025e60
Fix type annotations 2023-04-17 18:16:20 -07:00
2c3fc77ba6
Finish classification explanations
A couple things I missed in 7f68dc6fc6
2023-04-16 15:48:19 -07:00
d8f3d2e701
Bump model version
99ad07a876 broke the model format,
although probably only in a few edge cases

Still enough of a change for a model version bump
2023-04-16 15:36:49 -07:00
7f68dc6fc6
Add classification explanations
Closes #17
2023-04-16 15:35:53 -07:00
99ad07a876
Casefold
Closes #14
2023-04-16 14:49:03 -07:00
f38f4ca801
Add profiler 2023-04-16 14:27:31 -07:00
56550ca457
Remove Classifier objects
Closes #16
2023-04-16 14:27:07 -07:00
75fdb5ba3c
Split compiler into two functions 2023-01-15 09:39:35 -08:00
071656c2d2
Bump version to 4.0.1 2022-12-24 12:49:12 -08:00
aad590636a
Fix type annotations 2022-12-24 12:48:43 -08:00
099e810a18
Fix check 2022-12-24 12:44:09 -08:00
822aa7d1fd
Bump version to 4.0.0 2022-12-24 12:18:51 -08:00
8417c8acda
Recompile model 2022-12-24 12:18:25 -08:00
ec7f4116fc
Include file name of output in arguments 2022-12-24 12:17:44 -08:00
f8dbc78b82
Allow hash algorithm selection
Closes #9
2022-12-24 11:18:05 -08:00
6f21e0d4e9
Remove debug print lines from compiler 2022-12-24 10:48:09 -08:00
41bba61410
Remove has_emoji and bump model version
Closes #11
2022-12-24 10:47:23 -08:00
10668691ea
Normalize characters
Closes #3
2022-12-24 10:46:40 -08:00
295a1189de
Include numbers in tokenized output
Closes #12
2022-12-24 10:42:50 -08:00
74b2ba81b9
Deserialize from file 2022-12-23 10:49:24 -08:00
9916744801
New type annotation for serialize 2022-12-23 10:33:56 -08:00
7e7b5f3e9c
Performance improvements 2022-12-22 18:01:37 -08:00
a76c6d3da8
Bump version to 3.1.1 2022-11-27 15:01:06 -08:00
c84758af56
list, not tuple 2022-11-27 15:00:37 -08:00
3a9c8d2bf2
Revert "Bump version to 3.1.1"
This reverts commit 12f97ae765.
2022-11-27 14:56:10 -08:00
12f97ae765
Bump version to 3.1.1 2022-11-27 14:54:11 -08:00
c754293d69
Compiler performance improvements 2022-11-27 14:32:44 -08:00
8d42a92848
Add type annotation to Model.get() 2022-11-27 13:36:49 -08:00
e4eb322aa7
Bump version to 3.1.0 2022-11-26 18:37:11 -08:00
83ef71e8ce
Remove doc for gptc classify --category 2022-11-26 18:36:41 -08:00
991d3fd54a
Revert "Bump version to 3.1.0"
This reverts commit b3e6a13e65.
2022-11-26 18:36:18 -08:00
b3e6a13e65
Bump version to 3.1.0 2022-11-26 18:34:04 -08:00
b1228edd9c
Add CLI for Model.get() 2022-11-26 18:28:44 -08:00
25192ffddf
Add ability to look up individual token
Closes #10
2022-11-26 18:17:02 -08:00
548d670960
Use Classifier for --category 2022-11-26 17:50:26 -08:00
b3a43150d8
Split hash function 2022-11-26 17:42:42 -08:00
08437a2696
Add normalize() 2022-11-26 17:17:28 -08:00
fc4665bb9e
Separate tokenization and hashing 2022-11-26 17:04:56 -08:00
30287288f2
Fix README issues 2022-11-26 16:45:30 -08:00
448f200923
Add confidence to Model; deprecate Classifier 2022-11-26 16:41:29 -08:00
b4766cb613
Bump version to 3.0.1 2022-11-25 19:44:32 -08:00
f1a1ed9e2a
Remove most emoji-optional code
Almost all of the code previously used to make the emoji module optional
is removed in this commit. It was always my intent to make the `emoji`
module a hard dependency in v3.0.0 and remove the code for making it
optional, but for some reason I remembered to do the former but not the
latter; in fact, I added emoji-optional code to the new model handling
code. I can't completely remove this code because 3.0.0 will not
successfully deserialize a model without the `has_emoji` field in the
JSON config options, but this commit removes as much as possible without
breaking the model format and API version.

See also issue #11
2022-11-25 19:39:31 -08:00
7ecb7dd90a
Bump version to 3.0.0 2022-11-23 17:48:46 -08:00
3340abbf15
Fix CLI tool 2022-11-23 17:47:27 -08:00
a10569b5ab
New model format
Use Model objects and binary serialization format
2022-11-23 17:01:04 -08:00
f4ae5f851d
Hash words and ngrams 2022-11-23 12:53:01 -08:00
1d1ccbb7cc
Add min_count 2022-11-23 11:42:58 -08:00
e17c79c231
Remove obsolete licensing note in README 2022-11-23 11:34:55 -08:00
af1d1749d2
Refactor word count dict in compiler
This makes future changes to the algorithm much simpler.
2022-11-23 11:33:40 -08:00
aea35ad059
Switch to GPL 2022-11-23 11:28:27 -08:00
30a2ebe33e
Bump version to 2.1.3 2022-11-22 11:47:40 -08:00
4cb8b71407
Merge branch 'master' of https://git.kj7rrv.com/kj7rrv/gptc 2022-11-22 11:46:13 -08:00
7d1cbcaee0
Make sure text is lowercase 2022-11-22 11:44:13 -08:00
82524345f3 Update 'README.md' 2022-09-23 19:15:16 -07:00
c2cd6f62fb Revert "Switch to statistics.stdev"
This reverts commit 76df1dc56d.

Fix major performance regression
2022-07-22 14:45:43 -07:00
76df1dc56d Switch to statistics.stdev 2022-07-22 14:22:01 -07:00
ad138b37d6 Bump version to 2.1.2 2022-07-21 11:49:59 -07:00
3634a10aeb Fix another emoji bug 2022-07-21 11:49:35 -07:00
7250787228 Bump version to 2.1.1 2022-07-20 14:06:56 -07:00
9538cf8c22 Fix emoji handling 2022-07-20 14:06:27 -07:00
185692790f Add emoji checks, improve docs 2022-07-19 19:15:59 -07:00
73b800d60d remove python -m 2022-07-19 17:02:57 -07:00
b61ad35ae7 Bump version to v2.1.0 2022-07-19 17:01:08 -07:00
dc6eb48625 Optional dependency on emoji 2022-07-19 16:43:39 -07:00
ff8cba84c7 format pack.py 2022-07-19 16:02:05 -07:00
8c6dd0bde9 Type checks for pack 2022-07-19 10:43:10 -07:00
5082c2226b Move pack to main module; format code 2022-07-18 16:03:58 -07:00
e711767d24 Add type checks to all functions that need them 2022-07-17 18:42:38 -07:00
67ac3a4591 Working type checks 2022-07-17 18:22:19 -07:00
b36d8e6081 Fix annotations 2022-07-17 17:08:11 -07:00
48639f5d8d Non-working type checks 2022-07-17 16:51:19 -07:00
a207e281e7 Format code with black 2022-07-17 16:28:04 -07:00
e272ab42d1 Document emojis 2022-07-17 16:27:16 -07:00
bd0028a108 Add emoji support to tokenizer 2022-07-17 16:14:02 -07:00
18 changed files with 644 additions and 474 deletions

165
LGPL-3.0

@@ -1,165 +0,0 @@
GNU LESSER GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
This version of the GNU Lesser General Public License incorporates
the terms and conditions of version 3 of the GNU General Public
License, supplemented by the additional permissions listed below.
0. Additional Definitions.
As used herein, "this License" refers to version 3 of the GNU Lesser
General Public License, and the "GNU GPL" refers to version 3 of the GNU
General Public License.
"The Library" refers to a covered work governed by this License,
other than an Application or a Combined Work as defined below.
An "Application" is any work that makes use of an interface provided
by the Library, but which is not otherwise based on the Library.
Defining a subclass of a class defined by the Library is deemed a mode
of using an interface provided by the Library.
A "Combined Work" is a work produced by combining or linking an
Application with the Library. The particular version of the Library
with which the Combined Work was made is also called the "Linked
Version".
The "Minimal Corresponding Source" for a Combined Work means the
Corresponding Source for the Combined Work, excluding any source code
for portions of the Combined Work that, considered in isolation, are
based on the Application, and not on the Linked Version.
The "Corresponding Application Code" for a Combined Work means the
object code and/or source code for the Application, including any data
and utility programs needed for reproducing the Combined Work from the
Application, but excluding the System Libraries of the Combined Work.
1. Exception to Section 3 of the GNU GPL.
You may convey a covered work under sections 3 and 4 of this License
without being bound by section 3 of the GNU GPL.
2. Conveying Modified Versions.
If you modify a copy of the Library, and, in your modifications, a
facility refers to a function or data to be supplied by an Application
that uses the facility (other than as an argument passed when the
facility is invoked), then you may convey a copy of the modified
version:
a) under this License, provided that you make a good faith effort to
ensure that, in the event an Application does not supply the
function or data, the facility still operates, and performs
whatever part of its purpose remains meaningful, or
b) under the GNU GPL, with none of the additional permissions of
this License applicable to that copy.
3. Object Code Incorporating Material from Library Header Files.
The object code form of an Application may incorporate material from
a header file that is part of the Library. You may convey such object
code under terms of your choice, provided that, if the incorporated
material is not limited to numerical parameters, data structure
layouts and accessors, or small macros, inline functions and templates
(ten or fewer lines in length), you do both of the following:
a) Give prominent notice with each copy of the object code that the
Library is used in it and that the Library and its use are
covered by this License.
b) Accompany the object code with a copy of the GNU GPL and this license
document.
4. Combined Works.
You may convey a Combined Work under terms of your choice that,
taken together, effectively do not restrict modification of the
portions of the Library contained in the Combined Work and reverse
engineering for debugging such modifications, if you also do each of
the following:
a) Give prominent notice with each copy of the Combined Work that
the Library is used in it and that the Library and its use are
covered by this License.
b) Accompany the Combined Work with a copy of the GNU GPL and this license
document.
c) For a Combined Work that displays copyright notices during
execution, include the copyright notice for the Library among
these notices, as well as a reference directing the user to the
copies of the GNU GPL and this license document.
d) Do one of the following:
0) Convey the Minimal Corresponding Source under the terms of this
License, and the Corresponding Application Code in a form
suitable for, and under terms that permit, the user to
recombine or relink the Application with a modified version of
the Linked Version to produce a modified Combined Work, in the
manner specified by section 6 of the GNU GPL for conveying
Corresponding Source.
1) Use a suitable shared library mechanism for linking with the
Library. A suitable mechanism is one that (a) uses at run time
a copy of the Library already present on the user's computer
system, and (b) will operate properly with a modified version
of the Library that is interface-compatible with the Linked
Version.
e) Provide Installation Information, but only if you would otherwise
be required to provide such information under section 6 of the
GNU GPL, and only to the extent that such information is
necessary to install and execute a modified version of the
Combined Work produced by recombining or relinking the
Application with a modified version of the Linked Version. (If
you use option 4d0, the Installation Information must accompany
the Minimal Corresponding Source and Corresponding Application
Code. If you use option 4d1, you must provide the Installation
Information in the manner specified by section 6 of the GNU GPL
for conveying Corresponding Source.)
5. Combined Libraries.
You may place library facilities that are a work based on the
Library side by side in a single library together with other library
facilities that are not Applications and are not covered by this
License, and convey such a combined library under terms of your
choice, if you do both of the following:
a) Accompany the combined library with a copy of the same work based
on the Library, uncombined with any other library facilities,
conveyed under the terms of this License.
b) Give prominent notice with the combined library that part of it
is a work based on the Library, and explaining where to find the
accompanying uncombined form of the same work.
6. Revised Versions of the GNU Lesser General Public License.
The Free Software Foundation may publish revised and/or new versions
of the GNU Lesser General Public License from time to time. Such new
versions will be similar in spirit to the present version, but may
differ in detail to address new problems or concerns.
Each version is given a distinguishing version number. If the
Library as you received it specifies that a certain numbered version
of the GNU Lesser General Public License "or any later version"
applies to it, you have the option of following the terms and
conditions either of that published version or of any later version
published by the Free Software Foundation. If the Library as you
received it does not specify a version number of the GNU Lesser
General Public License, you may choose any version of the GNU Lesser
General Public License ever published by the Free Software Foundation.
If the Library as you received it specifies that a proxy can decide
whether future versions of the GNU Lesser General Public License shall
apply, that proxy's public statement of acceptance of any version is
permanent authorization for you to choose that version for the
Library.

11
LICENSE

@@ -1,14 +1,13 @@
 Copyright (c) 2020-2022 Samuel L Sloniker
 This program is free software: you can redistribute it and/or modify it under
-the terms of the GNU Lesser General Public License as published by the Free
-Software Foundation, either version 3 of the License, or (at your option) any
-later version.
+the terms of the GNU General Public License as published by the Free Software
+Foundation, either version 3 of the License, or (at your option) any later
+version.
 This program is distributed in the hope that it will be useful, but WITHOUT ANY
 WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A
 PARTICULAR PURPOSE. See the GNU General Public License for more details.
-You should have received copies of the GNU General Public License and the GNU
-Lesser General Public License along with this program. If not, see
-<https://www.gnu.org/licenses/>.
+You should have received a copy of the GNU General Public License along with
+this program. If not, see <https://www.gnu.org/licenses/>.

142
README.md

@@ -4,78 +4,142 @@ General-purpose text classifier in Python
GPTC provides both a CLI tool and a Python library.
## Installation
pip install gptc
## CLI Tool
### Classifying text
-python -m gptc classify [-n <max_ngram_length>] <compiled model file>
+gptc classify [-n <max_ngram_length>] <compiled model file>
This will prompt for a string and classify it, then print (in JSON) a dict of
the format `{category: probability, category:probability, ...}` to stdout. (For
information about `-n <max_ngram_length>`, see section "Ngrams.")
-Alternatively, if you only need the most likely category, you can use this:
+### Checking individual words or ngrams
-python -m gptc classify [-n <max_ngram_length>] <-c|--category> <compiled model file>
+gptc check <compiled model file> <token or ngram>
-This will prompt for a string and classify it, outputting the category on
-stdout (or "None" if it cannot determine anything).
+This is very similar to `gptc classify`, except it takes the input as an
+argument, and it treats the input as a single token or ngram.
### Compiling models
-python -m gptc compile [-n <max_ngram_length>] <raw model file>
+gptc compile [-n <max_ngram_length>] [-c <min_count>] <raw model file> <compiled model file>
-This will print the compiled model in JSON to stdout.
+This will write the compiled model encoded in binary format to `<compiled model
+file>`.
+If `-c` is specified, words and ngrams used less than `min_count` times will be
+excluded from the compiled model.
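The `min_count` cutoff amounts to a frequency filter over the token stream. A minimal sketch of the idea (the `filter_min_count` helper is hypothetical, not GPTC's actual code):

```python
from collections import Counter

def filter_min_count(tokens, min_count):
    """Keep only tokens used at least min_count times (sketch of -c)."""
    counts = Counter(tokens)
    return {tok: n for tok, n in counts.items() if n >= min_count}

tokens = ["spam", "spam", "ham", "eggs", "spam", "ham"]
# "eggs" appears only once, so it is dropped at min_count=2
print(filter_min_count(tokens, 2))
```

In the real compiler the counts are kept per category before filtering, but the cutoff logic is the same.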
### Packing models
gptc pack <dir>
This will print the raw model in JSON to stdout. See `models/unpacked/` for an
example of the format. Any exceptions will be printed to stderr.
## Library
### `gptc.Classifier(model, max_ngram_length=1)`
### `Model.serialize(file)`
Create a `Classifier` object using the given *compiled* model (as a dict, not
JSON).
Write binary data representing the model to `file`.
For information about `max_ngram_length`, see section "Ngrams."
### `Model.deserialize(encoded_model)`
#### `Classifier.confidence(text)`
Deserialize a `Model` from a file containing data from `Model.serialize()`.
### `Model.confidence(text, max_ngram_length)`
Classify `text`. Returns a dict of the format `{category: probability,
category:probability, ...}`
#### `Classifier.classify(text)`
Classify `text`. Returns the category into which the text is placed (as a
string), or `None` when it cannot classify the text.
### `gptc.compile(raw_model, max_ngram_length=1)`
Compile a raw model (as a list, not JSON) and return the compiled model (as a
dict).
Note that this may not include values for all categories. If there are no
common words between the input and the training data (likely, for example, with
input in a different language from the training data), an empty dict will be
returned.
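The shape of the returned dict can be illustrated with a toy sketch (made-up weights; GPTC additionally applies a weighting step, `gptc.weighting.weight`, before summing, which this sketch omits):

```python
def confidence(tokens, model, names):
    """Sum per-category weights for known tokens, then normalize (sketch)."""
    totals = {}
    for tok in tokens:
        if tok not in model:  # unknown tokens are simply skipped
            continue
        for name, value in zip(names, model[tok]):
            totals[name] = totals.get(name, 0.0) + value / 65535
    grand = sum(totals.values())
    # No token matched the model: return {} rather than guessing
    return {k: v / grand for k, v in totals.items()} if grand else {}

names = ["spam", "ham"]
model = {"viagra": [60000, 5535], "hello": [20000, 45535]}  # toy weights
print(confidence(["viagra", "hello"], model, names))
print(confidence(["bonjour"], model, names))  # no overlap with the model
```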
For information about `max_ngram_length`, see section "Ngrams."
### `Model.get(token)`
Return a confidence dict for the given token or ngram. This function is very
similar to `Model.confidence()`, except it treats the input as a single token
or ngram.
### `Model.compile(raw_model, max_ngram_length=1, min_count=1, hash_algorithm="sha256")`
Compile a raw model (as a list, not JSON) and return the compiled model (as a
`gptc.Model` object).
For information about `max_ngram_length`, see section "Ngrams."
Words or ngrams used less than `min_count` times throughout the input text are
excluded from the model.
The hash algorithm should be left as the default, which may change with a minor
version update, but it can be changed by the application if needed. It is
stored in the model, so changing the algorithm does not affect compatibility.
The following algorithms are supported:
* `md5`
* `sha1`
* `sha224`
* `sha256`
* `sha384`
* `sha512`
* `sha3_224`
* `sha3_384`
* `sha3_256`
* `sha3_512`
* `shake_128`
* `shake_256`
* `blake2b`
* `blake2s`
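Compiled models key on hashed tokens rather than raw strings. The exact digest-to-integer conversion is internal to GPTC; the sketch below only shows that `hashlib` supports the listed algorithms (the 6-byte truncation and byte order here are assumptions, not GPTC's actual scheme):

```python
import hashlib

def hash_token(token, algorithm="sha256"):
    """Hash a token to an integer key (illustrative sketch only)."""
    digest = hashlib.new(algorithm, token.encode("utf-8")).digest()
    # Truncation length and byte order are assumptions for illustration
    return int.from_bytes(digest[:6], "big")

print(hash_token("hello"))
print(hash_token("hello", "blake2b"))
```

Note that the `shake_*` algorithms take an explicit digest length in `hashlib`, so they would need special-casing in a sketch like this.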
### `gptc.pack(directory, print_exceptions=False)`
Pack the model in `directory` and return a tuple of the format:
(raw_model, [(exception,),(exception,)...])
Note that the exceptions are contained in single-item tuples. This is to allow
more information to be provided without breaking the API in future versions of
GPTC.
See `models/unpacked/` for an example of the format.
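The single-item tuples keep the error list forward-compatible: unpacking with a star target absorbs any fields a later version might append. A sketch with a simulated return value in the documented shape (not a real `pack()` call; the path is made up):

```python
# Simulated (raw_model, errors) in the documented shape
raw_model = [{"text": "hello world", "category": "greeting"}]
errors = [(FileNotFoundError("models/unpacked/missing.txt"),)]

for (exception, *extra) in errors:  # *extra absorbs any future fields
    print(type(exception).__name__, exception)
```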
### `gptc.Classifier(model, max_ngram_length=1)`
`Classifier` objects are deprecated starting with GPTC 3.1.0, and will be
removed in 5.0.0. See [the README from
3.0.2](https://git.kj7rrv.com/kj7rrv/gptc/src/tag/v3.0.1/README.md) if you need
documentation.
## Ngrams
GPTC optionally supports using ngrams to improve classification accuracy. They
-are disabled by default (maximum length set to 1) for performance and
-compatibility reasons. Enabling them significantly increases the time required
-both for compilation and classification. The effect seems more significant for
-compilation than for classification. Compiled models are also much larger when
-ngrams are enabled. Larger maximum ngram lengths will result in slower
-performance and larger files. It is a good idea to experiment with different
-values and use the highest one at which GPTC is fast enough and models are
-small enough for your needs.
+are disabled by default (maximum length set to 1) for performance reasons.
+Enabling them significantly increases the time required both for compilation
+and classification. The effect seems more significant for compilation than for
+classification. Compiled models are also much larger when ngrams are enabled.
+Larger maximum ngram lengths will result in slower performance and larger
+files. It is a good idea to experiment with different values and use the
+highest one at which GPTC is fast enough and models are small enough for your
+needs.
Once a model is compiled at a certain maximum ngram length, it cannot be used
for classification with a higher value. If you instantiate a `Classifier` with
a `max_ngram_length` greater than the one the model was compiled with, the
value will be silently reduced to the compiled one.
Models compiled with older versions of GPTC which did not support ngrams are
handled the same way as models compiled with `max_ngram_length=1`.
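The size and speed trade-off follows from how many tokens ngrams generate. A sketch of the ngram step alone (GPTC's real tokenizer also normalizes, handles emoji, and hashes; this only shows the expansion):

```python
def ngrams(words, max_ngram_length):
    """Return every ngram of length 1..max_ngram_length (sketch)."""
    out = []
    for n in range(1, max_ngram_length + 1):
        for i in range(len(words) - n + 1):
            out.append(" ".join(words[i : i + n]))
    return out

words = "the quick brown fox".split()
print(ngrams(words, 1))  # 4 tokens
print(ngrams(words, 2))  # 4 unigrams + 3 bigrams = 7 tokens
```

For a text of `w` words, each extra ngram length adds roughly `w` more tokens, which is why both compile time and model size grow with `max_ngram_length`.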
## Model format
-This section explains the raw model format, which is how you should create and
-edit models.
+This section explains the raw model format, which is how models are created and
+edited.
Raw models are formatted as a list of dicts. See below for the format:
@@ -86,10 +150,14 @@ Raw models are formatted as a list of dicts. See below for the format:
}
]
-GPTC handles models as Python `list`s of `dict`s of `str`s (for raw models) or
-`dict`s of `str`s and `float`s (for compiled models), and they can be stored
-in any way these Python objects can be. However, it is recommended to store
-them in JSON format for compatibility with the command-line tool.
+GPTC handles raw models as `list`s of `dict`s of `str`s (`List[Dict[str,
+str]]`), and they can be stored in any way these Python objects can be.
+However, it is recommended to store them in JSON format for compatibility with
+the command-line tool.
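A minimal raw model in this form, with made-up text and categories, round-tripped through JSON as recommended (the `"text"`/`"category"` keys match those read by the compiler):

```python
import json

raw_model = [
    {"text": "the weather is sunny and warm today", "category": "weather"},
    {"text": "the team won the match last night", "category": "sports"},
]

# Stored as JSON for compatibility with the CLI compiler
encoded = json.dumps(raw_model)
assert json.loads(encoded) == raw_model
```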
## Emoji
GPTC treats individual emoji as words.
## Example model


@@ -1,3 +1,5 @@
+# SPDX-License-Identifier: GPL-3.0-or-later
 import timeit
 import gptc
 import json
@@ -23,7 +25,7 @@ print(
     round(
         1000000
         * timeit.timeit(
-            "gptc.compile(raw_model, max_ngram_length)",
+            "gptc.Model.compile(raw_model, max_ngram_length)",
             number=compile_iterations,
             globals=globals(),
         )
@@ -33,7 +35,9 @@ print(
 )
-classifier = gptc.Classifier(gptc.compile(raw_model, max_ngram_length), max_ngram_length)
+classifier = gptc.Classifier(
+    gptc.compile(raw_model, max_ngram_length), max_ngram_length
+)
 print(
     "Average classification time over",
     classify_iterations,


@@ -1,7 +1,12 @@
-# SPDX-License-Identifier: LGPL-3.0-or-later
+# SPDX-License-Identifier: GPL-3.0-or-later
 """General-Purpose Text Classifier"""
-from gptc.compiler import compile
-from gptc.classifier import Classifier
-from gptc.exceptions import *
 from gptc.pack import pack
+from gptc.model import Model
+from gptc.tokenizer import normalize
+from gptc.exceptions import (
+    GPTCError,
+    ModelError,
+    InvalidModelError,
+)


@@ -1,57 +1,87 @@
 #!/usr/bin/env python3
-# SPDX-License-Identifier: LGPL-3.0-or-later
+# SPDX-License-Identifier: GPL-3.0-or-later

 import argparse
 import json
 import sys
 import gptc


-def main():
+def main() -> None:
     parser = argparse.ArgumentParser(
         description="General Purpose Text Classifier", prog="gptc"
     )
     subparsers = parser.add_subparsers(dest="subparser_name", required=True)

-    compile_parser = subparsers.add_parser("compile", help="compile a raw model")
+    compile_parser = subparsers.add_parser(
+        "compile", help="compile a raw model"
+    )
     compile_parser.add_argument("model", help="raw model to compile")
-    compile_parser.add_argument("--max-ngram-length", "-n", help="maximum ngram length", type=int, default=1)
+    compile_parser.add_argument(
+        "out", help="name of file to write compiled model to"
+    )
+    compile_parser.add_argument(
+        "--max-ngram-length",
+        "-n",
+        help="maximum ngram length",
+        type=int,
+        default=1,
+    )
+    compile_parser.add_argument(
+        "--min-count",
+        "-c",
+        help="minimum use count for word/ngram to be included in model",
+        type=int,
+        default=1,
+    )

     classify_parser = subparsers.add_parser("classify", help="classify text")
     classify_parser.add_argument("model", help="compiled model to use")
-    classify_parser.add_argument("--max-ngram-length", "-n", help="maximum ngram length", type=int, default=1)
-    group = classify_parser.add_mutually_exclusive_group()
-    group.add_argument(
-        "-j",
-        "--json",
-        help="output confidence dict as JSON (default)",
-        action="store_true",
-    )
-    group.add_argument(
-        "-c",
-        "--category",
-        help="output most likely category or `None`",
-        action="store_true",
-    )
+    classify_parser.add_argument(
+        "--max-ngram-length",
+        "-n",
+        help="maximum ngram length",
+        type=int,
+        default=1,
+    )
+
+    check_parser = subparsers.add_parser(
+        "check", help="check one word or ngram in model"
+    )
+    check_parser.add_argument("model", help="compiled model to use")
+    check_parser.add_argument("token", help="token or ngram to check")

     pack_parser = subparsers.add_parser(
         "pack", help="pack a model from a directory"
     )
     pack_parser.add_argument("model", help="directory containing model")

     args = parser.parse_args()

-    with open(args.model, "r") as f:
-        model = json.load(f)
-
     if args.subparser_name == "compile":
-        print(json.dumps(gptc.compile(model, args.max_ngram_length)))
-    else:
-        classifier = gptc.Classifier(model, args.max_ngram_length)
+        with open(args.model, "r", encoding="utf-8") as input_file:
+            model = json.load(input_file)
+        with open(args.out, "wb+") as output_file:
+            gptc.Model.compile(
+                model, args.max_ngram_length, args.min_count
+            ).serialize(output_file)
+    elif args.subparser_name == "classify":
+        with open(args.model, "rb") as model_file:
+            model = gptc.Model.deserialize(model_file)

         if sys.stdin.isatty():
             text = input("Text to analyse: ")
         else:
             text = sys.stdin.read()

-        if args.category:
-            print(classifier.classify(text))
-        else:
-            print(json.dumps(classifier.confidence(text)))
+        print(json.dumps(model.confidence(text, args.max_ngram_length)))
+    elif args.subparser_name == "check":
+        with open(args.model, "rb") as model_file:
+            model = gptc.Model.deserialize(model_file)
+        print(json.dumps(model.get(args.token)))
+    else:
+        print(json.dumps(gptc.pack(args.model, True)[0]))


 if __name__ == "__main__":


@@ -1,96 +0,0 @@
# SPDX-License-Identifier: LGPL-3.0-or-later

import gptc.tokenizer, gptc.compiler, gptc.exceptions, gptc.weighting
import warnings


class Classifier:
    """A text classifier.

    Parameters
    ----------
    model : dict
        A compiled GPTC model.

    max_ngram_length : int
        The maximum ngram length to use when tokenizing input. If this is
        greater than the value used when the model was compiled, it will be
        silently lowered to that value.

    Attributes
    ----------
    model : dict
        The model used.
    """

    def __init__(self, model, max_ngram_length=1):
        if model.get("__version__", 0) != 3:
            raise gptc.exceptions.UnsupportedModelError(
                f"unsupported model version"
            )
        self.model = model
        self.max_ngram_length = min(
            max_ngram_length, model.get("__ngrams__", 1)
        )

    def confidence(self, text):
        """Classify text with confidence.

        Parameters
        ----------
        text : str
            The text to classify

        Returns
        -------
        dict
            {category:probability, category:probability...} or {} if no words
            matching any categories in the model were found
        """
        model = self.model
        text = gptc.tokenizer.tokenize(text, self.max_ngram_length)
        probs = {}
        for word in text:
            try:
                weight, weighted_numbers = gptc.weighting.weight(
                    [i / 65535 for i in model[word]]
                )
                for category, value in enumerate(weighted_numbers):
                    try:
                        probs[category] += value
                    except KeyError:
                        probs[category] = value
            except KeyError:
                pass
        probs = {
            model["__names__"][category]: value
            for category, value in probs.items()
        }
        total = sum(probs.values())
        probs = {category: value / total for category, value in probs.items()}
        return probs

    def classify(self, text):
        """Classify text.

        Parameters
        ----------
        text : str
            The text to classify

        Returns
        -------
        str or None
            The most likely category, or None if no words matching any
            category in the model were found.
        """
        probs = self.confidence(text)
        try:
            return sorted(probs.items(), key=lambda x: x[1])[-1][0]
        except IndexError:
            return None


@@ -1,73 +0,0 @@
# SPDX-License-Identifier: LGPL-3.0-or-later

import gptc.tokenizer


def compile(raw_model, max_ngram_length=1):
    """Compile a raw model.

    Parameters
    ----------
    raw_model : list of dict
        A raw GPTC model.

    max_ngram_length : int
        Maximum ngram length to compile with.

    Returns
    -------
    dict
        A compiled GPTC model.
    """
    categories = {}
    for portion in raw_model:
        text = gptc.tokenizer.tokenize(portion["text"], max_ngram_length)
        category = portion["category"]
        try:
            categories[category] += text
        except KeyError:
            categories[category] = text

    categories_by_count = {}
    names = []
    for category, text in categories.items():
        if not category in names:
            names.append(category)
        categories_by_count[category] = {}
        for word in text:
            try:
                categories_by_count[category][word] += 1 / len(
                    categories[category]
                )
            except KeyError:
                categories_by_count[category][word] = 1 / len(
                    categories[category]
                )

    word_weights = {}
    for category, words in categories_by_count.items():
        for word, value in words.items():
            try:
                word_weights[word][category] = value
            except KeyError:
                word_weights[word] = {category: value}

    model = {}
    for word, weights in word_weights.items():
        total = sum(weights.values())
        model[word] = []
        for category in names:
            model[word].append(
                round((weights.get(category, 0) / total) * 65535)
            )

    model["__names__"] = names
    model["__ngrams__"] = max_ngram_length
    model["__version__"] = 3

    return model


@@ -1,4 +1,4 @@
-# SPDX-License-Identifier: LGPL-3.0-or-later
+# SPDX-License-Identifier: GPL-3.0-or-later

 class GPTCError(BaseException):
@@ -9,5 +9,5 @@ class ModelError(GPTCError):
     pass

-class UnsupportedModelError(ModelError):
+class InvalidModelError(ModelError):
     pass

322
gptc/model.py Normal file

@ -0,0 +1,322 @@
# SPDX-License-Identifier: GPL-3.0-or-later
from typing import (
Iterable,
Mapping,
List,
Dict,
cast,
BinaryIO,
Tuple,
TypedDict,
)
import json
import gptc.tokenizer
from gptc.exceptions import InvalidModelError
import gptc.weighting
def _count_words(
raw_model: Iterable[Mapping[str, str]],
max_ngram_length: int,
hash_algorithm: str,
) -> Tuple[Dict[int, Dict[str, int]], Dict[str, int], List[str]]:
word_counts: Dict[int, Dict[str, int]] = {}
category_lengths: Dict[str, int] = {}
names: List[str] = []
for portion in raw_model:
text = gptc.tokenizer.hash_list(
gptc.tokenizer.tokenize(portion["text"], max_ngram_length),
hash_algorithm,
)
category = portion["category"]
if not category in names:
names.append(category)
category_lengths[category] = category_lengths.get(category, 0) + len(
text
)
for word in text:
if word in word_counts:
try:
word_counts[word][category] += 1
except KeyError:
word_counts[word][category] = 1
else:
word_counts[word] = {category: 1}
return word_counts, category_lengths, names
def _get_weights(
min_count: int,
word_counts: Dict[int, Dict[str, int]],
category_lengths: Dict[str, int],
names: List[str],
) -> Dict[int, List[int]]:
model: Dict[int, List[int]] = {}
for word, counts in word_counts.items():
if sum(counts.values()) >= min_count:
weights = {
category: value / category_lengths[category]
for category, value in counts.items()
}
total = sum(weights.values())
new_weights: List[int] = []
for category in names:
new_weights.append(
round((weights.get(category, 0) / total) * 65535)
)
model[word] = new_weights
return model
class ExplanationEntry(TypedDict):
weight: float
probabilities: Dict[str, float]
count: int
Explanation = Dict[
str,
ExplanationEntry,
]
Log = List[Tuple[str, float, List[float]]]
class Confidences(dict[str, float]):
def __init__(self, probs: Dict[str, float]):
dict.__init__(self, probs)
class TransparentConfidences(Confidences):
def __init__(
self,
probs: Dict[str, float],
explanation: Explanation,
):
self.explanation = explanation
Confidences.__init__(self, probs)
def convert_log(log: Log, names: List[str]) -> Explanation:
explanation: Explanation = {}
for word2, weight, word_probs in log:
if word2 in explanation:
explanation[word2]["count"] += 1
else:
explanation[word2] = {
"weight": weight,
"probabilities": {
name: word_probs[index] for index, name in enumerate(names)
},
"count": 1,
}
return explanation
class Model:
def __init__(
self,
weights: Dict[int, List[int]],
names: List[str],
max_ngram_length: int,
hash_algorithm: str,
):
self.weights = weights
self.names = names
self.max_ngram_length = max_ngram_length
self.hash_algorithm = hash_algorithm
def confidence(
self, text: str, max_ngram_length: int, transparent: bool = False
) -> Confidences:
"""Classify text with confidence.
Parameters
----------
text : str
The text to classify
max_ngram_length : int
The maximum ngram length to use in classifying
Returns
-------
dict
{category:probability, category:probability...} or {} if no words
matching any categories in the model were found
"""
model = self.weights
        max_ngram_length = min(self.max_ngram_length, max_ngram_length)
        raw_tokens = gptc.tokenizer.tokenize(text, max_ngram_length)
tokens = gptc.tokenizer.hash_list(
raw_tokens,
self.hash_algorithm,
)
if transparent:
            token_map = dict(zip(tokens, raw_tokens))
log: Log = []
numbered_probs: Dict[int, float] = {}
for word in tokens:
try:
unweighted_numbers = [
i / 65535 for i in cast(List[float], model[word])
]
weight, weighted_numbers = gptc.weighting.weight(
unweighted_numbers
)
if transparent:
log.append(
(
token_map[word],
weight,
unweighted_numbers,
)
)
for category, value in enumerate(weighted_numbers):
try:
numbered_probs[category] += value
except KeyError:
numbered_probs[category] = value
except KeyError:
pass
total = sum(numbered_probs.values())
probs: Dict[str, float] = {
self.names[category]: value / total
for category, value in numbered_probs.items()
}
if transparent:
explanation = convert_log(log, self.names)
return TransparentConfidences(probs, explanation)
return Confidences(probs)
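The accumulation loop above reduces to a small standalone sketch (hypothetical function name): each matched token contributes its weighted per-category scores, and the totals are normalized into probabilities.

```python
# Standalone sketch of the probability accumulation in Model.confidence.
def combine(scores_per_token):
    totals = {}
    for scores in scores_per_token:
        for category, value in scores.items():
            totals[category] = totals.get(category, 0.0) + value
    total = sum(totals.values())
    return {c: v / total for c, v in totals.items()}

probs = combine([{"spam": 0.72, "ham": 0.08}, {"spam": 0.30, "ham": 0.30}])
# probabilities sum to 1, and "spam" dominates
```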
def get(self, token: str) -> Dict[str, float]:
try:
weights = self.weights[
gptc.tokenizer.hash_single(
gptc.tokenizer.normalize(token), self.hash_algorithm
)
]
except KeyError:
return {}
return {
category: weights[index] / 65535
for index, category in enumerate(self.names)
}
def serialize(self, file: BinaryIO) -> None:
file.write(b"GPTC model v6\n")
file.write(
json.dumps(
{
"names": self.names,
"max_ngram_length": self.max_ngram_length,
"hash_algorithm": self.hash_algorithm,
}
).encode("utf-8")
+ b"\n"
)
for word, weights in self.weights.items():
file.write(
word.to_bytes(6, "big")
+ b"".join([weight.to_bytes(2, "big") for weight in weights])
)
@staticmethod
def compile(
raw_model: Iterable[Mapping[str, str]],
max_ngram_length: int = 1,
min_count: int = 1,
hash_algorithm: str = "sha256",
    ) -> "Model":
"""Compile a raw model.
Parameters
----------
raw_model : list of dict
A raw GPTC model.
max_ngram_length : int
Maximum ngram lenght to compile with.
Returns
-------
dict
A compiled GPTC model.
"""
word_counts, category_lengths, names = _count_words(
raw_model, max_ngram_length, hash_algorithm
)
model = _get_weights(min_count, word_counts, category_lengths, names)
return Model(model, names, max_ngram_length, hash_algorithm)
@staticmethod
def deserialize(encoded_model: BinaryIO) -> "Model":
prefix = encoded_model.read(14)
if prefix != b"GPTC model v6\n":
raise InvalidModelError()
config_json = b""
while True:
byte = encoded_model.read(1)
if byte == b"\n":
break
if byte == b"":
raise InvalidModelError()
config_json += byte
try:
config = json.loads(config_json.decode("utf-8"))
except (UnicodeDecodeError, json.JSONDecodeError) as exc:
raise InvalidModelError() from exc
try:
names = config["names"]
max_ngram_length = config["max_ngram_length"]
hash_algorithm = config["hash_algorithm"]
except KeyError as exc:
raise InvalidModelError() from exc
if not (
isinstance(names, list) and isinstance(max_ngram_length, int)
) or not all(isinstance(name, str) for name in names):
raise InvalidModelError()
weight_code_length = 6 + 2 * len(names)
weights: Dict[int, List[int]] = {}
while True:
code = encoded_model.read(weight_code_length)
if not code:
break
if len(code) != weight_code_length:
raise InvalidModelError()
weights[int.from_bytes(code[:6], "big")] = [
int.from_bytes(value, "big")
for value in [code[x : x + 2] for x in range(6, len(code), 2)]
]
return Model(weights, names, max_ngram_length, hash_algorithm)
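The v6 weight record layout used by `serialize`/`deserialize` can be demonstrated with an in-memory roundtrip (helper names are hypothetical): each record is a 6-byte big-endian token hash followed by one 2-byte big-endian weight per category.

```python
import io

# Sketch of the v6 weight record layout: 6-byte big-endian token hash,
# then one 2-byte big-endian weight per category.
def encode_records(weights):
    buf = io.BytesIO()
    for token, values in weights.items():
        buf.write(token.to_bytes(6, "big"))
        buf.write(b"".join(v.to_bytes(2, "big") for v in values))
    return buf.getvalue()

def decode_records(blob, n_categories):
    record_length = 6 + 2 * n_categories
    out = {}
    for i in range(0, len(blob), record_length):
        code = blob[i : i + record_length]
        out[int.from_bytes(code[:6], "big")] = [
            int.from_bytes(code[x : x + 2], "big")
            for x in range(6, record_length, 2)
        ]
    return out

blob = encode_records({12345: [52428, 13107]})
assert decode_records(blob, 2) == {12345: [52428, 13107]}
```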

gptc/pack.py Normal file

@@ -0,0 +1,38 @@
# SPDX-License-Identifier: GPL-3.0-or-later
import sys
import os
from typing import List, Dict, Tuple
def pack(
directory: str, print_exceptions: bool = False
) -> Tuple[List[Dict[str, str]], List[Tuple[OSError]]]:
paths = os.listdir(directory)
texts: Dict[str, List[str]] = {}
exceptions = []
for path in paths:
texts[path] = []
try:
for file in os.listdir(os.path.join(directory, path)):
try:
with open(
os.path.join(directory, path, file), encoding="utf-8"
) as input_file:
texts[path].append(input_file.read())
except OSError as error:
exceptions.append((error,))
if print_exceptions:
print(error, file=sys.stderr)
except OSError as error:
exceptions.append((error,))
if print_exceptions:
print(error, file=sys.stderr)
raw_model = []
for category, cat_texts in texts.items():
raw_model += [{"category": category, "text": i} for i in cat_texts]
return raw_model, exceptions
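A hypothetical usage sketch of `pack()`: the expected directory layout is one subdirectory per category with one text file per document. The core loop is reimplemented inline here so the sketch is self-contained.

```python
import os
import tempfile

# Self-contained sketch of pack()'s directory traversal (error handling
# omitted): each category subdirectory yields {"category", "text"} dicts.
def pack_sketch(directory):
    raw_model = []
    for category in sorted(os.listdir(directory)):
        for name in sorted(os.listdir(os.path.join(directory, category))):
            path = os.path.join(directory, category, name)
            with open(path, encoding="utf-8") as input_file:
                raw_model.append({"category": category, "text": input_file.read()})
    return raw_model

with tempfile.TemporaryDirectory() as root:
    os.makedirs(os.path.join(root, "spam"))
    with open(os.path.join(root, "spam", "a.txt"), "w", encoding="utf-8") as f:
        f.write("buy now")
    print(pack_sketch(root))
# → [{'category': 'spam', 'text': 'buy now'}]
```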

@@ -1,13 +1,35 @@
# SPDX-License-Identifier: LGPL-3.0-or-later
# SPDX-License-Identifier: GPL-3.0-or-later
import unicodedata
from typing import List, cast
import hashlib
import emoji
def tokenize(text, max_ngram_length=1):
"""Convert a string to a list of lemmas."""
def tokenize(text: str, max_ngram_length: int = 1) -> List[str]:
text = unicodedata.normalize("NFKD", text).casefold()
parts = []
highest_end = 0
for emoji_part in emoji.emoji_list(text):
parts += list(text[highest_end : emoji_part["match_start"]])
parts.append(emoji_part["emoji"])
highest_end = emoji_part["match_end"]
parts += list(text[highest_end:])
converted_text = [part for part in parts if part]
tokens = [""]
for char in text.lower():
if char.isalpha() or char == "'":
for char in converted_text:
if (
char.isalpha()
or char.isnumeric()
or char == "'"
or (char in ",." and (" " + tokens[-1])[-1].isnumeric())
):
tokens[-1] += char
elif emoji.is_emoji(char):
tokens.append(char)
tokens.append("")
elif tokens[-1] != "":
tokens.append("")
@@ -15,9 +37,50 @@ def tokenize(text, max_ngram_length=1):
if max_ngram_length == 1:
return tokens
else:
ngrams = []
for ngram_length in range(1, max_ngram_length + 1):
for index in range(len(tokens) + 1 - ngram_length):
ngrams.append(" ".join(tokens[index : index + ngram_length]))
return ngrams
ngrams = []
for ngram_length in range(1, max_ngram_length + 1):
for index in range(len(tokens) + 1 - ngram_length):
ngrams.append(" ".join(tokens[index : index + ngram_length]))
return ngrams
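The ngram expansion at the end of `tokenize` can be sketched with a toy token list: every run of up to `max_ngram_length` consecutive tokens becomes a space-joined ngram.

```python
# Sketch of the ngram expansion in tokenize(), isolated as a function.
def ngrams_sketch(tokens, max_ngram_length):
    ngrams = []
    for ngram_length in range(1, max_ngram_length + 1):
        for index in range(len(tokens) + 1 - ngram_length):
            ngrams.append(" ".join(tokens[index : index + ngram_length]))
    return ngrams

print(ngrams_sketch(["the", "quick", "fox"], 2))
# → ['the', 'quick', 'fox', 'the quick', 'quick fox']
```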
def _hash_single(token: str, hash_function: type) -> int:
return int.from_bytes(
hash_function(token.encode("utf-8")).digest()[:6], "big"
)
def _get_hash_function(hash_algorithm: str) -> type:
if hash_algorithm in {
"sha224",
"md5",
"sha512",
"sha3_256",
"blake2s",
"sha3_224",
"sha1",
"sha256",
"sha384",
"shake_256",
"blake2b",
"sha3_512",
"shake_128",
"sha3_384",
}:
return cast(type, getattr(hashlib, hash_algorithm))
raise ValueError("not a valid hash function: " + hash_algorithm)
def hash_single(token: str, hash_algorithm: str) -> int:
return _hash_single(token, _get_hash_function(hash_algorithm))
def hash_list(tokens: List[str], hash_algorithm: str) -> List[int]:
hash_function = _get_hash_function(hash_algorithm)
return [_hash_single(token, hash_function) for token in tokens]
def normalize(text: str) -> str:
return " ".join(tokenize(text, 1))

@@ -1,9 +1,10 @@
# SPDX-License-Identifier: LGPL-3.0-or-later
# SPDX-License-Identifier: GPL-3.0-or-later
import math
from typing import Sequence, Tuple, List
def _mean(numbers):
def _mean(numbers: Sequence[float]) -> float:
"""Calculate the mean of a group of numbers
Parameters
@@ -19,7 +20,7 @@ def _mean(numbers):
return sum(numbers) / len(numbers)
def _standard_deviation(numbers):
def _standard_deviation(numbers: Sequence[float]) -> float:
"""Calculate the standard deviation of a group of numbers
Parameters
@@ -38,8 +39,8 @@ def _standard_deviation(numbers):
return math.sqrt(_mean(squared_deviations))
def weight(numbers):
def weight(numbers: Sequence[float]) -> Tuple[float, List[float]]:
standard_deviation = _standard_deviation(numbers)
weight = standard_deviation * 2
weighted_numbers = [i * weight for i in numbers]
return weight, weighted_numbers
weight_assigned = standard_deviation * 2
weighted_numbers = [i * weight_assigned for i in numbers]
return weight_assigned, weighted_numbers

models/compiled.gptc Normal file (binary file not shown)

File diff suppressed because one or more lines are too long

profiler.py Normal file

@@ -0,0 +1,16 @@
# SPDX-License-Identifier: GPL-3.0-or-later
import cProfile
import gptc
import json
import sys
max_ngram_length = 10
with open("models/raw.json") as f:
raw_model = json.load(f)
with open("models/benchmark_text.txt") as f:
text = f.read()
cProfile.run("gptc.Model.compile(raw_model, max_ngram_length)")

@ -4,7 +4,7 @@ build-backend = "setuptools.build_meta"
[project]
name = "gptc"
version = "2.0.1"
version = "4.0.1"
description = "General-purpose text classifier"
readme = "README.md"
authors = [{ name = "Samuel Sloniker", email = "sam@kj7rrv.com"}]
@@ -12,10 +12,10 @@ classifiers = [
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Development Status :: 5 - Production/Stable",
"License :: OSI Approved :: GNU Lesser General Public License v3 or later (LGPLv3+)",
"License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+)",
"Operating System :: OS Independent",
]
dependencies = []
dependencies = ["emoji"]
requires-python = ">=3.7"
[project.urls]

@ -1,41 +0,0 @@
# SPDX-License-Identifier: LGPL-3.0-or-later
import sys
import os
import json
def pack(directory, print_exceptions=True):
paths = os.listdir(directory)
texts = {}
exceptions = []
for path in paths:
texts[path] = []
try:
for file in os.listdir(os.path.join(sys.argv[1], path)):
try:
with open(os.path.join(sys.argv[1], path, file)) as f:
texts[path].append(f.read())
except Exception as e:
exceptions.append((e,))
if print_exceptions:
print(e, file=sys.stderr)
except Exception as e:
exceptions.append((e,))
if print_exceptions:
print(e, file=sys.stderr)
raw_model = []
for category, cat_texts in texts.items():
raw_model += [{"category": category, "text": i} for i in cat_texts]
return raw_model, exceptions
if len(sys.argv) != 2:
print("usage: pack.py <path>", file=sys.stderr)
exit(1)
print(json.dumps(pack(sys.argv[1])[0]))