Compare commits
62 Commits
| SHA1 |
|---|
| 71e9249ff4 |
| 97c4eef086 |
| 457b569741 |
| 4546c4cffa |
| 7b7ef39d0b |
| a252a15e9d |
| 9513025e60 |
| 2c3fc77ba6 |
| d8f3d2e701 |
| 7f68dc6fc6 |
| 99ad07a876 |
| f38f4ca801 |
| 56550ca457 |
| 75fdb5ba3c |
| 071656c2d2 |
| aad590636a |
| 099e810a18 |
| 822aa7d1fd |
| 8417c8acda |
| ec7f4116fc |
| f8dbc78b82 |
| 6f21e0d4e9 |
| 41bba61410 |
| 10668691ea |
| 295a1189de |
| 74b2ba81b9 |
| 9916744801 |
| 7e7b5f3e9c |
| a76c6d3da8 |
| c84758af56 |
| 3a9c8d2bf2 |
| 12f97ae765 |
| c754293d69 |
| 8d42a92848 |
| e4eb322aa7 |
| 83ef71e8ce |
| 991d3fd54a |
| b3e6a13e65 |
| b1228edd9c |
| 25192ffddf |
| 548d670960 |
| b3a43150d8 |
| 08437a2696 |
| fc4665bb9e |
| 30287288f2 |
| 448f200923 |
| b4766cb613 |
| f1a1ed9e2a |
| 7ecb7dd90a |
| 3340abbf15 |
| a10569b5ab |
| f4ae5f851d |
| 1d1ccbb7cc |
| e17c79c231 |
| af1d1749d2 |
| aea35ad059 |
| 30a2ebe33e |
| 4cb8b71407 |
| 7d1cbcaee0 |
| 82524345f3 |
| c2cd6f62fb |
| 76df1dc56d |
LGPL-3.0 | 165 lines (file deleted)

@@ -1,165 +0,0 @@
-                   GNU LESSER GENERAL PUBLIC LICENSE
-                       Version 3, 29 June 2007
-
- Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
- Everyone is permitted to copy and distribute verbatim copies
- of this license document, but changing it is not allowed.
-
-
-  This version of the GNU Lesser General Public License incorporates
-the terms and conditions of version 3 of the GNU General Public
-License, supplemented by the additional permissions listed below.
-
-  0. Additional Definitions.
-
-  As used herein, "this License" refers to version 3 of the GNU Lesser
-General Public License, and the "GNU GPL" refers to version 3 of the GNU
-General Public License.
-
-  "The Library" refers to a covered work governed by this License,
-other than an Application or a Combined Work as defined below.
-
-  An "Application" is any work that makes use of an interface provided
-by the Library, but which is not otherwise based on the Library.
-Defining a subclass of a class defined by the Library is deemed a mode
-of using an interface provided by the Library.
-
-  A "Combined Work" is a work produced by combining or linking an
-Application with the Library.  The particular version of the Library
-with which the Combined Work was made is also called the "Linked
-Version".
-
-  The "Minimal Corresponding Source" for a Combined Work means the
-Corresponding Source for the Combined Work, excluding any source code
-for portions of the Combined Work that, considered in isolation, are
-based on the Application, and not on the Linked Version.
-
-  The "Corresponding Application Code" for a Combined Work means the
-object code and/or source code for the Application, including any data
-and utility programs needed for reproducing the Combined Work from the
-Application, but excluding the System Libraries of the Combined Work.
-
-  1. Exception to Section 3 of the GNU GPL.
-
-  You may convey a covered work under sections 3 and 4 of this License
-without being bound by section 3 of the GNU GPL.
-
-  2. Conveying Modified Versions.
-
-  If you modify a copy of the Library, and, in your modifications, a
-facility refers to a function or data to be supplied by an Application
-that uses the facility (other than as an argument passed when the
-facility is invoked), then you may convey a copy of the modified
-version:
-
-   a) under this License, provided that you make a good faith effort to
-   ensure that, in the event an Application does not supply the
-   function or data, the facility still operates, and performs
-   whatever part of its purpose remains meaningful, or
-
-   b) under the GNU GPL, with none of the additional permissions of
-   this License applicable to that copy.
-
-  3. Object Code Incorporating Material from Library Header Files.
-
-  The object code form of an Application may incorporate material from
-a header file that is part of the Library.  You may convey such object
-code under terms of your choice, provided that, if the incorporated
-material is not limited to numerical parameters, data structure
-layouts and accessors, or small macros, inline functions and templates
-(ten or fewer lines in length), you do both of the following:
-
-   a) Give prominent notice with each copy of the object code that the
-   Library is used in it and that the Library and its use are
-   covered by this License.
-
-   b) Accompany the object code with a copy of the GNU GPL and this license
-   document.
-
-  4. Combined Works.
-
-  You may convey a Combined Work under terms of your choice that,
-taken together, effectively do not restrict modification of the
-portions of the Library contained in the Combined Work and reverse
-engineering for debugging such modifications, if you also do each of
-the following:
-
-   a) Give prominent notice with each copy of the Combined Work that
-   the Library is used in it and that the Library and its use are
-   covered by this License.
-
-   b) Accompany the Combined Work with a copy of the GNU GPL and this license
-   document.
-
-   c) For a Combined Work that displays copyright notices during
-   execution, include the copyright notice for the Library among
-   these notices, as well as a reference directing the user to the
-   copies of the GNU GPL and this license document.
-
-   d) Do one of the following:
-
-       0) Convey the Minimal Corresponding Source under the terms of this
-       License, and the Corresponding Application Code in a form
-       suitable for, and under terms that permit, the user to
-       recombine or relink the Application with a modified version of
-       the Linked Version to produce a modified Combined Work, in the
-       manner specified by section 6 of the GNU GPL for conveying
-       Corresponding Source.
-
-       1) Use a suitable shared library mechanism for linking with the
-       Library.  A suitable mechanism is one that (a) uses at run time
-       a copy of the Library already present on the user's computer
-       system, and (b) will operate properly with a modified version
-       of the Library that is interface-compatible with the Linked
-       Version.
-
-   e) Provide Installation Information, but only if you would otherwise
-   be required to provide such information under section 6 of the
-   GNU GPL, and only to the extent that such information is
-   necessary to install and execute a modified version of the
-   Combined Work produced by recombining or relinking the
-   Application with a modified version of the Linked Version.  (If
-   you use option 4d0, the Installation Information must accompany
-   the Minimal Corresponding Source and Corresponding Application
-   Code.  If you use option 4d1, you must provide the Installation
-   Information in the manner specified by section 6 of the GNU GPL
-   for conveying Corresponding Source.)
-
-  5. Combined Libraries.
-
-  You may place library facilities that are a work based on the
-Library side by side in a single library together with other library
-facilities that are not Applications and are not covered by this
-License, and convey such a combined library under terms of your
-choice, if you do both of the following:
-
-   a) Accompany the combined library with a copy of the same work based
-   on the Library, uncombined with any other library facilities,
-   conveyed under the terms of this License.
-
-   b) Give prominent notice with the combined library that part of it
-   is a work based on the Library, and explaining where to find the
-   accompanying uncombined form of the same work.
-
-  6. Revised Versions of the GNU Lesser General Public License.
-
-  The Free Software Foundation may publish revised and/or new versions
-of the GNU Lesser General Public License from time to time. Such new
-versions will be similar in spirit to the present version, but may
-differ in detail to address new problems or concerns.
-
-  Each version is given a distinguishing version number. If the
-Library as you received it specifies that a certain numbered version
-of the GNU Lesser General Public License "or any later version"
-applies to it, you have the option of following the terms and
-conditions either of that published version or of any later version
-published by the Free Software Foundation. If the Library as you
-received it does not specify a version number of the GNU Lesser
-General Public License, you may choose any version of the GNU Lesser
-General Public License ever published by the Free Software Foundation.
-
-  If the Library as you received it specifies that a proxy can decide
-whether future versions of the GNU Lesser General Public License shall
-apply, that proxy's public statement of acceptance of any version is
-permanent authorization for you to choose that version for the
-Library.
LICENSE | 11 changed lines

@@ -1,14 +1,13 @@
 Copyright (c) 2020-2022 Samuel L Sloniker
 
 This program is free software: you can redistribute it and/or modify it under
-the terms of the GNU Lesser General Public License as published by the Free
-Software Foundation, either version 3 of the License, or (at your option) any
-later version.
+the terms of the GNU General Public License as published by the Free Software
+Foundation, either version 3 of the License, or (at your option) any later
+version.
 
 This program is distributed in the hope that it will be useful, but WITHOUT ANY
 WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A
 PARTICULAR PURPOSE. See the GNU General Public License for more details.
 
-You should have received copies of the GNU General Public License and the GNU
-Lesser General Public License along with this program. If not, see
-<https://www.gnu.org/licenses/>.
+You should have received a copy of the GNU General Public License along with
+this program. If not, see <https://www.gnu.org/licenses/>.
README.md | 149 changed lines

@@ -6,9 +6,7 @@ GPTC provides both a CLI tool and a Python library.
 
 ## Installation
 
-    pip install gptc[emoji] # handles emojis! (see section "Emoji")
-    # Or, if you don't need emoji support,
-    pip install gptc # no dependencies!
+    pip install gptc
 
 ## CLI Tool
 
@@ -20,18 +18,22 @@ This will prompt for a string and classify it, then print (in JSON) a dict of
 the format `{category: probability, category:probability, ...}` to stdout. (For
 information about `-n <max_ngram_length>`, see section "Ngrams.")
 
-Alternatively, if you only need the most likely category, you can use this:
+### Checking individual words or ngrams
 
-    gptc classify [-n <max_ngram_length>] <-c|--category> <compiled model file>
+    gptc check <compiled model file> <token or ngram>
 
-This will prompt for a string and classify it, outputting the category on
-stdout (or "None" if it cannot determine anything).
+This is very similar to `gptc classify`, except it takes the input as an
+argument, and it treats the input as a single token or ngram.
 
 ### Compiling models
 
-    gptc compile [-n <max_ngram_length>] <raw model file>
+    gptc compile [-n <max_ngram_length>] [-c <min_count>] <raw model file> <compiled model file>
 
-This will print the compiled model in JSON to stdout.
+This will write the compiled model encoded in binary format to `<compiled model
+file>`.
+
+If `-c` is specified, words and ngrams used less than `min_count` times will be
+excluded from the compiled model.
 
 ### Packing models
 
@@ -42,40 +44,63 @@ example of the format. Any exceptions will be printed to stderr.
 
 ## Library
 
-### `gptc.Classifier(model, max_ngram_length=1)`
+### `Model.serialize(file)`
 
-Create a `Classifier` object using the given *compiled* model (as a dict, not
-JSON).
+Write binary data representing the model to `file`.
 
-For information about `max_ngram_length`, see section "Ngrams."
+### `Model.deserialize(encoded_model)`
 
-#### `Classifier.confidence(text)`
+Deserialize a `Model` from a file containing data from `Model.serialize()`.
+
+### `Model.confidence(text, max_ngram_length)`
 
 Classify `text`. Returns a dict of the format `{category: probability,
 category:probability, ...}`
 
-#### `Classifier.classify(text)`
-
-Classify `text`. Returns the category into which the text is placed (as a
-string), or `None` when it cannot classify the text.
-
-#### `Classifier.model`
-
-The classifier's model.
-
-#### `Classifier.has_emoji`
-
-Check whether emojis are supported by the `Classifier`. (See section "Emoji.")
-Equivalent to `gptc.has_emoji and gptc.model_has_emoji(model)`.
-
-### `gptc.compile(raw_model, max_ngram_length=1)`
-
-Compile a raw model (as a list, not JSON) and return the compiled model (as a
-dict).
+Note that this may not include values for all categories. If there are no
+common words between the input and the training data (likely, for example, with
+input in a different language from the training data), an empty dict will be
+returned.
 
 For information about `max_ngram_length`, see section "Ngrams."
 
-### `gptc.pack(directory, print_exceptions=False)
+### `Model.get(token)`
+
+Return a confidence dict for the given token or ngram. This function is very
+similar to `Model.confidence()`, except it treats the input as a single token
+or ngram.
+
+### `Model.compile(raw_model, max_ngram_length=1, min_count=1, hash_algorithm="sha256")`
+
+Compile a raw model (as a list, not JSON) and return the compiled model (as a
+`gptc.Model` object).
+
+For information about `max_ngram_length`, see section "Ngrams."
+
+Words or ngrams used less than `min_count` times throughout the input text are
+excluded from the model.
+
+The hash algorithm should be left as the default, which may change with a minor
+version update, but it can be changed by the application if needed. It is
+stored in the model, so changing the algorithm does not affect compatibility.
+The following algorithms are supported:
+
+* `md5`
+* `sha1`
+* `sha224`
+* `sha256`
+* `sha384`
+* `sha512`
+* `sha3_224`
+* `sha3_256`
+* `sha3_384`
+* `sha3_512`
+* `shake_128`
+* `shake_256`
+* `blake2b`
+* `blake2s`
+
+### `gptc.pack(directory, print_exceptions=False)`
 
 Pack the model in `directory` and return a tuple of the format:
 
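Taken together, the `Model` methods documented above replace the old `gptc.compile()`/`gptc.Classifier` pair. The following is a minimal sketch of a full round trip through the new API; the categories, training texts, and file name are invented for illustration:

    # Round trip through the new Model API; data and paths are illustrative.
    import gptc

    raw_model = [
        {"category": "pos", "text": "I love this, it is wonderful"},
        {"category": "neg", "text": "I hate this, it is awful"},
    ]

    # Compile and write the binary model to disk.
    model = gptc.Model.compile(raw_model, max_ngram_length=1, min_count=1)
    with open("example.gptc", "wb") as output_file:
        model.serialize(output_file)

    # Read the model back and classify new text.
    with open("example.gptc", "rb") as model_file:
        loaded = gptc.Model.deserialize(model_file)

    print(loaded.confidence("a wonderful day", 1))  # {'pos': ..., 'neg': ...}
    print(loaded.get("wonderful"))  # per-category confidences for one token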
@@ -87,50 +112,34 @@ GPTC.
 
 See `models/unpacked/` for an example of the format.
 
-### `gptc.has_emoji`
+### `gptc.Classifier(model, max_ngram_length=1)`
 
-`True` if the `emoji` package is installed (see section "Emoji"), `False`
-otherwise.
-
-### `gptc.model_has_emoji(compiled_model)`
-
-Returns `True` if `compiled_model` was compiled with emoji support, `False`
-otherwise.
+`Classifier` objects are deprecated starting with GPTC 3.1.0, and will be
+removed in 5.0.0. See [the README from
+3.0.1](https://git.kj7rrv.com/kj7rrv/gptc/src/tag/v3.0.1/README.md) if you need
+documentation.
 
 ## Ngrams
 
 GPTC optionally supports using ngrams to improve classification accuracy. They
-are disabled by default (maximum length set to 1) for performance and
-compatibility reasons. Enabling them significantly increases the time required
-both for compilation and classification. The effect seems more significant for
-compilation than for classification. Compiled models are also much larger when
-ngrams are enabled. Larger maximum ngram lengths will result in slower
-performance and larger files. It is a good idea to experiment with different
-values and use the highest one at which GPTC is fast enough and models are
-small enough for your needs.
+are disabled by default (maximum length set to 1) for performance reasons.
+Enabling them significantly increases the time required both for compilation
+and classification. The effect seems more significant for compilation than for
+classification. Compiled models are also much larger when ngrams are enabled.
+Larger maximum ngram lengths will result in slower performance and larger
+files. It is a good idea to experiment with different values and use the
+highest one at which GPTC is fast enough and models are small enough for your
+needs.
 
 Once a model is compiled at a certain maximum ngram length, it cannot be used
 for classification with a higher value. If you instantiate a `Classifier` with
 a model compiled with a lower `max_ngram_length`, the value will be silently
 reduced to the one used when compiling the model.
 
-Models compiled with older versions of GPTC which did not support ngrams are
-handled the same way as models compiled with `max_ngram_length=1`.
-
-## Emoji
-
-If the [`emoji`](https://pypi.org/project/emoji/) package is installed, GPTC
-will automatically handle emojis the same way as words. If it is not installed,
-GPTC will still work but will ignore emojis.
-
-`emoji` must be installed on both the system used to compile the model and the
-system used to classify text. Emojis are ignored if it is missing on either
-system.
-
 ## Model format
 
-This section explains the raw model format, which is how you should create and
-edit models.
+This section explains the raw model format, which is how models are created and
+edited.
 
 Raw models are formatted as a list of dicts. See below for the format:
 
@@ -141,10 +150,14 @@ Raw models are formatted as a list of dicts. See below for the format:
     }
 ]
 
-GPTC handles models as Python `list`s of `dict`s of `str`s (for raw models) or
-`dict`s of `str`s and `float`s (for compiled models), and they can be stored
-in any way these Python objects can be. However, it is recommended to store
-them in JSON format for compatibility with the command-line tool.
+GPTC handles raw models as `list`s of `dict`s of `str`s (`List[Dict[str,
+str]]`), and they can be stored in any way these Python objects can be.
+However, it is recommended to store them in JSON format for compatibility with
+the command-line tool.
+
+## Emoji
+
+GPTC treats individual emoji as words.
 
 ## Example model
 
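For instance, a two-category raw model can be built and stored as JSON like this; the categories and texts are invented for illustration:

    # Write an illustrative raw model to JSON for use with `gptc compile`.
    import json

    raw_model = [
        {"category": "sports", "text": "the team won the final game"},
        {"category": "weather", "text": "heavy rain and strong wind expected"},
    ]

    with open("raw.json", "w", encoding="utf-8") as f:
        json.dump(raw_model, f)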
@@ -1,3 +1,5 @@
+# SPDX-License-Identifier: GPL-3.0-or-later
+
 import timeit
 import gptc
 import json
@@ -23,7 +25,7 @@ print(
     round(
         1000000
         * timeit.timeit(
-            "gptc.compile(raw_model, max_ngram_length)",
+            "gptc.Model.compile(raw_model, max_ngram_length)",
             number=compile_iterations,
             globals=globals(),
         )
gptc/__init__.py

@@ -1,14 +1,12 @@
-# SPDX-License-Identifier: LGPL-3.0-or-later
+# SPDX-License-Identifier: GPL-3.0-or-later
 
 """General-Purpose Text Classifier"""
 
-from gptc.compiler import compile as compile
-from gptc.classifier import Classifier as Classifier
-from gptc.pack import pack as pack
-from gptc.tokenizer import has_emoji as has_emoji
-from gptc.model_info import model_has_emoji as model_has_emoji
+from gptc.pack import pack
+from gptc.model import Model
+from gptc.tokenizer import normalize
 from gptc.exceptions import (
-    GPTCError as GPTCError,
-    ModelError as ModelError,
-    UnsupportedModelError as UnsupportedModelError,
+    GPTCError,
+    ModelError,
+    InvalidModelError,
 )
gptc/__main__.py

@@ -1,5 +1,5 @@
 #!/usr/bin/env python3
-# SPDX-License-Identifier: LGPL-3.0-or-later
+# SPDX-License-Identifier: GPL-3.0-or-later
 
 import argparse
 import json
@@ -17,6 +17,9 @@ def main() -> None:
         "compile", help="compile a raw model"
     )
     compile_parser.add_argument("model", help="raw model to compile")
+    compile_parser.add_argument(
+        "out", help="name of file to write compiled model to"
+    )
     compile_parser.add_argument(
         "--max-ngram-length",
         "-n",
@@ -24,6 +27,13 @@ def main() -> None:
         type=int,
         default=1,
     )
+    compile_parser.add_argument(
+        "--min-count",
+        "-c",
+        help="minimum use count for word/ngram to be included in model",
+        type=int,
+        default=1,
+    )
 
     classify_parser = subparsers.add_parser("classify", help="classify text")
     classify_parser.add_argument("model", help="compiled model to use")
@@ -34,19 +44,12 @@ def main() -> None:
         type=int,
         default=1,
     )
-    group = classify_parser.add_mutually_exclusive_group()
-    group.add_argument(
-        "-j",
-        "--json",
-        help="output confidence dict as JSON (default)",
-        action="store_true",
-    )
-    group.add_argument(
-        "-c",
-        "--category",
-        help="output most likely category or `None`",
-        action="store_true",
+
+    check_parser = subparsers.add_parser(
+        "check", help="check one word or ngram in model"
     )
+    check_parser.add_argument("model", help="compiled model to use")
+    check_parser.add_argument("token", help="token or ngram to check")
 
     pack_parser = subparsers.add_parser(
         "pack", help="pack a model from a directory"
@@ -56,25 +59,27 @@ def main() -> None:
     args = parser.parse_args()
 
     if args.subparser_name == "compile":
-        with open(args.model, "r") as f:
-            model = json.load(f)
+        with open(args.model, "r", encoding="utf-8") as input_file:
+            model = json.load(input_file)
 
-        print(json.dumps(gptc.compile(model, args.max_ngram_length)))
+        with open(args.out, "wb+") as output_file:
+            gptc.Model.compile(
+                model, args.max_ngram_length, args.min_count
+            ).serialize(output_file)
     elif args.subparser_name == "classify":
-        with open(args.model, "r") as f:
-            model = json.load(f)
+        with open(args.model, "rb") as model_file:
+            model = gptc.Model.deserialize(model_file)
 
-        classifier = gptc.Classifier(model, args.max_ngram_length)
-
         if sys.stdin.isatty():
             text = input("Text to analyse: ")
         else:
             text = sys.stdin.read()
 
-        if args.category:
-            print(classifier.classify(text))
-        else:
-            print(json.dumps(classifier.confidence(text)))
+        print(json.dumps(model.confidence(text, args.max_ngram_length)))
+    elif args.subparser_name == "check":
+        with open(args.model, "rb") as model_file:
+            model = gptc.Model.deserialize(model_file)
+        print(json.dumps(model.get(args.token)))
     else:
         print(json.dumps(gptc.pack(args.model, True)[0]))
gptc/classifier.py (file deleted)

@@ -1,100 +0,0 @@
-# SPDX-License-Identifier: LGPL-3.0-or-later
-
-import gptc.tokenizer, gptc.compiler, gptc.exceptions, gptc.weighting, gptc.model_info
-import warnings
-from typing import Dict, Union, cast, List
-
-
-class Classifier:
-    """A text classifier.
-
-    Parameters
-    ----------
-    model : dict
-        A compiled GPTC model.
-
-    max_ngram_length : int
-        The maximum ngram length to use when tokenizing input. If this is
-        greater than the value used when the model was compiled, it will be
-        silently lowered to that value.
-
-    Attributes
-    ----------
-    model : dict
-        The model used.
-
-    """
-
-    def __init__(self, model: gptc.compiler.MODEL, max_ngram_length: int = 1):
-        if model.get("__version__", 0) != 3:
-            raise gptc.exceptions.UnsupportedModelError(
-                f"unsupported model version"
-            )
-        self.model = model
-        model_ngrams = cast(int, model.get("__ngrams__", 1))
-        self.max_ngram_length = min(max_ngram_length, model_ngrams)
-        self.has_emoji = (
-            gptc.tokenizer.has_emoji and gptc.model_info.model_has_emoji(model)
-        )
-
-    def confidence(self, text: str) -> Dict[str, float]:
-        """Classify text with confidence.
-
-        Parameters
-        ----------
-        text : str
-            The text to classify
-
-        Returns
-        -------
-        dict
-            {category:probability, category:probability...} or {} if no words
-            matching any categories in the model were found
-
-        """
-
-        model = self.model
-
-        tokens = gptc.tokenizer.tokenize(
-            text, self.max_ngram_length, self.has_emoji
-        )
-        numbered_probs: Dict[int, float] = {}
-        for word in tokens:
-            try:
-                weighted_numbers = gptc.weighting.weight(
-                    [i / 65535 for i in cast(List[float], model[word])]
-                )
-                for category, value in enumerate(weighted_numbers):
-                    try:
-                        numbered_probs[category] += value
-                    except KeyError:
-                        numbered_probs[category] = value
-            except KeyError:
-                pass
-        total = sum(numbered_probs.values())
-        probs: Dict[str, float] = {
-            cast(List[str], model["__names__"])[category]: value / total
-            for category, value in numbered_probs.items()
-        }
-        return probs
-
-    def classify(self, text: str) -> Union[str, None]:
-        """Classify text.
-
-        Parameters
-        ----------
-        text : str
-            The text to classify
-
-        Returns
-        -------
-        str or None
-            The most likely category, or None if no words matching any
-            category in the model were found.
-
-        """
-        probs: Dict[str, float] = self.confidence(text)
-        try:
-            return sorted(probs.items(), key=lambda x: x[1])[-1][0]
-        except IndexError:
-            return None
gptc/compiler.py (file deleted)

@@ -1,82 +0,0 @@
-# SPDX-License-Identifier: LGPL-3.0-or-later
-
-import gptc.tokenizer
-from typing import Iterable, Mapping, List, Dict, Union
-
-WEIGHTS_T = List[int]
-CONFIG_T = Union[List[str], int, str]
-MODEL = Dict[str, Union[WEIGHTS_T, CONFIG_T]]
-
-
-def compile(
-    raw_model: Iterable[Mapping[str, str]], max_ngram_length: int = 1
-) -> MODEL:
-    """Compile a raw model.
-
-    Parameters
-    ----------
-    raw_model : list of dict
-        A raw GPTC model.
-
-    max_ngram_length : int
-        Maximum ngram lenght to compile with.
-
-    Returns
-    -------
-    dict
-        A compiled GPTC model.
-
-    """
-
-    categories: Dict[str, List[str]] = {}
-
-    for portion in raw_model:
-        text = gptc.tokenizer.tokenize(portion["text"], max_ngram_length)
-        category = portion["category"]
-        try:
-            categories[category] += text
-        except KeyError:
-            categories[category] = text
-
-    categories_by_count: Dict[str, Dict[str, float]] = {}
-
-    names = []
-
-    for category, text in categories.items():
-        if not category in names:
-            names.append(category)
-
-        categories_by_count[category] = {}
-        for word in text:
-            try:
-                categories_by_count[category][word] += 1 / len(
-                    categories[category]
-                )
-            except KeyError:
-                categories_by_count[category][word] = 1 / len(
-                    categories[category]
-                )
-    word_weights: Dict[str, Dict[str, float]] = {}
-    for category, words in categories_by_count.items():
-        for word, value in words.items():
-            try:
-                word_weights[word][category] = value
-            except KeyError:
-                word_weights[word] = {category: value}
-
-    model: MODEL = {}
-    for word, weights in word_weights.items():
-        total = sum(weights.values())
-        new_weights: List[int] = []
-        for category in names:
-            new_weights.append(
-                round((weights.get(category, 0) / total) * 65535)
-            )
-        model[word] = new_weights
-
-    model["__names__"] = names
-    model["__ngrams__"] = max_ngram_length
-    model["__version__"] = 3
-    model["__emoji__"] = int(gptc.tokenizer.has_emoji)
-
-    return model
gptc/exceptions.py

@@ -1,4 +1,4 @@
-# SPDX-License-Identifier: LGPL-3.0-or-later
+# SPDX-License-Identifier: GPL-3.0-or-later
 
 
 class GPTCError(BaseException):
@@ -9,5 +9,5 @@ class ModelError(GPTCError):
     pass
 
 
-class UnsupportedModelError(ModelError):
+class InvalidModelError(ModelError):
     pass
gptc/model.py | 322 lines (new file)

@@ -0,0 +1,322 @@
+# SPDX-License-Identifier: GPL-3.0-or-later
+
+from typing import (
+    Iterable,
+    Mapping,
+    List,
+    Dict,
+    cast,
+    BinaryIO,
+    Tuple,
+    TypedDict,
+)
+import json
+import gptc.tokenizer
+from gptc.exceptions import InvalidModelError
+import gptc.weighting
+
+
+def _count_words(
+    raw_model: Iterable[Mapping[str, str]],
+    max_ngram_length: int,
+    hash_algorithm: str,
+) -> Tuple[Dict[int, Dict[str, int]], Dict[str, int], List[str]]:
+    word_counts: Dict[int, Dict[str, int]] = {}
+    category_lengths: Dict[str, int] = {}
+    names: List[str] = []
+
+    for portion in raw_model:
+        text = gptc.tokenizer.hash_list(
+            gptc.tokenizer.tokenize(portion["text"], max_ngram_length),
+            hash_algorithm,
+        )
+        category = portion["category"]
+
+        if not category in names:
+            names.append(category)
+
+        category_lengths[category] = category_lengths.get(category, 0) + len(
+            text
+        )
+
+        for word in text:
+            if word in word_counts:
+                try:
+                    word_counts[word][category] += 1
+                except KeyError:
+                    word_counts[word][category] = 1
+            else:
+                word_counts[word] = {category: 1}
+
+    return word_counts, category_lengths, names
+
+
+def _get_weights(
+    min_count: int,
+    word_counts: Dict[int, Dict[str, int]],
+    category_lengths: Dict[str, int],
+    names: List[str],
+) -> Dict[int, List[int]]:
+    model: Dict[int, List[int]] = {}
+    for word, counts in word_counts.items():
+        if sum(counts.values()) >= min_count:
+            weights = {
+                category: value / category_lengths[category]
+                for category, value in counts.items()
+            }
+            total = sum(weights.values())
+            new_weights: List[int] = []
+            for category in names:
+                new_weights.append(
+                    round((weights.get(category, 0) / total) * 65535)
+                )
+            model[word] = new_weights
+    return model
+
+
+class ExplanationEntry(TypedDict):
+    weight: float
+    probabilities: Dict[str, float]
+    count: int
+
+
+Explanation = Dict[
+    str,
+    ExplanationEntry,
+]
+
+Log = List[Tuple[str, float, List[float]]]
+
+
+class Confidences(dict[str, float]):
+    def __init__(self, probs: Dict[str, float]):
+        dict.__init__(self, probs)
+
+
+class TransparentConfidences(Confidences):
+    def __init__(
+        self,
+        probs: Dict[str, float],
+        explanation: Explanation,
+    ):
+        self.explanation = explanation
+        Confidences.__init__(self, probs)
+
+
+def convert_log(log: Log, names: List[str]) -> Explanation:
+    explanation: Explanation = {}
+    for word2, weight, word_probs in log:
+        if word2 in explanation:
+            explanation[word2]["count"] += 1
+        else:
+            explanation[word2] = {
+                "weight": weight,
+                "probabilities": {
+                    name: word_probs[index] for index, name in enumerate(names)
+                },
+                "count": 1,
+            }
+    return explanation
+
+
+class Model:
+    def __init__(
+        self,
+        weights: Dict[int, List[int]],
+        names: List[str],
+        max_ngram_length: int,
+        hash_algorithm: str,
+    ):
+        self.weights = weights
+        self.names = names
+        self.max_ngram_length = max_ngram_length
+        self.hash_algorithm = hash_algorithm
+
+    def confidence(
+        self, text: str, max_ngram_length: int, transparent: bool = False
+    ) -> Confidences:
+        """Classify text with confidence.
+
+        Parameters
+        ----------
+        text : str
+            The text to classify
+
+        max_ngram_length : int
+            The maximum ngram length to use in classifying
+
+        Returns
+        -------
+        dict
+            {category:probability, category:probability...} or {} if no words
+            matching any categories in the model were found
+
+        """
+
+        model = self.weights
+        max_ngram_length = min(self.max_ngram_length, max_ngram_length)
+
+        raw_tokens = gptc.tokenizer.tokenize(
+            text, min(max_ngram_length, self.max_ngram_length)
+        )
+
+        tokens = gptc.tokenizer.hash_list(
+            raw_tokens,
+            self.hash_algorithm,
+        )
+
+        if transparent:
+            token_map = {tokens[i]: raw_tokens[i] for i in range(len(tokens))}
+            log: Log = []
+
+        numbered_probs: Dict[int, float] = {}
+
+        for word in tokens:
+            try:
+                unweighted_numbers = [
+                    i / 65535 for i in cast(List[float], model[word])
+                ]
+
+                weight, weighted_numbers = gptc.weighting.weight(
+                    unweighted_numbers
+                )
+
+                if transparent:
+                    log.append(
+                        (
+                            token_map[word],
+                            weight,
+                            unweighted_numbers,
+                        )
+                    )
+
+                for category, value in enumerate(weighted_numbers):
+                    try:
+                        numbered_probs[category] += value
+                    except KeyError:
+                        numbered_probs[category] = value
+            except KeyError:
+                pass
+
+        total = sum(numbered_probs.values())
+        probs: Dict[str, float] = {
+            self.names[category]: value / total
+            for category, value in numbered_probs.items()
+        }
+
+        if transparent:
+            explanation = convert_log(log, self.names)
+            return TransparentConfidences(probs, explanation)
+
+        return Confidences(probs)
+
+    def get(self, token: str) -> Dict[str, float]:
+        try:
+            weights = self.weights[
+                gptc.tokenizer.hash_single(
+                    gptc.tokenizer.normalize(token), self.hash_algorithm
+                )
+            ]
+        except KeyError:
+            return {}
+        return {
+            category: weights[index] / 65535
+            for index, category in enumerate(self.names)
+        }
+
+    def serialize(self, file: BinaryIO) -> None:
+        file.write(b"GPTC model v6\n")
+        file.write(
+            json.dumps(
+                {
+                    "names": self.names,
+                    "max_ngram_length": self.max_ngram_length,
+                    "hash_algorithm": self.hash_algorithm,
+                }
+            ).encode("utf-8")
+            + b"\n"
+        )
+        for word, weights in self.weights.items():
+            file.write(
+                word.to_bytes(6, "big")
+                + b"".join([weight.to_bytes(2, "big") for weight in weights])
+            )
+
+    @staticmethod
+    def compile(
+        raw_model: Iterable[Mapping[str, str]],
+        max_ngram_length: int = 1,
+        min_count: int = 1,
+        hash_algorithm: str = "sha256",
+    ) -> "Model":
+        """Compile a raw model.
+
+        Parameters
+        ----------
+        raw_model : list of dict
+            A raw GPTC model.
+
+        max_ngram_length : int
+            Maximum ngram length to compile with.
+
+        Returns
+        -------
+        dict
+            A compiled GPTC model.
+
+        """
+        word_counts, category_lengths, names = _count_words(
+            raw_model, max_ngram_length, hash_algorithm
+        )
+        model = _get_weights(min_count, word_counts, category_lengths, names)
+        return Model(model, names, max_ngram_length, hash_algorithm)
+
+    @staticmethod
+    def deserialize(encoded_model: BinaryIO) -> "Model":
+        prefix = encoded_model.read(14)
+        if prefix != b"GPTC model v6\n":
+            raise InvalidModelError()
+
+        config_json = b""
+        while True:
+            byte = encoded_model.read(1)
+            if byte == b"\n":
+                break
+
+            if byte == b"":
+                raise InvalidModelError()
+
+            config_json += byte
+
+        try:
+            config = json.loads(config_json.decode("utf-8"))
+        except (UnicodeDecodeError, json.JSONDecodeError) as exc:
+            raise InvalidModelError() from exc
+
+        try:
+            names = config["names"]
+            max_ngram_length = config["max_ngram_length"]
+            hash_algorithm = config["hash_algorithm"]
+        except KeyError as exc:
+            raise InvalidModelError() from exc
+
+        if not (
+            isinstance(names, list) and isinstance(max_ngram_length, int)
+        ) or not all(isinstance(name, str) for name in names):
+            raise InvalidModelError()
+
+        weight_code_length = 6 + 2 * len(names)
+
+        weights: Dict[int, List[int]] = {}
+
+        while True:
+            code = encoded_model.read(weight_code_length)
+            if not code:
+                break
+            if len(code) != weight_code_length:
+                raise InvalidModelError()
+
+            weights[int.from_bytes(code[:6], "big")] = [
+                int.from_bytes(value, "big")
+                for value in [code[x : x + 2] for x in range(6, len(code), 2)]
+            ]
+
+        return Model(weights, names, max_ngram_length, hash_algorithm)
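`Model.serialize()` and `Model.deserialize()` above define the v6 on-disk format: a `GPTC model v6\n` magic line, one line of JSON configuration (`names`, `max_ngram_length`, `hash_algorithm`), then fixed-width records, each a 6-byte big-endian token hash followed by one 2-byte weight per category. A minimal sketch that round-trips a model in memory and checks that layout (the training data is invented):

    # Inspect the v6 binary layout produced by Model.serialize().
    import io
    import json
    import gptc

    model = gptc.Model.compile(
        [{"category": "a", "text": "x y"}, {"category": "b", "text": "y z"}]
    )

    buffer = io.BytesIO()
    model.serialize(buffer)
    buffer.seek(0)

    assert buffer.readline() == b"GPTC model v6\n"  # magic/version line
    config = json.loads(buffer.readline())  # names, max_ngram_length, hash
    record_size = 6 + 2 * len(config["names"])  # 6-byte hash + 2 bytes/category

    records = buffer.read()
    assert len(records) % record_size == 0  # fixed-width weight records
    print(config, len(records) // record_size, "records")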
gptc/model_info.py (file deleted)

@@ -1,8 +0,0 @@
-# SPDX-License-Identifier: LGPL-3.0-or-later
-
-import gptc.compiler
-from typing import Dict, Union, cast, List
-
-
-def model_has_emoji(model: gptc.compiler.MODEL) -> bool:
-    return cast(int, model.get("__emoji__", 0)) == 1
gptc/pack.py | 22 changed lines

@@ -1,4 +1,4 @@
-# SPDX-License-Identifier: LGPL-3.0-or-later
+# SPDX-License-Identifier: GPL-3.0-or-later
 
 import sys
 import os
@@ -7,7 +7,7 @@ from typing import List, Dict, Tuple
 
 def pack(
     directory: str, print_exceptions: bool = False
-) -> Tuple[List[Dict[str, str]], List[Tuple[Exception]]]:
+) -> Tuple[List[Dict[str, str]], List[Tuple[OSError]]]:
     paths = os.listdir(directory)
     texts: Dict[str, List[str]] = {}
     exceptions = []
@@ -17,16 +17,18 @@ def pack(
         try:
             for file in os.listdir(os.path.join(directory, path)):
                 try:
-                    with open(os.path.join(directory, path, file)) as f:
-                        texts[path].append(f.read())
-                except Exception as e:
-                    exceptions.append((e,))
+                    with open(
+                        os.path.join(directory, path, file), encoding="utf-8"
+                    ) as input_file:
+                        texts[path].append(input_file.read())
+                except OSError as error:
+                    exceptions.append((error,))
                     if print_exceptions:
-                        print(e, file=sys.stderr)
-        except Exception as e:
-            exceptions.append((e,))
+                        print(error, file=sys.stderr)
+        except OSError as error:
+            exceptions.append((error,))
             if print_exceptions:
-                print(e, file=sys.stderr)
+                print(error, file=sys.stderr)
 
     raw_model = []
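`pack()` consumes a directory with one subdirectory per category, each file inside becoming part of that category's text in the raw model. A small sketch of that layout (the paths and texts are invented):

    # Build a raw model from a category-per-subdirectory layout.
    import os
    import gptc

    for category in ("good", "bad"):
        os.makedirs(os.path.join("unpacked", category), exist_ok=True)

    with open("unpacked/good/sample.txt", "w", encoding="utf-8") as f:
        f.write("a pleasant example")
    with open("unpacked/bad/sample.txt", "w", encoding="utf-8") as f:
        f.write("an unpleasant example")

    raw_model, errors = gptc.pack("unpacked", print_exceptions=True)
    print(raw_model)  # [{'category': 'good', 'text': ...}, ...]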
gptc/tokenizer.py

@@ -1,37 +1,33 @@
-# SPDX-License-Identifier: LGPL-3.0-or-later
+# SPDX-License-Identifier: GPL-3.0-or-later
 
-from typing import List, Union
-
-try:
-    import emoji
-
-    has_emoji = True
-except ImportError:
-    has_emoji = False
+import unicodedata
+from typing import List, cast
+import hashlib
+import emoji
 
 
-def tokenize(
-    text: str, max_ngram_length: int = 1, use_emoji: bool = True
-) -> List[str]:
-    """Convert a string to a list of lemmas."""
-    converted_text: Union[str, List[str]] = text.lower()
-
-    if has_emoji and use_emoji:
-        parts = []
-        highest_end = 0
-        for emoji_part in emoji.emoji_list(text):
-            parts += list(text[highest_end : emoji_part["match_start"]])
-            parts.append(emoji_part["emoji"])
-            highest_end = emoji_part["match_end"]
-        parts += list(text[highest_end:])
-        converted_text = [part for part in parts if part]
+def tokenize(text: str, max_ngram_length: int = 1) -> List[str]:
+    text = unicodedata.normalize("NFKD", text).casefold()
+    parts = []
+    highest_end = 0
+    for emoji_part in emoji.emoji_list(text):
+        parts += list(text[highest_end : emoji_part["match_start"]])
+        parts.append(emoji_part["emoji"])
+        highest_end = emoji_part["match_end"]
+    parts += list(text[highest_end:])
+    converted_text = [part for part in parts if part]
 
     tokens = [""]
 
     for char in converted_text:
-        if char.isalpha() or char == "'":
+        if (
+            char.isalpha()
+            or char.isnumeric()
+            or char == "'"
+            or (char in ",." and (" " + tokens[-1])[-1].isnumeric())
+        ):
             tokens[-1] += char
-        elif has_emoji and emoji.is_emoji(char):
+        elif emoji.is_emoji(char):
             tokens.append(char)
             tokens.append("")
         elif tokens[-1] != "":
@@ -41,9 +37,50 @@ def tokenize(
 
     if max_ngram_length == 1:
         return tokens
-    else:
-        ngrams = []
-        for ngram_length in range(1, max_ngram_length + 1):
-            for index in range(len(tokens) + 1 - ngram_length):
-                ngrams.append(" ".join(tokens[index : index + ngram_length]))
-        return ngrams
+
+    ngrams = []
+    for ngram_length in range(1, max_ngram_length + 1):
+        for index in range(len(tokens) + 1 - ngram_length):
+            ngrams.append(" ".join(tokens[index : index + ngram_length]))
+    return ngrams
+
+
+def _hash_single(token: str, hash_function: type) -> int:
+    return int.from_bytes(
+        hash_function(token.encode("utf-8")).digest()[:6], "big"
+    )
+
+
+def _get_hash_function(hash_algorithm: str) -> type:
+    if hash_algorithm in {
+        "md5",
+        "sha1",
+        "sha224",
+        "sha256",
+        "sha384",
+        "sha512",
+        "sha3_224",
+        "sha3_256",
+        "sha3_384",
+        "sha3_512",
+        "shake_128",
+        "shake_256",
+        "blake2b",
+        "blake2s",
+    }:
+        return cast(type, getattr(hashlib, hash_algorithm))
+
+    raise ValueError("not a valid hash function: " + hash_algorithm)
+
+
+def hash_single(token: str, hash_algorithm: str) -> int:
+    return _hash_single(token, _get_hash_function(hash_algorithm))
+
+
+def hash_list(tokens: List[str], hash_algorithm: str) -> List[int]:
+    hash_function = _get_hash_function(hash_algorithm)
+    return [_hash_single(token, hash_function) for token in tokens]
+
+
+def normalize(text: str) -> str:
+    return " ".join(tokenize(text, 1))
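The `hash_single()`/`hash_list()` helpers added here are what let compiled models key on 6-byte integers rather than raw tokens. The same computation, re-implemented standalone with plain `hashlib` for illustration:

    # Standalone sketch of GPTC's token hashing: the first 6 digest bytes,
    # read as a big-endian integer.
    import hashlib

    def hash_token(token: str, hash_algorithm: str = "sha256") -> int:
        digest = getattr(hashlib, hash_algorithm)(token.encode("utf-8")).digest()
        return int.from_bytes(digest[:6], "big")

    print(hash_token("wonderful"))  # 48-bit key, as stored in the model
    print(hash_token("wonderful", "md5"))  # the algorithm is recorded per model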
gptc/weighting.py

@@ -1,7 +1,7 @@
-# SPDX-License-Identifier: LGPL-3.0-or-later
+# SPDX-License-Identifier: GPL-3.0-or-later
 
 import math
-from typing import Sequence, Union, Tuple, List
+from typing import Sequence, Tuple, List
 
 
 def _mean(numbers: Sequence[float]) -> float:
@@ -39,8 +39,8 @@ def _standard_deviation(numbers: Sequence[float]) -> float:
     return math.sqrt(_mean(squared_deviations))
 
 
-def weight(numbers: Sequence[float]) -> List[float]:
+def weight(numbers: Sequence[float]) -> Tuple[float, List[float]]:
     standard_deviation = _standard_deviation(numbers)
-    weight = standard_deviation * 2
-    weighted_numbers = [i * weight for i in numbers]
-    return weighted_numbers
+    weight_assigned = standard_deviation * 2
+    weighted_numbers = [i * weight_assigned for i in numbers]
+    return weight_assigned, weighted_numbers
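`weight()` now returns the scalar weight (twice the population standard deviation of the per-category numbers) alongside the weighted numbers, which `Model.confidence()` records in its explanation log when `transparent=True`. The arithmetic, re-implemented independently as a worked example:

    # Worked example of the standard-deviation weighting.
    import math

    def weight(numbers):
        mean = sum(numbers) / len(numbers)
        standard_deviation = math.sqrt(
            sum((n - mean) ** 2 for n in numbers) / len(numbers)
        )
        weight_assigned = standard_deviation * 2
        return weight_assigned, [n * weight_assigned for n in numbers]

    # An even split carries no information and gets zero weight; a lopsided
    # split gets a large one.
    print(weight([0.5, 0.5]))  # (0.0, [0.0, 0.0])
    print(weight([0.9, 0.1]))  # roughly (0.8, [0.72, 0.08])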
models/compiled.gptc | BIN (new file)

Binary file not shown.

File diff suppressed because one or more lines are too long.
profiler.py | 16 lines (new file)

@@ -0,0 +1,16 @@
+# SPDX-License-Identifier: GPL-3.0-or-later
+
+import cProfile
+import gptc
+import json
+import sys
+
+max_ngram_length = 10
+
+with open("models/raw.json") as f:
+    raw_model = json.load(f)
+
+with open("models/benchmark_text.txt") as f:
+    text = f.read()
+
+cProfile.run("gptc.Model.compile(raw_model, max_ngram_length)")
pyproject.toml

@@ -4,7 +4,7 @@ build-backend = "setuptools.build_meta"
 
 [project]
 name = "gptc"
-version = "2.1.2"
+version = "4.0.1"
 description = "General-purpose text classifier"
 readme = "README.md"
 authors = [{ name = "Samuel Sloniker", email = "sam@kj7rrv.com"}]
@@ -12,15 +12,12 @@ classifiers = [
     "Programming Language :: Python",
     "Programming Language :: Python :: 3",
     "Development Status :: 5 - Production/Stable",
-    "License :: OSI Approved :: GNU Lesser General Public License v3 or later (LGPLv3+)",
+    "License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+)",
    "Operating System :: OS Independent",
 ]
-dependencies = []
+dependencies = ["emoji"]
 requires-python = ">=3.7"
 
-[project.optional-dependencies]
-emoji = ["emoji"]
-
 [project.urls]
 Homepage = "https://git.kj7rrv.com/kj7rrv/gptc"