BaseTokenizer

class lightautoml.text.tokenizer.BaseTokenizer(n_jobs=4, to_string=True, **kwargs)

Bases: object

Base class for tokenizers.

__init__(n_jobs=4, to_string=True, **kwargs)

Tokenization with simple text cleaning and preprocessing.

Parameters
  • n_jobs (int) – Number of parallel workers used for processing.

  • to_string (bool) – If True, return each processed text as a single string; otherwise, return a list of tokens.
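
A minimal usage sketch (the argument values and sample texts are illustrative; in practice you would typically use a concrete subclass rather than the base class itself):

    from lightautoml.text.tokenizer import BaseTokenizer

    tokenizer = BaseTokenizer(n_jobs=2, to_string=True)
    result = tokenizer.tokenize(["Hello World!", "Second sentence."])
    # With to_string=True each element of result is a single cleaned string;
    # with to_string=False it would be a list of tokens instead.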

preprocess_sentence(snt)

Preprocess sentence string (lowercase, etc.).

Parameters
  snt (str) – Sentence string.

Return type
  str

Returns
  Resulting string.
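
The docstring only commits to "lowercase, etc.", so the exact cleaning rules are implementation-defined. A minimal sketch of a conforming override (lowercasing plus whitespace normalization is an assumption, not LightAutoML's actual rule set):

    import re

    def preprocess_sentence(snt: str) -> str:
        # Lowercase the sentence and collapse runs of whitespace.
        return re.sub(r"\s+", " ", snt.lower()).strip()

    preprocess_sentence("  Hello   WORLD ")  # -> 'hello world'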

tokenize_sentence(snt)

Convert sentence string to a list of tokens.

Parameters
  snt (str) – Sentence string.

Return type
  List[str]

Returns
  Resulting list of tokens.
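
A plausible minimal implementation is plain whitespace splitting (an assumption; concrete subclasses may use language-aware tokenization):

    def tokenize_sentence(snt: str) -> list:
        # Split the cleaned sentence string on spaces.
        return snt.split(" ")

    tokenize_sentence("hello world")  # -> ['hello', 'world']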

filter_tokens(snt)

Clean list of sentence tokens.

Parameters
  snt (List[str]) – List of tokens.

Return type
  List[str]

Returns
  Resulting list of filtered tokens.
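
What counts as "clean" is left to the implementation. One hypothetical filter drops empty tokens and stopwords (the STOPWORDS set below is invented for this sketch):

    STOPWORDS = {"a", "an", "the"}  # hypothetical filter set

    def filter_tokens(snt: list) -> list:
        # Keep non-empty tokens that are not in the stopword set.
        return [tok for tok in snt if tok and tok not in STOPWORDS]

    filter_tokens(["the", "quick", "", "fox"])  # -> ['quick', 'fox']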

postprocess_tokens(snt)

Additional processing steps: lemmatization, POS tagging, etc.

Parameters
  snt (List[str]) – List of tokens.

Return type
  List[str]

Returns
  Resulting list of processed tokens.
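
As a sketch of the kind of step this hook is meant for, here is a lemmatization pass built on NLTK's WordNetLemmatizer (chosen for illustration only, not necessarily what LightAutoML subclasses use):

    from nltk.stem import WordNetLemmatizer  # requires the NLTK wordnet data

    lemmatizer = WordNetLemmatizer()

    def postprocess_tokens(snt: list) -> list:
        # Map each token to its noun lemma; unknown words pass through unchanged.
        return [lemmatizer.lemmatize(tok) for tok in snt]

    postprocess_tokens(["cats", "geese"])  # -> ['cat', 'goose']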

postprocess_sentence(snt)

Postprocess sentence string (merge words).

Parameters
  snt (str) – Sentence string.

Return type
  str

Returns
  Resulting string.
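
"Merge words" is not specified further; one hypothetical reading is gluing known multiword expressions into single tokens (the MERGES table is invented for this sketch):

    MERGES = {"new york": "new_york"}  # hypothetical multiword expressions

    def postprocess_sentence(snt: str) -> str:
        # Replace each known phrase with its merged single-token form.
        for phrase, merged in MERGES.items():
            snt = snt.replace(phrase, merged)
        return snt

    postprocess_sentence("flights to new york")  # -> 'flights to new_york'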

tokenize(text)

Tokenize a list of texts.

Parameters
  text (List[str]) – List of texts.

Return type
  Union[List[List[str]], List[str]]

Returns
  Resulting tokenized list: one string per text if to_string is True, otherwise one list of tokens per text.
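
Putting the pieces together, a plausible per-text pipeline reads as the sketch below (the step order and the to_string attribute access are assumptions based on the method docstrings; the real method presumably also spreads the work over n_jobs workers):

    def tokenize_one(tokenizer, text: str):
        # Clean the raw string, split it, then filter and post-process tokens.
        snt = tokenizer.preprocess_sentence(text)
        tokens = tokenizer.tokenize_sentence(snt)
        tokens = tokenizer.filter_tokens(tokens)
        tokens = tokenizer.postprocess_tokens(tokens)
        if tokenizer.to_string:
            # Merge tokens back into one string and post-process it.
            return tokenizer.postprocess_sentence(" ".join(tokens))
        return tokens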