AutoCVWrap

class lightautoml.transformers.image.AutoCVWrap(model='efficientnet-b0', weights_path=None, cache_dir='./cache_CV', subs=None, device=torch.device('cuda:0'), n_jobs=4, random_state=42, is_advprop=True, batch_size=128, verbose=True)[source]

Bases: LAMLTransformer

Calculate image embeddings.

Parameters
  • model – Name of the EfficientNet model (e.g. 'efficientnet-b0').

  • weights_path (Optional[str]) – Path to saved weights.

  • cache_dir (str) – Path to the cache directory, or None to disable caching.

  • subs (Optional[Any]) – Subsample used to fit the transformer. If None, the full data is used.

  • device (device) – Torch device.

  • n_jobs (int) – Number of threads for dataloader.

  • random_state (int) – Random state to take subsample and set torch seed.

  • is_advprop (bool) – Use weights pretrained with adversarial propagation (AdvProp).

  • batch_size (int) – Batch size for embedding model.

  • verbose (bool) – Report progress during data processing.

property features

Features list.

Return type

List[str]

Returns

List of feature names.

fit(dataset)[source]

Fit the chosen transformer and create feature names.

Parameters

dataset (Union[NumpyDataset, PandasDataset]) – Pandas or Numpy dataset of image paths.

Returns

self.

transform(dataset)[source]

Transform dataset to image embeddings.

Parameters

dataset (Union[NumpyDataset, PandasDataset]) – Pandas or Numpy dataset of image paths.

Return type

NumpyDataset

Returns

Numpy dataset with image embeddings.