tasks.classification package
Submodules
tasks.classification.classification module
- class Classification(model: Module, optimizer: Optimizer, loss_fn: Optional[Callable] = None, metric_train: Optional[Metric] = None, metric_val: Optional[Metric] = None, metric_test: Optional[Metric] = None, confusion_matrix_val: Optional[bool] = False, confusion_matrix_test: Optional[bool] = False, confusion_matrix_log_every_n_epoch: Optional[int] = 1, lr: float = 0.001)[source]
Bases: AbstractTask
Class that performs the task of classification. It overrides the hooks of the base AbstractTask. During all stages (training, validation, test), the model is called on the input batch and its output is compared with the target batch: the loss is computed and the metrics are updated. No files or folders are created during testing. A construction sketch follows the parameter list below.
- Parameters:
model (nn.Module) – The model to train, validate and test.
optimizer (torch.optim.Optimizer) – The optimizer used during training.
loss_fn (Callable) – The loss function used during training, validation, and testing.
metric_train (torchmetrics.Metric) – The metric used during training.
metric_val (torchmetrics.Metric) – The metric used during validation.
metric_test (torchmetrics.Metric) – The metric used during testing.
confusion_matrix_val (bool) – Whether to compute the confusion matrix during validation.
confusion_matrix_test (bool) – Whether to compute the confusion matrix during testing.
confusion_matrix_log_every_n_epoch (int) – How often, in epochs, to log the confusion matrix.
lr (float) – The learning rate.
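A minimal construction sketch based on the signature above. The toy model, the torchmetrics.Accuracy arguments (torchmetrics >= 0.11), and the shapes are illustrative assumptions, not part of this API:

    import torch
    from torch import nn
    import torchmetrics

    # Toy two-class image model; any nn.Module works here (assumption).
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 2))
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    task = Classification(
        model=model,
        optimizer=optimizer,
        loss_fn=nn.CrossEntropyLoss(),
        metric_train=torchmetrics.Accuracy(task="multiclass", num_classes=2),
        metric_val=torchmetrics.Accuracy(task="multiclass", num_classes=2),
        confusion_matrix_val=True,             # also log a validation confusion matrix
        confusion_matrix_log_every_n_epoch=5,  # log it every 5 epochs
        lr=1e-3,
    )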
- forward(x)[source]
Same as torch.nn.Module.forward().
- Parameters:
*args – Whatever you decide to pass into the forward method.
**kwargs – Keyword arguments are also possible.
- Returns:
Your model’s output
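Since forward() returns the wrapped model's output, the task can be called like any nn.Module. A small sketch reusing the toy task from the construction example above; the input shape is an assumption matching that model:

    import torch

    x = torch.randn(4, 1, 28, 28)  # hypothetical batch of images
    logits = task(x)               # equivalent to task.forward(x)
    print(logits.shape)            # torch.Size([4, 2])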
- setup(stage: str) None [source]
Called at the beginning of fit (train + validate), validate, test, or predict. This is a good hook when you need to build models dynamically or adjust something about them. This hook is called on every process when using DDP.
- Parameters:
stage – either 'fit', 'validate', 'test', or 'predict'
Example:
    class LitModel(...):
        def __init__(self):
            self.l1 = None

        def prepare_data(self):
            download_data()
            tokenize()

            # don't do this
            self.something = ...

        def setup(self, stage):
            data = load_data(...)
            self.l1 = nn.Linear(28, data.num_classes)
- test_step(batch, batch_idx, **kwargs)[source]
Operates on a single batch of data from the test set. In this step you’d normally generate examples or calculate anything of interest such as accuracy.
    # the pseudocode for these calls
    test_outs = []
    for test_batch in test_data:
        out = test_step(test_batch)
        test_outs.append(out)
    test_epoch_end(test_outs)
- Parameters:
batch – The output of your DataLoader.
batch_idx – The index of this batch.
dataloader_idx – The index of the dataloader that produced this batch (only if multiple test dataloaders are used).
- Returns:
Any of:
- Any object or value
- None – Testing will skip to the next batch
    # if you have one test dataloader:
    def test_step(self, batch, batch_idx):
        ...

    # if you have multiple test dataloaders:
    def test_step(self, batch, batch_idx, dataloader_idx=0):
        ...
Examples:
    # CASE 1: A single test dataset
    def test_step(self, batch, batch_idx):
        x, y = batch

        # implement your own
        out = self(x)
        loss = self.loss(out, y)

        # log 6 example images
        # or generated text... or whatever
        sample_imgs = x[:6]
        grid = torchvision.utils.make_grid(sample_imgs)
        self.logger.experiment.add_image('example_images', grid, 0)

        # calculate acc
        labels_hat = torch.argmax(out, dim=1)
        test_acc = torch.sum(y == labels_hat).item() / (len(y) * 1.0)

        # log the outputs!
        self.log_dict({'test_loss': loss, 'test_acc': test_acc})
If you pass in multiple test dataloaders, test_step() will have an additional argument. We recommend setting the default value of 0 so that you can quickly switch between single and multiple dataloaders.

    # CASE 2: multiple test dataloaders
    def test_step(self, batch, batch_idx, dataloader_idx=0):
        # dataloader_idx tells you which dataset this is.
        ...
Note
If you don’t need to test you don’t need to implement this method.
Note
When test_step() is called, the model has been put in eval mode and PyTorch gradients have been disabled. At the end of the test epoch, the model goes back to training mode and gradients are enabled.
- training: bool
- training_step(batch, batch_idx, **kwargs)[source]
The training step. Calls the step method and logs the metrics and loss (sketched after this entry).
- Parameters:
batch (Any) – The current batch to train on.
batch_idx (int) – The index of the current batch.
kwargs (Any) – Additional arguments.
- Returns:
The output of the step method.
- Return type:
Any
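A hedged sketch of the "call the step method and log the metrics and loss" pattern described above. The step helper, attribute names, and log keys are assumptions for illustration, not the actual implementation:

    def training_step(self, batch, batch_idx, **kwargs):
        # Hypothetical shared helper that runs the model on the batch and
        # returns the loss plus predictions and targets (names assumed).
        loss, preds, target = self.step(batch)
        if self.metric_train is not None:
            self.metric_train.update(preds, target)
            self.log("metric_train", self.metric_train)  # assumed log key
        self.log("loss_train", loss)                     # assumed log key
        return loss

validation_step() below follows the same pattern with the validation metric.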
- validation_step(batch, batch_idx, **kwargs)[source]
The validation step. Calls the step method and logs the metrics and loss.
- Parameters:
batch (Any) – The current batch to validate on.
batch_idx (int) – The index of the current batch.
kwargs (Any) – Additional arguments.
- Returns:
The output of the step method.
- Return type:
Any
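The inherited hook docstrings above (setup(), test_step()) are the standard PyTorch Lightning ones, which suggests AbstractTask is a LightningModule. Under that assumption, a Classification task is driven by a Lightning Trainer; my_datamodule is a placeholder for a LightningDataModule you would supply:

    import pytorch_lightning as pl

    trainer = pl.Trainer(max_epochs=10)
    trainer.fit(task, datamodule=my_datamodule)   # runs training_step + validation_step
    trainer.test(task, datamodule=my_datamodule)  # runs test_step on the test set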