tasks.RGB package

Submodules

tasks.RGB.semantic_segmentation module

class SemanticSegmentationRGB(model: Module, optimizer: Optimizer, loss_fn: Optional[Callable] = None, metric_train: Optional[Metric] = None, metric_val: Optional[Metric] = None, metric_test: Optional[Metric] = None, test_output_path: Optional[Union[str, Path]] = 'test_output', predict_output_path: Optional[Union[str, Path]] = 'predict_output', confusion_matrix_val: Optional[bool] = False, confusion_matrix_test: Optional[bool] = False, confusion_matrix_log_every_n_epoch: Optional[int] = 1, lr: float = 0.001)[source]

Bases: AbstractTask

Semantic Segmentation task for whole images that are RGB-encoded, i.e. the class is encoded in the color. The outputs produced during testing are also full images in RGB format. A construction sketch follows the parameter list below.

Parameters:
  • model (nn.Module) – The model to train, validate and test.

  • optimizer (torch.optim.Optimizer) – The optimizer used during training.

  • loss_fn (Callable) – The loss function used during training, validation, and testing.

  • metric_train (torchmetrics.Metric) – The metric used during training.

  • metric_val (torchmetrics.Metric) – The metric used during validation.

  • metric_test (torchmetrics.Metric) – The metric used during testing.

  • confusion_matrix_val (bool) – Whether to compute the confusion matrix during validation.

  • confusion_matrix_test (bool) – Whether to compute the confusion matrix during testing.

  • confusion_matrix_log_every_n_epoch (int) – The frequency of logging the confusion matrix.

  • lr (float) – The learning rate.
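
A minimal construction sketch (the model factory, number of classes, and metric choice are illustrative assumptions, not part of this API):

import torch
from torch import nn
import torchmetrics

model = build_segmentation_model(num_classes=8)  # hypothetical model factory
task = SemanticSegmentationRGB(
    model=model,
    optimizer=torch.optim.Adam(model.parameters(), lr=1e-3),
    loss_fn=nn.CrossEntropyLoss(),
    metric_val=torchmetrics.JaccardIndex(task="multiclass", num_classes=8),
    confusion_matrix_val=True,
    lr=1e-3,
)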

forward(x)[source]

Same as torch.nn.Module.forward().

Parameters:
  • x – The input passed through to the model (for this task, a batch of RGB images).

Returns:

Your model’s output
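
The returned value is whatever the wrapped model produces, typically a tensor of per-class logits for this task. A hedged call sketch, using the task instance from the construction sketch above (batch and image size are assumptions):

logits = task(torch.rand(4, 3, 256, 256))  # e.g. (N, num_classes, H, W) segmentation logits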

on_predict_start() → None[source]

Called at the beginning of predicting.

on_test_end() → None[source]

Called at the end of testing.

on_test_start() → None[source]

Called at the beginning of testing.

predict_step(batch: Any, batch_idx: int, dataloader_idx: Optional[int] = None) → Any[source]

Step function called during predict(). By default, it calls forward(). Override to add any processing logic.

predict_step() is used to scale inference across multiple devices.

To prevent an OOM error, it is possible to use the BasePredictionWriter callback to write the predictions to disk or a database after each batch or at the end of the epoch.

The BasePredictionWriter should also be used when running with a spawn-based strategy, e.g. Trainer(strategy="ddp_spawn") or training on 8 TPU cores with Trainer(accelerator="tpu", devices=8), since predictions will not be returned in those cases.
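
A hedged sketch of such a writer callback (output directory and file naming are illustrative):

import os
import torch
from pytorch_lightning.callbacks import BasePredictionWriter

class PredictionWriter(BasePredictionWriter):
    def __init__(self, output_dir, write_interval="batch"):
        super().__init__(write_interval)
        self.output_dir = output_dir

    def write_on_batch_end(self, trainer, pl_module, prediction,
                           batch_indices, batch, batch_idx, dataloader_idx):
        # persist each batch's predictions instead of keeping them in memory
        torch.save(prediction, os.path.join(self.output_dir, f"batch_{dataloader_idx}_{batch_idx}.pt"))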

Example

class MyModel(LightningModule):
    def predict_step(self, batch, batch_idx, dataloader_idx=0):
        return self(batch)


dm = ...
model = MyModel()
trainer = Trainer(accelerator="gpu", devices=2)
predictions = trainer.predict(model, dm)

Parameters:
  • batch – Current batch.

  • batch_idx – Index of current batch.

  • dataloader_idx – Index of the current dataloader.

Returns:

Predicted output

setup(stage: str) → None[source]

Called at the beginning of fit (train + validate), validate, test, or predict. This is a good hook when you need to build models dynamically or adjust something about them. This hook is called on every process when using DDP.

Parameters:

stage – either 'fit', 'validate', 'test', or 'predict'

Example:

class LitModel(...):
    def __init__(self):
        self.l1 = None

    def prepare_data(self):
        download_data()
        tokenize()

        # don't do this: state assigned here in prepare_data()
        # is not replicated across processes
        self.something = ...

    def setup(self, stage):
        data = load_data(...)
        self.l1 = nn.Linear(28, data.num_classes)

test_step(batch, batch_idx, **kwargs)[source]

Operates on a single batch of data from the test set. In this step you’d normally generate examples or calculate anything of interest such as accuracy.

# the pseudocode for these calls
test_outs = []
for test_batch in test_data:
    out = test_step(test_batch)
    test_outs.append(out)
test_epoch_end(test_outs)

Parameters:
  • batch – The output of your DataLoader.

  • batch_idx – The index of this batch.

  • dataloader_idx – The index of the dataloader that produced this batch (only if multiple test dataloaders are used).

Returns:

Any of the following:

  • Any object or value

  • None - Testing will skip to the next batch

# if you have one test dataloader:
def test_step(self, batch, batch_idx):
    ...


# if you have multiple test dataloaders:
def test_step(self, batch, batch_idx, dataloader_idx=0):
    ...

Examples:

# CASE 1: A single test dataset
def test_step(self, batch, batch_idx):
    x, y = batch

    # implement your own
    out = self(x)
    loss = self.loss(out, y)

    # log 6 example images
    # or generated text... or whatever
    sample_imgs = x[:6]
    grid = torchvision.utils.make_grid(sample_imgs)
    self.logger.experiment.add_image('example_images', grid, 0)

    # calculate acc
    labels_hat = torch.argmax(out, dim=1)
    test_acc = torch.sum(y == labels_hat).item() / (len(y) * 1.0)

    # log the outputs!
    self.log_dict({'test_loss': loss, 'test_acc': test_acc})

If you pass in multiple test dataloaders, test_step() will have an additional argument. We recommend setting the default value of 0 so that you can quickly switch between single and multiple dataloaders.

# CASE 2: multiple test dataloaders
def test_step(self, batch, batch_idx, dataloader_idx=0):
    # dataloader_idx tells you which dataset this is.
    ...

Note

If you don’t need to test you don’t need to implement this method.

Note

When the test_step() is called, the model has been put in eval mode and PyTorch gradients have been disabled. At the end of the test epoch, the model goes back to training mode and gradients are enabled.

static to_metrics_format(x: Tensor, **kwargs) → Tensor[source]

Convert the output of the model to the format needed for the metrics.

Parameters:
  • x (torch.Tensor) – the output of the model

  • kwargs (Any) – additional arguments

Returns:

the output in the format needed for the metrics

Return type:

torch.Tensor
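
The exact conversion is implementation-specific; a plausible sketch, assuming the model emits per-class logits and the metrics expect hard per-pixel labels:

import torch

def to_metrics_format(x: torch.Tensor, **kwargs) -> torch.Tensor:
    # illustrative only: collapse (N, C, H, W) logits into (N, H, W) class indices
    return torch.argmax(x, dim=1)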

training: bool

training_step(batch, batch_idx, **kwargs)[source]

The training step. Calls the step method and logs the metrics and loss.

Parameters:
  • batch (Any) – The current batch to train on.

  • batch_idx (int) – The index of the current batch.

  • kwargs (Any) – Additional arguments.

Returns:

The output of the step method.

Return type:

Any

validation_step(batch, batch_idx, **kwargs)[source]

The validation step. Calls the step method and logs the metrics and loss.

Parameters:
  • batch (Any) – The current batch to validate on.

  • batch_idx (int) – The index of the current batch.

  • kwargs (Any) – Additional arguments.

Returns:

The output of the step method.

Return type:

Any

static write_file_mapping(output_file_list: List[str], image_path_list: List[Path], output_path: Path, info_filename: str)[source]
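
The method is listed without a docstring; a hedged sketch of what a mapping writer with this signature could look like (the file format and the pairing of outputs to inputs are assumptions, not the actual implementation):

from pathlib import Path
from typing import List

def write_file_mapping(output_file_list: List[str], image_path_list: List[Path],
                       output_path: Path, info_filename: str) -> None:
    # assumption: record which output file corresponds to which input image
    with open(output_path / info_filename, "w") as f:
        for out_file, img_path in zip(output_file_list, image_path_list):
            f.write(f"{out_file}\t{img_path}\n")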

tasks.RGB.semantic_segmentation_cropped module

class SemanticSegmentationCroppedRGB(model: Module, optimizer: Optimizer, loss_fn: Optional[Callable] = None, metric_train: Optional[Metric] = None, metric_val: Optional[Metric] = None, metric_test: Optional[Metric] = None, test_output_path: Optional[Union[str, Path]] = 'test_output', predict_output_path: Optional[Union[str, Path]] = 'predict_output', confusion_matrix_val: Optional[bool] = False, confusion_matrix_test: Optional[bool] = False, confusion_matrix_log_every_n_epoch: Optional[int] = 1, lr: float = 0.001)[source]

Bases: AbstractTask

Semantic Segmentation task for cropped images that are RGB-encoded, i.e. the class is encoded in the color. The outputs produced during testing are patches in RGB format that can be stitched together with CroppedOutputMergerRGB, along with the raw network predictions in numpy format. A test-workflow sketch follows the parameter list below.

Parameters:
  • model (nn.Module) – The model to train, validate and test.

  • optimizer (torch.optim.Optimizer) – The optimizer used during training.

  • loss_fn (Callable) – The loss function used during training, validation, and testing.

  • metric_train (torchmetrics.Metric) – The metric used during training.

  • metric_val (torchmetrics.Metric) – The metric used during validation.

  • metric_test (torchmetrics.Metric) – The metric used during testing.

  • confusion_matrix_val (bool) – Whether to compute the confusion matrix during validation.

  • confusion_matrix_test (bool) – Whether to compute the confusion matrix during testing.

  • confusion_matrix_log_every_n_epoch (int) – The frequency of logging the confusion matrix.

  • lr (float) – The learning rate.
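
A hedged end-to-end sketch of the test workflow (the trainer configuration and datamodule are assumptions; task is an instance of SemanticSegmentationCroppedRGB constructed as per the signature above; the exact CroppedOutputMergerRGB call depends on the library and is only indicated in a comment):

import pytorch_lightning as pl

trainer = pl.Trainer(accelerator="gpu", devices=1)
trainer.test(task, datamodule=datamodule)
# The test loop writes RGB patches and raw numpy predictions under test_output/;
# these patches can then be stitched into full images with CroppedOutputMergerRGB.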

on_test_end() → None[source]

Called at the end of testing.

setup(stage: str) → None[source]

Called at the beginning of fit (train + validate), validate, test, or predict. This is a good hook when you need to build models dynamically or adjust something about them. This hook is called on every process when using DDP.

Parameters:

stage – either 'fit', 'validate', 'test', or 'predict'

Example:

class LitModel(...):
    def __init__(self):
        self.l1 = None

    def prepare_data(self):
        download_data()
        tokenize()

        # don't do this: state assigned here in prepare_data()
        # is not replicated across processes
        self.something = ...

    def setup(self, stage):
        data = load_data(...)
        self.l1 = nn.Linear(28, data.num_classes)

test_step(batch, batch_idx, **kwargs)[source]

Operates on a single batch of data from the test set. In this step you’d normally generate examples or calculate anything of interest such as accuracy.

# the pseudocode for these calls
test_outs = []
for test_batch in test_data:
    out = test_step(test_batch)
    test_outs.append(out)
test_epoch_end(test_outs)

Parameters:
  • batch – The output of your DataLoader.

  • batch_idx – The index of this batch.

  • dataloader_idx – The index of the dataloader that produced this batch (only if multiple test dataloaders are used).

Returns:

Any of the following:

  • Any object or value

  • None - Testing will skip to the next batch

# if you have one test dataloader:
def test_step(self, batch, batch_idx):
    ...


# if you have multiple test dataloaders:
def test_step(self, batch, batch_idx, dataloader_idx=0):
    ...

Examples:

# CASE 1: A single test dataset
def test_step(self, batch, batch_idx):
    x, y = batch

    # implement your own
    out = self(x)
    loss = self.loss(out, y)

    # log 6 example images
    # or generated text... or whatever
    sample_imgs = x[:6]
    grid = torchvision.utils.make_grid(sample_imgs)
    self.logger.experiment.add_image('example_images', grid, 0)

    # calculate acc
    labels_hat = torch.argmax(out, dim=1)
    test_acc = torch.sum(y == labels_hat).item() / (len(y) * 1.0)

    # log the outputs!
    self.log_dict({'test_loss': loss, 'test_acc': test_acc})

If you pass in multiple test dataloaders, test_step() will have an additional argument. We recommend setting the default value of 0 so that you can quickly switch between single and multiple dataloaders.

# CASE 2: multiple test dataloaders
def test_step(self, batch, batch_idx, dataloader_idx=0):
    # dataloader_idx tells you which dataset this is.
    ...

Note

If you don’t need to test you don’t need to implement this method.

Note

When the test_step() is called, the model has been put in eval mode and PyTorch gradients have been disabled. At the end of the test epoch, the model goes back to training mode and gradients are enabled.

static to_metrics_format(x: Tensor, **kwargs) → Tensor[source]

Convert the output of the model to the format needed for the metrics.

Parameters:
  • x (torch.Tensor) – the output of the model

  • kwargs (Any) – additional arguments

Returns:

the output in the format needed for the metrics

Return type:

torch.Tensor

training: bool

training_step(batch, batch_idx, **kwargs)[source]

The training step. Calls the step method and logs the metrics and loss.

Parameters:
  • batch (Any) – The current batch to train on.

  • batch_idx (int) – The index of the current batch.

  • kwargs (Any) – Additional arguments.

Returns:

The output of the step method.

Return type:

Any

validation_step(batch, batch_idx, **kwargs)[source]

The validation step. Calls the step method and logs the metrics and loss.

Parameters:
  • batch (Any) – The current batch to validate on.

  • batch_idx (int) – The index of the current batch.

  • kwargs (Any) – Additional arguments.

Returns:

The output of the step method.

Return type:

Any

Module contents