Kornia v0.3.0 Release Notes

Release Date: 2020-04-27
  • 🚀 Kornia 0.3.0 release

    🚀 Today we released 0.3.0, which aligns with the PyTorch release cycle and includes:

    • ๐Ÿ‘ Full support to PyTorch v1.5.
    • โœ… Semi-automated GPU tests coverage.
    • ๐Ÿ“š Documentation has been reorganized [docs]
    • Data augmentation API compatible with torchvision v0.6.0.
    • Well integration with ecosystem e.g. Pytorch-Lightning.

    🚀 For more detailed changes, check out v0.2.1 and v0.2.2.

    Highlights

    Data Augmentation

    ✅ We provide kornia.augmentation, a high-level framework that builds on the Kornia core functionalities and is fully compatible with torchvision. It supports batched mode and multiple devices (CPU, GPU, and XLA/TPU coming soon), is auto-differentiable, and can retrieve (and chain) the applied geometric transforms. To check how to reproduce torchvision behavior in Kornia, refer to this Colab: Kornia vs. Torchvision @shijianjian

    import torch
    import kornia as K
    import torchvision as T

    # kornia
    transform_fcn = torch.nn.Sequential(
        K.augmentation.RandomAffine(
            [-45., 45.], [0., 0.5], [0.5, 1.5], [0., 0.5], return_transform=True),
        K.color.Normalize(0.1307, 0.3081),
    )

    # torchvision
    transform_fcn = T.transforms.Compose([
        T.transforms.RandomAffine([-45., 45.], [0., 0.5], [0.5, 1.5], [0., 0.5]),
        T.transforms.ToTensor(),
        T.transforms.Normalize((0.1307,), (0.3081,)),
    ])
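
    A minimal usage sketch (the random input batch below is made up for illustration, not taken from the release notes): because return_transform=True is set, the augmentation hands back both the augmented batch and the 3x3 matrices of the geometric transform it applied.

    import torch
    import kornia as K

    # a random batch of 16 single-channel 32x32 images, just for the sketch
    images = torch.rand(16, 1, 32, 32)

    aug = K.augmentation.RandomAffine([-45., 45.], return_transform=True)

    # with return_transform=True the module returns (augmented batch, transform matrices)
    images_out, transform = aug(images)
    print(images_out.shape)  # torch.Size([16, 1, 32, 32])
    print(transform.shape)   # torch.Size([16, 3, 3])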
    

    Ecosystem compatibility

    Kornia has been designed to be very flexible so that it can be integrated into other existing frameworks. The example below shows how easily you can define a custom data augmentation pipeline that can later be integrated into any training framework, such as PyTorch Lightning (see the sketch after the pipeline). We provide examples in [here] and [here].

    import torch
    import torch.nn as nn
    import kornia as K

    class DataAugmentationPipeline(nn.Module):
        """Module to perform data augmentation using Kornia on torch tensors."""

        def __init__(self, apply_color_jitter: bool = False) -> None:
            super().__init__()
            self._apply_color_jitter = apply_color_jitter

            self._max_val: float = 1024.

            self.transforms = nn.Sequential(
                K.augmentation.Normalize(0., self._max_val),
                K.augmentation.RandomHorizontalFlip(p=0.5),
            )

            self.jitter = K.augmentation.ColorJitter(0.5, 0.5, 0.5, 0.5)

        @torch.no_grad()  # disable gradients for efficiency
        def forward(self, x: torch.Tensor) -> torch.Tensor:
            x_out = self.transforms(x)
            if self._apply_color_jitter:
                x_out = self.jitter(x_out)
            return x_out
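
    As a loose illustration of that integration (the LitClassifier name, its backbone argument, and the optimizer settings below are hypothetical and not taken from the linked examples), the pipeline can simply be called on each batch inside a LightningModule's training_step:

    import pytorch_lightning as pl
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class LitClassifier(pl.LightningModule):
        def __init__(self, backbone: nn.Module) -> None:
            super().__init__()
            self.backbone = backbone
            # the augmentation runs on whatever device the batch lives on (CPU or GPU)
            self.augment = DataAugmentationPipeline(apply_color_jitter=True)

        def training_step(self, batch, batch_idx):
            x, y = batch
            x_aug = self.augment(x)  # batched Kornia augmentation
            logits = self.backbone(x_aug)
            return F.cross_entropy(logits, y)

        def configure_optimizers(self):
            return torch.optim.Adam(self.backbone.parameters(), lr=1e-3)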
    

    ✅ GPU tests

    ✅ It is now easy to run GPU tests with pytest --typetest cuda
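
    The sketch below is an assumption about how such a device switch can be wired up with pytest (the option name mirrors the command above, but the conftest.py and the example test are illustrative, not Kornia's actual test configuration):

    # conftest.py -- illustrative sketch only, not Kornia's actual test setup
    import pytest
    import torch

    def pytest_addoption(parser):
        # expose the target device as a command-line option, e.g. --typetest cuda
        parser.addoption("--typetest", default="cpu",
                         help="device on which to run the tests: cpu or cuda")

    @pytest.fixture
    def device(request) -> torch.device:
        return torch.device(request.config.getoption("--typetest"))

    # example of a test that runs on whichever device was requested
    def test_rgb_to_grayscale_runs(device):
        import kornia
        img = torch.rand(1, 3, 4, 5, device=device)
        out = kornia.rgb_to_grayscale(img)
        assert out.device == img.device
        assert out.shape[-2:] == img.shape[-2:]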