kaolin.models

PointNetFeatureExtractor(in_channels: int = 3, feat_size: int = 1024, layer_dims: Iterable[int] = [64, 128], global_feat: bool = True, activation=<function relu>, batchnorm: bool = True, transposed_input: bool = False)[source]

PointNet feature extractor (extracts either global or local, i.e., per-point features).

Based on the original PointNet paper.

Note

If you use this code, please cite the original paper in addition to Kaolin.

@article{qi2016pointnet,
  title={PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation},
  author={Qi, Charles R and Su, Hao and Mo, Kaichun and Guibas, Leonidas J},
  journal={arXiv preprint arXiv:1612.00593},
  year={2016}
}
Parameters
  • in_channels (int) – Number of channels in the input pointcloud (default: 3, for X, Y, Z coordinates respectively).

  • feat_size (int) – Size of the global feature vector (default: 1024)

  • layer_dims (Iterable[int]) – Sizes of fully connected layers to be used in the feature extractor (excluding the input and the output layer sizes). Note: the number of layers in the feature extractor is implicitly parsed from this variable.

  • global_feat (bool) – Extract global features (i.e., one feature for the entire pointcloud) if set to True. If set to False, extract per-point (local) features (default: True).

  • activation (function) – Nonlinearity to be used as activation function after each batchnorm (default: F.relu)

  • batchnorm (bool) – Whether or not to use batchnorm layers (default: True)

  • transposed_input (bool) – Whether the input’s second and third dimension is already transposed. If so, a transpose operation can be avoided, improving performance. See documentation for the forward method for more details.

For example, to specify a PointNet feature extractor with 5 linear layers (sizes 3 -> 10, 10 -> 20, 20 -> 40, 40 -> 500, 500 -> 1024), with 3 input channels in the pointcloud and a global feature vector of size 1024, see the example below.

Example

>>> pointnet = PointNetFeatureExtractor(in_channels=3, feat_size=1024,
...                                     layer_dims=[10, 20, 40, 500])
>>> x = torch.rand(2, 3, 30)
>>> y = pointnet(x)
>>> print(y.shape)
PointNetClassifier(in_channels: int = 3, feat_size: int = 1024, num_classes: int = 2, dropout: float = 0.0, classifier_layer_dims: Iterable[int] = [512, 256], feat_layer_dims: Iterable[int] = [64, 128], activation=<function relu>, batchnorm: bool = True, transposed_input: bool = False)[source]

PointNet classifier. Uses the PointNet feature extractor, and adds classification layers on top.

Note

If you use this code, please cite the original paper in addition to Kaolin.

@article{qi2016pointnet,
  title={PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation},
  author={Qi, Charles R and Su, Hao and Mo, Kaichun and Guibas, Leonidas J},
  journal={arXiv preprint arXiv:1612.00593},
  year={2016}
}
Parameters
  • in_channels (int) – Number of channels in the input pointcloud (default: 3, for X, Y, Z coordinates respectively).

  • feat_size (int) – Size of the global feature vector (default: 1024)

  • num_classes (int) – Number of classes (for the classification task) (default: 2).

  • dropout (float) – Dropout ratio to use (default: 0.). Note: If the ratio is set to 0., we altogether skip using a dropout layer.

  • classifier_layer_dims (Iterable[int]) – Sizes of fully connected layers to be used in the classifier (excluding the input and the output layer sizes). Note: the number of layers in the classifier is implicitly parsed from this variable.

  • feat_layer_dims (Iterable[int]) – Sizes of fully connected layers to be used in the feature extractor (excluding the input and the output layer sizes). Note: the number of layers in the feature extractor is implicitly parsed from this variable.

  • activation (function) – Nonlinearity to be used as activation function after each batchnorm (default: F.relu)

  • batchnorm (bool) – Whether or not to use batchnorm layers (default: True)

  • transposed_input (bool) – Whether the input’s second and third dimension is already transposed. If so, a transpose operation can be avoided, improving performance. See documentation of PointNetFeatureExtractor for more details.

Example

>>> pointnet = PointNetClassifier(in_channels=6, feat_size=1024,
...                               feat_layer_dims=[32, 64, 256],
...                               classifier_layer_dims=[500, 200, 100])
>>> x = torch.rand(5, 6, 30)
>>> y = pointnet(x)
>>> print(y.shape)

PointNetSegmenter(in_channels: int = 3, feat_size: int = 1024, num_classes: int = 2, dropout: float = 0.0, classifier_layer_dims: Iterable[int] = [512, 256], feat_layer_dims: Iterable[int] = [64, 128], activation=<function relu>, batchnorm: bool = True, transposed_input: bool = False)[source]

PointNet segmenter. Uses the PointNet feature extractor, and adds per-point segmentation layers on top.

Note

If you use this code, please cite the original paper in addition to Kaolin.

@article{qi2016pointnet,
  title={PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation},
  author={Qi, Charles R and Su, Hao and Mo, Kaichun and Guibas, Leonidas J},
  journal={arXiv preprint arXiv:1612.00593},
  year={2016}
}
Parameters
  • in_channels (int) – Number of channels in the input pointcloud (default: 3, for X, Y, Z coordinates respectively).

  • feat_size (int) – Size of the global feature vector (default: 1024)

  • num_classes (int) – Number of classes (for the segmentation task) (default: 2).

  • dropout (float) – Dropout ratio to use (default: 0.). Note: If the ratio is set to 0., we altogether skip using a dropout layer.

  • classifier_layer_dims (Iterable[int]) – Sizes of fully connected layers to be used in the per-point classifier (excluding the input and the output layer sizes). Note: the number of layers is implicitly parsed from this variable.

  • feat_layer_dims (Iterable[int]) – Sizes of fully connected layers to be used in the feature extractor (excluding the input and the output layer sizes). Note: the number of layers in the feature extractor is implicitly parsed from this variable.

  • activation (function) – Nonlinearity to be used as activation function after each batchnorm (default: F.relu)

  • batchnorm (bool) – Whether or not to use batchnorm layers (default: True)

  • transposed_input (bool) – Whether the input’s second and third dimension is already transposed. If so, a transpose operation can be avoided, improving performance. See documentation of PointNetFeatureExtractor for more details.

Example

>>> pointnet = PointNetSegmenter(in_channels=6, feat_size=1024,
...                              feat_layer_dims=[32, 64, 256],
...                              classifier_layer_dims=[500, 200, 100])
>>> x = torch.rand(5, 6, 30)
>>> y = pointnet(x)
>>> print(y.shape)

PointNet2SetAbstraction(num_points_out, pointnet_in_features, pointnet_layer_dims_list, radii_list=None, num_samples_list=None, batchnorm=True, use_xyz_feature=True, use_random_ball_query=False)[source]

A single set-abstraction layer for the PointNet++ architecture. Supports multi-scale grouping (MSG).

Note

If you use this code, please cite the original paper in addition to Kaolin.

@article{qi2017pointnet2,
    title = {PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space},
    author = {Qi, Charles R. and Yi, Li and Su, Hao and Guibas, Leonidas J.},
    year = {2017},
    journal={arXiv preprint arXiv:1706.02413},
}
Parameters
  • num_points_out (int|None) – The number of output points. If None, group all points together.

  • pointnet_in_features (int) – The number of features to input into pointnet. Note: if use_xyz_feature is true, this value will be increased by 3.

  • pointnet_layer_dims_list (List[List[int]]) – The pointnet MLP dimensions list for each scale. Note: the first (input) dimension SHOULD NOT be included in each list, while the last (output) dimension SHOULD be included in each list.

  • radii_list (List[float]|None) – The grouping radius for each scale. If num_points_out is None, this value is ignored.

  • num_samples_list (List[int]|None) – The number of samples in each ball query for each scale. If num_points_out is None, this value is ignored.

  • batchnorm (bool) – Whether or not to use batch normalization.

  • use_xyz_feature (bool) – Whether or not to use the coordinates of the points as feature.

  • use_random_ball_query (bool) – Whether or not to use random sampling when there are too many points per ball.
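Example

A minimal construction sketch with two grouping scales; the values are illustrative, not defaults from the paper. Because use_xyz_feature defaults to True, each per-scale PointNet MLP receives 3 xyz channels in addition to pointnet_in_features.

sa = PointNet2SetAbstraction(
    num_points_out=512,                       # downsample the cloud to 512 centroids
    pointnet_in_features=0,                   # no extra per-point features beyond xyz
    pointnet_layer_dims_list=[[32, 32, 64],   # MLP for scale 1 (input dim excluded, output dim included)
                              [64, 64, 128]], # MLP for scale 2
    radii_list=[0.1, 0.2],                    # ball-query radius per scale
    num_samples_list=[16, 32])                # samples per ball query, per scale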

PointNet2FeaturePropagator(num_features, num_features_prev, layer_dims, batchnorm=True)[source]

A single feature-propagation layer for the PointNet++ architecture.

Used for segmentation.

Note

If you use this code, please cite the original paper in addition to Kaolin.

@article{qi2017pointnet2,
    title = {PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space},
    author = {Qi, Charles R. and Yi, Li and Su, Hao and Guibas, Leonidas J.},
    year = {2017},
    journal={arXiv preprint arXiv:1706.02413},
}
Parameters
  • num_features (int) – The number of features in the current layer. Note: this is the number of output features of the corresponding set abstraction layer.

  • num_features_prev (int) – The number of features from the previous feature propagation layer (corresponding to the next layer during feature extraction). Note: this is the number of output features of the previous feature propagation layer (or the number of output features of the final set abstraction layer, if this is the very first feature propagation layer)

  • layer_dims (List[int]) – Sizes of the MLP layer. Note: the first (input) dimension SHOULD NOT be included in the list, while the last (output) dimension SHOULD be included in the list.

  • batchnorm (bool) – Whether or not to use batch normalization.
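Example

A minimal construction sketch with illustrative values: 128 features from the corresponding set-abstraction layer are combined with 256 features from the previous propagation step and passed through a two-layer MLP.

fp = PointNet2FeaturePropagator(
    num_features=128,        # output features of the corresponding set-abstraction layer
    num_features_prev=256,   # output features of the previous feature-propagation layer
    layer_dims=[256, 128])   # MLP sizes (input dim excluded, output dim included)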

PointNet2Classifier(in_features=0, num_classes=2, batchnorm=True, use_xyz_feature=True, use_random_ball_query=False)[source]

PointNet++ classification network.

Based on the original PointNet++ paper.

Note

If you use this code, please cite the original paper in addition to Kaolin.

@article{qi2017pointnet2,
    title = {PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space},
    author = {Qi, Charles R. and Yi, Li and Su, Hao and Guibas, Leonidas J.},
    year = {2017},
    journal={arXiv preprint arXiv:1706.02413},
}
Parameters
  • in_features (int) – Number of features (not including xyz coordinates) in the input point cloud (default: 0).

  • num_classes (int) – Number of classes (for the classification task) (default: 2).

  • batchnorm (bool) – Whether or not to use batch normalization. (default: True)

  • use_xyz_feature (bool) – Whether or not to use the coordinates of the points as feature.

  • use_random_ball_query (bool) – Whether or not to use random sampling when there are too many points per ball.

TODO: Documentation: add example
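Until an example is added, a minimal construction sketch based only on the constructor signature above (the expected input layout of the forward pass is not documented here):

model = PointNet2Classifier(in_features=0, num_classes=40)  # xyz-only input, 40 classes (illustrative)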

PointNet2Segmenter(in_features=0, num_classes=2, batchnorm=True, use_xyz_feature=True, use_random_ball_query=False)[source]

PointNet++ segmentation network.

Note

If you use this code, please cite the original paper in addition to Kaolin.

@article{qi2017pointnet2,
    title = {PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space},
    author = {Qi, Charles R. and Yi, Li and Su, Hao and Guibas, Leonidas J.},
    year = {2017},
    journal={arXiv preprint arXiv:1706.02413},
}
Parameters
  • in_features (int) – Number of features (not including xyz coordinates) in the input point cloud (default: 0).

  • num_classes (int) – Number of classes (for the classification task) (default: 2).

  • batchnorm (bool) – Whether or not to use batch normalization. (default: True)

  • use_xyz_feature (bool) – Whether or not to use the coordinates of the points as feature.

  • use_random_ball_query (bool) – Whether or not to use random sampling when there are too many points per ball.

TODO: Documentation: add example
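Until an example is added, a minimal construction sketch based only on the constructor signature above (the expected input layout of the forward pass is not documented here):

model = PointNet2Segmenter(in_features=0, num_classes=50)  # xyz-only input, 50 part labels (illustrative)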

DGCNN(input_dim=3, conv_dims=[64, 64, 128, 256], emb_dims=1024, fc_dims=[512, 256], output_channels=64, dropout=0.5, k=20, use_cuda=True)[source]

Implementation of DGCNN (Dynamic Graph CNN) for pointcloud classification.

Parameters
  • input_dim (int) – number of features per point. Default: 3 (xyz point coordinates)

  • conv_dims (list) – list of output feature dimensions of the convolutional layers. Default: [64, 64, 128, 256] (as proposed in the original implementation).

  • emb_dims (int) – dimensionality of the intermediate embedding.

  • fc_dims (list) – list of output feature dimensions of the fully connected layers. Default: [512, 256] (as proposed in the original implementation).

  • output_channels (int) – number of output channels. Default: 64.

  • dropout (float) – dropout probability (applied to fully connected layers only). Default: 0.5.

  • k (int) – number of nearest neighbors.

  • use_cuda (bool) – if True, moves the model to the GPU. Default: True.

Note

If you use this code, please cite the original paper in addition to Kaolin.

@article{dgcnn,
    title={Dynamic Graph CNN for Learning on Point Clouds},
    author={Wang, Yue and Sun, Yongbin and Liu, Ziwei and Sarma, Sanjay E. and Bronstein, Michael M. and Solomon, Justin M.},
    journal={ACM Transactions on Graphics (TOG)},
    year={2019}
}
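Example

A minimal usage sketch. The B x input_dim x num_points layout is an assumption carried over from the reference DGCNN implementation; output_channels is used here as the number of classes.

model = DGCNN(input_dim=3, output_channels=40, use_cuda=False)  # illustrative values
x = torch.rand(2, 3, 1024)                                      # assumed layout: B x input_dim x num_points
logits = model(x)                                               # expected shape: B x output_channels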
Voxel3DIWGenerator()[source]

Voxel generator network from the 3D-IWGAN architecture of Smith and Meger, “Improved Adversarial Systems for 3D Object Generation and Reconstruction”.

https://arxiv.org/abs/1707.09557

Input shape: B x 200
Output shape: B x 32 x 32 x 32

Note

If you use this code, please cite the original paper in addition to Kaolin.

@article{DBLP:journals/corr/SmithM17,
  author    = {Edward J. Smith and
               David Meger},
  title     = {Improved Adversarial Systems for 3D Object Generation and Reconstruction},
  journal   = {CoRR},
  volume    = {abs/1707.09557},
  year      = {2017},
  url       = {http://arxiv.org/abs/1707.09557},
  archivePrefix = {arXiv},
  eprint    = {1707.09557},
  timestamp = {Mon, 13 Aug 2018 16:46:50 +0200},
  biburl    = {https://dblp.org/rec/bib/journals/corr/SmithM17},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
Voxel3DIWDiscriminator()[source]

Voxel discriminator network from the 3D-IWGAN architecture of Smith and Meger, “Improved Adversarial Systems for 3D Object Generation and Reconstruction”.

https://arxiv.org/abs/1707.09557

Input shape: B x 32 x 32 x 32
Output shape: B x 1
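Example

A minimal sketch wiring the generator and discriminator together, using the documented input/output shapes (the batch size of 4 is illustrative):

gen = Voxel3DIWGenerator()
disc = Voxel3DIWDiscriminator()
z = torch.rand(4, 200)   # latent codes: B x 200
voxels = gen(z)          # generated voxel grids: B x 32 x 32 x 32
scores = disc(voxels)    # critic scores: B x 1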

SuperresNetwork(high, low)[source]

Voxel super-resolution network from “Multi-View Silhouette and Depth Decomposition for High Resolution 3D Object Representation” (Smith et al., NeurIPS 2018).

https://arxiv.org/abs/1802.09987

Input shape: B x 128 x 128 x 128
Output shape: B x (high//low * 128) x (high//low * 128) x (high//low * 128)

Note

If you use this code, please cite the original paper in addition to Kaolin.

@incollection{ODM,
    title = {Multi-View Silhouette and Depth Decomposition for High Resolution 3D Object Representation},
    author = {Smith, Edward and Fujimoto, Scott and Meger, David},
    booktitle = {Advances in Neural Information Processing Systems 31},
    editor = {S. Bengio and H. Wallach and H. Larochelle and K. Grauman and N. Cesa-Bianchi and R. Garnett},
    pages = {6479--6489},
    year = {2018},
    publisher = {Curran Associates, Inc.},
    url = {http://papers.nips.cc/paper/7883-multi-view-silhouette-and-depth-decomposition-for-high-resolution-3d-object-representation.pdf}
}
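Example

A minimal sketch using the documented shapes; the values of high and low are illustrative. With high=256 and low=128, the output side length is (high // low) * 128 = 256.

net = SuperresNetwork(high=256, low=128)
coarse = torch.rand(1, 128, 128, 128)  # input: B x 128 x 128 x 128
fine = net(coarse)                     # expected output: B x 256 x 256 x 256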
EncoderDecoder()[source]

A simple encoder-decoder style voxel super-resolution network.

class DIBREncoder(N_CHANNELS, N_KERNELS, BATCH_SIZE, IMG_DIM, VERTS)[source]

Encoder architecture used for single-image based mesh prediction in the NeurIPS 2019 paper “Learning to Predict 3D Objects with an Interpolation-based Differentiable Renderer”.

Note

If you use this code, please cite the original paper in addition to Kaolin.

@inproceedings{chen2019dibrender,
    title={Learning to Predict 3D Objects with an Interpolation-based Differentiable Renderer},
    author={Wenzheng Chen and Jun Gao and Huan Ling and Edward Smith and Jaakko Lehtinen and Alec Jacobson and Sanja Fidler},
    booktitle={Advances In Neural Information Processing Systems},
    year={2019}
}
forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class VoxelDecoder(latent_length)[source]

Note

If you use this code, please cite the original paper in addition to Kaolin.

@InProceedings{smith19a,
    title = {{GEOM}etrics: Exploiting Geometric Structure for Graph-Encoded Objects},
    author = {Smith, Edward and Fujimoto, Scott and Romero, Adriana and Meger, David},
    booktitle = {Proceedings of the 36th International Conference on Machine Learning},
    pages = {5866--5876},
    year = {2019},
    volume = {97},
    series = {Proceedings of Machine Learning Research},
    publisher = {PMLR},
}
forward(latent)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class GraphResNet(input_features, hidden=192, output_features=3)[source]

An enhanced version of the MeshEncoder; uses residual connections across graph convolution layers.

forward(features, adj)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class ImageToMeshReconstructionBaseline(N_CHANNELS, N_KERNELS, BATCH_SIZE, IMG_DIM, VERTS)[source]

A simple mesh reconstruction architecture from images. This serves as a baseline for mesh reconstruction systems.

Note

If you use this code, please cite the original paper in addition to Kaolin.

@InProceedings{smith19a,
    title = {{GEOM}etrics: Exploiting Geometric Structure for Graph-Encoded Objects},
    author = {Smith, Edward and Fujimoto, Scott and Romero, Adriana and Meger, David},
    booktitle = {Proceedings of the 36th International Conference on Machine Learning},
    pages = {5866--5876},
    year = {2019},
    volume = {97},
    series = {Proceedings of Machine Learning Research},
    publisher = {PMLR},
}
forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class MeshEncoder(latent_length)[source]

A simple mesh encoder architecture. Takes in a polygon mesh (graph) and encodes each node feature into a compact latent code.

forward(positions, adj)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class CBatchNorm1d(c_dim, f_dim, norm_method='batch_norm')[source]

Conditional batch normalization layer class.

Parameters
  • c_dim (int) – dimension of latent conditioned code c

  • f_dim (int) – feature dimension

  • norm_method (str) – normalization method

forward(x, c)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
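A minimal usage sketch; the tensor layouts (features as B x f_dim x T, condition as B x c_dim) are assumptions based on standard conditional batch normalization and are not stated above.

cbn = CBatchNorm1d(c_dim=128, f_dim=256)
x = torch.rand(4, 256, 1024)  # features: B x f_dim x T (assumed layout)
c = torch.rand(4, 128)        # conditioning code: B x c_dim (assumed layout)
out = cbn(x, c)               # normalized features, same shape as x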

class CResnetBlockConv1d(c_dim, size_in, size_h=None, size_out=None, norm_method='batch_norm', legacy=False)[source]

Conditional batch normalization-based ResNet block class.

Parameters
  • c_dim (int) – dimension of latent conditioned code c

  • size_in (int) – input dimension

  • size_out (int) – output dimension

  • size_h (int) – hidden dimension

  • norm_method (str) – normalization method

  • legacy (bool) – whether to use legacy blocks

forward(x, c)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class DecoderCBatchNorm(dim=3, z_dim=128, c_dim=128, hidden_size=256, leaky=False, legacy=False)[source]

Decoder with conditional batch normalization (CBN) class.

Parameters
  • dim (int) – input dimension

  • z_dim (int) – dimension of latent code z

  • c_dim (int) – dimension of latent conditioned code c

  • hidden_size (int) – hidden size of the Decoder network

  • leaky (bool) – whether to use leaky ReLUs

  • legacy (bool) – whether to use the legacy structure

forward(p, z, c, **kwargs)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class OccupancyNetwork(device)[source]

Occupancy Network class.

Parameters
  • decoder (nn.Module) – decoder network

  • encoder (nn.Module) – encoder network

  • p0_z (dist) – prior distribution for latent code z

  • device (device) – torch device

Note

If you use this code, please cite the original paper in addition to Kaolin.

@inproceedings{Occupancy Networks,
    title = {Occupancy Networks: Learning 3D Reconstruction in Function Space},
    author = {Mescheder, Lars and Oechsle, Michael and Niemeyer, Michael and Nowozin, Sebastian and Geiger, Andreas},
    booktitle = {Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)},
    year = {2019}
}
compute_elbo(p, occ, inputs, **kwargs)[source]

Computes the expectation lower bound.

Parameters
  • p (tensor) – sampled points

  • occ (tensor) – occupancy values for p

  • inputs (tensor) – conditioning input

decode(p, z, c, **kwargs)[source]

Returns occupancy probabilities for the sampled points.

Parameters
  • p (tensor) – points

  • z (tensor) – latent code z

  • c (tensor) – latent conditioned code c

encode_inputs(inputs)[source]

Encodes the input.

Parameters
  • inputs (tensor) – the input

forward(p, inputs, sample=True, **kwargs)[source]

Performs a forward pass through the network.

Parameters
  • p (tensor) – sampled points

  • inputs (tensor) – conditioning input

  • sample (bool) – whether to sample for z

get_z_from_prior(size=torch.Size([]), sample=True)[source]

Returns z from the prior distribution.

Parameters
  • size (Size) – size of z

  • sample (bool) – whether to sample

infer_z(p, occ, c, **kwargs)[source]

Infers z.

Parameters
  • p (tensor) – points tensor

  • occ (tensor) – occupancy values for p

  • c (tensor) – latent conditioned code c
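Example

A minimal usage sketch; all shapes are illustrative assumptions, and the conditioning input is assumed to be an image batch (matching the Resnet18 encoder documented below):

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
occ_net = OccupancyNetwork(device)
imgs = torch.rand(2, 3, 224, 224, device=device)  # conditioning input (assumed image batch)
pts = torch.rand(2, 2048, 3, device=device)       # query points (assumed B x N x 3 layout)
p_r = occ_net(pts, imgs, sample=True)             # occupancy predictions for the query points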

class Resnet18(c_dim, normalize=True, use_linear=True)[source]

ResNet-18 encoder network for image input.

Parameters
  • c_dim (int) – output dimension of the latent embedding

  • normalize (bool) – whether the input images should be normalized

  • use_linear (bool) – whether a final linear layer should be used

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

get_prior_z(device)[source]

Returns the prior distribution for latent code z.

Parameters
  • device (device) – pytorch device

normalize_imagenet(x)[source]

Normalize input images according to ImageNet standards.

Parameters
  • x (tensor) – input images

class GCN(in_features, out_features)[source]

Simple GCN layer, similar to https://arxiv.org/abs/1609.02907

Note

If you use this code, please cite the original paper in addition to Kaolin.

@article{kipf2016semi,
  title={Semi-Supervised Classification with Graph Convolutional Networks},
  author={Kipf, Thomas N and Welling, Max},
  journal={arXiv preprint arXiv:1609.02907},
  year={2016}
}
forward(input, adj)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class G_Res_Net(input_features, hidden=128, output_features=3)[source]

Pixel2Mesh architecture.

Note

If you use this code, please cite the original paper in addition to Kaolin.

@inProceedings{wang2018pixel2mesh,
  title={Pixel2Mesh: Generating 3D Mesh Models from Single RGB Images},
  author={Nanyang Wang and Yinda Zhang and Zhuwen Li and Yanwei Fu and Wei Liu and Yu-Gang Jiang},
  booktitle={ECCV},
  year={2018}
}
forward(features, adj)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class VGG(channels=4)[source]

Note

If you use this code, please cite the original paper in addition to Kaolin.

@InProceedings{Simonyan15,
  author       = "Karen Simonyan and Andrew Zisserman",
  title        = "Very Deep Convolutional Networks for Large-Scale Image Recognition",
  booktitle    = "International Conference on Learning Representations",
  year         = "2015",
}
forward(img)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class SimpleGCN(in_features, out_features, bias=True)[source]

A simple graph convolution layer, similar to the one defined in Kipf et al. https://arxiv.org/abs/1609.02907

Note

If you use this code, please cite the original paper in addition to Kaolin.

@article{kipf2016semi,
  title={Semi-Supervised Classification with Graph Convolutional Networks},
  author={Kipf, Thomas N and Welling, Max},
  journal={arXiv preprint arXiv:1609.02907},
  year={2016}
}
forward(input, adj)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
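A minimal single-graph sketch; the dense N x N adjacency format is an assumption for illustration and is not specified above.

gcn = SimpleGCN(in_features=3, out_features=64)
verts = torch.rand(100, 3)  # per-node input features
adj = torch.eye(100)        # placeholder dense adjacency (identity: no neighbors)
out = gcn(verts, adj)       # expected per-node features of size 64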

class VGG18[source]

Note

If you use this code, please cite the original paper in addition to Kaolin.

@InProceedings{Simonyan15,
  author       = "Karen Simonyan and Andrew Zisserman",
  title        = "Very Deep Convolutional Networks for Large-Scale Image Recognition",
  booktitle    = "International Conference on Learning Representations",
  year         = "2015",
}
forward(tensor)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.