kaolin.datasets.shapenet
class ShapeNet_Meshes(root: str, categories: list = ['chair'], train: bool = True, split: float = 0.7, no_progress: bool = False)

ShapeNet Dataset class for meshes.
- Parameters
root (str) – Path to the root directory of the ShapeNet dataset.
categories (list) – List of categories to load from ShapeNet. This list may contain synset ids, class label names (for ShapeNetCore classes), or a combination of both.
train (bool) – if True, return the training set; otherwise return the test set
split (float) – fraction of the dataset to use for training (between 0 and 1)
no_progress (bool) – if True, disables progress bar
- Returns
dict: {
    attributes: {name: str, path: str, synset: str, label: str},
    data: {vertices: torch.Tensor, faces: torch.Tensor}
}
Example
>>> meshes = ShapeNet_Meshes(root='../data/ShapeNet/')
>>> obj = next(iter(meshes))
>>> obj['data']['vertices'].shape
torch.Size([2133, 3])
>>> obj['data']['faces'].shape
torch.Size([1910, 3])
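Because every object has its own vertex and face counts, the default DataLoader collate cannot stack a batch of ShapeNet_Meshes items into single tensors. Below is a minimal sketch of a list-based collate_fn, assuming the item layout shown above; the function name and the toy items are illustrative, not part of kaolin.

```python
import torch

def list_collate(batch):
    """Collate dataset items whose tensors differ in size per object:
    keep 'vertices' and 'faces' as lists of tensors instead of stacking."""
    out = {'attributes': [item['attributes'] for item in batch], 'data': {}}
    for key in batch[0]['data']:
        out['data'][key] = [item['data'][key] for item in batch]
    return out

# Toy items shaped like ShapeNet_Meshes outputs (sizes differ per object).
items = [
    {'attributes': {'name': 'a'}, 'data': {'vertices': torch.zeros(10, 3),
                                           'faces': torch.zeros(8, 3)}},
    {'attributes': {'name': 'b'}, 'data': {'vertices': torch.zeros(12, 3),
                                           'faces': torch.zeros(9, 3)}},
]
batch = list_collate(items)
```

Passing `collate_fn=list_collate` to a DataLoader then yields lists of per-object tensors rather than one stacked batch tensor.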
class ShapeNet_Images(root: str, categories: list = ['chair'], train: bool = True, split: float = 0.7, views: int = 24, transform=None)

ShapeNet Dataset class for images.
- Parameters
root (str) – Path to the root directory of the ShapeNet dataset.
categories (list) – List of categories to load from ShapeNet. This list may contain synset ids, class label names (for ShapeNetCore classes), or a combination of both.
train (bool) – if True, use the training set; otherwise use the test set
split (float) – fraction of the dataset to use for training (between 0 and 1)
views (int) – number of viewpoints per object to load
transform (torchvision.transforms) – transformation to apply to images
no_progress (bool) – if True, disables progress bar
- Returns
dict: {
    attributes: {name: str, path: str, synset: str, label: str},
    data: {imgs: torch.Tensor},
    params: {
        cam_mat: torch.Tensor,
        cam_pos: torch.Tensor,
        azi: float,
        elevation: float,
        distance: float
    }
}
Example
>>> from torch.utils.data import DataLoader
>>> images = ShapeNet_Images(root='../data/ShapeNetImages')
>>> train_loader = DataLoader(images, batch_size=10, shuffle=True, num_workers=8)
>>> obj = next(iter(train_loader))
>>> image = obj['data']['imgs']
>>> image.shape
torch.Size([10, 4, 137, 137])
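The example shape [10, 4, 137, 137] suggests the renders carry an alpha channel. If so (the RGBA channel order here is an assumption, not stated by the docs above), compositing over a solid background gives plain RGB input for standard networks:

```python
import torch

def rgba_to_rgb(imgs, background=1.0):
    """Composite a batch of assumed-RGBA renders (N, 4, H, W) over a
    solid background colour, returning (N, 3, H, W) RGB tensors."""
    rgb, alpha = imgs[:, :3], imgs[:, 3:4]
    # Wherever alpha is 0 the background value shows through.
    return rgb * alpha + background * (1.0 - alpha)

imgs = torch.rand(10, 4, 137, 137)  # stand-in for obj['data']['imgs']
rgb = rgba_to_rgb(imgs)
```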
class ShapeNet_Voxels(root: str, cache_dir: str, categories: list = ['chair'], train: bool = True, split: float = 0.7, resolutions=[128, 32], no_progress: bool = False)

ShapeNet Dataset class for voxels.
- Parameters
root (str) – Path to the root directory of the ShapeNet dataset.
cache_dir (str) – Path to save cached converted representations.
categories (list) – List of categories to load from ShapeNet. This list may contain synset ids, class label names (for ShapeNetCore classes), or a combination of both.
train (bool) – if True, return the training set; otherwise return the test set
split (float) – fraction of the dataset to use for training (between 0 and 1)
resolutions (list) – list of resolutions to be returned
no_progress (bool) – if True, disables progress bar
- Returns
dict: {
    attributes: {name: str, synset: str, label: str},
    data: {[res]: torch.Tensor}
}
Example
>>> from torch.utils.data import DataLoader
>>> voxels = ShapeNet_Voxels(root='../data/ShapeNet/', cache_dir='cache/')
>>> train_loader = DataLoader(voxels, batch_size=10, shuffle=True, num_workers=8)
>>> obj = next(iter(train_loader))
>>> obj['data']['128'].shape
torch.Size([10, 128, 128, 128])
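The resolutions parameter returns several grids per object. A coarse grid can be thought of as a max-pooled version of the fine one: a coarse voxel is occupied whenever any voxel inside it is. A sketch of that relation (kaolin's actual conversion and caching pipeline may differ):

```python
import torch
import torch.nn.functional as F

def downsample_voxels(vox, factor):
    """Downsample dense occupancy grids (N, D, D, D) by max pooling, so a
    coarse voxel is occupied whenever any voxel inside it is occupied."""
    return F.max_pool3d(vox.unsqueeze(1), kernel_size=factor).squeeze(1)

vox128 = (torch.rand(2, 128, 128, 128) > 0.5).float()  # stand-in for obj['data']['128']
vox32 = downsample_voxels(vox128, factor=4)
```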
class ShapeNet_Surface_Meshes(root: str, cache_dir: str, categories: list = ['chair'], train: bool = True, split: float = 0.7, resolution: int = 100, smoothing_iterations: int = 3, mode='Tri', no_progress: bool = False)

ShapeNet Dataset class for watertight meshes with only the surface preserved.
- Parameters
root (str) – Path to the root directory of the ShapeNet dataset.
cache_dir (str) – Path to save cached converted representations.
categories (list) – List of categories to load from ShapeNet. This list may contain synset ids, class label names (for ShapeNetCore classes), or a combination of both.
train (bool) – if True, return the training set; otherwise return the test set
split (float) – fraction of the dataset to use for training (between 0 and 1)
resolution (int) – resolution of the voxel object to use when converting
smoothing_iterations (int) – number of applications of Laplacian smoothing
no_progress (bool) – if True, disables progress bar
- Returns
dict: {
    attributes: {name: str, synset: str, label: str},
    data: {vertices: torch.Tensor, faces: torch.Tensor}
}
Example
>>> surface_meshes = ShapeNet_Surface_Meshes(root='../data/ShapeNet', cache_dir='cache/')
>>> obj = next(iter(surface_meshes))
>>> obj['data']['vertices'].shape
torch.Size([11617, 3])
>>> obj['data']['faces'].shape
torch.Size([23246, 3])
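The smoothing_iterations parameter refers to Laplacian smoothing of the reconstructed surface. One common form, sketched below, blends each vertex toward the mean of its neighbours; kaolin's exact smoothing operator is not specified here, so treat this as an illustration of the idea only.

```python
import torch

def laplacian_smooth(vertices, faces, iterations=3, lam=0.5):
    """Simple Laplacian smoothing: blend each vertex toward the mean of
    its neighbours. faces is an (F, 3) long tensor of vertex indices.
    Edges shared by two triangles are counted twice, giving those
    neighbours double weight in the mean (acceptable for a sketch)."""
    v = vertices.clone()
    edges = torch.cat([faces[:, [0, 1]], faces[:, [1, 2]], faces[:, [2, 0]]])
    row = torch.cat([edges[:, 0], edges[:, 1]])
    col = torch.cat([edges[:, 1], edges[:, 0]])
    for _ in range(iterations):
        neighbour_sum = torch.zeros_like(v).index_add_(0, row, v[col])
        degree = torch.zeros(v.shape[0]).index_add_(0, row, torch.ones(row.shape[0]))
        v = (1 - lam) * v + lam * neighbour_sum / degree.clamp(min=1).unsqueeze(1)
    return v

verts = torch.tensor([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
faces = torch.tensor([[0, 1, 2]])
smoothed = laplacian_smooth(verts, faces, iterations=1, lam=1.0)
```

With lam=1.0 each vertex jumps all the way to its neighbour mean; smaller values give gentler smoothing per iteration.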
class ShapeNet_Points(root: str, cache_dir: str, categories: list = ['chair'], train: bool = True, split: float = 0.7, num_points: int = 5000, smoothing_iterations=3, surface=True, resolution=100, normals=True, no_progress: bool = False)

ShapeNet Dataset class for point clouds sampled from each object.
- Parameters
root (str) – Path to the root directory of the ShapeNet dataset.
cache_dir (str) – Path to save cached converted representations.
categories (list) – List of categories to load from ShapeNet. This list may contain synset ids, class label names (for ShapeNetCore classes), or a combination of both.
train (bool) – if True, return the training set; otherwise return the test set
split (float) – fraction of the dataset to use for training (between 0 and 1)
num_points (int) – number of points sampled on each mesh
smoothing_iterations (int) – number of applications of Laplacian smoothing
surface (bool) – if True, use only the surface of the original mesh
resolution (int) – resolution of the voxel object to use when converting
normals (bool) – if True, also return the normals of the sampled points
no_progress (bool) – if True, disables progress bar
- Returns
dict: {
    attributes: {name: str, synset: str, label: str},
    data: {points: torch.Tensor, normals: torch.Tensor}
}
Example
>>> from torch.utils.data import DataLoader
>>> points = ShapeNet_Points(root='../data/ShapeNet', cache_dir='cache/')
>>> train_loader = DataLoader(points, batch_size=10, shuffle=True, num_workers=8)
>>> obj = next(iter(train_loader))
>>> obj['data']['points'].shape
torch.Size([10, 5000, 3])
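A point cloud like this is typically produced by area-weighted sampling of the mesh surface: pick a triangle with probability proportional to its area, then pick a uniform barycentric point inside it. A sketch of that standard technique (not necessarily the exact sampler kaolin uses):

```python
import torch

def sample_points(vertices, faces, num_points=5000):
    """Sample points uniformly on a triangle mesh: choose triangles with
    probability proportional to area, then uniform barycentric coords."""
    v0 = vertices[faces[:, 0]]
    v1 = vertices[faces[:, 1]]
    v2 = vertices[faces[:, 2]]
    areas = 0.5 * torch.norm(torch.cross(v1 - v0, v2 - v0, dim=1), dim=1)
    idx = torch.multinomial(areas, num_points, replacement=True)
    u = torch.rand(num_points, 1)
    w = torch.rand(num_points, 1)
    # Reflect samples that land outside the triangle back inside it.
    mask = (u + w) > 1
    u[mask], w[mask] = 1 - u[mask], 1 - w[mask]
    return v0[idx] + u * (v1[idx] - v0[idx]) + w * (v2[idx] - v0[idx])

verts = torch.tensor([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
faces = torch.tensor([[0, 1, 2]])
pts = sample_points(verts, faces, num_points=100)
```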
class ShapeNet_SDF_Points(root: str, cache_dir: str, categories: list = ['chair'], train: bool = True, split: float = 0.7, resolution: int = 100, num_points: int = 5000, occ: bool = False, smoothing_iterations: int = 3, sample_box=True, no_progress: bool = False)

ShapeNet Dataset class for signed distance functions.
- Parameters
root (str) – Path to the root directory of the ShapeNet dataset.
cache_dir (str) – Path to save cached converted representations.
categories (list) – List of categories to load from ShapeNet. This list may contain synset ids, class label names (for ShapeNetCore classes), or a combination of both.
train (bool) – if True, return the training set; otherwise return the test set
split (float) – fraction of the dataset to use for training (between 0 and 1)
resolution (int) – resolution of the voxel object to use when converting
num_points (int) – number of SDF points sampled on each mesh
occ (bool) – if True, return occupancy values instead of signed distances
smoothing_iterations (int) – number of applications of Laplacian smoothing
sample_box (bool) – if True, sample only from within the mesh extents
no_progress (bool) – if True, disables progress bar
- Returns
dict: {
    attributes: {name: str, synset: str, label: str},
    data: {
        Union['occ_values', 'sdf_distances']: torch.Tensor,
        Union['occ_points', 'sdf_points']: torch.Tensor
    }
}
Example
>>> from torch.utils.data import DataLoader
>>> sdf_points = ShapeNet_SDF_Points(root='../data/ShapeNet', cache_dir='cache/')
>>> train_loader = DataLoader(sdf_points, batch_size=10, shuffle=True, num_workers=8)
>>> obj = next(iter(train_loader))
>>> obj['data']['sdf_points'].shape
torch.Size([10, 5000, 3])
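With occ=True the dataset returns occupancy values instead of distances. Assuming the usual sign convention (non-positive distance means inside the surface; the docs above do not state it), the two are related by a simple threshold:

```python
import torch

def sdf_to_occupancy(distances):
    """Threshold signed distances into binary occupancy, assuming the
    common convention that non-positive distance means 'inside'."""
    return (distances <= 0).float()

dists = torch.tensor([-0.2, 0.0, 0.3])
occ = sdf_to_occupancy(dists)
```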
class ShapeNet_Tags(dataset, tag_aug=True)

ShapeNet Dataset class for tags.
- Parameters
dataset (kal.dataloader.shapenet.ShapeNet) – one of the ShapeNet datasets
download (bool) – if True, load the taxonomy of objects if it is not already loaded
transform – transformation to apply to tags
- Returns
Dictionary with the encoded input tags under the key 'tag_enc'
- Return type
dict
Example
>>> from torch.utils.data import DataLoader
>>> meshes = ShapeNet_Meshes(root='../data/ShapeNet/')
>>> tags = ShapeNet_Tags(meshes)
>>> train_loader = DataLoader(tags, batch_size=10, shuffle=True, num_workers=8)
>>> obj = next(iter(train_loader))
>>> obj['data']['tag_enc'].shape
torch.Size([10, N])
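The exact encoding behind 'tag_enc' is not documented above; a multi-hot encoding over a tag vocabulary is one plausible form, in which case N in the shape equals the vocabulary size. A sketch with an illustrative vocabulary (the function and vocabulary are hypothetical, not kaolin's):

```python
import torch

def encode_tags(tags, vocab):
    """Multi-hot encode a list of tag strings against a fixed vocabulary;
    tags outside the vocabulary are ignored."""
    index = {tag: i for i, tag in enumerate(vocab)}
    enc = torch.zeros(len(vocab))
    for tag in tags:
        if tag in index:
            enc[index[tag]] = 1.0
    return enc

vocab = ['chair', 'armchair', 'seat', 'sofa']  # illustrative vocabulary
enc = encode_tags(['chair', 'seat'], vocab)
```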
class ShapeNet_Combination(datasets)

ShapeNet Dataset class for combinations of representations.
- Parameters
datasets (list) – List of datasets to be combined
- Returns
Dictionary with keys determined by the passed datasets
- Return type
dict
Example
>>> from torch.utils.data import DataLoader
>>> voxels = ShapeNet_Voxels(root='../data/ShapeNet', cache_dir='cache/')
>>> images = ShapeNet_Images(root='../data/ShapeNetImages')
>>> points = ShapeNet_Points(root='../data/ShapeNet', cache_dir='cache/')
>>> dataset = ShapeNet_Combination([voxels, images, points])
>>> train_loader = DataLoader(dataset, batch_size=10, shuffle=True, num_workers=8)
>>> obj = next(iter(train_loader))
>>> for key in obj['data']:
...     print(key)
...
params
128
32
imgs
cam_mat
cam_pos
azi
elevation
distance
points
normals
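The printed keys suggest that ShapeNet_Combination merges the 'data' dicts of its member datasets for each object. A sketch of that merge behaviour (an assumption based on the example output, not the class's actual code; the toy items are illustrative):

```python
def combine_items(items):
    """Merge the per-representation 'data' dicts for a single object;
    attributes are taken from the first item."""
    out = {'attributes': items[0]['attributes'], 'data': {}}
    for item in items:
        out['data'].update(item['data'])
    return out

# Toy items standing in for one object's voxel and point entries.
voxel_item = {'attributes': {'name': 'a'}, 'data': {'128': 'vox128', '32': 'vox32'}}
points_item = {'attributes': {'name': 'a'}, 'data': {'points': 'pts', 'normals': 'nrm'}}
merged = combine_items([voxel_item, points_item])
```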