Performance issue in the definition of render_splat, tensorflow_graphics/rendering/tests/splat_test.py

Hello, I found a performance issue in the definition of render_splat in tensorflow_graphics/rendering/tests/splat_test.py: the tf.reduce_mean calls on lines 160 and 162 are recomputed repeatedly during program execution. I think they should be computed once, before the loop in render_splat.

The same issue exists with tf.gather.

Looking forward to your reply. By the way, I would be glad to create a PR to fix it if you are too busy.
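For illustration, the hoisting pattern I have in mind looks like this (a minimal NumPy sketch with made-up names, since I'm not reproducing the actual render_splat body here):

```python
import numpy as np

# Toy stand-ins for the tensors reduced inside render_splat's loop.
colors = np.random.rand(4, 8, 8, 3)
num_layers = 3

# Before: the same loop-invariant reduction is recomputed on every iteration.
before = [colors.mean(axis=-1) * layer for layer in range(num_layers)]

# After: hoist the reduction out of the loop and reuse the result.
mean_colors = colors.mean(axis=-1)  # computed once
after = [mean_colors * layer for layer in range(num_layers)]

assert all(np.allclose(a, b) for a, b in zip(before, after))
```

The same hoisting applies to the tf.gather calls: any op whose inputs do not change across iterations can be lifted out of the loop.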



The security certificate on the Stanford side has expired, so auto-download of the resources fails.

A major hack to get it working in the meantime:

  1. Download https://shapenet.cs.stanford.edu/media/modelnet40_ply_hdf5_2048.zip manually.
  2. In the download folder, start a server with python3 -m http.server 8080.
  3. Modify _URL in tensorflow_graphics/datasets/modelnet40/modelnet40.py to be _URL = 'http://localhost:8080/modelnet40_ply_hdf5_2048.zip'.
  4. Allow the dataset to be built by executing:
from tensorflow_graphics.datasets.modelnet40 import ModelNet40
ds_train, info = ModelNet40.load(split='train', with_info=True)
  5. After this, ensure you do not re-download:
data_dir = '~/tensorflow_datasets'
ds_train, info = ModelNet40.load(split='train', with_info=True, data_dir=data_dir, download=False)

Documentation of num_levels in tfg.image.pyramid.downsample


The documentation of tfg.image.pyramid.downsample states that it returns "A list containing num_levels tensors", but it actually returns a list containing num_levels + 1 tensors.

import tensorflow as tf
import tensorflow_graphics as tfg

x = tf.zeros((1, 64, 64, 1))
pyramid = tfg.image.pyramid.downsample(x, num_levels=3)
print(len(pyramid))  # 4, not 3


Partnet segmentation experiments

Hi, I'm now trying to evaluate segmentation performance on PartNet, as in Figure 11 of the paper.

However, I'm confused about the details. For example, how is the "accuracy" of the points calculated? Does it depend on the labels from BAE?
When I downloaded the PartNet dataset, there were no labels like back, arm, base, seat, etc.
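For reference, the naive per-point accuracy I would compute is just the fraction of points whose predicted label matches the ground truth (a toy sketch with made-up labels):

```python
import numpy as np

# Toy per-point labels; the 0/1/2 mapping to part names is made up here.
ground_truth = np.array([0, 1, 1, 2])
predictions = np.array([0, 1, 2, 2])

accuracy = float((predictions == ground_truth).mean())
print(accuracy)  # 0.75
```

I'm not sure whether this is what Figure 11 reports, or whether some label remapping from BAE is involved.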

Could you explain how to get the numbers, or could you upload the experiment code for PartNet?

Thank you for reading! I'm looking forward to your kind answer.


`tensorflow_graphics.shape.check_static(...)` throwing error in TF2

I was writing a data augmentation layer for a PointNet implementation and ran into what appears to be a bug in tensorflow_graphics.shape.check_static(...), as seen on this line.

Offending layer:

import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Layer
from tensorflow_graphics.geometry.transformation.rotation_matrix_3d import from_euler

class RandomRot(Layer):
  def __init__(self):
    super(RandomRot, self).__init__()

  def build(self, input_shape):
    # Shape of the Euler-angle vector: one angle per spatial axis.
    self.s = tf.constant([input_shape[-1]])

  def call(self, inputs, training=None):
    if not training:
      return inputs
    # (The uniform call was truncated in the original report; restored here
    # as a plausible completion: random Euler angles in [0, 2*pi).)
    r = tf.random.uniform(self.s, maxval=2.0 * np.pi)
    return tf.linalg.matmul(inputs, from_euler(r))

Error message:

AttributeError: in user code:

    <ipython-input-135-d11754641da6>:81 call  *
        self.x = self.r(self.x,training)
    <ipython-input-130-07bfe7ac5ab9>:25 call  *
        return tf.linalg.matmul(inputs,from_euler(r))
    /usr/local/lib/python3.6/dist-packages/tensorflow_graphics/geometry/transformation/rotation_matrix_3d.py:201 from_euler  *
    /usr/local/lib/python3.6/dist-packages/tensorflow_graphics/util/shape.py:206 check_static  *
        if _get_dim(tensor, axis) != value:
    /usr/local/lib/python3.6/dist-packages/tensorflow_graphics/util/shape.py:135 _get_dim  *
        return tensor.shape[axis].value

    AttributeError: 'int' object has no attribute 'value'

It appears that check_static expects each element of .shape to be a Dimension object with a .value attribute, but in TF2 they're just ints. If I comment out the check_static call in from_euler, the function works fine. Strangely enough, it works fine for tensors in eager execution, and only throws errors when using Dataset objects with graph compilation.


How to render a mesh to a silhouette?

I'm interested in using a rendered silhouette or texture to compute a loss for 3D reconstruction, but it seems that tf-graphics doesn't have these functions. Can you please give any advice, or do you have any plan to develop this? Thanks.


Simple render

Is there a way to simply render a mesh to an image given field of view and other scene parameters?


Source install instructions outdated

The current latest stable release (v1.0.0) is too outdated and has bugs that are fixed on the master branch.

I'm attempting to install from source, but the instructions are possibly outdated.
I tried:

python setup.py sdist bdist_wheel --universal
pip install --upgrade dist/*.whl

which did not give any errors, and I can import the library and get tfg.__version__ == '2021.6.7'; however, most modules are missing. For example, tfg.geometry raises

AttributeError: module 'tensorflow_graphics' has no attribute 'geometry'

Can you update the instructions, or point out the obvious step I'm missing?


Mesh segmentation dataio

The mesh segmentation dataio.py file defaults to shape [V, 3] for vertices, but it could be generalized to any number of per-vertex features.

I'm just proposing this so that someone can generalize it. Or I can put in a PR.
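For illustration, the generalization I'm proposing is roughly this (a sketch with a hypothetical helper name, not the actual dataio.py code): parse vertices as [V, C] with a configurable channel count instead of hard-coding 3.

```python
import numpy as np

def parse_vertices(flat_features, num_channels=3):
  """Reshapes a flat per-vertex feature buffer into [V, num_channels]."""
  flat_features = np.asarray(flat_features, dtype=np.float32)
  return flat_features.reshape(-1, num_channels)

positions = parse_vertices(np.arange(6.0))  # shape [2, 3], current default
# Positions plus normals packed together, as one example of C != 3.
positions_and_normals = parse_vertices(np.arange(12.0), num_channels=6)  # shape [2, 6]
```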


Padding mesh data

I am trying to train the mesh segmentation code here in TensorFlow Graphics with my own dataset. My dataset has meshes with varying vertex and edge counts. I zero-padded the vertices and faces arrays for each mesh so that all meshes in my dataset have the same number of vertices and faces.

However, I am running into these issues:

  1. I am not able to view the mesh from the Colab IPython notebook. (My vertices are between 0 and 1, so I know I can view them without padding.)
  2. Since the padding for faces does not allow negative values (PyTorch3D sets the padding value for faces to -1, while here I believe I have to pad with 0), I had to pad the faces with 0, which obviously points to the first vertex in the list.

Can someone clearly point out a way to pad custom mesh datasets and feed them into the network? An example would be very much appreciated. Thanks in advance.

NOTE: The network trains with this zero-padded dataset, but it isn't really learning anything.
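For what it's worth, here is the kind of padding scheme I ended up sketching (a hypothetical helper, not from the library): vertices are zero-padded, and padded faces are degenerate triangles collapsed onto a single padded vertex rather than vertex 0, which avoids the problem described in point 2.

```python
import numpy as np

def pad_mesh(vertices, faces, max_vertices, max_faces):
  """Pads a mesh to fixed sizes [max_vertices, C] and [max_faces, 3].

  Padded faces are degenerate triangles that all reference the last vertex
  slot, which is itself padding (assuming max_vertices > len(vertices)),
  so they never point at a real vertex.
  """
  padded_vertices = np.zeros((max_vertices, vertices.shape[1]), vertices.dtype)
  padded_vertices[:len(vertices)] = vertices
  pad_index = max_vertices - 1  # index of the all-zero padded vertex
  padded_faces = np.full((max_faces, 3), pad_index, dtype=faces.dtype)
  padded_faces[:len(faces)] = faces
  return padded_vertices, padded_faces

verts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]], dtype=np.float32)
tris = np.array([[0, 1, 2]], dtype=np.int32)
padded_verts, padded_tris = pad_mesh(verts, tris, max_vertices=5, max_faces=4)
```

I don't know whether the segmentation network here masks out padded elements internally; if it doesn't, that could explain why the network isn't learning.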