
Conversation


@spyke7 spyke7 commented Dec 27, 2025

Hi @orbeckst
I have added OpenVDB.py inside gridData, which simply exports files in .vdb format. I have also added test_vdb.py inside tests, and it passes successfully.
fix #141

Required libraries:
openvdb

  • conda install -c conda-forge openvdb

There are many things that still need to be updated, like the docs, but I have provided just the file and test so that you can review it and I can fix any problems. Please let me know if anything needs to be changed or updated.


codecov bot commented Dec 27, 2025

Codecov Report

❌ Patch coverage is 96.00000% with 2 lines in your changes missing coverage. Please review.
✅ Project coverage is 88.64%. Comparing base (b29c1f4) to head (97aef4b).

Files with missing lines Patch % Lines
gridData/core.py 71.42% 1 Missing and 1 partial ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##           master     #148      +/-   ##
==========================================
+ Coverage   88.20%   88.64%   +0.43%     
==========================================
  Files           5        6       +1     
  Lines         814      863      +49     
  Branches      107      115       +8     
==========================================
+ Hits          718      765      +47     
- Misses         56       57       +1     
- Partials       40       41       +1     




spyke7 commented Dec 27, 2025

@orbeckst, please review the OpenVDB.py file. After that, I will add some more tests covering all the missing parts.


orbeckst commented Dec 27, 2025 via email


@orbeckst orbeckst left a comment


Thank you for your contribution. Before going further, can you please try your own code and demonstrate that it works? For instance, take some of the bundled test files such as 1jzv.ccp4 or nAChR_M2_water.plt, write it to OpenVDB, load it in blender, and show an image of the rendered density?

Once we know that it's working in principle, we'll need proper tests (you can look at PR #147 for a good example of minimal testing for writing functionality).

CHANGELOG Outdated
Comment on lines 24 to 26
Fixes

* Adding openVDB formats (Issue #141)
Member


Not a fix but an Enhancement – put it into the existing 1.1.0 section and add your name there.

Author


In the CHANGELOG, this PR and issue are in the 1.1.0 release, so should I add my name to the 1.1.0 release, or remove those lines and put them in a new section?

Member


Yes, now move it to the new section above since we released 1.1.0.

Comment on lines 183 to 188
for i in range(self.grid.shape[0]):
    for j in range(self.grid.shape[1]):
        for k in range(self.grid.shape[2]):
            value = float(self.grid[i, j, k])
            if abs(value) > threshold:
                accessor.setValueOn((i, j, k), value)
Member


This looks really slow — iterating over a grid explicitly. For a start, you can find all cells above a threshold with numpy operations (np.abs(g) > threshold) and then ideally use it in a vectorized form to set the accessor.
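A minimal sketch of what the vectorized suggestion could look like. The numpy selection is straightforward; the `copyFromArray` call at the end assumes pyopenvdb/openvdb's tolerance semantics (voxels whose value is within `tolerance` of the background stay inactive), which may remove the need for an explicit threshold loop entirely. The example grid and threshold are hypothetical stand-ins for `self.grid` and `threshold`.

```python
import numpy as np

# Hypothetical example data standing in for self.grid and threshold.
rng = np.random.default_rng(0)
grid = rng.normal(size=(4, 4, 4)).astype(np.float32)
threshold = 0.5

# Find all voxels above the threshold with numpy instead of a triple loop.
mask = np.abs(grid) > threshold
active_ijk = np.argwhere(mask)   # (N, 3) array of index triples
active_values = grid[mask]       # corresponding voxel values
print(len(active_ijk), "active voxels out of", grid.size)

# With an openvdb binding available, copyFromArray can replace the loop:
# voxels within `tolerance` of the background are left inactive.
try:
    import openvdb as vdb  # or pyopenvdb, depending on the installed binding
    vdb_grid = vdb.FloatGrid()
    vdb_grid.copyFromArray(grid, ijk=(0, 0, 0), tolerance=threshold)
except ImportError:
    pass  # openvdb not installed; the numpy selection above still applies
```

Even if a per-voxel `setValueOn` loop is unavoidable for some reason, iterating only over `active_ijk` touches far fewer voxels than scanning the full grid.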

@orbeckst orbeckst self-assigned this Jan 9, 2026
@spyke7 spyke7 requested a review from orbeckst January 18, 2026 06:48

spyke7 commented Jan 18, 2026

Fixed the CHANGELOG and OpenVDB.py. I didn't get time to work on the Blender part due to exams. I will surely try to do it!


spyke7 commented Jan 18, 2026

[Screenshots (125), (126), (128), (129)]

The first two are for nAChR_M2_water.vdb and the last two are for 1jzv.vdb.
Also, in OpenVDB.py the function should be transform.preTranslate, which I will fix with the new tests.
Can you please confirm that these renderings are correct?
I can provide the .vdb files as well here.

@orbeckst

Good that you're able to load something into Blender. At first glance I don't recognize what I'd expect, but this may depend on how you render in Blender. As I already said on Discord: try to establish yourself what "correct" means. Load the original data in a program where you can reliably look at it. ChimeraX is probably the best for looking at densities; it can definitely read DX.

Btw, the M2 density should look similar to the blue "blobs" on the cover of https://sbcb.bioch.ox.ac.uk/users/oliver/download/Thesis/OB_thesis_2sided.pdf


spyke7 commented Jan 19, 2026

[Screenshots (131), (132)]

The first one is for 1jzv.vdb and the second for nAChR_M2_water.vdb (shading/coloring not done).
I think the .vdb files generated by OpenVDB.py are now rendering correctly in Blender.

Can I proceed with the tests part?

@BradyAJohnston

Mentioned in the Discord but also bringing it up here: in your current examples (most obvious with the pore) the axis is flipped, so that X is "up" compared to atomic coordinates, which would have Z as up.


spyke7 commented Jan 19, 2026

Mentioned in the Discord but also bringing up here: In your current examples (most obvious with the pore) is that the axis is flipped so that X is "up" compared to atomic coordinates which would have Z as up.

Thank you for the update! Will try to fix this.


spyke7 commented Jan 19, 2026

[Screenshots (133), (136)]

I think this fixes the axis.

@BradyAJohnston

Ideally we would see this alongside the atoms or density from MN as well - to double check alignment because you might also need to flip one of the X or Y axes.

@BradyAJohnston

The scales might be different (larger or smaller by factors of 10), but you can just scale inside of Blender by that amount to align them; we want to be double-checking alignment and axes.


spyke7 commented Jan 20, 2026

Hi @BradyAJohnston
[Screenshots (138), (139)]

First of all, I added the MolecularNodes add-on as given in https://github.com/BradyAJohnston/MolecularNodes and imported 1jzv.pdb. After that I imported the .vdb file; there was a difference in size between the two, so I made the .pdb bigger. The centers of both are the same, and I didn't flip any of the axes in the screenshots provided.

I wrote a small Blender Python script to compare the bounding boxes of the pdb and vdb objects, to verify centroids, extents and axis alignment:

import bpy
from mathutils import Vector

def bbox_world(obj):
    bbox = [obj.matrix_world @ Vector(c) for c in obj.bound_box]
    mn = Vector((min(p[i] for p in bbox) for i in range(3)))
    mx = Vector((max(p[i] for p in bbox) for i in range(3)))
    return mn, mx

def centroid_world(obj):
    mn, mx = bbox_world(obj)
    return (mn + mx) / 2.0

def size_world(obj):
    mn, mx = bbox_world(obj)
    return mx - mn

pdb = bpy.data.objects.get("1jzv.001")
vdb = bpy.data.objects.get("1jzv")

print("pdb centroid:", centroid_world(pdb))
print("pdb size:", size_world(pdb))
print("vdb centroid:", centroid_world(vdb))
print("vdb size:", size_world(vdb))

Output:
pdb centroid: <Vector (7.6985, 23.7885, 76.0560)>
pdb size: <Vector (33.1410, 45.4170, 29.3960)>
vdb centroid: <Vector (8.7238, 23.4452, 76.7628)>
vdb size: <Vector (43.6190, 52.3429, 40.3425)>

The centroids are almost the same, I guess...
The data seems to be correctly aligned.

@BradyAJohnston BradyAJohnston self-assigned this Jan 20, 2026
@BradyAJohnston

@spyke7 It's still not 100% clear from your screenshots - can you import the pore instead, as that is clearer? And when you take a screenshot it would be more helpful to have the imported density in the centre of the screen rather than mostly empty space.

@PardhavMaradani

Looks like you are attempting a standalone export to .vdb files from GridDataFormats. (If your end use case is to use this only within Blender, I'd strongly recommend using MolecularNodes to import various grid formats, as it already uses GridDataFormats internally and provides a lot of cool features like varying ISO values, different colors for positive and negative ISO values, slicing along all three major axes, showing contours, centering, inverting etc - both from GUI and API.)

From a quick scan of the code, you seem to want to support both pyopenvdb (the older one) and openvdb (the newer one) - note that there are some minor differences to take into account between them. You can take a look at the grid_to_vdb method from an earlier version in MN that shows the differences and handles the export to .vdb within MolecularNodes. Hope this helps. Thanks

@BradyAJohnston

If this functionality can be added directly to GDF then we can also take advantage of that in MN going forwards.

@PardhavMaradani

If this functionality can be added directly to GDF then we can also take advantage of that in MN going forwards.

Agreed. In addition to exporting to .vdb format, we also add some additional metadata (currently, info about inversion, centered) that we later use. So as long as the metadata for Grids is carried over during export, we should probably be good. Thanks

@BradyAJohnston

In addition to exporting to .vdb format, we also add some additional metadata

This is a good point and something to consider as well. As far as I am aware Blender / MN (and other 3D animation packages) might be the only ones who use .vdb as a format rather than any scientific packages / pipelines.

If there is anything out there that does take .vdb then we might want to consider if any relevant metadata should be saved. We might want to standardise on relevant metadata entries (we could either re-use from MN or update inside of MN to more general ones) so that GDF interactions with .vdb attempt to approach some kind of standard. This might be a larger question outside of scope for a simple read / write, but certainly functionality to pass in custom metadata like we do in MN would be ideal.

@PardhavMaradani

Some thoughts:

  • I don't think the goal should be to have the OpenVDB exporter produce a .vdb directly that matches/aligns with what MN creates
    • GDF should be independent of MN and Blender
    • Blender has a Z-up axis orientation and MN uses a custom world scale and these aren't necessarily the same with other tools
    • Due to the above, the exporter should not perform any hardcoded transformations. Instead, any such transformations (like translating, scaling, etc) should be format-specific kwargs in the exporter
  • Ideally, both an import and export support go hand in hand
    • Having an import back from .vdb to write to other formats to visualize in other tools would ensure self validation
    • I only provided the MN reference images because that is the only one I am familiar with and I don't have experience with other tools. Apologies if I misled in any way
    • MN would also benefit from being able to import .vdb files directly similar to other density files
  • Exported .vdb files retaining custom metadata that GDF grids already supports (along with the ability to transform as described above) is a must for MN to be able to use this exporter
  • From a quick look at the code, it seems a lot more complex than what we have in MN. I am not too familiar with the licensing issues etc, so I will let Brady add any thoughts about this

@spyke7 spyke7 requested a review from orbeckst January 26, 2026 14:54

spyke7 commented Jan 26, 2026

  • GDF should be independent of MN and Blender
  • Blender has a Z-up axis orientation and MN uses a custom world scale and these aren't necessarily the same with other tools
  • Due to the above, the exporter should not perform any hardcoded transformations. Instead, any such transformations (like translating, scaling, etc) should be format-specific kwargs in the exporter

Yes, when I import the .vdb as exported by OpenVDB.py, it is different in scale from the one imported by the MN add-on. Also, I need to add the geometry node as I showed in the video, and change the threshold (ISO in this case) to make it look the same as with the MN add-on.


spyke7 commented Jan 26, 2026

Please also show screen shots that demonstrate that your code and the existing MN plugin produce the same output in blender.

If I can make a video showing everything about importing the .vdb as exported by OpenVDB.py, then importing via the MN add-on and bringing them to the same scale so that they overlap, that would be better.

@orbeckst

If a video is necessary then show it. Please also write out (in text) the individual steps that you are carrying out in the video.

(Videos just take a lot of time to watch and are often ambiguous. Text is much faster to digest and more succinct because the writer (hopefully) took the time to think about how to clearly convey the message.)

@orbeckst

@PardhavMaradani many thanks for your input:

I don't think the goal should be to have the OpenVDB exporter produce a .vdb directly that matches/aligns with what MN creates

  • GDF should be independent of MN and Blender

Yes, I agree, the goal here is to have something that works generically and would be easy for MN to adopt in order to reduce code-duplication.

  • Blender has a Z-up axis orientation and MN uses a custom world scale and these aren't necessarily the same with other tools
  • Due to the above, the exporter should not perform any hardcoded transformations. Instead, any such transformations (like translating, scaling, etc) should be format-specific kwargs in the exporter

I agree. Does the current PR hard-code any of these transformations?

Can you give an example of what this should look like?

Ideally, both an import and export support go hand in hand

  • Having an import back from .vdb to write to other formats to visualize in other tools would ensure self validation.

I'd be ok to have this PR only export and raise a new issue for reading VDB files, which looks very doable, given that @spyke7 is already using some of this openvdb functionality in the tests (which is good!)

MN would also benefit from being able to import .vdb files directly similar to other density files

That's good motivation. Maybe @spyke7 would be interested to work on it after this PR?

Output filename (should end in .vdb)

"""
self.grid=numpy.ascontiguousarray(self.grid, dtype=numpy.float32)
Member


If this needs to be done, do it in __init__. It's confusing to change attributes as a side-effect of writing.
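A sketch of what moving the conversion into `__init__` might look like. The class name and attribute names follow the PR's OpenVDB.py; the constructor signature and the rest of the class are assumptions, elided here.

```python
import numpy

class OpenVDBField:
    """Sketch only: constructor signature is assumed, rest of class elided."""

    def __init__(self, grid, origin=None, delta=None):
        # Normalize dtype and memory layout once at construction time,
        # instead of mutating self.grid as a side-effect of write().
        self.grid = numpy.ascontiguousarray(grid, dtype=numpy.float32)
        self.origin = origin
        self.delta = delta

field = OpenVDBField(numpy.ones((2, 2, 2), dtype=numpy.float64))
print(field.grid.dtype, field.grid.flags["C_CONTIGUOUS"])  # float32 True
```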

]

vdb_grid.background = 0.0
vdb_grid.transform = vdb.createLinearTransform(matrix)
Member


I assume that the transformation is required to make the VDB grid to have the correct origin and delta in general.

Or is this something specific to the MN blender use, @PardhavMaradani ?


Or is this something specific to the MN blender use, @PardhavMaradani ?

I addressed this in the comment below. Thanks

Member


I saw the tests (eg test_write_vdb_with_delta_matrix) that checked that reading the VDB file would reproduce the original delta and my understanding is that this works because of the transformations added here. I think it's quite important that we can roundtrip consistently so I would leave the transformations as they are as a default. (Correct me if I am wrong, please.)

If MN/Blender needs to scale/shift then we should make this possible on top of the default.

gridData/core.py Outdated
"""
if self.grid.ndim != 3:
    raise ValueError(
        "OpenVDB export requires a 3D grid, got {}D".format(self.grid.ndim))
Member


Still uses format.

Comment on lines 114 to 121
def test_write_vdb_nonuniform_spacing_warning(self, tmpdir):
    data = np.ones((3, 3, 3), dtype=np.float32)
    delta = np.array([0.5, 1.0, 1.5])
    g = Grid(data, origin=[0, 0, 0], delta=delta)

    outfile = str(tmpdir / "nonuniform.vdb")
    g.export(outfile)
    assert tmpdir.join("nonuniform.vdb").exists()
Member


What are we testing here?

Why does the name contain "warning"?

Author


This only checks that the file exists with a non-uniform delta.

Author


I think in the end I can check whether each axis has the correct spacing. I will update it.

Comment on lines 195 to 201
@pytest.mark.skipif(HAS_OPENVDB, reason="Testing import error handling")
def test_vdb_import_error():
    with pytest.raises(ImportError, match="pyopenvdb is required"):
        gridData.OpenVDB.OpenVDBField(
            np.ones((3, 3, 3)),
            origin=[0, 0, 0],
            delta=[1, 1, 1]
Member


You should be able to test the import handling when the test suite is being run. Look into mocking https://docs.python.org/3/library/unittest.mock.html (and there may also be something for pytest) – basically, make it so that just when this test function is run, the openvdb module is not imported.
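One way to do this (a sketch of the `sys.modules` trick, not the project's actual test): setting a module's `sys.modules` entry to `None` makes any subsequent `import` of it raise `ImportError`, whether or not the package is installed, and `mock.patch.dict` restores the original state when the block exits. Note that this only affects fresh imports, so the module under test must do its `import openvdb` while the patch is active.

```python
import sys
from unittest import mock

# While the patch is active, `import openvdb` fails even if openvdb is
# installed, so an ImportError branch can be exercised unconditionally.
with mock.patch.dict(sys.modules, {"openvdb": None, "pyopenvdb": None}):
    try:
        import openvdb  # noqa: F401
        raised = False
    except ImportError:
        raised = True

print(raised)  # True
```

pytest's `monkeypatch.setitem(sys.modules, "openvdb", None)` achieves the same thing with automatic cleanup per test.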

Author


I have not done this in the recent push. Will be doing it!

@PardhavMaradani

Does the current PR hard-code any of these transformations?

There is a hard-coded linear transformation in the current code, as you pointed out in a review comment above. It is unclear to me what it does - @spyke7, could you please help explain? (Maybe I'm reading this incorrectly; if it is a combined scale and translation matrix, shouldn't the translations be in the 4th column and not row?)

Can you give an example of what this should look like?

This exporter could have additional args (similar to DX) that control the export behavior. Something like:

def _export_vdb(
    self,
    filename,
    center: bool = False,
    scale: float = 1.0,
    metadata: dict = None,
    ...
):
    ...

The center param could control whether the grid is centered around origin or left untouched and the scale param to scale the grid (OpenVDB scaling, not GDF grid resampling). scale/preScale and translate/postTranslate in pyopenvdb/openvdb respectively can be used. A combined matrix can also be used with createLinearTransform for the respective cases to avoid library specific methods if that works.

Ideally, any grid metadata should be exported as OpenVDB metadata as well. It would also help to have this be specified at export time as well. All of the above are generic and useful for any tools. MN requires all of the above to use this exporter as a drop-in replacement.

I am not sure if the threshold param (currently part of the __init__, but not exposed in the export) is needed. Looks like it is used to set the tolerance value of copyFromArray method - maybe it should be renamed accordingly. If it is needed, it should be exposed in a similar way to the params above. Thanks


spyke7 commented Jan 27, 2026

There is a hard-coded linear transformation in the current code as you pointed out in a review comment above. It is unclear to me what it does - @spyke7 could you please help explain (maybe I'm reading this incorrectly, if it is a combined scale and translation matrix, shouldn't the translations be in the 4th column and not row?)

Yes. This is a row-major matrix (used in the code); the one you are describing is the column-major form, where the translation is in the 4th column.
Column-major matrices are used in OpenGL (I don't know much about this, but it is used in OpenGL); openvdb's createLinearTransform requires a row-major matrix. I previously checked the matrix separately: changing it to column-major, i.e. with the translation in the 4th column, made createLinearTransform raise ArithmeticError: Tried to initialize an affine transform from a non-affine 4x4 matrix.
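To illustrate the row-major convention being described (the delta and origin values here are made-up examples): with row vectors the mapping is `world = [i, j, k, 1] @ M`, so the translation sits in the fourth row, not the fourth column.

```python
import numpy as np

delta = np.array([1.0, 1.0, 1.0])    # example voxel spacing
origin = np.array([3.0, 4.0, 5.0])   # example grid origin

# Row-major affine matrix: scale on the diagonal, translation in the
# fourth ROW. The column-major (OpenGL-style) layout would put the
# translation in the fourth column instead.
matrix = np.array([
    [delta[0],  0.0,       0.0,       0.0],
    [0.0,       delta[1],  0.0,       0.0],
    [0.0,       0.0,       delta[2],  0.0],
    [origin[0], origin[1], origin[2], 1.0],
])

# With row vectors, a voxel index maps to world space as [i, j, k, 1] @ M.
world = np.array([2.0, 0.0, 0.0, 1.0]) @ matrix
print(world[:3])  # origin + 2 * delta along x -> [5. 4. 5.]
```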


spyke7 commented Jan 27, 2026

Maybe @spyke7 would be interested to work on it after this PR?

Sure!


spyke7 commented Jan 28, 2026

I am not sure if the threshold param (currently part of the __init__, but not exposed in the export) is needed. Looks like it is used to set the tolerance value of copyFromArray method - maybe it should be renamed accordingly. If it is needed, it should be exposed in a similar way to the params above. Thanks

Yes, threshold is used to set the tolerance.


spyke7 commented Jan 29, 2026

This exporter could have additional args (similar to DX) that control the export behavior. Something like:

def _export_vdb(
    self,
    filename,
    center: bool = False,
    scale: float = 1.0,
    metadata: dict = None,
    ...
):
    ...

The center param could control whether the grid is centered around origin or left untouched and the scale param to scale the grid (OpenVDB scaling, not GDF grid resampling). scale/preScale and translate/postTranslate in pyopenvdb/openvdb respectively can be used

So, for example, if I use g.export(str(output_file), scale=0.01), the export() function as present in core.py will reject scale=0.01. So I need to add **kwargs to export as well: def export(self, filename, file_format=None, type=None, typequote='"', **kwargs):

and inside the function change the call to exporter(filename, type=type, typequote=typequote, **kwargs).

If this can be done, then we can pass scale, tolerance and center. Should I do it, @orbeckst? The comments say export can process kwargs but is not required to do so...

@orbeckst

orbeckst commented Jan 29, 2026

Yes, sort of: you need to add additional explicit keyword arguments to the top-level export() method and then add the specific keywords to the _export_vdb() method; still keep the **kwargs as this will swallow all other keywords that are not relevant for vdb.
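A toy sketch of that keyword plumbing. The dispatch and signatures here are simplified stand-ins for illustration, not the actual core.py code; the argument names (`center`, `scale`, `metadata`) come from the discussion above.

```python
class Grid:
    """Minimal stand-in for gridData.core.Grid to show the keyword flow."""

    def export(self, filename, file_format=None, center=False, scale=1.0,
               metadata=None, **kwargs):
        # Explicit VDB keywords are forwarded by name; **kwargs swallows
        # any keywords that are irrelevant for the chosen format.
        if filename.endswith(".vdb"):
            return self._export_vdb(filename, center=center, scale=scale,
                                    metadata=metadata, **kwargs)
        raise ValueError("unsupported format: {}".format(filename))

    def _export_vdb(self, filename, center=False, scale=1.0, metadata=None,
                    **kwargs):
        # A real implementation would build and write the OpenVDB grid here.
        return {"filename": filename, "center": center,
                "scale": scale, "metadata": metadata or {}}

result = Grid().export("out.vdb", scale=0.01, center=True)
print(result["scale"], result["center"])  # 0.01 True
```

Callers can then write `g.export("out.vdb", scale=0.01)` while keywords meant for other formats pass through harmlessly.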

@orbeckst

See #149 (comment) for a discussion for why we want to have explicit keywords.


spyke7 commented Feb 2, 2026

@PardhavMaradani can you please explain the center variable more? I cannot understand what to do for this.

@PardhavMaradani

PardhavMaradani commented Feb 2, 2026

@PardhavMaradani can you please explain the center variable, more? As I cannot understand what to do for this..

The center param is to allow users to specify whether they want the imported volume object (in tools like Blender, etc) to be centered around the world origin or not. This is a world-space transform that determines the positioning of the volume object in 3D space. When True, the center of the entire volume (box) is at the origin - this is useful for visualization cases that involve only the grid. When False, the volume object is positioned as per its origin info in the grid - this is useful for cases where one needs to have the grid data align with a trajectory or molecule (like this example).

Here is an example of a density file (apbs.dx.gz) that is centered (left) and not (right):

[image: density-centered-vs-original]

Here is a front view of the above:

[image: density-centered-vs-original-fv]

Here is a snippet from the code I pointed out in a previous comment:

if center:
    offset = -np.array(grid.shape) * 0.5 * gobj.delta
else:
    offset = np.array(gobj.origin)

# apply transformations
vdb_grid.transform.preScale(np.array(gobj.delta) * world_scale)
vdb_grid.transform.postTranslate(offset * world_scale)

If center is enabled, we transform based on the size of the grid so that the box center is at the origin. If not, we transform it based on the origin info of the grid object. As you can see above, because this is a world transform, this goes hand in hand with the scale transform and the scale value. Hope this helps. Thanks

@PardhavMaradani

Thinking about this a bit more - @BradyAJohnston , given that the centering and scaling are just world transforms, do we really need to impose this upon GDF? We used openvdb for these transforms as we were anyway exporting the file and this was easiest to do right there. Since our use of GDF, we now have a common way to access to the underlying grid data in our density entity, so once we create the density object, we can just scale and transform our Blender object as we need. Our ask of GDF then reduces to having just metadata support in export. Your thoughts? Thanks


@orbeckst orbeckst left a comment


Minor changes, please run black on the files to get all formatting consistent.

Regarding the transformations it actually looks reasonable to me, but I want to hear more from @BradyAJohnston and @PardhavMaradani .

        assert tmpdir.join("auto.vdb").exists()

    def test_write_vdb_with_metadata(self, tmpdir):
        data = np.ones((3, 3, 3), dtype=np.float32)
Member


Could use grid345 and then add metadata.


class TestVDBWrite:
    def test_write_vdb_from_grid(self, tmpdir, grid345):
        data,g = grid345
Member


space after ,

got = acc.getValue((i, j, k))
assert got == pytest.approx(float(data[i, j, k]))

def test_write_vdb_default_grid_name(self, tmpdir):
Member


use fixture grid345?


voxel_size = grid_vdb.transform.voxelSize()

spacing=[voxel_size[0], voxel_size[1], voxel_size[2]]
Member


space around =

vdb_field.write(outfile)

grids, metadata = vdb.readAll(outfile)
assert grids[0].name == 'direct_test'
Member


assert shape/content


spacing = [voxel_size[0], voxel_size[1], voxel_size[2]]

assert_allclose(spacing, [1.0, 2.0, 3.0], rtol=1e-5)
Member


instead of [1.0, 2.0, 3.0] use the variable

Suggested change
assert_allclose(spacing, [1.0, 2.0, 3.0], rtol=1e-5)
assert_allclose(spacing, delta, rtol=1e-5)

Comment on lines 178 to 179
assert acc.getValue((2, 3, 4)) == pytest.approx(5.0)
assert acc.getValue((7, 8, 9)) == pytest.approx(10.0)
Member


instead of hard coding 5.0 and 10.0, access data

Suggested change
assert acc.getValue((2, 3, 4)) == pytest.approx(5.0)
assert acc.getValue((7, 8, 9)) == pytest.approx(10.0)
assert acc.getValue((2, 3, 4)) == pytest.approx(data[2, 3, 4])
assert acc.getValue((7, 8, 9)) == pytest.approx(data[7, 8, 9])

(and one could just make it a loop over index tuples if there were more than 2)

grid_vdb = grids[0]
acc = grid_vdb.getAccessor()

assert acc.getValue((1, 1, 1)) == pytest.approx(1.0)
Member


access data



spyke7 commented Feb 3, 2026

In this recent push, I have applied the changes as asked in test_vdb.py. I will soon implement scale and center in core.py as well as OpenVDB.py.

@PardhavMaradani

Regarding the transformations it actually looks reasonable to me, but I want to hear more from @BradyAJohnston and @PardhavMaradani .

  • I am fine with the exporter having the center and scale support. (i.e., ignore my previous comment) Not having this would add an additional step for MN. Given these don't change anything in the index space, these should not impact the import from vdb support later
  • The current transform in the code adds an additional offset of negative half delta - @spyke7, I presume you added this to make it cell-centered? I would leave OpenVDB's default of vertex-centered as is. Blender already accounts for this (see Blender PR #138449). The current code would cause a tiny offset
    • If at all a cell-centered transform is needed, this should probably be passed as an additional param and not hard-coded
  • I see that the exporter only creates a float grid. Given that OpenVDB supports different grid types, maybe use the data type to determine the corresponding grid type?


spyke7 commented Feb 3, 2026

  • The current transform in the code adds an additional offset of -ve half delta - @spyke7 , I presume you added this to make this cell-centered? I would leave the default of vertex-centered in OpenVDB as is. Blender already accounts for this (see Blender PR #138449). The current code would cause a tiny offset

    • If at all a cell-centered transform is needed, this should probably be passed as an additional param and not hard-coded
  • I see that the exporter only creates a float grid. Given that OpenVDB supports different grid types, maybe use the data type to determine the corresponding grid type?

Yeah! I added that 0.5 * delta offset because GDF uses a cell-centered convention. I will remove it, as it just creates an additional offset.
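The half-voxel difference being discussed, as arithmetic (a generic illustration, not the PR's code): vertex-centered places sample (i, j, k) at origin + i * delta, while cell-centered shifts every sample by half a voxel, so the two conventions differ by exactly 0.5 * delta per axis.

```python
import numpy as np

delta = np.array([1.0, 2.0, 3.0])   # example voxel spacing
origin = np.zeros(3)
ijk = np.array([0, 0, 0])

vertex_centered = origin + ijk * delta          # OpenVDB's default
cell_centered = origin + (ijk + 0.5) * delta    # half-voxel-shifted variant

offset = cell_centered - vertex_centered
print(offset)  # exactly 0.5 * delta -> [0.5 1.  1.5]
```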


@orbeckst orbeckst left a comment


This looks all really good to me.

From my perspective, we only need to decide if the transformations should stay.

EDIT: Only just saw #148 (comment), so we're keeping the transformation but removing the offset.

@BradyAJohnston @PardhavMaradani want some way to tweak the exports. Could you please leave a (blocking) review describing what you need to have added so that MN can make best use of the functionality?

@BradyAJohnston

Sorry, I should have time to look over this tomorrow. Adding the offset / centering on export is definitely something that could be handled by MN, but adding some transformation to the grid on export might still be useful more generally (or adding a transform to a Grid before export?). Will look over in more detail tomorrow.



Development

Successfully merging this pull request may close these issues.

add OpenVDB format

4 participants