Spherical volume rendering #64

Draft: wants to merge 29 commits into base: main (diff shown below is the changes from 1 commit view)

Commits (29):
f4e9c3b: Adding temp PR for spherical rendering (sochowski, Dec 3, 2021)
ab78cab: Merged grid_position and spherical_grid_position to switch vertex sha… (sochowski, Dec 8, 2021)
af7663d: Temporary PR for spherical rendering 2 (sochowski, Dec 20, 2021)
ba3ed48: Update to do transformations in geometry shader (matthewturk, Dec 20, 2021)
ca6dbfb: add constant color shader (matthewturk, Dec 20, 2021)
d840d3f: Organized spherical ray-tracing (sochowski, Jan 21, 2022)
83d793f: Merge remote-tracking branch 'upstream/main' into spherical_refactoring (chrishavlin, Aug 2, 2022)
4cd83b3: [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot], Aug 2, 2022)
726aa82: spherical example to its own, resetn cartesian (chrishavlin, Aug 2, 2022)
3cf8186: experimenting... (chrishavlin, Dec 1, 2022)
c9a5bff: working? now only converting to cartesian for GLpos, etc. (chrishavlin, Dec 1, 2022)
065eb6d: add a shell fragment to the example, skipped linting (chrishavlin, Dec 1, 2022)
ddb69fd: swap the angles in the transformation (chrishavlin, Dec 2, 2022)
0501c6f: simplify the example (chrishavlin, Dec 2, 2022)
2cb14a8: cleanup (chrishavlin, Dec 2, 2022)
ad96eb3: Merge remote-tracking branch 'upstream/main' into spherical_vol_rende… (chrishavlin, Dec 2, 2022)
fb80cea: bad merge: add back the constant color shader (chrishavlin, Dec 2, 2022)
65a6abf: more cleaning (chrishavlin, Dec 2, 2022)
f43d577: move the spherical uniforms to child (chrishavlin, Dec 6, 2022)
d9a9771: add a bbox command line arg for example (chrishavlin, Dec 16, 2022)
1ffffb5: Merge remote-tracking branch 'upstream/main' into spherical_vol_rende… (chrishavlin, Dec 21, 2022)
0474454: in progress: spherical ray intersections (chrishavlin, Jan 6, 2023)
0031ee8: vectorize normal plane calc, only do it if spherical (chrishavlin, Jan 9, 2023)
4e4fdec: lingering changes, they look OK (chrishavlin, Apr 12, 2023)
dbd7240: Merge remote-tracking branch 'upstream/main' into spherical_vol_rende… (chrishavlin, Jun 6, 2023)
afcef53: add a simple test (chrishavlin, Jun 6, 2023)
4f73cdd: only rescale for cartesian (chrishavlin, Jun 8, 2023)
ac9cd00: move shader constants to include (chrishavlin, Jun 8, 2023)
3cef5b3: Merge remote-tracking branch 'upstream/main' into spherical_vol_rende… (chrishavlin, Jun 8, 2023)
48 changes: 48 additions & 0 deletions yt_idv/scene_data/block_collection.py
@@ -41,6 +41,39 @@ def add_data(self, field, no_ghost=False):
# Every time we change our data source, we wipe all existing ones.
# We now set up our vertices into our current data source.
vert, dx, le, re = [], [], [], []

# spherical-only: SHOULD ONLY DO THIS IF DATASET IS SPHERICAL
# normal vectors and plane constants for min/max phi face
phi_plane_le = []
phi_plane_re = []

axis_id = self.data_source.ds.coordinates.axis_id

def calculate_phi_normal_plane(le_or_re):

Review comment (Member): do we spend enough time in this function that it makes sense to vectorize it? i.e., call it on a list of coordinates and then return an array?

Reply (Contributor, Author): Vectorizing this is a good idea! and I think it will simplify un-tangling the spherical-only part of the calculation from the base cartesian version.

# calculates the parameters for a plane parallel to the z-axis and
# passing through the x-y values defined by the polar angle phi
# the result plane defines the boundary on +/-phi faces.
#
# eq of plane:
# X dot N = d where N is a normal vector, d is a constant,
# X is a position vector
# this function returns N and d.
phi = le_or_re[axis_id["phi"]]
theta = le_or_re[axis_id["theta"]]
r = le_or_re[axis_id["r"]]

# prob can/should use a yt func here to ensure consistency....
z = r * np.cos(theta)
r_xy = r * np.sin(theta)
x = r_xy * np.cos(phi)
y = r_xy * np.sin(phi)
pt = np.array([x, y, z])

z_hat = np.array([0, 0, 1])
normal_vec = np.cross(pt, z_hat)
d = np.dot(normal_vec, pt)
return normal_vec, d

self.min_val = +np.inf
self.max_val = -np.inf
if self.scale:
@@ -65,6 +98,12 @@ def add_data(self, field, no_ghost=False):
le.append(block.LeftEdge.tolist())
re.append(block.RightEdge.tolist())

normal, const = calculate_phi_normal_plane(le[-1])
phi_plane_le.append(np.array([*normal, const]))

normal, const = calculate_phi_normal_plane(re[-1])
phi_plane_re.append(np.array([*normal, const]))

for (g, node, (sl, _dims, _gi)) in self.data_source.tiles.slice_traverse():
block = node.data
self.blocks_by_grid[g.id - g._id_offset].append((id(block), i))
@@ -95,6 +134,15 @@ def add_data(self, field, no_ghost=False):
VertexAttribute(name="in_right_edge", data=re)
)

phi_plane_re = np.array(phi_plane_re, dtype="f4")
phi_plane_le = np.array(phi_plane_le, dtype="f4")
self.vertex_array.attributes.append(
VertexAttribute(name="phi_plane_le", data=phi_plane_le)
)
self.vertex_array.attributes.append(
VertexAttribute(name="phi_plane_re", data=phi_plane_re)
)

Review comment (Member): So it occurs to me that we should have the cartesian volume rendering as the basic option, and we can/should have spherical as a subclass. Let's not try cramming it all in; @neutrinoceros's work on having things be visible in index-space coordinates is encouraging me to think of it in both ways.

That's not really specific here though, except that what we may want to do is to have this be a supplemental function that gets called if the component accessing the data is viewing it in spherical coordinates.

Reply (Contributor, Author): ya! I agree totally! I just jammed it in here to get things initially working. Unjamming it and subclassing it relates to your vectorization question, I think. If I vectorize that function, it'll be easier to have a subclass that takes the output from the standard cartesian version to calculate the new spherical-only attributes.

Reply (Contributor, Author): latest push no longer breaks things for cartesian coords (it's now vectorized and only runs if spherical).

I still want to keep thinking about how to better capture the different geometries, though. I initially thought it might make sense to have a SphericalBlockCollection, but implementing it wasn't super straightforward because the calculations rely on the left and right edges, which currently only get saved via the vertex_array attribute after casting to "f4". So a standard construction like:

class SphericalBlockCollection(BlockCollection):
    def add_data(self, field, no_ghost=False):
        super().add_data(field, no_ghost=no_ghost)

        << --- calculate phi-normal planes from left, right edges --- >>

would require us also saving the full le and re arrays as attributes. Since they're only needed for an intermediate step, it seemed better to just add a geometry check to BlockCollection.add_data(). But I'm open to ideas here!
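
For illustration only (not part of this diff), a rough numpy sketch of what a vectorized phi-normal-plane helper could look like; the function name, signature, and the (N, 3) edge layout are assumptions:

import numpy as np

def calculate_phi_normal_planes(edges, axis_id):
    # edges: (N, 3) array of left or right edges in the dataset's native
    # spherical ordering; axis_id maps "r"/"theta"/"phi" to column indices.
    # Returns an (N, 4) array of [nx, ny, nz, d] for the plane X dot N = d.
    phi = edges[:, axis_id["phi"]]
    theta = edges[:, axis_id["theta"]]
    r = edges[:, axis_id["r"]]

    # spherical -> cartesian point lying on each phi face
    z = r * np.cos(theta)
    r_xy = r * np.sin(theta)
    pts = np.column_stack([r_xy * np.cos(phi), r_xy * np.sin(phi), z])

    # the normal is perpendicular to both the z-axis and the point vector
    normals = np.cross(pts, np.array([0.0, 0.0, 1.0]))

    # plane constant d for each row: d = N dot X
    d = np.einsum("ij,ij->i", normals, pts)
    return np.column_stack([normals, d])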

# Now we set up our textures
self._load_textures()

7 changes: 7 additions & 0 deletions yt_idv/shaders/grid_expand.geom.glsl
@@ -6,6 +6,8 @@ layout ( triangle_strip, max_vertices = 14 ) out;
flat in vec3 vdx[];
flat in vec3 vleft_edge[];
flat in vec3 vright_edge[];
flat in vec4 vphi_plane_le[];
flat in vec4 vphi_plane_re[];

flat out vec3 dx;
flat out vec3 left_edge;
@@ -22,6 +24,9 @@ flat in vec4 vv_model[];

flat out ivec3 texture_offset;

flat out vec4 phi_plane_le;
flat out vec4 phi_plane_re;

// https://stackoverflow.com/questions/28375338/cube-using-single-gl-triangle-strip
// suggests that the triangle strip we want for the cube is

@@ -73,6 +78,8 @@ void main() {
inverse_pmvm = vinverse_pmvm[0];
inverse_proj = vinverse_proj[0];
inverse_mvm = vinverse_mvm[0];
phi_plane_le = vphi_plane_le[0];
phi_plane_re = vphi_plane_re[0];
dx = vdx[0];
v_model = vec4(current_pos, 1.0);
texture_offset = ivec3(0);
10 changes: 10 additions & 0 deletions yt_idv/shaders/grid_position.vert.glsl
@@ -5,6 +5,10 @@ in vec4 model_vertex;
in vec3 in_dx;
in vec3 in_left_edge;
in vec3 in_right_edge;

in vec4 phi_plane_le; // first 3 elements are the normal vector, final is constant
in vec4 phi_plane_re;

flat out vec4 vv_model;
flat out mat4 vinverse_proj; // always cartesian
flat out mat4 vinverse_mvm; // always cartesian
@@ -13,6 +17,9 @@ flat out vec3 vdx;
flat out vec3 vleft_edge;
flat out vec3 vright_edge;

flat out vec4 vphi_plane_le;
flat out vec4 vphi_plane_re;

vec3 transform_vec3(vec3 v) {
if (is_spherical) {
// in yt, phi is the polar angle from (0, 2pi), theta is the azimuthal
@@ -46,4 +53,7 @@ void main()
vdx = vec3(in_dx);
vleft_edge = vec3(in_left_edge);
vright_edge = vec3(in_right_edge);

vphi_plane_le = vec4(phi_plane_le);
vphi_plane_re = vec4(phi_plane_re);
}
121 changes: 121 additions & 0 deletions yt_idv/shaders/ray_tracing.frag.glsl
@@ -8,6 +8,11 @@ flat in mat4 inverse_pmvm; // always cartesian
flat in ivec3 texture_offset;
out vec4 output_color;

flat in vec4 phi_plane_le;
flat in vec4 phi_plane_re;

const float INFINITY = 1. / 0.;
const float PI = 3.1415926535897932384626433832795;
(chrishavlin marked this conversation as resolved)
bool within_bb(vec3 pos)
{
bvec3 left = greaterThanEqual(pos, left_edge);
@@ -45,6 +50,106 @@ vec4 cleanup_phase(in vec4 curr_color, in vec3 dir, in float t0, in float t1);
// void (vec3 tex_curr_pos, inout vec4 curr_color, float tdelta, float t,
// vec3 direction);

float get_ray_plane_intersection(vec3 p_normal, float p_constant, vec3 ray_origin, vec3 ray_dir)
{
float n_dot_u = dot(p_normal, ray_dir);
float n_dot_ro = dot(p_normal, ray_origin);
// add check for n_dot_u == 0 (ray is parallel to plane)
if (n_dot_u == float(0.0))
{
// the ray is parallel to the plane. there are either no intersections
// or infinite intersections. In the second case, we are guaranteed
// to intersect one of the other faces, so we can drop this plane.
return INFINITY;
}

return (p_constant - n_dot_ro) / n_dot_u;
}

vec2 get_ray_sphere_intersection(float r, vec3 ray_origin, vec3 ray_dir)
{
float dir_dot_dir = dot(ray_dir, ray_dir);
float ro_dot_ro = dot(ray_origin, ray_origin);
float dir_dot_ro = dot(ray_dir, ray_origin);
float rsq = r * r; // could be calculated in vertex shader
Review comment (Member): which r is this? could we actually calculate it in the vertex shader? eventually this would need to be done once for every cell edge, right?

Reply (Contributor, Author): that would be the r from left_edge or right_edge. So I could calculate r^2 for both of those in the vertex shader, ya?


float a_2 = 2.0 * dir_dot_dir;
float b = 2.0 * dir_dot_ro;
float c = ro_dot_ro - rsq;
float determinate = b*b - 2.0 * a_2 * c;
float cutoff_val = 0.0;
if (determinate < cutoff_val)
{
return vec2(INFINITY, INFINITY);
}
else if (determinate == cutoff_val)
{
return vec2(-b / a_2, INFINITY);
}
else
{
return vec2((-b - sqrt(determinate))/ a_2, (-b + sqrt(determinate))/ a_2);
}

}

vec2 get_ray_cone_intersection(float theta, vec3 ray_origin, vec3 ray_dir)
{
// note: cos(theta) and vhat could be calculated in vertex shader
float costheta;
vec3 vhat;
if (theta > PI/2.0)
Review comment (Member): It's entirely possible that this is what the compiler would do anyway, but I have found it to be simpler to write this as the sum of two different values multiplied by booleans. So you'd compute both and then add them. For instance, something like this, but with the casting done correctly:

first = theta > PI/2;
vhat = vec3(0,0,-1) * first + vec3(0,0,1) * (!first)

something like that, but I'm forgetting the right casting.

Reply (Contributor, Author): I did see that suggestion on some glsl forums; in some cases it can be more efficient than the full if statements? I think it would simplify the code here, will make that change.

{
// if theta is past PI/2, the cone will point in negative z and the
// half angle should be measured from the -z axis, not +z.
vhat = vec3(0.0, 0.0, -1.0);
costheta = cos(PI - theta);
}
else
{
vhat = vec3(0.0, 0.0, 1.0);
costheta = cos(theta);
}
float cos2t = pow(costheta, 2);
// note: theta = PI/2.0 is well defined. determinate = 0 in that case and
Review comment (Member): it may also be worth exploring using slight offsets instead of accounting for singularities; i.e., if == 0.0 produces a singularity, use 0.0 + epsilon

// the cone becomes a plane in x-y.

float dir_dot_vhat = dot(ray_dir, vhat);
float dir_dot_dir = dot(ray_dir, ray_dir);
float ro_dot_vhat = dot(ray_origin, vhat);
float ro_dot_dir = dot(ray_origin, ray_dir);
float ro_dot_ro = dot(ray_origin, ray_origin);

float a_2 = 2.0*(pow(dir_dot_vhat, 2) - dir_dot_dir * cos2t);
float b = 2.0 * (ro_dot_vhat * dir_dot_vhat - ro_dot_dir*cos2t);
float c = pow(ro_dot_vhat, 2) - ro_dot_ro*cos2t;
float determinate = b*b - 2.0 * a_2 * c;
if (determinate < 0.0)
Review comment (Member): This set of nested conditionals probably does need refactoring. I think single-level nested conditionals might be manageable by the compiler but we should avoid multi-level nesting.

{
return vec2(INFINITY, INFINITY);
}
else if (determinate == 0.0)
Review comment (Member): I think we do not want multiple return points

Reply (Contributor, Author): There can be 2 real intersections of the cone though? In most cases, only one of the intersections would also be on the face of the spherical voxel. But in some cases there could be 2.

{
return vec2(-b / a_2, INFINITY);
}
else
{
vec2 t = vec2((-b - sqrt(determinate))/ a_2, (-b + sqrt(determinate))/ a_2);

// check that t values are not for the shadow cone
float cutoff_t = (-1.0 * ro_dot_vhat) / dir_dot_vhat;
if (t[0] < cutoff_t)
{
t[0] = INFINITY;
}
else if (t[1] < cutoff_t)
{
t[1] = INFINITY;
}
return t;
}
}
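
Picking up the branchless-select suggestion from the review thread above, here is a small illustrative sketch in Python (not shader code; the helper name is mine). In GLSL the comparison would be cast with float(...) before multiplying:

import numpy as np

def cone_axis_and_costheta(theta):
    # multiply-by-boolean form of the if/else above: compute both options
    # and blend them with a 0/1 weight instead of branching.
    flipped = float(theta > np.pi / 2.0)  # 1.0 when the cone opens toward -z
    vhat = np.array([0.0, 0.0, -1.0]) * flipped + np.array([0.0, 0.0, 1.0]) * (1.0 - flipped)
    costheta = np.cos(np.pi - theta) * flipped + np.cos(theta) * (1.0 - flipped)
    return vhat, costheta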

void spherical_coord_shortcircuit()
{
vec3 ray_position = v_model.xyz; // now spherical
@@ -53,6 +158,22 @@ void spherical_coord_shortcircuit()
vec3 dir = -normalize(camera_pos.xyz - ray_position_xyz);
vec4 curr_color = vec4(0.0);

// intersections
vec2 t_sphere_outer = get_ray_sphere_intersection(right_edge[id_r], ray_position_xyz, dir);
if (t_sphere_outer[0] == INFINITY && t_sphere_outer[1] == INFINITY)
(chrishavlin marked this conversation as resolved)
{
// if there are no intersections with the outer sphere, then there
// will be no intersections with the remaining faces and we can stop
// looking.
discard;
}

vec2 t_sphere_inner = get_ray_sphere_intersection(left_edge[id_r], ray_position_xyz, dir);
float t_p_1 = get_ray_plane_intersection(vec3(phi_plane_le), phi_plane_le[3], ray_position_xyz, dir);
float t_p_2 = get_ray_plane_intersection(vec3(phi_plane_re), phi_plane_re[3], ray_position_xyz, dir);
vec2 t_cone_outer = get_ray_cone_intersection(right_edge[id_theta], ray_position_xyz, dir);
vec2 t_cone_inner = get_ray_cone_intersection(left_edge[id_theta], ray_position_xyz, dir);

Review comment (Member): so once we have these, what do we do?

Reply (Contributor, Author):
  1. all those t values are the intersections with the base geometries on which the faces of the spherical voxel lie, so we need to find the t values that actually intersect the voxel, not just the base geometries. Those would be the ray entry/exit points of the current voxel.
  2. I think those entry/exit points correspond to t_min and t_max of the standard cartesian tracing? So we'd similarly want to take n_samples of the current fragment between those min/max values?
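
To make steps 1 and 2 concrete, an illustrative numpy sketch (not shader code; the helper name, the (r, theta, phi) ordering of left_edge/right_edge, and the tolerance are all assumptions) of filtering the candidate t values down to the ones that actually hit this voxel and taking their min/max as the entry/exit distances:

import numpy as np

def voxel_entry_exit(t_candidates, ray_origin, ray_dir, left_edge, right_edge):
    # t_candidates: candidate distances from the sphere/cone/plane intersections;
    # non-finite entries (the INFINITY sentinels) are dropped first.
    t = np.asarray(t_candidates, dtype="float64")
    t = t[np.isfinite(t) & (t > 0.0)]

    # evaluate each candidate point and convert it to spherical coordinates
    pts = ray_origin[None, :] + t[:, None] * ray_dir[None, :]
    r = np.linalg.norm(pts, axis=1)
    theta = np.arccos(np.clip(pts[:, 2] / r, -1.0, 1.0))         # colatitude
    phi = np.mod(np.arctan2(pts[:, 1], pts[:, 0]), 2.0 * np.pi)  # azimuth
    sph = np.column_stack([r, theta, phi])

    # keep only candidates that land on this voxel, within a small tolerance
    eps = 1e-8
    on_voxel = np.all((sph >= left_edge - eps) & (sph <= right_edge + eps), axis=1)
    t = t[on_voxel]
    if t.size < 2:
        return None  # the ray misses (or only grazes) this voxel
    return t.min(), t.max()  # analogous to t0/t1 in the cartesian tracer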

// do the texture evaluation in the native coordinate space
vec3 range = (right_edge + dx/2.0) - (left_edge - dx/2.0);
vec3 nzones = range / dx;