Recently I've been looking into solving visibility between any two volumes. Specifically, given two volumes, do any two points within them see each other? A visibility query that answers this could be the ultimate tool for occlusion culling and a fundamental building block for other visibility operations. After all, operations such as point visibility and object visibility can be seen as special cases of such an algorithm.
Umbra 3.4.16 now includes this feature and I'll discuss here how it works and how it came to be. I'll also cover some use cases, such as visualizing total visible volume and improving dynamic object culling accuracy.
Umbra already supports line segment queries. These are visibility queries between two points. This might be splitting hairs, but Umbra’s line segment query differs from ray casting in that it requires an endpoint where the query terminates. This query simply traverses Umbra’s visibility graph between the two points, effectively intersecting a line segment with the graph. Supporting other shapes requires efficiently intersecting other corresponding volumes.
Solving visibility between two red boxes. Traversed portals in green.
The new query for volume-to-volume visibility forms a “tunnel” between two shapes and traverses the visibility graph (portals and cells) inside this volume. If the graph is connected along it, we know the shapes see each other.
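As a sketch, the traversal can be thought of as a breadth-first search over the cell-and-portal graph, restricted to cells that intersect the tunnel. The names below (`Graph`, `tunnelConnected`, the `insideTunnel` predicate) are illustrative, not Umbra's API:

```cpp
#include <cassert>
#include <functional>
#include <queue>
#include <vector>

// Toy cell-and-portal graph: cells are nodes, portals are edges.
struct Graph {
    std::vector<std::vector<int>> portals;  // portals[c] = neighbouring cells
};

// The two volumes see each other if the graph is connected along the
// tunnel. insideTunnel stands in for the actual cell-vs-tunnel
// intersection test.
static bool tunnelConnected(const Graph& g, int startCell, int endCell,
                            const std::function<bool(int)>& insideTunnel) {
    if (!insideTunnel(startCell) || !insideTunnel(endCell)) return false;
    std::vector<bool> seen(g.portals.size(), false);
    std::queue<int> q;
    q.push(startCell);
    seen[startCell] = true;
    while (!q.empty()) {
        int c = q.front(); q.pop();
        if (c == endCell) return true;  // connected along the tunnel
        for (int n : g.portals[c])
            if (!seen[n] && insideTunnel(n)) { seen[n] = true; q.push(n); }
    }
    return false;  // every path leaves the tunnel: occluded
}
```

Cutting a single cell out of the tunnel is enough to disconnect a corridor-like graph, which is exactly how an occluder blocks visibility here.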
This idea is somewhat similar to testing moving objects for collision. If an object moves during a frame, you’d want to take all intermediate positions into account instead of only the start and end positions; otherwise, a collision could be missed. The collision is therefore computed against the combined shape swept by the object as it slides along a ray, sometimes called the Minkowski sum of a line segment and the moving object. Similarly, we want to intersect the portals of our visibility graph against the volume formed when one volume slides toward the other. In our case, the volume also changes along the way if the two ends are different.
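To make the analogy concrete, here is a minimal sketch of the moving-object case: a sphere sweeping from one position to another hits a static sphere exactly when the swept capsule (the Minkowski sum of the segment and the moving sphere) overlaps the obstacle, which reduces to a segment-point distance test. All names are illustrative:

```cpp
#include <cassert>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Squared distance from point p to segment ab.
static float segPointDist2(Vec3 a, Vec3 b, Vec3 p) {
    Vec3 ab = sub(b, a), ap = sub(p, a);
    float denom = dot(ab, ab);
    float t = denom > 0.0f ? dot(ap, ab) / denom : 0.0f;
    if (t < 0.0f) t = 0.0f;
    if (t > 1.0f) t = 1.0f;
    Vec3 closest = { a.x + t * ab.x, a.y + t * ab.y, a.z + t * ab.z };
    Vec3 d = sub(p, closest);
    return dot(d, d);
}

// A sphere of radius r moving from a to b hits a static sphere (center c,
// radius rc) iff the segment passes within r + rc of c. Testing only the
// start and end positions would miss collisions in between.
static bool sweptSphereHitsSphere(Vec3 a, Vec3 b, float r, Vec3 c, float rc) {
    float rsum = r + rc;
    return segPointDist2(a, b, c) <= rsum * rsum;
}
```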
In practice, we don’t have the luxury of supporting arbitrary volumes, so I’ve concentrated on spheres and boxes. Sphere-to-sphere visibility involves intersecting a cut cone (a conical frustum) with the visibility graph. The connection between two AABBs (axis-aligned bounding boxes) is defined by the volume bounded by the silhouettes of the two boxes.
Unfortunately, the table of intersection algorithms on realtimerendering.com only offers an empty slot for cone-AABB intersection. For moving AABBs it suggests defining the volume using plane equations, where computing the planes can be costly. It turns out there are interesting methods for computing both intersections.
Cone-AABB intersection involves solving the intersection points of the cone axis with each AABB plane, computing the closest points inside the AABB, and checking whether they are inside the cone. Note also that the end radii of the cut cone aren’t necessarily those of the spheres.
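One building block of that test is classifying a candidate point from the box against the cut cone. A hedged sketch, assuming the cone is given by its two end discs and interpolating the radius linearly along the axis (not derived from sphere tangents, per the caveat above):

```cpp
#include <cassert>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Is p inside the cut cone spanned between disc (c0, r0) and disc (c1, r1)?
// Project p onto the axis, reject points beyond the end discs, then compare
// the radial distance against the cone radius at that axial position.
static bool pointInCutCone(Vec3 p, Vec3 c0, float r0, Vec3 c1, float r1) {
    Vec3 axis = sub(c1, c0);
    float len2 = dot(axis, axis);
    float t = len2 > 0.0f ? dot(sub(p, c0), axis) / len2 : 0.0f;
    if (t < 0.0f || t > 1.0f) return false;  // outside the end discs
    float r = r0 + t * (r1 - r0);            // cone radius at this position
    Vec3 onAxis = { c0.x + t * axis.x, c0.y + t * axis.y, c0.z + t * axis.z };
    Vec3 d = sub(p, onAxis);
    return dot(d, d) <= r * r;
}
```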
There’s a method for computing ray-AABB intersection called the slab method. It computes the parameter range in which the ray overlaps the AABB on each axis; if there’s a common range where all axes overlap, the ray intersects the AABB. It’s possible to extend this method to our AABB-tunnel intersection so that on each axis the ray also has a variable thickness.
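A sketch of that extension: widen each slab by a radius that varies linearly along the segment. With a linearly varying radius each slab bound is still a linear inequality in t, so the per-axis intervals can be intersected exactly as in the plain slab method. This is a conservative toy version (a sphere swept between the endpoints, slabs expanded per axis), not Umbra's implementation:

```cpp
#include <algorithm>
#include <cassert>

struct Vec3 { float x, y, z; };

// Intersect { t : a*t <= b } with the running interval [tmin, tmax].
static bool clipLinear(float a, float b, float& tmin, float& tmax) {
    if (a > 0.0f)      tmax = std::min(tmax, b / a);
    else if (a < 0.0f) tmin = std::max(tmin, b / a);
    else if (b < 0.0f) return false;  // 0*t <= b never holds
    return tmin <= tmax;
}

// Sphere center moves from a to b while its radius varies linearly from
// ra to rb; test the swept "tunnel" against the AABB [lo, hi]. On each
// axis we require  lo - r(t) <= o + t*d <= hi + r(t),  r(t) = ra + t*dr.
static bool tunnelIntersectsAABB(Vec3 a, Vec3 b, float ra, float rb,
                                 Vec3 lo, Vec3 hi) {
    float o[3] = { a.x, a.y, a.z };
    float d[3] = { b.x - a.x, b.y - a.y, b.z - a.z };
    float l[3] = { lo.x, lo.y, lo.z };
    float h[3] = { hi.x, hi.y, hi.z };
    float dr   = rb - ra;
    float tmin = 0.0f, tmax = 1.0f;
    for (int i = 0; i < 3; ++i) {
        if (!clipLinear(-(d[i] + dr), o[i] + ra - l[i], tmin, tmax)) return false;
        if (!clipLinear(  d[i] - dr,  h[i] + ra - o[i], tmin, tmax)) return false;
    }
    return true;  // a common t-range survived on all axes
}
```

With ra = rb = 0 this degenerates to the ordinary segment-vs-AABB slab test.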
It should be noted that this method doesn’t support occluder fusion. This means that combined coverage of all occluders is not considered, but rather each occluder by itself. In other words, the query tells whether there’s any connection within the tunnel between the two volumes rather than a direct line of sight. Interpretation of the result is therefore conservative – as it is with other visibility queries.
The new queries are exposed as runtime API functions that implement sphere-to-sphere and AABB-to-AABB visibility. The API allows batching several queries together, which improves performance, especially when target volumes are close to each other. The queries can additionally be performed for a single point by using an AABB with zero dimensions or a sphere with zero radius.
I usually avoid giving specific performance numbers, since they depend on the content and the Umbra parameters used. One general characteristic is that queries that quickly reach the endpoint or an occluder are faster than ones that traverse a lot of volume, so performance also depends on the exact use case. I’m happy that the AABB query in particular is only slightly slower than the line segment query. The AABB query can simply be used for solving whether characters in the game world see each other, but it could also open up interesting opportunities for utilizing spatial information hierarchically and otherwise. Let’s look at some examples.
One application of volumetric queries is visualizing total visible volume from a point. We can subdivide the scene into a grid and solve visibility for each node from the camera point. This yields the visualization below. It is a top-down view of a simple scene, where red lines indicate boundaries of visible and occluded space. White lines are the camera frustum.
In the above visualization, the camera point acts as the source volume. We can also visualize visible volume for an arbitrarily sized box (white).
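A toy 2D version of the idea, assuming a single wall segment as the only occluder and a plain segment-segment intersection test standing in for the real query: each grid cell's center is tested for line of sight from the camera point. All names are illustrative:

```cpp
#include <cassert>

struct V2 { float x, y; };

static float cross2(V2 a, V2 b) { return a.x * b.y - a.y * b.x; }

// Standard parametric segment-segment intersection test; parallel
// segments are ignored for this sketch.
static bool segmentsIntersect(V2 p1, V2 p2, V2 q1, V2 q2) {
    V2 r = { p2.x - p1.x, p2.y - p1.y };
    V2 s = { q2.x - q1.x, q2.y - q1.y };
    float denom = cross2(r, s);
    if (denom == 0.0f) return false;
    V2 pq = { q1.x - p1.x, q1.y - p1.y };
    float t = cross2(pq, s) / denom;
    float u = cross2(pq, r) / denom;
    return t >= 0 && t <= 1 && u >= 0 && u <= 1;
}

// A grid cell (represented by its center) is visible from the camera if
// the sight line does not cross the wall segment. Looping this over every
// cell of a grid yields the kind of visible/occluded map shown above.
static bool cellVisible(V2 cam, V2 cell, V2 wall1, V2 wall2) {
    return !segmentsIntersect(cam, cell, wall1, wall2);
}
```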
Following this idea further, precomputing and storing visibility for each grid node would allow us to build a database for fast visibility lookups. The problem then becomes storing and accessing the data efficiently: storing the state of every grid node individually would produce too much data for a practical solution. Computing and storing visibility hierarchically, for example, while merging nearby similar nodes, could be sufficiently small and fast.
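A hedged sketch of merging nearby similar nodes: collapse a 2^n x 2^n boolean visibility grid into a quadtree in which uniform regions become single leaves, and count the stored nodes to see the saving over per-cell storage. This is an assumption about how such a database could be compacted, not a description of an existing Umbra feature:

```cpp
#include <cassert>
#include <vector>

// Counts the nodes of a quadtree built over the square region of the grid
// with top-left corner (x, y) and side length `size` (a power of two).
// A region whose cells all share one visibility state is stored as a
// single merged leaf; mixed regions recurse into four quadrants.
static int buildQuadtree(const std::vector<std::vector<bool>>& grid,
                         int x, int y, int size) {
    bool first = grid[y][x];
    bool uniform = true;
    for (int j = y; j < y + size && uniform; ++j)
        for (int i = x; i < x + size; ++i)
            if (grid[j][i] != first) { uniform = false; break; }
    if (uniform || size == 1) return 1;  // one merged leaf
    int half = size / 2;
    return 1 + buildQuadtree(grid, x, y, half)
             + buildQuadtree(grid, x + half, y, half)
             + buildQuadtree(grid, x, y + half, half)
             + buildQuadtree(grid, x + half, y + half, half);
}
```

A fully visible 4x4 grid collapses to a single node, and flipping one cell grows the tree to 9 nodes, still fewer than the 16 per-cell entries.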
Dynamic objects are objects that move and whose positions are not known until runtime. These objects can be tested for visibility against Umbra’s software depth buffer. The software depth buffer is a conservative approximation and its accuracy depends on the parameters used. Unlike static objects, dynamic object visibility can therefore “leak” slightly through nearby walls if the camera is at an angle.
Below, purple boxes are handled as dynamic objects; bright purple boxes are visible according to the depth buffer test. The leaks are caused by Umbra’s depth buffer using a single depth value per visibility graph cell. A smaller “smallest occluder” parameter value makes the depth buffer more accurate and thus reduces dynamic object leaking.
We can further improve dynamic culling accuracy by additionally testing dynamic objects for visibility using the new query.
There’s also a query variant that tests AABB vs. static object visibility. While intended for more general use, it can also be used to generate a list of all visible objects, as an alternative to Umbra’s regular visibility query, queryPortalVisibility. Since queryPortalVisibility solves visibility in one pass instead of object by object, it of course wins in performance. At first this doesn’t seem like a useful exercise; however, this kind of implementation would be easier to distribute across a high number of jobs, for example in a GPU implementation.
Read more about our Umbra 3 gaming product here!