I’m happy to announce that Umbra has moved to a sweet new office! We’ve been busy with boxes and moving labels, and on moving day the elevators were out of order. But the end result is spacious, shiny and worth the trouble. And maybe some champagne and a few beers. Anyway…
We use the word scene a lot here at Umbra. We use it to refer to the game maps and other 3D content we test Umbra with – real-world content we’ve received from our wonderful industry partners, without whom there would be no Umbra. The word comes from ancient Greek, where it meant the building behind a theatre stage from which actors emerged. When it comes to games, our modern meaning is not that far from classical drama. And if that’s so, Umbra is like the stagehand: someone working backstage who keeps the show running while staying invisible to the audience. We now set our drama on a more intricate stage, but it serves the same purpose.
In this series of posts I delve a bit into Umbra’s internals and the reasoning behind them. In the previous installment I reviewed the visibility problem and the properties a good solution should have. There was also something confusing about old painters. In short, an ideal solution should be practical, fast, early, automated, correct, and able to deal with the global nature of the problem. Surely not that tall an order. In this part I’m going to reveal Umbra’s secret sauce. Ready? Aren’t secrets the best?
Umbra has something in common with several traditional visibility approaches – occluder rasterization, potentially visible sets and portals. It’s sort of a mix of the best parts, and I’m going to present the recipe. Umbra’s culling system precomputes a simplified visibility database representing a 3D world. A fast runtime library then traverses the database to solve visibility for a given view.
We call these databases tomes. They consist of automatically placed portals and cells. These are common concepts in the world of visibility: a cell is a volume in 3D space within which visibility is similar, and the world is divided into cells. Portals are links between cells: openings in space, each with a shape of its own, leading from one cell into another. The difficult problem is finding good portals and cells in a fast and practical fashion.
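As a mental model, a cell-and-portal graph can be sketched in a few lines of Python. This is not Umbra’s actual data layout or API – `Cell`, `Portal` and the field names here are made up for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Portal:
    target: int   # index of the cell this portal leads into
    rect: tuple   # the opening's shape: (min_x, min_y, max_x, max_y)

@dataclass
class Cell:
    portals: list = field(default_factory=list)  # openings into neighboring cells
    objects: list = field(default_factory=list)  # objects contained in this cell

# Two rooms joined by a doorway; each direction gets its own portal.
doorway = (0.0, 0.0, 1.0, 2.0)
cells = [Cell(), Cell()]
cells[0].portals.append(Portal(target=1, rect=doorway))
cells[1].portals.append(Portal(target=0, rect=doorway))
cells[0].objects.append("sofa")
cells[1].objects.append("piano")
```

Anything the camera can see from inside a cell, beyond the cell itself, it sees through one of that cell’s portals – that invariant is what makes the structure useful for culling.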
Umbra’s precomputation step, the Umbra optimizer, solves this problem. It starts with voxelization – a discretization of 3D space. The algorithm begins by generating small “raw” cells. These are then improved by combining them into bigger final cells, so that the loss in occlusion is minimal.
Each raw cell is a connected set of empty voxels no bigger than some user-defined size. Portals are generated on the open surfaces between these cells. The user controls database accuracy by supplying the size of these raw cells. This “raw cell size”, together with the voxel size, forms the two most important user-controlled parameters in Umbra. They are publicly given more understandable names: “smallest occluder” (raw cell size) and “smallest hole” (voxel size). Controlling these parameters allows scaling the data for any environment and requirements. Several versions of the data are generated for different levels of detail – occlusion near the camera is more important than occlusion farther away.
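The raw-cell step can be approximated as a size-capped flood fill over the voxel grid. A minimal sketch, in 2D and with the cap expressed as a voxel count rather than a world-space size – `raw_cells` and its parameters are hypothetical, not Umbra’s optimizer:

```python
from collections import deque

def raw_cells(solid, max_voxels):
    """Flood-fill empty voxels into connected raw cells, capping each
    cell at max_voxels (a stand-in for the 'smallest occluder' size)."""
    h, w = len(solid), len(solid[0])
    cell_id = [[-1] * w for _ in range(h)]   # -1 = solid or unassigned
    next_id = 0
    for sy in range(h):
        for sx in range(w):
            if solid[sy][sx] or cell_id[sy][sx] != -1:
                continue
            cell_id[sy][sx] = next_id        # seed a new raw cell
            queue, count = deque([(sx, sy)]), 1
            while queue:
                x, y = queue.popleft()
                for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                    if (0 <= nx < w and 0 <= ny < h and count < max_voxels
                            and not solid[ny][nx] and cell_id[ny][nx] == -1):
                        cell_id[ny][nx] = next_id
                        count += 1
                        queue.append((nx, ny))
            next_id += 1
    return cell_id, next_id

# A 4x4 slice: a solid wall (1s) splits the empty space in two.
wall = [[0, 0, 1, 0],
        [0, 0, 1, 0],
        [0, 0, 1, 0],
        [0, 0, 1, 0]]
_, coarse = raw_cells(wall, max_voxels=99)  # no real cap: one cell per side
_, fine = raw_cells(wall, max_voxels=4)     # tighter budget: more, smaller cells
```

Shrinking the budget splits the same empty space into more cells – which is exactly the accuracy-versus-cost dial that “smallest occluder” exposes.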
Umbra’s runtime walks these cells, linked by portals, to figure out the visible volume. Visible objects are collected along the way. Compared to other kinds of portal culling systems, Umbra generates a lot of portals. The runtime thus presents quite an engineering challenge: it needs to be fast and optimized on each platform to handle the high portal count. A common method for portal culling is narrowing the view frustum to fit each portal encountered. Umbra doesn’t do this – it rasterizes the portals instead. In the end, a low-resolution depth buffer is generated.
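To make the traversal idea concrete, here is a toy version in Python. It ignores depth and uses a 1D screen mask instead of a real depth buffer, and the “tome” uses one-way portals to keep the sketch acyclic – none of this is Umbra’s actual runtime, just the shape of the algorithm:

```python
def traverse(cells, start, screen_w=16):
    """Depth-first portal traversal. Instead of narrowing a frustum,
    each portal is rasterized into a low-resolution screen mask
    (a single scanline here); traversal continues through a portal
    only where some of its pixels are still open."""
    visible = set()

    def visit(cell_id, mask):
        visible.update(cells[cell_id]["objects"])
        for x0, x1, target in cells[cell_id]["portals"]:
            portal_mask = [x0 <= x < x1 for x in range(screen_w)]
            new_mask = [a and b for a, b in zip(mask, portal_mask)]
            if any(new_mask):   # skip portals with no pixels left open
                visit(target, new_mask)

    visit(start, [True] * screen_w)   # start with the whole view open
    return visible

# Hypothetical tome: portals are (screen_x0, screen_x1, target_cell).
tome = [
    {"objects": ["crate"],  "portals": [(4, 8, 1), (12, 14, 2)]},
    {"objects": ["barrel"], "portals": [(0, 4, 3)]},
    {"objects": ["statue"], "portals": []},
    {"objects": ["ghost"],  "portals": []},
]
# From cell 0, the chain to the ghost closes off in screen space:
# [4, 8) ∩ [0, 4) is empty, so cell 3 is never entered.
```

With thousands of portals per frame, the per-portal work above is exactly what the real runtime has to make cheap on every platform.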
Some visibility approaches are based on hand-made occluder models – a representation of what’s solid in the world. You could see Umbra as a method for modelling empty space instead. This approach has the benefit that any inaccuracy can always be countered by growing the empty space (i.e. the portals). Sources of inaccuracy include voxelization and the low rasterization resolution. This also lets us make the runtime faster by using lower database and runtime resolutions, while always retaining error-free output.
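“Growing the empty space” is as simple as it sounds: pad each portal opening outward by the size of the worst-case error. A one-liner sketch of the idea (hypothetical helper, not Umbra code):

```python
def conservative_portal(rect, voxel_size):
    """Grow a portal's opening by one voxel on every side. Any error
    from voxelization or low-res rasterization is absorbed by the
    enlarged opening, so culling stays conservative: a hidden object
    may be reported visible, but never the other way around."""
    x0, y0, x1, y1 = rect
    return (x0 - voxel_size, y0 - voxel_size,
            x1 + voxel_size, y1 + voxel_size)

padded = conservative_portal((1.0, 1.0, 2.0, 3.0), voxel_size=0.25)
```

Padding only ever costs a little occlusion; it never introduces visual errors, which is why dropping resolution is a safe speed knob.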
My previous blog post discussed the global nature of visibility. Umbra divides the work into a local offline operation and a global runtime one. Unlike PVS systems, this allows fast reuse of partial results from previous precomputations – and thus fast local updates to the database.
There are also benefits in having a separate set of static data for the runtime algorithm to operate on. It makes the system independent from everything else: it’s parallelizable and easier to integrate. An application can begin to execute Umbra’s runtime as soon as the camera’s position is known for a frame.
You can see how we use portals like traditional portal culling, and rasterize them like occluder models. The reality is naturally a lot more intricate than this blog post, but that’s the gist of it. There are more features to Umbra still left undiscussed, but hopefully this provides a window into the reasoning behind it all. I believe Umbra to be a very good practical system: it’s able to process any kind of polygonal input into a fast visibility solution. When it comes to drama and entertainment, it might even be one step forward on the long road towards that perfect stage.
If you want to give it a go, we have a free 30-day evaluation period available for our visibility solution – it’s likely the best thing you’ll ever meet.