From occlusion culling to LOD – see the new Umbra tech in action


Umbra is currently best known for its industry-leading automatic occlusion culling solution. Many triple-A studios use it, and it is integrated into the Unity Engine. If you use occlusion culling in Unity, you are already using our product! For a more detailed look at occlusion culling in Unity, check out Jasin’s great write-up here.

The problem

A massive landscape with long view distances and a single model of a building where even the tiniest bolt is modeled in great detail have one thing in common: they are very hard to render in real time. There are often too many objects on screen to render efficiently. The worst case is usually when the camera sees the whole scene at once with little to no occlusion. In these cases, occlusion culling alone is simply not enough.

Fortunately, a technique called level of detail (LOD) solves this problem by creating simpler versions of each object. At runtime, the system then selects the version that looks good enough based on the camera distance or the object’s size on the screen. Done correctly, this increases rendering performance while keeping the quality reduction unnoticeable to the end user. Unfortunately, it also introduces a new problem for content authors: in addition to the original source data, they now need to somehow create these simpler versions. This work is tedious and does not scale to large object counts. What is needed is a fully automated solution.
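As a rough sketch of how distance-based LOD selection typically works: project the object's bounding sphere onto the screen and pick the simplest model that still covers enough of it. The thresholds and the projection formula below are illustrative assumptions, not Umbra's actual heuristics.

```python
import math

# Illustrative screen-coverage thresholds: above 0.5 use LOD0 (full detail),
# above 0.1 use LOD1, above 0.02 use LOD2, otherwise the coarsest level.
LOD_THRESHOLDS = [0.50, 0.10, 0.02]

def projected_size(radius, distance, fov_y):
    """Approximate an object's bounding sphere as a fraction of screen height."""
    if distance <= radius:
        return 1.0  # camera is inside or touching the object: full detail
    return min(radius / (distance * math.tan(fov_y / 2.0)), 1.0)

def select_lod(radius, distance, fov_y=math.radians(60)):
    """Return the LOD index (0 = most detailed) for an object."""
    size = projected_size(radius, distance, fov_y)
    for lod, threshold in enumerate(LOD_THRESHOLDS):
        if size >= threshold:
            return lod
    return len(LOD_THRESHOLDS)  # coarsest model
```

With these numbers, a one-meter object two meters away renders at full detail, while the same object a hundred meters away drops to the coarsest level.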

Umbra to the rescue

Our answer is a completely automated visibility and level-of-detail optimization solution. We combined our existing occlusion culling system with our brand-new LOD system to massively expand the types of scenes we can optimize. From the user’s point of view, all they have to do is load up their scene, press a single button and let Umbra handle the rest. We do not require any manual markup on any objects. Once Umbra has optimized the scene, we output an optimized 3D database that contains both visibility data and LOD assets. At runtime, we simply query the database for a list of visible objects and select the appropriate LOD model for each one.
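The runtime loop described above can be sketched roughly like this. The `Database` class and its methods are hypothetical stand-ins for illustration only, not Umbra's actual API; real visibility queries walk a precomputed spatial structure rather than filtering on a single cell id.

```python
class Database:
    """Toy stand-in for the optimized 3D database: precomputed visibility
    plus a list of LOD meshes per object."""

    def __init__(self, objects):
        # objects: {name: {"cell": visibility_cell_id, "lods": [mesh0, mesh1, ...]}}
        self.objects = objects

    def query_visible(self, camera_cell):
        """Return the names of objects visible from the camera's cell."""
        return [name for name, obj in self.objects.items()
                if obj["cell"] == camera_cell]

    def pick_lod(self, name, distance):
        """Farther objects get coarser meshes (higher LOD index)."""
        lods = self.objects[name]["lods"]
        index = min(int(distance // 50), len(lods) - 1)
        return lods[index]

# Per frame: query visibility, then choose one LOD mesh per visible object.
db = Database({"tower": {"cell": 0, "lods": ["tower_lod0", "tower_lod1"]}})
meshes_to_render = [db.pick_lod(name, distance=120.0)
                    for name in db.query_visible(camera_cell=0)]
```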

The front-end of our system is built on top of Unity, so anybody with Unity 5+ installed will be able to use it in their projects. Our LOD assets are designed to work with assets that use Unity’s Standard Shader. We also make sure the LOD assets are small enough for runtime streaming from disk or over the network.

Sounds good, but does it work?


In our recent videos we demonstrated how our solution optimizes very complex scenes that would normally require hundreds of gigabytes of GPU memory. We processed an area of roughly 25 square kilometers of Boston and were able to render the output at over 500 fps on an R9 270X, which is well beyond the recommended VR frame rates. We also tested our implementation on Android and got the same scene running on a Samsung Galaxy S6 with Gear VR! Hitting VR frame rates with this scene is a huge win compared to our starting point, when we could not even fit the scene in memory, let alone render it in real time.

Neat, where can I get it?

We are planning to make this available to Unity users first, sometime next year. We are also looking for early-access users, so if you are interested, please sign up for early access.


