In the pursuit of efficient 3D content streaming, data compression is essential. To understand where and how Umbra uses compression, I am going to briefly explain our processing pipeline.
In short, Umbra encodes arbitrary 3D scenes into a hierarchical representation and stores it in our Cloud platform for high-performance over-the-network delivery. To determine which assets need to be downloaded and rendered, Umbra queries the hierarchy using the current camera position. In this context, compression is used to reduce the time it takes to download the assets.
While Umbra assets are made up of geometry and textures, in this blog series, I will only discuss compression of triangulated surface geometry.
Before implementing a custom compression solution, it is a good idea to ask: why not use a standard solution such as gzip? To answer that question, let's compare gzip, Google's Draco mesh compressor, and our new custom mesh compressor.
The two quantities we want to measure are compression ratio and decompression time. From these quantities, we can calculate the combined download + decompression time at various bandwidths.
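The combined cost is straightforward to model. As a minimal sketch (the sizes and timings below are illustrative placeholders, not measurements from this post):

```python
# Estimate total delivery time as download time plus decompression time.
# All concrete numbers here are hypothetical, for illustration only.

def total_time_s(compressed_bytes: int, bandwidth_mbit_s: float,
                 decompress_s: float) -> float:
    """Download time at the given bandwidth plus decompression time."""
    download_s = compressed_bytes * 8 / (bandwidth_mbit_s * 1e6)
    return download_s + decompress_s

# Example: a 10 MB compressed mesh that takes 0.2 s to decompress,
# delivered over a 100 Mbit/s link.
t = total_time_s(10_000_000, 100, 0.2)
print(f"{t:.2f} s")  # 0.8 s download + 0.2 s decompression = 1.00 s
```

Evaluating this for each compressor across a range of bandwidths produces the time graphs shown below.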
We will look at two kinds of models. The first one is reconstructed using photogrammetry and the second one is an artist-made model.
Let's start with the photogrammetry model. It looks like this:
And here are the results of the comparison:
Looking at the compressed size graph, we can see that while gzip cuts the size roughly in half, it is nowhere near Draco or Umbra. This shows that if you know the data you are compressing, you can easily beat general-purpose compressors by a large margin. While Draco compresses slightly smaller than Umbra, ours decompresses roughly two times faster.
Looking at the download + decompression time graph, we can see that at 250 Mbit/s it becomes more efficient to use gzip than Draco, while Umbra remains superior even above those speeds.
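Such a crossover point exists whenever one codec compresses smaller but decompresses slower than another: the download savings shrink as bandwidth grows, while the decompression cost stays fixed. A hedged sketch of solving for that break-even bandwidth (the codec names and numbers are hypothetical, not the measurements behind the graph):

```python
# Break-even bandwidth between codec A (smaller output, slower
# decompression) and codec B (larger output, faster decompression).
# Illustrative values only.

def crossover_mbit_s(size_a: int, decomp_a: float,
                     size_b: int, decomp_b: float) -> float:
    """Bandwidth (Mbit/s) where the total times of A and B are equal.

    Solves size_a*8/B + decomp_a == size_b*8/B + decomp_b for B.
    """
    return (size_b - size_a) * 8 / ((decomp_a - decomp_b) * 1e6)

# Example: A's output is 4 MB smaller, but A takes 0.5 s longer to
# decompress than B.
b = crossover_mbit_s(6_000_000, 0.7, 10_000_000, 0.2)
print(f"{b:.0f} Mbit/s")  # prints "64 Mbit/s"; above this, B wins
```

Below the crossover, the smaller download dominates; above it, the faster decompressor does.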
The second model is the artist-made chair:
And here are the results:
The results tell a similar story. Once again, Draco does ever so slightly better on compression ratio, but Umbra decompresses over three times faster than Draco. This matters when delivering 3D content progressively over networks for use on the web.
Umbra has a unique advantage in knowing the mesh data really well; after all, we generate all the data we compress! Because of this, we can build a custom compressor that relies on properties guaranteed by our reconstruction algorithms. This is not necessarily something you can do in a general-purpose mesh compressor. In theory, we could even adjust our reconstruction algorithms to specifically generate meshes that compress better and/or decompress faster. This is the greatest advantage of controlling the whole pipeline.
There are also other concerns such as integration, memory usage, binary size, and subtle things like how different vertex attributes are quantized. All of these have to be taken into account when designing a custom compressor in our context.
In the next post, I will explain how our mesh compressor works. I will also demystify the visualization in this post's title and the one below this paragraph.