Manu Raivio

RealityCapture to Browser with Umbra SceneStream

January 31, 2020

Photogrammetry models allow us to capture the world around us in exquisite detail, but that detail often comes with a drawback: large models don't perform well in real-time rendering. Whether you use them in a game engine or display them on a website, detail is lost when optimizing models for performance.

In the following I will show you how to turn your high-resolution photogrammetry model into an Umbrafied scene and get it running in the browser at 60 FPS or more.

These Umbrafied scenes render on virtually any device and within the context of any application.

(Use mouse buttons to navigate and scroll to zoom.)

A few words about photogrammetry 

Photogrammetry enables our customers to create photorealistic 3D models based on the real world, with use cases ranging from city planning and online journalism all the way to machine learning.

If you aim for maximum detail and accuracy, your photogrammetry datasets can become enormous, growing from tens of gigabytes to several terabytes in size. This, in turn, makes them hard to render in real time, even on high-performance desktop hardware. The usual solution is to reduce the level of detail to a point where the models are no longer representative of the original capture.

Our goal at Umbra is to enhance the performance and availability of these massive 3D datasets and models so that you can use them at their original capture resolution, regardless of application. To that end, Umbra SceneStream accepts input of unlimited size and automatically generates a progressively streamable output format for local real-time rendering.

Capture Strategy

Photographs

High-quality photos are essential for photogrammetry. A low ISO value reduces noise and granularity in the photographs, which in turn reduces uncertainty in feature detection for the photogrammetry software. A large depth of field ensures that a high number of features remain detectable by keeping as much of the scene in focus as possible. Finally, a high number of quality pixels provides more accurate data for the photogrammetry software to process.

High-resolution photogrammetry to browser

In the example for this tutorial, we use about 200 photos for a segment of the bridge pictured above.

RealityCapture

Our choice of photogrammetry software for this tutorial is RealityCapture. Compared to the competition, it gives high-poly results quickly without breaking the bank. Other similar tools include Meshroom, Agisoft Metashape, Autodesk ReCap, and Pix4D, to name a few. Models from any of these can be used with Umbra SceneStream.

While preparing your model, there is no need to worry about triangle or texture budgets. You can use 4K textures (in fact, this is preferable for the next step) and as many triangles as you need to create perfect-looking content. In the end, the Umbra pipeline makes sure the scene still loads fast, runs at a high frame rate, and stays within the memory budget of any target device.

Therefore, we adjust RealityCapture's parameters to output the maximum amount of detail possible and let Umbra take care of optimizing the mesh and textures.

Alignment

Our photos are 26 Mpix each, and we tune up the alignment settings to make sure we use all the available features. Different feature-detection settings involve a tradeoff between having less bleed between objects and losing vertical accuracy, i.e., missing geometry.

Reconstruction

Normal detail is often sufficient and takes less time to reconstruct, but since we are aiming for the maximum amount of detail, we choose High Detail. Greater accuracy takes longer to process but also preserves the finest possible detail in the resulting model.

Texturing

Adaptive or Fixed texel size works best, as we do not need to mind the maximum texture count; the more, the better.

Export

We export the resulting mesh and textures as .obj files into a folder. The Save mesh by parts option helps manage file size, as .obj files can grow to several gigabytes. This time, the exported model is approximately 12 GB in split .obj files, including textures.

Umbrafication

For this step, you need an account at umbra.io. You can get your early access here to evaluate and test it for free.

Content is most easily uploaded to Umbra SceneStream using the command-line interface (CLI). Once you've been granted access to the two-week trial, you can download the CLI tool as part of our SDK package, available on the download page.

For our scene, we use the following parameter string to upload and Umbrafy the data:

umbra.exe umbrafy --api-key <your key from umbra.io>
--input-obj "<folder where the exported model .obj files are stored>"
--scene-name "Bridge" --feature-size 0.001291 --content-profile scanned
--import-obj-zIsUp 1 --import-obj-right-handed 1
Feature-size is the most crucial parameter: it defines the output resolution, i.e., the smallest detail the Umbrafication process aims to preserve in the resulting output scene. When processing photogrammetry models, anything smaller than the texel size of the input model is unnecessary. However, it is important to start with a large value for quick previews and work iteratively towards that figure. See this article before uploading content.

RealityCapture reports a texel size of 0.001291 for this model, so we use the same value for the Umbra feature-size parameter.
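The texel size itself just expresses how much surface each texel covers. As a rough illustration of where such a figure comes from (the 28 m² of surface and the single 4K texture below are invented numbers chosen to land near the value above, not measurements from our scene):

```javascript
// Illustrative arithmetic only: approximate texel size in meters for a
// model whose textures cover a known surface area. RealityCapture
// reports this value directly; this just shows what it means.
function texelSize(surfaceAreaM2, textureCount, resolution) {
  // Total number of texels across all textures
  const texels = textureCount * resolution * resolution;
  // Assuming roughly uniform texel density, each texel covers an equal
  // patch of surface; its edge length is the square root of that area.
  return Math.sqrt(surfaceAreaM2 / texels);
}

// e.g. ~28 m^2 of surface on a single 4096x4096 texture
console.log(texelSize(28, 1, 4096).toFixed(6)); // ≈ 0.001292
```

A model with denser texturing (more texels per square meter) would justify a correspondingly smaller feature-size.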

After you launch the process and the input data has uploaded, you can monitor the state of the processing from the umbra.io portal. Once processed, the content is immediately available for streaming to the web, mobile apps, Unity, or through the C++ SDK.

Browser integration

For a quick browser integration, you can generate a viewer link in your umbra.io account. The simplest way to view a scene is to open that link in the address bar of a browser, which loads it in our WebGL player.

In addition to the URL scene link, a scene can be embedded using the automatically generated embed code from our portal; the result can be seen below (and at the top of this article).

There are other options as well:

- For the most customization when building your app or web app with your own renderer, you can use our native JS API, available on GitHub: https://github.com/UmbraSoftware/umbrajs

- For those already using three.js, we also have an integration available: https://github.com/UmbraSoftware/umbrajs-three/

Using the three.js integration is easy:

- Simply add the umbrajs library to your HTML code with

<script src="umbrajs-three.js"></script>

- Umbrafied scenes can then be initialised with a few more lines of code using the scene link:

let Umbra = await UmbraRuntime.initWithThreeJS(renderer)

let umbraScene = Umbra.createScene(
  'key=pubk-6f592e67-5aec-479a-ad9e-46ad4e4fe699&project=745415655&model=745415871',
)
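The string passed to createScene is simply a query string combining the public key, project ID, and model ID shown in your umbra.io portal. A small helper (illustrative only, not part of the umbrajs API) makes that structure explicit:

```javascript
// Illustrative helper, not part of the umbrajs API: assembles the
// scene-link query string that Umbra.createScene() expects from the
// three identifiers shown in your umbra.io portal.
function sceneLink({ key, project, model }) {
  return `key=${key}&project=${project}&model=${model}`;
}

const link = sceneLink({
  key: 'pubk-6f592e67-5aec-479a-ad9e-46ad4e4fe699',
  project: '745415655',
  model: '745415871',
});
console.log(link);
```

Keeping the IDs in one place like this makes it easy to switch between scenes without hand-editing the query string.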

This player using the sneaker scan is a simple example, while this one is a more elaborate setup showcasing PBR material support with the same model.

We have further code examples here to help you get started.

Early Access

To begin Umbrafying your RealityCapture models, get your early access here.