Virtual and augmented reality are coming in a big way. A new generation of VR headsets is already providing immersive ways to experience 3D like never before, so now would be a good time to jump aboard the train to a new tomorrow!
Umbra has been working on a new technology that provides an automated solution for generating and streaming platform-optimized 3D content from arbitrary 3D input. “Any 3D content on any device,” as we say. See Vinh’s great blog post for more information about how it actually works.
Although the new technology is not bound to any particular device or platform, there was no way we could skip all the coolness and opportunities of VR. After all, the high performance requirements of rendering a true stereoscopic 3D view make it a great way to showcase our new tech in action. But why not aim higher while we’re at it? Huge scenes running in real time in VR on mobile devices using Umbra? Sign me up!
We decided to use Samsung’s GearVR combined with a powerful mobile phone as our development and demo platform. It was already available to customers, and it has a robust integration with the Unity engine, which made coding and testing really easy and painless. With a single press of a button in the Unity editor we could deploy our code to any GearVR-ready smartphone (a Samsung Galaxy S6 or S6 Edge in our case) and see huge 3D worlds running on a tiny mobile device. I could never have imagined running virtual reality applications on a smartphone back when I was playing Snake on my very first Nokia phone.
Not only were iteration times fast, but Unity’s profiling tools proved to be a great asset for us even on a mobile platform. Although we couldn’t enable deep profiling of our own code, the information was just enough for us to spot the biggest performance bottlenecks on Android and compare the results with the data we had gathered from other platforms. Deploy, profile, fix, and repeat!
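Without deep profiling, Unity still lets you wrap hot paths in named profiler samples so they show up as labeled blocks in the Profiler timeline. Here is a minimal sketch of that pattern; the class and the UpdateVisibility method are hypothetical illustrations, not our actual code:

```csharp
using UnityEngine;
using UnityEngine.Profiling; // the Profiler class (plain UnityEngine.Profiler in older Unity versions)

public class StreamingUpdater : MonoBehaviour
{
    void Update()
    {
        // Named samples appear in the Unity Profiler timeline
        // even when deep profiling is unavailable on the device.
        Profiler.BeginSample("Umbra.UpdateVisibility");
        UpdateVisibility(); // hypothetical hot path we want to measure
        Profiler.EndSample();
    }

    void UpdateVisibility() { /* ... */ }
}
```

A handful of well-placed samples like this is often enough to tell which part of a frame is blowing the budget, without the overhead that deep profiling adds.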
Did I forget to mention that Unity applications are cross-platform almost out of the box? We were actually quite surprised by how platform-independent the code we had written was, even though Unity provides access to platform-specific features such as file IO. The difference between VR and regular rendering was a single toggle in Unity’s user interface that said something like “Enable VR support”. No code changes required!
The ease of development didn’t come without a cost. By using the high-level abstractions Unity provides, we limited ourselves to Unity’s engine and its rendering pipeline, with very few options for performance optimization. To meet the already high performance requirements of a smooth virtual reality experience, we had to find every possible way to work around Unity’s limitations. Even though we did quite well, there are still some cases where access to lower-level resources would have benefited us greatly.
The use of the C# scripting language also introduced new challenges: an unpredictable and uncontrollable memory management scheme that would trigger garbage collection every now and then, causing noticeable lag spikes. Besides more predictable memory management, the features we missed most were better multithreading support in the public scripting API and better control over resource transfers between the CPU and GPU.
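The standard mitigation for those GC spikes is to avoid per-frame allocations entirely: preallocate buffers once and reuse them, so the collector has nothing new to reclaim during gameplay. A sketch of the idea, with a hypothetical collector class and CollectVisible method standing in for real logic:

```csharp
using System.Collections.Generic;
using UnityEngine;

public class VisibleObjectCollector : MonoBehaviour
{
    // Allocated once and reused every frame, so Update() itself
    // produces no garbage for the collector to clean up later.
    readonly List<Renderer> visible = new List<Renderer>(1024);

    void Update()
    {
        visible.Clear();          // Clear() keeps the backing array; no allocation
        CollectVisible(visible);  // hypothetical: fills the list in place
        // ... act on the collected objects ...
    }

    void CollectVisible(List<Renderer> results) { /* ... */ }
}
```

It doesn’t make the garbage collector predictable, but keeping the steady-state allocation rate at zero at least keeps it quiet while the demo is running.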
In the end, I’m personally satisfied with GearVR and Unity as our development platforms. We were able to create really impressive virtual reality demos with our tech on a tight schedule, which wouldn’t have been possible if we had had to build our own engine. Don’t agree? Go try our new tech yourself by signing up for early access!