Dual-Cone View Culling for Virtual Reality Applications

~ 6 Minute Read.

I once had this idea that plagued me for quite a while until I implemented it in my bachelor's thesis: isn't the human view volume more cone-shaped than frustum-shaped? Also: lenses of virtual reality headsets are round as well, and so should be the portion of the image you can see on the screen! [1]

With that presumption, wouldn't it be more accurate and maybe even more efficient to use cones instead of frustum volumes for culling in virtual reality rendering?

I took it upon myself to find out:

Intersection Math

The first challenge was to figure out the intersection math for cones and the usual bounding primitives: spheres, axis-aligned bounding boxes and triangles.

I wrote a small intersection visualization tool using Magnum to be able to verify my unit tests and nicely visualize each case:

Most of my math code is contributed to Magnum, with the exception of the cone-triangle intersection: I didn't actually get that to fully work and did not end up needing it, since implementing primitive-based culling on the GPU was pushed out of the scope of the thesis. You can tell from the image that there are still a couple of false positives (triangles incorrectly marked as intersecting the cone). [2]

At this point I want to thank the folks at realtimerendering.com for their table of object intersections. That is where I found the papers by Geometric Tools which I based my methods on.
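
To give an idea of what these tests look like, here is roughly what the sphere-versus-cone test from those papers boils down to. This is a minimal standalone sketch with plain structs, not the exact code that was contributed to Magnum:

    // Sphere/cone overlap following the Geometric Tools approach; plain
    // structs instead of Magnum types to keep the sketch self-contained.
    #include <cmath>

    struct Vec3 { float x, y, z; };

    static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
    static Vec3 sub(Vec3 a, Vec3 b)  { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    static float length(Vec3 a)      { return std::sqrt(dot(a, a)); }

    struct Cone   { Vec3 apex; Vec3 axis /* unit length */; float halfAngle /* rad */; };
    struct Sphere { Vec3 center; float radius; };

    bool sphereIntersectsCone(const Sphere& s, const Cone& c) {
        const float sinA = std::sin(c.halfAngle);
        const float cosA = std::cos(c.halfAngle);

        /* Move the apex back along the axis by radius/sin(angle): the sphere
           center lies inside this expanded cone iff the sphere touches the
           infinite cone volume */
        const float offset = s.radius/sinA;
        const Vec3 expandedApex{c.apex.x - c.axis.x*offset,
                                c.apex.y - c.axis.y*offset,
                                c.apex.z - c.axis.z*offset};
        const Vec3 d = sub(s.center, expandedApex);
        if(dot(c.axis, d) < length(d)*cosA) return false;

        /* In the region behind the original apex the plain distance to the
           apex decides instead */
        const Vec3 e = sub(s.center, c.apex);
        if(-dot(c.axis, e) >= length(e)*sinA) return length(e) <= s.radius;

        return true;
    }

The nice part is that, apart from the sine/cosine pair (which can be precomputed once per cone), the test only needs dot products and a square root.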

The exception is cone vs. AABB, which is entirely missing (am I the first who needed that?). For that I came up with a method of my own, which I will write a separate blog post on tomorrow.
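
Until that post is out, here is a cheap and very conservative stand-in (explicitly not the method from the thesis): wrap the box in its bounding sphere and reuse sphereIntersectsCone() from the sketch above. It never misses an intersecting box, but reports plenty of false positives for long, thin boxes.

    /* Not the thesis method, just a conservative placeholder: test the
       AABB's bounding sphere against the cone, reusing Vec3, Cone, Sphere
       and sphereIntersectsCone() from the sketch above */
    struct Aabb { Vec3 min; Vec3 max; };

    bool aabbMightIntersectCone(const Aabb& b, const Cone& c) {
        const Vec3 center{(b.min.x + b.max.x)*0.5f,
                          (b.min.y + b.max.y)*0.5f,
                          (b.min.z + b.max.z)*0.5f};
        return sphereIntersectsCone({center, length(sub(b.max, center))}, c);
    }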

Implementation in Unreal Engine

At Vhite Rabbit we have been using Unreal Engine 4 ever since we switched away from an in-house game engine (which was sadly too slow to develop with). Hence, to be able to use the new culling approach for our games, should it work out, I wanted to give it a real-world proof by implementing it in UE4 and using it with a scene from the Infinity Blade: Grass Lands asset pack they provide for free.

And I'm glad I did! The results I had from benchmarking the individual intersection methods were originally quite a bit more optimistic than they should have been, I found several bugs in them where testing hadn't covered edge cases sufficiently, and the overall conclusion of the thesis changed by 180°.

If you accepted the Epic Games EULA and have access to the main Unreal Engine repository, you should be able to find my implementation of it here.

Results

In the end I figured out that I had been comparing against rather naive frustum intersection implementations during benchmarking, and that the cone math was not able to hold up against more sophisticated SIMD implementations. On the other hand, my code was not optimized that far either.
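
For context, "naive" here means something along the lines of the classic test below, checking a bounding sphere against all six frustum planes one after the other (reusing the small Vec3/Sphere helpers from the earlier sketch). Optimized engine code instead tests several objects against the planes at once with SIMD, and that is what the cone test ultimately had to compete with:

    /* Classic scalar sphere-vs-frustum test: reject as soon as the sphere
       is completely outside any of the six planes. Plane normals point
       into the frustum, so dot(normal, p) + d >= 0 means "inside". */
    struct Plane { Vec3 normal; float d; };

    bool sphereIntersectsFrustum(const Sphere& s, const Plane (&planes)[6]) {
        for(const Plane& p: planes)
            if(dot(p.normal, s.center) + p.d < -s.radius)
                return false;  /* completely outside this plane */
        return true;           /* inside or intersecting (conservative) */
    }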

The CPU culling I implemented was slower by a large factor. Correctly optimized, it should be possible to bring that down, though. More interesting and shocking to me was that, contrary to all expectations, the accuracy was lower than with classical frustum culling.

(Camera flythrough, culled primitives over time and frustum cull duration over time with the Oculus Rift.)

I was testing mostly on the Oculus Rift, where you actually see not only a circular cutout of the rendered image, but the entire distorted image through the lenses. That is likely also the reason why they don't use a stencil mesh [1] as is done with the HTC Vive. Even on most other headsets (e.g. Daydream View) all of the rendered image is visible through the lenses.

The cone therefore fully contained the frustum rather than the other way around and the optimization was pretty much useless…
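
To see why, the half-angle of the smallest cone (around the view axis) that still contains a symmetric frustum is set by the frustum's corner rays. A quick back-of-the-envelope sketch; the 94° field of view is only a ballpark figure for the Rift, not an official spec:

    #include <cmath>
    #include <cstdio>

    /* Half-angle of the smallest view-axis-aligned cone containing a
       symmetric frustum: the cone has to reach the frustum's corner rays. */
    float boundingConeHalfAngleDeg(float fovXDeg, float fovYDeg) {
        const float pi = 3.14159265f;
        const float tx = std::tan(fovXDeg*0.5f*pi/180.0f);
        const float ty = std::tan(fovYDeg*0.5f*pi/180.0f);
        return std::atan(std::sqrt(tx*tx + ty*ty))*180.0f/pi;
    }

    int main() {
        /* roughly 94 x 94 degrees of rendered FOV, ballpark only */
        std::printf("%.1f\n", boundingConeHalfAngleDeg(94.0f, 94.0f));
        /* prints ~56.6: a ~113 degree cone wrapped around a 94 degree
           frustum, so the cone can never cull anything the frustum keeps */
    }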

I didn't give up there, though. For the presentation of my thesis I wanted to check whether this works at least for the Vive, and yes, it worked much better!

(Same camera flythrough, culled primitives over time with the HTC Vive.)

While still not better in all cases (probably because the view volume is not a fully symmetrical cone with a circular base), at least I didn't feel like my idea was totally failing its reality check.

Conclusion

My original long-shot hope was to find a view culling solution for VR that is not only more efficient, but also comparably simple to implement as classical view frustum culling. That aside, I still believe that it would be possible to make this more efficient, maybe by using asymmetrical cones and SIMD, or by leveraging NDCs for primitive culling… either way, this is the point where I cut off the idea.

I hope you enjoyed this post and got something out of this! I for my part learned to check my preconditions more thoroughly before building out implementations of ideas on top of them.

[1] The idea was originally inspired by the "Advanced VR Rendering" GDC talk by Alex Vlachos.
[2] I fixed a couple of these since (the image is not fully up to date), but it is still not fully correct.

Written in 90 minutes, edited in 15 minutes.