Thursday, 2 June 2011

Point based global illumination

It's been a while since I last posted here, so I thought I'd post an update. I'm sure people are interested in the new core, but unfortunately real life (i.e., desperately trying to finish my PhD) has got in the way, so I've had limited time over the last several months.

While progress has been slow in that area, there's been a recent need for more advanced illumination techniques, and since this is a much smaller project I have taken it on in the meantime. The aim is to implement point-cloud-based global illumination, inspired primarily by the paper Point-Based Approximate Color Bleeding from Pixar (Christensen, 2008), but also by Micro-Rendering for Scalable, Parallel Final Gathering by Ritschel et al. I'd like to emphasize that while these techniques are somewhat approximate due to the hierarchical point cloud representation, they render a micro environment buffer of the scene at each shading point, and so correctly take occlusion into account. This is in contrast to earlier, more approximate point-based algorithms, which produce similar results only in some circumstances. On the downside, microbuffer-based algorithms are inevitably slower.
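To make the microbuffer idea concrete, here's a toy sketch in Python (my own illustrative code, not what's in the pointrender branch): partition the hemisphere above a shading point into cosine-weighted direction bins, test each bin's direction against the point cloud's disks, and take the covered fraction as the ambient occlusion. The real implementation traverses the hierarchical point cloud and rasterizes disks into the microbuffer, rather than brute-force testing every disk for every direction as done here.

```python
import math

def hemisphere_dirs(n):
    """Stratified, cosine-weighted directions over the hemisphere around +z."""
    dirs = []
    for i in range(n):
        for j in range(n):
            u = (i + 0.5) / n
            v = (j + 0.5) / n
            # Cosine-weighted mapping: more bins near the normal direction.
            theta = math.asin(math.sqrt(u))
            phi = 2 * math.pi * v
            dirs.append((math.sin(theta) * math.cos(phi),
                         math.sin(theta) * math.sin(phi),
                         math.cos(theta)))
    return dirs

def disk_hit(p, d, center, normal, radius):
    """Does the ray from p along d hit the one-sided disk?"""
    denom = sum(di * ni for di, ni in zip(d, normal))
    if abs(denom) < 1e-9:
        return False          # ray parallel to disk plane
    t = sum((ci - pi) * ni for ci, pi, ni in zip(center, p, normal)) / denom
    if t <= 1e-6:
        return False          # disk behind (or at) the shading point
    hit = [pi + t * di for pi, di in zip(p, d)]
    dist2 = sum((hi - ci) ** 2 for hi, ci in zip(hit, center))
    return dist2 <= radius * radius

def ambient_occlusion(p, disks, n=16):
    """Fraction of hemisphere direction bins covered by any disk."""
    dirs = hemisphere_dirs(n)
    covered = sum(1 for d in dirs
                  if any(disk_hit(p, d, c, nrm, r) for c, nrm, r in disks))
    return covered / len(dirs)
```

Because the direction bins are cosine-weighted, the plain covered fraction already approximates the cosine-weighted occlusion integral without per-bin weights.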

I've already made some progress in the "pointrender" branch in the aqsis git repo, and I'm keen to show some test pictures. These show pure ambient occlusion lighting, but colour bleeding is a very simple extension. The geometry below was generated by an updated version of the example fractal generation procedural. The procedural was previously designed to generate Menger sponge fractals, but I've updated it to allow arbitrary subdivision motifs and to avoid the redundant faces it used to create. First, the original:



A kind of turrets motif:



and, for good measure, some displacement. Displacement does slow things down in some of these cases, but only to the extent that the displacement bounds cause more geometry to be generated and shaded.



As for performance, I'm not particularly happy with it at this stage. I can't remember the exact timings, but the above images took perhaps half an hour each to generate at 1024x1024 on a single core, without any "cheating" to speed things up. That is, no occlusion interpolation or anything like that, so one occlusion query per shading point. With a bit of judicious cheating I think we can cut this significantly, hopefully to a few minutes per frame at a shading rate of 1.

That's all for now, see you next time!

8 comments:

Jonathan Merritt said...

Hi Chris, really great work as usual! :-) It would also be nice to include brief instructions for replicating your images. Are the changes in the master branch / some other branch / a subdirectory? The example images are fantastic, but you know people like to tinker with stuff... :-)

Chris Foster said...

Hi Jonathan,

I'll probably do a proper tutorial for this stuff later; at the moment it's really edit-the-code kind of stuff (I'm still having trouble tuning things to avoid artefacts). Having said that, if you want to try it out, have a look at the "pointrender" branch. The procedure is to first bake out a point cloud using something like

bake3d("test.ptc", "", P, Nn, "_area", area(P), "_radiosity", col);

and then in the beauty pass compute occlusion with the "ray-traced" occlusion signature

occlusion(P, N, 0, "filename", "test.ptc")

Obviously that's far from a complete description. I'll try to do something better, hopefully in the not too distant future.
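For anyone wanting a bit more context, the two calls above could be wrapped in a minimal pair of shaders along these lines (shader and parameter names here are my own illustration of the workflow, not Aqsis' exact interface, and the tuning is untested):

/* Bake pass: store area and radiosity at each shading point. */
surface bake_ptc(string ptcname = "test.ptc")
{
    normal Nn = normalize(N);
    color col = Cs;  /* or the result of direct lighting */
    bake3d(ptcname, "", P, Nn, "_area", area(P), "_radiosity", col);
    Ci = col * Os;
    Oi = Os;
}

/* Beauty pass: darken the surface by the point-based occlusion. */
surface shade_ptc(string ptcname = "test.ptc")
{
    normal Nn = normalize(N);
    float occ = occlusion(P, Nn, 0, "filename", ptcname);
    Ci = (1 - occ) * Cs * Os;
    Oi = Os;
}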

Jonathan Merritt said...

Hi Chris,

Thanks for the tips. I played with your Menger procedural, and found that I did get some artefacts. The artefacts I've been getting look a bit like shadow map self-shadowing, so I tried offsetting the lookup point just slightly in the occlusion() call. That seemed to work OK, but I have no idea if it's recommended.

Looking forward to the tutorials! :-)

Chris Foster said...

Other renderers do have a "bias" option to occlusion(), but it shouldn't be required to get good results here, since the disks are currently one-sided with back-face culling enabled. I think there's a bug in there somewhere, perhaps a problem with numerical precision in the culling. Damn 32-bit floats!

Chris Foster said...

I've made some commits to fix the self-shadowing artefacts, so you shouldn't need the bias anymore. The problem did turn out to be quite tricky - a combination of floating point roundoff errors and a subtle bug.

1armedScissor said...

I'm curious to know what the bug was. I'm going through similar development and experiencing what sounds like all the same problems.

Chris Foster said...

@1armedScissor: There are many ways in this algorithm to get bugs that look like self-shadowing. However, if you use back-face culling for the disks, they probably aren't degeneracy problems as such.

The case in question was actually a subtle problem with backface culling not working quite as intended for very nearby disks.
