Thursday, 2 June 2011

Point based global illumination

It's been a while since I last posted here, so I thought I'd post an update. I'm sure people are interested in the new core, but unfortunately real life (i.e., desperately trying to finish my PhD) has got in the way, so I have had limited time over the last several months.

While progress has been slow in that area, there's been a recent need for more advanced illumination techniques, and since this is a much smaller project I have taken it on in the meantime. The aim is to implement point cloud based global illumination, inspired primarily by the Pixar paper Point-Based Approximate Color Bleeding (Christensen, 2008), but also by Micro-Rendering for Scalable, Parallel Final Gathering by Ritschel et al. I'd like to emphasize that while these are somewhat approximate techniques due to the hierarchical point cloud representation, they rely on rendering micro environment buffers ("microbuffers") of the scene at each shading point, and so correctly take occlusion information into account. This is in contrast to earlier, more approximate point based algorithms which can produce similar results in some circumstances. On the downside, microbuffer based algorithms are inevitably slower.
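To make the microbuffer idea a bit more concrete, here's a rough, brute force sketch in C++ of occlusion estimated at a single shading point. Everything here (Vec3, SurfelPoint, microbufferOcclusion) is made up for illustration and isn't the actual aqsis data structures or API; in particular, the real algorithm traverses a hierarchical point cloud rather than a flat list of points, and stores depth and colour per microbuffer pixel rather than a single coverage bit.

    #include <cmath>
    #include <vector>

    // Illustrative types only -- not the aqsis point cloud representation.
    struct Vec3 { float x, y, z; };

    static Vec3  operator-(const Vec3& a, const Vec3& b) { return {a.x-b.x, a.y-b.y, a.z-b.z}; }
    static Vec3  operator+(const Vec3& a, const Vec3& b) { return {a.x+b.x, a.y+b.y, a.z+b.z}; }
    static Vec3  operator*(const Vec3& a, float s)       { return {a.x*s, a.y*s, a.z*s}; }
    static float dot(const Vec3& a, const Vec3& b)       { return a.x*b.x + a.y*b.y + a.z*b.z; }
    static Vec3  cross(const Vec3& a, const Vec3& b)
    {
        return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
    }
    static Vec3 normalize(const Vec3& a) { return a * (1.0f/std::sqrt(dot(a,a))); }

    // One point (surfel) from the cloud: a small disk with a position and radius.
    struct SurfelPoint { Vec3 position; float radius; };

    // Estimate occlusion at shading point P with normal N: for each direction in
    // a small cosine-distributed "microbuffer", check whether any surfel disk
    // blocks it.  The covered fraction of the buffer then approximates the
    // cosine-weighted occlusion integral.
    float microbufferOcclusion(const Vec3& P, const Vec3& N,
                               const std::vector<SurfelPoint>& cloud, int res = 16)
    {
        const float kPi = 3.14159265f;

        // Tangent frame around the normal.
        Vec3 a = std::fabs(N.x) < 0.9f ? Vec3{1,0,0} : Vec3{0,1,0};
        Vec3 T = normalize(cross(a, N));
        Vec3 B = cross(N, T);

        int coveredCells = 0;
        for (int j = 0; j < res; ++j)
        for (int i = 0; i < res; ++i)
        {
            // Cosine-weighted direction for this buffer cell (Malley's method).
            float u1 = (i + 0.5f)/res, u2 = (j + 0.5f)/res;
            float r = std::sqrt(u1), phi = 2*kPi*u2;
            Vec3 dir = T*(r*std::cos(phi)) + B*(r*std::sin(phi)) + N*std::sqrt(1 - u1);

            for (const SurfelPoint& s : cloud)
            {
                Vec3 toS = s.position - P;
                float dist = std::sqrt(dot(toS, toS));
                if (dist < 1e-6f)
                    continue;                          // skip the receiver's own point
                float cosAngle  = dot(dir, toS) / dist; // cos of angle to disk centre
                float angRadius = std::atan(s.radius / dist);
                if (cosAngle > std::cos(angRadius)) { ++coveredCells; break; }
            }
        }
        return float(coveredCells) / (res*res);
    }

The key property is the one mentioned above: each buffer direction is either blocked or not by actual geometry (represented by the point disks), so occlusion is handled correctly rather than being approximated away.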

I've already made some progress in the "pointrender" branch in the aqsis git repo, and I'm keen to show some test pictures. These show pure ambient occlusion lighting, but doing colour bleeding is a very simple extension. The geometry below was generated by an updated version of the example fractal generation procedural. The procedural was previously designed to generate Menger sponge fractals, but I've updated it to allow arbitrary subdivision motifs and to avoid the redundant faces which used to be created. First, the original:



A kind of turrets motif:



and, for good measure, some displacement. Displacement does slow things down in some of these cases, but only to the extent that the displacement bounds cause more geometry to be generated and shaded.



As for performance, I'm not particularly happy with it at this stage. I can't remember the exact timings, but the above images took perhaps half an hour or so to generate at 1024x1024 on a single core, without any "cheating" to speed things up. That is, no occlusion interpolation or anything like that, so there's one occlusion query per shading point. With a bit of judicious cheating I think we can get this down significantly, hopefully to a few minutes per frame at a shading rate of 1.
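As an aside, for anyone curious how the "arbitrary subdivision motifs" mentioned earlier work, here's a minimal sketch of the idea. The names (Cube, Motif, mengerMotif, subdivide) are made up for illustration, and this isn't the actual procedural, which emits RenderMan geometry through the Ri interface rather than collecting cubes in a vector.

    #include <functional>
    #include <vector>

    // A leaf cube is just a corner position and an edge length.
    struct Cube { float x, y, z, size; };

    // A motif decides which of the 3x3x3 sub-cells survive one subdivision step.
    using Motif = std::function<bool(int i, int j, int k)>;

    // Classic Menger sponge motif: drop any cell centred on two or more axes.
    bool mengerMotif(int i, int j, int k)
    {
        int centred = (i == 1) + (j == 1) + (k == 1);
        return centred < 2;
    }

    // Recursively subdivide, keeping only the cells selected by the motif.
    void subdivide(const Cube& c, const Motif& motif, int depth, std::vector<Cube>& out)
    {
        if (depth == 0) { out.push_back(c); return; }
        float s = c.size / 3;
        for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
        for (int k = 0; k < 3; ++k)
            if (motif(i, j, k))
                subdivide({c.x + i*s, c.y + j*s, c.z + k*s, s}, motif, depth - 1, out);
    }

Swapping mengerMotif for a different predicate is all it takes to get variations like the turret pattern above.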

That's all for now, see you next time!

Wednesday, 8 December 2010

Depth of field and motion blur in aqsis-2.0

In the last few weeks I've been working on getting motion blur and depth of field rendering working in the new aqsis-2.0 rendering core. It more or less works now, so I thought I'd post an update on the progress along with another screencast:



The method I've chosen to use so far is called "interleaved sampling" as nicely described in "Data-Parallel Rasterization of Micropolygons with Defocus and Motion Blur" by Fatahalian et al. Interleaved sampling is quite easy to understand at a basic level: for the motion blur case, it's very much like rendering a bunch of images - each at a different time during the shutter interval - and then simply averaging them all. (The details of the implementation are a bit more sophisticated for efficiency, but that's the general idea.)
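To spell out that "bunch of images" picture, here's a deliberately naive C++ sketch of the estimate that interleaved sampling converges to. renderImageAtTime is just a stand-in for "rasterize the whole scene with every primitive moved to time t" (it isn't a real aqsis function), and the actual implementation doesn't render N full images; it tags each sample within a pixel with one of the N times so that the scene is only rasterized once.

    #include <cstddef>
    #include <vector>

    using Image = std::vector<float>;  // one value per pixel, for brevity

    // Stand-in stub for rendering the scene with all geometry at time t.
    Image renderImageAtTime(float t, int width, int height)
    {
        return Image(width*height, t);  // placeholder
    }

    // Naive motion blur: N stratified snapshots over the shutter interval,
    // averaged together.  Interleaved sampling computes the same average far
    // more cheaply, but this is the quantity it estimates.
    Image motionBlur(int width, int height, float shutterOpen, float shutterClose, int N)
    {
        Image accum(width*height, 0.0f);
        for (int i = 0; i < N; ++i)
        {
            // Stratum centre; jitter within the stratum would be added here.
            float t = shutterOpen + (shutterClose - shutterOpen) * (i + 0.5f) / N;
            Image snap = renderImageAtTime(t, width, height);
            for (std::size_t p = 0; p < accum.size(); ++p)
                accum[p] += snap[p] / N;
        }
        return accum;
    }

With a small N, the strobing discussed below is exactly what you'd expect from averaging so few snapshots.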

Obviously if you do this with a small number of time snapshots you're bound to get strobing artefacts, so there's some sample jittering which goes on to try to reduce these. The number of strobed images which are rendered is under the control of the user, so they can easily turn it up to improve the quality at the expense of render time. As I say in the screencast, my impression is that the method is quite good for fast, low quality sampling, but the other major method (known in the paper linked above as "interval sampling") may be faster for high quality usage, so I plan to implement it as well for comparison.

An interesting point is that the interval method makes a compromise to reduce the runtime when depth of field and motion blur are combined. This compromise correlates the time and lens positions, and results in exactly the same kind of strobing which is present in the interleaved method.

One original aspect of my implementation is an investigation of good ways to generate reasonably high quality sample distributions for interleaved sampling. At the start of each frame we choose a set T of N time values, and each sample in the image must have a time taken from this set. For efficient bounding during the sampling stage, we arrange the samples into tiles of size N so that each time in the set T is represented exactly once in each tile. So far so good (this is all pretty standard), but the question is: how do we arrange the N samples within the tile so that the resulting sampling noise is small? If they are arranged purely randomly, the noise will be large, while a regular pattern will result in aliasing as usual.
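As a concrete baseline, here's the standard part as a sketch (the names are illustrative, not the aqsis-2.0 API): choose N stratified, jittered times covering the shutter interval, then fill an nx x ny tile of sample strata so that each time index appears exactly once. A random permutation satisfies the one-of-each constraint but, as just noted, leaves the noise high.

    #include <algorithm>
    #include <random>
    #include <vector>

    struct InterleavedTile
    {
        std::vector<float> times;      // the set T, size N = nx*ny
        std::vector<int>   timeIndex;  // timeIndex[y*nx + x] = index into times
    };

    InterleavedTile makeTile(int nx, int ny, float shutterOpen, float shutterClose,
                             std::mt19937& rng)
    {
        const int N = nx*ny;
        InterleavedTile tile;
        tile.times.resize(N);
        tile.timeIndex.resize(N);

        // Stratified, jittered times over the shutter interval.
        std::uniform_real_distribution<float> jitter(0.0f, 1.0f);
        for (int i = 0; i < N; ++i)
            tile.times[i] = shutterOpen
                + (shutterClose - shutterOpen) * (i + jitter(rng)) / N;

        // Baseline arrangement: a purely random permutation of the time indices.
        for (int i = 0; i < N; ++i)
            tile.timeIndex[i] = i;
        std::shuffle(tile.timeIndex.begin(), tile.timeIndex.end(), rng);
        return tile;
    }

The next paragraph is about replacing that final shuffle with something smarter.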

The solution I used is to create a set of 81 (= 3^4) prototype tiles with "coloured corners". The "colours" here really correspond to a particular sample pattern in each corner, more or less as described in "An Alternative for Wang Tiles: Colored Edges versus Colored Corners" by A. Lagae and P. Dutre. The tiles are laid out in the image plane using a spatial hash function, so that adjacent tiles have sample patterns which match up on the corners and edges. The samples within each tile are laid out using an optimization heuristic which tries to keep samples with similar time values far away from each other; this improves the sample stratification within each pixel filter region and reduces the noise. Lagae and Dutre use a Poisson disk distribution within each tile to position the samples, but it's not obvious how to do something similar in our case because we're restricted to a fixed set of N times, with associated lens positions in the depth of field case. An additional constraint is that the samples have to be laid out in regular grid strata in screen space for efficient bounding during rasterization.
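Here's a sketch of two of those ingredients: picking one of the 81 corner-coloured prototype tiles with a spatial hash, and a simple swap-based layout heuristic. Both the hash mix and the cost function below are made up for illustration; they're not the ones in the aqsis code, just something with the right shape.

    #include <cmath>
    #include <cstdint>
    #include <random>
    #include <vector>

    // Deterministically assign one of three "colours" to each lattice corner.
    static int cornerColour(int x, int y)
    {
        uint32_t h = uint32_t(x)*0x8da6b343u ^ uint32_t(y)*0xd8163841u;
        h ^= h >> 13;  h *= 0x9e3779b1u;  h ^= h >> 16;
        return int(h % 3);
    }

    // A prototype tile is identified by its four corner colours, 3^4 = 81 in all.
    // Adjacent tiles share corners, hence colours, so their precomputed sample
    // patterns can be made to match along shared edges and corners.
    int prototypeTileIndex(int tileX, int tileY)
    {
        int c00 = cornerColour(tileX,     tileY);
        int c10 = cornerColour(tileX + 1, tileY);
        int c01 = cornerColour(tileX,     tileY + 1);
        int c11 = cornerColour(tileX + 1, tileY + 1);
        return ((c00*3 + c10)*3 + c01)*3 + c11;   // in [0, 80]
    }

    // Improve a tile layout by random swaps: pairs of samples with similar time
    // indices should end up spatially far apart.  Lower cost is better.
    void improveTileLayout(std::vector<int>& timeIndex, int nx, int ny,
                           std::mt19937& rng, int iterations = 2000)
    {
        const int N = nx*ny;
        auto cost = [&]() {
            double c = 0;
            for (int a = 0; a < N; ++a)
            for (int b = a + 1; b < N; ++b)
            {
                double dt = std::abs(double(timeIndex[a] - timeIndex[b]));
                double dx = a % nx - b % nx;
                double dy = a / nx - b / nx;
                c += 1.0 / ((1.0 + dt) * (dx*dx + dy*dy));
            }
            return c;
        };

        std::uniform_int_distribution<int> pick(0, N - 1);
        double best = cost();
        for (int it = 0; it < iterations; ++it)
        {
            int a = pick(rng), b = pick(rng);
            if (a == b) continue;
            std::swap(timeIndex[a], timeIndex[b]);
            double c = cost();
            if (c < best) best = c;
            else          std::swap(timeIndex[a], timeIndex[b]);   // undo
        }
    }

Because prototypeTileIndex depends only on the hashed corner colours, any two tiles that touch automatically agree on the colour (and hence the sample pattern) at their shared corners.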

Monday, 1 November 2010

Interactive rendering with the aqsis-2.0 core

This blog has been pretty quiet for a while, but aqsis development has been coming along behind the scenes. During the aqsis-1.6 development last year I focussed a lot on making aqsis faster. After working on this for a while it became obvious that some major changes were needed for the code to be really fast. In particular, the aqsis sampler code is geared toward dealing with a single micropolygon at a time, but it seems better for the unit of allocation and sampling to be the micropolygon grid as a whole (there's a rough sketch of what I mean just after the goals list below). This was just one of several far-reaching code changes and cleanups which seemed like a good idea, so we decided that the time was right for a rewrite of the renderer core. Broadly speaking, the goals are the following:
  • Speed. Simple operations should be fast, while complex operations should be possible. The presence of advanced features shouldn't cause undue slowdowns when they are disabled.
  • Quality. Speed is good, but not at the cost of quality. Any speed/quality trade-offs should be under user control, and default settings should avoid damaging quality in typical use cases.
  • Simplicity. This is about the code - the old code has a lot of accumulated wisdom, but in many places it's complex and hard to follow. Hopefully hindsight will lead us toward a simpler implementation.
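As promised above, here's a very rough illustration of the grid-as-unit point. These are hypothetical structs for explanation only, not the actual aqsis-2.0 classes.

    #include <vector>

    struct Vec3  { float x, y, z; };
    struct Bound { Vec3 min, max; };

    // Old style: every micropolygon is an individually allocated, individually
    // bounded and sampled object.
    struct MicroPolygon
    {
        Vec3  v[4];    // quad vertices
        Bound bound;   // per-micropolygon bound
    };

    // New style: the whole shading grid is the unit of allocation and sampling.
    // Vertices are stored once in flat arrays, the micropolygons are implicit in
    // the nu x nv layout, and a single bound covers the grid so whole grids can
    // be bucketed, culled or sampled at once.
    struct MicroPolyGrid
    {
        int nu, nv;            // grid resolution
        std::vector<Vec3> P;   // (nu+1)*(nv+1) shaded vertex positions
        std::vector<Vec3> Ci;  // shaded colours, same layout
        Bound bound;           // one bound for the whole grid
    };

The idea is simply that bounding and sampling can then work per grid rather than per micropolygon.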
Fast forward the better part of a year: development has been steady, and we've finally got something we think is worth showing. With Leon heading off to the Blender conference, I thought an interactive demo might even be doable, and as a result I'm proud to present the following screencast.

[PG] Please note: this embedded player is reduced in size; click the YouTube logo to view the video at full size on YouTube.



There are several important features that I've yet to implement, including such basic things as transparency, but as the TODO file in the git repository indicates, I'm getting there. The next item on the list is to fix depth of field and motion blur sampling, which were temporarily disabled while implementing bucket rendering.

Edit: I realized I should have acknowledged Sascha Fricke for his blender-2.5 to RenderMan exporter script, which was used by Paul Gregory to export the last example from Blender. Thanks guys!

Welcome

This blog is a common area where we can share ideas, thoughts and other verbose information with others, while giving a little more insight into the inner workings (and minds) of our team.

Though there might be overlap in places, the content here differs from that of the main Aqsis website and is intended to be complementary, while still acting as a useful resource for those interested in the world of 3D graphics and rendering.

So... sit back, relax, and have fun... we will!!! ;-)