Global illumination in real time is an important area of research today. Enlighten by Geomerics is the state of the art in this domain. I will describe a technique that is less efficient, but a good starting point, and that supports dynamic lights and objects with multiple bounces!
Like Enlighten, we want to dynamically update a lightmap texture. The first step is to generate unique UV coordinates for each face of your scene, a.k.a. a UV unwrap.
Many different techniques can be used here. We chose to implement the planar projection described here: the idea is to project each face along the dominant axis of its normal. After that, we pack each small texture into a big one, following this article. That works OK, but it would be better if each face kept its neighbors in texture space.
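A minimal Python sketch of that projection step (all helper names are mine, not from the original): pick the dominant axis of the face normal and drop that coordinate to get a 2D chart.

```python
# Planar projection onto the dominant axis of the normal.
# Hypothetical helper names, shown for one face at a time.

def dominant_axis(normal):
    """Return the index (0=X, 1=Y, 2=Z) of the normal's largest component."""
    return max(range(3), key=lambda i: abs(normal[i]))

def project_face(vertices, normal):
    """Drop the dominant coordinate to get 2D UVs for one face."""
    axis = dominant_axis(normal)
    kept = [i for i in range(3) if i != axis]
    return [(v[kept[0]], v[kept[1]]) for v in vertices]

# A face lying mostly in the XZ plane (normal close to +Y) keeps X and Z:
uvs = project_face([(0.0, 1.0, 0.0), (1.0, 1.0, 0.2), (0.0, 1.0, 1.0)],
                   normal=(0.1, 0.9, 0.1))
```

The resulting per-face charts are what the packing step then arranges into the big lightmap.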
Of course, you can simply use your favorite 3D modeler to do this for you, maybe even better. For example, in Blender: Edit Mode -> press "U" -> Lightmap Pack.
A better method might be to use Ptex from Walt Disney Animation Studios, but I haven't tried it yet.
At this point, you should be able to generate a color for each face of your scene by coloring the lightmap (don't actually do this; it is just to make the idea clear):
You can now render your scene directly in texture space, and we want to capture all of the scene's data in this space.
To do that, we use multiple render targets to write three small textures (e.g. 128²) simultaneously:
| Target | Format  | R          | G          | B          | A          |
|--------|---------|------------|------------|------------|------------|
| 1      | RGBA32F | Position X | Position Y | Position Z | Always 1.0 |
| 2      | RGB32F  | Normal X   | Normal Y   | Normal Z   |            |
| 3      | RGB     | Albedo R   | Albedo G   | Albedo B   |            |
Before drawing into these textures, you must clear them with alpha set to 0. The alpha channel of the first target is written as 1 simply to mean "yes, this is a real texel where we have stored some data"; this will be useful later, because your textures will contain some empty space (if not, you are really lucky!).

Vertex shader:
You can run into trouble when rasterizing faces whose area is smaller than one texel of your textures. This is a well-known problem; see NVIDIA's article on conservative rasterization. A simple geometry shader will help you fix it.
The output should be similar to this:
Note that if you have a dynamic object, you only need to update the three textures in the region covered by that object. This is really fast.
We generate one more texture, also in texture space, containing the direct illumination (Phong shading plus shadow mapping works well). You just need to draw a fullscreen quad and use the position and normal textures to compute the lighting. The direct illumination must be updated every time the lights or objects change, but again, it is really cheap.
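As an illustration of what that fullscreen pass computes per texel, here is a Python sketch with a single point light and Lambert diffuse only (specular and shadow mapping are omitted for brevity; all names are assumptions, not the original shader):

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def direct_lighting(world_pos, normal, albedo, light_pos, light_intensity):
    """Diffuse lighting for one lightmap texel, fetched from the
    position/normal/albedo buffers of the previous step."""
    to_light = tuple(l - p for l, p in zip(light_pos, world_pos))
    dist2 = sum(c * c for c in to_light)
    wi = normalize(to_light)
    ndotl = max(0.0, sum(n * w for n, w in zip(normal, wi)))
    factor = light_intensity * ndotl / dist2        # inverse-square falloff
    return tuple(a * factor for a in albedo)

# Texel facing the light, one unit away:
rgb = direct_lighting((0, 0, 0), (0, 0, 1), (1.0, 0.5, 0.2), (0, 0, 1), 1.0)
```

In the real pass, every texel whose alpha mask is 1 gets this treatment in parallel.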
You will get something like this:
You now have all the information needed to compute the radiance.
Following "Dynamic Ambient Occlusion and Indirect Lighting" by Michael Bunnell (GPU Gems 2), I treat each pixel of these buffers as a surfel.
We have this relation between receiver and emitter elements:
We can now derive a formula for the radiance transfer between two surfels and use a compute shader to generate our final indirect-lighting texture:
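As a sketch of that transfer, here is what a single gather step computes, written in Python for readability. I use the common point-to-disc form-factor approximation; the exact formula in Bunnell's chapter differs slightly, and all helper names are mine:

```python
import math

def form_factor(r_pos, r_normal, e_pos, e_normal, e_area):
    """Approximate form factor from an emitter disc to a receiver point."""
    d = tuple(e - r for e, r in zip(e_pos, r_pos))
    dist2 = sum(c * c for c in d)
    w = tuple(c / math.sqrt(dist2) for c in d)
    cos_r = max(0.0, sum(n * c for n, c in zip(r_normal, w)))   # receiver facing emitter
    cos_e = max(0.0, -sum(n * c for n, c in zip(e_normal, w)))  # emitter facing receiver
    return e_area * cos_r * cos_e / (math.pi * dist2 + e_area)

def gather(receiver, emitters):
    """What one compute-shader thread does: sum radiance from every emitter."""
    r_pos, r_normal, r_albedo = receiver
    total = [0.0, 0.0, 0.0]
    for e_pos, e_normal, e_area, e_radiance in emitters:
        f = form_factor(r_pos, r_normal, e_pos, e_normal, e_area)
        for i in range(3):
            total[i] += e_radiance[i] * f
    return tuple(t * a for t, a in zip(total, r_albedo))
```

The emitter radiance comes from the direct-lighting texture on the first pass, which is what makes the result the first indirect bounce.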
The result is the first bounce of indirect lighting. You can repeat the process as many times as you want: replace the direct-lighting sampler with your latest GI buffer to compute the next bounces.
Of course, this pass is really slow. You can spread it over more than one frame by sampling only a few emitters at a time. I get two bounces in ~200 ms with a 128² texture on an NVIDIA 650M.
To get a smooth rendering, I interpolate over time between the newly computed texture and the previous one.
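That interpolation can be as simple as an exponential moving average per texel. A minimal Python sketch (the blend factor here is my choice, not a value from the original):

```python
# Each frame, the displayed GI texel moves toward the freshly computed one
# by a fixed factor, hiding the multi-frame update of the indirect pass.

def blend(previous, target, alpha=0.1):
    """Lerp one texel; applied to the whole GI texture every frame."""
    return tuple(p + (t - p) * alpha for p, t in zip(previous, target))

texel = (0.0, 0.0, 0.0)
for _ in range(3):                     # three frames converging toward white
    texel = blend(texel, (1.0, 1.0, 1.0), alpha=0.5)
```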
Here are some of my results, with a comparison between rendering with and without global illumination:
To avoid the O(n²) complexity, you could build a quadtree over the geometry and sample only the surfels close to the receiver element. Of course, the tree must be updated every time an object moves, and I don't know whether that is feasible in real time. This seems to be Geomerics' solution: they precompute a structure that gives the visibility of each element and its distance to the others, but all the geometry must be static.
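To illustrate the culling idea, here is a Python sketch using a uniform grid instead of a quadtree (a simpler structure with the same goal of skipping distant emitters; all names are mine):

```python
from collections import defaultdict

def build_grid(surfels, cell=1.0):
    """Bucket surfels by cell; surfels are (position, payload) tuples."""
    grid = defaultdict(list)
    for s in surfels:
        key = tuple(int(c // cell) for c in s[0])
        grid[key].append(s)
    return grid

def nearby(grid, pos, cell=1.0, radius=1):
    """Surfels in the cells around pos: the only candidates for the gather."""
    cx, cy, cz = (int(c // cell) for c in pos)
    out = []
    for dx in range(-radius, radius + 1):
        for dy in range(-radius, radius + 1):
            for dz in range(-radius, radius + 1):
                out.extend(grid.get((cx + dx, cy + dy, cz + dz), ()))
    return out
```

Rebuilding a grid is cheap enough to do per frame for dynamic objects, which is one reason it may be easier than keeping a quadtree up to date.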
The application was made with Muhammad Daoud for a school project at the University of Luminy.
The great model of the scene was made by Sylvain Bernard, a.k.a. Mestaty. Thanks, mate (sorry I didn't take the final version..)!