Particle Tracing Methods in Photorealistic Image Synthesis
Rázsó István Márk, rezso@inf.bme.hu
Department of Control Engineering and Information Technology, Technical University of Budapest, Budapest, Hungary
Particle tracing is a method to simulate the behaviour of a large number of particles. A general algorithm advances the simulation in fixed time steps, and in every step each particle is examined: particles can be born, move, collide, change their properties, and die in the environment. Image generation algorithms use a simpler particle tracing method that exploits the particle behaviour of light, where each light particle carries a small quantity of power. To simulate the light particles and their interaction with the environment, we trace each particle from event to event. Light particles do not collide with each other, so each particle can be simulated separately. Usually we assume that during the flight of a particle from one surface to another no event can change its directional properties: the medium between the surfaces is either a vacuum or affects only the colour of the particle, not its direction, so its effect can be calculated at the endpoint. Figure 1 shows the two different types of particle tracing methods in two dimensions.
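The structural difference between the two approaches can be illustrated with a minimal sketch; the Particle class and the coin-flip event below are placeholders of our own, not part of any cited algorithm.

    import random
    from dataclasses import dataclass

    @dataclass
    class Particle:
        position: float
        velocity: float
        alive: bool = True

    def simulate_time_stepped(particles, dt, steps):
        # General method: advance every particle by one time unit per
        # step; birth, collision, property changes and death would all
        # be checked here for every particle in every step.
        for _ in range(steps):
            for p in particles:
                p.position += p.velocity * dt

    def simulate_event_to_event(particles):
        # Light particle tracing: particles never interact, so each one
        # is traced independently, jumping straight to its next surface
        # event. A coin flip stands in for the absorb/survive decision.
        for p in particles:
            while p.alive:
                p.alive = random.random() < 0.5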
How to trace a particle
To simulate the particles, we first generate them in the scene and then trace them through reflections and refractions until they are absorbed or leave the scene.
We can total the power emitted by the light sources ($\Phi = \sum_l \Phi_l$). Then we generate $N$ particles, each carrying power $\Phi / N$. For each light source with power $\Phi_l$, we trace $N \Phi_l / \Phi$ rays according to the emission characteristics of the light source, as in [1].
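A small sketch of this budgeting, assuming each light source exposes only its total power (the Light class and function names are illustrative):

    from dataclasses import dataclass

    @dataclass
    class Light:
        power: float                         # total emitted power, Phi_l

    def particles_per_light(lights, n_total):
        # Split a budget of n_total particles among the light sources
        # in proportion to their power; every particle carries Phi / N.
        phi = sum(l.power for l in lights)   # total scene power, Phi
        particle_power = phi / n_total
        # Rounding may make the counts sum to slightly more or less
        # than n_total; a real implementation would correct for this.
        counts = [round(n_total * l.power / phi) for l in lights]
        return particle_power, counts

    # Example: a 60 W and a 40 W source sharing 1000 particles gives
    # particle_power == 0.1 and counts == [600, 400].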
Another method stores only the attenuation for each particle. The attenuation describes the decrease of the particle's power since its birth. When we need the photon's power, we divide the power of the light source that emitted the photon by the total number of photons emitted by that light source, and multiply this value by the attenuation of the particle, as in [2]. This method allows us to increase the number of photons emitted by each light source, but this can lead to bias in the solution.
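A sketch of this attenuation bookkeeping, with scalar power and illustrative names:

    def photon_power(source_power, emitted_count, attenuation):
        # Recover a photon's power on demand: the power of the emitting
        # light source, divided by the number of photons that source
        # emitted, times the attenuation accumulated since emission.
        return source_power / emitted_count * attenuation

    # During tracing, a non-absorbing bounce only multiplies the
    # attenuation by the surface reflectance (e.g. attenuation *= 0.8
    # for an 80% reflector); no per-photon power is stored.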
Once we have chosen a light source for the particle, together with a starting point and a direction on the light source, we trace the particle through the scene. At each intersection (where a photon hits a surface) the photon can be absorbed, reflected, or refracted. If it is not absorbed, we store the properties of the photon and generate a new direction and colour according to the physical properties of the material. Figure 2 shows a scene with different particle paths and surfaces with particle hits.
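The decision at each hit can be sketched as a Russian-roulette choice. The dictionary-based photon and material below are stand-ins of our own; the papers cited here do not prescribe this interface.

    import random

    def handle_hit(photon, material, hits):
        # Russian-roulette event at a surface hit: the photon is
        # reflected, refracted, or absorbed with probabilities taken
        # from the material.
        r = random.random()
        if r >= material["reflectance"] + material["transmittance"]:
            photon["alive"] = False          # absorbed: the path ends
            return
        hits.append(dict(photon))            # store the hit properties
        if r < material["reflectance"]:
            photon["direction"] = material["sample_reflection"](photon["direction"])
        else:
            photon["direction"] = material["sample_refraction"](photon["direction"])
        photon["colour"] = material["modulate"](photon["colour"])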
Particle properties
Different algorithms need different photon properties, but every algorithm requires storing the power of the particle. A particle can carry power at a fixed wavelength, say red, green, or blue [1], or its power can represent the full spectrum [4]. For many algorithms it is important to store the incoming direction of the particle. Some kind of position information about the surface hit is also needed: we can store the identifier of the surface and local coordinates on it, or we can store the position in the scene together with a surface normal vector [4].
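One possible record combining these properties, with field choices that are our own illustration rather than a prescribed layout:

    from dataclasses import dataclass
    from typing import Tuple

    Vec3 = Tuple[float, float, float]

    @dataclass
    class PhotonHit:
        power: Vec3       # RGB power [1]; [4] stores a full spectrum
        direction: Vec3   # incoming direction of the particle
        position: Vec3    # world-space hit point, stored together
        normal: Vec3      # with the surface normal [4]
        # alternative: a surface identifier plus local (u, v) coordinates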
How to use the generated particle data
One type of algorithm uses the particle data to generate meshes that are used during the display phase. These algorithms can use the particle distribution to estimate the irradiance at a surface point, and this irradiance can then be used in a ray tracer. The other type of algorithm uses the particle data directly during rendering.
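As an illustration of the first type, the irradiance at a point can be estimated from the density of nearby particle hits. The sketch below uses a linear scan and scalar photon power for brevity, where a real renderer would use a spatial index and coloured power; it is an assumption about how such an estimate can look, not a specific published estimator.

    import math

    def estimate_irradiance(x, hits, radius):
        # Density estimate: total photon power landing within `radius`
        # of x, divided by the area of the search disc.
        total = sum(h.power for h in hits
                    if math.dist(h.position, x) <= radius)
        return total / (math.pi * radius * radius)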
The algorithms also differ in their goals. Some of the mesh-generating algorithms can render real-time walkthroughs of an environment and are used in commercial applications, but they are less physically correct. The algorithms that use the particle data directly require far more computation time, but they generate more realistic pictures.