For mass scenes, computational complexity is an important concern, and time complexity is the most relevant part of it. The time necessary to create a finished image can be divided into three distinct parts: the time to prepare and design the scene, the time the parser needs to read the source and build the internal structures, and the time the rendering engine needs to render the image.
The time needed for scene composition has been reduced significantly: the scene designer no longer has to specify by hand where each of the generated objects should be placed. This tedious, time-consuming work is left to the computer.
All of the proposed commands are executed at parse time. Although the parse time is much smaller than the other two, shortening it is still worthwhile: during scene preparation the designer previews the image (at low resolution) many times, and a long parse time quickly becomes annoying. The minimal distance test used by the layout commands can be accelerated with some kind of hash table.
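One possible shape of such a structure is a uniform grid over the ground plane, sketched here in POV-Ray SDL: already accepted points are stored per cell, so a new candidate only has to be compared with the points in its own and the neighbouring cells. The identifiers, the 100 x 100 scene extent and the capacity of 50 points per cell are assumptions made only for this sketch, not the data structures of the actual implementation.

// Hypothetical grid for the minimal distance test; the scene is assumed
// to occupy 0..100 in x and z, split into 10 x 10 cells with at most
// 50 points per cell (the capacity is not checked here).
#declare CellSize = 10;
#declare Count  = array[10][10]         // number of points in each cell
#declare Points = array[10][10][50]     // the stored points themselves
#declare GI = 0;
#while (GI < 10)
  #declare GJ = 0;
  #while (GJ < 10)
    #declare Count[GI][GJ] = 0;
    #declare GJ = GJ + 1;
  #end
  #declare GI = GI + 1;
#end

// Returns 1 if P keeps at least MinDist from all stored points; only the
// cell of P and its neighbours are scanned, which is sufficient as long
// as MinDist does not exceed CellSize.
#macro FarEnough(P, MinDist)
  #local CX = int(P.x / CellSize);
  #local CZ = int(P.z / CellSize);
  #local OK = 1;
  #local I = max(CX - 1, 0);
  #while (I <= min(CX + 1, 9))
    #local J = max(CZ - 1, 0);
    #while (J <= min(CZ + 1, 9))
      #local K = 0;
      #while (K < Count[I][J])
        #if (vlength(P - Points[I][J][K]) < MinDist)
          #local OK = 0;
        #end
        #local K = K + 1;
      #end
      #local J = J + 1;
    #end
    #local I = I + 1;
  #end
  OK
#end

// Stores an accepted point in its cell.
#macro Remember(P)
  #local CX = int(P.x / CellSize);
  #local CZ = int(P.z / CellSize);
  #declare Points[CX][CZ][Count[CX][CZ]] = P;
  #declare Count[CX][CZ] = Count[CX][CZ] + 1;
#end

With such a grid the expected cost of one minimal distance test stays roughly constant instead of growing with the number of points placed so far.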
As the reader has certainly noticed, all the layout techniques work the same way: generate a random entity (a vertex or a ray) and try to use it to find a reference point. If a reference point is found, the attempt succeeds; if not (the vertex was not inside the object, or the ray did not hit anything), the search has to be repeated. To avoid these wasted attempts, a space partition tree could be built that assigns appropriate probabilities to its nodes, so that random entities (vertices or rays) are generated less often in the regions where the chance of finding a reference point is low.
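The basic loop can be sketched in POV-Ray SDL roughly as follows; the Terrain object, the Tree() macro and all numeric constants are merely assumptions for the example, and the uniform choice of the ray origin is exactly the step the proposed partition tree would replace with a biased choice.

// Hypothetical example: drop 100 trees onto a terrain by shooting random
// vertical rays and using the hit points as reference points.
#declare R = seed(4711);
#declare Placed = 0;
#while (Placed < 100)
  // a random ray origin above the scene; a partition tree with per-node
  // probabilities would pick this point mostly over the terrain itself
  #local Start = <rand(R) * 100 - 50, 1000, rand(R) * 100 - 50>;
  #declare Norm = <0, 0, 0>;
  #local Hit = trace(Terrain, Start, <0, -1, 0>, Norm);
  #if (vlength(Norm) > 0)      // the ray hit the terrain: a reference point
    Tree(Hit)
    #declare Placed = Placed + 1;
  #end                         // a miss: simply try another random ray
#end

If the terrain covers only a fraction p of the sampled area, the plain loop needs on average about 1/p rays per placed object; sampling biased by the node probabilities brings this figure close to one.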
Accelerating the render time is a hard task and lies outside the scope of this paper. One suggestion for saving some render time (and quite a lot of memory) is to compute, inside the object's macro, the distance from the camera to the object being generated and to reduce the model quality according to this distance: the models in the front (near the camera) need high detail, while those at the back can do with much less.
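In POV-Ray SDL such a macro could look roughly as follows; the Monster identifiers, the camera position and the 20-unit threshold are assumptions for this sketch, and MonsterHighDetail and MonsterLowDetail are supposed to be declared elsewhere.

// Hypothetical macro that chooses between two hand-made detail levels
// depending on the distance from the camera.
#declare CameraPos = <0, 5, -30>;   // mirrors the location of the camera
#macro Monster(Position)
  #local Dist = vlength(Position - CameraPos);
  #if (Dist < 20)
    object { MonsterHighDetail translate Position }   // near the camera
  #else
    object { MonsterLowDetail  translate Position }   // far away: cheap model
  #end
#end

The choice costs nothing at render time; it only decides at parse time which of the two models is inserted into the scene.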
On the other hand, POV-Ray builds some sort of tree over the scene objects for rendering, so the render time grows only with the logarithm of the scene complexity, i.e. the number of objects in the scene. In other words, it hardly matters whether the scene contains one thousand or ten thousand objects.
Finally, since we have chosen POV-Ray, which runs on almost any platform, both the parse and the render times can also be reduced simply by migrating to a faster machine.
Scene                         # of elements   Parse time   Render time
The letter soup                        1500      0:00:05       0:01:35
The characters on a terrain              50      0:00:01       0:00:05
The hairy monster                     45000      0:04:08       0:01:03
Table 1: Parse and render time comparison (times in h:mm:ss), measured on
an AMD K6-2 350 at preview resolution (320 x 200 pixels) without
antialiasing.