
Subsections

  Morphing
  Mosaicking
  Interpolation
  Reprojection

Survey on image-based rendering

Based on [Kan97], image-based rendering techniques can be classified into four areas: morphing, offline synthesis (mosaicking), interpolation, and online synthesis (pixel reprojection).

 

Morphing

This category is not physically based, and 3-D geometry is not considered at all. It simply interpolates between a pair of possibly unrelated images. The technique is used most widely in the advertising and entertainment industry. During the morphing process the images are warped so that the source shape gradually assumes the target shape, while maintaining a visually appealing mix. An example can be found in [BN92].
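As a rough illustration, the C++ sketch below shows the core warp-and-blend loop of a morph. It is a minimal sketch under simplifying assumptions, not the algorithm of [BN92]: images are grayscale, and a per-pixel displacement field mapping source features onto target features is assumed to be precomputed (deriving such a field, e.g. from feature lines, is exactly what [BN92] describes). All names are illustrative.

    #include <cstddef>
    #include <vector>

    // Grayscale image with clamped sampling at the borders.
    struct Image {
        int w, h;
        std::vector<float> pix;                      // row-major
        float at(int x, int y) const {
            if (x < 0) x = 0; if (x >= w) x = w - 1;
            if (y < 0) y = 0; if (y >= h) y = h - 1;
            return pix[std::size_t(y) * w + x];
        }
    };

    // One in-between frame at time t in [0,1].  dispX/dispY hold, per
    // pixel, the offset that carries a source feature onto its target
    // feature (assumed precomputed, e.g. from the feature lines of
    // [BN92]).
    Image morphFrame(const Image& src, const Image& dst,
                     const std::vector<float>& dispX,
                     const std::vector<float>& dispY, float t)
    {
        Image out{src.w, src.h, std::vector<float>(src.pix.size())};
        for (int y = 0; y < src.h; ++y)
            for (int x = 0; x < src.w; ++x) {
                std::size_t i = std::size_t(y) * src.w + x;
                // warp the source forward by t and the target
                // backward by (1 - t) ...
                float sx = x - t * dispX[i],       sy = y - t * dispY[i];
                float dx = x + (1 - t) * dispX[i], dy = y + (1 - t) * dispY[i];
                float a = src.at(int(sx + 0.5f), int(sy + 0.5f));
                float b = dst.at(int(dx + 0.5f), int(dy + 0.5f));
                out.pix[i] = (1 - t) * a + t * b;  // ... then cross-dissolve
            }
        return out;
    }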

 

Mosaicking

At least two different images are combined into a larger image. The resulting image (mosaic) has a wider field of view than its constituent images. It is a more compact representation that allows new views of the scene to be generated quickly. The simplest variant of this technique is the rectilinear panorama, but it becomes problematic for view angles greater than $180^\circ$. For a complete surround view, spherical or cylindrical panoramas are more suitable. However, these representations have to be warped prior to viewing to show a geometrically correct view of the scene.
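To make the warping step concrete, the following C++ fragment sketches the inverse mapping from a cylindrical panorama back to a planar source image. It assumes an ideal pinhole camera with focal length f in pixels and coordinates relative to the optical center; the function name and interface are illustrative, not taken from any of the cited systems.

    #include <cmath>

    // A point on a cylinder of radius f at angle theta and height v is
    // (f*sin(theta), v, f*cos(theta)); projecting it onto the planar
    // image at z = f gives the source pixel to sample.  Because this is
    // an inverse mapping, the cylindrical mosaic is filled without holes.
    void planarFromCylindrical(double theta, double v, double f,
                               double& x, double& y)
    {
        x = f * std::tan(theta);   // column in the planar source image
        y = v / std::cos(theta);   // row in the planar source image
    }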

 

Interpolation

The idea behind this class of methods is to build up some kind of lookup table. This table holds many image samples of a scene taken from many different viewpoints. A new view from an arbitrary viewpoint is synthesized by interpolating the data stored in the lookup table.

The advantage of this class of methods is that, unlike all other methods, pixel correspondence is not necessary. In addition, the lookup table is an approximation of the plenoptic function $P(X,Y,Z,\theta,\phi)$ [AB91], a 5-D description of the flow of light at every 3-D position and for every 2-D viewing direction. Because image synthesis reduces to a search in the lookup table followed by interpolation, fast visualization can be achieved.

Disadvantages are the high number of image samples, which leads to high memory requirements, and the necessity of knowing the exact camera position and orientation for every sample during data acquisition. Two recently described approaches are light field rendering [LH96] and the lumigraph [GGSC96].
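The following C++ fragment is a drastically simplified sketch of the lookup idea, not the 4-D parameterization of [LH96]: image samples are assumed to lie on a regular 1-D camera track, and a view from an in-between position is synthesized by blending the two nearest stored samples. All names are illustrative.

    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <vector>

    // One stored image sample; all samples share the same resolution.
    struct View { std::vector<float> pix; };

    // Synthesize a view at camPos (assumed non-negative and within the
    // sampled range) by blending the two nearest table entries.
    std::vector<float> synthesize(const std::vector<View>& table,
                                  double camSpacing, double camPos)
    {
        double s = camPos / camSpacing;              // continuous index
        std::size_t i0 = std::size_t(std::floor(s));
        std::size_t i1 = std::min(i0 + 1, table.size() - 1);
        double w = s - double(i0);                   // blend weight

        std::vector<float> out(table[i0].pix.size());
        for (std::size_t p = 0; p < out.size(); ++p)
            out[p] = float((1.0 - w) * table[i0].pix[p]
                         +         w * table[i1].pix[p]);
        return out;
    }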

 

Reprojection

These techniques use a relatively small number of images, but additionally apply geometric constraints derived from the scene geometry. To synthesize a view from a virtual camera position, the image pixels are reprojected appropriately. The geometric constraints can take the form of known depth values at each pixel [CW93], or epipolar constraints between pairs of images (the fundamental matrix [LDFP93], [LF94], or constraints between pairs of cylindrical panoramas [MB95]). It is also possible to use three images with trilinear tensors [AS98].
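The C++ sketch below illustrates the depth-based variant: a pixel with known depth is lifted to a 3-D point, transformed into the frame of the virtual camera, and projected back. The interface is hypothetical and assumes an ideal pinhole camera; a real system such as [CW93] additionally has to resolve visibility and fill holes left by the reprojection.

    // Pose of the virtual camera relative to the reference camera:
    // R is a row-major 3x3 rotation, t a translation; f is the focal
    // length in pixels, pixel coordinates relative to the image center.
    bool reproject(double x, double y, double depth, double f,
                   const double R[9], const double t[3],
                   double& xOut, double& yOut)
    {
        // lift the pixel to 3-D using its stored depth
        double X = x * depth / f, Y = y * depth / f, Z = depth;
        // rigid transform into the virtual camera's frame
        double Xv = R[0] * X + R[1] * Y + R[2] * Z + t[0];
        double Yv = R[3] * X + R[4] * Y + R[5] * Z + t[1];
        double Zv = R[6] * X + R[7] * Y + R[8] * Z + t[2];
        if (Zv <= 0.0) return false;                // behind the camera
        xOut = f * Xv / Zv;                         // project back
        yOut = f * Yv / Zv;
        return true;
    }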

 


Schroecker Gerald
2000-04-06