In this section we describe the spatial context of the show: the stage, the hardware components we used, the configuration of and communication between the software packages, as well as the network configuration.
Figure 1 shows the configuration of the stage, the positions of the cameras and the tracking area covered by the two cameras.
Figure 1: Stage configuration
The stage was V-shaped, each side about 10 metres long, and it stood one metre high. The floor was covered with asphalt containing reflecting stones, which turned out not to be optimal for the tracking process. At the back of the stage stood the screen (about 10 by 5 metres) onto which the virtual city was projected. The inline-skaters gave their performance within the darker region labelled "tracking area", approximately 10 by 6 metres in size.
During the performance the inline-skaters wore special caps covered with retro-reflectors, which reflected the light coming from movable, turnable spotlights positioned next to the cameras on both sides of the stage at a height of about 9 metres. This configuration kept the tracking process stable and robust: the caps reflected enough light for the cameras to produce images with good contrast and reasonably bright beacons that could be tracked properly.
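To illustrate why this contrast matters, consider a minimal beacon-detection sketch in C. It is our own illustration, not the tracking algorithm (which a later chapter describes): with the retro-reflectors appearing far brighter than the asphalt, it suffices in principle to threshold the greyscale image and take the centroid of the bright pixels. The frame size and threshold value below are assumptions.

    #define WIDTH     640
    #define HEIGHT    480
    #define THRESHOLD 200   /* assumed: reflectors are much brighter than the stage */

    /* Returns 1 and writes the beacon centroid to (*bx, *by),
       or 0 if no pixel exceeds the threshold. */
    int find_beacon(const unsigned char frame[HEIGHT][WIDTH],
                    double *bx, double *by)
    {
        long sx = 0, sy = 0, n = 0;
        int x, y;

        for (y = 0; y < HEIGHT; y++)
            for (x = 0; x < WIDTH; x++)
                if (frame[y][x] > THRESHOLD) {
                    sx += x;
                    sy += y;
                    n++;
                }
        if (n == 0)
            return 0;
        *bx = (double)sx / n;
        *by = (double)sy / n;
        return 1;
    }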
Our system consisted of:
- two CCD cameras, one on each side of the stage
- three SGI Indy workstations: two for the image processing, one as control machine
- an SGI Onyx™ graphics workstation for the rendering
- an Ethernet LAN, including a multiport repeater, connecting all machines
Figure 2: Network scheme
As shown in Figure 2, all machines were connected via Ethernet to form a Local Area Network (LAN). Due to heavy electromagnetic interference from the high-current cables used for the stage lighting, we had to include a multiport repeater to guarantee stable network operation. Two of the SGI Indys were placed, as mentioned earlier, on two travelling cranes at a height of about 9 metres. Each Indy camera was connected to its own Indy, which performed the image-processing part. Since these machines were almost impossible to reach during the show, a third Indy was installed in a kind of "control room" to start the tracking processes via remote login. The person operating this third Indy was responsible for the two tracking processes: starting them and carrying out corrections or re-initialisations if problems occurred.
One main constraint of our project was reliable operation. A program running on its own, without operators, would have been too unreliable, so we chose an approach in which people initiate and supervise the processes and intervene when necessary. In this manner we had one person responsible for the tracking processes and one person operating the Onyx™, responsible for the rendering process.
The Onyx™ was connected to the TV control desk via its S-VHS video output; only a resolution of 640x480 was used for the big video screen.
This section gives only a coarse overview of the software modules we used; later chapters will cover these issues in more detail.
Figure 3: Software modules and connections
As can be seen in Figure 3, the two CCD cameras supply the two Indys with two images of the scene. As Figure 1 shows, the two camera views overlap enough to detect markers at an arbitrary position on the stage. The position of a beacon in 3-d space is reconstructed from the two 2-d images provided by the cameras: the 2-d image-space position data gained by the beacon tracking subsystems is delivered over the network to the Onyx graphics workstation, where the two (x,y) position pairs, one from each camera, are used to calculate the final 3-d position in world (stage) coordinates. Series of subsequent position vectors are then used to calculate speed and height estimates for the tracked inline-skater. All interprocess communication was done via Berkeley datagram sockets, which are fast but not completely reliable.
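The reconstruction step is, in essence, a standard stereo triangulation. The following C sketch illustrates the principle and is our own, not the project's code: it assumes each calibrated camera has already converted its 2-d beacon position into a viewing ray in stage coordinates (the pixel-to-ray conversion depends on the camera calibration and is omitted) and returns the midpoint of the shortest segment between the two rays. The camera placements and time step in the example are invented.

    #include <stdio.h>
    #include <math.h>

    typedef struct { double x, y, z; } vec3;
    typedef struct { vec3 o, d; } ray;   /* camera position and viewing direction */

    static vec3 sub(vec3 a, vec3 b) { vec3 r = { a.x - b.x, a.y - b.y, a.z - b.z }; return r; }
    static double dot(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    /* Midpoint of the shortest segment between two rays: find the
       parameters s, t minimising |(o1 + s*d1) - (o2 + t*d2)|. */
    static vec3 triangulate(ray r1, ray r2)
    {
        vec3 w = sub(r1.o, r2.o);
        double a = dot(r1.d, r1.d), b = dot(r1.d, r2.d), c = dot(r2.d, r2.d);
        double d = dot(r1.d, w),    e = dot(r2.d, w);
        double den = a*c - b*b;              /* ~0 only for near-parallel rays */
        double s = (b*e - c*d) / den, t = (a*e - b*d) / den;
        vec3 m = { (r1.o.x + s*r1.d.x + r2.o.x + t*r2.d.x) / 2.0,
                   (r1.o.y + s*r1.d.y + r2.o.y + t*r2.d.y) / 2.0,
                   (r1.o.z + s*r1.d.z + r2.o.z + t*r2.d.z) / 2.0 };
        return m;
    }

    /* Speed estimate from two subsequent positions taken dt seconds apart. */
    static double speed(vec3 prev, vec3 cur, double dt)
    {
        vec3 v = sub(cur, prev);
        return sqrt(dot(v, v)) / dt;
    }

    int main(void)
    {
        /* Cameras on both sides of the stage, about 9 m high (values invented). */
        ray left  = { { -8.0, 0.0, 9.0 }, {  0.6, 0.5, -0.7 } };
        ray right = { {  8.0, 0.0, 9.0 }, { -0.6, 0.5, -0.7 } };
        vec3 p = triangulate(left, right);
        vec3 q = { p.x + 0.5, p.y, p.z };    /* next sample, 0.5 m further on */

        printf("beacon at (%.2f, %.2f, %.2f)\n", p.x, p.y, p.z);
        printf("speed: %.2f m/s\n", speed(p, q, 0.1));
        return 0;
    }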
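The transport between the tracking processes and the Onyx can be sketched as follows; the port number, the address and the packet layout are invented for illustration and do not reflect the actual wire format used in the show.

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    #define BEACON_PORT 5000   /* assumed port, not the one actually used */

    struct beacon_packet {     /* hypothetical packet layout */
        int   camera_id;       /* 0 = left camera, 1 = right camera */
        float x, y;            /* beacon position in image coordinates */
    };

    int main(void)
    {
        int sock = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in onyx;
        struct beacon_packet pkt = { 0, 321.5f, 240.0f };

        if (sock < 0) {
            perror("socket");
            return 1;
        }
        memset(&onyx, 0, sizeof(onyx));
        onyx.sin_family = AF_INET;
        onyx.sin_port = htons(BEACON_PORT);
        onyx.sin_addr.s_addr = inet_addr("192.168.1.10");  /* invented address */

        /* Datagram sockets are connectionless: each sendto() is one packet.
           Delivery is not guaranteed, which is the trade-off noted above;
           a lost position sample is simply superseded by the next one. */
        if (sendto(sock, &pkt, sizeof(pkt), 0,
                   (struct sockaddr *)&onyx, sizeof(onyx)) < 0)
            perror("sendto");

        close(sock);
        return 0;
    }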
We also developed a "virtual city builder" program capable of creating virtual city models of arbitrary size. The models are stored in the VRML data format. The model data, together with the fly-through information, represent the fundamental data for the visualisation. As soon as the fly-through is initiated, the data from the tracking process is used to control it. We developed a special module to define a fly-through "route" for a particular city model; for this purpose we used a so-called "space-mouse" with 6 degrees of input freedom.
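As a toy illustration of what such a builder produces, the following C sketch (our own, far simpler than the actual program) writes a regular grid of box-shaped buildings with random heights as a VRML 1.0 file; all dimensions are invented.

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        int i, j, n = 10;            /* n x n city blocks (invented) */
        double spacing = 20.0;       /* distance between buildings in metres */

        printf("#VRML V1.0 ascii\n");
        printf("Separator {\n");
        for (i = 0; i < n; i++)
            for (j = 0; j < n; j++) {
                double h = 10.0 + rand() % 40;   /* building height */
                printf("  Separator {\n");
                /* y is up in VRML; lift the cube by half its height */
                printf("    Translation { translation %g %g %g }\n",
                       i * spacing, h / 2.0, j * spacing);
                printf("    Cube { width 10 height %g depth 10 }\n", h);
                printf("  }\n");
            }
        printf("}\n");
        return 0;
    }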
Finally, a few words about the rendering subsystem. Though we did not have any previous experience with it, we used Iris Performer as the basic graphics library because of its ease of use, especially as far as different data formats are concerned. We adapted the demo program "perfly", which is delivered with the Performer libraries, to our needs. Basically only a few changes had to be made to the core program: support for Motif user interfaces was added and a fly-through generator was included.
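To give an idea of the structure, here is a minimal viewer sketch modelled on the usual Performer setup, not on our modified perfly (whose sources are not reproduced here): the city model is loaded through Performer's database utilities, and the channel viewpoint is set anew every frame, which is the point where the fly-through generator feeds in the tracking-driven position. The file name and viewpoint values are invented.

    #include <Performer/pf.h>
    #include <Performer/pfdu.h>

    int main(int argc, char *argv[])
    {
        pfScene      *scene;
        pfNode       *city;
        pfPipe       *pipe;
        pfPipeWindow *pw;
        pfChannel    *chan;
        pfVec3        xyz, hpr;
        int           frame;

        pfInit();
        pfConfig();

        /* Build the scene from a model file via Performer's database loaders. */
        scene = pfNewScene();
        city  = pfdLoadFile(argc > 1 ? argv[1] : "city.iv");  /* file name invented */
        if (city != NULL)
            pfAddChild(scene, city);

        pipe = pfGetPipe(0);
        pw   = pfNewPWin(pipe);
        pfOpenPWin(pw);

        chan = pfNewChan(pipe);
        pfChanScene(chan, scene);
        pfChanFOV(chan, 45.0f, -1.0f);

        for (frame = 0; frame < 1000; frame++) {
            pfSync();
            /* Here the fly-through generator would compute the viewpoint,
               driven by the speed and height estimates from the tracking. */
            pfSetVec3(xyz, 0.0f, -50.0f, 10.0f);
            pfSetVec3(hpr, 0.0f, 0.0f, 0.0f);
            pfChanView(chan, xyz, hpr);
            pfFrame();
        }
        pfExit();
        return 0;
    }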