Once navigation is as natural and intuitive as possible, the user may want to interact not only with the scene as a whole, but also with the individual objects in the VR: moving them, transforming them, grouping them, creating and deleting them, and so on.
Somehow the user has to know which objects he can interact with. Usually not everything that can be seen can be interacted with, since a natural-looking VR environment will contain some purely decorative objects. This is in contrast to reality, where everything within reach can be touched and therefore interacted with; whether that is always desirable is another question.
The most intuitive possibility is to grab an object by intersecting it with the input device. With a data glove the user can simply reach for an object just the way he would in reality; with a 3D mouse he moves the mouse inside (or near) the object and then presses a button.
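A minimal sketch of this grab-by-intersection test, assuming each selectable object is approximated by an axis-aligned bounding box (the object names and boxes below are illustrative, not from any particular VR toolkit):

```python
def inside_aabb(point, box_min, box_max):
    """True if the 3D point lies inside the axis-aligned bounding box."""
    return all(lo <= p <= hi for p, lo, hi in zip(point, box_min, box_max))

def grab(device_pos, objects):
    """Return the first object whose bounding box contains the device tip."""
    for name, (box_min, box_max) in objects.items():
        if inside_aabb(device_pos, box_min, box_max):
            return name
    return None  # device is not inside any selectable object

# Hypothetical scene: two objects with (min-corner, max-corner) boxes.
objects = {
    "lamp":  ((0.0, 0.0, 0.0), (1.0, 1.0, 1.0)),
    "chair": ((2.0, 0.0, 0.0), (3.0, 1.0, 1.0)),
}
print(grab((0.5, 0.5, 0.5), objects))  # lamp
```

A real system would use the actual object geometry rather than a bounding box, but the principle is the same: the tracked device position is tested against each object's volume.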
Grabbing objects is especially problematic in the CAVE, because an object cannot be seen when one's hand (or the input device) is close to it. Remember that objects only seem to be three-dimensional inside the CAVE; they are really just projections on screens that are usually farther away. Even worse, an object may appear to lie behind a screen, where there is no way to reach for it at all.
The most common solution is a virtual grabbing device based on ray casting: a line extends from the pointing device, and objects intersected by this line are selected and can be interacted with. (Ray casting is very precise, as the intersection between an object and the ray is always a single point.)
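The idea can be sketched as follows, assuming for simplicity that each object is bounded by a sphere and that the ray direction is normalized (the scene data is made up for illustration):

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Distance along the normalized ray to the sphere, or None if missed."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    b = 2.0 * (direction[0] * ox + direction[1] * oy + direction[2] * oz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * c  # discriminant of the quadratic |o + t*d - c|^2 = r^2
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t >= 0 else None

def pick(origin, direction, spheres):
    """Select the closest object intersected by the ray."""
    hits = [(t, name) for name, (c, r) in spheres.items()
            if (t := ray_sphere_hit(origin, direction, c, r)) is not None]
    return min(hits)[1] if hits else None

# Hypothetical scene: two objects along the pointing direction.
spheres = {"button": ((0, 0, 5), 1.0), "lever": ((0, 0, 12), 1.0)}
print(pick((0, 0, 0), (0, 0, 1), spheres))  # button
```

Taking the nearest hit matters: the ray may pass through several objects, and the user normally means the first one it reaches.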
There must be a way for the user to tell whether he can interact with the object he is pointing at or grabbing (i.e. the object intersected by his input device). One method is to highlight the object (change its color, or surround it with a semi-transparent bounding sphere or box); another is to play a signal sound telling the user that it is OK to interact with this object.
The next thing the user might want to know is what he can do with the object. Can it be moved around? Is it just a button that can be pressed, or a lever that can be pulled? Perhaps the object can be transformed in many different ways: its color changed, its size scaled, or the object deleted altogether.
A more forgiving method than ray casting is to cast a cone that gets wider the farther it is from the user. This makes it easier to select objects that are far away, where even small angular errors of the pointing device would make an exact ray miss.
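Cone casting reduces to a simple angle test: an object is selectable when the angle between the pointing direction and the vector to the object is below a threshold (the half-angle of the cone). A sketch under that assumption, with made-up object positions:

```python
import math

def angle_to(origin, direction, point):
    """Angle (radians) between the pointing direction and origin->point."""
    v = [point[i] - origin[i] for i in range(3)]
    dot = sum(d * x for d, x in zip(direction, v))
    norm = math.sqrt(sum(x * x for x in v))
    return math.acos(max(-1.0, min(1.0, dot / norm)))

def cone_pick(origin, direction, centers, half_angle=math.radians(5)):
    """Return all objects whose centers fall inside the selection cone."""
    return [name for name, c in centers.items()
            if angle_to(origin, direction, c) <= half_angle]

# A distant object slightly off-axis is caught; a nearby one far off-axis is not.
centers = {"far_off": (0.4, 0.0, 10.0), "aside": (3.0, 0.0, 3.0)}
print(cone_pick((0, 0, 0), (0, 0, 1), centers))  # ['far_off']
```

Because the test is purely angular, the selection volume automatically widens with distance, which is exactly the desired behavior.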
It is usually rather hard to position an object as accurately as desired (and as would be possible in reality). The main reasons are inaccurate position trackers and inaccurate stereo viewing.
To achieve finer placement, the user first specifies a vector (a straight line) along which the movement is performed, and then, with some kind of controller, the distance (in centimeters or even millimeters) the object should move. (Rotations can be implemented in a similar way, by specifying the rotation plane via its normal vector, i.e. the axis, and then the angle of the rotation.)
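The translation step of this two-phase scheme is just a move along a normalized direction; a minimal sketch (the interface that delivers the vector and the dialed-in distance is assumed, not specified by any toolkit):

```python
import math

def move_along(position, direction, distance):
    """Translate position by the given distance along the direction vector.

    The direction need not be normalized; it is scaled to unit length here,
    so `distance` is an exact metric displacement (e.g. in centimeters).
    """
    norm = math.sqrt(sum(d * d for d in direction))
    unit = [d / norm for d in direction]
    return tuple(p + u * distance for p, u in zip(position, unit))

# Move 10 units along the (0, 3, 4) direction: a 3-4-5 triangle scaled up.
print(move_along((0.0, 0.0, 0.0), (0.0, 3.0, 4.0), 10.0))  # (0.0, 6.0, 8.0)
```

Decoupling direction from distance is what gives the precision: the noisy tracker only has to establish the line once, while the exact displacement comes from a discrete controller value.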
To accomplish tasks that are specific to the VR (or are done very differently than in reality), new means of interaction have to be found. Such tasks include scaling, coloring, or grouping objects; creating and deleting objects are also jobs that usually cannot be done in the real world.
In the CAVE these tasks are performed by selecting an object and then pressing a certain mouse button, which opens a window in which the desired command is chosen (again by pointing at the menu item and pressing a mouse button).
Object creation can be done in very different ways. One option is that only a few special objects can be created (for example pieces of furniture in an interior-decoration application). Alternatively, new objects can be built from primitives (such as spheres and cubes) which can be manipulated (scaled, colored) to obtain the desired compound object.
The most complicated but also most flexible way to create a new object in the virtual world is to specify its vertices in three-dimensional space simply by pointing at the desired positions (or even to create 3D curves by sweeping the input device through the air).
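Once the pointed-at positions have been collected, they still have to be turned into renderable geometry. A toy sketch of one common approach, triangulating the collected vertices as a fan around the first point (the vertex data is invented for illustration):

```python
def triangulate_fan(vertices):
    """Return triangle index triples fanning out from the first vertex."""
    return [(0, i, i + 1) for i in range(1, len(vertices) - 1)]

# Four positions the user pointed at, forming a planar quad.
pointed = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
print(triangulate_fan(pointed))  # [(0, 1, 2), (0, 2, 3)]
```

A fan only works for convex outlines; a real modeling tool would need a general triangulation, but the sketch shows how little is needed to go from pointed positions to a mesh.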
How can the user issue commands that are more abstract and not connected to any object in the VR, such as loading, saving, or exiting the program?
Such commands are best handled via two-dimensional menus (e.g. virtual pull-down menus). When the user presses the designated button, a menu (or even a menu hierarchy) appears, in which these commands can be selected by pointing at the corresponding entry.
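Resolving the pointed-at entry is a plain 2D hit test once the ray has been projected onto the menu plane. A minimal sketch, with invented entry names and geometry (entries stacked downward from y = 0 in normalized menu coordinates):

```python
MENU = ["load", "save", "exit"]  # hypothetical entries
ENTRY_HEIGHT = 0.1               # each entry occupies a band of this height

def pick_entry(pointer_y):
    """Map a vertical pointer coordinate on the menu to an entry, if any."""
    if pointer_y > 0:
        return None  # pointer is above the menu
    index = int(-pointer_y // ENTRY_HEIGHT)
    return MENU[index] if index < len(MENU) else None

print(pick_entry(-0.15))  # save
```

The same lookup works for nested menus: selecting an entry that opens a submenu simply replaces the entry list and repeats the test.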