Wolfgang Greimel 1,2, Werner Backfrieder 2
greimel@cg.tuwien.ac.at, werner@bmtp.akh-wien.ac.at
1 Institute of Computer Graphics, Vienna University of Technology, Vienna, Austria
2 Department of Biomedical Engineering and Physics, University of Vienna
Contents:
1. Introduction
1.1 Aim of System
2. State of the Art
3. System Description
3.1 Camera Model
3.2 User Interface
3.3 User Options
4. Implementation
4.1 AVW-Library
4.2 Tracking System
4.3 Tracking Process and Communication
4.4 Configuration
5. Results and Conclusions
6. Acknowledgements
7. References
Abstract:
Minimally invasive surgery, developed over the last 15 years, has become a major field of surgical intervention. The patient benefits from minimal damage to the tissue around the focus of surgery. Special skills are required of the surgeon to work in narrow body cavities with little space for the endoscope and surgical tools.
The surgeon's navigation during surgery can be supported by modern visualisation techniques based on 3D medical image data. A tool for intra-operative navigation was developed, providing three orthogonal sections through the image volume, a 3D display of surgical tools relative to sensitive anatomical structures, and a virtual endoscopic image together with the video image through the endoscope. The user interface has a simple layout for easy handling, to meet the needs of the surgical theatre.
Keywords:
virtual endoscopy, volume rendering, computer-assisted surgery
1. Introduction
To aid minimally invasive surgery, an intra-operative surgical navigation system was developed. It is based on multi-modal 3D image data, providing complementary information (e.g. magnetic resonance imaging (MRI) and x-ray computed tomography (CT)) to the surgeon.
The visual information is generated using all available 3D imaging modalities (spiral CT, MRI, single photon emission computed tomography (SPECT), and positron emission tomography (PET)). Different modalities are combined because of the complementary properties of the imaging methods: CT images have good contrast for bone, MR images show soft tissue particularly well, and SPECT and PET are used for functional analysis.
The combination of volume data sets results from the registration of corresponding anatomical structures within the data sets. The structures are segmented semi-automatically using algorithms suited to the image modality, e.g. region growing, morphological procedures, and thresholding. Image combination is then done by minimizing the distance between the surfaces of corresponding segmented structures, which is done efficiently by chamfer matching [1].
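To make the matching step concrete: the chamfer cost is the average value of a precomputed distance transform of one segmented surface, sampled at the (transformed) surface points of the corresponding structure in the other data set; the transformation minimizing this cost aligns the volumes. The following is a minimal sketch of such a cost function under a pure translation; the names and data layout are assumptions for illustration, not the AVW implementation.

/* Sketch of the chamfer-matching cost used for surface-based
 * registration [1]: a distance map of the fixed surface is sampled
 * at the translated surface points of the moving structure.
 * Illustrative only; not the actual AVW API. */
#include <stddef.h>
#include <math.h>

typedef struct { double x, y, z; } Point3;

/* Average chamfer distance of the moving surface points under a
 * pure translation (tx, ty, tz); dist is the precomputed distance
 * transform of the fixed surface, sampled by nearest neighbour. */
double chamfer_cost(const float *dist, int nx, int ny, int nz,
                    const Point3 *pts, size_t npts,
                    double tx, double ty, double tz)
{
    double sum = 0.0;
    size_t used = 0;
    for (size_t i = 0; i < npts; ++i) {
        int x = (int)floor(pts[i].x + tx + 0.5);
        int y = (int)floor(pts[i].y + ty + 0.5);
        int z = (int)floor(pts[i].z + tz + 0.5);
        if (x < 0 || x >= nx || y < 0 || y >= ny || z < 0 || z >= nz)
            continue;                  /* ignore points outside the map */
        sum += dist[(size_t)z * ny * nx + (size_t)y * nx + x];
        ++used;
    }
    return used ? sum / (double)used : HUGE_VAL;
}

An optimizer (in the simplest case an exhaustive search over a translation grid) evaluates this cost and keeps the transformation with the smallest average distance.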
1.1 Aim of System
The volume data set is rendered according to the position of the endoscope. The 3D view of the anatomical structures is calculated and displayed using the intrinsic imaging properties of the endoscopic optics. In addition to this view, we show the current position of the endoscope in a 3D view and on three orthogonal slices of the volume at the position of the tip of the endoscope. We use an optical tracking system (FlashPoint 3D Localizer, Image Guided Technologies, USA) to monitor the position of the endoscope relative to the patient; this tracking system can be installed in a surgical environment. Our developments provide guidance for the physician and the possibility to use different image views directly within the operating room.
2. State of the Art
For virtual endoscopy in particular, two approaches are used for 3D visualisation: surface rendering [3] and direct volume rendering [4].
Surface rendering methods use an intermediate segmentation step to transform the volume data into a mesh of polygons. Due to the large number of voxels to be processed and the complexity of the scene, most segmentation techniques are semi-automatic: manual segmentation is time-consuming and prone to operator error, while fully automatic procedures are not generally applicable. The resulting polygon mesh can be rendered with standard graphics hardware support. The drawback is often a time-consuming preparation phase and reduced accuracy, since information about the interior of the objects is lost. These techniques have been implemented by several authors [5, 6, 7].
Direct volume rendering is used in many applications [8, 9, 10, 11]. Two-dimensional views are generated by casting rays from an observation point through the entire volume. No loss of information has to be accepted, and by rendering objects transparently, information about the scene behind them is visualised. A limitation is that small, complicated internal structures within a large data set can be difficult to display. Recently, hardware acceleration [12, 13] has made it possible to achieve interactive frame rates and to implement new solutions [11].
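At the heart of such a ray caster is front-to-back compositing of classified color and opacity samples along each ray [4]. A minimal sketch of this compositing step, with early ray termination, is shown below; it is illustrative only, not the renderer used in our system, and uses a single color channel for brevity.

/* Front-to-back compositing along one ray, as in direct volume
 * rendering [4]. sample_color/sample_alpha hold the classified
 * color and opacity of the i-th sample along the ray. */
void composite_ray(const float *sample_color, const float *sample_alpha,
                   int nsamples, float *out_color)
{
    float color = 0.0f;                /* accumulated color          */
    float trans = 1.0f;                /* remaining transparency     */
    for (int i = 0; i < nsamples; ++i) {
        color += trans * sample_alpha[i] * sample_color[i];
        trans *= 1.0f - sample_alpha[i];
        if (trans < 0.01f)             /* early ray termination      */
            break;
    }
    *out_color = color;
}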
There are also applications which use both techniques [14].
Virtual endoscopy systems are used today in a wide range of applications, such as interactive virtual colonoscopy [5, 14], providing an overview of the colonic surface to support navigation of the endoscope and to avoid penetration of the surface. Others suggest its use for virtual bronchoscopy [6], virtual ventricle endoscopy [7], or endoscopic sinus surgery [15]. Furthermore, it is possible to create interactive fly-throughs. For training purposes, surgery is simulated in a virtual environment for optimal path calculation, orientation, diagnosis, and tumor study. A drawback of most virtual endoscopy systems is that they do not simulate the distorted optical model of a real endoscope.
3. System Description
The motivation for our display system was to add complementary information to current endoscope systems, in which only the video image through the endoscope is displayed. We add a 3D overview of the position of the endoscope relative to a 3D rendering of the patient's head, three orthogonal slices of the volume data at the position of the endoscope, and a perspective rendering using the optical model of the real endoscope. Thus useful means of orientation are provided to the surgeon during the intervention.
3.1 Camera Model
Figure 1: Camera model for virtual endoscopy: simple perspective model (left), perspective camera model with endoscopic distortion (right)
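The camera model extends a simple perspective projection by the distortion of the endoscope optics (figure 1). As an illustration of such a model, the barrel distortion typical of endoscope lenses is commonly expressed as a radial polynomial applied to the ideal image coordinates; the sketch below uses assumed coefficients k1 and k2 and is not the calibrated model of our endoscope.

/* Common radial model of endoscopic (barrel) distortion on top of
 * a pinhole projection: the ideal image coordinates are scaled by
 * a polynomial in the squared radius. k1, k2 are illustrative
 * coefficients, normally obtained from camera calibration. */
void distort_radial(double x, double y,       /* ideal pinhole coords    */
                    double k1, double k2,     /* distortion coefficients */
                    double *xd, double *yd)   /* distorted coords        */
{
    double r2 = x * x + y * y;
    double s  = 1.0 + k1 * r2 + k2 * r2 * r2; /* radial scaling factor   */
    *xd = s * x;
    *yd = s * y;
}

For barrel distortion, k1 is negative, so points far from the optical axis are pulled towards the centre, as in the right image of figure 1.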
3.2 User Interface
A snapshot of the display system is shown in figure 2; it consists of several windows, which are described in turn below.
The selection of the current slices (transversal, coronal, sagittal) is done either by user input or automatically by the tracking system, using a defined position on the endoscope, e.g. the tip, as reference. The information about the anatomical structures can be mapped onto the current slices as shown in figure 2: the brain (violet), the tumor (blue), the optical nerve (green), and bone (white). The density information remains available through different shades of each object's color. Orientation guidance lines, whose crossing marks the position of the endoscope, can be shown; these lines also indicate the position of the orthogonal slices. The contour of the endoscope can also be displayed in these images.
A 3D view (figure 2: transparent rendering window) is generated using a transparent volume rendering algorithm [4]. During pre-processing, each voxel is assigned to an anatomical structure, and for each structure a color and an opacity are defined. A voxelised endoscope is modelled and rendered together with the volume data set. Rendering is the most time-consuming task. To improve rendering performance we use an image-based method: the position of the endoscope is calculated first, and if the endoscope is inside the volume, the corners of the subvolume in which the endoscope is currently positioned are mapped to the image plane, defining the pixels affected by the change of the endoscope's position. These pixels are called the rendermask. Only a small part of the rays has to be recalculated during the rendering process, which allows interactive display speed. The size of the rendermask depends on the size of the endoscope and on the part of it currently within the volume data set; a sketch of its computation follows.
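A minimal sketch of the rendermask computation described above: the eight corners of the endoscope's subvolume are projected to the image plane, and their bounding rectangle marks the pixels whose rays must be recast. The projection function here is a simple stand-in for the renderer's actual volume-to-image mapping, which is an assumption for illustration.

#include <math.h>

typedef struct { double x, y, z; } Vec3;
typedef struct { int x0, y0, x1, y1; } Rect;

/* Stand-in for the renderer's volume-to-image mapping; a plain
 * perspective division is assumed here, with the subvolume in
 * front of the camera (z > 0). */
static void project(Vec3 p, double *px, double *py)
{
    *px = p.x / p.z;
    *py = p.y / p.z;
}

/* Bounding rectangle of the projected corners of the endoscope's
 * axis-aligned subvolume [lo, hi]: only rays through this
 * rectangle (the rendermask) are recomputed. */
Rect render_mask(Vec3 lo, Vec3 hi)
{
    Rect r = { 1 << 30, 1 << 30, -(1 << 30), -(1 << 30) };
    for (int i = 0; i < 8; ++i) {
        /* enumerate the eight corners of the subvolume */
        Vec3 c = { (i & 1) ? hi.x : lo.x,
                   (i & 2) ? hi.y : lo.y,
                   (i & 4) ? hi.z : lo.z };
        double px, py;
        project(c, &px, &py);
        if ((int)floor(px) < r.x0) r.x0 = (int)floor(px);
        if ((int)floor(py) < r.y0) r.y0 = (int)floor(py);
        if ((int)ceil(px)  > r.x1) r.x1 = (int)ceil(px);
        if ((int)ceil(py)  > r.y1) r.y1 = (int)ceil(py);
    }
    return r;
}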
The size of the image can be customised in the configuration file. It is possible to zoom in and out, as well as to rotate the viewpoint, in order to view the volume data from different angles and to enlarge areas of interest.
The endoscopic rendering in the display system uses the algorithm described by Eisenkolb et al. [16]. There are three different implementations of this algorithm: the "preview" mode is used for fast image generation with reduced accuracy, the "detail" mode takes more time but produces more precise output, and the "object" mode additionally includes the colors and opacities from the object map.
As a future development, the display of the current video image from the endoscope is planned, using a video frame grabber.
3.3 User Options
Besides the command buttons for each window, the following options are accessible to the user:
Data management options
These allow the loading and unloading of the volume data set and its object information. It is furthermore possible to obtain additional information about the volume, e.g. the date of the scan, information about the patient, and the original voxel size.
Display options
There are several functions for manipulating the display of the object map. It is possible to superimpose the colored information about the anatomical structures on the grey-scale images of the three orthogonal slices. Display options for each object can be set interactively; this allows the visualisation of selected structures, e.g. nerves or vessels, which are of particular interest during surgical intervention, to be turned on and off. This also affects the transparently rendered image.
The display of orientation guidance lines on these images can also be selected here; these lines show the position of the tip of the endoscope. Another function allows the contour of the endoscope to be superimposed on these images.
Further options are available for turning each of the windows on the screen on and off. A screen refresh function is also implemented.
Tracking system options
This menu topic provides access to two functions concerning the tracking process. The first starts the registration of the tracking system with the volume data; the second starts and stops the tracking. During tracking, the position of the endoscope is shown on the three orthogonal slices and in the 3D view.
4. Implementation
Figure 3: Architecture scheme
The implementation uses the basic X-Library and the X-Toolkit together with the OSF/Motif window manager. As can be seen in figure 3, two platform-independent libraries are used: the AVW-Library, described in section 4.1, and the endoscopic view functions, which implement the optical model of the endoscope described in section 3.1.
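For orientation, the skeleton of an X-Toolkit/Motif client of this kind looks as follows; this is a generic minimal example, not an excerpt from the display system's source.

/* Minimal X-Toolkit/Motif application skeleton: initialise the
 * toolkit, create one managed widget, enter the event loop. */
#include <Xm/Xm.h>
#include <Xm/PushB.h>

int main(int argc, char **argv)
{
    XtAppContext app;
    Widget top = XtVaAppInitialize(&app, "Display", NULL, 0,
                                   &argc, argv, NULL, NULL);
    Widget btn = XmCreatePushButton(top, "refresh", NULL, 0);
    XtManageChild(btn);        /* e.g. a screen refresh command button */
    XtRealizeWidget(top);
    XtAppMainLoop(app);        /* dispatch X events until exit         */
    return 0;
}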
4.1 AVW-Library
AVW is a comprehensive C library for the manipulation of 3D medical data sets. It allows an extensive analysis of multi-dimensional and multi-modal data sets and supports the development and implementation of image-manipulation algorithms and purpose-built software solutions. The procedures are based on special data types for images and volumes.
The data types for volume data range from binary (1 bit) to 24-bit color images. The information about the objects within the volume is provided by an additional volume of the same size as the medical data set. It allows the definition of up to 255 objects and maps each voxel of the medical volume to an object. Each object has, amongst other properties, its specific color and opacity, which are used for transparent rendering and for the display of the orthogonal slices. The functions provided by the AVW-Library are divided into several functional groups [17]; an illustrative sketch of the object-map idea follows.
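The object-map mechanism can be pictured as follows: each voxel carries an 8-bit object index that selects per-object display properties, and the grey value shades the object color so that density information remains visible. The types below are illustrative, not the actual AVW data structures [17].

/* Sketch of an 8-bit object map overlaying the grey-value volume:
 * every voxel carries an object index (0..255) used to look up
 * per-object color and opacity for rendering. */
#include <stdint.h>

typedef struct {
    uint8_t r, g, b;       /* display color of the object             */
    float   opacity;       /* opacity used in transparent rendering   */
    int     visible;       /* toggled from the display options        */
} ObjectProps;

typedef struct {
    int nx, ny, nz;
    uint8_t *grey;         /* medical volume (8-bit grey values)      */
    uint8_t *object;       /* object index per voxel, same extent     */
    ObjectProps props[256];
} LabelledVolume;

/* Color of one voxel: the object color shaded by the grey value,
 * so density information stays visible within each structure. */
static void voxel_color(const LabelledVolume *v, long idx,
                        float out[3], float *alpha)
{
    const ObjectProps *p = &v->props[v->object[idx]];
    float shade = v->grey[idx] / 255.0f;
    out[0] = p->r / 255.0f * shade;
    out[1] = p->g / 255.0f * shade;
    out[2] = p->b / 255.0f * shade;
    *alpha = p->visible ? p->opacity : 0.0f;  /* hidden objects vanish */
}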
4.3 Tracking Process and Communication
[Figure: communication between the display process and the tracking process, comprising a registration step and a tracking step]
For software development and testing, a simulation of the tracking system was implemented. It offers six degrees of freedom for the movement of the endoscope and eases the testing of the process communication.
The process communication between the tracking process and the display process is based on the computation time needed for the transparent rendering. To prevent an event overflow in the display process, the tracking process sends approximately two new positions per second to the display process, which is within the interactive speed range we determined necessary for clinical use.
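A sketch of this rate limiting: the tracking process forwards at most one position about every 500 ms, i.e. roughly two updates per second. The timing code and the send call are assumptions for illustration; the actual inter-process mechanism is not shown.

/* Forward a tracked pose at most every ~500 ms so the display
 * process is never flooded with events while a transparent
 * rendering is still in progress. POSIX timing. */
#include <stdio.h>
#include <sys/time.h>

typedef struct { double x, y, z, rx, ry, rz; } Pose;  /* 6 DOF pose */

/* Stand-in for the actual inter-process send; illustrative only. */
static void send_to_display(const Pose *p)
{
    printf("pose: %.1f %.1f %.1f\n", p->x, p->y, p->z);
}

void forward_pose(const Pose *latest)
{
    static struct timeval last;        /* zero on first call */
    struct timeval now;
    gettimeofday(&now, NULL);
    double dt = (double)(now.tv_sec - last.tv_sec)
              + (now.tv_usec - last.tv_usec) / 1e6;
    if (dt >= 0.5) {                   /* approx. two updates per second */
        send_to_display(latest);
        last = now;
    }
}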
It is further possible to disable the communication with the tracking process and switch to an off-line mode, which can be used for training purposes.
4.4 Configuration
The shape of the simulated endoscope is a voxelized object whose color and size can be defined in the configuration file. The endoscope can be shown on the three orthogonal slices and in the transparently rendered image.
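The format of the configuration file is not detailed here; a hypothetical key-value layout covering the options mentioned above (rendering image size, endoscope color and size) might look like this:

# hypothetical display-system configuration (format assumed)
render_image_width  = 256
render_image_height = 256
endoscope_color     = 255 0 0    # RGB
endoscope_diameter  = 4          # voxels
endoscope_length    = 120        # voxels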
5. Results and Conclusions
In surgical applications, vessels and nerves close to the tissue surface must not be damaged. Although modern computer graphics hardware supports real-time surface rendering, we used transparent volume rendering to visualise structures beneath the surface simultaneously.
The most time-consuming task is the 3D transparent rendering. It takes about 2 seconds to render the 256 x 256 pixel image shown in figure 2; the volume consists of 320 x 320 x 216 voxels at a resolution of 8 bits. We improved the rendering by applying the rendermask, so that only selected pixels are rendered while tracking the endoscope. The speed-up factor depends on the position of the endoscope (parallel or orthogonal to the viewing direction) and on its size, but the average speed-up is around 3.5, which leads to approximately 2 frames per second. When a new viewpoint is selected, however, this advantage is lost. Precomputing the 3D view from several selected viewpoints was not deemed sensible. Hardware for parallel perspective rendering [13] achieves interactive frame rates, but unfortunately it does not fit our hardware environment.
Future developments will see the inclusion of the video image from the endoscope using a video grabber. It is further planned to implement stereoscopic rendering for the 3D view window. Using shutter glasses, the user will get a 3D impression of the volume data, the position of the anatomical structures, and the position of the endoscope.
6. Acknowledgements
The authors wish to thank Dipl.-Ing. Monika Eisenkolb for providing the endoscopic view functions. Special thanks to Dr. Monika Cartellieri of the Clinic of ORL, Vienna University Hospital, for access to the endoscope equipment, for providing image data, and for valuable discussion, and to Dr. Fritz Vorbeck of the Clinic of Radiology, Vienna University Hospital, for providing MR and CT data.
7. References
[2] Ezquerra, N., Navazo, I., Morris, T.I., Monclus, E.: Graphics, Vision, and Visualization in Medical Imaging: A State of the Art Report, in Eurographics '99, pp. 21-80, 1999.
[3] Lorensen, W.E., Cline, H.E.: Marching Cubes: A High Resolution 3D Surface Construction Algorithm, in Computer Graphics, 21(4), pp. 163-169, 1987.
[4] Levoy, M.: Display of Surfaces from Volume Data, IEEE Computer Graphics and Applications, 8(3), pp. 29-37, May 1988.
[5] Hong, L., Muraki, S., Kaufman, A., Bartz, D., He, T.: Virtual Voyage: Interactive Navigation in the Human Colon, in SIGGRAPH 97 Conference Proceedings, pp. 27-34, ACM SIGGRAPH, Addison Wesley, August 1997.
[6] Geiger, B., Kikinis, R.: Simulation of Endoscopy, in Lecture Notes in Computer Science: Computer Vision, Virtual Reality and Robotics in Medicine, N. Ayache (ed.), pp. 276-282, Springer Verlag, April 1995.
[7] Bartz, D., Skalej, M.: VIVENDI – Virtual Ventricle Endoscopy, in Data Visualization '99, pp. 155-166, Springer Verlag, 1999.
[8] Shahidi, R., Argiro, V., Napel, S., Gray, L., McAdams, H.P., Rubin, G.D., Beaulieu, C.F., Jeffrey, R.B., Johnson, A.: Assessment of Several Virtual Endoscopy Techniques Using Computed Tomography and Perspective Volume Rendering, Lecture Notes in Computer Science, 1131, pp. 521-526, 1996.
[9] Darabi, K., Resch, K.D.M., Weinert, J., Jendrysiak, U., Perneczky, A.: Real and Simulated Endoscopy of Neurosurgical Approaches in an Anatomical Model, Lecture Notes in Computer Science, 1205, pp. 323-326, 1997.
[10] Brady, L.M., Jung, K.K., Nguyen, H.T., Nguyen, T.P.Q.: Interactive Volume Navigation, IEEE Transactions on Visualization and Computer Graphics, 4(3), pp. 243-255, July-September 1998.
[11] Vilanova, A., König, A., Gröller, E.: VirEn: A Virtual Endoscopy System, in Machine GRAPHICS & VISION, 8(3), pp. 469-487, 1999.
[12] Meißner, M., Kanus, U., Straßer, W.: VIZARD II: A PCI-Card for Real-Time Volume Rendering, in Eurographics/SIGGRAPH Workshop on Graphics Hardware, pp. 61-67, 1998.
[13] Pfister, H., Hardenbergh, J., Knittel, J., Lauer, H., Seiler, L.: The VolumePro Real-Time Ray-Casting System, in SIGGRAPH 99 Conference Proceedings, Computer Graphics Proceedings, Annual Conference Series, pp. 251-260, 1999.
[14] You, S., Hong, L., Wan, M., Junyaprasert, K., Kaufman, A., Muraki, S., Zhou, Y., Wax, M., Liang, Z.: Interactive Volume Rendering for Virtual Colonoscopy, in Proceedings of IEEE Visualization '97, pp. 433-436, 1997.
[15] Yagel, R., Stredney, D., Wiet, G., Schmalbrock, P., Rosenberg, L., Sessanna, D., Kurzion, Y.: Building a Virtual Environment for Endoscopic Sinus Surgery Simulation, in Computers & Graphics, 20(6), pp. 813-823, 1996.
[16] Eisenkolb, M., Backfrieder, W.: Virtual Endoscopy of Multi-modal Data in ORL-Surgery, in Physica Medica, XV(4), p. 28, July-September 1999.
[17] Robb, R.: AVW Programmer's Guide, Version 3.0, Mayo Foundation, Rochester, USA, 1999.