Virtual Reality in Medicine

by Suzanne Martin

Introduction

What is Virtual Reality? Virtual Reality (VR) is a human-computer interface
that simulates a realistic environment and allows users to interact with it.
Studying 3-D images on a 2-D computer screen can be compared to looking
through a glass-bottom boat; VR lets one put on the scuba equipment and
interact with the surroundings without getting wet.

Why is it used? VR offers invaluable educational advantages. It is ideal for
showing clients products that are otherwise too expensive to demonstrate
physically. The 3-D sensory impact is unmistakable.

Virtual Visual Environment Display – VIVED

VIVED was developed by the NASA Johnson Space Center (JSC)/LinCom
Corporation, the University of Texas Medical Branch at Galveston, and the
Galveston Independent School District. It provides a unique educational
experience using VR technologies, integrating prescribed fly-throughs of the
human skull and heart with other interactive multi-media (audio, video, etc.)
capabilities.


Steps required to create a Virtual Reality experience through the human body

Hardware

A Silicon Graphics Reality Engine computer was used to turn Computerized
Axial Tomography (CAT/CT) and Magnetic Resonance Imaging (MRI) slices into
3-D volumetric images and movies of the observer “flying through” the body.

Viewing the final 3-D images was done on a Macintosh IIcx computer with
16 MB of RAM. A Mac was chosen because it is relatively affordable, it has
widespread use in school systems, it is the leading engine of desktop
multi-media, and a wide variety of software and hardware is available for
this task.

The VR movie can be stored either on a hard drive or transferred onto video
tape and viewed through red-blue glasses. It can also be viewed using a VR
Head-Mounted Display (HMD) or a Binocular Omni-Orientation Monitor (BOOM)
system. The final images can be stored on CD-ROM or laser disc.

The high resolution of the body images prohibits the observer from “flying”
through the human body at will; only the technology to create prescribed
“fly-throughs” is currently available. The technology for a fully interactive
virtual reality experience exists only for less data-intense applications,
i.e., applications requiring less resolution.

Software

File Conversion and Data Preparation

The University of Texas Medical Branch at Galveston provided 1.5 mm thick
CAT/CT slices of the human skull and MRI slices of the heart, from which the
images were created. The skull was held in place during the CT scan by a
foam band, which introduced extraneous data. The skull scans produced a data
set of over 120 slices through the skull and 60 slices through the mandible
(jaw); an MRI scan of the human heart produced a data set of 200 slices.
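To make the data sets concrete, here is a minimal sketch, in Python with
NumPy (a stand-in for the original SGI tooling), of how 2-D scanner slices
are stacked into the 3-D volume that the later steps operate on. The array
sizes are illustrative, not taken from the original data.

    import numpy as np

    # Hypothetical stand-in for slices read from the scanner: 120 skull
    # slices, each a 512 x 512 array of CT intensity values, 1.5 mm apart.
    slices = [np.zeros((512, 512), dtype=np.int16) for _ in range(120)]

    # Stack the 2-D slices along the scan axis into one 3-D volume.
    volume = np.stack(slices, axis=0)
    print(volume.shape)  # (120, 512, 512): slices x height x width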

The data files created at the Medical Branch were then transferred to the
JSC Integrated Graphics, Operations, and Analysis Laboratory (IGOAL). There
the scans were cropped to eliminate as much extraneous data as possible
without losing any critical information. IGOAL developed a tool called
“ctimager”, which used thresholding to remove unwanted noise and extraneous
data from each slice.
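The original “ctimager” is not publicly documented, but the thresholding it
performed can be sketched as follows; the intensity bounds are illustrative
assumptions, not values from the actual tool.

    import numpy as np

    def threshold_slice(ct_slice, lo, hi):
        """Zero out intensities outside [lo, hi].

        Values below lo drop low-density material such as the foam band;
        values above hi drop scanner artifacts.
        """
        cleaned = ct_slice.copy()
        cleaned[(cleaned < lo) | (cleaned > hi)] = 0
        return cleaned

    # Example: keep a bone-like intensity window on a random test slice.
    ct_slice = np.random.randint(0, 2000, size=(512, 512), dtype=np.int16)
    print(threshold_slice(ct_slice, 300, 1800).max())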

Data Filtering and Conversion of Volume Data to Polygonal Data

The volume data was then converted, using a tool developed by IGOAL called
“dispfly”, into a form that could be displayed directly by the computer.
This tool used multiple filtering algorithms to prepare CT and MRI data for
conversion to polygonal form. The anatomical models were generated using the
marching cubes algorithm.
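As a rough sketch of this step, the scikit-image implementation of marching
cubes (an assumed modern stand-in for “dispfly”) extracts an iso-surface
triangle mesh from a volume; the placeholder data and iso-level below are
illustrative only.

    import numpy as np
    from skimage import measure

    # Placeholder volume standing in for the stacked, filtered scan data.
    volume = np.random.rand(60, 128, 128).astype(np.float32)

    # Marching cubes extracts the triangle mesh of the iso-surface at
    # `level`; the level plays the role of the bone/tissue threshold.
    verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)
    print(f"{len(verts)} vertices, {len(faces)} triangles")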

The filtering process typically consisted of thresholding the data to
eliminate most of the noise. A low-pass filter was then used to minimize the
high-frequency noise that would otherwise produce an irregular, bumpy surface
when input to the algorithm. This process produced a relatively smooth
surface that approximated the scanned specimen and reduced the number of
noise-generated polygons. A unique filter was created for the heart data
that smoothed only the data between scans; no other filtering was needed.
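The filtering described above can be sketched with SciPy (again an assumed
stand-in for dispfly’s filters): a Gaussian kernel is one common low-pass
filter, and restricting it to the scan axis mimics the heart-specific
“between scans only” smoothing. The threshold and sigma values are
illustrative.

    import numpy as np
    from scipy import ndimage

    volume = np.random.rand(60, 128, 128).astype(np.float32)  # placeholder

    # Threshold first to suppress most of the noise.
    volume[volume < 0.3] = 0.0

    # Low-pass (Gaussian) smoothing in all three axes, as for the skull.
    skull_smoothed = ndimage.gaussian_filter(volume, sigma=1.5)

    # Heart-style filter: smooth only along the scan axis (between
    # slices), leaving each individual slice untouched.
    heart_smoothed = ndimage.gaussian_filter(volume, sigma=(1.5, 0.0, 0.0))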

Due to the large number of slices in both the heart and skull data sets,
several models were made, each representing a small number of slices. A
meshing algorithm, “meshit”, was developed to improve display performance by
converting the raw collection of triangles into efficient strips. Each
triangle strip contained an average of over 100 triangles.
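“meshit” itself is not described in detail; the sketch below shows the idea
of strip encoding on a regular grid of vertices, which is roughly the
structure slice-based surface data has. A strip of 2n indices encodes
2(n - 1) triangles, versus three indices per triangle when triangles are
listed independently.

    def grid_to_strips(rows, cols):
        """Encode a rows x cols vertex grid as one triangle strip per
        row band.  Vertex (r, c) has linear index r * cols + c.
        Alternating between two adjacent rows, each strip of 2 * cols
        indices encodes 2 * (cols - 1) triangles.
        """
        strips = []
        for r in range(rows - 1):
            strip = []
            for c in range(cols):
                strip.append(r * cols + c)        # upper-row vertex
                strip.append((r + 1) * cols + c)  # lower-row vertex
            strips.append(strip)
        return strips

    # A 51-column band yields 100 triangles per strip, matching the
    # average the text cites.
    print(len(grid_to_strips(2, 51)[0]))  # 102 indices -> 100 triangles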

Generating Stereo Images

Stereo sequences were rendered after the models were made. IGOAL developed
a tool called OOM (Object Orientation Manipulator), which generated the
sequences by rendering each frame to disk. The frames used red and blue
color separation to encode the stereo pair. Once a sequence was recorded to
disk, it was converted to Macintosh .pict format and transferred to a Mac.
Full-color image sequences were also transferred to the Mac for non-stereo
viewing.
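Red-blue color separation can be sketched as follows (a minimal NumPy
stand-in for OOM’s renderer): two frames rendered from horizontally offset
camera positions are merged so the left eye’s view lands in the red channel
and the right eye’s in the blue; the glasses then route each channel to the
matching eye.

    import numpy as np

    def make_anaglyph(left, right):
        """Merge left/right eye renders into one red-blue stereo frame.
        left, right: (H, W, 3) uint8 RGB frames from two cameras offset
        horizontally by the eye separation.
        """
        frame = np.zeros_like(left)
        frame[..., 0] = left[..., 0]   # red channel carries the left eye
        frame[..., 2] = right[..., 2]  # blue channel carries the right eye
        return frame

    # Example with placeholder renders.
    left = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
    right = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
    print(make_anaglyph(left, right).shape)  # (480, 640, 3)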

Stereo Images and Multi-Media

On the Mac the images were edited to produce desired effects, such as
digitized cadaver overlays or text inserts describing what is being viewed.
Using Apple’s QuickTime extension, the images were converted into QuickTime
movies for animation on the Mac.
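The movie-assembly step can be sketched with the imageio library as a modern
stand-in for the QuickTime toolchain; the placeholder frames, output name,
and frame rate are assumptions for illustration.

    import numpy as np
    import imageio.v2 as imageio

    # Placeholder frames standing in for the rendered .pict sequence; in
    # practice each frame would be read from disk with imageio.imread().
    frames = [np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
              for _ in range(24)]

    # Write the sequence out as a movie file (MP4 here rather than
    # QuickTime; requires the imageio-ffmpeg plugin).
    imageio.mimsave("flythrough.mp4", frames, fps=12)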

Result

The result of all this is a self-contained educational experience that gives
students a new way of learning as they interact with the subject matter
through VR. A series of “fly-throughs” was created, allowing the observer to
experience VR during a tour through the human heart or skull.

Applications and Current Research in VIVED

Current research emphasizes creating a high-resolution VR simulator of the
human body for educational purposes. Applications for this technology
include any area in which complex 3-D relationships must be understood, for
example:

Anatomy education
Education for mechanics of all types
Education for chemistry students
Pathology studies for surgeons
Simulation of plastic and reconstructive surgery
Endoscopic training for surgeons

Other Applications

The University of North Carolina at Chapel Hill has created “predictive”
models of radiation treatment using dynamic images created by ultrasound,
MRI, and X-ray.

The Dartmouth Medical School in Hanover, N.H. has created computational
models of the human face and lower extremities to examine the effects
and outcomes of surgical procedures.

Greenleaf Medical Systems in Palo Alto, CA has created ‘EVAL’ and ‘Glove
Talker’. ‘EVAL’ uses sensor-lined data gloves and data suits to obtain
range-of-motion and strength measurements of injured and disabled patients.
The ‘Glove Talker’ is a data-glove sign-language device used for
rehabilitation; it allows someone without a voice (such as a stroke or
cerebral palsy patient) to make gestures the computer understands. Using an
HMD, the patient can relearn how to open a door, walk, point, or turn around
in space.

Conclusion

CT-scanned medical images of bone (i.e., the skull) can be turned into
high-quality VR imagery for prescribed fly-throughs on the Macintosh
computer using either an HMD or a BOOM system. A heart VR model generated
from MRI data is being developed.

Preliminary results have shown that a high-resolution model can be developed
from this type of imaging data. To maintain the goal of high-quality VR
imaging, some problems caused by the amount of data involved in the
frame-by-frame sequencing of the prescribed fly-throughs had to be overcome.
Alternative hardware and software solutions are being explored to alleviate
this problem.

Another problem has been the technology for HMD display systems. The LCD
displays do not have the resolution needed to maintain a high-quality VR
experience. The CRT displays are reaching the needed resolution; however,
the cost is prohibitive for multiple education platforms.

Surgical simulations may become routine, especially for rehearsing
strategies for intricate and rare operations.

References

“NASA Technology Transfer: Commercial Applications of Aerospace Technology”,
National Aeronautics and Space Administration, Technology Applications.

Porter, Stephen, “Virtual Reality”, Computer Graphics World, March 1992,
pp. 42-54.

Sprague, Laurie A., Bell, Brad, Sullivan, Tim, and Voss, Mark, “Virtual
Reality in Medical Education and Assessment”, Technology 2003, December 1993.

