Figure 1.
On axial cross sections, individual structures are traced and labeled in a process called modeling and segmentation.

Figure 2.
Different views of the cartilaginous nasal structures in the "virtual nose."

Figure 3.
Different views of the "virtual nose" shown in combination with a compatible virtual craniofacial model.

Figure 4.
A virtual reality display system called the ImmersaDesk displays a stereoscopic image that delivers an interactive 3-dimensional virtual reality experience when viewed with specialized eyeglasses. Multiple viewers can share the same virtual reality experience.

Figure 5.
Specialized eyeglasses allow for stereoscopic viewing of the model.

Figure 6.
The "wand."

Figure 7.
Using the "wand," the model can be "grabbed" and moved in space with an optional virtual hand.

Original Article
September 2004

The Virtual Nose: A 3-Dimensional Virtual Reality Model of the Human Nose

Author Affiliations

From the Division of Facial Plastic and Reconstructive Surgery, Department of Otolaryngology–Head and Neck Surgery (Drs Vartanian, Tardy, and Thomas), and the School of Biomedical and Health Information Sciences (Mss Holcomb and Rasmussen and Dr Ai), University of Illinois at Chicago; and the Lasky Clinic, Beverly Hills, Calif (Dr Vartanian).
Arch Facial Plast Surg. 2004;6(5):328-333. doi:10.1001/archfaci.6.5.328
Abstract

Background  The 3-dimensionally complex interplay of soft tissue, cartilaginous, and bony elements makes the mastery of nasal anatomy difficult. Conventional methods of learning nasal anatomy exist, but they often involve a steep learning curve. Computerized models and virtual reality applications have been used to facilitate teaching in a number of other complex anatomical regions, such as the human temporal bone and pelvic floor. We present a 3-dimensional (3-D) virtual reality model of the human nose.

Methods  Human cadaveric axial cross-sectional (0.33-mm cuts) photographic data of the head and neck were used. With 460 digitized images, individual structures were traced and programmed to create a computerized polygonal model of the nose. Further refinements to this model were made using a number of specialized computer programs. This 3-D computer model of the nose was then programmed to operate as a virtual reality model.

Results  An anatomically correct 3-D model of the nose was produced. High-resolution images of the "virtual nose" demonstrate the nasal septum, lower lateral cartilages, middle vault, bony dorsum, and other structural details of the nose. Also, the model can be combined with a separate virtual reality model of the face and its skin cover as well as the skull. The user can manipulate the model in space, examine 3-D anatomical relationships, and fade superficial structures to reveal deeper ones.

Conclusions  The virtual nose is a 3-D virtual reality model of the nose that is accurate and easy to use. It can be run on a personal computer or in a specialized virtual reality environment. It can serve as an effective teaching tool. As the first virtual reality model of the nose, it establishes a virtual reality platform from which future applications can be launched.

The deceptively simple external shape of the human nose belies the complexity and individual variability of its underlying anatomy. The shape of the human nose is formed by the interplay of osseous, cartilaginous, and soft tissue elements. The manipulation of this structural scaffolding is the primary modality used to effect change in rhinoplasty. As such, an accurate and clear understanding of the anatomy of the human nose is a prerequisite for the rhinoplasty surgeon. A firm grasp of nasal anatomy early in a rhinoplasty surgeon's career can minimize the negative consequences of mastering the "learning curve."

Traditional methods of learning anatomy have helped generations of surgeons familiarize themselves with nasal anatomy. These learning modalities have included anatomy atlases, textbooks or literature descriptions, cadaver dissections, lectures by rhinoplasty experts, videos, and intraoperative observations. Many outstanding textbooks and nasal anatomy atlases exist and are usually the first educational tools used in teaching anatomy.1-4 Nevertheless, textbooks can be limited by the artist's or photographer's perspective. Even texts with the highest level of accuracy and image quality are limited by the 2-dimensional nature of printed material, which cannot impart 3-dimensionality. Video material is similarly limited by its inability to deliver a 3-dimensional (3-D) experience. Another limitation of most traditional methods of learning anatomy is their lack of interactivity and feedback.

As a supplement to knowledge gained from books and observation, cadaver dissections can play an important role in the mastery of anatomy. Unfortunately, limited availability and high cost restrict the use of cadavers in most teaching institutions. Even when cadavers are available, learning from cadaver dissection is limited by poorly delineated anatomical structures, by dissection planes that have been altered by chemical preservation, and by the relatively brief time typically spent in the dissection laboratory.

For most surgeons, the mastery of 3-D nasal anatomy is still largely dependent on extensive intraoperative experience. Observation during rhinoplasty is an effective way for many beginning rhinoplasty surgeons to learn nasal anatomy. Clearly, there is no substitute for the knowledge that is gained after having performed a large number of rhinoplasties through dissection, tactile appreciation, and manipulation of nasal structures. It is precisely this intimate understanding of nasal anatomy that the physician-in-training or the beginning rhinoplasty surgeon may be lacking. In such cases, virtual reality teaching tools aimed at teaching 3-D nasal anatomy with interactive capabilities could be particularly useful.

Computerized anatomical models and virtual reality applications have been used to facilitate teaching in a number of other complex anatomical regions.5-7 Virtual reality models of the human temporal bone and the pelvic floor have already demonstrated the great potential of such teaching tools at our institution. In line with our desire to create a 3-D virtual reality model of the human nose and its surrounding structures, we set out to produce the "virtual nose."

METHODS
DATA ACQUISITION

Our goal was to create an accurate virtual reality model based on anatomical data and observations. We based our model on a freshly frozen female cadaver from the Visible Human Project (National Library of Medicine, Bethesda, Md). The head and neck of the cadaver had been previously cut into fine (0.33-mm) axial cross sections and photographed. These high-resolution color photographs had been digitized and were available as a set of consecutive images. Several other cadaver sources were closely evaluated. We chose to use this set of cadaver axial cross sections because it provided (1) thin cross sections (0.33 mm), (2) frozen cadaver sections with minimal tissue distortion, (3) high-resolution images of the cross sections, (4) corresponding computed tomographic data from the cadaver's skull and cervical spine, and (5) relative ease of visualization of different structures on cross-sectional analysis. Our data set consisted of 460 consecutive axial cross sections (0.33-mm cuts) of the midface and nose.

PREPARATION OF DATA

The basic principle used involved combining hundreds of detailed and properly aligned 2-dimensional anatomical cross sections to generate a polygonal 3-D model. Each image was traced using computer software to outline the structures of interest. Individual structures were traced and given structure-specific color codes at each axial level (n = 460) (Figure 1).
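The tracing-and-stacking principle can be sketched in a few lines of Python (our illustration; the article does not describe the authors' actual software). Each slice becomes a 2-D label mask in which every traced structure carries a structure-specific code, and the 460 masks are stacked into a labeled volume at 0.33-mm spacing. The structure names, codes, and colors below are hypothetical.

```python
# Sketch of the segmentation step: per-slice traced outlines become label
# masks that are stacked into a 3-D labeled volume. Codes/colors are
# illustrative, not the authors' actual scheme.
import numpy as np

# Hypothetical structure-specific label codes and display colors (RGB).
STRUCTURES = {
    "lower_lateral_cartilage": (1, (255, 0, 0)),
    "upper_lateral_cartilage": (2, (0, 255, 0)),
    "nasal_septum":            (3, (0, 0, 255)),
}

def stack_slices(slice_masks, slice_thickness_mm=0.33):
    """Stack per-slice label masks (H x W uint8 arrays) into a labeled
    volume; also return the z-coordinate of each slice in millimeters."""
    volume = np.stack(slice_masks, axis=0)
    z_mm = np.arange(len(slice_masks)) * slice_thickness_mm
    return volume, z_mm

# Example: 460 slices of 10 x 10 pixels, with the septum label painted
# in the middle of every slice.
masks = [np.zeros((10, 10), dtype=np.uint8) for _ in range(460)]
for m in masks:
    m[4:6, 4:6] = STRUCTURES["nasal_septum"][0]
volume, z = stack_slices(masks)
print(volume.shape)             # (460, 10, 10)
print(round(float(z[-1]), 2))   # 151.47 mm of covered anatomy (459 * 0.33)
```

Stacking at the true slice spacing is what lets the later surface-extraction step produce a correctly proportioned 3-D model.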

MODEL CREATION

The set of "tracings" was saved as images that were stacked, combined, and used to create 3-D models. Using a specialized computer, several 3-D polygonal models of nasal and midface structures were produced. Further refinements to these initial models were made using advanced 3-D modeling software applications.
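To make the contour-stack-to-polygon idea concrete, here is a minimal sketch (ours, under stated assumptions) of the simplest possible surface extraction: emit one quad face wherever a labeled voxel borders background. The actual project used specialized modeling software, and an isosurface method such as marching cubes would give smoother results; this naive version only illustrates the principle.

```python
# Illustrative sketch: extract the boundary faces of one labeled structure
# from a stacked label volume. Each face is recorded as (voxel, outward
# normal direction).
import numpy as np

def voxel_surface(volume, label):
    """Return boundary faces between voxels carrying `label` and the rest."""
    mask = (volume == label)
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    faces = []
    for z, y, x in zip(*np.nonzero(mask)):
        for dz, dy, dx in offsets:
            nz, ny, nx = z + dz, y + dy, x + dx
            inside = (0 <= nz < mask.shape[0] and 0 <= ny < mask.shape[1]
                      and 0 <= nx < mask.shape[2]) and mask[nz, ny, nx]
            if not inside:  # neighbor is background or out of bounds
                faces.append(((z, y, x), (dz, dy, dx)))
    return faces

# A single labeled voxel exposes all 6 of its faces.
vol = np.zeros((3, 3, 3), dtype=np.uint8)
vol[1, 1, 1] = 3
print(len(voxel_surface(vol, 3)))  # 6
```

Two adjacent labeled voxels share an internal face pair and expose only 10 faces, which is why the extracted surface follows the structure boundary rather than every voxel.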

The surrounding bony craniofacial structures were produced in a separate project and made compatible with the virtual nose for added functionality. Further refinements to the models were made using computer modeling programs to improve surface detail. Small variations in scale were adjusted before our 3-D nasal model was combined with our 3-D skull model. This final computer model of the nose was then programmed to operate as a virtual reality application.
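The scale-matching step mentioned above can be illustrated with a simple bounding-box heuristic (our sketch; the article does not specify how the adjustment was computed, and the landmark choice here is an assumption):

```python
# Sketch of adjusting scale before merging two independently built models:
# estimate a uniform scale factor from bounding-box extents.
import numpy as np

def uniform_scale_to_match(src_pts, dst_pts):
    """Scale factor making src's largest bounding-box extent equal dst's."""
    src_extent = (src_pts.max(axis=0) - src_pts.min(axis=0)).max()
    dst_extent = (dst_pts.max(axis=0) - dst_pts.min(axis=0)).max()
    return dst_extent / src_extent

# Hypothetical corresponding landmark sets from the nose and skull models:
nose_pts = np.array([[0.0, 0.0, 0.0], [2.0, 1.0, 1.0]])   # 2 units long
skull_pts = np.array([[0.0, 0.0, 0.0], [4.0, 2.0, 2.0]])  # 4 units long
s = uniform_scale_to_match(nose_pts, skull_pts)
print(s)  # 2.0 — scale the nose model up by 2 before merging
```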

The models were incorporated into the virtual reality environment using an original set of modular software programming tools.8 A number of computing tools were utilized to enable our model to function in a virtual classroom with audiovisual interaction and feedback. The application was programmed on a commercially available graphics supercomputer (Silicon Graphics Onyx2; Silicon Graphics Inc, Mountain View, Calif). A second set of final models was also simplified for viewing on personal computers as virtual reality modeling language–based models.
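The personal-computer version relies on the virtual reality modeling language (VRML). As a hedged illustration of what such an export involves, the snippet below writes a minimal VRML 2.0 IndexedFaceSet; the triangle mesh is a placeholder, and the project's actual exporter is not described in the article.

```python
# Minimal sketch of serializing a polygonal mesh as a VRML97 (VRML 2.0)
# IndexedFaceSet node, the format used for the personal-computer version.
def write_vrml(points, faces):
    """Serialize a mesh as a VRML97 Shape containing an IndexedFaceSet."""
    pts = ",\n      ".join("%g %g %g" % tuple(p) for p in points)
    idx = " ".join(" ".join(str(i) for i in f) + " -1" for f in faces)
    return (
        "#VRML V2.0 utf8\n"
        "Shape {\n"
        "  geometry IndexedFaceSet {\n"
        "    coord Coordinate { point [\n      " + pts + "\n    ] }\n"
        "    coordIndex [ " + idx + " ]\n"
        "  }\n"
        "}\n"
    )

# Placeholder mesh: a single triangle.
tri = write_vrml([(0, 0, 0), (1, 0, 0), (0, 1, 0)], [(0, 1, 2)])
print(tri.splitlines()[0])  # #VRML V2.0 utf8
```

Any VRML-capable viewer of the era could then load and rotate such a file, which is what makes the downscaled model portable to ordinary desktop machines.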

RESULTS

A 3-D virtual reality model of the human nose and surrounding structures was created. The virtual nose demonstrates the following anatomical structures: lower lateral cartilages, upper lateral cartilages, osseous nasal vault, nasal septum, and overlying skin cover (Figure 2 and Figure 3). Also, the compatibility of the virtual nose with a separately produced craniofacial model allows the user to inspect the skull and cervical spine.9 The virtual nose is most functional when it is viewed in a virtual reality environment. The model was also programmed to run on a personal computer and displayed on a high-resolution computer monitor.

VIRTUAL REALITY VIEWING

Viewing the model in a 3-D virtual reality environment greatly enhances its interactivity and usefulness as a teaching tool. The virtual reality images can be generated in a number of ways. We used a virtual reality display system called an ImmersaDesk (Board of Trustees, University of Illinois at Chicago) to facilitate 3-D viewing of our model.10 The ImmersaDesk resembles an angled rear-projection big-screen television monitor that provides the viewer with a large viewing space, stereovision, and a viewer-centered perspective (Figure 4). It projects a high-resolution image that appears as a slightly offset double image to the naked eye. Lightweight interactive glasses support 3-D stereovision (Figure 5): when the image is viewed through them, the viewer perceives a free-floating 3-D image that retains its hologramlike character regardless of viewing angle. The eyeglasses include a built-in receiver that tracks head position, allowing the computer to continually recompute the viewer's unique perspective; as the viewer moves around the 3-D model, the system generates the corresponding points of view.
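The viewer-centered perspective described above reduces to standard graphics math: each frame, the tracked head position is converted into a view (look-at) transform so that the rendered image matches the viewer's vantage point. A minimal sketch, with made-up positions (the ImmersaDesk's actual tracking pipeline is more involved):

```python
# Sketch of head-tracked, viewer-centered rendering: build a look-at view
# matrix from the tracked eye position each frame.
import numpy as np

def look_at(eye, target, up=(0.0, 1.0, 0.0)):
    """Right-handed 4x4 look-at view matrix for a tracked eye position."""
    eye, target, up = map(np.asarray, (eye, target, up))
    f = target - eye
    f = f / np.linalg.norm(f)                        # forward
    s = np.cross(f, up); s = s / np.linalg.norm(s)   # right
    u = np.cross(s, f)                               # true up
    m = np.eye(4)
    m[0, :3], m[1, :3], m[2, :3] = s, u, -f
    m[:3, 3] = -m[:3, :3] @ eye                      # translate eye to origin
    return m

# Head straight in front of the model, 2 units back along +z:
view = look_at(eye=(0, 0, 2), target=(0, 0, 0))
# The model's origin lands 2 units in front of the camera (-z in view space):
print(np.allclose(view @ np.array([0, 0, 0, 1]), [0, 0, -2, 1]))  # True
```

Recomputing this matrix from the glasses' tracked position every frame is what makes the floating image appear fixed in space as the viewer walks around it.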

EXTRA FEATURES

The model affords the user additional features that are not possible in the study of real physical models. The virtual nose can be viewed with or without a variably transparent skin cover. A mouselike device called the wand allows the viewer to manipulate and interact with the virtual reality models (Figure 6). The user can grab the model with a virtual reality hand and move it in any direction in space (Figure 7). The viewer can turn the model using the wand or change head positions to view the model from different perspectives and angles. The wand also can be used to activate on-screen menu options, much like the clicking action of a mouse on a personal computer. With the wand, the user can zoom in and out of particular areas of interest. The user can make various structures disappear, such as the upper lateral cartilages, lower lateral cartilages, cartilaginous septum, or skull background. Individual structures can be faded in and out of view using intuitive and user-friendly interfaces, which have been built into the model. By converting more superficial structures into phantom images, the viewer can grasp the 3-D relationship of superficial and deeper structures. For instance, by increasing the transparency of the lower lateral cartilages, the relationship of the medial crural footplates and the caudal septum can be clearly appreciated. During all maneuvers, the viewing perspective is constantly adjusted based on the primary viewer's head position.
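The fade-to-phantom behavior can be modeled as a per-structure opacity animated toward a target value each frame. This is our illustrative sketch, with hypothetical structure names, not the model's actual interface code:

```python
# Sketch of fading a structure toward a target opacity, leaving a "phantom"
# of the superficial layer over the deeper anatomy.
def fade(opacity, target, step=0.1):
    """Move an opacity value toward `target` by at most `step` per frame."""
    if opacity < target:
        return min(opacity + step, target)
    return max(opacity - step, target)

# Hypothetical per-structure opacities (1.0 = fully opaque).
opacities = {"skin": 1.0, "lower_lateral_cartilage": 1.0, "septum": 1.0}

# Fade the lower lateral cartilages to 30% over successive frames, so the
# caudal septum shows through:
for _ in range(10):
    opacities["lower_lateral_cartilage"] = fade(
        opacities["lower_lateral_cartilage"], 0.3)
print(round(opacities["lower_lateral_cartilage"], 1))  # 0.3
```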

The networking capabilities of our virtual reality model also expand its potential as a teaching tool. Several viewers can stand in front of the same ImmersaDesk and share the same model, or they can share the same models between multiple ImmersaDesks anywhere in the networked world. In this manner, virtual reality lectures can be carried out by a lecturer in one location with participants in several other locations. Networked participants see the same 3-D models and are able to communicate by voice (2-way audio communication) and to indicate areas of interest by pointing with their own wands. This interactive system is intuitive, easy to use, and currently available in a number of centers. The virtual nose model is also programmed to run on a personal computer using one of several software products capable of displaying virtual reality modeling language models.

COMMENT

The virtual nose represents a useful application of cutting-edge virtual reality technology to the field of facial plastic surgery. The model provides the user with an idealized yet accurate model of the human nose and skull. A clear understanding of intricate nasal anatomy is essential for all surgeons who operate on the nose. Knowledge of nasal anatomy can also be useful to medical students, medical artists, prosthetic fabricators, and other "students" of human anatomy. Clearly, there is no substitute for intraoperative observation and experience. Unfortunately, intraoperative experiences may be limited or unavailable to some. Thus, early exposure to anatomical models (such as the virtual nose) may help to accelerate the learning curve in the mastery of nasal anatomy. Better understanding of nasal anatomy may also contribute to the reduction of undesirable surgical outcomes.

The current model is most impressive in a setting that allows the viewer to have a complete virtual reality experience. The networking potential built into our model can provide an interactive virtual reality experience at multiple remote locations via the Internet, enabling interactive virtual reality lectures on nasal anatomy across the networked world.

Mindful of the specialized equipment needed to run such an interactive virtual reality model, we also developed a simpler version of the model that runs on a current personal computer mated with a high-resolution monitor. This simpler model is still quite impressive and functional but does not deliver a true virtual reality experience. The ability to operate a slightly downscaled version of the virtual nose on a personal computer further leverages the teaching potential of this model. It also provides the user with a more accessible and convenient option for exploring the model.

The current model represents a prototype, which, like all prototypes, can benefit from improvements. We are working on a second-generation virtual reality model of the nose that will demonstrate greater anatomical detail and accuracy and will contain more interactive features. Specific improvements will include the addition of realistic skin color textures, endonasal structures, virtual (lateral and medial) osteotomy paths, better-defined subcutaneous soft tissue layers, and the outline of basic rhinoplasty incisions.

Besides its immediate use as an experimental model and a teaching tool, the virtual nose establishes a reliable virtual reality model of the nose, which can facilitate exciting near future applications based on currently available technology. Deformable computer models allow the modification of polygonal shapes or subunits based on user preference.11 The introduction of deformable virtual reality nasal models will allow the creation of an almost limitless list of variant nasal anatomical structures. For instance, by using such a deformable model, the participant will be able to select from a menu of different alar cartilage variants and observe the net contribution of each variant to the shape of the nose.
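The deformable-model idea can be made concrete with a basic blend-shape scheme: anatomical variants are represented as meshes with corresponding vertices, and the displayed shape is a linear interpolation between them. This is our minimal sketch with invented geometry, not the cited deformable-model framework:

```python
# Sketch of morphing between anatomical variants by linearly interpolating
# corresponding vertex positions (a simple blend-shape scheme).
import numpy as np

def blend(base, variant, t):
    """Interpolate vertices: t=0 gives base, t=1 gives the variant."""
    return (1.0 - t) * base + t * variant

# Hypothetical 2-vertex stand-ins for an alar cartilage and a wider variant:
base_alar = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
wide_alar = np.array([[0.0, 0.0, 0.0], [1.4, 0.2, 0.0]])
halfway = blend(base_alar, wide_alar, 0.5)
print(halfway[1])  # midpoint between the two variant positions
```

Sweeping `t` through a menu of stored variants would let a user watch each cartilage variant's net contribution to nasal shape, which is precisely the educational use case described above.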

The virtual nose, with its deformable features, can also serve as the platform for the development of a simplified virtual reality rhinoplasty simulator. In such a model, simple surgical maneuvers and their effects can be demonstrated in 3-D virtual reality. The combination of 3-D visualization and interactive immediacy can make a rhinoplasty simulator a useful educational tool.

In summary, many exciting virtual reality applications may find future use in facial plastic surgery. We hope that our project is the first of many steps that will be taken worldwide toward this end.

CONCLUSIONS

The virtual nose is a 3-D virtual reality model of the nose that is accurate and easy to use. It can be run on a personal computer or in a specialized virtual reality environment. It can also serve as an effective teaching tool. As a working virtual reality model of the nose, it establishes a virtual reality platform from which future applications can be launched.

Article Information

Correspondence: A. John Vartanian, MD, Lasky Clinic, 201 S Lasky Dr, Beverly Hills, CA 90212 (drvartanian@yahoo.com).

Accepted for publication June 17, 2004.

The modular software programming tools used in this study were developed at VRMedLab (Virtual Reality in Medicine Laboratory) at the University of Illinois at Chicago.

References
1. Tardy ME Jr. Surgical Anatomy of the Nose. New York, NY: Raven Press; 1990.
2. Larrabee WF Jr, Makielski KH. Surgical Anatomy of the Face. New York, NY: Raven Press; 1993.
3. Tardy ME, Thomas JR, Brown RJ. Facial Aesthetic Surgery. St Louis, Mo: Mosby–Year Book Inc; 1995.
4. Toriumi DM, Becker MD. Rhinoplasty Dissection Manual. Philadelphia, Pa: Lippincott Williams & Wilkins; 1999.
5. Ai Z, Dech F, Rasmussen M, Silverstein J. Radiological tele-immersion for next generation networks. In: Westwood JD, Hoffman HM, Mogel GT, Robb RA, Stredney D, eds. Medicine Meets Virtual Reality 2000. Amsterdam, the Netherlands: IOS Press; 2000:4-9.
6. Pearl RK, Evenhouse R, Rasmussen M, et al. The virtual pelvic floor, a tele-immersive educational environment. Proc AMIA Symp. 1999:345-348.
7. Mason TP, Applebaum EL, Rasmussen M, Millman A, Evenhouse R, Planko W. Virtual temporal bone: creation and application of a new computer-based teaching tool. Otolaryngol Head Neck Surg. 2000;122:168-173.
8. Schroeder W, Martin K, Lorensen B. The Visualization Toolkit: An Object-Oriented Approach to 3-D Graphics. Upper Saddle River, NJ: Prentice Hall; 1997.
9. Vartanian AJ, Holcomb J, Ai ZM, Rasmussen M, Tardy ME, Thomas JR. A virtual reality human craniofacial model. Paper presented at: American Academy of Otolaryngology–Head and Neck Surgery Annual Meeting; September 23, 2002; San Diego, Calif.
10. Czernuszenko M, Pape D, Sandin D, DeFanti T, Dawe GL, Brown MD. The ImmersaDesk and Infinity Wall projection-based virtual reality displays. Comput Graph (ACM). 1997;31:46-49.
11. Singh A, Goldgof D, Terzopoulos D. Deformable Models in Medical Image Analysis. Los Alamitos, Calif: IEEE Computer Society Press; 1998.