Preoperative magnetic resonance image of a brain tumor fused with the real-time video surgery to provide intraoperative navigation. (Courtesy of Ferenc Jolesz, MD, Brigham and Women's Hospital, Boston, Mass.)
The precision surgical robot, Robodoc (Integrated Surgical Systems, Davis, Calif), for high-precision hip replacement surgery. (Courtesy of William Barger, MD, University of California, Davis, Sacramento.)
The Zeus telesurgery system for enhanced microsurgery. (Courtesy of Yulun Wang, MD, Computer Motion Inc, Goleta, Calif.)
Sigmoid polyp (arrow) as viewed by virtual colonoscopy. (Courtesy of James Brink, MD, Yale University School of Medicine, New Haven, Conn.)
Preoperative planning with a 3-dimensional reconstruction of a patient's liver and tumor. (Courtesy of Jacques Marescaux, MD, IRCAD, Strasbourg, France.)
Sinus surgery simulator with circles for path planning guidance of the student in endoscopic sinus surgery. (Courtesy of Charles Edmond, MD, University of Washington School of Medicine, Seattle, Wash.)
Anastomosis simulator. (Courtesy of Marc Raibert, PhD, Boston Dynamics Inc, Boston, Mass.)
Satava RM. Emerging Technologies for Surgery in the 21st Century. Arch Surg. 1999;134(11):1197-1202. doi:10.1001/archsurg.134.11.1197
Copyright 1999 American Medical Association. All Rights Reserved. Applicable FARS/DFARS Restrictions Apply to Government Use.
Laparoscopic surgery is a transition technology that marked the beginning of the information age revolution for surgery. Telepresence surgery, robotics, tele-education, and telementoring are the next steps in the revolution. Using computer-aided systems such as robotics and image-guided surgery, the next generation of surgical systems will be more sophisticated and will permit surgeons to perform surgical procedures beyond the current limitations of human performance, especially at the microscale or on moving organs. More fundamentally, there will be an increased reliance on 3-dimensional images of the patient, gathered by computed tomography, magnetic resonance imaging, ultrasound, or other scanning techniques, to integrate the entire spectrum of surgical care, from diagnosis to preoperative planning to intraoperative navigation to education through simulation. By working through the computer-generated image, first with preoperative planning and then during telepresence or image-guided procedures, new approaches to surgery will be discovered. These technologies are complemented by new educational opportunities, such as tele-education, surgical simulation, and a Web-based curriculum. Telementoring will permit further extension of the educational process directly into the operating room.
Laparoscopic and endoscopic surgical procedures have been a major turning point in surgery. These techniques have been implemented in every discipline to change procedures requiring major trauma into minimally traumatic (and minimally invasive) procedures. Yet in doing so, significant physical challenges have been imposed on the surgeon—namely, the loss of the natural and intuitive ability to perform surgery. The surgeon is no longer looking at the patient anatomy directly but rather at a video monitor that is 2-dimensional and not in the direct hand-eye axis. The instrument motions are backwards, based on a fulcrum effect at the trocar insertion site, and there is almost no sense of touch. To restore the natural feeling of performing surgery, several systems have been developed. What they all have in common is the use of information-age technologies such as 3-dimensional (3D) visualization, robotics, teleoperation, and computer-assisted manipulation. These allow the awkward motions of minimally invasive procedures to be translated into natural hand motions from a surgical workstation. In addition, these systems can be programmed to enhance capabilities beyond the limits of human performance. The systems were initially designed to be located near the patient, but it was soon discovered that they could also be used at a remote site. The concept of telesurgery, surgery performed at a remote location, was a by-product of the inherent capability of these systems.
Even more fundamental is the underlying concept on which these new technologies are based—using information science and information itself as a tool to improve patient care. This concept was originally articulated in 1995 by Nicholas Negroponte of the Massachusetts Institute of Technology Media Lab when he described the idea of using " . . . bits instead of atoms." That is, using the information representation (bits) of a real object (atoms) in the real world. His example is the facsimile machine. For thousands of years, documents were sent from place to place in the form of atoms—papyrus, clay tablets, parchment, letters, and so on. Today the same documents are sent in the form of bits—the fax. The transmission of bits is faster, cheaper, and more efficient. Thus, by working in the "information world" of bits, we can improve efficiency and even accomplish things that would not be possible in the world of atoms. Laparoscopic surgery is the first step into this information world, and dexterity-enhanced, computer-assisted, image-guided, or robotic surgery is the next step. In laparoscopic surgery, the surgeon looks at the video monitor—seeing the "information equivalent" of the actual organs and tissues. Although highly innovative new instruments are continuously being developed, laparoscopic surgery began with a grasping instrument that was first illustrated by Ambroise Paré in 1523. Today the laparoscopic instruments are still mechanically manipulated by hand, which is very awkward and counterintuitive, and even the best and most innovative tools are modifications of previous systems. With computer-assisted, dexterity-enhanced surgery, hand motions are converted into electronic signals (the information equivalent of touch and motion) at the handles of the workstation and then transmitted to the tip of the instrument inside the patient.
Now the entire surgical procedure is electronic—conducted in the information world through video images and electronic hand signals. Just as the video image can magnify and improve the view over that achieved when the surgeon is operating from outside the body, so too can electronic hand motions be enhanced to allow the surgeon greater dexterity and higher precision than in open surgery. Laparoscopic surgery can be regarded as a transition into the information world that will permit surgery not only on a normal scale, but even on a microscopic, cellular, or nanoscale—clearly beyond the performance of the unaided hand.
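To make the idea of electronic hand motions concrete, the following sketch shows how a workstation might scale the surgeon's motions down and smooth out tremor before relaying them to the instrument tip. This is an illustrative toy under stated assumptions, not the algorithm of any commercial system; the function name, the 5:1 scaling factor, and the moving-average filter are all assumptions.

```python
import numpy as np

def enhance_motion(hand_positions, scale=0.2, window=5):
    """Translate workstation hand motion into instrument-tip motion.

    hand_positions: (N, 3) array of handle positions (mm).
    scale:  motion-scaling factor (0.2 means a 5:1 reduction).
    window: moving-average length for tremor suppression.
    Returns tip displacements relative to the starting position.
    """
    hand = np.asarray(hand_positions, dtype=float)
    # Dexterity enhancement step 1: scale displacements down
    displacements = (hand - hand[0]) * scale
    # Dexterity enhancement step 2: average away high-frequency tremor
    window = max(1, min(window, len(hand)))
    kernel = np.ones(window) / window
    return np.column_stack([
        np.convolve(displacements[:, axis], kernel, mode="same")
        for axis in range(3)
    ])
```

With `scale=0.2`, a 10-mm hand excursion becomes a 2-mm tip excursion, which is the sense in which electronic motion can exceed the precision of the unaided hand.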
It is an additional bonus that the ability to enhance human performance through information technologies also provides the ability to perform surgery in remote locations. The patient and surgeon do not have to be at the same site to have improved surgical performance. Because the hand motions are transmitted electronically to the instrument tips, the procedures can be performed over distances. There are challenges to implementing distant surgery, such as the reliability (or quality of service) of the telecommunications lines and the issue of latency, the delay from when the surgeon initiates a hand motion until the remote manipulator (end effector or surgical instrument) actually moves. Both human performance standards and the mechanical stability of the system impose a maximum tolerable delay, from hand motion to instrument-tip movement, of approximately 200 milliseconds. Currently, this latency limits the distance to a few hundred miles over terrestrial telecommunications and renders remote surgery impossible over satellite systems (in geosynchronous orbit at approximately 22,000 miles), which have a latency of nearly 1.5 seconds. However, remote surgery over telemedicine networks, or telesurgery, will be determined less by the technology than by nontechnical issues, such as reimbursement, legal issues, patient referral patterns, and physician and patient behavioral patterns. While technically feasible under certain conditions, it remains to be proved that such a system would be financially or clinically viable. Thus, of the 2 advantages of telepresence (or robotic) surgery—dexterity enhancement and remote surgery—only the former is currently employed clinically.
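The latency arithmetic above can be checked with a rough back-of-the-envelope calculation. The constants below (signal speed in fiber, geosynchronous altitude, the 200-ms budget) are assumptions for illustration; note that pure propagation delay over a satellite round trip already approaches half a second, and real links add switching and processing delays that push the total toward the 1.5 seconds cited.

```python
# Back-of-the-envelope telesurgery latency (all figures are assumptions):
# signals travel at roughly 2/3 the speed of light in optical fiber, and
# a geosynchronous satellite orbits about 22,000 miles above the equator.

FIBER_SPEED_M_PER_S = 2.0e8      # ~2/3 c in glass fiber
RADIO_SPEED_M_PER_S = 3.0e8      # ~c for the satellite radio link
MILES_TO_METERS = 1609.34
GEO_ALTITUDE_MILES = 22_000
LATENCY_BUDGET_S = 0.200         # ~200-ms human/mechanical limit

def round_trip_fiber_s(distance_miles):
    """Command out plus video back over terrestrial fiber."""
    one_way = distance_miles * MILES_TO_METERS / FIBER_SPEED_M_PER_S
    return 2 * one_way

def round_trip_geo_s():
    """Four hops: up and down for the command, up and down for video.
    Propagation alone approaches 0.5 s; switching and processing
    overhead pushes the observed total toward ~1.5 s."""
    hop = GEO_ALTITUDE_MILES * MILES_TO_METERS / RADIO_SPEED_M_PER_S
    return 4 * hop
```

A 300-mile fiber link stays comfortably inside the 200-ms budget, while the geosynchronous path exceeds it on propagation alone, which is why the article limits terrestrial telesurgery to a few hundred miles.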
The earliest implementation of any type of computer-aided surgery began in neurosurgery with stereotactic surgery. Preoperative computed tomographic (CT) or magnetic resonance imaging (MRI) scans served as a planning tool to precisely locate the lesion, plan the trajectory across normal brain tissue that would cause the least possible morbidity, and then feed the information from the computerized image into the stereotactic framework at the time of surgery. This provided positioning accurate to within 1 mm. The latest innovations include the open MRI scanner of Jolesz,1 which permits real-time acquisition of the brain image with updates to correct the slight, but critical, positional shifts of structures due to craniotomy, cerebrospinal fluid changes, and tissue accommodation to the instruments. This also permits data fusion of the video image of the procedure with the MRI image, for positioning with better than 0.5-mm accuracy (Figure 1). Neurosurgery continues to be one of the clinical fields using these new technologies most frequently, and pioneering efforts are underway to extend the capabilities to spinal surgery (eg, pedicle screw placement) as well. Although less sophisticated, stereotactic breast biopsy and hepatic brachytherapy are first steps for general surgery to implement stereotactic image-guided procedures.
Orthopedic surgery has benefited from the precision and accuracy of computer-aided surgery with Robodoc (Integrated Surgical Systems Inc, Davis, Calif),2 a preoperative planning and intraoperative femoral canal boring system to assist in femoral head replacement with a prosthesis (Figure 2). After the computer-generated image of a femoral prosthesis is matched to the digital image of the x-ray of the patient's femur, data from the 2 images are fed to the Robodoc system, which then performs the milling of the femoral shaft. The conventional handheld broach method of coring out the femur achieves approximately 75% prosthesis-to-bone contact; with Robodoc, this is increased to 96%. The system is in use in Europe but currently remains in clinical trials in the United States. Recently, DiGioia et al3 have developed HipNav (CA Surgica, Pittsburgh, Pa), a full 3D planning and intraoperative navigation system for computer-assisted placement of the acetabular prosthesis.
Telepresence surgery, a teleoperation system in which the surgeon directly controls the motion of the tips of the instruments, was first described by Satava and Green4 in 1992. This early system, developed by Green et al5 at SRI International (Menlo Park, Calif), was validated by Bowersox et al6 by performing vascular anastomoses on animals. Just 5 years later, in 1997, Himpens et al7 in Belgium performed the first telesurgery cholecystectomy on a patient, using a commercial version of the telepresence surgery system developed by Intuitive Surgical Inc (Menlo Park), and within a year Carpentier et al8 in France had performed more than 150 telepresence surgery cardiac procedures on the beating heart. In a similar manner, Computer Motion Inc, Goleta, Calif, developed a telesurgical system (Figure 3) capable of performing microsurgical procedures, and Falcone et al9 reported the results for tubal anastomoses in gynecology. Several other systems with similar capabilities are being developed that attempt to reduce the complexity and cost of the systems while maintaining a high level of accuracy and safety. Current efforts for telepresence systems are directed toward providing the capability to perform surgery on the beating heart through motion compensation; this would allow the surgeon to operate on any moving structure with the same precision as if it were perfectly still.
As indicated earlier, reconstruction of patient-specific data from CT or MRI scans into full 3D models of individual patient pathology has provided a new tool for diagnosis, preoperative planning, surgical simulation, and intraoperative navigation. Virtual colonoscopy (Figure 4) is one example of the diagnostic potential of this approach. Numerous investigators, such as Vining10 and Robb,11 reconstruct (through segmentation) the 3D representation of an individual patient's colon from his or her CT scan. Then, using sophisticated computer programs derived from military flight planning, they are able to "fly through" the inside of the virtual colon, giving a view identical to that of actual video colonoscopy. There is also the ability to "fly" outside of the colon to look for local invasion or metastases. Studies are currently underway to compare the efficacy of virtual colonoscopy with standard video colonoscopy as a screening tool for colon cancer and polyps. The advantages of such a noninvasive procedure, which has no complications and is less costly, are self-evident. Current limitations include the time and dedicated personnel needed to create the 3D image and the need for an adequate bowel preparation to reduce artifact; however, preliminary detection rates in screening for colon polyps of 0.5 cm or larger are greater than 95%, which is equal to or better than standard colonoscopy. Obviously, virtual endoscopy can be applied to nearly every region of the body, and diverse applications such as virtual sinusoscopy, inner-ear endoscopy, bronchoscopy, angioscopy, and intracardiac "fly throughs" are currently being evaluated. While virtual endoscopy will most likely replace invasive procedures for screening and possibly diagnosis, a substantial number of patients will continue to require surgical endoscopic procedures for therapy, such as polypectomy, sphincterotomy, and stent placement.
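A minimal sketch of the segmentation step described above: starting from a seed voxel known to lie inside the colon, grow a region through connected air-density voxels of the CT volume. Real virtual colonoscopy pipelines add surface extraction and rendering on top of this; the threshold value, the function name, and the 6-connectivity choice are illustrative assumptions, not the method of any cited investigator.

```python
from collections import deque

import numpy as np

def segment_colon(ct_volume, seed, air_threshold=-800):
    """Seeded region growing over a CT volume in Hounsfield units.

    Starting from `seed` (a voxel known to lie in the colon lumen),
    collect every 6-connected voxel darker than `air_threshold`.
    Returns a boolean lumen mask suitable for surface extraction.
    """
    volume = np.asarray(ct_volume)
    is_air = volume < air_threshold
    mask = np.zeros(volume.shape, dtype=bool)
    queue = deque([tuple(seed)])
    while queue:
        z, y, x = queue.popleft()
        if not (0 <= z < volume.shape[0]
                and 0 <= y < volume.shape[1]
                and 0 <= x < volume.shape[2]):
            continue  # off the edge of the scan
        if mask[z, y, x] or not is_air[z, y, x]:
            continue  # already visited, or tissue rather than air
        mask[z, y, x] = True
        for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            queue.append((z + dz, y + dy, x + dx))
    return mask
```

Seeding from inside the lumen keeps other air-filled structures (lungs, stomach gas) out of the mask, which is why a simple global threshold alone is not enough.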
The 3D patient-specific images are also used for preoperative planning of complicated procedures, such as maxillofacial or hepatic operations. Altobelli et al12 and Montgomery et al13 have clinical experience in maxillofacial reconstructive surgery, providing symmetrical prostheses for craniofacial dysostosis and mandibular anomalies, and Marescaux et al14 used 3D images both for preoperative planning of complicated hepatic resections and for training residents in surgical simulation. In 1996, Levy15 began using patient-specific 3D reconstructions of intrauterine tumors, which were imported into a hysteroscopy simulator, to preplan and rehearse various options of the procedure and to customize his operations for each individual patient. The HipNav system of DiGioia et al3 inherently contains the same type of preoperative planning for the acetabular portion of hip surgery, optimizing selection and placement of the prosthesis. The next-generation HipNav system will also incorporate a modification of the NeuroStation (Sofamor Danek, Memphis, Tenn), an intraoperative navigation system that assists in precise alignment of the prosthesis during the procedure by data fusion of the preoperative image over the patient and tracking of the instruments and prosthesis to ensure proper registration.
Intraoperative navigation for stereotactic surgery is an important application of 3D visualization, most commonly called image-guided surgery. The technique of intraoperative registration has been used for years by Jolesz,1 while Bucholz and Greco16 and Smith et al17 have developed the NeuroStation for instrument tracking in neurosurgery. When the open MRI scanner for real-time tracking is combined with data fusion as indicated earlier, precision is increased and for the first time real-time correction of stereotactic navigation pathways is possible. One less-costly alternative to real-time intraoperative MRI navigation has been ultrasound, which is used extensively in ultrasound-guided breast biopsy or prostate biopsy. However, at this time the ultrasound systems do not have the high resolution that CT or MRI scanners have, though progress in sophisticated beam forming and digital signal processing algorithms is dramatically increasing the visual fidelity of ultrasound.
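The registration at the heart of image-guided navigation can be illustrated with the standard least-squares solution for matched fiducial points (the Kabsch/SVD method). This is a generic textbook technique shown as a sketch, not the specific algorithm of the NeuroStation or the open MRI system.

```python
import numpy as np

def register_rigid(image_points, patient_points):
    """Least-squares rigid registration of matched fiducial points
    (Kabsch/SVD solution): find rotation R and translation t so that
    patient_points ~ R @ image_points + t.
    """
    src = np.asarray(image_points, dtype=float)
    dst = np.asarray(patient_points, dtype=float)
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centered point sets
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) solution
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t
```

Given at least 3 non-collinear fiducials visible both in the preoperative scan and on the patient, the recovered transform maps any image coordinate into operating-room coordinates, which is what allows tracked instruments to be overlaid on the scan.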
As indicated earlier, the uses of the 3D image are many, and a patient-specific image can be imported into a surgical simulation and used for education, training, and even testing and evaluation. The precedent is flight simulators for pilots, who not only train extensively on simulators but also must pass a series of manual dexterity simulations to ensure technical proficiency before acquiring certification and licensure. Pioneering efforts at surgical simulation began with the rise of virtual reality in 1990, with a simulation by Delp et al18 of tendon transplant for the lower extremity. That simulation was quite simplistic, as was the abdominal simulator of Satava19; however, both provided interactivity with 3D computer representations. Simulations have improved through increased visual fidelity, the addition of haptic input and patient-specific data (Levy15), and the combination of preoperative planning with surgical simulation (Figure 5) (Marescaux et al14). Edmond et al20 have incorporated the flight training technique of terrain following into a sinusoscopy simulator. By overlaying a series of diminishing circles along the projected path that the endoscope should take, entry-level students can practice procedures with computer-generated guidance (Figure 6). The latest addition has been an anastomosis simulator (Figure 7) developed by Raibert et al,21 which incorporates automatic measurement of hand position, pressure on the instruments, and tearing forces applied to the surface of virtual tissue. These parameters are continuously tracked, graphed, and presented for immediate feedback on manual performance. This can be used for real-time reinforcement during training, or tabulated into a "report card" for evaluation and possibly, in the future, certification.
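The kind of objective motion metrics a simulator "report card" might tabulate can be sketched as follows. The particular metrics (path length, task time, a jerkiness proxy) and all names are illustrative assumptions, not those of the Raibert et al system.

```python
import numpy as np

def motion_metrics(positions, dt):
    """Objective skill metrics from tracked instrument-tip positions.

    positions: (N, 3) samples in mm; dt: sampling interval in seconds.
    Returns total path length, task time, and a jerkiness proxy (mean
    change in velocity between samples; lower is smoother).
    """
    p = np.asarray(positions, dtype=float)
    steps = np.diff(p, axis=0)
    path_length = np.linalg.norm(steps, axis=1).sum()   # mm traveled
    task_time = (len(p) - 1) * dt                       # seconds
    velocity = steps / dt
    if len(p) > 2:
        jerkiness = np.linalg.norm(np.diff(velocity, axis=0), axis=1).mean()
    else:
        jerkiness = 0.0
    return {"path_length_mm": path_length,
            "task_time_s": task_time,
            "jerkiness": jerkiness}
```

An economical, smooth path (short length, low jerkiness, short time) is the usual signature of expert performance, which is what makes such measures candidates for objective skill assessment.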
Despite the advances in information science for simulation, the prime consideration remains the educational content and the curriculum; the technologies are simply more sophisticated tools. However, with the fidelity of surgical simulation increasing, the number of educators using simulation to objectively measure manual skills is increasing. Unfortunately, there is no consensus as to the tasks to perform to determine competence, nor are there even the proper metrics to determine successful skill acquisition. Thus, one of the most important challenges for surgical education will be deriving appropriate metrics to apply to a standardized set of tasks that can objectively measure performance of manual skills. Once it is demonstrated that specific tasks can improve and assess technical skills, a stringent long-term evaluation of performance related to clinical outcomes in the operating room must be conducted. While it has taken more than 40 years for the aviation industry to accomplish this objective, the technology exists today that could allow surgical simulation to achieve such a goal in a much shorter time frame.
In addition to the introduction of medical simulation, the traditional medical educational process is being enriched by the potential of 3D visualization and tele-education. Hoffmann and Murray22 have used 3D visualization for a next-generation educational program called VisualizeR. This program provides a medical student curriculum that uses the 3D image of the Visible Human as both the interface and the visual index to the contents. By clicking on an organ or body part, the student can access all related information, such as histology, anatomy, video clips of surgical procedures, radiological images, and so on. In addition, physiologic processes are animated to illustrate function. Because the images are full 3D, they can be manipulated for "fly through" diagnoses, taken apart and reassembled, or practiced on in surgical simulations. The curriculum currently resides on an internal network of computers at the University of California, San Diego; however, it is being converted to a Web-based format that will be available over the Internet. Several similar projects are emerging from other academic institutions, such as those of Imielińska et al23 of Columbia University, New York, NY (the Vesalius Project), and Heinrichs and Dev24 of Stanford University, Palo Alto, Calif (the SUMMIT Project). All of these educational programs, which were started within the individual institutions, are moving toward Web-based applications. Powerful new platform-independent languages and formats, such as Java, Hypertext Markup Language (HTML), and Virtual Reality Modeling Language (VRML), are just a few of the technologies providing the opportunity for tele-education using any type of computer from any Internet connection.
In addition, the educational sites are becoming total medical resources by providing links (through hypertext "hot links") to other important reference sources on other Web sites, such as the National Library of Medicine's MEDLINE for indexed publications or the Visible Human Project for full 3D anatomy reference. The original tele-education, which consisted of video conferencing from site to site in real time (synchronous education), is now supplemented by an entire universe of non–real-time (asynchronous) education through Web sites, allowing students the opportunity to supplement their education at the times most convenient to them. As indicated above, the key issue is not the technology but the educational content of the Web site and, to a certain extent, the interface (how easy it is to understand and use). Because literally anyone can post a Web site, a premium attaches to those who create high-quality content. Educational value is thus tied to "brand equity" or "trusted source," meaning those reputable academic institutions that are known for high-quality education and whose Web sites live up to their parent institution's high educational standards. Thus, educational opportunities are moving in the direction of highly distributed educational curricula from trusted sources, with greater convenience through asynchronous learning.
What all these new information-age learning tools have in common is the ability to provide a virtual educational experience by interacting with a 3D representation of human anatomy on the computer as if the structures actually existed. In addition, these curricula are able to be customized for each student; once the student enters his or her profile and begins using the system, the program tailors the lesson to the individual's pace, modifies the session based on responses, keeps track of performance, and gives real-time feedback. This is the first stage of highly personalized medical education, providing for each student a personal (computer-generated) mentor for guidance and assistance.
Using the Internet and telemedicine invokes another possibility, that of telementoring. Early efforts by Rosser et al25 and subsequent refining of the method by Lee et al26 have opened an extended form of direct surgical training. Rosser has developed a curriculum that begins with a laparoscopic training course at Yale University, New Haven, Conn, but then continues in the local hospital operating room, where observation of the surgeon's performance through video conferencing and teleillustration continues the mentoring process. No longer does the trainee take a short course in laparoscopic surgery on a pig and then begin operating alone on patients. The question arises whether this same capability can be used for certification or hospital privileging through teleproctoring, thereby reducing the cost and increasing the availability of experts for proctoring. Certainly numerous nontechnical and ethical issues must be resolved before there can be widespread acceptance.
As the technologies discussed in this article continue to improve and stringent clinical evaluation either confirms or fails to validate their effectiveness, surgery will move to even less invasive or noninvasive modalities throughout the spectrum of health care. Perhaps 20 to 50 years in the future, total body scanning systems will become ubiquitous and affordable to a point where a total patient scan can be accomplished—similar to the Whole Body Scan (CyberWare Inc, Monterey, Calif),27 a laser surface scanning device that is being experimentally used by the US Air Force to acquire precise anthropometric measurements to custom-tailor clothing for pilots. The first generation of high-resolution handheld ultrasound systems is becoming available at a cost that is an order of magnitude less than that of conventional systems. Likewise, next-generation noninvasive sensor systems, similar to pulse oximetry, are able to acquire many physiologic and biochemical parameters previously available only from blood samples. The result will be the ability to create a full 3D representation of a patient, which can be used by a surgeon for noninvasive detection, diagnosis, preoperative planning, and assistance during treatment. The implications for surgery are that we are in the middle of a transition period that will fundamentally change the practice of surgery. However, in the foreseeable future, there will be the need for both conventional surgery and laparoscopic (minimally invasive) surgery. As the new technologies are validated, there will be a new richness to surgery that will require even more surgical skills and training. The practice of surgery will not be replaced, but rather will change and mature. The golden age of surgery, in which the outcome of a surgical procedure was determined solely by the manual skill of the surgeon, is over.
Technologies that enhance the surgeon's ability beyond human limitations have provided newfound capacity to perform procedures that were previously considered impossible. Surgery will no longer be limited to what can be seen with the naked eye or felt by the human hand; 3D imaging and telepresence surgery extend the reach beyond these limits. The challenge to the surgeon is to be aware of the opportunities, rigorously evaluate the technologies, and be willing to change if evidence-based outcomes demonstrate a clear benefit for the patient.
The opinions or assertions contained herein are the private views of the author and are not to be construed as official, or as reflecting the views of the Department of the Army, Department of the Navy, the Defense Advanced Research Projects Agency, or the Department of Defense.
Corresponding author: Richard M. Satava, MD, Department of Surgery, Yale University School of Medicine, 40 Temple St, New Haven, CT 06510 (e-mail: firstname.lastname@example.org).