Understanding surgical anatomy is crucial in both preoperative planning and intraoperative decision-making. To meaningfully interpret patient anatomy with respect to performing surgical procedures, one must be able to translate the visual information obtained from conventional 2-dimensional images (eg, computed tomography and magnetic resonance imaging) to the direct operative view. This translation can be challenging, and valuable information may be misinterpreted, potentially leading to errors. When visual data are presented in a more familiar 3-dimensional format, the positions of organs relative to other important anatomical structures are more easily understood.1 One exciting way of addressing this challenge is to harness the potential of a relatively new medical technology called mixed reality. Mixed reality is a merging of the real and virtual worlds in which computer-generated images exist alongside physical objects. This contrasts with virtual reality, in which the user is immersed in a completely computer-generated world. This article describes the process of creating a 3-dimensional, computer-generated mixed-reality model from the computed tomographic data of actual patients and its use as a training tool and intraoperative guide.