Medical Simulation/Taxonomy

Motivation
To our knowledge, there is no taxonomy for VR-based medical simulators. However, a taxonomy offers many benefits:
 * It provides standardized terminology and classification;
 * it eases communication between engineers, medical experts, educators, and other involved disciplines;
 * it can be used after task analysis to prioritize components;
 * it facilitates analysis and validation.
In summary, a taxonomy supports both communication and development.

The taxonomy on this page is based on the related work summarized below. The intention is to make the taxonomy more accessible and open to community-based extensions and changes.

Related Work
In medical simulation, there is an overwhelming number of papers describing simulators and algorithms. To create a taxonomy, we identified and analyzed numerous sources, position papers and surveys of existing simulators among them.

Satava postulated five generations of simulators: geometric anatomy, physical dynamics modeling, physiologic characteristics, microscopic anatomy, and biochemical systems. Furthermore, he defined the following requirements for realism in medical simulators: visual fidelity, interactivity between objects, object physical properties, object physiologic properties, and sensory input.

Liu et al. discriminate between technical (deformable models, collision detection, visual and haptic displays, and tissue modeling and characterization) and cognitive components (performance and training).

Delingette divided simulator components into input devices, the surgery simulator itself (collision detection and processing, geometric modeling, physical modeling, haptic rendering, and visual rendering), and output devices.

In a recent overview by John, three areas have been defined: input data, processor, and interaction. Here, interaction has been subdivided into haptics, display technologies, other hardware components, and algorithms and software.

Taxonomy
Merging the definitions and reports of the related work, we propose a taxonomy (see outline below) with three main classes: Datasets, Hardware, and Software. In the following, we provide a brief definition of each class and give some examples that will be discussed in more detail in the following chapters of this book.


 * Datasets
   * Synthetic
     * Computed
     * Modeled
   * Subject-specific
     * In Vivo
     * Ex Vivo
 * Hardware
   * Interaction devices
     * Sensor-based
     * Props
   * Processing Unit
     * Stationary
     * Mobile
   * Output
     * Visual
     * Haptic
     * Acoustic
 * Software
   * Model
     * Technical
     * Content
   * Interaction
     * Tasks
     * Metaphors
     * Technical
   * Simulation
     * Static
     * Dynamic
     * Physiological
   * Rendering
     * Visual
     * Haptic
     * Acoustic
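The class hierarchy above can also be encoded as a simple nested data structure, which makes it easy to enumerate or extend the classes programmatically. The sketch below is an illustrative encoding, not part of the original taxonomy; only the class names are taken from the outline.

```python
# Illustrative encoding of the taxonomy as a nested dictionary.
# All class and subclass names are taken from the outline above.
TAXONOMY = {
    "Datasets": {
        "Synthetic": ["Computed", "Modeled"],
        "Subject-specific": ["In Vivo", "Ex Vivo"],
    },
    "Hardware": {
        "Interaction devices": ["Sensor-based", "Props"],
        "Processing Unit": ["Stationary", "Mobile"],
        "Output": ["Visual", "Haptic", "Acoustic"],
    },
    "Software": {
        "Model": ["Technical", "Content"],
        "Interaction": ["Tasks", "Metaphors", "Technical"],
        "Simulation": ["Static", "Dynamic", "Physiological"],
        "Rendering": ["Visual", "Haptic", "Acoustic"],
    },
}

def leaf_classes(taxonomy):
    """Yield (main class, subclass, leaf) triples for every leaf of the tree."""
    for main, subclasses in taxonomy.items():
        for sub, leaves in subclasses.items():
            for leaf in leaves:
                yield (main, sub, leaf)

print(sum(1 for _ in leaf_classes(TAXONOMY)))  # → 22
```

A community extension would then amount to adding a new leaf or subclass to this dictionary.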

Datasets
Synthetic datasets can be Computed (e.g., based on statistical models or heuristics) or Modeled (e.g., produced by digital artists with 3D modeling tools, or sourced from CAD designs of instruments). Usually, these are well-meshed surface geometries with highly detailed textures. Another approach is Subject-specific datasets. Several medical imaging modalities (e.g., sonography, MRI, CT) allow the reconstruction of volume data that can either be used directly or segmented for further processing. Furthermore, physiological parameters, tissue properties, and other quantities can be measured either In Vivo or Ex Vivo.

Hardware
Interaction devices can either be Sensor-based or Props. Sensor-based devices can be commercial off-the-shelf products, self-constructed prototypes, or hybrids; examples range from game console controllers to haptic devices and optical tracking systems. Props can replicate body parts or instruments that are either augmented, tracked, or simply passive parts of the overall setup. The Processing unit refers to the kind of computing system used for the simulator. This can be a Stationary system (e.g., single- or multi-core systems, clusters, or servers) or a Mobile system (e.g., handheld devices, or streaming clients). Furthermore, GPUs can be used for parallelization. Finally, the Output can be realized on several modalities, Visual, Haptic, and Acoustic being the three most common. The visual component can be further divided into different display types: head-mounted display (HMD), screen, or projection screen, with or without stereoscopic rendering. Likewise, haptics is divided into tactile and kinesthetic feedback.

Software
The Model is the link between the datasets and the algorithms. It can be regarded from two points of view: Technical (e.g., data structures, LODs, mappings) and Content (e.g., patient, instruments, and environment). One crucial element for the acceptance of a medical simulator is the Interaction, for which numerous solutions exist in HCI and 3DUI research. Here, we can distinguish between Tasks (e.g., navigation, selection, manipulation, session management, and assessment), Metaphors (e.g., direct "natural" interaction, or gestures), and Technical elements (e.g., GUI elements, OSDs, or annotations). Simulation is divided into different levels: Static (e.g., fixed structural anatomy, environment), Dynamic (e.g., physics-based with collision detection and handling, rigid body dynamics, or continuum mechanics applied to soft tissue), and Physiological (e.g., functional anatomy, or the physiome project). The Rendering is tightly coupled to the results of the simulation. It can be divided into Visual, Haptic, or Acoustic algorithms.
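To make the class definitions concrete, an individual simulator can be described by picking leaves from each main class. The trainer below and its assignments are hypothetical, chosen only to illustrate how the taxonomy paths compose; none of it comes from the original text.

```python
# Hypothetical profile of a laparoscopic trainer, expressed as taxonomy paths
# of the form "Main class/Subclass/Leaf". The trainer itself is made up; the
# path components mirror the taxonomy outline.
profile = {
    "Datasets/Synthetic/Modeled",           # CAD models of the instruments
    "Datasets/Subject-specific/In Vivo",    # measured tissue properties
    "Hardware/Interaction devices/Props",   # tracked instrument handles
    "Hardware/Processing Unit/Stationary",
    "Hardware/Output/Visual",
    "Hardware/Output/Haptic",
    "Software/Model/Content",
    "Software/Interaction/Tasks",
    "Software/Simulation/Dynamic",          # soft-tissue deformation
    "Software/Rendering/Visual",
    "Software/Rendering/Haptic",
}

def covered_classes(profile):
    """Return the set of main classes the profile touches."""
    return {path.split("/")[0] for path in profile}

print(sorted(covered_classes(profile)))
# → ['Datasets', 'Hardware', 'Software']
```

Such flat path sets make it straightforward to compare simulators, e.g., by set intersection of their profiles.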