This project will develop and evaluate a new genre of surgical simulators that quickly, efficiently, and accurately capture and convey expertise in standard and novel surgical skills. The simulators will create a unique surgical learning experience in which time is not restricted and the learner is free to explore and make mistakes. However, users of current simulators can experience cognitive overload, becoming overwhelmed by data streams from multiple modalities such as video, animation, text, and narration. We therefore apply cognitive task analysis and efficient interface design to improve cognitive efficiency and decision-making skills. The multimodal user-state sensing interface captures sensory input from haptics, video (e.g., the learner's facial affect), and speech. Of particular interest, from the logged positional and pressure data of the user's haptic input, we will develop a model of positional and pressure differences that can distinguish user state under baseline and problematic conditions. We aim to identify similar classifiers for speech and video. This dynamic feedback will allow mentoring to adjust to the learner's current state.
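The haptic user-state model described above might be sketched as follows. This is a minimal illustration only: the feature names, thresholds, and two-feature rule are illustrative assumptions, not the project's actual classifier.

```python
# Hypothetical sketch: labeling learner state from logged haptic data.
# The features (mean pressure, positional jitter) and the thresholds
# below are illustrative assumptions, not the project's real model.

from statistics import mean, pstdev

def haptic_features(samples):
    """Reduce a log of (x, y, z, pressure) tuples to two summary features:
    mean applied pressure and positional jitter (avg. per-axis std. dev.)."""
    pressures = [s[3] for s in samples]
    jitter = mean(pstdev([s[i] for s in samples]) for i in range(3))
    return mean(pressures), jitter

def classify_state(samples, pressure_limit=0.8, jitter_limit=0.05):
    """Label the user state 'problematic' when pressure or hand jitter
    exceeds the (illustrative) limits, else 'baseline'."""
    p, j = haptic_features(samples)
    return "problematic" if (p > pressure_limit or j > jitter_limit) else "baseline"

# Steady, low-pressure motion vs. trembling, high-pressure motion:
steady = [(0.10 + 0.001 * i, 0.20, 0.30, 0.5) for i in range(50)]
shaky = [(0.10 + 0.2 * (-1) ** i, 0.20, 0.30, 0.9) for i in range(50)]

print(classify_state(steady))  # -> baseline
print(classify_state(shaky))   # -> problematic
```

In practice such thresholds would be learned from logged sessions rather than hand-set, and the same feature-then-classify pattern could extend to the speech and video channels mentioned above.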

Project Description:

Technological advances over the last 10 years have opened a new venue for surgical teaching. The introduction of simulators and virtual reality provides a place to gain surgical experience without putting patients at risk. We have embraced this advantage and simply inserted simulators wherever possible into our current training experience. As simulation is put into place and tested, we realize we should also improve the method by which we learn. The traditional Halstedian method of teaching surgery was based on an apprenticeship model because there was no viable alternative to learning on the human. Hand-in-hand with this limitation came the necessity for experts to be the teachers and for teaching moments to be driven by the progress of the case in the operating room, without the option of re-doing an error or stopping at a critical moment to ask questions. As a new generation of surgical simulators is developed, teams around the world are realizing that multiple problems arise when new technology is simply inserted into surgical training. An expert surgeon is often unable to articulate his expertise or why he makes certain surgical decisions. Cognitive theory, however, tells us that learners need to be taught decision-making strategies to gain expertise.[3] With simulation there is no risk to the patient and no time constraint on teaching, so comprehensive learning objectives from cognitive theory can be addressed. At the same time, simulators have introduced new obstacles. In a surgical simulation, we are able to present multiple information sources to the learner. Text, audio, and moving images are embedded in the simulator, often resulting in cognitive overload for the learner and impeding knowledge transfer.

Current simulators are built by watching experts perform surgery and asking them to describe what they are doing. Research shows that expert knowledge is highly automated and not easily accessible to the expert: an expert surgeon has difficulty articulating exactly what leads him to make a surgical decision, and he may err 30-50% of the time when attempting to describe how his automated knowledge operates in practice. This has not only hampered the transfer of expertise to novice surgeons in the operating room; these errors also go unaddressed when simulators are designed. The problem with our current apprenticeship model of teaching is thus being replicated in new technology.

The next problem encountered in the first generation of surgical simulators is cognitive overload. Simulators, whether using multimedia or sophisticated virtual reality, can present information as animation, video, text, narration, graphics, and more. The learner's attention is split among these media, and integrating information from so many different sources requires extra mental work. The goal when using technology must be to lower cognitive load by focusing attention only on essential, goal-relevant information. Examples of cognitive principles necessary in simulator development include using narration in media displays rather than on-screen text (Narration principle) and eliminating extraneous words, sounds (e.g., music), and pictures (Coherence principle). Involving an educational expert helps developers apply these principles and reduce the cognitive load on the learner.

In this project we use Cognitive Task Analysis (CTA), an accepted method for capturing and teaching expertise.[4-6] We apply CTA to flexor tendon repair, a common hand surgery with long-term risks to the patient. The project describes a framework with which to examine a surgical procedure and develop technology that complements the educational and cognitive goals of the user.

Educators have only recently been included in the collaborative teams designing simulators; previously, technology experts built technology around a task or procedure based on input from the task experts alone. As described, this approach can substitute technology for pedagogy, actually making learning more difficult for the user. By incorporating educational principles into the next generation of surgical simulators, we will speed growth toward expertise and improve the way surgeons learn.

Our collaborative group at USC began simulator development with defined teaching objectives, allowing us to adapt existing technology and innovate new technology that accomplishes the goal of the surgical simulator: to facilitate learning and accelerate the acquisition of expertise.

Advisor(s):
Tiffany Grunwald M.D.,
Dick Clark Ph.D.,
Scott Fisher,
Margaret McLaughlin Ph.D.,
Shrikanth Narayanan Ph.D.

Funding:
Institute for Multimedia Literacy,
School of Cinematic Arts