Eliciting Tacit Expertise in 3D Volume Segmentation

Published in Proceedings of the 9th International Symposium on Visual Information Communication and Interaction, 2016

The output of 3D volume segmentation is crucial to a wide range of endeavors. Producing accurate segmentations is often inefficient and challenging, in part because of limitations in imaging data quality (contrast and resolution), and in part because of ambiguity in the data that can be resolved only with higher-level knowledge of the structure and the context in which it resides. Automatic and semi-automatic approaches are improving, but in many cases they still fail or require substantial manual clean-up or intervention. Expert manual segmentation and review therefore remain the gold standard for many applications. Unfortunately, existing tools (both custom-made and commercial) are often designed around the underlying algorithm rather than the best way for experts to express higher-level intent. Our goal is to analyze manual (or semi-automatic) segmentation to gain a better understanding of both low-level behavior (perceptual tasks and actions) and high-level decision making. This understanding can be used to produce segmentation tools that are more accurate, efficient, and easier to use. Questioning or observation alone is insufficient to capture this information, so we use a hybrid capture protocol that blends observation, surveys, and eye tracking. We then developed and validated data coding schemes capable of discerning low-level actions and overall task structures.

Authors: Ruth West and Meghan Kajihara and Max Parola and Kathryn Hays and Luke Hillard and Anne Carlew and Jeremey Deutsch and Brandon Lane and Michelle Holloway and Brendan John and Anahita Sanandaji and Cindy Grimm