As with any debugging technique, the delays inherent in voice output will change the timing relationships of components. However, this is no different from the effects of breakpoints and debug printouts today, so we have not introduced a new form of perturbation into the problem.
One problem with all of these techniques is that they remain ad hoc: the placement and nature of the voice response is an art, no different from the inclusion of debug-print statements. Our goal in this paper is to heighten awareness of the auditory channel, in the hope that the next decades will see voice-based debugging techniques become integrated into the next generation of development and debugging tools. The availability of rich libraries in .NET Framework languages such as C# makes it much more reasonable to construct visual displays that let us see our problem in domain-specific, rather than implementation-specific, terms. A similar availability of audio output libraries would facilitate the construction of domain-specific interaction tools. Tools that let us attach auditory feedback much as we currently set breakpoints and variable watch displays would open the audio channel at every level: in low-level debugging, raising the rate of interaction, and in high-level monitoring at the component and architectural level, bringing the presentation into closer synchronization with our mental models.
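As a sketch of what such tooling might look like, consider an "auditory watchpoint" that announces value changes through a pluggable speech backend. This is an illustrative assumption, not a tool from the paper: the `AuditoryWatch` class and its `speak` parameter are hypothetical, and the default backend below simply prints; a real text-to-speech engine would be substituted in practice.

```python
class AuditoryWatch:
    """Hypothetical auditory analogue of a variable watch display.

    Instead of (or alongside) showing a value on screen, each distinct
    change is announced through the `speak` callback. `speak` defaults
    to `print` here; a binding to a real TTS engine would replace it.
    """

    def __init__(self, name, speak=print):
        self.name = name
        self._value = None
        self._speak = speak

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, new):
        if new != self._value:
            # Announce only genuine transitions, so repeated
            # identical values stay silent.
            self._speak(f"{self.name} changed from {self._value} to {new}")
        self._value = new


# Usage: attach the watch to a quantity of interest in the code
# under test, e.g. a queue depth sampled inside a loop.
queue_depth = AuditoryWatch("queue depth")
for depth in [0, 3, 3, 7]:
    queue_depth.value = depth
```

The design point is that the announcement policy (what to say, and when) lives in the tool rather than in scattered debug-print statements, which is exactly the integration the paragraph above argues for.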