The impact of audio-visual speech on work-load in simultaneous interpreting
Abstract
Conference interpreters face various sources of visual input during their work: the speaker’s gestures, reactions from the audience, presentations, etc. Far from perceiving this visual information as an additional burden, interpreters insist on having access to it. The aim of this study was to investigate the impact of visible lip movements on simultaneous interpreting, contrasting simultaneous interpreting with and without visible lip movements and with and without background noise. A group of listeners was included in order to control for task effects (N = 31: 17 listeners and 14 interpreters). Participants’ ratings of speech difficulty and speech delivery, duration judgments, recall accuracy for the speech, fundamental voice frequency, silent pauses, translation accuracy, cognate translations, and pupillary reactions were analyzed statistically. On the whole, the findings did not support the hypothesis that audio-visual speech input lowers workload in simultaneous interpreting. For pupillary reactions, the opposite pattern was even found: pupil sizes were larger during simultaneous interpreting with audio-visual speech, but not during simultaneous interpreting with background noise, a factor that had been found to affect performance and participants’ ratings. In the light of this pattern, the effect of visual input may in this case be interpreted in terms of general arousal, without necessarily being linked to workload.