Researching Cognitive Processes of Translation

This edited volume covers an array of the most relevant topics in translation cognition, taking different approaches and using different research tools. It explores theoretical and methodological issues through case studies and examines their practical and pedagogical implications. It is a valuable resource for translation studies scholars, graduate students, and those interested in translation and translation training, enabling them to conceptualize translation cognition in order to enhance their research methods and designs, manage innovations in their translation training, or simply understand their own translation behaviours.



New Frontiers in Translation Studies

Defeng Li · Victoria Lai Cheng Lei · Yuanjian He
Editors

Researching Cognitive Processes of Translation

New Frontiers in Translation Studies

Series editor: Defeng Li
Centre for Translation Studies, SOAS, University of London, London, United Kingdom
Centre for Studies of Translation, Interpreting and Cognition, University of Macau, Macau SAR

More information about this series at http://www.springer.com/series/11894

Defeng Li • Victoria Lai Cheng Lei • Yuanjian He
Editors

Researching Cognitive Processes of Translation

Editors

Defeng Li
Centre for Studies of Translation, Interpreting and Cognition, University of Macau, Macau SAR, China

Victoria Lai Cheng Lei
University of Macau, Macau, China

Yuanjian He
University of South China, Hunan, China

ISSN 2197-8689    ISSN 2197-8697 (electronic)
New Frontiers in Translation Studies
ISBN 978-981-13-1983-9    ISBN 978-981-13-1984-6 (eBook)
https://doi.org/10.1007/978-981-13-1984-6

Library of Congress Control Number: 2018956733

© Springer Nature Singapore Pte Ltd. 2019

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, express or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore

Introduction

Defeng Li and Victoria Lai Cheng Lei

The last 10 years have seen an upsurge in cognitive translation studies (Halverson 2010). In a recent review of research trends in translation studies based on a survey of the published literature in the Translation Studies Bibliography, translation process research (TPR) was found to be the area that had received the most research attention since 2010, and the total number of publications on TPR almost doubled that of corpus translation studies, which came out as the second most published area in the survey (Li 2017). This seems to suggest that TPR as a field of research in translation studies is well poised for further development and growth in the years ahead, thanks to the unprecedented interest among cognitive scientists, neuroscientists, and humanities scholars in how the human brain works, and to the general enthusiasm about translation processes among translation teachers. This may also be attributable to the current strong interest in data-based empirical research methods in the discipline, as evidenced in the fast-growing literature on corpus translation studies, interpreting studies, and translator education, which in most instances has adopted qualitative or quantitative research designs and methods.

Translation process research as a research field has come a long way over the last 40 years: from the initial use of think-aloud protocols as the main research instrument, to the subsequent adoption of Translog combined with screen recording techniques and technologies, to the ensuing enthusiasm about experimenting with eye trackers, to the application of neurological and neuroimaging tools such as electroencephalography (EEG), positron emission tomography (PET), functional near-infrared spectroscopy (fNIRS), and functional magnetic resonance imaging (fMRI). As the research instruments have become more sophisticated, so have the topics and issues tackled under the umbrella of translation process research and cognitive translation studies.

As a matter of fact, translation process may be understood in both a broad and a narrow sense. Broadly speaking, it refers to the entire process of a translation project, from the initiation of the project to the commissioning of the translation assignment, the authoring of the source text, and the actual translational action in which the linguistic and cultural transfer is realized, to the dissemination, use, and reception of the translated text in the target language and culture. This translation process is usually studied from a cultural perspective (e.g., Lefevere 1993) and, more recently, from a sociological approach (Gouanvic 2005). Such investigations usually center on the sociocultural factors that have affected the selection of the source text, the production of the translated text, and the dissemination and reception of the translated text in the target language and culture. Often raised are such questions as why the text was selected for translation; what was the purpose of the translation; how factors such as ideology, patronage, and dominant poetics affected the adoption of translation methods and strategies; and how power relations, political systems, and cultural preferences influenced the dissemination and reception of the translation after it entered a new language and culture. In a word, the sociocultural approach to translation processes centers on the external factors that have impacted the initiation, production, and reception of a translated text.

Translation process can also be understood in its narrow sense, referring to the mental process of rendering a text from one language into another. Research on the mental processes of translation seeks to understand what happens in the translator's brain from the moment a source text enters it to the moment the target text comes out in the other language, as well as the monitoring and subsequent revision processes, including the use of external resources to assist the translation. This is often referred to as translation process research in translation studies (Jakobsen 2017), and it is also the focus of the present volume.

According to Muñoz Martín (2017), translation process research (TPR) began with Paneth's MA thesis on conference interpreting in the 1960s. But it was in the 1970s and early 1980s that TPR was gradually established as a new area of research in translation studies, and it has gained momentum in the past decade thanks to advances in keylogging, screen recording, eye tracking, and neuroimaging technologies. This line of research is often conducted via experimentation and lab methods, so it may be best described as the experimental approach to translation process research. While TPR is generally lab-based, some researchers have been calling for such research to be moved out of laboratories and taken to the real translation workplace in order to understand what is known as translation cognition in situ (e.g., Ehrensberger-Dow 2014; Risku et al. 2017). Often adopting an ethnographic method, this kind of study seems to have opted for an experiential approach in its research designs.

The experiential-experimental approach to translation process research consists of several sub-approaches, including psycholinguistic, behavioral, corpus-textual, cognitive-neurological, situational, and integrated approaches, depending on the research questions raised, the tools used, and the methods adopted for the research.

This volume consists of contributions by some leading TPR researchers based on their keynote presentations at the First and Second International Conferences on Cognitive Research on Translation and Interpreting (ICCRTI), held at the Centre for Studies of Translation, Interpreting and Cognition (CSTIC), University of Macau, in 2014 and 2015. They cover an array of topics and represent different approaches applicable to translation process research today, as well as a number of research tools available for investigating translation processes.


The first part of the collection features three chapters of theoretical considerations on translation process research as a new research area. The rise of the experiential-experimental approach in translation process research has prompted researchers to consider whether and how it fits in with the sociocultural approach. House believes that the growing interest in the strategies of comprehension, problem solving, and decision-making in the translator's mind "does not need to be at the expense of the sociocultural" (this volume, p. 4). But she opposes any view that grants an excessive role to subjective interpretation in translation by seeing translation as an art, and argues that in view of such a "widespread exaggerated emphasis on the subjective personal, it is necessary to renew a focus on both language and text – the linguistic focus, and on what happens in translators' minds when they engage in translating texts – the cognitive focus" (ibid, p. 4). Following a critical review of current translation cognition research using intro- and retrospection, behavioral experiments, and neuroimaging studies, she argues that looking for "a descriptively and explanatorily adequate neuro-linguistic theory of bilingualism useful for and compatible with a theory of translation" might be "a first step towards a more valid and reliable approach to investigating the translation process" (ibid, p. 12).

Yuanjian He takes up what House proposes – to seek a neurolinguistic theory of bilingualism for translation process research. Drawing on universal grammar, the computational theory of language processing, neurocognitive bilingualism, and neuro-functional control theory for bilinguals, he presents his initial efforts to construct an integrated and conceptually detailed theoretical framework for understanding translation and interpreting as a bilingual process in the brain. He hypothesizes that memory and computation, as two processing mechanisms, compensate for and complement each other in translation and particularly in simultaneous interpreting, as a result of the "system design" (ibid, p. 16). He further suggests that some properties displayed in translated texts and interpreted speeches can be explained by what he calls the "processing economy hypothesis" (ibid, p. 36).

As translation process research advances, some researchers have argued for a closer integration of translation process research and the cognitive science paradigm (e.g., Alves 2015). Taking this as their point of departure, Carl and Schaeffer propose a computational framework for the post-editing of machine translation (PEMT) based on the well-known noisy channel model (Levy 2008). They extend the noisy channel model with relevance theory (RT) and believe that such a combination will account for both the unconscious priming effects and the conscious processes in PEMT.
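As a point of orientation for readers, the general shape of a noisy channel model can be stated in one line. The following is our gloss – a standard Bayesian statement of the noisy channel idea in the general form popularized by Levy (2008), not Carl and Schaeffer's own notation: the post-editor (or comprehender) treats the observed string as a corrupted version of an intended message and recovers the latter by Bayesian inversion,

\[
\hat{m} \;=\; \arg\max_{m} P(m \mid o) \;=\; \arg\max_{m} P(o \mid m)\, P(m),
\]

where \(o\) is the observed (e.g., machine-translated) string, \(m\) ranges over candidate intended messages, \(P(o \mid m)\) is the channel (noise) model, and \(P(m)\) is the prior over messages.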
The second part of this volume focuses on tools and methods applicable to researching the translation process and presents a few proposals for such applications. Although a number of newer technologies have been applied in translation process research, much is yet to be explored and consolidated regarding research methods. For instance, keylogging data can reveal much about the production process of a translation, but they do not tell much about how the translator works on the source text. We have to refer to recorded gaze data for details on the cognitive processing that made the production of a stream of translation possible. Jakobsen takes data from the CRITT TPR database and illustrates how the eye-tracking data and keystroke data of a single translator can be combined in the exploration of the translator's cognitive processing in translation, indicating that qualitative analysis is important for revealing patterns and themes in eye-tracking and keystroke data.
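To make the idea of combining the two data streams concrete, here is a minimal sketch of timestamp-based alignment of keystrokes with fixations. The record format and values are invented for illustration; this is neither the CRITT TPR database schema nor Jakobsen's actual procedure:

from bisect import bisect_right

# Hypothetical, simplified event records (times in ms from task start):
# keystrokes typed in the target-text window, and fixations labelled with
# the area of interest (AOI) being looked at when the fixation started.
keystrokes = [(1200, "T"), (1350, "h"), (1490, "e"), (5200, "c")]
fixations = [(900, "ST segment 1"), (1400, "TT draft"), (4800, "ST segment 2")]

def fixation_at(t_ms, fixations):
    """Return the AOI of the most recent fixation starting at or before t_ms."""
    onsets = [t for t, _ in fixations]
    i = bisect_right(onsets, t_ms) - 1
    return fixations[i][1] if i >= 0 else None

# For each keystroke, report the preceding pause and where the gaze was,
# so that long inter-key pauses can be linked to source-text reading.
prev = None
for t, key in keystrokes:
    pause = t - prev if prev is not None else 0
    print(f"{t:6d} ms  key={key!r}  pause={pause:5d} ms  gaze={fixation_at(t, fixations)}")
    prev = t

With such an alignment, a long pause before a keystroke can be linked to where the translator was looking at the time – for instance, to renewed reading of a source-text segment.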

After many years of translation process research with instruments such as TAPs, keylogging, and eye trackers, recent years have witnessed an increasing interest among researchers in studying the topic from a neurological and neurocognitive approach. Along this line of neuroscientific investigation of translation and interpreting, Lu and Yuan outline how functional near-infrared spectroscopy (fNIRS) may be used to explore brain activity in translation and interpreting. fNIRS has been used in neuroscience since the 1970s to investigate brain activity by measuring changes in hemodynamic responses on the basis of near-infrared light between 650 nm and 950 nm. As a noninvasive technique, fNIRS is often used to localize and monitor cerebral activities such as visual, auditory, memory, attention, and motor processes. Lu and Yuan argue that translation and interpreting, as activities involving many of these subskills and processes, lend themselves particularly well to investigation with the assistance of fNIRS.
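For readers new to the technique, the measurement principle can be summarized in one standard equation. What follows is a textbook sketch of the modified Beer–Lambert law commonly used in fNIRS analysis, added here for orientation and not taken from Lu and Yuan's chapter:

\[
\Delta OD(\lambda) \;=\; \bigl(\varepsilon_{\mathrm{HbO_2}}(\lambda)\,\Delta[\mathrm{HbO_2}] + \varepsilon_{\mathrm{HbR}}(\lambda)\,\Delta[\mathrm{HbR}]\bigr)\cdot d \cdot \mathrm{DPF}(\lambda),
\]

where \(\Delta OD(\lambda)\) is the measured change in optical density at wavelength \(\lambda\), \(\varepsilon\) are the extinction coefficients of oxygenated and deoxygenated hemoglobin, \(d\) is the source–detector distance, and \(\mathrm{DPF}\) is the differential pathlength factor. Measuring at two wavelengths on either side of the roughly 800 nm isosbestic point yields two equations that can be solved for the two concentration changes – which is why fNIRS systems operate within the 650–950 nm window mentioned above.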

While studies utilizing fNIRS are beginning to emerge (Li and Lei 2016; Lin et al. 2018a, b; He et al. 2017), studies taking stock of other neuroimaging technologies like fMRI have also appeared. Alves et al. share their preliminary thoughts on how brain imaging technologies can be used to study psychological processes in translation. As facilitators of communication, translators need to construct a hypothesis about the ST author's meanings and the TT readers' cognitive environment. To accomplish this, they need to activate different layers of mind reading and inferential mechanisms. Alves et al. therefore propose to integrate behavioral and neurophysiological data in the interdisciplinary investigation of "the inferential mechanisms involved in translation processing" (this volume, p. 131).

Cognitive load and cognitive effort have often been objects of translation process studies: they indicate the difficulty of a translation task and the problems encountered in translating a segment. However, measuring the difficulty of translation has often proved a difficult subject. Sun, following his earlier work on measuring the difficulty of human translation (2015), discusses his attempt to measure the difficulty of post-editing machine translation (PEMT). He believes that measuring the difficulty of post-editing can help translators avoid cognitive overload and underload. Furthermore, understanding and selecting tasks of appropriate difficulty levels are conducive to training translation students and turning them into experts in translation and PEMT.

The efforts to understand how a translator's brain works in the process of translation and interpreting are at least partially motivated by the possible implications for translation and interpreting training, and even for language learning and teaching in general, especially when translation is used and practiced in the classroom either to acquire translation strategies and methods or to learn a second language. Göpferich examines the role of translation and translation competence in L2 writing. Through a rather extensive review of the related literature, she demonstrates that, on the one hand, suppression of L1 use in L2 writing hampers the knowledge-constructing function of writing but, on the other hand, translation can also be detrimental to L2 writing, depending on the learner's translation competence and the methods of using translation in the process.

She also found evidence to support her thesis that academic writers often read and draw on materials from one or more other languages, an ability that she terms transliteracy. She believes all these findings have serious implications for both translation and L2 writing instruction, suggesting that translation competence, "as a cognitive catalyst for trans- and multiliteracy" (this volume, p. 191), is a "soft skill" that should be acquired by students of all disciplines in multilingual and multicultural societies (ibid, p. 192).

Translation process research, after 40 years of growth, has made exciting progress in the range of issues it investigates. We know more about translators' brains and behaviors today than we did decades ago, and we have adopted or adapted various new and useful technologies in translation and interpreting process studies. However, as with all new research, this field has also met its own challenges: developing a coherent theoretical framework of translation processes with strong explanatory power; properly triangulating different data sources; finding more efficient methods for analyzing keylogging data and eye-tracking data in combination; and pushing the boundaries by resorting to more advanced and sophisticated research instruments such as fNIRS and fMRI. For this area to continue to produce useful results, all these challenges will have to be met head-on and addressed effectively. The chapters in this volume may represent one of the collective efforts by translation process researchers the world over to meet such challenges.

Acknowledgment We would like to acknowledge the financial support from the University of Macau Multi-year Research Grants (MYRG2015-00150-FAH, MYRG2016-00096-FAH, and MYRG2017-00139-FAH).

References

Gouanvic, J. M. (2005). A Bourdieusian theory of translation, or the coincidence of practical instances: Field, 'habitus', capital and 'illusio'. The Translator, 11(2), 147–166.

He, Y., Wang, M., Li, D., & Yuan, Z. (2017). Optical mapping of brain activity underlying translation asymmetry during Chinese/English sight translation. Biomedical Optics Express, 8(12), 1–12.

Jakobsen, A. (2017). Translation process research. In J. W. Schwieter & A. Ferreira (Eds.), The handbook of translation and cognition (pp. 19–49). New Jersey: Wiley-Blackwell.

Lefevere, A. (1993). Translation, rewriting and the manipulation of literary fame. London: Routledge.

Li, D. (2017). Researching methods in translation studies: Setting the scene. Presented at the 1st international symposium on translation studies research methods, CSTIC, University of Macau.

Li, D., & Lei, V. (2016). Which is more costly, transcoding or paraphrasing? Presented at the third international conference of cognitive research on translation and interpreting, University of Macau.

Lin, X., Lei, V. L. C., Li, D., & Yuan, Z. (2018a). Which is more costly in Chinese to English simultaneous interpreting, "pairing" or "transphrasing"? Evidence from an fNIRS neuroimaging study. Neurophotonics, 5(2), 025010.

Lin, X., Lei, V. L. C., Li, D., Hu, Z., Xiang, Y., & Yuan, Z. (2018b). Mapping the small-world properties of brain networks in Chinese to English simultaneous interpreting by using functional near-infrared spectroscopy. Journal of Innovative Optical Health Sciences, 11(3), 1840001.

Muñoz Martín, R. (2017). Looking toward the future of cognitive translation studies. In J. W. Schwieter & A. Ferreira (Eds.), The handbook of translation and cognition (pp. 555–572). Hoboken: Wiley-Blackwell.

Risku, H., Rogl, R., & Milosevic, J. (2017). Translation practice in the field. Translation Spaces, 6(1), 3–26. https://doi.org/10.1075/ts.6.1.01ris

Contents

Introduction . . . . . . . . . . . . . . . . . . . . v
Defeng Li and Victoria Lai Cheng Lei

Part I  Theoretical Models

1  Suggestions for a New Interdisciplinary Linguo-cognitive Theory in Translation Studies . . . . . . . . . . . . . . . . . . . . 3
   Juliane House

2  Translating and Interpreting as Bilingual Processing: The Theoretical Framework . . . . . . . . . . . . . . . . . . . . 15
   Yuanjian He

3  Outline for a Relevance Theoretical Model of Machine Translation Post-editing . . . . . . . . . . . . . . . . . . . . 49
   Michael Carl and Moritz Schaeffer

Part II  Methods and Applications

4  Segmentation in Translation: A Look at Expert Behaviour . . . . . . . . . . . . . . . . . . . . 71
   Arnt Lykke Jakobsen

5  Explore the Brain Activity during Translation and Interpreting Using Functional Near-Infrared Spectroscopy . . . . . . . . . . . . . . . . . . . . 109
   Fengmei Lu and Zhen Yuan

6  Translation in the Brain: Preliminary Thoughts About a Brain-Imaging Study to Investigate Psychological Processes Involved in Translation . . . . . . . . . . . . . . . . . . . . 121
   Fabio Alves, Karina S. Szpak, and Augusto Buchweitz

7  Measuring Difficulty in Translation and Post-editing: A Review . . . . . . . . . . . . . . . . . . . . 139
   Sanjun Sun

8  Translation Competence as a Cognitive Catalyst for Multiliteracy – Research Findings and Their Implications for L2 Writing and Translation Instruction . . . . . . . . . . . . . . . . . . . . 169
   Susanne Göpferich

Index . . . . . . . . . . . . . . . . . . . . 199

About the Contributors

Fabio Alves is a Professor in Translation Studies at Universidade Federal de Minas Gerais in Brazil. His main focus of research is on translation as a cognitive activity, including the study of expertise in translation, human–machine interaction, and inferential processes in translation. He has published extensively in peer-reviewed journals, such as Target, Meta, Across Languages and Cultures, Machine Translation, and Translation and Interpreting Studies, and in book series published by Benjamins, Routledge, and Springer.

Augusto Buchweitz is from the Brain Institute of Rio Grande do Sul, Pontificia Universidade Catolica do RS (PUCRS), Porto Alegre/RS, Brazil.

Michael Carl is a Professor at Kent State University, USA. He is also Director of the Center for Research and Innovation in Translation and Translation Technology (CRITT). His current research interest is related to the investigation of human translation processes and interactive machine translation. He is a (co-)author of more than 140 papers and articles on translation, machine translation, and translation process research.

Susanne Göpferich (Deceased) was Professor of Applied Linguistics in the Department of English and Director of the Centre for Competence Development (ZfbK) at Justus Liebig University, Giessen, Germany. From 2003 to 2010, she was Professor of Translation Studies at the University of Graz, Austria, and from 1997 to 2003, Professor of Technical Communication and Documentation at the Karlsruhe University of Applied Sciences, Germany. Her main fields of research comprise text linguistics, academic literacy development, specialized communication, translation theory and didactics, as well as comprehensibility research with a focus on process-oriented research methods. She is the author of more than 130 journal articles, book chapters, edited volumes, and monographs, including Translationsprozessforschung: Stand – Methoden – Perspektiven (Tübingen: Narr, 2008), Text Competence and Academic Multiliteracy: From Text Linguistics to Literacy Development (Tübingen: Narr, 2015), and Interdisciplinarity in Translation and Interpreting Process Research (edited together with Maureen Ehrensberger-Dow and Sharon O'Brien, Amsterdam, Philadelphia: John Benjamins, 2015).

Yuanjian He has taught Translation Studies at the Chinese University of Hong Kong and the University of Macau. His main research interests are in the areas of the neurocognitive design of language and its impact on language processing, including translating and interpreting as bilingual processing.

Juliane House received her first degree in English and Spanish translation and international law from Heidelberg University; her PhD in Applied Linguistics from the University of Toronto, Canada; and honorary doctorates from the Universities of Jyväskylä, Finland, and Jaume I, Castellon, Spain. She is Emeritus Professor at Hamburg University, Distinguished University Professor at Hellenic American University, Athens, Greece, and President of the International Association for Translation and Intercultural Studies (IATIS). Her research interests include translation theory, contrastive pragmatics, discourse analysis, politeness, English as a lingua franca, and intercultural studies. She has published widely in all these areas. Her recent book publications include Translation: A Multidisciplinary Approach (Palgrave Macmillan 2014), Translation Quality Assessment: Past and Present (Routledge 2015), and Translation as Communication Across Languages and Cultures (Routledge 2016).

Arnt Lykke Jakobsen is a Professor Emeritus of Translation and Translation Technology at the Copenhagen Business School in Denmark. In 1995, he invented the keylogging software program Translog. In 2005, he established the Centre for Research and Innovation in Translation and Translation Technology (CRITT), which he directed until his retirement in 2013. His main focus of research is developing and exploiting a methodology for translation process research using keylogging and eye tracking.

Defeng Li is Professor of Translation Studies at the University of Macau. He has researched and published extensively in Translation Studies as well as Second Language Education. He takes a keen interest in data-based empirical translation studies; cognitive and psycholinguistic investigation of translation processes; and curriculum and material development in translation education.

Fengmei Lu received her PhD in Biomedical Sciences from the Faculty of Health Sciences, University of Macau, Macau SAR, China.

Moritz Schaeffer is a Research Associate in the Faculty of Translation Studies, Linguistics, and Cultural Studies at the Johannes Gutenberg-Universität Mainz in Germany. He was previously a Research Associate in the Center for Research and Innovation in Translation and Translation Technology at the Copenhagen Business School in Denmark, and in the Institute for Language, Cognition, and Computation at the University of Edinburgh in Scotland. His research interests include cognitive modeling of the human translation process, human–computer interaction in the context of translation, and the psychology of reading. He has also conducted research on bilingual memory during translation and error detection in reading for translation.

Sanjun Sun received his PhD in Translation Studies from Kent State University in 2012. He is an Assistant Professor of Translation Studies at Beijing Foreign Studies University, where he teaches translation technology, research methods, and commercial translation. He has co-authored two books in Chinese – A Translator's Handbook (2010) and Research Methods in Language Studies (2011) – and has published articles in Meta, Target, and other journals. His major research interests include translation process research, translation technology, and empirical research methods. He has extensive experience as a translator.

Karina S. Szpak is a PhD candidate at Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, Brazil.

Zhen Yuan is Assistant Professor and Director of the Bioimaging Core at the Faculty of Health Sciences, University of Macau (UM), Macau SAR, China. Before joining UM, he worked as a Clinical Assistant Professor at Arizona State University (August 2012–August 2013) and as a Research Assistant Professor at the University of Florida (September 2007–August 2012). He received his PhD degree in Mechanical Engineering from the University of Science and Technology of China in 2002. Between 2002 and 2007, he received postdoctoral training at several institutions, including the National University of Singapore (2002–2004), Clemson University (2005), and the University of Florida (2005–2007). His academic investigation is focused on cutting-edge research and development in laser, ultrasound, and EEG/fMRI-related biomedical technologies, including biomedical imaging and signal processing/spectroscopy, biomedical optics, neurosciences, cancer, and nanomedicine. As the principal investigator or co-investigator for the above research activities, he has achieved national and international recognition through more than 80 publications in highly ranked journals in his field and over 2000 independent citations. He has served as an active reviewer for over 50 top journals. He is a guest associate editor of Medical Physics and Applied Optics and an editorial board member of the Journal of Biosensors and Bioelectronics, the International Journal of Radiology, and the International Journal of Biomedical Sciences. He is a senior member of the Optical Society of America (OSA) and a senior member of the International Society for Optics and Photonics (SPIE).

Part I

Theoretical Models

Chapter 1

Suggestions for a New Interdisciplinary Linguo-cognitive Theory in Translation Studies

Juliane House
Hellenic American University, Athens, Greece

This chapter is a revised and shortened version of House (2013).

In this chapter I will first briefly review current thinking about translation as an art, and then proceed to make a case for a new interdisciplinary linguistic and cognitive orientation in translation studies. I will then look at translation-relevant introspective and retrospective studies, as well as behavioral experiments and neuro-imaging studies. Finally I will propose a new interdisciplinary approach that sets out to combine a functional-linguistic translation theory with a neuro-functional theory of bilingualism.

1.1 Translation as an Art: The Cult of the Individual, the Social, the Cultural

Here I am referring to the currently popular trend of elevating the person of the translator, his socio-cultural embeddedness, his creativity, his visibility, his status and influence above all other concerns in translation studies. Scholars in this paradigm are above all interested in the reasons for, and the effects of, a translation, the necessity of "intervention" and "resistance" by the translator, and the importance of her moral-ethical stance as well as her ideological, political, historical, post-colonial, feminist etc. attitudes. A pre-occupation with external social, cultural, personal etc. factors impinging on translation "from the outside", as it were, seems to me to miss the point about the essence of translation.

The widespread view today (cf. Gentzler 2008; Prunc 2011) of translation as an art, coupled with the cult of individual translators' creativity, emphasizes translation as a new creation. Neo-hermeneutic, constructivist and various effect-oriented approaches to translation believe in the translator's right to manipulate the original text (cf. Hermans 1985; Reiss and Vermeer 1984; Shamma 2009; Stolze 2003). As opposed to this view of translation as largely depending on an individual's art of interpretation, I would argue against any excessive role of subjective interpretation in translation and against the idea that the meaning of a text is never in the text but only in what readers make of the text. The linguistic analysis of the original text as a first step in translation needs to show in detail how a text is what it is, and why it is what it is, rather than being obsessed with what that text might or should mean to a reader. Such a view is, I think, essentially compatible with Walter Benjamin's (1923/1992) early, implicit prioritizing of the text over the person of the translator.

I believe that in view of such widespread exaggerated emphasis on the subjective personal, it is necessary to renew a focus on both language and text – the linguistic focus – and on what happens in translators' minds when they engage in translating texts – the cognitive focus. What I think is needed today is a theoretically based description and explanation of how strategies of comprehending, problem solving and decision making with reference to the text translators handle come about in their bilingual minds.

Of course, such a focus does not need to be at the expense of the socio-cultural: it has long been recognized that socio-culturally shared knowledge sets, as linguistic-cognitive representations in the form of schemata, scripts, plans, constructions and routines, result from conventionalization processes in a particular culture via the medium of language (cf. Cook and Bassetti 2011; Sperber 1996). This recognition already found its way into translation studies in the 1990s (Wilss 1996). But this early linguistic-cognitive orientation was soon eclipsed by the rise of another paradigm: translation process research. I will not describe this research field in any detail here, since this has been done more competently in much recent work (Göpferich and Jääskeläinen 2009; Jääskeläinen 2011). Rather, I will critically examine the validity and reliability of existing translation process research.

1.2 Introspective and Retrospective Translation Process Studies: How Valid and Reliable Are Their Outcomes?

Introspective and retrospective studies, frequently involving monologic, sometimes also dialogic tasks, as well as rating and other decision-related tasks, have been a very productive research paradigm since their inception in the eighties. However, the validity and reliability of the verbal report data elicited in such studies have often simply been taken for granted, although they are in fact far from clear. Despite many attempts to improve the quality of thinking aloud protocol (TAP) data, the general assumptions behind this translation process research have not really been questioned.

The fundamental assumption underlying all introspective and retrospective translation studies is that persons involved in the act of translating have substantial control over their mental processes, and that these processes are to a very large extent accessible to them. It is however far from clear that this assumption is valid. Even more important from the point of view of research methodology is the fact that at present it is not clear that this assumption CAN be confirmed or falsified. There are at least five unresolved questions with regard to translation-related introspective and retrospective research methodology, which I will list below:

1. Is what ends up being verbally expressed in thinking aloud sessions really identical with underlying cognitive processes?

2. Exactly which cognitive processes are accessible to verbalization and which are not, i.e. how can one differentiate between metacognitive monitoring and reflective (declarative) behavior on the one hand and routinized (procedural) behavior on the other hand?

3. Does the fact that translators are asked to verbalize their thoughts while they are engaged in translating change those cognitive processes that are (normally) involved in translation? In other words, are translators engaged in introspection sessions subject to the so-called "observer's paradox"?

4. What happens to those parts of (often expert) translators' activity that are highly, if not entirely, routinized and automatized and are thus by definition not open to reflection? (cf. Königs 1986, who distinguished an automatic, ad-hoc block from a "rest" block of cognitive translation activity)

5. With regard to retrospective translation-related research: how can data from ex post facto interviews or questionnaires access translation processes, given working memory constraints and given the pressure felt by subjects to provide data that will satisfy the researcher? Is it not likely that subjects will make meta-statements about what they think they had thought?

These questions, and possibly more, touch upon one of the most controversial issues in contemporary cognitive science: the nature of consciousness. Much recent neuro-scientific work emphasizes the importance of the non-conscious (cf. e.g. Suhler and Churchland 2009; Nosek et al. 2011; Cohen and Dennett 2011), confirming the lack of received knowledge about consciousness and emphasizing the need for a comprehensive theory of consciousness that goes beyond an exclusive focus on (often inaccessible) representations.

1.3 Behavioural Experiments on the Translation Process: How Valid, Reliable and Insightful Are Their Outcomes?

In light of doubts about the effectiveness of introspective and retrospective translation process research, translation scholars looked towards more controllable behavioural experiments designed to avoid making untestable claims about the "black box". They wanted now to directly trace linear and non-linear translational steps and phases of the translation process, measuring temporal progress or delay, types and numbers of revisions undertaken by the translator, the (measurable) effort expended, the nature and number of attention foci and attention shifts, as well as the frequency and kind of emotional stress responses shown by the translator. This ambitious agenda was made possible through recent, mostly computer-related technological progress, such that experiments using keyboard logging, screen recording, eye tracking and various physiological measures could be undertaken (cf. Jakobsen 2006). A recent overview of this line of behavioural translation-related research, which often neatly combines various tools (e.g. keyboard logging and eye-tracking), is provided in Shreve and Angelone (2010) and O'Brien (2011).

Two critical questions need to be asked with regard to the validity and reliability of the behavioural measures used in experimental behavioural translation process research:

1. Can measurements of observable behaviour (as provided in keyboard logging, eye-tracking etc.) inform us about cognitive processes that occur in a translator's mind?

2. Can measurements of observable behaviour explain the nature of cognitive representations of the two languages, throw light on a translator's meta-linguistic and linguistic-contrastive knowledge, and illuminate comprehension, transfer and reconstitution processes emerging in translation procedures?

Neither question, I believe, can be answered in the affirmative at the present time. What such experiments CAN and DO measure is exactly what they set out to measure: observable behavior, no more and no less. This means that the results of such behavioural experiments cannot and should not be taken as indications of processes in the minds of translators; rather, they may provide interesting hypotheses. If such experiments are combined with theoretical models that incorporate features of semantic representation and of processing, they may pave the way towards abandoning any clear-cut distinction between product and process in favour of a more holistic and unitary perspective on product and process (cf. Halverson 2009).

It is necessary, however, to differentiate between cognitive-psychological processes and the underlying neural correlates. The number of fixations, gaze time, pause length, incidence of self-corrections etc. examined in key-logging and eye-tracking experiments do not point to the involvement of certain neurological substrates. Rather, they may highlight certain translation difficulties (cf. Dragsted 2012) and attendant decision processes, and these may be taken to involve certain neural networks more than others. Still, the crux is that the involvement of neural network x cannot tell us exactly which processes are connected with neural network x.

Recently, a new research strand, bilingual neuro-imaging studies, has caught the interest of translation process researchers. In the following I will cast a critical look at how promising such studies are for the field of translation studies.

1.4 Bilingual Neuro-imaging Studies: Are They Useful and Relevant for Translation Studies?

Can neuro-imaging studies give us "a direct window" on the translator's "black box", on what goes on in a translator's mind, finally providing us with a solution to Krings' 1986 question "Was geht in den Köpfen von Übersetzern vor?" – what happens in translators' heads?

First of all, the value of the findings of such studies is controversial (cf. Aue et al. 2009), not least because they crucially depend on the type of task used. With the exception of some rare recent use of isolated sentences, functional magnetic resonance imaging (fMRI), positron emission tomography (PET) and event-related potential (ERP) studies are word-based (cf. e.g. De Groot 1997; Hernandez 2009; Klein et al. 2006; Price et al. 1999). Translation, however, is essentially text-based. Any application of neuro-imaging experimental research to translation thus faces the dilemma that translation research is essentially interested in less controllable, larger and more "messy" units. To date, neuro-imaging studies clearly lack ecological validity due to their inherent task artificiality. This is, as Michel Paradis writes, because "the use of any task other than the natural use of language (including natural switching and mixing) has the same consequence as using single words: the task does not tap the normal automatic processes that sustain the natural use of language including the contribution of pragmatics and its neural underpinnings" (2009, pp. 157–158).

Over two-thirds of neuro-imaging studies on laterality and language switching and mixing use single words as stimuli, for instance in picture naming experiments where subjects are asked to switch on command (but see e.g. Abutalebi 2008 for an exception). However, as Paradis (2009, p. 160) points out, brain activity crucially differs between language use in natural situations and language use "on cue", and, most importantly, these situations correspond to opposite types of processes. Indeed, single words are very different from the rest of language. They are part of the (conscious) vocabulary of a language, not part of the lexicon. The latter includes morpho-syntactic properties and is integrated into each language subsystem's neural network in the bilingual brain. Single word stimuli are explicitly known form-meaning associations subserved by declarative memory, while procedural memory underlies normal, natural language use. Each memory system relies on distinct neuro-functional structures. And normal, natural language use also critically involves cortical areas of the brain's right hemisphere to process the pragmatic aspects of utterances – this, however, is irrelevant in processing single words that are used out of context.

Another problem relates to the nature of the evidence from neuro-imaging data: blood flow and other hemodynamic responses routinely provided in such data cannot be taken to be direct measures of neuronal activity. Further – and this is indeed a serious methodological drawback – most neuro-imaging studies have not been replicated. Many reported neurological activations are strongly task-dependent and rely on the particular technique employed, so that replication is difficult. And it is this task and technique dependence which suggests that the reported activations in the brain are indicative of the particular task and technique employed rather than being indicative of language representation, processing and switching per se.

Given these shortcomings, it is advisable to first look for a theory with enough descriptive and explanatory potential before expecting enlightenment from experimental neuro-imaging studies, whose relevance for translation studies is, at the present time, not clear at all.

1.5 A Neuro-linguistic Theory of the Functioning of Two Languages in the Brain

The neuroscientist Paradis has developed his own neuro-linguistic theory of the bilingual mind. The model depicting the neuro-functional and linguistic-cognitive system of the bilingual mind is presented in Paradis (2004, p. 227) and shown schematically in Fig. 1.1. The model features different levels for explicit metalinguistic knowledge of a bilingual's two languages L1 and L2; sensory perceptions, feelings, episodic memory and encyclopaedic knowledge; a joint conceptual system; and different language-specific levels of semantics, morphosyntax and phonology. Conceptual mental representations are independent of language. In translational L1 and L2 contact situations the degree of overlap depends on their relative typological closeness.

[Fig. 1.1 A schematic representation of the components of verbal communication: the conceptual system together with sensory perceptions and feelings, episodic memory and encyclopaedic knowledge, metalinguistic knowledge of L1 and L2, and language-specific levels of semantics, morphosyntax and phonology for L1 and L2, all encompassed by the L1 and L2 pragmatics components. (From Paradis 2004, p. 227)]

Paradis' model emphasizes the need "to distinguish between representation and control, between what is represented and how it is represented, between what is represented and how it is accessed, and between what is represented in each language and how these language representations are organized in the brain into systems or subsystems" (2004, pp. 230–231).

In Paradis' model, L1 and L2 pragmatics encompass and feed into both the conceptual system and the different language levels. Implicit linguistic competence and metalinguistic knowledge are independent systems. Only the use of metalinguistic knowledge is consciously controlled. The use of implicit competence is automatic, devoid of conscious effort, awareness of the process involved or attention focussed on the task at hand. Languages are represented as neurofunctional subsystems of the language system (the implicit linguistic competence), which is a component of the verbal communication system that, in addition to the language system, contains metalinguistic knowledge, pragmatic ability and motivation. This verbal communication system is connected to the cognitive system, where intentions to communicate a message are formulated, or messages are received and interpreted according to the lexico-grammatical constraints of L1 and L2 that activate the relevant concepts and depend on pragmatic context-dependent inferences. The intention to communicate triggers the verbalization of the message formulated in the cognitive conceptual system. The implicit linguistic competence ("the grammar") constrains the encoding of the message, and the pragmatics component makes selections in terms of styles, registers, discourse norms, speech act directness, politeness etc.

Paradis suggests that bilinguals (including translators) have two subsets of neuronal connections, one for each language, and these are activated or inhibited (for instance in the process of translation) independently. But there is also one larger set from which they can draw items of either language at any one time. All selections are automatic, i.e. unconsciously driven by activation levels. With specific reference to translation, Paradis proposes the operation of two distinct translation strategies:

1. A strategy of translating via the conceptual system, involving processes of linguistic decoding (comprehension) of source text material plus encoding (production) of target text material

2. Direct transcoding by automatic application of rules, which involves moving directly from linguistic items in the source language to equivalent items in the target language. In other words, source language forms immediately trigger target language forms, thus bypassing conceptual-semantic processing.

However, there are some alternative approaches to Paradis' strict division of the conceptual and the linguistic levels. Both Levinson (1997) and Pavlenko (2014) suggest a distinction between language-based concepts and concepts that are clearly not linked to language. De Groot (2013), however, reports on experimental studies demonstrating that bilinguals develop merged concepts which combine concepts specific to their two languages.

Paradis' theory is relevant for translation in that it presents a model of the representation of two languages as key to the essential translation processes of decoding, comprehending, transferring, re-assembling and re-verbalising. Of particular significance in his model is, I believe, the overriding importance he assigns to the L1 and L2 pragmatics components, which impact on the conceptual system and on the other linguistic levels. With regard to the joint conceptual system, the model can explain that expert translators more often than not do not need to access it as they move directly from the source to the target language (cf. Königs 1986 and Tirkkonen-Condit 2004 for empirical evidence).

The importance afforded by Paradis to the pragmatics component suggests the possibility of combining his model of the bilingual (translator's) brain with a functional-pragmatic translation theory of linguistic text analysis, translation and translation evaluation (House 2009, 2015, 2016). This theory is designed to explicate how pragmatic, textual, and lexico-grammatical meanings in an original text are re-constituted in a different context, with the translation text being either a functionally equivalent re-constitution of a source text or a complete contextual adaptation to the new L2 environment. The model provides a principled procedure for a comprehensive linguistic-textual analysis and, in the case of its use in evaluation, for a comparison of the textual profiles of source and target texts. It also integrates relevant contrastive linguistic and pragmatic research into its operation.

Two fundamental operations of translation are hypothesized in this model: overt translation and covert translation. They are defined as outcomes of different types of re-contextualization procedures with qualitatively different cognitive demands on the translator: overt translation is psycholinguistically and cognitively complex; covert translation is simple.

In overt translation, the addressees and recipients of the translation are quite "overtly" not directly addressed. While embedded in its new target culture context, the translation signals at the same time its 'foreign origin'. An example would be a speech given by a prominent representative of the L1 linguaculture, delivered at a specific time and place. The translator's work in translating this speech is here important and visible. The translation can be said to be a case of "language mention", resembling a quote. The addressees and recipients of the translation are meant to appreciate the source text in a new frame and a new discourse world. The pragmatics of the source text and the target text are mentally co-activated, and this is why overt translation can be called psycholinguistically and cognitively complex. 'Real' functional equivalence cannot be achieved, and is also not aimed at – only a kind of second-level functional equivalence is possible.


Covert translation, on the other hand, enjoys the status of an original text in the target linguacultural context. For the recipient of the translation in the target linguaculture, it is not marked pragmatically as a translation at all. It is a case of "language use", a functionally equivalent speech event created by the translator. There is no co-activation of the pragmatics of the source text and the target text in the recipient's mind, and it is this absence of mental co-activation which explains why covert translation can be said to be a psycholinguistically and cognitively simple act. Covert translation often involves massive intervention on the levels of language/text, register and genre. And in order to achieve the necessary functional equivalence, the translator needs to make allowance for the target text's pragmatics component. This can be done via the use of a so-called "cultural filter", a construct capturing differences in the linguaculturally determined conventions and expectation norms of source and target text addressees.

Cultural filtering should ideally be based on empirical cross-linguistic and cross-cultural research to guide and explain translators' choices. An example of such research is the series of studies conducted by the present author over many years on English-German discourse norms in oral and written texts in many genres. They point to differences in preferences for explicitation, directness, interpersonal focus and use of verbal routines (cf. House 2006a, b). With regard to other language pairs, there is a deplorable lack of systematic contrastive pragmatic work on register and genre variation, which renders a solid theoretical underpinning of translation studies in this respect next to impossible. What is clearly needed here is a combination of qualitative, quantitative, exemplar- and corpus-based as well as experimental cross-cultural research (for promising suggestions of such a combination, see e.g. Halverson 2010 and Alves et al. 2010).

Returning to Paradis' (2004, 2009) neuro-functional model of bilingualism: how relevant is it for linguistic-cognitive translation studies, and might it be combined with a functional translation theory such as, for instance, the one described above? Paradis' model is, I think, highly relevant for translation studies (cf. Malmkjær 2011, who also makes this point), and it may well be combined with an existing translation model (House 2015, 2016), for the following reasons:

1. The importance of the L1 and L2 pragmatics components in Paradis' model provides support for the assumptions underlying the functional-pragmatic translation theory described above, in particular with reference to:
   (a) the concept of the cultural filter in covert translation, with its hypothesized complete switch to L2 pragmatic norms
   (b) the hypothesized co-activation of the L1 and L2 pragmatics components in overt translation

2. Paradis' model supports the claim in the functional-pragmatic translation theory described above that overt translation is psycholinguistically more complex due to the activation of a wider range of neuronal networks – across two pragmatics-cum-linguistics representational networks (cf. Fig. 1.1) – in the translation process. It also supports the claim that covert translation is psycholinguistically simple, since only one pragmatics-cum-linguistics representational network – the one for L2 – is being activated in the process of translation. At the present time this is a hypothesis to be tested empirically.

1.6 Conclusion

For a new linguistic-cognitive orientation in translation studies that may emanate from a critical look at current research involving intro- and retrospection, behavioural experiments and neuro-imaging studies, a fresh attempt at theorizing might be a fruitful beginning. I have suggested that as a first step towards a more valid and reliable approach to investigating the translation process, one may look for a descriptively and explanatorily adequate neuro-linguistic theory of bilingualism that can be useful for, and compatible with, a theory of translation. The combination suggested in this paper is just one possible first attempt at constructing a rapprochement between the disciplines of cognitive science and linguistically-cognitively-oriented translation studies. Other more potent examples may be brought forward in the course of scientific inquiry, and it may well be that these effectively falsify the rather broad and general suggestion sketched in this paper.

References

Abutalebi, J. (2008). Neural aspects of second language representation and language control. Acta Psychologica, 128(3), 466–478.
Alves, F., Pagano, A., Neumann, S., Steiner, E., & Hansen-Schirra, S. (2010). Translation units and grammatical shifts: Towards an integration of product- and process-based translation research. In G. Shreve & E. Angelone (Eds.), Translation and cognition (pp. 109–142). Amsterdam: John Benjamins.
Aue, T., Lavelle, L. A., & Cacioppo, J. T. (2009). Great expectations: What can fMRI research tell us about psychological phenomena? International Journal of Psychophysiology, 73(1), 10–16.
Benjamin, W. (1923/1992). The task of the translator (H. Zohn, Trans.). In R. Schulte & J. Biguenet (Eds.), Theories of translation (pp. 71–82). Chicago: University of Chicago Press.
Cohen, M. A., & Dennett, D. C. (2011). Consciousness cannot be separated from function. Trends in Cognitive Sciences, 15(8), 358–364.
Cook, V., & Bassetti, B. (2011). Language and bilingual cognition. New York: Psychology Press.
De Groot, A. M. B. (1997). The cognitive study of translation and interpretation: Three approaches. In J. H. Danks, G. M. Shreve, S. B. Fountain, & M. K. McBeath (Eds.), Cognitive processes in translation and interpreting (pp. 25–56). Thousand Oaks: Sage.
De Groot, A. M. B. (2013). Bilingual memory. In F. Grosjean & P. Li (Eds.), The psycholinguistics of bilingualism (pp. 171–191). Oxford: Blackwell.
Dragsted, B. (2012). Indicators of difficulty in translation: Correlating product and process data. Across Languages and Cultures, 13(1), 81–98.
Gentzler, E. (2008). Translation and identity in the Americas: New directions in translation theory. London: Routledge.


Göpferich, S., & Jääskeläinen, R. (2009). Process research into the development of translation competence: Where are we, and where do we need to go? Across Languages and Cultures, 10(2), 169–191.
Halverson, S. (2009). Psycholinguistic and cognitive approaches. In M. Baker & G. Saldanha (Eds.), Routledge encyclopedia of translation studies (2nd ed., pp. 211–216). London: Routledge.
Halverson, S. (2010). Cognitive translation studies: Developments in theory and method. In G. Shreve & E. Angelone (Eds.), Translation and cognition (pp. 349–370). Amsterdam: John Benjamins.
Hermans, T. (1985). The manipulation of literature: Studies in literary translation. London: Croom Helm.
Hernandez, A. E. (2009). Language switching in the bilingual brain: What’s next? Brain and Language, 109(2), 133–140.
House, J. (2006a). Communicative styles in English and German. European Journal of English Studies, 10(3), 249–267.
House, J. (2006b). Text and context in translation. Journal of Pragmatics, 38(3), 338–358.
House, J. (2009). Translation. Oxford: Oxford University Press.
House, J. (2013). Towards a new linguistic-cognitive orientation in translation studies. Target, 25(1), 46–60.
House, J. (2015). Translation quality assessment: Past and present. Oxford: Routledge.
House, J. (2016). Translation as communication across languages and cultures. Oxford: Routledge.
Jääskeläinen, R. (2011). Back to basics: Designing a study to determine the validity and reliability of verbal report data on translation processes. In S. O’Brien (Ed.), Cognitive explorations of translation (pp. 15–29). London: Continuum.
Jakobsen, A. L. (2006). Research methods in translation: Translog. In K. Sullivan & E. Lindgren (Eds.), Computer key-stroke logging and writing: Methods and applications (pp. 95–105). Oxford: Elsevier.
Klein, D., Zatorre, R. J., Chen, J.-K., Milner, B., Crane, J., Belin, P., & Bouffard, M. (2006). Bilingual brain organization: A functional magnetic resonance adaptation study. NeuroImage, 31(1), 366–375.
Königs, F. (1986). Adhoc versus Rest-Block: Textuelle Elemente als Auslöser des Übersetzungsprozesses und didaktische Entscheidungshilfen. In W. Kühlwein (Ed.), Neue Entwicklungen der Angewandten Linguistik (pp. 15–36). Tübingen: Gunter Narr.
Levinson, S. C. (1997). From outer to inner space: Linguistic categories and non-linguistic thinking. In J. Nuyts & E. Pedersen (Eds.), Language and conceptualization (pp. 13–45). Cambridge: Cambridge University Press.
Malmkjær, K. (2011). Translation universals. In K. Malmkjær & K. Windle (Eds.), The Oxford handbook of translation studies (pp. 83–93). Oxford: Oxford University Press.
Nosek, B. A., Hawkins, C. B., & Frazier, R. S. (2011). Implicit social cognition: From measures to mechanisms. Trends in Cognitive Sciences, 15(4), 152–159.
O’Brien, S. (Ed.). (2011). Cognitive explorations of translation. London: Continuum.
Paradis, M. (2004). A neurolinguistic theory of bilingualism. Amsterdam: John Benjamins.
Paradis, M. (2009). Declarative and procedural determinants of second languages. Amsterdam: John Benjamins.
Pavlenko, A. (2014). The bilingual mind: And what it tells us about language and thought. Cambridge: Cambridge University Press.
Price, C. J., Green, D. W., & Von Studnitz, R. (1999). A functional imaging study of translation and language switching. Brain, 122(12), 2221–2235.
Prunc, E. (2011). Entwicklungslinien der Translationswissenschaft. Berlin: Frank & Timme.
Reiss, K., & Vermeer, H. J. (1984). Grundlegung einer allgemeinen Translationstheorie. Tübingen: Niemeyer.


Shamma, T. (2009). Translation and the manipulation of difference: Arabic literature in nineteenth-century England. Manchester: St. Jerome.
Shreve, G. M., & Angelone, E. (Eds.). (2010). Translation and cognition. Amsterdam: John Benjamins.
Sperber, D. (1996). Explaining culture: A naturalistic approach. Oxford: Blackwell.
Stolze, R. (2003). Hermeneutik und Translation. Tübingen: Gunter Narr.
Suhler, C. L., & Churchland, P. S. (2009). Control: Conscious and otherwise. Trends in Cognitive Sciences, 13(8), 341–347.
Tirkkonen-Condit, S. (2004). Unique items: Over- or under-represented in translated language? In A. Mauranen & P. Kujamäki (Eds.), Translation universals – Do they exist? (pp. 177–184). Amsterdam: John Benjamins.
Wilss, W. (1996). Knowledge and skills in translator behavior. Amsterdam: John Benjamins.

Chapter 2

Translating and Interpreting as Bilingual Processing: The Theoretical Framework

Yuanjian He

2.1 The Core Assumptions

No matter how we look at it, translation or interpreting as an end-product is the salient outcome of bilingual processing in the brain of the working translator/interpreter. This intrinsic or inherent aspect of translating or interpreting is an obvious subject matter for scientists in the field of neurocognitive bilingualism as well as for translation scholars who concern themselves with the neurocognitive aspects of the translating or interpreting processes. For both groups, the goal of the enquiry, which we may simply call the scientific study of translation and interpreting, is to find out what happens in the brain of the working translator and interpreter that brings about what we read as a translated text or hear as an interpreted speech. For scientists, it is by investigating the neural basis of, and the neurocognitive processes by which, a bilingual handles translating or interpreting tasks that they gain insights into this aspect of bilingualism, including the cerebral organization and operations of the bilingual brain (cf. Paradis 1994a; Christoffels and De Groot 2005; De Groot and Christoffels 2006; De Groot 2011, 2013). For translation process researchers, the scope of investigation is less ambitious and more focused. At this point in time, there is an urgent need both for investigative methodologies to support empirical work and for feasible theoretical frameworks to evaluate and explain its results. Methodologies include deploying large-scale bilingual parallel corpora and/or applying laboratory (or sub-laboratory) methods for cognitive and neuroscience research (such as key-logging, eye-tracking, EEG, MEG, PET, fMRI, fNIR and so on). Corpus technology is more accessible to the majority of translation researchers, since the lab methods require larger funding and more elaborate conditions to set up and run.

Y. He (*)
University of South China, Hunan, China
e-mail: [email protected]
© Springer Nature Singapore Pte Ltd. 2019
D. Li et al. (eds.), Researching Cognitive Processes of Translation, New Frontiers in Translation Studies, https://doi.org/10.1007/978-981-13-1984-6_2


However, despite its application to translation research over the past two decades, the progress that has been made with corpus technology is still limited, partly because some of the techniques are designed mainly for the monolingual corpora common to corpus-linguistic operations. For example, the automated lexis-based concordance diagnosis is yet to be perfected for the bilingual (or multilingual) parallel texts which translation research has to deal with, where manual tagging of textual units in the source text and of the corresponding rendering strategies in the target text(s) is usually unavoidable and indispensable – not to mention the enormous task of conceptually and descriptively characterizing a textual unit in the ST and its rendering strategies in the TT(s) before tagging. Renovation and redesign are therefore called for. The pros and cons of applying corpus technology to translation research have been critically reviewed (e.g., Kenny 2007; Pym 2008; He 2016, 2017), and we will leave those issues aside and address them separately in another paper.

In what follows, we will first present an integrated perspective on language processing (for both monolingual and bilingual settings). It is deemed to be an entirely fresh one in the sense that it integrates four major theories of human language and cognition: (a) the theory of Universal Grammar (Chomsky 1995, 2012), (b) the computational theory of language processing of the mind (Pinker 1994, 1999, 2008), (c) the theory of neurocognitive bilingualism (De Groot 2011), and (d) the theory of neurofunctional control in the bilingual brain (Paradis 2004, 2009). They represent four key assumptions.

Firstly, language processing is subserved by three subsystems of the brain: (a) the Conceptual-Intentional-Contextual (CIC) System (also known as the Thought Systems), which formulates concepts, intentions and contexts for outgoing speech and interprets the same in incoming speech; (b) the Language Faculty, which constructs structured linguistic expressions representing the outgoing concepts, intentions and contexts, and parses the incoming expressions; (c) the Articulatory-Perceptual (AP) System (also known as sensor-motors), which verbalizes outgoing linguistic expressions and perceives incoming ones.

Secondly, for a bilingual speaker such as the translator or interpreter, the above-mentioned three subsystems contain the relevant and necessary contents of two languages, so as to make bilingual processing possible.

Thirdly, an activation-inhibition mechanism operates as cognitive control for mono- as well as bilingual processing. In the latter case, the motivation to process one language rather than the other will lower the activation threshold of the targeted language and raise that of the non-targeted one.

Fourthly, the process and the outcome of language processing hinge on the overall operational interaction between memory and computation in the relevant subsystems of the brain. Operationally, each subsystem is partly memory and partly computation. Memory applies a priori, and whenever it fails, computation takes over. This system design necessitates a processing economy, to the effect that processing by memory is cognitively less costly than processing by computation, and that the cost of computation itself varies across different computational routes.


Building on those assumptions, we will then present a theoretical framework for treating translating or interpreting as bilingual processing, elaborating on previous proposals (e.g., Christoffels and De Groot 2005; De Groot 1997, 2011; De Groot and Christoffels 2006; Paradis 1994a, b). Cerebral correlations between language processing and the subserving brain areas are also discussed before concluding.
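As an aside to the methodological point made above, the kind of manual tagging required for bilingual parallel corpora can be pictured with a minimal sketch. The record structure, the field names and the strategy label below are purely illustrative assumptions of ours, not an existing corpus schema:

```python
# A hypothetical record for one manually tagged unit in a bilingual
# parallel corpus: a source-text unit, its rendering(s) in the target
# text(s), and a manually assigned rendering-strategy label.
from dataclasses import dataclass, field
from typing import List

@dataclass
class TaggedUnit:
    st_unit: str                                             # textual unit in the source text
    tt_renderings: List[str] = field(default_factory=list)   # rendering(s) in the TT(s)
    strategy: str = ""                                       # e.g., "literal", "paraphrase" (labels invented here)

# One illustrative entry; the pinyin rendering is invented for the example.
unit = TaggedUnit(
    st_unit="to keep close tabs on someone",
    tt_renderings=["miqie guanzhu mouren"],
    strategy="idiomatic paraphrase",
)
```

It is precisely this unit-by-unit characterization, multiplied over thousands of units, that makes the tagging task described above so labour-intensive.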

2.2 An Integrated View of Language Processing

Among the definitions of language processing, the following is one that succinctly covers the linguistic as well as the conceptual aspect of it:

Language processing refers to the generation of an utterance from the intention to communicate a message to its acoustic realization, or from the perception of the acoustic signal to the extraction of a message (comprehension). The message itself is in the conceptual domain, hence not linguistic; it is alternately the output and input of the linguistic system. (Paradis 2004, p. 240)

Bilingual processing, such as that instantiated in translating or interpreting, involves two languages instead of one in the alternately operated production and perception process. It is a process so cognitively complex and complicated that any theoretical perspective on it which does not address the fundamentals of the human higher cognitive systems – the Language Faculty, memory and the thought systems – will simply be self-defeating.

2.2.1 Language Faculty

For Noam Chomsky (e.g., 1995, 2012), every child is born with an initial state of the Language Faculty before being exposed to a specific language, and this is why the child is able to speak at all. The initial state is perceived as a set of language-nonspecific settings for grammatical operations (morphological, syntactic, phonological, prosodic, phonetic, etc.) and is thus called the Universal Grammar (UG). After the child is exposed to a particular language, language-specific settings will be mapped onto the brain, probably stored as procedural memory subsets. For Chomsky, language is meaning with sound, instead of sound with meaning (the traditional belief since Aristotle). Linguistically, meaning is the semantic representation of “thought”, decomposable into “concepts, intentions and contexts”. Technically, it is characterized by an infinite discrete set of structured expressions generated by the Language Faculty.

Fig. 2.1 The language faculty from a speech production point of view [diagram: the Language Faculty comprises Lexis (with inflectional affixation and complex word formation) and Syntax, linked via the Phonetic Interface to the Articulatory-Perceptual System (sensor-motors) and via the Logical Interface to the Conceptual-Intentional-Contextual System (thought systems)]

When the thought systems initiate “concepts, intentions and contexts” (for purposes of communication), those “concepts, intentions and contexts” will be construed into structured expressions by the Language Faculty. While the intended “concepts, intentions and contexts” (by the speaker) are semantically, and internally (to the subsystems of the brain), represented by structured expressions, the sensor-motor systems operate to give those structured expressions an externalization – sound or sign. Thus, the Language Faculty does two things: it connects to the thought systems and sensor-motors on the one hand, and generates structured expressions on the other. For those tasks, it has four components: Lexicon, Syntax, the Phonetic Interface (to sensor-motors) and the Logical Interface (to the thought systems), as illustrated in Fig. 2.1.¹ Figure 2.1 is a refined and yet confined view of the Language Faculty from a speech production point of view. The arrows indicate the links between the different components (to be explained as we continue), with the bold-lined arrows (here and henceforth) indicating where the lexical feeds come from. In a nutshell, the thought systems initiate “concepts, intentions and contexts” for lexical representation in Lexicon, where relevant lexical items (root words, words with derivational and/or inflectional affixes, compounds, etc.) are stored and assembled; the selected lexical items are then assembled in Syntax into structured expressions (more commonly called sentences), which are assigned appropriate semantic and phonetic features by the Logical and the Phonetic Interfaces respectively, before being interpreted by the thought systems and verbalized, if so wished by the speaker, via sensor-motors. As an arrow indicates, a phrase formed in Syntax can be input back to Lexicon as a component for a compound, e.g., [off-the-range] targets, [out-of-the-stock] goods, and so on (cf. Pinker 1999, p. 205).

¹ Chomsky’s (e.g., 1995, 2012) original term is “the Conceptual-Intentional System”; we expand the term to “the Conceptual-Intentional-Contextual System” for the purpose of covering language processing as a whole, including a pragmatic element. See Kiparsky (1982) for the composition of Lexicon; Chomsky (1995) for all four components of the Language Faculty; Pinker (1999, pp. 202–205) for the connections between Lexicon and Syntax; and He (2006b) for an assembly of all those elements. All figures in this paper are meant to illustrate the relevant concepts and the relationships between them, as exposited in the relevant theories, rather than being exact renditions of those concepts and relationships.


Technically, a sentence is assembled on what may be called the head-of-the-phrase principle, a language-nonspecific rule which says that each and every lexical item of a sentence is the head of its own phrase, and that a sentence is the ultimate phrase formed by combining all the phrases within it. For instance, the head of a sentence in a language like English is the tense marker, which takes one or two arguments (a subject only, or a subject and a predicate); each of the arguments has its own head, e.g., a noun as the head of a subject and a verb as the head of a predicate, which in turn may take one or two arguments (a direct object; a direct and an indirect object; a direct object and a locative/goal/source/etc. complement). Each head forms its own phrase, e.g., a noun forming a noun phrase and a verb forming a verb phrase, and then merges into another phrase with a higher head. So, a noun phrase merges into a tense-marker phrase to function as a subject, or into a verb phrase to function as an object, and a verb phrase merges into a tense-marker phrase to function as a predicate. Computed on the head-of-the-phrase principle, a structured expression is formed by exhausting the supplied lexical items. The head-of-the-phrase principle is considered to be innate and hence universal across all languages. In fact, the internal relations between constituents within a phrase are also thought to be innate (Pinker 1994, pp. 107–109), another language-nonspecific phenomenon that may be called the head-complement/modifier/specifier rule, where complement/modifier/specifier refers to a phrase combining itself with the head of another phrase. Those relations manifest themselves in both the syntax and the morphology (i.e., the combining of lexical items into a complex word form) of any language. Children do not have to learn them specifically but simply pick them up through language exposure. For instance, “She cleans carpets” is [specifier [head-complement]] in English syntax and “carpet-cleaning person” is [[head-complement] specifier] in English morphology, both displaying the same structural hierarchy.²

Although vital to making language actually usable, the thought systems and sensor-motors are not language-dependent. Sensor-motors distinguish between speech and non-speech sounds, verbalize the abstractions of speech sounds sent as instructions from the Language Faculty in speech production, and transmit abstractions of speech sounds back to the Language Faculty in speech perception. “Thought” in its very nature is decomposable into concepts, intentions and contexts (a reason that makes language usable at all; Pinker 2008, pp. 90–91, 150). Thus, the thought systems consist possibly of a number of things: (a) a repertoire of concepts (such as entity, type, time, space, place, causation, resemblance, etc.); (b) a repertoire of intentions (such as wanting to express feelings and to do things); (c) a set of interpretive rules and norms for applying concepts and intentions to various communicative contexts (such as social and cultural settings); (d) metalinguistic knowledge of the language(s) of the speaker, which he/she has learned consciously and/or from contextual inference; (e) processes of metaphorical abstraction from linguistic expressions; and so on and so forth.

² A language-specific case is [[carpet-clean]-er], where the specifier is derivationally an affix to the verb but semantically understood to be hierarchically higher than the complement.


We assume that all these concepts, intentions, contexts and so on are stored as memory subsets; when encoded, they are semantically and pragmatically represented by lexical and syntactic constructs computed on grammatical rules. A central issue concerning the relations between the thought systems and the Language Faculty is what the content of the thought systems – concepts, intentions or contexts – may look like in general after it is encoded by the Language Faculty. This issue was rarely discussed before but is vitally important for the study of translating and interpreting, where the thought systems interpret and conceptualize the message in the source language and then have it encoded in the target language by the Language Faculty. Here it suffices to say that concepts/intentions/contexts are of either a universal or a culture-specific nature. The universal ones are not decomposable by lexical representations alone (hence the reason for being universal; Pinker 2008, pp. 90–91, 150). Some universal concepts/intentions/contexts will be encoded in similar ways across languages, and others will be encoded in language-specific ways. Culture-specific concepts/intentions/contexts will always be encoded language-specifically. Understanding this is significant for the study of the cross-linguistic encoding of any “human thought” initiated from an original source, as in translating and interpreting. We will discuss the whole issue in a separate paper.

Chomsky’s main concern has been with how the Language Faculty internally generates structured expressions, and he is “a paper-and-pencil theoretician” (Pinker 1994, p. 52). In the context of language processing, however, speech production is just one direction of language traffic, so to speak; the other is when the speaker is not producing an expression but rather receiving one. To see the whole picture, a more comprehensive cognitive make-over of language processing is needed. Such an account is purposefully provided by Steven Pinker in his trilogy on the cognitive psychology of language (of 1994, 1999 and 2008), which advocates a computational theory of the mind regarding how language is processed in both speech production and perception.
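To make the head-of-the-phrase principle described above concrete, the following toy sketch (an illustration of ours, not part of Chomsky’s or Pinker’s formal apparatus) builds a sentence by letting each lexical item project a phrase and merging phrases until the supplied items are exhausted:

```python
# Toy illustration of the head-of-the-phrase principle: every lexical
# item heads its own phrase, and a sentence is the ultimate phrase
# formed by merging all the phrases within it.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Phrase:
    head: str                                            # lexical item projecting this phrase
    label: str                                           # category of the projection, e.g. "VP"
    merged: List["Phrase"] = field(default_factory=list) # phrases merged into this one

def merge(host: Phrase, dependent: Phrase) -> Phrase:
    """Merge a dependent phrase into a host phrase; the host's head
    remains the head of the combined (higher) projection."""
    host.merged.append(dependent)
    return host

# "She cleans carpets": each word projects a phrase; the tense marker
# heads the sentence (the tense-marker phrase), per the discussion above.
subj  = Phrase("she", "NP")
verb  = Phrase("clean", "VP")
obj   = Phrase("carpets", "NP")
tense = Phrase("-s (PRESENT)", "TP")

vp = merge(verb, obj)               # object NP merges into VP
tp = merge(merge(tense, subj), vp)  # subject and predicate merge into TP
print(tp.label)                     # TP: the ultimate phrase, i.e. the sentence
```

The sketch deliberately ignores word order (the head-parameter taken up in Sect. 2.2.3): which side a dependent merges on is language-specific, while the merging operation itself is meant to be universal.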

2.2.2 Interplay of Memory and Computation

For Pinker, the mind (or the relevant subsystems of the brain) is part memory (which stores concepts, intentions and contexts and their lexical and phonological representations – root words, complex words, affixes and so on) and part vast computing power (which computes memory items into structured expressions, including root/bound words and/or affixes into complex word forms). Memory and computation represent the division of labour of cerebral operations, and they interact and complement each other to make language processing possible at all. In essence, there are two processing routes: one we may call “the memory-computation loop”, as illustrated in Fig. 2.2, and the other “the memory loop”, as illustrated in Fig. 2.3 later on.

Figure 2.2 is a development of Fig. 2.1. Instead of being oriented solely to speech production, Fig. 2.2 takes perception into account, too.

Fig. 2.2 The memory-computation loop [diagram: the Conceptual-Intentional-Contextual System (thought systems) and long-term memory (words and other items) feed the Grammar-Parser via the Logical Interface; the Grammar-Parser connects via the Phonetic Interface and short-term memory to the Articulatory-Perceptual System (sensor-motors), which handles speech production and speech perception]

The Grammar-Parser (a set of lexical and syntactic rules), together with its two interfaces – the Logical Interface (a set of semantic rules) and the Phonetic Interface (a set of phonological, prosodic and phonetic rules) – is the computational part of the Language Faculty. The memory part is a collection of memory items (words and others) in long-term memory (LTM). In speech production, the thought systems, driven by communication needs, will formulate “a mentalese” or “a message” that instructs the Language Faculty (lexical memory and the Grammar-Parser) to generate structured expressions, to be verbalized via sensor-motors to meet the communication needs. The process necessitates retrieving words from memory, which the Grammar-Parser computes into structured expressions. Presumably, single words are sometimes retrieved and verbalized without going through the stage of structural computation, as single-word utterances prove.

In speech perception, the sounds or signs of an incoming expression are picked up by sensor-motors in a sequence heard by the ears or read by the eyes. The abstraction of any identifiable phonetic and phonological content goes to lexical memory and the Grammar-Parser. If this content can be registered with relevant entries of the memory, i.e., it is known to the hearer, it serves as lexical feed to the Grammar-Parser for structural reconstruction, or parsing as it is technically known, i.e., it is computed into a structured expression (in the hearer’s brain) identical to what the original speaker said (Pinker 1994, pp. 198–206).

Fig. 2.3 The memory loop [diagram: as Fig. 2.2 but without the Grammar-Parser; long-term memory (words and other items) links the Conceptual-Intentional-Contextual System (thought systems) directly, via short-term memory, to the Articulatory-Perceptual System (sensor-motors) for speech production and speech perception]

Either in speech production or in perception, the job of the Grammar-Parser is to build structured expressions, with lexical feeds coming directly from memory (as in production) or from what is registered with memory by word sequences via sensor-motors (as in perception). Recall that structure-building is based on the head-of-the-phrase principle in production. There is evidence that the same principle applies to parsing, too. For instance, listening to a speech while sight-reading the script presumably helps locate the head of a phrase, e.g., the verb of a predicate or a verb phrase, thus making parsing easier in the brain. Also, studies on comprehension in simultaneous interpreting (SI) show that a clause is the preferred unit of interpreting, whereby the verb phrase is the crucial part of a unit and the interpreter holds verbalization until hearing the verb (Christoffels and De Groot 2005, pp. 457–458). After the Grammar-Parser has reconstructed an incoming expression, it is interpreted in the thought systems and becomes a piece of mentalese, or a message containing the original conceptual and intentional information. In cases where an incoming expression is in the form of a single word, it need not go through the phase of parsing. Instead it goes to memory for a match. When the match is found, it is interpreted in the thought systems.

When the two parts of the mind, memory and computation, interplay to tackle the task of language processing, the operational tenet is that while memory applies a priori, computation takes over whenever memory fails – a tenet which Pinker tries to strike home in his theories (1994, pp. 427–429; 1999, pp. 300–304).


As illustrated in Fig. 2.2, memory items stored in long-term memory are retrieved to the Grammar-Parser, which is part of the computational networks of the mind. In production, when a word cannot be retrieved from memory (because it is not stored there, for instance), an alternative word or a syntactic solution may be sought. For example, when the word “cup” is not retrievable, the word “mug” could be retrieved as an alternative; or a phrase like “a small cylindrical container with a handle used for drinking tea or coffee” could be computed to fulfil the need of verbalizing the concept of a “cup”. In perception, when a word is heard but not registered with memory, its lexical meaning is not interpretable (by the thought systems) if it comes in as a single-word utterance. If it is part of an expression, it will not be so easily interpreted for either its lexical meaning or its structural function as a constituent, though the structural and pragmatic contexts may provide some clues for some sort of interpretation.

Also, in speech production, any memory item (a word, phrase, sentence or even text) can be verbalized without going through grammatical construction. Likewise, in speech perception, any item that comes in and registers with a match (i.e., an existing entry) in memory will be interpreted by the thought systems without going through parsing. In other words, memory can be the sole processor for speech, provided that the memory item makes a coherent utterance in production and a sensible conceptual and/or intentional interpretation in perception; otherwise, computation will kick in for grammatical (re-)construction. This understanding is vitally important in the current context of discussion.

In purely theoretical terms, when memory acts as the sole processor for speech, the processing route looks as if the Grammar-Parser were taken out of Fig. 2.2, and it becomes what may be called the memory loop, as illustrated in Fig. 2.3. In Fig. 2.3, memory is the sole supplier of what is to be verbalized in production and what is to be matched in perception. Here, as stated earlier, any memory item can be verbalized without grammatical construction in production, and any incoming item that registers with a match in memory is interpreted without parsing in perception.

The average adult speaker memorizes a good number of words in various forms (affixes, free and bound roots, compounds, etc.) and items larger than words – phrases, sentences and even texts – e.g., proper names of a phrasal form like “the International Monetary Fund”; idiomatic expressions like “to keep close tabs on someone”; sentences that are frequently spoken or heard, like “you have the right to remain silent when questioned; anything you say or do may be used against you in a court of law”. When a poem is recited or a song is sung (without the singer sight-reading the lyrics), it is retrieved (probably bit by bit from its entirety in memory) and verbalized, rather than its component words being retrieved from memory and syntactically computed into the required stanzas at that very moment, which would have been the case when it was composed by its author. Likewise, when a poem or a song is heard (or read), if it is already memorized by the hearer (or reader), it will pass through the AP system and be interpreted straightaway by the thought systems, without being parsed first, though the parsing function could well be activated (for the brain is functioning all the time), just not so actively engaged as it otherwise would be.


Extra-linguistic information about the world around us in relation to language use – contextual clues (relating to manners of speaking, settings and scenes in communication, etc.), as well as facts and learned knowledge – can enter memory too, stored as long-term memory subsets. These are either pragmatic relations governing language use or encyclopaedic knowledge that the speaker may have. How an item gets memorized has to do with how frequently the speaker is exposed to it, often reinforced by rote learning, such as learning vocabulary in a language class. Of course, speakers vary in how much linguistic and other types of information they have stored in memory. It depends on the speaker’s biological and mental maturation (up to a point) and life experience. However, the underlying principles that govern the cognition of speech processing are the same for everyone: (a) it takes both memory and computation to make it happen; (b) computation takes over when memory fails. The implication is profound in two ways. Firstly, one speaker’s memory-based speech production or perception could be another’s computational labour. Such understanding is vital when we observe the speech out-flow, for instance, of a simultaneous interpreter on site and compare it with that of another interpreter. Secondly, and perhaps more importantly, the way speech processing works, i.e., the interplay of memory and computation, is not designed randomly but for a reason. The reason is system economy (assuming all biological systems have to work economically one way or another). For Pinker, the “memory vs. computation” or “analogue vs. digital” way of processing information is simply the way the universe is, of which the human cognitive systems are just a part (Pinker 1999). But memory-based processing is limited, because anyone’s memory is ultimately limited. The Grammar-Parser has to be there to handle complex computational tasks (such as applications of morphological and syntactic rules) that fall beyond memory’s capacity.
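The tenet that memory applies a priori and that computation takes over on failure can be caricatured in a few lines of code. The sketch below is our own illustration of the division of labour, not a model from Pinker; the lexicon contents and the compose() stand-in for the Grammar-Parser are invented for the example:

```python
# Toy sketch of "memory a priori, computation on failure":
# verbalizing a concept tries direct lexical retrieval first and
# falls back to syntactic composition only when retrieval fails.

lexical_memory = {           # concept -> stored word (illustrative)
    "MUG": "mug",
}

def compose(concept: str) -> str:
    """Stand-in for the Grammar-Parser: compute a phrase for a concept
    with no stored word. The paraphrase mirrors the "cup" example above."""
    paraphrases = {
        "CUP": "a small cylindrical container with a handle "
               "used for drinking tea or coffee",
    }
    return paraphrases[concept]

def verbalize(concept: str) -> str:
    # Memory applies a priori: the cheap lookup comes first.
    if concept in lexical_memory:
        return lexical_memory[concept]
    # Memory fails: the costlier computational route takes over.
    return compose(concept)

print(verbalize("MUG"))   # retrieved from memory
print(verbalize("CUP"))   # computed as a phrase
```

The cost asymmetry between the two branches – a constant-time lookup versus a construction procedure – is a crude analogue of the processing economy the chapter attributes to the memory-computation design.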

2.2.3 Neurocognitive Bilingualism

Neither Chomsky nor Pinker has touched on the issue of bilingual processing in any systematic exposition. Nevertheless, the central assumptions of their theories do not differ much from those of neurocognitive bilingualism. For the latter type of research, the core assumptions, according to De Groot (2011, p. 5), a leading neurocognitive psychologist of the field, include that “linguistic knowledge units like words, phonemes, and, perhaps, grammatical rules each have a representation in long-term memory, and that language processing involves the concerted activation and deactivation of subsets of those memory representations,” and these assumptions apply “to both language comprehension and language production.” The key difference, as we can see, is that for Chomsky and Pinker the rule systems of the Language Faculty are not memory subsets; rather, they are innate and hence universal at the initial state (i.e., when the child is born), and instantiated as


genetically-transmitted and “hard-wired” biological organs of the brain (Chomsky 2012, p. 4; Pinker 1994, pp. 307–309), whereas for people like De Groot the issue remains under debate (2011, pp. 6–7). Taking the Chomskian and Pinkerian view, the Language Faculty need not be acquired and stored as procedural memory subsets. Only language-specific properties will be acquired and stored as such at later stages of the child’s language development. Neurocognitively, this is perhaps the fundamental difference between the initial state of the Language Faculty and the specific settings of a particular language, which the child acquires and tunes into the initial state.

Take, for instance, the so-called head-of-the-phrase principle. Even though this is supposed to be innate and hence universal, languages have their own ways of arranging items to precede or follow the head of a phrase. For example, French noun phrases are head-initial, as in “la [bannière rouge]”, whereas English and Chinese ones are head-final, as in “the [red banner]” and “[hong (red) qi (banner)]”. In fact, this is how the six major word orders across the world’s languages are arranged when we take the verb as the head of a clause: SV(O), S(O)V, VS(O), V(O)S, (O)VS and (O)SV. The child who is exposed to a particular language will be tuned into the head-initial or the head-final order while he/she applies the innate head-of-the-phrase principle. In other words, some grammatical rules are innate and universal, like the head-of-the-phrase principle, and others are not innate but language-specific, like the head-initial or the head-final order, sometimes called the head-parameter. If this line of theorization holds, the innate grammatical rules are not stored in long-term memory but are intrinsic properties of the Language Faculty. On the other hand, language-specific requirements or rules, which the child learns after being exposed to the language, are likely to be stored as memory subsets in long-term memory. How the two sets of rules, innate (and universal) and learned (and language-specific), operate together in a concerted manner remains to be understood better. It is likely to be a matter of cognitive control. For the current discussion, we will take the Chomskian and Pinkerian line of theorization and leave to the scientific community the burden of proving whether or not the Language Faculty is a genetically-transmitted and hence “hard-wired” biological organ of the human body.

Taking this view, and theoretically speaking, there is not much difference in principle between unilinguals and bilinguals in how the mental grammar operates for them. Both groups possess an inherited set of innate rules and have acquired some language-specific requirements. The only difference lies in the fact that the bilingual speaker, or the multilingual speaker for that matter, has experienced exposure to two (or more) languages at some point in life after birth and has thus acquired the language-specific requirements of those languages, which he/she speaks at a level of proficiency linked to the circumstances under which those languages were acquired. Apparently, our translator/interpreter belongs to this group of speakers.

There are a number of theories and hypotheses, accompanied by a greater number of studies, on how a bilingual speaker acquires L2 (the second language after his/her native one, often termed L1), how his/her bilingual mental lexicon is organized and words of L2 or L1 are retrieved, and how syntactic processing of L2 may differ from


that of L1. For our purpose of treating translating or interpreting as an act of bilingual processing, we take a minimalist approach with a few core assumptions. Firstly, we will not discuss acquisition, for we assume that the translator/interpreter is the ideal bilingual speaker, who as an adult has already acquired both L1 and L2 at a native level of proficiency. Either language can be the SL (source language) or the TL (target language). And the SL is always the language that comes in for perception, the TL the one that goes out as production.

Secondly, we assume that the translator/interpreter as the ideal bilingual speaker will process the syntax of L1 and that of L2 at an equal level of proficiency. In the study of bilingualism, there are three approaches to L2 syntactic processing. One approach predicts a knowledge deficit, another a processing deficit, and yet another predicts both. The knowledge-deficit approach, known as the Interpretability Hypothesis, stipulates that L2 lacks the same syntactic representation in the brain as L1 but is processed just the same (Tsimpli and Dimitrakopoulou 2007). The processing-deficit approach, also known as the Continuity Hypothesis, holds the opposite view (Hopp 2009, 2010). Combining both views is the so-called Shallow Structure Hypothesis, which holds that L2 lacks both the same level of syntactic representation and the same level of processing as L1 (Clahsen and Felser 2006). There is evidence for all three approaches, and the research results are far from conclusive at this stage. One indicator is L2 proficiency. “Grammatical violations elicit the same patterns of ERP responses (the N400 and P600), and activity in the brain regions, in proficient L2 speakers and native speakers, but different ones in non-fluent L2 users”, and proficient L2 speakers even deploy L2 parsing strategies when parsing L1 sentences (De Groot 2011, pp. 81, 219). As we know, in normal circumstances of professional translating and interpreting, the translator/interpreter is, or has to be, a very proficient bilingual in both L1 and L2 (whichever is his/her native tongue makes little practical difference). Therefore, to regard him/her as the ideal bilingual speaker has sound empirical grounds.

Thirdly, we assume that the mental bilingual lexicon contains all the core lexis of both languages, sufficient for any general purpose of translating or interpreting between them. Because translating or interpreting often focuses on a subject area – for instance science, technology, medicine, tourism, military affairs, government and public administration and so on – the translator/interpreter cannot be expected to memorize all the related words and jargon across the subject areas. This is where the content of the bilingual mental lexicon of one translator or interpreter differs from that of another. In particular, if he/she has worked in a subject area for a long time and become familiarized with the relevant words and jargon in both languages, there will be a subject-related subset of L1-L2 lexical pairings stored in long-term memory. The larger the subset is and the deeper it is stored in memory, the more experienced the act of translating or interpreting will appear to be. This has become known as “the cognitive signature” of the translator or interpreter (cf. Paradis 1994a; He 2011).
However, the so-called “pairing” does not mean that a pair of L1-L2 words is stored as an inseparable unit, but rather it means that while they are stored as individual units, there is such a low threshold of activation when


one of them presents itself as the SL word that it will simultaneously activate the other as the target word.

A central and intensely-studied topic in neurocognitive bilingualism is how the mental lexicon of a bilingual is organized. First, does it equal the monolingual lexicon of L1 plus that of L2? The answer is yes, because, no matter how the individual has acquired the two languages, the lexis of L1 and that of L2 constitute the subsystems of the larger system that combines them, i.e., the bilingual lexicon. Second, what is the link between the two subsystems? It is not lexical meaning, since lexical meaning is part of a language, and the lexical semantics of one language (L1 or L2) does not always converge with that of another. For instance, the meanings of “borrow” and “lend” are encoded by two separate words in English, i.e., “to borrow” and “to lend”, but they are encoded in one word in Chinese, i.e., “jie” (to borrow/lend). Sometimes, such a one-word-in-L1 vs. two-words-in-L2 situation is cross-category and evokes very complex grammatical operations. For instance, the adjective “solemn” in English also has a causative verb form, “to solemnize”, but Chinese has only the adjective form “yansu”, which on the surface can be used in the same way as “to solemnize” in English, if we take “xuexiao yansu-le jilv” in Chinese as the Chinese equivalent of “the school has solemnized its disciplines”. The Chinese sentence also means “the school has tightened/enforced its disciplines”. All three verbs in English, “to solemnize/tighten/enforce”, have a causative meaning, i.e., “to cause . . . to become solemn/tight/enforced”, carried by their morphological markings (-ize/-en/en-). The question is: since the Chinese “yansu” has no morphological marking for causativity, where does its causative meaning come from? Alternatively, “xuexiao yansu-le jilv” in Chinese can be re-phrased as “xuexiao shi jilv yansu-le” (the school has made its disciplines become solemn). The minimal contrast between them, i.e., “yansu” alternating its syntactic positions in accordance with the presence or absence of the syntactic causative marker “shi” (to make/cause), makes us believe that, instead of getting its causativity from morphology like the English “to solemnize”, “yansu” in Chinese gets its causativity from syntax.

Now let us return to “to solemnize” and “yansu” being lexical entries in English and Chinese respectively. The point they illustrate is that both cover a concept of causativity that lies beyond the language-specific mechanisms that operate to encode this concept in English and in Chinese separately. In other words, they, together with the example mentioned earlier, demonstrate that the lexical semantics of one language does not equal that of another, so that it cannot serve as the link between the sub-lexicons of a bilingual. What is likely to be the link is a higher conceptual system, a “conceptualizer” (De Groot 2011, p. 225), which is language-independent and stores “concepts” (Paradis 2004, pp. 199, 201). A concept is the mental representation or conceptualization of “a thing (object, quality or event) formed by combining all of its characteristics or particulars” and “can be acquired through experience, by organizing features of perception into coherent wholes” (Paradis 2004, p. 199). The notion that a conceptualizer links the sub-lexicons of a bilingual is known as the “Three-Store Hypothesis” (Paradis 2004, pp. 196–198).
In the current context of discussion, the conceptualizer per se is part of the Conceptual-Intentional-Contextual System, or


simply the thought systems, mentioned earlier in Figs. 2.2 and 2.3. According to Pinker (2008), word meanings can be decomposed into certain basic concepts that are indispensable to a human’s intellectual capacity, concepts such as “cause”, “means”, “event” and “place”; as such, they are considered to be innate (2008, pp. 90–91, 95). The idea that some concepts are innate, being so basic and universal that word meanings can be decomposed into them, while others are a little more concrete, acquired through experience of physical reality and human life, and specifically represented in word meanings, is parallel to the theory of UG mentioned earlier, which stipulates that some grammatical rules are innate and universal, and others acquired and language-specific. With the Language Faculty and the thought systems each responsible for a subsystem of human higher cognition, by way of genetic transmission and life experience respectively, the picture of how language works in principle seems to be getting clearer than ever before. This is perhaps not surprising, since “language is only usable with the support of a huge infrastructure of abstract mental computation” (Pinker 2008, p. 150). When a concept is neurologically called upon as part of the neural arousal for communication, it is encoded through the Language Faculty (Lexicon and Grammar-Parser) and then verbalized through sensor-motors. The question is: when faced with a bilingual lexicon and Grammar-Parser, how is the ensuing encoding carried out in the selected language, L1 or L2, instead of the unselected one? This has to do with language control in bilinguals.
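Before turning to control, the Three-Store Hypothesis invites a simple schematic: two language-specific sub-lexicons linked through a shared, language-independent store of concepts rather than linked to each other directly. The sketch below is a toy illustration of ours, not Paradis’ notation; the concept label is an invented placeholder:

```python
# Toy illustration of the Three-Store Hypothesis: the L1 and L2
# sub-lexicons are linked only via a language-independent conceptual
# store, never word-to-word.

CONCEPT = "TEMPORARY-TRANSFER-OF-POSSESSION"   # invented concept label

lexicon_en = {"borrow": CONCEPT, "lend": CONCEPT}   # English sub-lexicon
lexicon_zh = {"jie": CONCEPT}                       # Chinese sub-lexicon

def tl_candidates(sl_word, sl_lexicon, tl_lexicon):
    """Map an SL word to TL candidates via the shared concept store."""
    concept = sl_lexicon[sl_word]
    return [w for w, c in tl_lexicon.items() if c == concept]

print(tl_candidates("jie", lexicon_zh, lexicon_en))     # ['borrow', 'lend']
print(tl_candidates("borrow", lexicon_en, lexicon_zh))  # ['jie']
```

The asymmetry in the output – one Chinese word against two English candidates – is exactly the divergence in lexical semantics that, as argued above, rules out a direct word-to-word link between the sub-lexicons.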

2.2.4 Neurofunctional Control

There are two major issues in the study of language control in bilinguals. First, how does it work? Second, is the control language-dependent, or part of the human cognitive control function in general, known as executive control? Paradis’ (2004) neurofunctional model of bilingualism offers insightful answers to those questions which might correlate with how the cerebral systems operate in reality (I return to this in Sect. 2.4). According to Paradis, the brain organizes itself into neurofunctional systems. The (neurofunctional) system that processes language includes several major subsystems, from top to bottom: (a) the conceptualizer; (b) the mental lexicon; (c) the mental grammar (and parser); (d) the articulator. Each subsystem has its own structures and properties, which are not altered by being connected to and working together with the others (Paradis 2004, pp. 225, 227). This is exactly what is laid out in Fig. 2.2, where the Conceptual-Intentional-Contextual (CIC) System is the conceptualizer, as mentioned earlier in Sect. 2.2.3, and the Articulatory-Perceptual (AP) System, plus what it controls, i.e., sensor-motors, is the articulator. In between are the Grammar-Parser and Lexicon. In other words, Paradis’ model is in agreement with what we have laid out in Fig. 2.2. Additionally, successful language processing that meets communicative needs also involves other subsystems:


(a) metalinguistic knowledge of a language; (b) pragmatic conventions (that govern language use in context); (c) sensory perceptions and feelings that are fed to the conceptualizer; (d) encyclopaedic knowledge; and (e) episodic memory, which is also fed to the conceptualizer (Paradis 2004, p. 227). Presumably, (a), (b), (c) and (d) above are all stored as memory subsets, and their application involves computation by the thought systems. We will include metalinguistic knowledge and pragmatic relations in the discussion later, but not the other issues, due to the limitations of this paper.

For the bilingual speaker, the organization of those subsystems in the brain does not change simply because they may need to process two languages alternately instead of always one (and language pairs in bilinguals have no effect whatsoever on the said organization, either) (Paradis 2004, pp. 189–190). There is not a single mechanism in the monolingual brain that is not also available to the bilingual speaker (2004, pp. 225, 229). What changes is the representation of those language-related subsystems – lexicon, grammar, the articulator, metalinguistic knowledge, pragmatic rules, etc. While those subsystems are monolingually represented in unilinguals, the same subsystems are bilingually represented in the bilingual speaker.

Another neurofunctional system of the brain, independent of the above-mentioned language-processing system (and its subsystems), is the activation-inhibition mechanism, “physiological in nature and operative in all higher cognitive representations, whatever their domain” (Paradis 2004, p. 28). It activates a representation and inhibits neighbouring representations under certain conditions. For instance, the more frequently and more recently a word is used, the more automatically it is activated for selection. Conversely, the less frequently and less recently a word is used, the less likely it is to be activated, or the more automatically it is inhibited. Neurologically, activation requires a certain amount of positive neural impulses to reach the neural substrate of a representation (a word, a grammatical rule, etc.). The more often an item is used, the lower its activation threshold becomes; the reverse is true as well. This is known as the Activation Threshold Hypothesis (Paradis 2004, pp. 28–30).

When the unilingual speaker is motivated to speak, the motivation, externally driven by communication needs in a context, is internally (or neurologically) represented as neural impulses activating the systems. The thought systems will formulate “a mentalese” or a “message”, which is verbalized after encoding. Which word(s) are selected for encoding depends on the frequency and recency of use regulated by the activation-inhibition mechanism explained above. Cognitively, the word with the highest activation level is selected, because activation targets not just a single word but its lexical neighbourhood. In reality, the speaker can change his/her mind (an override of the system by conscious decisions of the speaker, at a high cognitive cost – something I will not discuss here). When the bilingual speaker verbalizes an utterance in one language, the same lexical process operates, and at the same time, presumably, the motivation to speak in one of the two languages at his/her disposal releases a good amount of positive neural impulses that not only activate the sub-lexicon of that language but also automatically inhibit that of the other, so as to avoid interference (Paradis 2004, p. 28).
Though it has been noted that lexical access in bilinguals can be language-nonselective – for instance, in speech perception, the incoming word(s) of one language activate those


relevant in the other with only limited regard to the context (De Groot 2011, pp. 177–180) – this is not to the extent of falsifying the activation-inhibition mechanism (De Groot 2011, p. 296). Language-nonselective lexical access might have been caused by deviant representations, a type of interference where lexical and/or grammatical elements are cross-represented between L1 and L2 (Paradis 2004, p. 188).

The activation-inhibition mechanism also applies to syntactic processing or parsing. The bilingual Grammar-Parser contains, theoretically, a set of universal grammatical rules for both L1 and L2, and two subsets of language-specific rules, for L1 and L2 respectively. Universal rules, such as the head-of-the-phrase principle, need to be activated regardless, simply because without them processing cannot be accomplished. In addition, we assume in general – in monolingual processing, for instance – that language-specific rules need to be activated, too, because what has been acquired of a specific language will always be there when that language is processed. Then, when the activation-inhibition mechanism is at work, only the sub-rule-system of one language is activated, while the sub-rule-system of the other is inhibited. As stated above, the activation-inhibition mechanism operates for unilinguals and bilinguals alike. But there is evidence that bilinguals appear to be worse than unilinguals at overcoming inhibition in performing both language and non-language tasks, thus supporting the activation-inhibition mechanism both in how it works and in its being part of the human cognitive control function in general (see Costa et al. 2000 on Catalan(L1)-Spanish(L2); Costa et al. 2008 on Catalan(L1)-Spanish(L2); Linck et al. 2008 on English(L1)-Spanish(L2); Misra et al. 2012 on Mandarin(L1)-English(L2); Prior 2012 on non-language tasks).

To summarize, speech production must start with a motivation to make an utterance. Such motivation will formulate “a mentalese” or “a message” in the thought systems by the activation of concepts. Items in the Language Faculty, words and rules, will be selected and applied to encode the conceptual message before it is verbalized via the Articulatory-Perceptual (AP) System. In speech perception, the incoming language input serves as the lexical feed, which is decoded via syntactic parsing in the Language Faculty and interpreted in the thought systems (presumably also by the activation of concepts). In bilinguals, any language-related system – lexicon, Grammar-Parser, the AP system – is bilingually represented.

Two key processes are also at work. First, memory, as one vital component of the Language Faculty, may supply items larger than words as encoded output (a phrase, a sentence or a whole chunk of language units), or as paired-ups for incoming language input, and therefore cause encoding or decoding to bypass the Grammar-Parser altogether. This is the so-called memory loop. Cognitively, processing via memory is more economical than via computation. Second, an activation-inhibition mechanism operates to the effect that the more often an item is used, the lower its activation threshold becomes, and that the motivation to process one language rather than the other will lower the activation threshold of the targeted language and raise that of the other. All processes, in either encoding or decoding, operate sequentially in rapid succession in a matter of


milliseconds in the brain where everything is connected to everything else (Paradis 2004, pp. 192, 225).
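To fix ideas, the Activation Threshold Hypothesis and the language-selection effect summarized above can be mimicked with a toy numerical model. This is our own illustrative sketch, not Paradis’ formalism; the threshold values and update rules are invented for the example:

```python
# Toy model of the Activation Threshold Hypothesis: use lowers an
# item's threshold; selecting one language lowers the thresholds of
# its items and raises those of the other language (inhibition).

thresholds = {  # (language, word) -> activation threshold (invented units)
    ("en", "mug"): 0.4,
    ("zh", "beizi"): 0.4,   # "beizi" is an illustrative Chinese word in pinyin
}

def use(lang, word):
    """Frequency/recency effect: each use lowers the item's threshold."""
    thresholds[(lang, word)] = max(0.1, thresholds[(lang, word)] - 0.1)

def select_language(target):
    """Motivation to speak one language: lower its thresholds and
    raise those of the non-targeted language."""
    for (lang, word) in thresholds:
        delta = -0.1 if lang == target else +0.2
        thresholds[(lang, word)] = min(1.0, max(0.1, thresholds[(lang, word)] + delta))

def activated(lang, word, impulse=0.5):
    """An item fires if the incoming impulse reaches its threshold."""
    return impulse >= thresholds[(lang, word)]

select_language("en")
print(activated("en", "mug"))    # True: targeted language, lowered threshold
print(activated("zh", "beizi"))  # False: non-targeted language, raised threshold
```

Crude as it is, the sketch captures the two regulating factors the chapter emphasizes: frequency and recency of use, and the speaker’s motivation to process one language rather than the other.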

2.3 Translating and Interpreting as Bilingual Processing

Having presented an integrated perspective on how language may be processed in the brain by both unilinguals and bilinguals, I now focus on a special case of bilingual processing: translating and interpreting, which requires two languages to be alternately processed as the input (i.e., the source language, SL) and the output (i.e., the target language, TL). In some extreme cases, such as simultaneous interpreting (SI), the ear-voice span, i.e., the time duration between input and output, is rather short, lagging 2–3 s on average, or about 4–5 words (Christoffels and De Groot 2005, p. 457). Within this narrow span, a chunk of the language input is held in short-term memory for decoding until the message extracted from the immediately preceding chunk has been recoded and verbalized in the target language. And the cycle goes on and on. It is a task that puts such strenuous demands on the cognitive systems that an untrained bilingual tries to avoid it, and even a trained professional cannot take it on for a long stretch of time without system overload (Paradis 1994b, p. 322; Christoffels and De Groot 2005, p. 456).
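The rolling decode-recode cycle within the ear-voice span can be pictured as a two-stage pipeline over incoming chunks. The sketch below is a deliberately simplified illustration of ours; the clause-sized chunking and the decode/recode stand-ins are assumptions for the example, not a processing model from the literature:

```python
# Toy pipeline for the SI cycle: while chunk n is held in short-term
# memory for decoding, the message from chunk n-1 is recoded and
# verbalized in the target language.
from collections import deque

def decode(chunk):
    return f"MESSAGE({chunk})"         # stand-in for SL parsing + interpretation

def recode(message):
    return f"TL-UTTERANCE({message})"  # stand-in for TL encoding + articulation

def interpret_stream(sl_chunks):
    stm = deque(maxlen=1)              # short-term buffer holding one chunk's message
    output = []
    for chunk in sl_chunks:
        if stm:                        # recode the previously decoded message
            output.append(recode(stm.popleft()))
        stm.append(decode(chunk))      # hold the current chunk's message
    if stm:                            # flush the final message
        output.append(recode(stm.popleft()))
    return output

# Clause-sized chunks, per the preference for clause units noted below.
print(interpret_stream(["clause 1", "clause 2", "clause 3"]))
```

The one-slot buffer is of course a caricature of short-term memory, but it makes visible why the cycle is so demanding: decoding, holding and recoding must be kept in lock-step, chunk after chunk.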

2.3.1 Related Cognitive Processes

Based on the integrated view presented above, we may now apply it to the translating or interpreting setting, and Fig. 2.2 can be modified as Fig. 2.4. In a nutshell, the source input starts with the sensor-motors, passes through decoding by the Articulatory-Perceptual (AP) System and the Language Faculty, and ends in the Conceptual-Intentional-Contextual (CIC) System, where the message extracted from the input is mediated into a target-compatible shape or form, i.e., one that complies with the TL community’s conceptualization of the world and with the translator/interpreter’s communicative intentions. Then, the mediated message is recoded in the TL, through the Language Faculty and the Articulatory-Perceptual (AP) System, and verbalized by the sensor-motors. As illustrated in Fig. 2.4, this loop of processing is like going bottom-up first with the source input and then coming top-down with the target output, and has thus been aptly dubbed vertical translation (De Groot 1997, pp. 29–32).

Specifically, the loop always starts with the source text/speech input. The sensor-motor systems pick up the incoming SL input in a sound/sign sequence (sometimes the input can be a mix of sound and sign, as in interpreting aided by sight-reading of a script or screen slide). The initial chunk in the sequence will be stored in short-term memory (STM), subject to capacity (about 1 sec via hearing), and then channeled to


Fig. 2.4 The memory-computation loop in translating/interpreting as bilingual processing. (The diagram connects Source Text/Speech and Target Text/Speech through the Articulatory-Perceptual System (sensor-motors), Short-term Memory, the Phonetic-Form Interface (SL- & TL-phonology+phonetics), Long-term Memory (SL- & TL-words and other items; SL- & TL-metalinguistic knowledge and pragmatics), the Grammar-Parser (SL- & TL-morphology+syntax), the Logical-Form Interface (SL- & TL-semantics) and the Conceptual-Intentional System (thought systems), all under Cognitive Control (activation-inhibition).)

long-term memory (LTM) and the Grammar-Parser at the same time. And the next chunk comes in, and so on. As stated earlier, the input serves as the lexical feed for the Grammar-Parser. Also, a clause is the preferred parsing unit in interpreting. In other words, the interpreter waits till most of the lexical items constituting a clause, particularly the verb, have been heard before producing the output. Let us assume that a clause is the basic parsing unit in translation, too. It could even be a sentence containing more than one clause, e.g., a matrix and a subordinate clause, or two coordinated clauses, since the translator is not constrained by the ear-voice span and can read the source text again and again till the parsing unit is complete. Cross-linguistically, any phrase that contains a verb and one or more arguments (subject and/or object(s)) is a clause, irrespective of whether the verb is tensed or not. Some languages have tense (a temporal concept – past, present and future – associated with verbs relative to the point of speaking), like English, but others do not, like Chinese, which has other means to express the concept of time.

Lexically speaking, since our translator/interpreter is in theory the ideal bilingual speaker, there is hardly any word in the source input that he/she does not already know. This means that an incoming source language (SL) word will always register


with an entry in long-term memory (LTM). At this stage, three scenarios arise: (a) the parsing unit is a memory item itself, i.e., it is memorized as a structured expression as a whole by the translator/interpreter; (b) part of the parsing unit is a memory item, also a structured expression, a noun phrase for instance; (c) the parsing unit is not a memory item, in part or in whole. Depending on the scenario, the processing routes from this point onwards may vary. I will first discuss scenario (c) and then (a) and (b).

Scenario (c) means that the incoming source lexical feed goes to the Grammar-Parser for structural reconstruction, i.e., to be parsed into a structured expression, after being phonetically filtered through the Articulatory-Perceptual (AP) System and lexically registered with the SL part of the mental lexicon. Or we may say that the incoming source expression is syntactically decoded after being phonetically and lexically decoded first. After the structural reconstruction, the source input, now in the form of a structured expression (in the Chomskian sense), goes to the Conceptual-Intentional-Contextual (CIC) System for interpretation. After that, it becomes a piece of “pure message”, or a piece of mentalese so to speak. This message or piece of mentalese will then be recoded and verbalized as the target language (TL) output.

Theoretically, it is just as simple as that. In reality, however, a number of factors play a part in the recoding process. Conceptually, in the thought systems of the translator/interpreter, there is first of all what is conceptually shared by all human beings and across speech communities: concepts such as entity, type, time, space, place, causation, resemblance and those relating to human feelings (hungry, cold, hot, sleepy, ill, etc.). So, the concept of “rising at eight o’clock in the morning” is probably conveyable in every language no matter how differently it may be encoded across languages.3 Then, there is what is unique to each community based on its language and culture. For example, the concept of “killing two birds with one stone” in the English-speaking community does not exist in the Chinese-speaking community, which instead expresses a similar concept of “shooting two eagles with one arrow”. The latter may be said to be conceptually subcategorized in relation to the former, since “shooting” could be a subcategory of “killing”, “eagles” a subcategory of “birds”, and “stone” and “arrow” both subcategories of “weapon”, probably at the time when those concepts were formulated. Though they are unique to the English and Chinese communities respectively, in the sense that one never conveys “the concept” belonging to the other, they are also conceptually related in some ways. That is why one community can also guess what the other conveys in such situations. In the course of translating or interpreting, it is usually not a problem to recode what is conceptually shared across the SL and the TL. The problem invariably arises when the translator/interpreter tries to recode

3 It may be that even for concepts that are shared across speech communities, they may be conceptually connected to different things for people speaking one language rather than another. For instance, people living within the polar circles who are no strangers to “rising in the morning in the darkness of long polar nights” may conceptually perceive “rising at eight o’clock in the morning” differently from those living in the equatorial regions.
Nevertheless, “rising at eight o’clock in the morning” is a commonly perceivable concept for people across cultures.


what is unique to the source community and hence alien to the target community. Such recoding, descriptively dubbed the transfer of “culture-specific items (CSIs) from the source to the target text”, has often come up for discussion in the translation literature, but with a point missing (e.g., Baker 1992; Newmark 1980; Nida 1964; Reiss 1971). The point is that what is unique to the source community creates a conceptual barrier for the target community, which the translator/interpreter, holding dual membership of both communities, has to overcome by engaging in conceptual mediation between the two in his/her Conceptual-Intentional-Contextual (CIC) System (He 2004, 2009, 2011).

Conceptual mediation is also driven by the intentions of the translator/interpreter. The communication needs that drove the SL author or speaker to produce the source text or speech may not be the same as those driving the translator/interpreter, who may re-interpret those communication needs, consciously or subconsciously, or even conveniently, in accordance with the context in which he/she operates (e.g., the purpose or person for which/whom he/she provides the service of translating or interpreting). This diversion of intent on the part of the translator/interpreter also causes conceptual mediation between the source input and the target output.

Linguistically, the SL and the TL systems are represented differently in all aspects of language, such as lexical semantics, morphology, syntax, phonology (including prosody) and phonetics. Even when a concept is shared across the source and the target communities, the encoding tools, SL or TL, are not cast out of the same mould. For example, “making someone the king” can be expressed as “to enthrone someone” in English, but has to be expressed differently in Chinese, as in “to let someone be the king” or “to let/make someone ascend to the throne”. One can also say those things in English, but simply not “to enthrone someone” in Chinese. Even a daily human-activity concept like “going to sleep” has to be represented differently across languages. While the phrase “to go to bed” is the most commonly used in English, it is never said that way in Chinese, which simply says “to sleep” or “to go to sleep”. In other words, the translator/interpreter has to juggle the linguistic means at his/her disposal and decide on a recoding choice. In interpreting, where the ear-voice span is short, he/she has to decide quickly, often subconsciously.

Pragmatically, there are conventions that govern language use in context, and they differ from one language to another. Those conventions interfere with the recoding decisions at all levels (lexical-semantic, morphological, syntactic, phonological and prosodic, even phonetic; e.g., “one” can be pronounced as either [yi:] or [yau] in Chinese, the latter being used in professional telecommunications such as calling out flight numbers in air traffic control). Pragmatic conventions are shaped and refined by contextual and cultural settings and school us in “how to say what to whom, when and where”. They can be procedural in nature in the sense that, once learned, they apply implicitly, like swimming or driving a car. In translating and interpreting, the conventions of the SL and the TL both play a part. For instance, Chinese people often greet one another by asking “Have you eaten?”, versus “Hello” or “How are you?” among English-speaking people. Thus, whatever concept is distilled from the Chinese


greetings can be recoded as those in English, and vice versa. This does not mean that “Hello” or “How are you?” is never said in Chinese or that “Have you eaten?” is never asked in English. It simply means that pragmatic conventions interfere with recoding. The fundamental point about how pragmatics interacts with the linguistic encoding process is that lexical representations and grammatical rules are what makes encoding possible, and that they are limited either in choice or in the number of rules in a definitive set (either universal or language-specific); pragmatic conventions, which the speaker acquires through life experience and stores as memory subsets, simply tell the Language Faculty (a) which lexical items to select and (b) which grammatical rules to apply, so as to produce a piece of encoding that is appropriate to the context the encoding is meant for. This basic principle of pragmatics is, unfortunately, often not understood, and the role of pragmatics is misconceived; it is therefore vitally important to clarify it here.

Metalinguistically, the explicit knowledge that the translator/interpreter possesses about the SL and the TL systems also interferes with the recoding decisions. Unlike the use of the innate Language Faculty, which is implicit and automatic, without conscious effort or awareness, the use of metalinguistic knowledge is consciously controlled, with full awareness of the rules that are applied at various levels (e.g., morphological or syntactic) (Paradis 2004, p. 222). The translator/interpreter may use it to analyze the grammatical structures of the source input and monitor those of the target output before verbalization, and may consciously adjust the balance between the two, particularly in written translation.

Cognitively, particularly in interpreting, due to the strenuous demand on the system, it is not guaranteed that a sufficient array of cognitive resources will be brought to bear on the task on time and all the time. For instance, due to fatigue, lapses of attention, environmental distractions, etc., short-term episodic memory will be affected (temporarily or for longer periods). Or the (whole verbal communicative) system cannot sustain itself on the short ear-voice span for long without overloading and breaking down. All these may interfere with the recoding process.

It takes a concerted operation involving all the aspects discussed above to deliver the outcome of the recoding process, i.e., the target text or speech. One aspect which has not received much attention previously is the conscious use of metalinguistic knowledge for recoding. This is observed in translation more than in interpreting, because the translator is not constrained by the ear-voice span and thus has more time than the interpreter to consciously apply any explicit bilingual metalinguistic knowledge at his/her disposal to adjust the balance between the input and the output. Since such adjustment is mainly based on grammatical structures, it is also called “structure-routed” processing (Paradis 1994a, p. 407, b p. 332).
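The basic principle of pragmatics stated above can be illustrated with a minimal sketch (my illustration, not the author’s formalism; the greeting options and context labels are invented): the lexicon and grammar license a set of well-formed encodings, and pragmatic conventions, stored as memory subsets, select which one to apply in a given context:

LICENSED_GREETINGS = {
    # all options are grammatically well-formed; pragmatics decides which is apt
    "Chinese": {"casual-acquaintance": "Have you eaten?",
                "formal": "How do you do?"},
    "English": {"casual-acquaintance": "How are you?",
                "formal": "How do you do?"},
}

def encode_greeting(language: str, context: str) -> str:
    """Pragmatic convention (the context key) picks among the encodings
    that the grammar makes possible."""
    return LICENSED_GREETINGS[language][context]

# Recoding a Chinese casual greeting into English applies the TL convention
# rather than transferring the SL wording literally:
print(encode_greeting("English", "casual-acquaintance"))  # How are you?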


2.3.2 The Processing Economy Hypothesis (PEH)

The notion of processing economy was first proposed in Chomsky (1991), which stipulates that a structured expression is formed by minimal computation in terms of rule applications, otherwise it is ill-formed, since any rule application by the Language Faculty is non-redundant (Chomsky 1995, pp. 2, 168–169). In other words, computational economy in linguistic encoding is a system requirement. In Pinker (1994, pp. 427–429; 1999, pp. 300–307), processing economy is taken to a new level where, as mentioned before, the operational tenet is that memory applies a priori and computation takes over whenever memory fails, the reason being that processing via memory is simply less costly (i.e., more economical) than computation. This is simply a natural law governing digital computation and analog memories of any kind (Pinker 1999, pp. 300–306). The fact that the Language Faculty consists of computational and memory components is not an accident but a reflection of the nature of the universe, of which human systems are just a part (Pinker 1999). In other words, the brain is designed to economize its operations. For example, when a particular structured expression (representing a specific meaning) is formed, it is formed by minimal computation, or via memory, or by optimal interaction between the two. While Chomsky’s notion is only local, Pinker’s is global. The implications are profound for the study of language processing.

There are two crucial aspects. Firstly, processing is economized in that memory precedes computation wherever possible, and secondly, computations are themselves economized. The second aspect applies when grammatical structures are processed. For example, because minimal computation applies to all structures, processing complex structures is presumably not as economical as processing simpler ones.4

In the native monolingual situation, the issue of economy is mostly irrelevant for research because, whatever output the speaker produces, it is taken as the result of a process that has met all the necessary conditions for processing, the economy condition included (a vital theorem for Chomsky), without being influenced by factors like source-based concept mediation, explicit bilingual structural knowledge, bilingual pairing, etc., which inevitably exist in the process of translating/interpreting. In contrast, the issue of processing economy is real for the study of translating/interpreting and presents itself as a tangible research paradigm.

The exact workings of memory-computation mutual complementation and compensation in the brain are a complex and complicated issue of which we do not understand much at present. Nor do we know much about how exactly this mutual complementation and compensation is conditioned by processing economy (cf. He 2006a). There are, however, a number of areas in the process of translating/interpreting which we believe may shed some light on processing economy, among

4 We do not concern ourselves with the predictably many other possibilities, e.g., that the mentalese extracted from incoming complex structures is encoded in a series of outgoing simple structures. We are just presenting a frame of theoretical reference relevant to our discussion.


those, as mentioned earlier: concept mediation, explicit bilingual structural knowledge and bilingual pairing. Let us look at them one by one.

Firstly, translating/interpreting relies more often than not on conceptual mediation to formulate a piece of target-compatible mentalese based on the source input. The more conceptual mediation is engaged for this task, the more computation there is, and consequently the more costly the whole process will be. One exemplary case is when a culture-specific item (e.g., an idiom or metaphor) from the source input is translated, where the item invariably poses a conceptual barrier between the source and the target thought systems, triggering subsequent conceptual mediation. In producing a written translation, the translator may take time to produce a semantic equivalent to it by computation if he/she cannot come up with a home-grown target replacement from memory, or may simply choose not to translate it. In interpreting, particularly simultaneous interpreting, the interpreter does not have the luxury of time. The precious seconds slip by and a gap is left in production when he/she has no memory pairing to resort to and no time to produce a semantic or contextual equivalent by computation. However, if he/she does have a memory-paired target item to offer, it will be uttered quickly (assuming that his/her other cognitive functions are working properly, e.g., no lapse of concentration, no distraction or fatigue, etc.). As mentioned before, the ability of an interpreter to produce the target speech via SL-TL memory pairing is known as his/her “cognitive signature” (Paradis 1994a). Such situations thus provide a cognitive window for us to look into the inner workings of memory-computation mutual complementation and compensation in the brain.

Secondly, as is well observed in translation, a target text or part of it may appear grammatically similar to the source text, technically known as being “transcoded” from its SL counterpart.5 In terms of processing, “transcoding” is believed to result from so-called “structure-routed” recoding, namely, the TL structured expression is constructed by following the same or similar grammatical structure(s) as the SL. This is accomplished with the help of explicit knowledge of the SL-TL structures from declarative memory (part of LTM) (Paradis 1994b, pp. 319–345). Compared with “non-structure-routed” recoding, which is based on the outcome of source-message-based conceptual mediation, “transcoding” has the advantage of short-circuiting the conceptual mediation process. Namely, the translator uses his/her explicit knowledge of the SL-TL structures to produce a target text which may or may not be an accurate semantic representation of the source message. Nevertheless, the linguistic goal of the bilingual processing is achieved. In this sense, “transcoding” is computationally less costly than “non-structure-routed” recoding. An exemplary case is once again when an idiom or metaphor is translated. It is often observed to be transcoded, e.g., the Chinese idiom “ke zhou qiu jian” being rendered as “carve the boat and/to seek the sword”, leaving the reader guessing what that image really means. The translator has the option of footnoting an

5 This may happen more between closely related languages (e.g., the Romance languages), as Jakobson (1957, pp. 232–239) and Catford (1965, pp. 28–29) claimed.


annotation: “A man dropped his sword from a boat. He carved the sideboard of the boat where the sword went into the river and jumped down to seek it at the riverbed when the boat docked. An idiom for the extremely single-minded.” In interpreting, however, the interpreter has no time for image annotation or semantic explanation. Failing to come up with a memory-paired target item, his/her option is either to leave a production gap or to transcode if he/she is quick enough. In other words, when memory pairing fails, producing a semantic replacement for a culture-specific item by computation is rarely an option in interpreting, but transcoding may be. This implies that transcoding is probably processing-wise more economical than “non-structure-routed” recoding. Even in translation, where time is normally not a crucial factor, the across-the-board patterns displayed by independently produced multiple translations of the same source text show that transcoding is applied to a significant extent in both culture-specific and non-culture-specific texts, suggesting that transcoding may be a cognitively preferred recoding route, possibly due to being more economical than “non-structure-routed” recoding. It also implies that the more explicit bilingual structural knowledge the translator possesses, the more extensively it is applied cognitively.

Thirdly, bilingual pairing is most likely to be observed in interpreting, where the ear-voice span is acutely limited in time. In comparison, pairing is difficult to ascertain in translation, for there is no way of knowing how much time the translator spent on the target production. By definition, pairing is memory-based between the source input and the target output, usually lexical and phrasal.6 This brings us back to scenarios (a) and (b): the incoming parsing unit is either a memory item as a whole or in part. In monolingual processing, if the incoming unit is a memory item, it means that it is memorized as a structured expression as a whole by the speaker. Technically, the incoming unit activates a matched item in long-term memory (LTM) and, once matched, the memory item goes to the Conceptual-Intentional-Contextual (CIC) System for interpretation. In translating/interpreting, moreover, the activation is assumed to be twofold: the incoming unit activates not only a matched item in the SL but also an associated one in the TL. In other words, the translator/interpreter has memorized not only the incoming SL unit but also its associated TL counterpart. This memorized SL item and its associated TL counterpart in LTM are thus called a memory pair. For instance, many Chinese-English interpreters can verbalize the English phrase in (b) below as soon as they hear the Chinese one in (a), and vice versa:

(a) you Zhongguo tese de shehuizhuyi
    [(lit.) have China characteristic’s socialism]
(b) socialism with Chinese characteristics

6 Memory-paired decoding-recoding of complex structures may or may not be possible, subject to research. Veteran professional SI interpreters we work with testify that it actually happens, but studies are needed; results from case studies on translation are far from conclusive (e.g., He 2011, pp. 77–90).
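The operational tenet behind all three areas can be summarized in a minimal sketch, on my own assumptions (the pair table, the mode switch and the placeholder routes are invented for illustration): memory applies a priori, and computation, whether conceptual mediation or transcoding, takes over only when no SL-TL pair is found:

MEMORY_PAIRS = {
    "you Zhongguo tese de shehuizhuyi": "socialism with Chinese characteristics",
}

def conceptually_mediate(sl_unit: str) -> str:
    # placeholder for the costly route via the CIC System
    return f"<mediated rendering of: {sl_unit}>"

def transcode(sl_unit: str) -> str:
    # placeholder for structure-routed recoding (same structure, TL words)
    return f"<transcoded rendering of: {sl_unit}>"

def recode(sl_unit: str, mode: str = "translation") -> str:
    # 1. Memory applies a priori: a stored SL-TL pair is retrieved directly.
    if sl_unit in MEMORY_PAIRS:
        return MEMORY_PAIRS[sl_unit]
    # 2. Memory fails: computation takes over. In translation there is time
    #    for full conceptual mediation; in (simultaneous) interpreting the
    #    ear-voice span may only allow transcoding, or force a production gap.
    if mode == "translation":
        return conceptually_mediate(sl_unit)   # costly but thorough
    return transcode(sl_unit)                  # cheaper, structure-routed

print(recode("you Zhongguo tese de shehuizhuyi"))       # memory pair: instant
print(recode("ke zhou qiu jian", mode="interpreting"))  # no pair: transcoded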


>1 s. pause may have been owing to editing delay rather than to cognitive uncertainty about the solution. In such cases it seems that processing information may be more difficult to retrieve if such fragments are isolated in separate segments.

With the exceptions mentioned, but still with a 1 s. segment boundary criterion as the basic segmentation principle, we now arrive at a result with seventeen segments. Segment 5 is fragmentary and could be included in segment 4, but it is also possible (as done here) to see it as one of four revision steps. Accepting the two exceptions mentioned (in segments 2 and 12) and occurrences of intra-segmental pauses >1 s. where only cursor navigation keystrokes intervene between these pauses and segment boundary pauses, we get the following 17 segments (see Fig. 4.3).

The difference between a hand-crafted segmentation such as the above and an automatic segmentation, e.g. by a 1 s. boundary criterion, is easily seen. Disregarding the first (orientation) segment, segment two would be broken into two fragments, a very small fragment ‘Kr’ and a very large fragment ‘ankenschwester des Todes’. Intuitively, it makes better sense to study the two fragments together as part of one whole, despite the intervening >1 s. interval. This allows us to see parallel behaviour in the typing of segments two and three, although the interval after ‘w’ in segment three is shorter than 1 s. In segment four, automatic segmentation would yield one segment with four backspaces only. To interpret backspaces, they need to be connected either with what came before or after


1. [Start][•01.530][▼][▲][•04.415][▼][▲][•00.951][Return]
2. [•02.527]Kr[•03.635]ankenschwester•des•Todes•
3. [•02.011]w[•00.913]ird•zur•vierfacher•
4. [•01.372]◄◄◄◄[•01.794]vierfach•[•00.578]lebens[•00.655]lang•[•00.967]veruzr◄◄rteilt[•00.562][Return]
5. [•02.028]◄◄◄◄änglich
6. [•02.449]◄[•00.733]fach•zu•Lä[•00.780]◄ebenslänglich•verurteilt[•00.858][Return]
7. [•02.309]D[•00.826]ie•K[•00.717]rank[•00.967]enschwester•Colin•
8. [•18.876][▼][▲][▼][▲][•00.842]Krankenpfleger•[•01.576][▼][•00.686][▲][▼][▲][▼][▲][•01.186][▼][▲][▼][▲]
9. [•01.778]◄◄◄er•Krankenpfleger•[•00.951]◄[•00.609]◄◄◄◄◄◄im•Krank[•00.842]enhaus•angestellt◄e[•00.530]◄te•Krankenpfle[•00.578]ger•Colin•Norris•[•00.624]wurde•
10. [•02.402]für•den•Mord•an•vier•seiner•[•00.671]Patienten•heute•
11. [•04.259]zu[•00.951]m•lebensöl[•00.749]◄◄ö[•00.920]◄längliche[•00.640]m•Ge[•00.531]◄◄◄◄r[•00.655]←
12. [•01.248]←→→→→→[•00.665]◄•einer[•01.575]→→◄n•Gefängnisstrafe•verurteilt•
13. [•01.872]Der•[•00.547]32–jährige•[•00.562]Norris•aus•Glasgow•tötete•die•vier•Frauen•[•00.594]2002,•indem•er•ihnen•
14. [•05.397]große•[•00.686]Mengen•an•Schlafmedikamenten•verabreichte.•
15. [•01.529]Gestern•wurde•[•00.874]er•[•00.944]des•vierfachen•Morde
16. [•01.560]◄◄◄[•00.905]in•Folge•eines•[•00.608]la
17. [•04.493]◄◄nach•lange[•00.843]m•F[Shift+Back]Ver◄◄[Shift+Back]Gerichtsverfahren•[•00.983]des•vierfachen•Mordes•[•00.765]schg[•00.702]◄uldig•gesprochen.•

Fig. 4.3 Edited Translog representation with text divided into 17 segments by a modified 1 s. segment boundary criterion. (In this representation typing pauses shorter than 500 ms. have been deleted)

so that they become part of a narrative or history. Only if we see what was deleted and what took its place can we begin to interpret what motivated the deletion. So they are better seen as part of a segment rather than as a separate segment. The same applies to what we find in segment eight, where there are two intervals >1 s. Automatic segmentation would create two separate segments with only cursor movements. As such segments need to be contextualized to be interpretable, they are best seen as part of and connected to their immediate context, either by being included in the segment before (as done here, in one, four, and eight) or after. Finally, in segment twelve, there seems to be a straight ‘cognitive’ line from the arrow movements targeting the deletion of the ‘m’ in ‘zum’ to the typing of ‘einer’, indicating that the solution now on the translator’s mind was a feminine noun (‘Gefängnisstrafe’ [prison sentence]). The >1 s. interval which intervened between


the deletion of ‘m’ and the typing of ‘Gefängnisstrafe’ is likely to have been caused by the need to review what had just been typed and to replace the final ‘m’ with an ‘n’ in the intervening adjective. Hand-crafted segmentation as illustrated here is incapable of dealing with the huge volumes of data that can be handled if automatized procedures are used, but there is a place for both. Manual micro-labour can attend more closely to local, individual processing phenomena and differences, but cannot convincingly subject observations to large-volume empirical testing and therefore has problems with generalisation of observations. Automatic procedures, by contrast, may ride somewhat roughshod over individual differences and even data recording inaccuracies, but have enormous advantages of volume when it comes to identifying commonalities. When combined, manual study of individual cases can be used to adjust algorithms and increase the accuracy of large-scale corpus analysis, and such analysis can in turn be used to test whether tentative hypotheses developed from case studies find support in large-volume analysis. The next section will offer a segment-by-segment analysis of each of the seventeen segments identified on the basis of keystroke intervals, with the adjustments just mentioned.
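The automatic procedure discussed above can be made concrete with a minimal sketch (my reconstruction, not the actual Translog tooling; the sample log is invented, loosely echoing the ‘Kr’/‘ankenschwester’ split):

def segment(events, boundary_ms=1000):
    """events: list of (timestamp_ms, key) pairs in chronological order.
    Splits the log into segments wherever the interval between two
    keystrokes exceeds the boundary criterion (default 1 s)."""
    segments, current = [], []
    prev_t = None
    for t, key in events:
        if prev_t is not None and t - prev_t > boundary_ms:
            segments.append(current)   # pause > criterion: segment boundary
            current = []
        current.append(key)
        prev_t = t
    if current:
        segments.append(current)
    return segments

log = [(0, "K"), (500, "r"), (4135, "a"), (4400, "n"), (4650, "k"),
       (4900, "e"), (5150, "n")]       # 3635 ms pause before "a"
print(segment(log))  # [['K', 'r'], ['a', 'n', 'k', 'e', 'n']]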

4.4 Segment-by-Segment Keystroke Analysis

1. [Start][•01.530][▼][▲][•04.415][▼][▲][•00.951][Return]

This segment of the log records the time and keyboard action from the start until the first text production key was struck after the appearance of the ST on the screen. This opening phase has been labelled ‘initial orientation’ (Jakobsen 2002, p. 90) on the assumption that the time from the moment the source text is first displayed (and logging initiated) until typing begins is likely to have been spent on some kind of initial orientation in the text. By definition this initial phase may have mouse clicks and other cursor navigation keystrokes, but cannot have any text production keystrokes. From a keystroke analysis point of view, it is necessarily treated as a single segment, regardless of any occurrences of pauses between navigation keystrokes or mouse clicks. The initial orientation section here took up just under 10 s (9703 ms). From the keylog file we can only guess at how the translator spent these first seconds. Ten seconds is enough for an averagely skilled reader to read 40–60 words, and therefore potentially enough for the translator to have read all the text examined here, but since we are not considering gaze data yet and only have a couple of up and down cursor movements in the data, we cannot know how much, if anything, was read. One first


question regarding the orientation phase we would like gaze data to give an answer to, therefore, is simply this: What was read by the translator during initial orientation (and how)? The [Return] keystroke indicates the end of the initial orientation phase and the translator’s readiness to initiate the typing of the target text. The first segment of the drafting phase began with very slow, hesitant typing:

2. [•02.527]K[•00.500]r[•03.635]a[•00.312]n[•00.250]k[•00.296]e[•00.325]n[•00.375]schwester•des•[•00.249]Todes•

The intervals recorded before the typing of ‘K’, ‘r’, and ‘a’ (the first letters in ‘Krankenschwester’ [nurse]) can be viewed as part of or overlapping with initial orientation. A fair guess is that these intervals were still spent on initial orientation, possibly now focused on the immediate segment to be translated (‘Killer nurse’). Gaze data could support (or invalidate) this guess by showing reading activity while ‘K’ and ‘r’ were very hesitantly typed. A tentative general hypothesis about eye-key coordination might be that, typically, a new segment will be read for comprehension first. During the typing of the translation there may be a need for the translator to refer back visually to the part of the ST segment to which a translation is about to be typed, to refresh the memory of the exact ST expression. Such speculation could be supported (or refuted) if gaze data provide an answer to the following question: What was fixated in the interval immediately prior to the onset of the typing of (a portion of) a new segment? As can be seen in the broken-up typing of the first TT word, the difference between the typing speed of ‘Kranken’ [sick] and the typing of the second constituent of the compound noun (‘schwester’ [sister]) was very striking. While there were intervals of at least 250 ms between each keystroke in the typing of ‘Kranken’, all intervals between keystrokes in the typing of ‘schwester’ were shorter than the 245 ms threshold. The nine characters in this second constituent were typed in 1092 ms, i.e. with an average speed of eight characters per second, which tells us that the translator was capable of typing at this speed, provided that she had arrived at certainty about what to write (cf. Angelone 2010, p. 17). This was apparently not the case when ‘Kranken’ was typed, as the typing speed for this first half of the word was almost eight times slower than for the second part. The very long interval of more than three and a half seconds before the typing of ‘a’ is particularly intriguing. Automatic segmentation (even by a 2 s. boundary criterion) would make ‘Kr’ into a separate but fragmentary and anomalous segment. But all we can say from the keystroke data is that something exceptional, perhaps related to uncertainty, may have been going on here, which leads to a general question concerning gaze and


uncertainty: What evidence (if any) can gaze behaviour offer to support interpretation of typing hesitation as an indication of uncertainty?

3. [•02.011]w[•00.913]ird•zur•[•00.312]vierfacher•

Between the noun phrase in the headline (‘Krankenschwester des Todes’ (‘Killer nurse’)) and the verb phrase occurred a typing pause of more than 2 s. A ‘w’ was typed and followed by an interval of almost 1 s., suggesting (as with the typing of ‘Kr’ above) that processing of the verb phrase (including the rest of the headline) had not been fully completed or still remained uncertain at this point. The ungrammaticality of ‘zur vierfacher’ [to the fourfold] suggests continuing uncertainty about how to proceed. The feminine gender indicated by the form ‘zur’ may reflect that a feminine noun like ‘Strafe’ [punishment] or ‘Haftstrafe’ [prison sentence] was probably (vaguely) active at the time.

4. [•01.372]◄◄◄◄[•01.794]vierfach•[•00.578]lebens[•00.655]lang•[•00.967]veruzr[•00.484]◄◄r[•00.249]teilt[•00.562][Return]

Segment 4 supports the suspicion that the translator was uncertain about how to continue. After 1372 ms of consideration, the translator deleted ‘zur vierfacher’ and a further 1794 ms later typed ‘vierfach lebenslang verurteilt’ [(lit.) fourfold lifelong sentenced]. The [Return] at the end signals readiness to go on to a new segment. It is noteworthy that, with the exception of the typing of the very first compound constituent (‘Kranken’), the uncertainty pause after ‘w’ in ‘wird’ and an accidental u/z typo, no within-word (constituent) interval longer than 245 ms occurred. This strongly suggests that graphic words have a privileged cognitive status.

5. [•02.028]◄◄◄◄ä[•00.390]nglich

6. [•02.449]◄[•00.733]fach•zu•Lä[•00.780]◄eben[•00.374]s[•00.421]l[•00.312]ä[•00.281]nglich•verurteilt[•00.858][Return]

Segments five and six show that the translator was still not completely happy with the translation of ‘receives four life sentences’. After the abortive ‘wird zur vierfacher’ followed the complete (but somewhat awkward) ‘wird vierfach lebenslang verurteilt’. After yet another 2 s. interval, a third attempt was started with deletion of ‘ang verurteilt’ followed by insertion of ‘änglich’, making the third solution ‘wird lebenslänglich’. Finally, after a further interval of two and a half seconds, a fourth solution was arrived at that turned out to be durable, i.e. a solution


which appeared to satisfy the translator and was not subsequently changed. After deleting ‘fach lebenslänglich’, the translator retyped ‘fach’ followed by ‘zu Lebenslänglich verurteilt’, bringing the translation of the verb phrase (predicate) portion of the headline finally in place as: ‘wird vierfach zu Lebenslänglich verurteilt’ (‘receives four life sentences’). The progression across the four solutions illustrates the way in which a translation is sometimes hammered out. Structurally, the translation remained the same in that all four solutions revolved around a passive construction of the predicate: ‘wird . . . (zu) . . . verurteilt’. The translator struggled both with formulating and integrating a translation of the numeral ‘four’ and the adjectival use of ‘life’ in the ST compound construction ‘life sentences’ using various not wholly successful adverbial solutions. When the adverbial use of ‘lebenslänglich’ was shifted to the nominal construction ‘zu Lebenslänglich’, the translation of the headline fell into place after 46 seconds.

7. [•02.309]D[•00.826]ie[•00.250]•[•00.343]K[•00.717]rank[•00.967]enschwester•Colin•

The translator next embarked on translating the main body of text, beginning with ‘Hospital nurse Colin’. In the headline ‘nurse’ was translated as ‘Krankenschwester’, in agreement with the cultural default assumption that a nurse is a female individual. However, in English the word nurse is unmarked for gender, whereas in German ‘Krankenschwester’ [(lit.) sick sister] can only be used in reference to female individuals. So, as the nurse’s name in the text is a man’s name (Colin), ‘Krankenschwester’ will not do. Typing here at the beginning of the main text was relatively slow, suggesting either uncertainty or that new information was being taken in at the same time. Gaze data should show if the typing of segment seven was accompanied by above-normal visual orientation, e.g. to overcome uncertainty concerning obligatory maleness of an individual named Colin. After a very long interval (for this translator) of almost 19 s., the translator arrived at the conclusion that Colin was the name of a male nurse, so the headline’s ‘Krankenschwester’ was changed to ‘Krankenpfleger’.

8. [•18.876][▼][▲][▼][▲][•00.842]Kranken[•00.437]pfleger•[•01.576][▼][•00.686][▲][•00.390][▼][▲][▼][▲][•01.186][▼][•00.422][▲][•00.468][▼][▲]

We can infer from the keystrokes that the long 19 s. interval was no doubt related to the discovery that ‘Colin’ must be the name of a male nurse, but the interval does not permit us to infer how the translator arrived at this conclusion. This leads to yet another question: Does gaze data help elucidate a translator’s problem solving process? Although an interval > 1 s. (1186 ms) was recorded inside segment eight (cf. also segments one and four), it was not counted as a segment boundary because only


navigation keystrokes occurred after it. It is likely that there was considerable gaze and cognitive activity during the interval, but this was most probably targeted at items in the same segment. The change to ‘Krankenpfleger’ was also made in the first sentence of the main text:

9. [•01.778]◄◄◄er•Kranken[•00.421]pf[•00.453]leger•[•00.951]◄[•00.609]◄◄◄◄◄◄im•[•00.406]Krank[•00.842]enhaus•angestellt◄e[•00.530]◄te•[•00.390]Krank[•00.363]enpfle[•00.578]ger•Colin•Norris•[•00.624]w[•00.406]urde•

Very quickly after making the change to ‘Krankenpfleger’, however, the translator decided that ‘im Krankenhaus angestellte’ [(lit.) in the hospital employed] should come between the definite article ‘Der’ and ‘Krankenpfleger’ to conform with typical German information structure and to make it explicit that the nurse was employed in a hospital. This decision to change the information structure coincided with the longest intra-segment interval (951 ms). Otherwise we note that intervals (> 245 ms) quite regularly occurred between words or at (or near) morpheme or syllable boundaries, and not within words (word stems).

10. [•02.402]für•den•Mord•an•vier•seiner•[•00.671]Patienten[•00.296]•[•00.402]heute•

A 2.4 s. interval occurred before the predicate phrase. Here, translation began to run more smoothly. This is the first segment without any immediate online corrections. The interval before ‘Patienten’ could have resulted from a need to refresh the memory of what the ST word was, which would predict time spent on refixating ‘patients’ in the ST. Alternatively, given the continued hesitations between ‘Patienten’ and ‘heute’ (‘today’), all of this hesitation could reflect uncertainty about where to place ‘heute’ in the sentence.

11. [•04.259]zu[•00.951]m•lebens[•00.312]öl[•00.749]◄◄ö[•00.920]◄[•00.328]lä[•00.249]ngliche[•00.640]m•Ge[•00.531]◄◄◄◄[•00.297]r[•00.655]←

The long typing intervals at the start of segment eleven seem to indicate that translating ‘imprisoned for life’ generated problems of a similar kind to those encountered in segments three to five, although less severe. The German translation initially sought by the translator was a construction similar to English ‘sentenced to lifelong ??’, where ‘??’ stood for a masculine or neuter noun.


12. [•01.248]←→→→→→[•00.665]◄•einer[•01.575]→→[•00.468]◄n•Gef[•00.499]ä[•00.281]ngnisstrafe•verurteilt[•00.343].•

The problem was resolved when the translator thought of the feminine noun ‘Gefängnisstrafe’ [(lit.) ‘prison punishment’] to replace ‘??’ and quickly made the necessary changes (deleting the ‘m’ in ‘zum’, inserting ‘einer’, then, perhaps after checking the new solution in [•01.575], changing ‘m’ to a final ‘n’ in ‘lebenslänglichen’). As was the case in segment two, automatic application of a 1 s. boundary criterion would split this segment into two, a fragmentary ‘einer’ and an equally fragmentary ‘n Gefängnisstrafe verurteilt’. The motivation for typing ‘einer’ follows from the decision to use a feminine noun (‘Gefängnisstrafe’) for the translation represented in the second half of the segment. The change of ‘m’ to ‘n’, similarly, must be seen in the context of segment twelve, which is itself a fragmentary representation of what was being processed (indicated by the cursor navigation across portions of the processed segment). To make fuller sense of the content of segment twelve, it also has to be viewed in the context of the processing represented in segments ten and eleven.

13. [•01.872]Der•[•00.547]32[•00.436]–jährige•[•00.562]Norris•aus•Glasgow•[•00.485]tötete•die•vier•Frauen•[•00.594]2002[•00.499],•indem•er•ihnen•

Like segment ten, this segment was also typed without any immediate online correction having to be made. Typically, we find that the sentence-initial pause was relatively long. Also, typically, we find interval lengthening before and after numbers. The pause after ‘Glasgow’ is structurally located between the NP and the predicate and could also be related to extra monitoring of the spelling of the foreign place name.

14. [•05.397]große•[•00.686]Mengen•an•Schlafmedi[•00.500]ka[•00.359]menten•[•00.406]verabreichte.•

Segment fourteen is yet another straightforwardly typed sequence. The translator had to pause initially for more than 5 s. to think of how to translate the direct object ‘large amounts of sleeping medicine’ (and the verb). The segment shows that when translation is progressing smoothly for this translator, pauses longer than 245 ms. only occur between full words – except in a very long polysyllabic word like ‘Schlafmedikamenten’.

15. [•01.529]Gestern•wurde•[•00.874]er•[•00.944]des•vierfachen•Morde

The translation of ‘Yesterday, he was found guilty of four counts of murder’ proceeds unproblematically, including the transposition involved in translating ‘he was’ to ‘wurde er’. The pauses around the typing of ‘er’ were probably used to find a solution for how to translate ‘four counts of murder’. The missing ‘s’ at the end of ‘Morde’ indicates that a new translation had occurred to the translator.

16. [•01.560]◄◄◄[•00.905]in•Folge•eines•[•00.608]la

The new translation (again) involved rearrangement of the information structure and fronting of the final adverbial phrase (‘following a long trial’) in the ST. The translator appears not to have seen (or foreseen) this final adverbial phrase when segment fifteen was typed. Just as the translator was going to type ‘langen’, typing was broken off by a better idea for translating ‘following’.

17. [•04.493]◄◄nach•[•00.359]lange[•00.843]m•F[•00.359][Shift+Back]Ver[•00.375]◄◄[Shift+Back]Gerichtsverfahren•[•00.983]des•vierfachen•Mordes•[•00.765]schg[•00.702]◄u[•00.234]ldig•gesprochen.•[•04.134]

This new idea concerned the interpretation and translation of ‘following’. The causal interpretation suggested by ‘in Folge’ was discarded and replaced with a temporal interpretation (‘nach’ [after]). Making this change and thinking out the necessary rearrangement of the construction took only about four and a half seconds. Finally, the generic ‘Verfahren’ [proceeding(s)] was specified into ‘Gerichtsverfahren’ [legal proceeding(s)].6

Although no statistically significant results can be drawn from what is only an illustration of the way keystroke analysis can be used to show how translators process text in segments, it is worth pointing out a few other observations. The speed of production accelerated very noticeably across the first four sentences (including the headline). The headline was translated at a speed of one ST word per 8 s., the first sentence of the main text at one ST word per 5 s., and sentences two and three at one word per 1.4 and 2 s. This acceleration was accompanied by an increase in the number of keystrokes per second, which went up from about three to five per second, occasionally peaking at 8. The number of text production keystrokes went up from 2.6 to 4.6 per second, but not, as might perhaps be expected, because the relative proportion of text production keystrokes out of the total number of keystrokes increased. There was an increase, but it was minimal. There was also an increase in the number of words processed within a single

6 Segments 16 and 17 have been comprehensively analysed in Jakobsen (2016).


segment. The headline and first sentence of the text (23 words) were processed in 12 segments, i.e. about two words per segment, whereas sentences two and three (34 words) were processed in just five segments, almost seven words per segment. Further analysis would probably show that these various measures reached their peak operating levels already in the course of the short extract examined here. Levels would be unlikely to continue increasing much beyond the level reached here, with peaks of around 80 keystrokes per segment and typing (target text production) at a speed of about 4–5 keystrokes per second. (A translator capable of continuing for eight hours at this pace would produce upwards of 15,000 words of translated text.) As we have seen, keystroke-based analysis leaves a number of questions unanswered. In the next section the potential of gaze data to answer some or all of the questions raised as well as to support or question our tentative generalisations will be examined. It will be particularly interesting to see if gaze data will fully or only partially support keystroke-based segmentation, or if gaze data will perhaps point to totally different principles of segmentation. If it turns out that gaze data support the segmentation already made, it will be interesting to know to what extent gaze data provide evidence on which interpretations of what happened can be based and/or evidence of possible causes of some of the delays that could not be explained by reference to the keylog data.
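As a quick check, the measures reported above can be recomputed; the helper below is my own illustration, the figures are those cited in the text, and the six-keystrokes-per-word conversion is an assumption not stated there:

def words_per_segment(words: int, segments: int) -> float:
    return words / segments

# headline + first sentence: 23 words in 12 segments
print(round(words_per_segment(23, 12), 1))   # ~1.9 words per segment
# sentences two and three: 34 words in 5 segments
print(round(words_per_segment(34, 5), 1))    # ~6.8 words per segment

# Sustained typing at ~4.5 keystrokes/s for 8 hours, assuming ~6 keystrokes
# per word (including spaces), would yield:
keystrokes = 4.5 * 8 * 3600
print(int(keystrokes / 6))  # 21600 words, i.e. upwards of 15,000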

4.5 The Contribution of Gaze Data

Data from the eye tracker come with a time stamp and coordinate information about the location on the screen where the gaze was recorded as having been directed. Additional information is often given, e.g. separate information about the position of the gaze for the left and the right eye, the pupil diameter of each eye, whether there was a cluster of gaze observations big enough to be interpreted as a fixation, whether the velocity of gaze movement was such that it was interpreted as a saccade, and more. The frequency with which this is recorded depends on the speed of the eye tracker. A 60 Hz tracker records and reports the position of the gaze 60 times per second. A 1000 Hz tracker records and reports the position 1000 times per second. The speed of the eye tracker used for the recording under analysis was 300 Hz. The quantity of data recorded by an eye tracker is very large, and sometimes there is a striking contrast between the amount of apparently very accurately recorded data and the often quite poor accuracy of much of the data. This can be due to many external factors affecting the quality of a recording (contact lenses and glasses, reflection of light, head movement, heavy eyelids, long eyelashes, etc.), but is mainly due to calibration not having been fully accurate. Suboptimal calibration may lead to the display of pixel coordinates being systematically inaccurate or gradually drifting away from what a participant was actually looking at. Translog II displays individual gaze samples with little dots on the screen where the gaze was detected: green for the right eye, red for the left eye. When samples cluster in a fixation on (part of) a word, the little dots will (ideally) create a cloud-like


Fig. 4.4 Gaze samples (in green and red) and fixations (blue circles) from 00,000 ms. to 189,420 ms., i.e. during the drafting of the translation of the four sentences analysed

balloon on that word. A fixation interpreter working mathematically from the recorded sample data will draw a blue-lined circle where the fixation was identified.7 As gaze data is very sensitive to the factors just mentioned, it is sometimes difficult to immediately see what a recorded fixation was most likely a fixation of. In order to connect the recorded gaze information from the eye tracker with what a participant was actually looking at and most likely saw, we need to have a reading model, as well as a translation model and a text production model. We know that both the kind of reading that takes place in a translation task and the typing of the target text are different from ordinary reading and text production. For this reason it might be better to simply say that a comprehensive translation process model is needed which integrates reading, translation and writing.8 Looking at Fig. 4.4, which shows all the recorded gaze samples and fixations in the ST windows across the slightly more than three minutes of translation analysed

7 Translog II also has a GWM (gaze-to-word mapping) function designed to automatically identify words that are fixated. GWM was not used in the present study.
8 A translation process model could include several modalities: listening, reading and watching at the input end; speaking, writing and signing at the production end.


below, it appears as if the first two words of the headline (‘Killer nurse’), the first two words of the second line (‘Hospital nurse’) and the last two words of that line (‘from Glasgow’) were not read, which seems impossible considering that they were accurately translated. In a situation like that we have three fundamental options: (1) trusting the data (as represented), (2) discarding the data as irrelevant or (3) accepting the data as providing a skewed but relevant and interpretable representation. If we trust the data, we will say that the fixations and their positions were recorded and displayed correctly, and that the words were not read but were nevertheless somehow translated. This is very improbable and leaves us with the impossible task of explaining how text can be translated that has not been seen. If we reject the data, we have no experimental evidence to go by and will have to start all over. There is the (technical) possibility that the words were read, but fixations on them were not recorded for some reason (data loss explanation). However, this is as improbable as option (1), considering the unlikelihood of data loss occurring so systematically at the beginnings and ends of lines, but also (and mainly) because the raw xml data show no such systematic occurrences of data loss. There remains the explanation that the words were fixated and read, and that the eye tracker recorded the gaze movements correctly but projected a skewed representation requiring systematic adjustment to be directly informative. If we could fit the recorded data better to our embryonic, intuitive translation-process model by means of some mathematical formula stretching the representation of samples and fixations left and right from the vertical centre, we could assume that the adjusted data would accurately represent the location of the gaze. Although our tentative gaze analysis depends critically on acceptance of this assumption of systematic displacement, we shall not attempt to recalculate all the raw gaze data mathematically. For the present exploratory purpose, all we will say is that inaccurate calibration appears to have shrunk the eye tracker’s model of the screen, effectively reducing the projected screen by the space of some 14–18 characters at the extreme left and right of all lines, as seen in Fig. 4.4. In order to map recorded fixations to the word which was most probably seen and read during a fixation, fixations on the far left should therefore be moved about 14–18 characters to the left and fixations on the far right should be moved farther to the right by a similar distance. The distance, left or right, by which a fixation needs to be moved is reduced by the space of about one character for each 3–4 characters the fixation lies closer to the middle of the distortion. This means that e.g. a fixation occurring 35 character spaces towards the middle of the screen from the left should be moved only six character spaces to the left.9

9 As will be seen in subsequent figures, there is also vertical displacement in the representation of gaze samples and fixations in this recording. Fixations on the headline are well aligned vertically, but the fixations on the second line mostly appear above the line, and the same applies to fixations on lines three and four. Towards the bottom end of the screen we see the reverse phenomenon: the lower on the screen fixations are displayed, the more they tend to be placed lower than their probable target. Inaccurate calibration can also explain why left-eye samples (in red) often appear lower down on the screen than (green) right-eye samples, except in the top and bottom right areas.
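The horizontal adjustment just described can be sketched as follows (my own linearization: the figures 14–18 and “one character per 3–4 characters” come from the text, but the functional form, the 100-character line width and the exact parameter values are assumptions):

def adjust_column(col, line_width=100, edge_shift=16, taper=3.5):
    """col: recorded fixation position in character spaces (0-based).
    The outward shift equals edge_shift at the screen edge and decreases
    by about one character for every `taper` characters towards the middle."""
    centre = line_width / 2
    if col < centre:
        shift = max(0.0, edge_shift - col / taper)               # from left edge
        return round(col - shift)
    shift = max(0.0, edge_shift - (line_width - col) / taper)    # from right edge
    return round(col + shift)

print(adjust_column(0))    # -16: an edge fixation maps ~16 characters further left
print(adjust_column(35))   # 29: moved ~6 characters left, as in the text's example
print(adjust_column(50))   # 52: near the middle the shift has almost tapered away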


Fig. 4.5 Screenshot of fixations and gaze samples during the first 9,702 ms of initial orientation (before onset of typing). Cp. Segment 1 in Sect. 4.4 above

With the above caveats and suggested adjustments, let us explore what information can be derived from the gaze data. Unfortunately screenshots cannot show the dynamics of eye movements, but will hopefully give a sufficiently accurate illustration of some of the critical fixations, movements and shifts to be demonstrated.10 The total number of keystrokes and fixations made during the translation of the headline and the first three sentences was roughly the same, both between 700 and 750. The gaze analysis in the next section attempts to track the main progression of the path followed by the gaze, with a few uneventful seconds omitted. The 19 screenshots (Figs. 4.5, 4.6, 4.7, 4.8, 4.9, 4.10, 4.11, 4.12, 4.13, 4.14, 4.15, 4.16, 4.17, 4.18, 4.19, 4.20, 4.21, 4.22, and 4.23) generally follow the keystroke-based segmentation made earlier, except in a few instances. Segments two and thirteen have been split up into four screenshots each. Details have been included which serve to illustrate phenomena that add new insight into the expert translator’s processing, especially concerning how reading and typing are coordinated, how new reading and typing are added by means of anchoring, and the scope of processing units.

10 The interested reader is invited to download a free copy of Translog II and access and replay the P01_T1 file from the SG12 experiment, which is also freely accessible at: https://sites.google.com/site/centretranslationinnovation/


Fig. 4.6 Screenshot of gaze samples and fixations between 7,012 and 9,867 ms into the recording. Cp. Segment 2 (1)

Fig. 4.7 Screenshot showing fixations and gaze samples between 9,702 and 13,315 ms. Cp. Segment 2 (2)


Fig. 4.8 Screenshot of fixations and gaze samples between 13,314 and 15,691 ms. Cp. Segment 2 (3)

Fig. 4.9 Gaze activity between 15,675 and 17,247 ms. A space was added immediately after ‘Todes’ at 17,332 (not seen). Cp. Segment 2 (4)


Fig. 4.10 Gaze samples and fixations 17,334 to 23,439 ms. Cp. Segment 3

Fig. 4.11 Gaze samples and fixations 25,105 to 32,711 ms. Cp. Segment 4

4.6 Analysis of Gaze Data

The representation of gaze samples and fixations in Fig. 4.5 illustrates the need for horizontal adjustment. Accepting the adjustments mentioned, we can see that all the words of the headline were fixated, i.e. they were all read before typing was initiated. Two or three words, ‘Killer’, ‘nurse’ and, less intensely, ‘receives’, were fixated repeatedly and attracted special attention from the outset. Importantly, only the headline was read before typing started.


Fig. 4.12 Gaze samples and fixations 34,244 to 46,265 ms. Cp. Segment 6

Fig. 4.13 Gaze samples and fixations from 48,762 to 56,206 ms. Cp. Segment 7

There were several visual visits to the target text screen, with a few brief fixations recorded en route, possibly also a look at the keyboard to adjust finger position, but no linear reading beyond the headline. The visits to the target window might indicate an earlier intention to start typing or may simply have been motivated by a desire to be well oriented in the relevant workspace. We are now in a position to answer the first question raised under keystroke analysis: what was read by the translator during initial orientation (and how)?


Fig. 4.14 Gaze samples and fixations 56,202 to 68,085 ms. Cp. Segment 8 (1)

Fig. 4.15 Gaze samples and fixations 70,502 to 72,864 ms. Cp. Segment 8 (2)

The translator spent almost ten seconds during initial orientation, reading only the ST headline before starting to type the translation, but reading and rereading certain of its words. In normal reading, ten seconds would be time enough to read about 50 words, but only the six words of the headline were fixated and read. In normal reading, six words would roughly be expected to receive five to six fixations, but the number of regressions and refixations on the words in the headline was much higher (more than 40).


Fig. 4.16 Gaze samples and fixations 89,294 to 95,328 ms. Cp. Segment 9, first half

Fig. 4.17 Gaze samples and fixations 95,328 to 102,136 ms. Cp. Segment 9, second half

The condensed style of the headline took extraordinary visual effort and attention, and therefore also exceptional cognitive effort, to unravel and translate. Figure 4.6 partially overlaps with the information in Fig. 4.5, but shows only the gaze activity immediately before and after the typing of the first character (‘K’) of the translation. In the space of less than two seconds before typing started, nine fixations were recorded on the headline, most of them targeting ‘Killer nurse’. Even after about 40 fixations on the six words of the headline, the translator revisited the words whose translation she was about to type immediately before initiating typing.


Fig. 4.18 Gaze samples and fixations from 102,136 to 131,330 ms. Cp. Segments 10, 11, and 12

Fig. 4.19 Gaze samples and fixations from 13,128 to 133,084. Cp. Segment 13 (1)

The distance in time between the last visual inspection of a word (cluster) and the onset of the typing of its translation, the so-called EKS (cf. Dragsted and Hansen 2008; Dragsted 2010), was very short, less than 0.2 s.


Fig. 4.20 Gaze activity 133,085 to 138,169 ms. Cp. Segment 13 (2)

Fig. 4.21 Gaze activity 139,507 to 142,907 ms. Cp. Segment 13 (3)

In comparison, the distance in time between the first visual inspection of ‘Killer nurse’ and the typing of the initial ‘K’ of ‘Krankenschwester’ was more than 9.5 s. These two eye-key span measures are EKSmin and EKSmax, respectively. EKSmax typically depends either on how far ahead a translator read before starting to type a translation or on how difficult it was to process a passage (analogous to the EVS in interpreting), whereas EKSmin shows the degree to which a translator exploits the opportunity to refresh memory of the source text expression whose translation is about to be typed, thus indicating the degree of a translator’s reliance on information stored in short-term memory (STM) or need to take advantage of the possibility of refreshing information in STM. (A computational sketch of these two measures is given at the end of this section.)


Fig. 4.22 Gaze and keyboard activity 143,057 to 146,395 ms. Cp. Segment 13 (4)

Fig. 4.23 Gaze and keyboard activity 151,511 to 159,795 ms. Cp. Segment 14

The two EKS measures enable us to ask and answer the question: what was the distance in time between the first and the last fixation on a word (group) and the typing of the matching word(s) or segment? The answer to this question goes a long way towards explaining how reading and typing processes are coordinated in translation and how effortful the processing of specific words or groups of words is. Gaze data thus also provide an answer to another question: what was fixated in the interval immediately prior to the onset of the typing of (a portion of) a new segment?

At the time ‘K’ was typed (at 9,702 ms), the gaze was on ‘Killer nurse’ in the ST window, but immediately afterwards it was on its way down to the TT window to monitor typing on screen. Figure 4.6 documents that the first letter ‘K’ was typed in the target text window before the gaze arrived to monitor the typing. At least two fingers must already have been in place on the keyboard to press the key without visual attention, which demonstrates this translator’s ability to multitask in the sense of executing and coordinating two different motor activities simultaneously: moving the eyes and attending to information in one part of the screen while moving the fingers and typing in another. At the moment ‘K’ was typed, the gaze began to travel down, and by the time the next key was struck (‘r’) the gaze had already arrived, as seen in the next screenshot.

Figure 4.7 shows about 3½ seconds of gaze activity following the activity represented in the previous screenshot, covering the intervals before and after the second letter of ‘Krankenschwester’ [nurse] was typed. The assumption that gaze data are represented in a displaced manner, about sixteen character spaces too far to the right at the beginning of lines, is supported by the details shown here. It is unlikely that the translator’s gaze would have repeatedly fixated ‘four life’, and only those two words, while beginning to type a translation of two different words (‘Killer nurse’) that had not been fixated. Similarly, it is improbable that the gaze in the target text should be on an empty space to the right of the typing area rather than directly on the characters being typed. Fixations marked between the headline and the target window can be interpreted as brief, incidental en route fixations, recorded as the gaze was travelling down (first) and (then) up between the reading and typing areas on the screen. As soon as typing had begun, the translator’s gaze was briefly active in the source text. The fingers were probably in place to tentatively and very slowly type ‘K’ and ‘r’ (of ‘Krankenschwester’), but immediately after the ‘r’ had been typed, gaze attention returned to ‘Killer nurse’, which received twelve new fixations. The translator clearly already had a definite translation beginning with ‘Kr’ in mind, but the continued fixation of ‘Killer nurse’, and only those words, for 3½ seconds indicates continued uncertainty about the translation of those two words. The translator was not dealing with a problem of how to integrate a translation of the words into a context, for no context had been read, but was most probably still evaluating the translation or searching for a different solution. This illustrates the kind of answer gaze data can give to our next question: what help (if any) can gaze behaviour offer to support the interpretation of typing hesitation as an indication of uncertainty?


The uncertainty manifested in the typing delays concerned the translation of the first two words of the ST. Once the translator had accepted a solution, the blockage was broken and typing of the translation was completed fairly quickly. There was no need for further visual attention to the ST, so attention could be allocated to monitoring, i.e. visually following and checking the typing of the next 12 characters of the word, as seen in Fig. 4.8. In the 2.4 s. sequence shown in Fig. 4.8, the gaze followed the typing of twelve characters (at a speed of 5 characters per second and with the EKSmin increasing to 2.4 s.) in a kind of visual pursuit. Before the last two characters (‘er’) had been typed, the gaze was already shifting, as can be seen in the final en route fixation on ‘2002’. The ability to continue typing while reading new text is vaguely reminiscent of the simultaneous interpreter’s ability to listen to new speech while speaking the interpretation of ‘old’ speech, but with an important difference: the simultaneous interpreter has to listen and speak continuously, whereas the translator in our experiment read ST while typing TT in parallel only for the duration of about half a second. Nevertheless, it is this ability to read ‘new’ ST and simultaneously type TT that permits a translator to reduce or (ideally) overcome the typical jerkiness of translational text production and maintain a steadier flow of output. Translational expertise is necessary to ensure a constant flow of translation production; that is the primary and sine qua non requirement. Secondly, an ability to continue typing, at least for a short time, while reading and translating new source text is necessary to reduce or remove the disruptiveness of the process. Finally, from the point of view of optimal productivity, typing speed should be fast enough not to constitute a bottleneck in the process.

Figure 4.9 shows how the translation of the sentence subject ‘Killer nurse’ was quickly completed (twelve keystrokes in less than 1.6 s., averaging 7.5 keystrokes per second). The translation had been found, and all the necessary instructions to the fingers had been given, so that the gaze could remain in the ST window to read new text. The first seven or eight fixations in the headline (after adjustment) were all on ‘receives’, with three fixations quickly added at the end on ‘four life sentences’. The framework solution mentioned above for ‘receives’ (‘wird . . . (zu) . . . verurteilt’ [is . . . (to) . . . sentenced]), with or without ‘zu’, was no doubt now available in the translator’s mind, but needed to be combined acceptably with a translation of ‘four life sentences’. After more than 2 s. of continuous reading of ‘four life sentences’ with no typing activity, the ‘w’ of ‘wird’ was typed (at 19,344 ms). As ‘wird’ was typed, visual attention continued to be on ‘four life sentences’, but then shifted down to follow the typing of the first translation suggestion, ‘zur vierfacher’, in monitoring pursuit style. After another look at ‘four life sentences’, the gaze turned again to ‘zur vierfacher’ in a different, evaluative reading style, with repeated, long regressive and progressive fixations and no typing, indicating dissatisfaction with this first solution, which was subsequently abandoned and deleted at 24,477 ms (not seen in Fig. 4.10).


After the sequence of fixations shown in Fig. 4.10, a second solution was typed (Fig. 4.11). There was still brief initial visual attention here on ‘four life’, but the translator’s attention was now almost fully given to pursuit-style monitoring of the typing of the new solution, interspersed with regressive, evaluative fixations. After typing the second solution and shifting the gaze quickly to the ST as if intending to move on to a new segment, the translator took another careful look at ‘vierfach lebenslang’ before changing it to ‘vierfach lebenslänglich’ (Segment 5). This change and the next (Fig. 4.12) were made across 12 seconds without a single look at the ST, creating the longest EKSmin in the extract examined as well as the second longest unbroken gaze-pursuit sequence (see Fig. 4.23 for the longest). It is evident that throughout this reformulation process the translator’s focus was completely on target text adequacy and that the relevant segment of ST meaning was fully available in STM.

The translation of the first sentence of the main text began quickly, after minimal prior reading of the ST. Only ‘Hospital nurse Colin’ had been read when ‘Die Krankenschwester Colin’ was typed (Fig. 4.13). Typically for this translator, when the last four letters of ‘Krankenschwester’ had not yet been typed, the gaze went up briefly, without interrupting the typing process, to ‘Colin’ as a final reminder before it would be typed. Up to this point the translator proceeded by reading ahead only as far as was necessary to start translating, by a principle of very economical forward reading. In the keystroke analysis above we observed that there was a very long interval of about 18 s. after the typing of ‘Colin’, where the translator had apparently become aware of a possible anomaly in the text, being unsure that ‘Colin’ could be the name of a female nurse, as was implied in the translation of ‘nurse’ into ‘Krankenschwester’. The gaze path by which the translator arrived at the conclusion that the text was about a male nurse shows that for a little more than 2 s eight fixations were all on the words ‘Hospital nurse Colin Norris (was) imprisoned’, and only then did the translator decide to break with the principle of sparse forward reading and quickly scan and read the remaining 11 words of the sentence in just six short fixations, as seen in Fig. 4.14. At this point (at 60,292 ms), the translator had seen ‘his’ and knew that ‘Krankenschwester’ would not do as a translation, but did not immediately think of a solution, so the gaze travelled back to a series of reiterated fixations on ‘nurse Colin’ for no less than 8 s, during which the translator was no doubt struggling to find a way of solving the problem caused by ‘nurse’ being used in the ST to refer to a male person. This interpretation is supported by the next gaze movement, which was down to the two TT occurrences of ‘Krankenschwester’ (cf. Fig. 4.14), for which the translator had now probably come upon an alternative. In order to ascertain that the new solution would work, a second reading of the information towards the end of the opening sentence was undertaken (Fig. 4.15) to verify the decision that ‘Colin’ must refer to a male nurse.


Significantly, the last word read (again) was ‘his’, and very quickly after, the two occurrences of ‘Krankenschwester’ were replaced by ‘Krankenpfleger’ (Segment 9). In short, the gaze data show that in order to solve the problem the translator had to break with her sparse forward reading principle and read on in order to find contextual information that could resolve her uncertainty. ‘Die Krankenschwester’ was changed to ‘Der Krankenpfleger’, but then immediately changed again to ‘Der im Krankenhaus angestellte Krankenpfleger’ [The in the hospital employed male nurse]. As can be seen in Fig. 4.16, the translation of ‘Hospital nurse’ was now available in STM, so that the gaze was free to visually pursue the typing results on screen up to the en route fixation in Fig. 4.16, where the gaze had just begun to move up to the nurse’s name to refresh memory of it before it would be typed a moment later. Once again, the meaning represented in four TT words was held in STM by this translator without visual reinforcement for about 6 s.

Figure 4.17 shows typical, fluent text production with three gaze shifts from the TT to the ST and back. After ‘wurde’, and grammatically before the predicate, there was a break during which the rest of the sentence was read (11 ‘new’ words). The forward reading was ‘anchored’ by brief visual reinspection of old text (‘Colin Norris was imprisoned’) before reading moved on into new text. The rest of sentence one (Fig. 4.18) was translated in 29 s. with only one major delay; 60 characters of target text were typed in just 11 s. The typing of ‘für den Mord an vier seiner Patienten’ followed very fluently, with several gaze shifts to ‘for the killing of four of his patients’. While ‘Patienten’ was being typed, the gaze was shifted up to read the words ‘imprisoned for life today’. The gaze data indicate that the delays before and after the typing of ‘Patienten’ were caused by uncertainty about whether ‘heute’ (‘today’) would be better placed immediately after the verb (*‘wurde heute’) to avoid ‘heute’ being misunderstood as relating to when the murders were committed, but upon reflection no change was finally made in the word order. After ‘heute’ had been typed, the recently typed TT and the matching portion of the ST were reread (108,330–112,000 ms). Apparently satisfied with the translation, the translator then read ‘was imprisoned for life’ in three fixations and completed the text without further visual attention to the ST, despite some slight uncertainty about whether to use ‘Gefängnis’ (neuter) or ‘Gefängnisstrafe’ (feminine). Again the gaze pattern shows that the gaze was typically shifted from the TT to the ST and back for every one or two words of progress, illustrating how often this translator needed to refresh even small amounts of information in STM. During the typing hesitations before ‘Patienten’ and before ‘heute’, the gaze went up to both ‘patients’ and ‘today’, which suggests that the hesitation was indeed related to the problem of where to fit in ‘heute’.

The translation of the second sentence of the main text (Fig. 4.19) illustrates the way gaze activity operated when translation ran smoothly. The translation of ‘32 year old Norris from Glasgow killed the four women in 2002 by giving them large amounts of sleeping medicine’ (112 characters) was read, produced and typed in just 28 seconds.


In the keystroke analysis, the typing of the translation of this portion of text was divided into two segments only. The gaze data make it clear, however, that although the typing was very fluent (except before the translation of ‘large amounts’), the processing of the sentence was broken into six smaller units, some of which were visually inspected more than once (‘32 year old Norris / from Glasgow / killed the four women / in 2002 / by giving them / large amounts of sleeping medicine’). Figure 4.19 shows how the gaze went back up to where it was last shifted from, the ‘anchor’ word(s), and then continued right to read new text (‘32 year old Norris from Glasgow’) and on into the next line to read ‘killed the four women’, but not beyond that to the right, before jumping down in a long saccade to the typing space, where the ‘D’ had already been typed by the time the gaze arrived (cp. Fig. 4.6). Assuming that the translator had stored the text just seen in STM, the next ST reading would be expected to start from ‘in 2002’. As it turned out, however, the gaze was shifted to the ST text under translation much earlier. The gaze went back up to ‘32 year old Norris from Glasgow’ as the translation of that passage was being typed. Then, as ‘Glasgow’ was half typed (see Fig. 4.20), the gaze was already re-reading ‘killed the four women’ in the next line, but not beyond that. When the translator had typed the translation of ‘the four’ and had started to type the translation of ‘women’, the gaze was shifted to fixate ‘in 2002’, just in time for the translation (‘2002’) to be typed with minimal delay, after fixation of the anchor words ‘vier Frauen’ in the TT (Fig. 4.21). This sequence of gaze movements and keystrokes again illustrates the translator’s ability to engage in overlapping, multitasking activity, which makes it possible to maintain continuous text production, at least for a while. It demonstrates how reading and typing can be coordinated so that traces in the typing output of cognitive processing boundaries being crossed are reduced or obliterated. It also illustrates this translator’s strategy of sparse forward reading working successfully: the entire early portion of the sentence was translated without a single look at the direct object of ‘giving’. It was pointed out earlier that there was a long interval before the translation of ‘large amounts of sleeping medicine’. The gaze data show that above-average fixation time was spent on ‘large amounts’, but there is no clear clue as to what caused this.11

11 A speculation might be that the translator was considering a more technical medical translation of ‘amounts’.


The last five words were first read while the translation of ‘by giving them’ (‘indem er ihnen . . . verabreichte’) was being typed and visually monitored, after visual anchoring to ‘vier Frauen’ (Fig. 4.22). Next, the gaze first travelled up to reread ‘giving them large amounts’, then travelled down to the TT input area, then back up to ‘large amounts’ and ‘of sleeping medicine’, and only then did the typing of the translation of ‘large amounts’ begin (at 151,500 ms). Two and a half seconds later, the gaze went up once more to refresh the translator’s memory of what it was ‘large amounts of’. Having done that, the translator’s visual attention could be fully devoted to visually pursuing and monitoring the typing of the long and difficult word ‘Schlafmedikamenten’ (‘Schlafmedi[•00.500]ka[•00.359]menten’) and the final verb ‘verabreichte’ (Fig. 4.23). As was typical of this translator, the gaze was already on its way up to take in new text in the third sentence before the typing of the previous segment was completed. As soon as the middle ‘e’ in ‘verabreichte’ had been typed, the gaze was on its way up to fixate the first word of the third sentence (‘Yesterday’). As Jakobsen (2016) offers a detailed analysis of sentence four, there is no need to repeat it here. Suffice it to say that the gaze data from this sentence add evidence of the translator’s sparse forward reading strategy (including an illustration of the risk it involves), of the pattern of shifts between ST reading and TT typing with occasional instances of concurrent reading and typing, of the translator’s need for frequent STM refreshment, and of the various reading styles employed.
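To make the two eye-key span measures discussed throughout this section operational for large-volume testing, the following minimal sketch shows one way EKSmax and EKSmin could be computed from time-stamped fixation and keystroke logs. The data structures and the example values are hypothetical illustrations echoing the ‘Killer nurse’ case, not the procedure actually used in this study.

    # Hypothetical sketch of computing EKSmin and EKSmax for one ST unit.
    # Timestamps are in ms; the values below echo the 'Killer nurse' example
    # (first fixation early in the task, typing onset at 9,702 ms).

    def eye_key_spans(fixation_onsets_ms, typing_onset_ms):
        """Return (EKSmax, EKSmin) for one source-text unit.

        fixation_onsets_ms: onsets of all ST fixations on the unit that
                            precede the typing of its translation.
        typing_onset_ms:    timestamp of the first keystroke of the translation.
        """
        prior = [t for t in fixation_onsets_ms if t <= typing_onset_ms]
        eks_max = typing_onset_ms - min(prior)   # span from first inspection
        eks_min = typing_onset_ms - max(prior)   # span from last inspection
        return eks_max, eks_min

    # 'Killer nurse': first fixated early on, refixated just before typing.
    fixations = [200, 1100, 2500, 8100, 9550]
    print(eye_key_spans(fixations, 9702))  # -> (9502, 152): ~9.5 s and <0.2 s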

4.7 Discussion and Some Tentative Conclusions

What can we learn from a fairly poor recording of a single translator’s gaze and keyboard activity while she was translating four randomly selected sentences? It would be very rash to claim that what we have found in a single recording has validity for cognitive aspects of translation in general or for the way all translators manage their coordination of gaze and keyboard activity. But the recording points to a number of regularities against which future case studies, as well as large-volume analyses, will be able to compare their results.

There are two or three technical lessons to be learnt if experiments involve eye-tracking. First, there is a need to be extremely careful not only with calibration, but with the many other factors that may distort the way data are displayed, regardless of whether experiments involve concurrent keyboard typing or spoken responses. Secondly, if a certain amount of distortion is inevitable, a way of correcting the distorted projection mathematically has to be found, rather than, as done here, loosely estimating the distortion based on inspection of the total projection of gaze samples and fixations.


It is crucial that we are able to connect the gaze data with what translators actually read. A third consideration concerns our approach to volume and quality. The importance of accurate data should not be underestimated: running large-volume analyses of ‘noisy’ data from distorted recordings is not likely to identify relevant patterning. Qualitative, case-based evaluation of data must go hand in hand with large-volume analysis, and the tentative conclusions suggested in the present study, as they are all based on observations of a single, far from perfect recording of a single translator’s performance across 3 minutes, must obviously be tested against a battery of accurate recordings to reach the status of hypotheses about translational processing.

What are the observations that might deserve to be tested in this manner and would be testable? With respect to segmentation, it was found that by not accepting segments containing only navigation keystrokes the number of fragmentary segments could be meaningfully reduced. This could easily be implemented automatically, whereas the hand-crafted reduction which resulted from including ‘Kr’ as part of segment 2 and ‘einer’ as part of segment 12 could not.

The analysis of gaze data showed that the translator’s preferred reading strategy was ‘as little as possible’ or ‘only as much as necessary’. On occasion, the translator read quite a few words forward (6 in Fig. 4.5, 17 in Fig. 4.14, 11 in Fig. 4.17), but in all of these cases the forward reading was followed up either by the typing of the translation of only the first couple of words or by rereading of smaller portions before translation. The sparse forward reading strategy was generally followed by typing of the translation of the subsegment read, followed by reading of new text before the typing of the translation of the previously read subsegment had been completed. This was seen most clearly in the way sentence two of the main text was read and its translation typed. The shifts between reading or rereading a portion of ST and typing its translation suggest that this translator’s processing of meaning proceeded in considerably smaller units than the segments identified in the keylog analysis with 1 s. segment boundaries. Initially, ten words were read, but only the first three of them were translated before a portion of the same text was reread and that portion translated, and so forth (cf. Table 4.1). In the subsegments identified in this manner, by what was typed immediately following reading of the source text, the number of words typed per segment ranged between one and four.

Gaze data thus provide evidence of segmentation at a deeper level than has most often been assumed in TPR. The segments identified from gaze data appear to be closer to the way translation processing goes on in the brain, by cognitive processing of minimal translatable units. This by no means invalidates segmentation based on time-delay boundary criteria. In fact, and very happily, time delays in the typing activity and gaze movements and shifts converge beautifully to define the subsegments shown in Table 4.1. As can be seen, the shorter segments were all separated from each other by a typing pause of about half a second. It is true that pauses of similar duration also occurred within short segments, e.g. around numbers (‘32’ and ‘2002’) and in the typing of ‘Schlafmedikamenten’, but in these cases the delay did not involve a gaze shift between the two windows.


Table 4.1 (Sub)segments of sentence two of the main text, by reading (row 1), by typing coordinated with reading (row 2), and by 1 s. segment boundaries in the typing (row 3). Intervals below 245 ms not represented

The regular occurrence of intervals of about half a second between the typing of subsegments, in combination with the gaze behaviour across the ST and TT windows, supports the assumption that the cognitive subsegment content being attended to, processed and translated was indeed that represented in the ST and TT subsegments. It would be very interesting to see, in a large sample, whether processing units which can be expressed in 1–4 word phrases occur as generally as in the recording analysed here. By combining typing pause and gaze movement criteria, e.g. in the form of EKSmax and EKSmin measures, it should be possible to do large-volume testing of the generality of such a local observation and possibly identify the default scope (expressed linguistically in words) of a cognitive processing unit. (A computational sketch of such a combined criterion is given below.)

The duration of individual subsegments in sentence two of the main text ranged between 2 and 7 seconds, i.e. much longer than the 0.5 s. intervals between subsegments. This negligible delay in the flow of text production was made possible in part by the translator’s ability to read ‘new’ text while concurrently continuing to type for upwards of a second, giving the translator enough time to visually locate an anchor word (or several) and read 4–6 new words of text to identify the next translatable unit. The phenomenon of anchoring as a means of horizontal attachment was observable in connection with the majority of gaze shifts between the two windows (ST and TT); it was noticeable in the ST when new text was about to be read or old text about to be reread, and in the TT when new text was about to be added and monitored. This anchoring operation was the way the translator connected subsegments to form larger segments.


In Table 4.1, the refixation of ‘32 year old’ (in subsegment 2), ‘giving them’ (6), and ‘large amounts’ (7) illustrates the phenomenon. If a typical processing unit can be expressed in 2–3 words (±1), forward reading can theoretically be predicted to be slightly longer, because of the anchoring mechanism and because reading sometimes continues slightly longer than necessary, in a kind of overspill. As the scope of forward reading would appear from the large-volume analysis of the scope of cognitive processing units, this measure would be equally testable. The brevity of the expert translator’s subsegments can perhaps be seen as reflecting a routine developed over time based on the continuous availability of the ST for (re)inspection. This availability of the ST also means that translators are more immediately exposed to the ST than simultaneous interpreters, which potentially makes translators more subject to priming by the ST and therefore more inclined to find ‘literal’ or ‘literal default’ solutions (Carl and Dragsted 2012, p. 128; Ivir 1981, p. 58; Toury 2012, p. 225). Support for this assumption may be found in the ‘in Folge’ solution quickly arrived at for ‘following’ (Segment 16). Here the translator was possibly primed by the visual (and also phonetic and semantic) similarity of ‘in Folge’ to ‘following’. The hesitation in connection with the translation of ‘today’ could be interpreted as a possible case of syntactic priming. We can also observe in the small extract studied here that standard cultural-semantic assumptions are immediately activated, as in the case of ‘nurse’ being first translated as ‘Krankenschwester’. Priming has a strong effect on translators, as seen in the pervasiveness of literal default translation, and it also has an effect on segmentation.12 Finally, gaze data provide evidence of how efficiently a translator can economize with STM to maximize the processing power available for translation. Information in STM was mostly stored for less than 6 s. (with one exceptional peak at 12 s.). As the ST is permanently available for visual inspection, the translator can concentrate all effort on translating whatever item is in focus at a given point in time and allow unnecessary STM content to degrade quickly, since it can always be refreshed should this be relevant. This permanent availability of the written source text and the written delivery mode of translation are conditions that apply to all translators and strongly affect their cognitive text processing and segmentation. Translators bring different bilingual skills, different individual knowledge, different meaning-construction intelligence and different reading and writing skills to the task of translation, but it is a challenging thought that underneath all this difference we might be able to find commonly shared processing patterns.

12 Schaeffer (2013) makes an interesting case that literal, word-to-word translation is faster and cognitively easier than translation which involves rearrangement of word order.
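The following is a minimal computational sketch of the combined criterion suggested above: a subsegment boundary is posited only where a typing pause of roughly half a second coincides with a gaze shift away from the TT window, so that pauses of similar length without a window shift (as around ‘32’ and ‘2002’) are not counted as boundaries. The event format, the threshold value and the toy data are assumptions for illustration, not part of the original analysis.

    # Hypothetical sketch: segmenting a keystroke log into subsegments where a
    # typing pause of ~0.5 s coincides with a gaze shift between the windows.
    # Assumed formats: keystrokes as (timestamp_ms, char); gaze position is
    # queried via a gaze_window(t_ms) function returning 'ST' or 'TT'.

    PAUSE_MS = 450  # assumed threshold, close to the ~0.5 s intervals observed

    def subsegments(keystrokes, gaze_window):
        segments, current = [], [keystrokes[0]]
        for prev, curr in zip(keystrokes, keystrokes[1:]):
            pause = curr[0] - prev[0]
            # Boundary only if the pause is long enough AND the gaze was in the
            # ST window during it; pauses without a window shift stay in-segment.
            if pause >= PAUSE_MS and gaze_window(prev[0] + pause / 2) == 'ST':
                segments.append(current)
                current = []
            current.append(curr)
        segments.append(current)
        return [''.join(ch for _, ch in seg) for seg in segments]

    # Toy example with a gaze trace that visits the ST window around 5,000 ms:
    gaze = lambda t: 'ST' if 4500 <= t <= 5500 else 'TT'
    keys = [(4000, 'D'), (4200, 'e'), (4400, 'r'), (5400, ' '), (5600, '3'), (5800, '2')]
    print(subsegments(keys, gaze))  # -> ['Der', ' 32']

Run over a large corpus of aligned gaze and keystroke logs, a criterion of this kind would allow the scope of the hypothesized cognitive processing units to be estimated automatically.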


References

Angelone, E. (2010). Uncertainty, uncertainty management and metacognitive problem solving in the translation task. In G. M. Shreve & E. Angelone (Eds.), Translation and cognition (pp. 17–40). Amsterdam: John Benjamins.
Baddeley, A. D. (1986). Working memory. Oxford: Clarendon Press.
Bayer-Hohenwarter, G. (2012). Translatorische Kreativität: Definition–Messung–Entwicklung. Tübingen: Narr Verlag.
Butterworth, B. (1980). Evidence from pauses in speech. In B. Butterworth (Ed.), Language production. Volume 1: Speech and talk (pp. 155–176). London: Academic Press.
Carl, M., & Dragsted, B. (2012). Inside the monitor model: Processes of default and challenged translation production. Translation: Computation, Corpora, Cognition. Special Issue on the Crossroads between Contrastive Linguistics, Translation Studies and Machine Translation, 2(1), 127–145.
Carl, M., & Kay, M. (2011). Gazing and typing activities during translation: A comparative study of translation units of professional and student translators. Meta, 56(4), 952–975.
Dragsted, B. (2004). Segmentation in translation and translation memory systems: An empirical investigation of cognitive segmentation and effects of integrating a TM system into the translation process. PhD thesis. Copenhagen Business School, Frederiksberg, Denmark.
Dragsted, B. (2010). Coordination of reading and writing processes in translation: An eye on uncharted territory. In G. M. Shreve & E. Angelone (Eds.), Translation and cognition (pp. 41–62). Amsterdam: John Benjamins.
Dragsted, B., & Hansen, I. G. (2008). Comprehension and production in translation: A pilot study on segmentation and the coordination of reading and writing processes. In S. Göpferich, A. L. Jakobsen, & I. M. Mees (Eds.), Looking at eyes: Eyetracking studies of reading and translation processing (pp. 9–29). Copenhagen: Samfundslitteratur.
Gile, D. (1999). Testing the effort models’ tightrope hypothesis in simultaneous interpreting – A contribution. Hermes, 23, 153–172.
Goldman-Eisler, F. (1972). Pauses, clauses, sentences. Language and Speech, 15(2), 103–113.
Immonen, S., & Mäkisalo, J. (2010). Pauses reflecting the processing of syntactic units in monolingual text production and translation. Hermes, 44, 45–61.
Ivir, V. (1981). Formal correspondence vs. translation equivalence revisited. Poetics Today, 2(4), 51–59.
Jakobsen, A. L. (2002). Translation drafting by professional translators and by translation students. Traducción & Comunicación, 3, 89–103.
Jakobsen, A. L. (2003). Effects of think aloud on translation speed, revision, and segmentation. In F. Alves (Ed.), Triangulating translation: Perspectives in process oriented research (pp. 69–95). Amsterdam: John Benjamins.
Jakobsen, A. L. (2005). Instances of peak performance in translation. Lebende Sprachen, 50(3), 111–116.
Jakobsen, A. L. (2016). Are gaze shifts a key to a translator’s text segmentation? Poznań Studies in Contemporary Linguistics, 52(2), 149–173.
Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63(2), 81–97.
Schaeffer, M. (2013). The ideal literal translation hypothesis: The role of shared representations during translation. PhD thesis. University of Leicester, Leicester.
Schilperoord, J. (1996). It’s about time: Temporal aspects of cognitive processes in text production. Amsterdam: Rodopi.
Toury, G. (2012). Descriptive translation studies – And beyond (Rev. ed.). Amsterdam: John Benjamins.

Chapter 5

Explore the Brain Activity during Translation and Interpreting Using Functional Near-Infrared Spectroscopy

Fengmei Lu and Zhen Yuan

5.1 Introduction

Since the mid-1970s, functional near-infrared spectroscopy (fNIRS) has been developed as a non-invasive technique to investigate cerebral hemodynamic changes associated with brain activity under different stimuli by measuring the change in the absorption coefficient of near-infrared light between 650 nm and 950 nm (Huppert et al. 2009; Jöbsis-vander Vliet 1977; Yodh and Chance 1995; Yuan 2013a, b; Yuan and Ye 2013; Yuan et al. 2010, 2014). Compared to other available functional neuroimaging modalities, such as functional magnetic resonance imaging (fMRI) and positron emission tomography (PET), fNIRS has the advantages of portability, convenience and low cost. More importantly, it offers high temporal resolution and quantitative information for both oxy-hemoglobin (HbO2) and deoxy-hemoglobin (HbR), which are essential for identifying rapid changes in the dynamic patterns of brain activity, including changes in blood oxygen, blood volume and blood flow. As a neuroimaging method, fNIRS enables continuous and noninvasive monitoring of changes in blood oxygenation and blood volume related to human brain function. fNIRS can be implemented in the form of a wearable and noninvasive or minimally intrusive device, and it has the capacity to monitor brain activity under real-life conditions and in everyday environments. During neural stimulus processing, there is a local increase in blood flow, blood volume and blood oxygenation in a stereotyped hemodynamic response. Recently, advances in the understanding of light propagation in diffusive media (also known as photon migration) and technical developments in optoelectronic components have made it possible to extract valuable optical/hemodynamic information from the human brain.

F. Lu · Z. Yuan (*)
Bioimaging Core, Faculty of Health Sciences, University of Macau, Macau SAR, China
e-mail: [email protected]


Different fNIRS instruments, including commercial systems and laboratory prototypes, have been developed and used effectively in preclinical and clinical studies. Obtaining measurements of the hemodynamic response of localized regions of the brain allows inferences to be made regarding local neural activity.

The processes of text-to-text translation and simultaneous interpreting involve complex activities which consist of many sub-skills, including perception, listening and speaking, reading and writing, reasoning and decision-making, problem-solving, memory and attention. Successful execution of simultaneous interpreting depends to a large extent on verbal working memory, simultaneous speech perception and articulation, and switching between languages, and the capability of translating and interpreting is an accompaniment of bilingualism. So far, much work has been conducted to study the basic neural mechanisms of the translation and simultaneous interpreting processes using different neuroimaging methods, including electroencephalography (EEG), fMRI, direct cortical electrostimulation, PET and fNIRS (Borius et al. 2012; Hervais-Adelman et al. 2011; Janyan et al. 2009; Klein et al. 1995, 2006; Kurz 1995; Lehtonen et al. 2005; Price et al. 1999). Findings from previous work have indicated that (1) there is a hemispheric lateralization process involved in translation; (2) Broca’s area seems to be the most important brain region in response to translation tasks; and (3) no particular brain regions have been identified as being excluded from translation processes. In this chapter, we will first review the methods and instruments of fNIRS. Then the applications of fNIRS in investigating the translation and simultaneous interpreting processes will be discussed.

5.2 Basic Principles of fNIRS

Frans Jöbsis-vander Vliet, the founder of in vivo fNIRS, reported the first real-time non-invasive detection of hemoglobin oxygenation using transillumination spectroscopy (Jöbsis-vander Vliet 1977). Jöbsis and Chance and their colleagues also used fNIRS to study cerebral oxygenation in human subjects after applying the technique to laboratory animals (Brazy et al. 1985; Chance 1991; Jöbsis-vander Vliet 1999). Later, Ferrari and his research team investigated the effects of carotid artery compression on the regional cerebral blood oxygenation and blood volume of cerebrovascular patients, with reference to data from newborn brain measurements, utilizing prototype fNIRS instruments (Ferrari et al. 1986a, b). Importantly, Wyatt et al. and Reynolds et al. performed the first few quantitative measurements of various oxygenation and hemodynamic parameters in newborn infants, including changes in oxygenated (HbO2), deoxygenated (HbR) and total hemoglobin (HbT) concentrations, cerebral blood volume, and cerebral blood flow (Reynolds et al. 1988; Wyatt et al. 1986).

fNIRS rests on two physical and physiological facts: human tissues are relatively transparent to light in the near-infrared (NIR) spectral window, and the relatively high attenuation of NIR light in tissue is due to the main chromophore hemoglobin (the oxygen-transporting red blood cell protein) located in small vessels of the microcirculation, such as capillary, arteriolar and venular beds.


fNIRS is only weakly sensitive to blood vessels larger than 1 mm because they completely absorb the light. Given that the arterial blood volume fraction is approximately 30% in the human brain (Delpy and Cope 1997; Ito et al. 2005), the fNIRS technique offers the possibility of obtaining information mainly about oxygenation and blood volume changes occurring within the venous compartments. fNIRS is a non-invasive and safe neuroimaging technique that utilizes laser diode and/or light-emitting diode light sources spanning the optical window, and flexible fiber optics to carry the NIR light from (source) and to (detector) the tissue. Fiber optics are well suited to any head position and posture, and fNIRS measurements can be performed in natural environments without the need for restraint or sedation. Adequate depth of NIR light penetration (almost one half of the source-detector distance) can be achieved with a source-detector distance of around 3 cm. The selection of the optimal source-detector distance depends on NIR light intensity and wavelength, as well as on the age of the subjects and the head regions measured. As a consequence of the complex light scattering by different tissue layers, the length of the NIR light path through tissue is longer than the physical distance between the source and detector pair. According to the modified Beer–Lambert law (Yuan 2013b), the wavelength-dependent tissue optical density changes can be written in terms of the changes of the chromophores, including HbO2 and HbR, at time t and wavelength λ, as shown in Eq. (5.1):

$$
\begin{bmatrix} \Delta OD(r,t)\big|_{\lambda_1} \\ \Delta OD(r,t)\big|_{\lambda_2} \end{bmatrix}
= DPF(r)\, l(r)
\begin{bmatrix} \varepsilon_1(\lambda_1) & \varepsilon_2(\lambda_1) \\ \varepsilon_1(\lambda_2) & \varepsilon_2(\lambda_2) \end{bmatrix}
\begin{bmatrix} \Delta HbO_2(r,t) \\ \Delta HbR(r,t) \end{bmatrix}
\tag{5.1}
$$

in which OD is the optical density as determined from the negative log ratio of the detected intensity of light with respect to the incident intensity of light using continuous-wave (CW) measurements, ΔOD is the optical density change (a unitless quantity) at position r, DPF(r) is the unitless differential pathlength factor, l(r) (mm) is the distance between the source and the detector, εi(λ) is the extinction coefficient of the ith chromophore at wavelength λ of the laser source, and ΔHbO2 and ΔHbR (μM) represent the chromophore concentration changes of oxy- and deoxy-hemoglobin, respectively. After multiplying both sides of Eq. (5.1) by the inverse of the extinction coefficient matrix, the time series matrix for the changes in HbO2 and HbR is written in Eq. (5.2):

$$
\begin{bmatrix} \Delta HbO_2(r,t) \\ \Delta HbR(r,t) \end{bmatrix}
= \begin{bmatrix} Q_{HbO_2}(r,t) \\ Q_{HbR}(r,t) \end{bmatrix} \Big/ \bigl(DPF(r)\, l(r)\bigr)
\tag{5.2}
$$

in which the Q(r,t) vectors are the product of the inverse matrix of the extinction coefficients and the optical density change vectors. The same operational procedure can be extended to the n-wavelength case based on regularization methods, as shown in Eq. (5.3):


$$
\begin{bmatrix} \Delta HbO_2 \\ \Delta HbR \end{bmatrix}
= \bigl(E^{T} R^{-1} E\bigr)^{-1} E^{T} R^{-1}
\begin{bmatrix} \Delta OD\big|_{\lambda_1} \\ \Delta OD\big|_{\lambda_2} \\ \vdots \\ \Delta OD\big|_{\lambda_n} \end{bmatrix} \Big/ (DPF \cdot l),
\qquad
E = \begin{bmatrix} \varepsilon_1(\lambda_1) & \varepsilon_2(\lambda_1) \\ \vdots & \vdots \\ \varepsilon_1(\lambda_n) & \varepsilon_2(\lambda_n) \end{bmatrix}
\tag{5.3}
$$

where the matrix E is the extinction coefficient matrix and R is defined as the a priori estimate of the covariance of the measurement error. The change in total hemoglobin concentration ΔHbT(μM) is defined as the sum of ΔHbO2 and ΔHbR.
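As a numerical illustration of Eq. (5.3), the following minimal sketch computes the chromophore concentration changes from optical density changes at two wavelengths by regularized least squares. All numerical values (the extinction coefficients, DPF, source-detector distance and the identity error covariance R) are made-up assumptions for demonstration and are not taken from the fNIRS literature.

    # Hypothetical sketch of the modified Beer-Lambert inversion in Eq. (5.3):
    # recover (Delta HbO2, Delta HbR) from optical density changes at n
    # wavelengths. All numerical values are illustrative only.
    import numpy as np

    E = np.array([[0.6, 1.5],    # extinction coefficients [HbO2, HbR] at lambda_1
                  [1.2, 0.8]])   # ... at lambda_2 (made-up numbers)
    R = np.eye(2)                # error covariance, assumed identity here
    dpf, l = 6.0, 30.0           # differential pathlength factor and
                                 # source-detector distance in mm

    def hb_changes(delta_od):
        """Regularized least-squares solution of Eq. (5.3) for one time sample."""
        R_inv = np.linalg.inv(R)
        pseudo_inverse = np.linalg.inv(E.T @ R_inv @ E) @ E.T @ R_inv
        return pseudo_inverse @ delta_od / (dpf * l)

    print(hb_changes(np.array([0.01, 0.02])))  # -> [Delta HbO2, Delta HbR]

With only two wavelengths and R equal to the identity, the expression reduces to a plain matrix inversion of Eq. (5.1); the regularized form becomes useful when more than two wavelengths are measured.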

5.3 The fNIRS Instrumentation

Different fNIRS instruments have been developed; their key features, advantages and disadvantages, and the parameters measurable with each fNIRS technique are listed in Table 5.1 (Cutini et al. 2012; Marco and Valentina 2012). Three typical signal measurement techniques are currently being used for optical tissue imaging: continuous-wave (CW), time-domain (TD) and frequency-domain (FD) methods. CW fNIRS systems directly measure the intensity of light transmitted or reflected through the tissue; the light source used in CW systems generally has a constant intensity or is modulated at a low frequency (a few kHz). TD systems use short laser pulses, with a temporal spread below a nanosecond, and detect the increased spread of the pulse after passing through tissue. FD systems use an amplitude-modulated source at a high frequency (a few hundred MHz) and measure the attenuation of amplitude and the phase shift of the transmitted signal; typically, in this approach a radio-frequency oscillator drives a laser diode and provides a reference signal for the phase measurement. Among the three methods, the CW approach is relatively cheap and easy to implement, and CW setups are so far also the most used optical neuroimaging/spectroscopy systems.

The development of fNIRS instrumentation started in 1992 with a single-channel system with low temporal resolution and poor sensitivity, as shown in Fig. 5.1. In 1995, multi-channel systems (the first being a 10-channel system) were reported. The present high temporal resolution multi-channel systems, using the three different fNIRS techniques and complex data analysis systems, provide simultaneous multiple measurements and display the results in the form of a map or image over a specific cortical area or the whole brain, as displayed in Fig. 5.2. The realization of multi-channel wearable and/or wireless systems that allow fNIRS measurements even during normal daily activities gives fNIRS more potential than any other neuroimaging modality.


Table 5.1 Three typical fNIRS measurement systems

Main characteristics | Continuous wave | Frequency-domain | Time-domain
Sampling rate (Hz) | 100 | 50 | 10
Spatial resolution (cm) | 1 | 1 | 1
Penetration depth with a 4 cm source-detector distance | Low | Deep | Deep
Discrimination between cerebral and extracerebral tissue (scalp, skull, CSF) | n.a. | Feasible | Feasible
Possibility to measure deep brain structures | Feasible on newborns | Feasible on newborns | Feasible on newborns
Instrument size | Some bulky, some small | Bulky | Bulky
Instrument stabilization | n.r. | n.r. | Required
Transportability | Some easy, some feasible | Feasible | Feasible
Instrument cost | Some low, some high | Very high | Very high
Telemetry | Available | Difficult | Not easy
Measurable parameters [HbO2], [HbR], [HbT] | Yes, changes | Yes, absolute value | Yes, absolute value
Scattering and absorption coefficient and pathlength measurement | No | Yes | Yes
Tissue HbO2 saturation measurement (%) | No | Yes | Yes

CSF cerebrospinal fluid, HbR deoxy-hemoglobin, n.a. not available, n.r. not required, HbO2 oxy-hemoglobin, HbT HbO2 + HbR.

Fig. 5.1 Sketch of the development of fNIRS instrumentation from single channel with a low temporal resolution and poor sensitivity up to the multi-channel systems


Fig. 5.2 (a) A typical example of an fNIRS probe holder placed on the head of a participant (bilateral parietal lobe). In this setup, thin optical fibers (diameter 0.4 mm) convey near-infrared light to the participant’s head (note that each location comprises two optical fibers, one for each wavelength), whereas optical fiber bundles (diameter 3 mm) capture the light that is scattered through the brain tissue. (b) The ISS Imagent: http://www.iss.com/biomedical/instruments/imagent.html; (c) the Hitachi ETG 4000: http://www.hitachi-medical-systems.eu/products-and-services/optical-topography/etg-4000.html; (d) the Nirsoptix Brain Monitor: http://www.nirsoptix.com/CW6.php and (e) the NIRScout: http://www.nirx.net/imagers/nirscout-xtended

5.4 The Applications of fNIRS in Translation and Interpreting

Since the mid-1990s, most of the research work done in the neurosciences using fNIRS has focused on quantitative analysis and imaging of human and small-animal brain function. Researchers have utilized fNIRS to localize or monitor cerebral responses to different stimuli, including visual (Heekeren et al. 1997; Meek et al. 1995; Ruben et al. 1997), auditory (Sakatani et al. 1999), somatosensory (Franceschini et al. 2003), motor (Colier et al. 1999; Hirth et al. 1996; Kleinschmidt et al. 1996), language (Sato et al. 1999) and even translation stimuli (Quaresima et al. 2002). Interestingly, findings of previous work indicate that the processes involved in using the native language (L1) or the non-native language (L2) in monolingual communication are not the same as those associated with translating (Grosjean 1985; Hurtado Albir 1999; Obler 1983), suggesting that translation-specific processes cannot be directly inferred from research on, or models of, the bilingual brain. Several behavioral studies have also reported that translation training modulates accuracy and response times (RTs) in a variety of linguistic and non-linguistic tasks.


For example, Bajo et al. (2000) reported that professional interpreters perform better than interpretation students and bilinguals without translation experience in a semantic categorization task when non-typical exemplars are involved. It was also found that professional interpreters respond faster and more precisely than bilingual university students in both language and working memory tasks (Christoffels et al. 2006), and that they can even outperform foreign language teachers in terms of accuracy and RTs. In particular, a comparison of stress-induced physiological responses between interpretation students and professional interpreters revealed that the latter tend to maintain a lower and more constant pulse rate during simultaneous interpreting sessions (Kurz 2003). The differences in linguistic processing between interpreters and non-interpreters seem to develop during the early stages of formal translation training (Fabbro and Darò 1995).

An fNIRS study utilizing a 12-channel CW system to explore the hemodynamic changes in the brain regions related to translation tasks was conducted by Quaresima et al. (2002). In this study, 8 right-handed male subjects aged between 19 and 24, all students and early bilinguals in Dutch (L1) and English (L2), participated in the tests. In the experiment, the subjects were asked to perform four tasks: (1) translating from the native language (L1) into the non-native language (L2); (2) translating from the non-native language (L2) into the native language (L1); (3) translating 7 short sentences in each direction, for example, ‘I want to go shopping.’ and ‘She writes with a pencil.’; and (4) a control task composed of reading simple sentences out loud. The four tasks were randomly presented and each was performed 4 times. The NIR instrumentation utilized in this study consisted of two pulsed, synchronized instruments (OXYMON, Artinis Medical Systems, Arnhem, The Netherlands). Two wavelengths (775 and 850 nm) with a laser power of about 1 mW were used, and the light was transmitted and collected using 2-meter-long optic fiber bundles with a diameter of 4 mm (four sources and five detectors). All the sources and detectors were placed on a probe holder to keep the source–detector distance at 3.5 cm. The geometry of the probe allowed the concomitant measurement of 12 sites over a head area of about 7 cm × 7 cm. The sampling frequency of the system was 0.1 Hz. Figure 5.3 shows how the probe holder was positioned on the left lateral frontal lobe, centered around Broca’s area.

A topographic presentation of the time courses of hemodynamic changes in the left hemisphere during translation from Dutch into English of seven short visually presented sentences is provided in Fig. 5.4. It can be observed from Fig. 5.4 that the left inferior frontal cortex, including Broca’s area, showed a consistent and incremental rise in HbO2 accompanied by a smaller decrease in HbR, which indicates that Broca’s area was involved in the neural activity during the translation process. The hemodynamic pattern of decreased HbR and increased HbO2 is a representative feature of a localized increase in cerebral blood flow.


Fig. 5.3 Schematic representation of the optical probe (the white circles stand for sources; the black circles represent detectors; 12 channels). The optical probe was located on the left lateral frontal lobe and centred around Broca’s area according to the 10–20 system. (From Quaresima et al. 2002, with permission)

A delay in the response of HbR compared with that of HbO2 may also be identified. In addition, it can be observed from Fig. 5.4 that the activation was more pronounced in the inferior frontal cortex, including Broca’s area; however, the sites adjacent to Broca’s area were not uniformly activated. The findings of the study (Quaresima et al. 2002) correspond with the findings of other neuroimaging studies of translation/interpreting (e.g. Rinne et al. 2000). Compared with EEG, fMRI and PET, fNIRS technology promises higher ecological validity. The immense potential of fNIRS in translation and interpreting studies is certainly underexplored, if not untapped.


Fig. 5.4 Typical topographic presentation of the time courses of hemodynamic changes (average of responses over four 21-s blocks) during a translation task (consisting of a translation from Dutch into English of seven short visually presented sentences). The vertical lines indicate the translation period. During the rest period the subjects watched a row of crosses presented every 3 s on the screen. Here O2Hb represents HbO2 while HHb represents HbR. (From Quaresima et al. 2002, with permission)

Acknowledgment This research is supported by the SRG2013-00035-FHS and MYRG2014-00093-FHS grants from the University of Macau in Macao and FDCT grant 026/2014/A1 from the Macao Government.

References

Bajo, M. T., Padilla, F., & Padilla, P. (2000). Comprehension processes in simultaneous interpreting. In A. Chesterman, N. Gallardo San Salvador, & Y. Gambier (Eds.), Translation in context (pp. 127–142). Amsterdam: John Benjamins.
Borius, P.-Y., Giussani, C., Draper, L., & Roux, F.-E. (2012). Sentence translation in proficient bilinguals: A direct electrostimulation brain mapping. Cortex, 48(5), 614–622.


Brazy, J. E., Lewis, D. V., Mitnick, M. H., & Jöbsis-vander Vliet, F. F. (1985). Noninvasive monitoring of cerebral oxygenation in preterm infants: Preliminary observations. Pediatrics, 75(2), 217–225.
Chance, B. (1991). Optical method. Annual Review of Biophysics and Biophysical Chemistry, 20(1), 1–28.
Christoffels, I. K., de Groot, A. M. B., & Judith, F. K. (2006). Memory and language skills in simultaneous interpreters: The role of expertise and language proficiency. Journal of Memory and Language, 54(3), 324–345.
Colier, W. N., Quaresima, V., Oeseburg, B., & Ferrari, M. (1999). Human motor-cortex oxygenation changes induced by cyclic coupled movements of hand and foot. Experimental Brain Research, 129(3), 457–461.
Cutini, S., Moro, S. B., & Bisconti, S. (2012). Functional near infrared optical imaging in cognitive neuroscience: An introductory review. Journal of Near Infrared Spectroscopy, 20(1), 75–92.
Delpy, D. T., & Cope, M. (1997). Quantification in tissue near-infrared spectroscopy. Philosophical Transactions of the Royal Society of London B: Biological Sciences, 352(1354), 649–659.
Fabbro, F., & Darò, V. (1995). Delayed auditory feedback in polyglot simultaneous interpreters. Brain and Language, 48(3), 309–319.
Ferrari, M., De Marchis, C., Giannini, I., Di Nicola, A., Agostino, R., Nodari, S., & Bucci, G. (1986a). Cerebral blood volume and hemoglobin oxygen saturation monitoring in neonatal brain by near IR spectroscopy. Advances in Experimental Medicine and Biology, 200, 203–211.
Ferrari, M., Zanette, E., Giannini, I., Sideri, G., Fieschi, C., & Carpi, A. (1986b). Effects of carotid artery compression test on regional cerebral blood volume, hemoglobin oxygen saturation and cytochrome-c-oxidase redox level in cerebrovascular patients. Advances in Experimental Medicine and Biology, 200, 213–221.
Franceschini, M. A., Fantini, S., Thompson, J. H., Culver, J. P., & Boas, D. A. (2003). Hemodynamic evoked response of the sensorimotor cortex measured noninvasively with near-infrared optical imaging. Psychophysiology, 40(4), 548–560.
Grosjean, F. (1985). Polyglot aphasics and language mixing: A comment on Perecman (1984). Brain and Language, 26(2), 349–355.
Heekeren, H. R., Obrig, H., Wenzel, R., Eberle, K., Ruben, J., Villringer, K., . . . Villringer, A. (1997). Cerebral haemoglobin oxygenation during sustained visual stimulation – A near-infrared spectroscopy study. Philosophical Transactions of the Royal Society of London B: Biological Sciences, 352(1354), 743–750.
Hervais-Adelman, A., Moser-Mercer, B., Michel, C. M., & Golestani, N. (2011). The neural basis of simultaneous interpretation: A functional magnetic resonance imaging investigation of novice simultaneous interpreters. Paper presented at the 8th international symposium on bilingualism, Oslo, Norway.
Hirth, C., Obrig, H., Villringer, K., Thiel, A., Bernarding, J., Mühlnickel, W., et al. (1996). Non-invasive functional mapping of the human motor cortex using near-infrared spectroscopy. Neuroreport, 7(12), 1977–1981.
Huppert, T. J., Diamond, S. G., Franceschini, M. A., & Boas, D. A. (2009). HomER: A review of time-series analysis methods for near-infrared spectroscopy of the brain. Applied Optics, 48(10), 280–298.
Hurtado Albir, A. (1999). La competencia traductora y su adquisición. Un modelo holístico y dinámico [Translation competence and its acquisition: A holistic and dynamic model]. Perspectives, 7(2), 177–188.
Ito, H., Kanno, I., & Fukuda, H. (2005). Human cerebral circulation: Positron emission tomography studies. Annals of Nuclear Medicine, 19(2), 65–74.
Janyan, A., Popivanov, I., & Andonova, E. (2009). Concreteness effect and word cognate status: ERPs in single word translation. In K. Alter, M. Horne, M. Lindgren, M. Roll, & J. von Koss Torkildsen (Eds.), Brain talk: Discourse with and in the brain (pp. 21–30). Lund: Lunds Universitet.
Jöbsis-vander Vliet, F. F. (1977). Noninvasive, infrared monitoring of cerebral and myocardial oxygen sufficiency and circulatory parameters. Science, 198(4323), 1264–1267.


Jöbsis vander Vliet, F. F. (1999). Discovery of the near-infrared window into the body and the early development of near-infrared spectroscopy. Journal of Biomedical Optics, 4(4), 392–397.
Klein, D., Milner, B., Zatorre, R. J., Meyer, E., & Evans, A. C. (1995). The neural substrates underlying word generation: A bilingual functional-imaging study. Proceedings of the National Academy of Sciences of the United States of America, 92(7), 2899–2903.
Klein, D., Zatorre, R. J., Chen, J.-K., Milner, B., Crane, J., Belin, P., & Bouffard, M. (2006). Bilingual brain organization: A functional magnetic resonance adaptation study. Neuroimage, 31(1), 366–375.
Kleinschmidt, A., Obrig, H., Requardt, M., Merboldt, K.-D., Dirnagl, U., Villringer, A., & Frahm, J. (1996). Simultaneous recording of cerebral blood oxygenation changes during human brain activation by magnetic resonance imaging and near-infrared spectroscopy. Journal of Cerebral Blood Flow & Metabolism, 16(5), 817–826.
Kurz, I. (1995). Watching the brain at work – An exploratory study of EEG changes during simultaneous interpreting (SI). The Interpreters’ Newsletter, 6, 3–16.
Kurz, I. (2003). Physiological stress during simultaneous interpreting: A comparison of experts and novices. The Interpreters’ Newsletter, 12, 51–67.
Lehtonen, M. H., Laine, M., Niemi, J., Thomsen, T., Vorobyev, V. A., & Hugdahl, K. (2005). Brain correlates of sentence translation in Finnish–Norwegian bilinguals. Neuroreport, 16(6), 607–610.
Marco, F., & Valentina, Q. (2012). A brief review on the history of human functional near-infrared spectroscopy (fNIRS) development and fields of application. Neuroimage, 63(2), 921–935.
Meek, J. H., Elwell, C. E., Khan, M. J., Romaya, J., Wyatt, J. S., Delpy, D. T., & Zeki, S. (1995). Regional changes in cerebral haemodynamics as a result of a visual stimulus measured by near infrared spectroscopy. Proceedings of the Royal Society of London B: Biological Sciences, 261(1362), 351–356.
Obler, L. K. (1983). La neuropsychologie du bilinguisme. Langages, 18(72), 33–43.
Price, C. J., Green, D. W., & Von Studnitz, R. (1999). A functional imaging study of translation and language switching. Brain, 122(12), 2221–2235.
Quaresima, V., Ferrari, M., van der Sluijs, M. C., Menssen, J., & Colier, W. N. (2002). Lateral frontal cortex oxygenation changes during translation and language switching revealed by non-invasive near-infrared multi-point measurements. Brain Research Bulletin, 59(3), 235–243.
Reynolds, E. O., Wyatt, J. S., Azzopardi, D., Delpy, D. T., Cady, E. B., Cope, M., & Wray, S. (1988). New non-invasive methods for assessing brain oxygenation and haemodynamics. British Medical Bulletin, 44(4), 1052–1075.
Rinne, J., et al. (2000). The translating brain: Cerebral activation patterns during simultaneous interpreting. Neuroscience Letters, 294(2), 85–88.
Ruben, J., Wenzel, R., Obrig, H., Villringer, K., Bernarding, J., Hirth, C., ... Villringer, A. (1997). Haemoglobin oxygenation changes during visual stimulation in the occipital cortex. Advances in Experimental Medicine and Biology, 428, 181–187.
Sakatani, K., Chen, S., Lichty, W., Zuo, H., & Wang, Y.-P. (1999). Cerebral blood oxygenation changes induced by auditory stimulation in newborn infants measured by near infrared spectroscopy. Early Human Development, 55(3), 229–236.
Sato, H., Takeuchi, T., & Sakai, K. L. (1999). Temporal cortex activation during speech recognition: An optical topography study. Cognition, 73(3), B55–B66.
Wyatt, J. S., Cope, M., Delpy, D. T., Wray, S., & Reynolds, E. O. (1986). Quantification of cerebral oxygenation and haemodynamics in sick newborn infants by near infrared spectrophotometry. The Lancet, 2, 1063–1066.
Yodh, A., & Chance, B. (1995). Spectroscopy and imaging with diffusing light. Physics Today, 48(3), 34–41.
Yuan, Z. (2013a). Combining independent component analysis and Granger causality to investigate brain network dynamics with fNIRS measurements. Biomedical Optics Express, 4(11), 2629–2643.


Yuan, Z. (2013b). A spatiotemporal and time-frequency analysis of functional near infrared spectroscopy brain signals using independent component analysis. Journal of Biomedical Optics, 18(10), 106011.
Yuan, Z., & Ye, J. C. (2013). Fusion of fNIRS and fMRI data: Identifying when and where hemodynamic signals are changing in human brains. Frontiers in Human Neuroscience. https://doi.org/10.3389/fnhum.2013.00676
Yuan, Z., Zhang, Q., Sobel, E. S., & Jiang, H. (2010). Image-guided optical spectroscopy in diagnosis of osteoarthritis: A clinical study. Biomedical Optics Express, 1, 74–86.
Yuan, Z., Zhang, J., Wang, X., & Li, C. (2014). A systematic investigation of reflectance diffuse optical tomography using nonlinear reconstruction methods and continuous wave measurements. Biomedical Optics Express, 5(9), 3011–3022.

Chapter 6

Translation in the Brain: Preliminary Thoughts About a Brain-Imaging Study to Investigate Psychological Processes Involved in Translation

Fabio Alves, Karina S. Szpak, and Augusto Buchweitz

6.1 Introduction

Over the past decades, understanding the cognitive processes that underpin the task of translation has been an unceasing goal in the field of translation process research. In the early 1980s, the study of the psychological processes associated with translation drew primarily on think-aloud protocols, a research method in which participants verbalize their thought processes as they complete a task (Ericsson & Simon 1984). Krings (1986), Jääskeläinen (1987), Gerloff (1988), Séguinot (1989), and Bell (1991), among others, proposed models drawing primarily on the results of think-aloud protocols as a window into human thought. Conversely, Lörscher (1991) argued that the existing models of translation processes based on the verbalization of thought processes could not account for the psychological processes engaged in translation task execution. To address this issue, Lörscher carried out one of the first psycholinguistically oriented studies of translation performance. The study investigated the different stages of the translation process and the role of monitoring source/target language text segments in the unfolding of translation tasks. In recent years, the investigation of the psychological processes involved in translation has continued to bear fruit and has incorporated instruments from empirical research in experimental psychology, such as eye tracking and screen recording (Alves et al. 2009; Jakobsen & Jensen 2008; O’Brien 2006).
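The kind of online indicator such instruments yield can be made concrete with a small sketch. The snippet below flags long inter-keystroke pauses as candidate loci of cognitive effort; the log format and the two-second threshold are hypothetical choices for illustration only and do not reproduce the actual output format of any key-logging tool.

```python
# Illustrative sketch: flagging long inter-keystroke pauses as candidate loci
# of cognitive effort in a key-logged translation session. The timestamp list
# and the 2-second threshold are hypothetical, not an actual tool's format.

from typing import List, Tuple

def pause_segments(onsets_ms: List[int], threshold_ms: int = 2000) -> List[Tuple[int, int]]:
    """Return (keystroke index, pause length in ms) for pauses above threshold."""
    pauses = []
    for i in range(1, len(onsets_ms)):
        gap = onsets_ms[i] - onsets_ms[i - 1]
        if gap >= threshold_ms:
            pauses.append((i, gap))
    return pauses

# Toy key-log: keystroke onset times in milliseconds.
log = [0, 180, 350, 2900, 3050, 3200, 7400, 7550]
print(pause_segments(log))  # -> [(3, 2550), (6, 4200)]: two candidate effort loci
```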



Translation process research has also developed its own tools for the investigation of online processes, such as the key-logging software Translog (Jakobsen & Schou 1999) and the Litterae search tool (Alves & Vale 2009), a system for annotating and querying key-logged Translog segments. Recent studies by Pavlović and Jensen (2009), Hvelplund (2011), Carl and Kay (2011), Carl and Dragsted (2012), and Alves et al. (2012) have combined key-logging and eye tracking methods in an attempt to understand translation processing units on the basis of visual attention shifts between the source and the target texts. These results have provided new insights into the investigation of the processing units that require cognitive effort during translation task execution. The replication of these findings has shown that empirical research techniques are a good fit for investigating the cognitive processes involved in translation.

Recently, a new trend in empirical-experimental research using brain-imaging methods, such as positron emission tomography (PET) and functional magnetic resonance imaging (fMRI), has shed light on the biological bases of language. Drawing on the Inhibitory Control (IC) model,¹ Price et al. (1999) observed different patterns of activation in a neuroimaging study of German (L1) and English (L2) word translation and language switching. The authors considered possible modulations by the language task schema at the semantic level and at the level of word recognition and word production. The results of this study showed that translation, relative to reading, registers increased activation in areas associated with the control of action, such as the anterior cingulate and subcortical structures (putamen and head of the caudate). However, no increase in the dorsolateral prefrontal cortex was observed for translation under conditions of language switching, as the authors had expected. Results also revealed that activation in regions associated with semantic processing decreased for translation. According to Price et al. (1999), this result may indicate that the participants were able to translate single German words using a direct lexical route rather than an indirect semantic route. The authors relate this finding to the role of inhibitory processes in selecting L1 or L2 production. The data also confirm that the demands placed on articulatory output increase during translation task execution. The authors argue that this is because the response associated with input orthography must be inhibited while the response associated with the translation equivalent is activated, an activity that seems to be governed by the anterior cingulate and subcortical structures. Finally, the authors conclude by stating that attention and inhibition play an important role in translation-related processes, as the translator has to make language-related choices that involve eliminating competing semantic representations in different languages.

¹ The Inhibitory Control (IC) model proposes the notion of a functional control circuit with three basic loci of control: an executive locus (the supervisory attentional system used for establishing and maintaining goals), a locus at the level of the language task, and a locus within the bilingual lexico-semantic system itself. According to the IC model, in order to speak in one language rather than another or to translate between languages, individuals establish language task schemas. These are effectively action schemas in the domain of language and link input to, and output from, the bilingual lexico-semantic system to responses (see Green 1998a, b for more information on this topic).


Annoni et al. (2012), observing the neurocognitive aspects of translation, tested a group of translators in an fMRI block design in which subjects had to reformulate two sets of German sentences so that they could be better understood. The first set of sentences required a theory of mind analysis, whereas the second set consisted of purely logical sentences. Preliminary results showed that the translation task is associated with dense left superior temporal, inferior and dorsolateral prefrontal activation, all areas related to language and language control. Interestingly, the results of Annoni et al. (2012) revealed activation in the dorsolateral prefrontal area, a region in which deactivation was observed in the study by Price et al. (1999). This difference in activation may be due to differences in the experimental designs: one study investigated the translation processing of relatively stable word meanings, while the other combined word meanings in a grammatically constrained higher-order representation, which required systems beyond the coding and decoding of words. Apart from the methodological differences between Annoni et al. (2012) and Price et al. (1999), it is interesting to observe that both studies highlighted the importance of action control in translation-related processes.

For PACTE (2011), the act of translating entails what the group defines as translation competence. This competence comprises five sub-competences as well as psycho-physiological components, among which the strategic sub-competence plays a central role. PACTE stresses the importance of action control in translation-related processes through the strategic sub-competence, which is responsible for activating the different sub-competences and compensating for any shortcomings.

As the attentive reader may already have observed, the field of translation process research, standing at the intersection of different domains such as cognitive neuroscience and social cognition, needs to be enriched by interdisciplinary research. In trying to understand in greater detail the specific psychological processes involved in translation, the nature of action control, and how the brain supports translation processing “beyond the literal meaning”, our goal in this chapter is to present some initial thoughts about the feasibility of integrating neuroscientific and behavioral data in the investigation of the inferential nature of the translation process. In doing so, we draw on Grice’s view of pragmatics (Grice 1957), on Sperber and Wilson’s Relevance Theory, henceforth RT (Sperber & Wilson 1986/1995), on Gutt’s metarepresentational view of translation (Gutt 2005, 2006), and on the literature on the psychological processes of understanding other people’s intentions, known as theory of mind or mindreading (Baron-Cohen et al. 1985).

6.2 Theoretical Underpinnings

Bašnáková et al. (2013, p. 2572) point out that “our everyday conversations seem to be full of remarks with a meaning that critically hinges on the linguistic and social context in which they are embedded”. Take, for example, some classmates hearing their friend say, “my hands got wet during the seminar presentation”; this statement will probably make them infer that their friend felt uncomfortable and insecure while giving the presentation. And when the student asks whether she gave a good presentation and hears her teacher say, “It’s hard to give a good presentation”, she will probably infer that her talk might not have been a success after all. In interpreting the speaker’s message, listeners thus need to take into account not only the coded meaning, but also information drawn from various contextual sources, including mechanisms for contextual disambiguation and for recovering implicit meanings that the speaker meant to share in a communicative situation.

Wilson (2005) states that recognizing the speaker’s meaning amounts to recognizing the intentions behind the speaker’s communicative behavior. It is a special case of explaining an individual’s behavior in terms of attributed mental states, that is, “the communicator must intend his audience to believe that she intends them to believe a certain set of propositions” (Wilson 2005, p. 313). This distinction between coded meaning and speaker meaning is of great importance to communication and translation, since encoded information can be overruled by inferential processes that rely on information other than that directly available to the listener. On this account, in order to understand how the brain supports translation processes, it is necessary first to grasp the theoretical underpinnings and the neural machinery of speaker meaning comprehension, as presented in the following sections.

6.2.1 The Speaker Meaning Comprehension Procedure

6.2.1.1 The Inferential Act of Communication

Considering that communication consists in the sharing of thoughts with others, RT postulates that people involved in the communicative act face two major obstacles. First, thoughts are not public; consequently, they cannot be perceived by others. In order to be shared, thoughts need to be made perceptible. This perceptible evidence is called an ostensive stimulus. According to Gutt (2006), the most sophisticated ostensive stimuli available are verbal expressions (utterances, texts), but even they do not, by themselves, give direct access to thoughts. This raises the second obstacle: even the most sophisticated ostensive stimuli provide only evidence from which the thoughts themselves need to be inferred. To illustrate these obstacles, Gutt (2006) presents Hirsch’s (1987) analogy of an iceberg: “the explicit meanings of a piece of writing are the tip of an iceberg of meaning; the larger part lies below the surface of the text and is composed of the reader’s own relevant knowledge” (1987, pp. 33–34).


Hirsch, then, suggested that this additional meaning derives from “the reader’s own relevant knowledge” (1987, p. 34). Gutt (2006) explains that RT postulates that it comes from the reader’s cognitive environment, that is, all the information accessible to him/her, whether from perception, from memory, or by inference. For the comprehension of a particular utterance or text, only a subset of the cognitive environment is used. This subset, the context, is accessed in the search for relevance. For information to be experienced as relevant, it must connect (link up) with information from one’s previous knowledge. According to Gutt (2006, p. 4), “when such link-ups take place, people experience them as cognitive effects”. As stated by this framework, a communicator produces an ostensive stimulus with the intention of conveying a body of thought to his audience. From this stimulus, the audience is meant to infer not only that the communicator has such an intention but also what that intention is.

To get a better idea of this inferential act of communication, let us consider a reading incentive campaign developed in Brazil some years ago. The campaign consists of a video that starts by showing the sentence: reading should be forbidden (LER DEVIA SER PROIBIDO; see Fig. 6.1). What follows is a sequence of moral obligation sentences accompanied by images that represent revolutionary movements in world history (see Fig. 6.1). The video ends with the following assertion:

(1) Be careful! Reading may make people dangerously more human.

While watching the video, one’s cognitive environment starts to receive ostensive stimuli from which a conclusion must be inferred. In our example, the intended meaning actually contradicts the linguistically encoded meaning. This immediately raises a question: how does one figure out the intended meaning when the linguistically encoded meaning suggests something else? As postulated by RT, this brings us to the central point of communication and translation: “while linguistic coding does play an important role in verbal communication, it is not the decisive factor in the interpretation process” (Gutt 2005, p. 78). In our example, the linguistically encoded information is clearly rejected and replaced by inferential processes, which are built on additional information stored in the audience’s cognitive environment. For communication to succeed, not only must information be accessible to the audience, but it must also be used by the audience for the interpretation of the text. In the reading campaign, bearing in mind that the act of reading can open doors, break barriers, and influence people’s attitudes, the communicator expects that (a) the audience will have these pieces of information available, and (b) they will be able to figure out that they should use them for interpreting the campaign.

To identify the communicator-intended interpretation, the audience cannot simply resort to the information stored in their own cognitive environment; rather, they must be able to resort to the information the communicator would expect to share with them. In RT, this shared information is referred to as a mutual cognitive environment.


Fig. 6.1 Source: Reading campaign. Images taken from YouTube (accessed 9 December 2015)

Whoever watches the reading campaign video needs to bear in mind that, in Brazilian culture, this genre of advertisement is governed by semiotic resources (language, images, irony, among others) deployed so that the message evokes a certain reaction in the audience. Hence, the viewer is not entitled to treat the advertisement as a straightforward assertion. S/he knows that the producers do believe that reading can change and improve people’s lives. Therefore, to understand the campaign, s/he needs to adopt the beliefs assumed by the producers. The higher the resemblance between the cognitive environments of the video producers and the target audience, the higher the resemblance between the intended information and the information that can be communicated to them. Conversely, the less the cognitive environments resemble each other, the less likely it is that the intended interpretation can be communicated to them.


In this second situation, in which the communicator and the audience do not share a mutual cognitive environment, Gutt (2005) states that for communication to succeed, metarepresentation is needed. According to the author, in order to assess their mutual cognitive environment, communicator and audience must be able to represent each other’s thoughts, i.e., they must be able to read each other’s minds. Gutt (2005, p. 80) argues that “thinking about what someone else is thinking is an instance of metarepresentation”. The author adds that “people are able to think of or represent in their minds states of affairs in the world, as well as they are capable of thinking how other people represent those states of affairs in their minds – even if their own thoughts are different”.

This parallel between the mindreading ability and inferential communication can be explained by both the cognitive and the communicative principles of relevance. According to Wilson (2005, p. 306), understanding an utterance involves “constructing a hypothesis about the speaker’s meaning, the set of propositions, some explicit, others implicit, that the speaker overtly intended to convey – this is essentially an exercise in mind-reading”. As previously mentioned, Sperber and Wilson (1986/1995) explain that relevance is characterized as a property of inputs to cognitive processes and can be analyzed in terms of cognitive effects and processing effort. The Cognitive Principle of Relevance states that human cognition tends to be geared towards the maximization of relevance; that is, it involves a cost-benefit notion, the cost being the mental effort required and the benefits the cognitive effects achieved. Basically, the hearer takes the linguistically decoded meaning following a path of least effort; in doing so, he uses available contextual information to enrich it at the explicit level and complete it at the implicit level until the resulting interpretation meets his expectation of relevance, at which point he stops.

Wilson (2005, p. 315) points out that this cognitive tendency to maximize relevance makes it possible, at least to some extent, to predict and manipulate the mental states of others. In other words, knowing that the hearer is likely to select the most relevant stimuli in his environment and process them so as to maximize their relevance, the speaker may be able to produce a stimulus which is likely to attract the hearer’s attention and point him towards an intended conclusion. This intended conclusion involves not only an informative but also a communicative intention. Drawing on Grice’s view of pragmatics, Wilson (2005) stresses that although the speaker does intend to affect the hearer’s thoughts in a certain way, if he does not offer evidence of that intention, presumptions of relevance will not be generated, i.e., the informative intention will not be relevant enough to be worth processing. On this account, Wilson (2005, p. 318) presents Sperber’s (1994) theoretical framework, in which the relation between the pragmatic ability and the mindreading ability is investigated and explained through three increasingly sophisticated strategies which the hearer might use in interpreting an utterance, each requiring an extra order of mindreading.


The simplest strategy is that of the Naïve Optimistic hearer, who need not represent the speaker’s mental states at all in identifying the speaker’s meaning: he simply takes the first interpretation that seems relevant enough and treats it as the intended one (what the speaker says is what he means). Sperber adds that a more complex strategy, which requires an extra layer of the mindreading ability, is that of the Cautious Optimistic hearer. Instead of taking the first interpretation, he considers what interpretation the speaker might have thought would be relevant enough (the hearer thinks that the speaker thinks that). The third strategy demands a further layer of mindreading and is presented by the author as that of a hearer who uses a Sophisticated Understanding ability. This hearer considers what interpretation the speaker might have thought he would think was relevant enough (the hearer thinks that the speaker thinks that he thinks that). According to Gutt (2005), this further layer of the mindreading ability is necessary when translators are metarepresenting the author’s informative intentions to a target audience (for a discussion of mindreading and translation metarepresentation, see Sect. 6.2.2).
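For readers who find procedural formulations helpful, the relevance-guided comprehension heuristic described above can be sketched as a simple routine. The following toy model is an illustration under invented assumptions only: the numeric “effort” and “effects” scores and the candidate readings are hypothetical, and RT itself makes no claim that comprehension reduces to anything this simple.

```python
# Toy model of the relevance-guided comprehension heuristic: consider candidate
# interpretations in order of increasing processing effort and stop at the first
# one whose cognitive effects satisfy the hearer's expectation of relevance.
# Scores and candidates are invented for illustration.

def interpret(candidates, expected_relevance):
    # candidates: tuples of (interpretation, effort, cognitive_effects)
    for reading, effort, effects in sorted(candidates, key=lambda c: c[1]):
        if effects >= expected_relevance:
            return reading  # path of least effort: first sufficiently relevant reading
    return None  # no interpretation met the expectation; communication fails

teacher_reply = [
    # literal, generic reading: cheap to derive but yields few cognitive effects
    ("giving good presentations is hard for everyone", 1, 1),
    # implicated reading: costlier, but links up with the student's question
    ("your presentation was not a success", 2, 4),
]
print(interpret(teacher_reply, expected_relevance=3))
```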

6.2.1.2 The Neuro-Machinery of the Inferential Act of Communication

We have been discussing how speaker meaning comprehension requires the ability to infer and adopt the mental states of others. This ability is known in cognitive science as mindreading or theory of mind (Saxe & Kanwisher 2003). According to Leslie (2000, p. 1235), the term theory of mind was “coined by Premack and Woodruff (1978) to refer to our ability to explain, predict, and interpret behavior in terms of mental states, like wanting, believing, and pretending”. The author states that this ability is probably based on a specialized representational system and is evident even in young children. Although the ability to think about the mental states of others is described as a core component of translation competence, research into the translator’s capacity for representing other minds is scarce. Still, over the past decade, consistent observations in cognitive neuroscience have demonstrated that the mindreading ability engages a set of brain regions that includes the medial prefrontal cortex, the inferior parietal lobe, especially the temporo-parietal junction, and the precuneus (for reviews, see Bašnáková et al. 2013; Blakemore & Frith 2004; Gallagher & Frith 2003; Mitchell et al. 2006; Mitchell 2007; Saxe & Kanwisher 2003). Many of these findings come from standard mindreading tests which evaluate the participants’ ability to keep track of the beliefs of others, such as the false-belief task, the visual perspective-taking task, and the identity task (Arora et al. 2015; Leslie 1987; Leslie & Polizzi 1998; Leslie & Thaiss 1992; Roth & Leslie 1998). These studies suggest that mindreading cannot be understood as a conscious, reflective process, but as one dependent on a dedicated inferential mechanism. In fact, given the complexity of mindreading and the variety of tasks it has to perform, Wilson (2005) suggests that it is reasonable to assume that this ability is not a single, relatively homogeneous system, but a collection of autonomous mechanisms articulated together in some way.


According to RT, this complex mechanism “tends to automatically pick out potentially relevant stimuli and inferential mechanisms tend spontaneously to process them in the most productive way” (Wilson 2005, p. 315). In this regard, Bašnáková et al. (2013, p. 2573) state that “in essence, listeners presume that speakers tailor their utterances to be relevant enough for the present communicative situation, and any obvious departures from this relevance send the listener looking for hidden meanings”.

On this matter, Carston (2004, p. 3) makes clear that there are two distinctions involved in utterance understanding. The first is the distinction between linguistically decoded meaning and pragmatically inferred meaning. The second concerns the two kinds of assumption communicated by a speaker: explicature and implicature. In short, these distinctions can be considered correlates of the Gricean distinction between “what is said” and “what is implicated”. Nevertheless, Alves (2010, p. 81) explains that, contrary to the semantic conception of what is said, the concept of explicature entails both a component of meaning which is linguistically decoded and a component of meaning which is pragmatically derived, resulting, for instance, from processes of utterance disambiguation, from reference assignment, and/or from the free enrichment of the constitutive elements of an utterance. In an attempt to establish a distinction between implicatures and explicatures, Carston (2000, p. 9) defines an explicature as a propositional form communicated by an utterance, which is pragmatically construed from a linguistically encoded logical form. The contents of an explicature entail both linguistically decoded and pragmatically inferred material. An implicature, on the other hand, is understood “as any other propositional form communicated by an utterance whose contents consist solely of pragmatically inferred material” (2000, p. 9). Let us consider a simple example adapted from Carston (2004):

(2) X: How did Mary react when she saw her daughter moving out?
    Y: When her daughter said good-bye, her eyes watered.

Suppose that, in the particular context, X takes Y to have communicated the following assumptions:

(3) a. When Mary heard her daughter saying good-bye, she realized they would be far from each other and, as a result, Mary cried.
    b. Mary was sad.

On the basis of the distinctions presented by Carston, (3a) can be understood as an explicature of Y’s utterance and (3b) as an implicature. The author explains that “the decoded logical form of Y’s utterance, more or less visible in (3a), has been taken as a template for the development of a propositional form, while (3b) is an independent assumption, inferred as a whole from (3a)” (2004, p. 5). In other words, an explicature is a junction of decoded linguistic meaning and pragmatically inferred meaning, while an implicature is supplied wholly by pragmatic inference.

As already mentioned, in understanding Y’s communicative intentions, X needs to construct a hypothesis about the communicator’s meaning.


According to Wilson (2005), this processing is directly related to the ability to explain an individual’s behavior in terms of attributed mental states. Depending on the communicative context, hearers will, in interpreting an utterance, deploy different layers of mindreading (Sperber 1994), an ability dependent on complex inferential mechanisms (Bašnáková et al. 2013; Wilson 2005).

6.2.2 Inferential Act of Communication and Translation

Building on the preceding discussion of coded meaning and speaker meaning, and of explicatures and implicatures, we intend to reflect further on the role played by inferential processing in performing a translation task. According to Gutt (2000), a translation consists primarily in establishing an interpretive resemblance between correlated passages in the source and target texts. To achieve that goal, Alves (2010) states that translators must process encoded items as well as the communicative cues related to them. As claimed by Gutt (2005), translation brings into contact people with different cognitive environments, and metarepresentation therefore plays an important role in an ideal translation configuration, in which communicator, translator, and audience all share a mutual cognitive environment, that is, their thoughts, beliefs, desires, etc. If the translator’s efforts are to succeed, s/he cannot simply use his/her own cognitive environment to understand the intentions of the communicator; rather, the translator has to metarepresent, for the target audience, the cognitive environment shared between the communicator and the original audience. At this point, a two-way metarepresentation is required.

Going back to Sperber’s (1994) increasingly sophisticated strategies for interpreting an utterance, in order to make the target audience comprehend the author’s communicative intentions, translators need to deploy extra layers of the mindreading ability. For instance, when representing the mutual cognitive environment shared between the author and the original audience, the translator considers what interpretation the author might have thought would be relevant enough for his audience (the translator thinks that the author thinks that his/her audience thinks that). Nevertheless, the whole point of translation is to communicate with a target audience, who in turn may have a cognitive environment different from that of the original audience. At this point, the translator needs to engage a further layer of the mindreading ability: s/he needs to be aware of what interpretations the target audience might have thought s/he would think were relevant enough, based on the mutual cognitive environment shared between the author and the original audience (the translator thinks that the target audience thinks that s/he thinks that the author thinks that the original audience thinks that).

As we can see, the most influential theoretical accounts of speaker meaning interpretation and translation processing highlight the inferential nature of such processes (Grice 1957; Gutt 2000, 2005; Sperber 1994; Sperber & Wilson 1986/1995; Wilson 2005).


Building on the discussions presented in this chapter, we believe that translation processing requires deductions stemming from a cognitive context, from propositions derived from the meaning of the utterance, as well as from inferentially driven processes. At the neurobiological level, in consonance with the view that cognitive processes are best thought of as a combination of disparate mental processes, we believe that the inferential nature of translation processing might recruit some of the regions typically involved in tasks of reasoning about the mental states of others, such as the medial frontal/prefrontal cortex, the bilateral inferior parietal lobe, and the precuneus (Amodio & Frith 2006; Bašnáková et al. 2013; Mitchell et al. 2006; Saxe & Kanwisher 2003). To investigate this assumption, it was necessary to develop an experimental paradigm in which the translator has to infer the speaker’s informative intention by relying not only on the linguistic signal, as in Price et al. (1999), but also on the wider discourse and social context in which the utterance serves its communicative purpose. Accordingly, we present in the next section a proposal for an experimental design which aims to investigate the inferential mechanisms involved in translation processing at the behavioral and neurophysiological levels. We have developed two sets of experimental stimuli: one involving the processing of translation beyond the literal code (the translation of idioms), and another in which the speaker’s meaning critically depends on the particular context the utterances are embedded in (the translation of passages dependent on a relevant context).

6.3 Experimental Paradigms

6.3.1 Participants

Forty native speakers of Portuguese with English as a second language are going to participate in the experiment: twenty professional translators and twenty translation students. All participants should be right-handed and have no history of neurological impairment or head injury. They are going to be instructed either to read or to translate silently two sets of experimental stimuli, one at the sentence level and one at the supra-sentential level. Participants are also going to go through a training session before performing each task.

6.3.2 Experimental Stimulus I

We have created 24 experimental items, 12 targets and 12 controls. There are two experimental conditions in the study (A and B).


Stimulus I sentences are complex clauses, i.e., there is a logico-semantic relation between them, which in this case is one of dependency/dominance. RT postulates that this relation can be construed through pieces of information, or communicative cues, necessary for the correct interpretation of the sentence. The relation between the communicative cues of the clauses generates an implicature in condition (A) and an explicature in condition (B). To illustrate, consider an example from experimental stimulus I:

(4) (A) When she got her first job, she was paid peanuts.
    (B) When she did her internship, she was paid a low salary.²

² Sentence size ranges from 49 to 55 characters (W = 0.973; p = 0.64).

The process of comprehending condition (A) predicts that the hearer will (a) recognize the string (the noun “peanuts” preceded by the verb “pay”); (b) retrieve the conceptual representation it encodes (it is processed holistically and differs from its original meaning); and (c) add some of its accompanying information to the context in order to derive the set of implications that is intended (the relation between the communicative cues getting the first job and being paid peanuts generates an implicature that the girl earned a low salary). Decoding the utterance in condition (A) automatically triggers in the hearer’s mind a presumption which will guide the hearer in bridging the gap between what is linguistically encoded and what is communicated. In other words, the speaker’s message is implicit and a pragmatic inference is necessary to recover it. In condition (B), this inferential comprehension procedure is not necessary, since the initial clause sets up a time framework in connection with an activity, and the relation between the communicative cues doing an internship and being paid a low salary generates an explicature in which the speaker’s meaning is explicitly stated in the critical utterance.

The structure of each item is as follows: utterances start with an initial adverbial clause of time (AdvCT) in a thematic position, which establishes the point of departure for understanding the second (main) clause, which contains an idiom (Id) in condition (A) and its literal meaning (LM) in condition (B):

(5) (A) When she fell behind with her schoolwork (AdvCT), she hit the books (Id).
    (B) When she failed in her Biology test (AdvCT), she studied harder (LM).
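As an aside for readers interested in how such matching statistics might be computed, the sketch below shows one plausible way to run the length check reported in the footnote, on the assumption that W is a Shapiro-Wilk statistic; the character counts are invented placeholders, not the actual experimental items.

```python
# Hypothetical check that item lengths are well matched, as in the footnote.
# We assume W comes from a Shapiro-Wilk test; the lengths below are invented.

from scipy import stats

item_lengths = [49, 50, 51, 51, 52, 52, 53, 53, 54, 54, 55, 55]  # characters
w, p = stats.shapiro(item_lengths)
print(f"W = {w:.3f}, p = {p:.2f}")  # non-significant p: no departure from normality

# A complementary check: condition (A) vs. (B) lengths should not differ.
lengths_a = [49, 51, 52, 53, 54, 55]
lengths_b = [50, 51, 52, 53, 54, 55]
t, p_diff = stats.ttest_ind(lengths_a, lengths_b)
print(f"t = {t:.2f}, p = {p_diff:.2f}")
```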

6.3.3 Experimental Stimulus II

The goal of experimental stimulus II is to propose stimuli in which the ability to infer and understand the beliefs, desires, and intentions of others plays a central role. In order to set up this scenario, we draw on Mason and Just (2009), who postulate that the neural substrate of mindreading is associated with a putative protagonist perspective network: in the comprehension of narratives, the reader has to attribute thoughts, goals, and intentions to a character.



This attribution triggers activation of a specific circuit of brain areas that are also associated with the mindreading ability. Accordingly, critical sentences in experimental stimulus II are contextualized with three communicative cues, which create a narrative setting. The contextual information within the clauses is inferred from a co-referential relation between personal pronouns (she/her), as can be seen in example (6). Unlike stimulus I, whose dominant clauses present two distinct propositional forms of the same communicated message (paid peanuts and paid a low salary), the critical main clauses of stimulus II are repeated in conditions (A) and (B). Because the propositional form is the same in both conditions, the role of guiding how the communicative intentions of the agent should be processed rests on the encoding of the narrative setting, i.e., through implicatures (condition A) or through explicatures (condition B):

(6) (A) The 48-year-old actress, who hasn’t performed on stage for a long time, admitted to the reporters that when she got on the stage, her hands got wet.
    (B) The 10-year-old tourist, who has never seen snow before, looked surprised and asked her mother why when she grabbed the ice, her hands got wet.³

³ Sentence size ranges from 109 to 117 characters (W = 0.96; p = 0.31).

The passage in condition (A) invites the inference that the actress was anxious. This inference is based on the communicative cue that she has not performed on stage for a while. When these pieces of information are combined with the critical sentence, highlighted in bold, the relation between getting on the stage and hands getting wet generates an implicature in which a feeling of anxiety is shared. In condition (B), on the other hand, this inferential processing is not necessary, since the relation between the communicative cues restricts and guides the processing of the critical sentence: the relation between playing with snow and hands getting wet generates an explicature in which a physical effect is caused by an activity.

As in stimulus I, 24 experimental items were created, 12 targets (condition A) and 12 controls (condition B). The structure of each item is as follows: experimental stimulus II starts with a nominal group (NG) containing an embedded qualifier (QL), followed by a verbal process (VP) that introduces the critical sentence. The critical sentence, in turn, as in stimulus I, starts with an adverbial clause of time in thematic position, which establishes the point of departure for understanding the second (main) clause:

(7) (A) Martha’s father (NG), who has never been far from his children (QL), confessed to his wife that (VP) when his daughter said good-bye (AdvCT), his eyes watered.
    (B) Martha’s father (NG), who has never been seen crying (QL), confessed to his wife that (VP) when he chopped the dinner onions (AdvCT), his eyes watered.


6.3.4 Procedures

Experimental stimuli I and II will be presented in both fMRI and eye-tracking environments, as displayed in Fig. 6.2. Stimuli will be presented in a counterbalanced order, and participants will receive written instructions before the experimental sessions. We estimate that sessions will last approximately 40 minutes. Before the start of each trial, an inter-stimulus interval, represented by a fixation cross, will be presented for 6 s. Utterances will be presented in an event-related design and will stay on the screen until the participant presses a button with their right hand. Stimulus conditions were matched as closely as possible on the following characteristics: length of each utterance (in characters), length of the preceding context (in characters), and lexical frequencies of the content words based on frequency counts from the British National Corpus (http://www.natcorp.ox.ac.uk). Through this experimental design proposal, we expect to answer some of the questions that remain open regarding the neurocognitive aspects of the inferential nature of the translation process, as well as to take the first steps towards a fruitful interdisciplinary dialog between translation studies and cognitive neuroscience.
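To make the trial structure concrete, here is a minimal sketch of how such a session might be scripted, assuming PsychoPy; the stimulus texts, window settings, and response handling are placeholder choices, and a real fMRI session would additionally require scanner trigger synchronization and proper logging.

```python
# Minimal sketch of the trial structure described above: a 6 s fixation cross,
# then an utterance that stays on screen until a button press. Placeholder
# stimuli and settings; not the actual experimental script.

from psychopy import visual, core, event

win = visual.Window(size=(1024, 768), color="black")
fixation = visual.TextStim(win, text="+", height=0.1)

sentences = [
    "When she got her first job, she was paid peanuts.",
    "When she did her internship, she was paid a low salary.",
]  # in the real design, item order is counterbalanced across participants

for sentence in sentences:
    fixation.draw()
    win.flip()
    core.wait(6.0)  # inter-stimulus interval with fixation cross

    utterance = visual.TextStim(win, text=sentence, wrapWidth=1.5)
    utterance.draw()
    win.flip()
    clock = core.Clock()
    keys = event.waitKeys()      # utterance remains until the participant responds
    rt = clock.getTime()         # response latency in seconds
    print(f"RT: {rt:.3f} s")     # a real script would log this per trial

win.close()
core.quit()
```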

6.3.5 Expectations

For the behavioral data, we believe that translating the author’s informative cues will require the ability to attribute representational states to others in condition (A) and logical-semantic processing in condition (B). Accordingly, we expect longer reaction times and longer fixation durations for condition (A) in both experimental stimuli. For the neurophysiological data, we believe that translating the author’s informative cues will most likely recruit some of the regions typically involved in tasks of reasoning about the mental states of others, such as the inferior parietal lobe, the medial prefrontal cortex, and the precuneus (Arora et al. 2015). Regarding the act of translating, we expect the engagement of regions that have been implicated in the general control of action, in particular the anterior cingulate and the subcortical areas (putamen and head of the caudate), as observed by Klein et al. (1995) and Price et al. (1999).
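As a sketch of how the behavioral expectation could eventually be tested, the snippet below compares hypothetical per-item reaction times across conditions with a paired t-test; the numbers are invented, and a real analysis would more plausibly use mixed-effects models over participants and items.

```python
# Illustrative comparison of reaction times for condition (A) vs. (B).
# Invented numbers; shown only to make the predicted pattern concrete.

from scipy import stats

rt_a = [3.9, 4.2, 4.8, 4.1, 4.5, 4.4]  # seconds, implicature items (A)
rt_b = [3.1, 3.4, 3.9, 3.2, 3.6, 3.5]  # seconds, explicature items (B)

t, p = stats.ttest_rel(rt_a, rt_b)
print(f"t = {t:.2f}, p = {p:.3f}")  # condition (A) is expected to be slower
```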

6.4 Concluding Remarks

The arguments presented in this chapter point out that translation, once embedded within cognitive science, embraces a wide variety of projects, methods, and approaches, which have evolved from small-scale research in the 1980s to today’s initiatives, which often comprise multidisciplinary teams of researchers.


Fig. 6.2 Schematic representation of stimulus I (on the left) and stimulus II (on the right) paradigms

According to Hurtado Albir et al. (2015), the theoretical frameworks embraced by different groups reflect the interdisciplinary character of translation studies, in which theories, concepts, and methods have also been inspired by such diverse areas as cognitive psychology, psycholinguistics, and cognitive linguistics. Despite this multidisciplinary scenario, the neuroscience of translation, according to Tymoczko (2005), remains to this day one of the major known unknowns of translation studies. The author claims that future research in translation studies is closely related to neuroscience:

Perhaps the most radically new and illuminating research in the coming decades will result from the investigation of translation by neurophysiologists. At present, the activity of individual translators continues to be opaque to scholars. Some clues are garnered by tracking the working choices of the translators with computers that remember and time all work; other research attempts to open up the process by looking at translators’ journals or recording their think-aloud protocols. But all these methods are primitive at best in indicating what actually occurs in the brain as translators move between languages. (...) The immensely powerful, interesting, and important areas of research opening up in the near future will radically change the way translation is thought about and approached. They will also radically change the structure of research in translation studies. (...) The locus of research will move from individuals to groups, and research teams will evolve that bring together translation scholars, cognitive scientists, literacy and language experts, and neurophysiologists. (Tymoczko 2005, pp. 1092–1093)

In taking the first steps towards an interdisciplinary investigation of the inferential nature of the translation process, we are aware that the steps we aim to take may be ambitious in nature; yet it is precisely such ambition that kindles researchers’ fire to take steps, small and large, towards the unknowns of the translator’s brain.

Acknowledgement Research funded by CNPq, the Brazilian Research Council, grant no. 308892/2015-1, and FAPEMIG, the Agency for Research Support of the State of Minas Gerais, grant no. PPM-00696-16.


References

Alves, F. (2010). Explicitness and explicitation in translation: A relevance-theoretic approach. In J. C. Costa & F. José Rauen (Eds.), Topics on relevance theory (pp. 77–97). Porto Alegre: ediPUCRS.
Alves, F., & Vale, D. (2009). Probing the unit of translation in time: Aspects of the design and development of a web application for storing, annotating, and querying translation process data. Across Languages and Cultures, 10(2), 251–273.
Alves, F., Pagano, A., & da Silva, I. (2009). A new window on translators’ cognitive activity: Methodological issues in the combined use of eye tracking, key logging and retrospective protocols. In I. Mees, F. Alves, & S. Göpferich (Eds.), Methodology, technology and innovation in translation process research: A tribute to Arnt Lykke Jakobsen (pp. 267–291). Copenhagen: Samfundslitteratur.
Alves, F., Gonçalves, J. L., & Szpak, K. S. (2012). Identifying instances of processing effort in translation through heat maps: An eye-tracking study using multiple input sources. In M. Carl, P. Bhattacharyya, & K. K. Choudhary (Eds.), Proceedings of the first workshop on eye tracking and natural language processing (pp. 5–20).
Amodio, D. M., & Frith, C. D. (2006). Meeting of minds: The medial frontal cortex and social cognition. Nature Reviews Neuroscience, 7(4), 268–277.
Annoni, J. M., Lee-Jahnke, H., & Sturm, A. (2012). Neurocognitive aspects of translation. Meta, 57(1), 96–107.
Arora, A., Weiss, B., Schurz, M., Aichhorn, M., Wieshofer, R. C., & Perner, J. (2015). Left inferior parietal lobe activity in perspective tasks: Identity statements. Frontiers in Human Neuroscience, 9(360), 1–17.
Baron-Cohen, S., Leslie, A. M., & Frith, U. (1985). Does the autistic child have a “theory of mind”? Cognition, 21(1), 37–46.
Bašnáková, J., Weber, K., Petersson, K. M., van Berkum, J., & Hagoort, P. (2013). Beyond the language given: The neural correlates of inferring speaker meaning. Cerebral Cortex, 24(10), 2572–2578.
Bell, R. T. (1991). Translation and translating: Theory and practice. London: Longman.
Binder, J. R., & Desai, R. H. (2011). The neurobiology of semantic memory. Trends in Cognitive Sciences, 15(11), 526–537.
Blakemore, S. J., & Frith, U. (2004). How does the brain deal with the social world? Neuroreport, 15(1), 119–128.
Carl, M., & Dragsted, B. (2012). Inside the monitor model: Processes of default and challenged translation production. Translation: Computation, Corpora, Cognition. Special Issue on the Crossroads between Contrastive Linguistics, Translation Studies and Machine Translation, 2(1), 127–145.
Carl, M., & Kay, M. (2011). Gazing and typing activities during translation: A comparative study of translation units of professional and student translators. Meta, 56(4), 952–975.
Carston, R. (2000). Explicature and semantics. In S. Davis & B. Gillon (Eds.), Semantics: A reader (pp. 817–845). Oxford: Oxford University Press.
Carston, R. (2004). Relevance theory and the saying/implicating distinction. In L. Horn & G. Ward (Eds.), The handbook of pragmatics (pp. 633–656). Oxford: Blackwell.
Dalgleish, T. (2004). The emotional brain. Nature Reviews Neuroscience, 5(7), 583–589.
Ericsson, K. A., & Simon, H. A. (1984). Protocol analysis: Verbal reports as data. Cambridge: MIT Press.
Fan, Y., Duncan, N. W., de Greck, M., & Northoff, G. (2011). Is there a core neural network in empathy? An fMRI based quantitative meta-analysis. Neuroscience & Biobehavioral Reviews, 35(3), 903–911.
Gallagher, H. L., & Frith, C. D. (2003). Functional imaging of “theory of mind”. Trends in Cognitive Sciences, 7(2), 77–83.


Gerloff, P. A. (1988). From French to English: A look at the translation process in students, bilinguals, and professional translators. PhD thesis, Harvard University, Cambridge, MA.
Green, D. (1998a). Mental control of the bilingual lexico-semantic system. Bilingualism, 1, 67–81.
Green, D. (1998b). Schemas, tags and inhibition. Reply to commentators. Bilingualism, 1, 100–104.
Grice, H. (1957). Meaning. Philosophical Review, 66(3), 377–388.
Gutt, E. A. (2000). Translation and relevance: Cognition and context (2nd ed.). Manchester: St. Jerome.
Gutt, E. A. (2005). Challenges of metarepresentation to translation competence. In P. A. Schmitt & G. Wotjak (Eds.), Translationskompetenz. Tagungsberichte der LICTRA (Leipzig International Conference on Translation Studies) (pp. 77–89). Tübingen: Stauffenburg.
Gutt, E. A. (2006). Teoria da Relevância e Tradução: A busca de um novo realismo para a tradução da Bíblia. In F. Alves & J. L. Gonçalves (Eds.), Relevância em Tradução: Perspectivas teóricas e aplicadas (pp. 35–55). Belo Horizonte: Faculdade de Letras UFMG.
Hirsch, E. D. (1987). Cultural literacy: What every American needs to know. Boston: Houghton Mifflin.
Hurtado Albir, A., Alves, F., Englund Dimitrova, B., & Lacruz, I. (2015). A retrospective and prospective view of translation research from an empirical, experimental, and cognitive perspective: The TREC network. Translation & Interpreting, 7(1), 5–21.
Hvelplund, K. T. (2011). Allocation of cognitive resources in translation: An eye-tracking and key-logging study. PhD thesis, Copenhagen Business School, Frederiksberg, Denmark.
Jääskeläinen, R. (1987). What happens in a translation process: Think-aloud protocols of translation. Unpublished MA thesis, University of Joensuu, Savonlinna.
Jakobsen, A. L., & Jensen, K. T. H. (2008). Eye movement behaviour across four different types of reading task. In S. Göpferich, A. L. Jakobsen, & I. M. Mees (Eds.), Looking at eyes: Eye-tracking studies of reading and translation processing (pp. 103–124). Copenhagen: Samfundslitteratur.
Jakobsen, A. L., & Schou, L. (1999). Translog documentation, version 1.0. In G. Hansen (Ed.), Probing the process of translation: Methods and results (pp. 1–36). Copenhagen: Samfundslitteratur.
Klein, D., Milner, B., Zatorre, R. J., Meyer, E., & Evans, A. C. (1995). The neural substrates underlying word generation: A bilingual functional-imaging study. Proceedings of the National Academy of Sciences of the United States of America, 92(7), 2899–2903.
Krings, H. P. (1986). Translation problems and translation strategies of advanced German learners of French (L2). In J. House & S. Blum-Kulka (Eds.), Interlingual and intercultural communication (pp. 263–276). Tübingen: Gunter Narr.
Leslie, A. M. (1987). Pretense and representation: The origins of “theory of mind”. Psychological Review, 94(4), 412–426.
Leslie, A. M. (2000). “Theory of mind” as a mechanism of selective attention. In M. S. Gazzaniga (Ed.), The new cognitive neurosciences (pp. 1235–1247). Cambridge: MIT Press.
Leslie, A. M., & Polizzi, P. (1998). Inhibitory processing in the false belief task: Two conjectures. Developmental Science, 1(2), 247–258.
Leslie, A. M., & Thaiss, L. (1992). Domain specificity in conceptual development: Neuropsychological evidence from autism. Cognition, 43(3), 225–251.
Leslie, A. M., Friedman, O., & German, T. P. (2004). Core mechanisms in ‘theory of mind’. Trends in Cognitive Sciences, 8(12), 528–533.
Lörscher, W. (1991). Translation performance, translation process, and translation strategies: A psycholinguistic investigation. Tübingen: Gunter Narr.
Mason, R. A., & Just, M. A. (2009). The role of the theory-of-mind cortical network in the comprehension of narratives. Language and Linguistics Compass, 3(1), 157–174.
Mitchell, J. P. (2007). Activity in right temporo-parietal junction is not selective for theory-of-mind. Cerebral Cortex, 18(2), 262–271.
Mitchell, J. P., Macrae, C. N., & Banaji, M. R. (2006). Dissociable medial prefrontal contributions to judgments of similar and dissimilar others. Neuron, 50(4), 655–663.


O’Brien, S. (2006). Pauses as indicators of cognitive effort in post-editing machine translation output. Across Languages and Cultures, 7(1), 1–21.
PACTE. (2011). Results of the validation of the PACTE translation competence model: Translation project and dynamic translation index. In S. O’Brien (Ed.), Cognitive explorations of translation (pp. 30–53). London: Bloomsbury.
Pavlović, N., & Jensen, K. (2009). Eye tracking translation directionality. In A. Pym & A. Perekrestenko (Eds.), Translation research projects 2 (pp. 93–109). Tarragona: Intercultural Studies Group.
Premack, D., & Woodruff, G. (1978). Does the chimpanzee have a theory of mind? Behavioral and Brain Sciences, 1(4), 515–526.
Price, C. J., Green, D. W., & Von Studnitz, R. (1999). A functional imaging study of translation and language switching. Brain, 122(12), 2221–2235.
Roth, D., & Leslie, A. M. (1998). Solving belief problems: Toward a task analysis. Cognition, 66(1), 1–31.
Saxe, R., & Kanwisher, N. (2003). People thinking about thinking people: The role of the temporoparietal junction in “theory of mind”. Neuroimage, 19(4), 1835–1842.
Scholl, B. J., & Leslie, A. M. (1999). Modularity, development and ‘theory of mind’. Mind & Language, 14(1), 131–153.
Séguinot, C. (1989). Understanding why translators make mistakes. TTR, 2(2), 73–81.
Sperber, D. (1994). Understanding verbal understanding. In J. Khalfa (Ed.), What is intelligence? (pp. 179–198). Cambridge: Cambridge University Press.
Sperber, D., & Wilson, D. (1986). Relevance: Communication and cognition. Oxford: Blackwell.
Sperber, D., & Wilson, D. (1995). Relevance: Communication and cognition (2nd ed.). Oxford: Blackwell.
Tymoczko, M. (2005). Trajectories of research in translation studies. Meta, 50(4), 1082–1097.
Wilson, D. (2000). Metarepresentation in linguistic communication. In D. Sperber (Ed.), Metarepresentations: A multidisciplinary perspective (pp. 411–448). Oxford: Oxford University Press.
Wilson, D. (2005). New directions for research on pragmatics and modularity. Lingua, 115, 1129–1146.

Chapter 7

Measuring Difficulty in Translation and Post-editing: A Review

Sanjun Sun
School of English and International Studies, Beijing Foreign Studies University, Beijing, China

7.1 Introduction

In the last decade, contributions to cognitive and psycholinguistic approaches to translation and interpreting processes have increased steadily. Muñoz's (2014) review of advances in this field focuses on seven, albeit overlapping, topics or research areas: competence and expertise, mental load and linguistic complexity, advances in research methods, writing, revision and metacognition, recontextualized research, and cognition beyond conscious, rational thought. Of these topics, mental load, according to Muñoz (2012), is "a construct of paramount importance" (p. 172) for translation process research: it may help us unravel the complex relationships between consciousness, problem solving, automation, and expertise, and it may also establish a bridge between translation and interpreting research. It might be an overstatement to say that mental load stays at the center of this integrated view of translation process research. Nonetheless, it deserves attention and emphasis.

This article first clarifies conceptual issues by reviewing difficulty, mental workload, cognitive load, and other related terms, together with their histories and theories. Under the umbrella of cognitive science, it then reviews two lines of research, i.e., difficulty in human translation and in post-editing (PE) of machine translation. Studies concerning methods for measuring difficulty are presented and critically examined. As the author has already discussed methods for measuring difficulty in human translation elsewhere (see Sun 2015), the focus of this review is on the measurement of cognitive effort in post-editing. Two assumptions in translation difficulty research are made explicit towards the end of this article.


7.2 Difficulty and Related Terms and Disciplines

Translation process research has been advancing through interdisciplinary research. The related disciplines include, among others, cognitive science, psycholinguistics, psychology (e.g., developmental psychology, educational psychology, assessment psychology), and neuroscience. Translation difficulty also requires interdisciplinary study. According to Newell (2001, p. 15), interdisciplinary research involves determining relevant disciplines (interdisciplines, schools of thought) by looking into each discipline and seeing if there is already a literature on the topic, developing a working command of the relevant concepts, theories and methods of each discipline, generating disciplinary insights into the problem, and integrating those insights through the construction of a more comprehensive perspective.

In the search for relevant disciplines and areas, terms play an important role. Terminological variation is a common phenomenon in any language, and its causes can be related to the different origins of authors, different communicative registers, different stylistic and expressive needs, contact between languages, or different conceptualizations and motivations (Freixa 2006). Such variation poses a challenge for finding pertinent literature in other disciplines or sub-areas and for exploring the comparability of studies. This is especially the case for research on difficulty in translation and post-editing. As mentioned in Sun (2015), difficulty, from the cognitive perspective, refers to the amount of cognitive effort required to solve a problem, and translation difficulty can be viewed as the extent to which cognitive resources are consumed by a translation task for a translator to meet objective and subjective performance criteria. Terms similar to or synonymous with difficulty include mental load, mental workload, cognitive workload, workload, cognitive load, cognitive effort, mental effort, and so forth.

In psychology, the word "difficulty" often appears in the phrase "task difficulty". As early as the 1920s, psychologists (e.g., Redfield 1922) began to pay attention to workload and difficulty. Notably, Thorndike et al. (1927) focused on the measurement of difficulty and discussed various methods in their book; Woodrow (1936) compared two scaling methods for measuring difficulty; Freeman and Giese (1940) studied whether task difficulty could be measured by palmar skin resistance. In the 1950s, information-processing models became established in psychology through Miller's (1956) article "The magical number seven, plus or minus two", which suggested that the human perceptual systems are information channels with built-in limits, and Broadbent's (1958) filter model of selective attention, which proposed that humans process information with limited capacity and that an attentional filter screens out information to prevent the information-processing system from becoming overloaded (see Bermúdez 2014).

The information-processing approach has had a profound influence on psychology and the cognitive sciences, including research on task difficulty and workload. According to Bermúdez (2014), the basic assumption shared by the cognitive sciences is that "cognition is information processing" (p. 130) and "the mind is an information-processing system" (p. xxix). The evolution of workload theory has been driven largely by empirical work conducted from a human information-processing approach, which takes into account all processes studied within cognitive psychology, such as perception, attention, memory, decision making, and problem solving (Embrey et al. 2006, p. 49). In a meta-analytic review, Block et al. (2010) define cognitive load as "the amount of information-processing (especially attentional or working memory) demands during a specified time period; that is, the amount of mental effort demanded by a primary task" (p. 331). In recent decades, cognitive scientists have suggested extending and moving beyond the basic assumption that the mind is an information-processing system, proposing dynamical systems theory and situated/embodied cognition theory. As argued by Bermúdez (2014, p. 421), however, these theories are "best seen as a reaction against some of the classic tenets of cognitive science" and hardly give us grounds for abandoning the whole idea of information processing.

The following subsections discuss mental workload, mental effort, cognitive load, mental load, and task difficulty one by one, focusing on their origins, domains, and related research.

7.2.1 Mental Workload and Mental Effort

Mental workload has been an important concept in human factors and industrial psychology. The term first appeared in the 1960s (e.g., Kalsbeek and Sykes 1967) and permeated the literature in the 1970s (Vidulich and Tsang 2012, p. 243). In 1980, Wierwille and Williges prepared a report entitled "An annotated bibliography on operator mental workload assessment" for the U.S. Naval Air Test Center, which included over 600 references. Mental workload assessment has been carried out for the purposes of increasing safety, reducing errors, and improving system performance (Karwowski 2012); it usually concerns high-risk tasks. For example, there have been many studies on measuring the mental workload of aircraft pilots and car drivers.

A research focus in this field has been how to measure mental workload. Wierwille and Williges (1980) in their bibliography identified 28 specific techniques in four major categories: subjective opinion, spare mental capacity, primary task, and physiological measures. Subjective measures have been used very frequently, and the most common subjective measure is the rating scale. A frequently employed rating scale is NASA-TLX (Task Load Index), developed by Hart and Staveland (1988), which is the most cited work in the field of mental workload. NASA-TLX includes six workload-related subscales: Mental Demand, Physical Demand, Temporal Demand, Effort, Performance, and Frustration Level. The Effort subscale measures "How hard [you had] to work (mentally and physically) to accomplish your level of performance?" In some literature, this is referred to as "mental effort", which means "the amount of capacity or resources that is actually allocated to accommodate the task demands" (Paas and Van Merriënboer 1994a, p. 122). According to Paas and Van Merriënboer (ibid.), mental effort can be used as an index of cognitive load, as "the intensity of effort being expended by students is often considered to constitute the essence of cognitive load" (p. 122). This may be the case under certain circumstances, for example, when people are highly motivated. In interpreting studies, this brings to mind the Effort Models' "tightrope hypothesis" proposed by Gile (1999, 2009), according to which interpreters tend to work close to processing capacity saturation; thus, cognitive effort in interpreting often equals cognitive load. It is worth mentioning that Frustration in NASA-TLX measures an affective component of mental workload.
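To make the scoring procedure concrete, the sketch below computes an overall NASA-TLX score in the weighted ("classic") manner: each of the six subscale ratings (0–100) is weighted by the number of times its dimension is chosen in the 15 pairwise comparisons, and the weighted sum is divided by 15; the unweighted "Raw TLX" variant is a simple mean. This is a minimal illustration under those assumptions; the function names and example numbers are invented and are not part of Hart and Staveland's (1988) instrument.

# A minimal sketch of weighted NASA-TLX scoring (after Hart and Staveland 1988).
# Ratings are on a 0-100 scale; weights come from 15 pairwise comparisons of
# the six dimensions, so they must sum to 15. Example values are illustrative.

SUBSCALES = ["Mental Demand", "Physical Demand", "Temporal Demand",
             "Performance", "Effort", "Frustration"]

def weighted_tlx(ratings: dict, weights: dict) -> float:
    """Overall workload = sum(rating * weight) / 15."""
    assert sum(weights.values()) == 15, "weights must come from 15 pairwise comparisons"
    return sum(ratings[s] * weights[s] for s in SUBSCALES) / 15.0

def raw_tlx(ratings: dict) -> float:
    """The unweighted 'Raw TLX' variant: a simple mean of the six ratings."""
    return sum(ratings[s] for s in SUBSCALES) / len(SUBSCALES)

if __name__ == "__main__":
    ratings = {"Mental Demand": 70, "Physical Demand": 10, "Temporal Demand": 55,
               "Performance": 40, "Effort": 65, "Frustration": 35}
    weights = {"Mental Demand": 5, "Physical Demand": 0, "Temporal Demand": 3,
               "Performance": 2, "Effort": 4, "Frustration": 1}
    print(f"Weighted TLX: {weighted_tlx(ratings, weights):.1f}")
    print(f"Raw TLX:      {raw_tlx(ratings):.1f}")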

7.2.2 Cognitive Load and Mental Load

The term "cognitive load" has been used in psychology since the 1960s (e.g., Bradshaw 1968). It has been mainly associated with cognitive load theory (CLT) in educational research since Sweller (1988) first developed the theory. The fundamental idea behind CLT, which has become an influential instructional design theory, is that "instructional design decisions should be informed by the architecture of the human cognitive system" (Brünken et al. 2010, p. 253), and studies along this line aim at deriving empirically based guidelines for instructional design. In the field of CLT, with regard to its measurement, cognitive load has been conceptualized in three dimensions: mental load, mental effort, and performance (Paas and Van Merriënboer 1994b). Of the three assessment dimensions, mental load refers to "the burden placed on learners due to instructional parameters" (Lusk and Atkinson 2007, p. 751). It is equivalent to Mental Demand in NASA-TLX, which measures "How much mental and perceptual activity was required (e.g., thinking, deciding, remembering, searching, etc.)."

The term "mental load" first appeared in the 1920s (e.g., Redfield 1922) and has been used interchangeably with mental workload in the field of psychology (e.g., Gopher 1994). For researchers in human factors, there may be a subtle difference in meaning between the two: in the term "mental load" there is an overtone of physical effort, whereas "mental workload" emphasizes the human information processing rate and the difficulty experienced (Moray 1977, p. 13).

Drawing on work in human factors, researchers in CLT often divide methods for cognitive load measurement into two groups: analytical and empirical methods (Paas et al. 2008, p. 15). Empirical methods comprise subjective, performance and physiological measures, while analytical methods include expert opinions, mathematical modeling, and task analysis. A well-known and extensively used subjective rating scale is the 9-point Likert scale first used by Paas (1992), which ranges from "very, very low mental effort" (1) to "very, very high mental effort" (9).

7.2.3 Task Difficulty

Compared with mental workload and cognitive load, difficulty is a common term, and thus "task difficulty" has been used more frequently in various fields since the 1920s (e.g., Thorndike et al. 1927). It has been defined along two lines: (1) task difficulty refers to "the degree of cognitive load, or mental effort, required to identify a problem solution" (Gallupe et al. 1988, p. 280); (2) task difficulty is informational complexity or task complexity, and is independent of the user (Rost 2006, p. 49), as in "the effects of task difficulty on mental workload". Although one may distinguish the two senses by using "subjective difficulty" and "objective difficulty" (DeKeyser 2003, p. 332), it is better to treat "task complexity" and "task difficulty" as different terms (Kuiken and Vedder 2007, p. 120).

Difficulty has been addressed in research on reading and writing (see e.g., Muñoz 2012) as well as in translation. For example, Wilss (1982, p. 161) distinguished four types of translation difficulty (TD) from a pedagogical perspective:

1) transfer-specific TD, covering the two directions native tongue – foreign language and vice versa;
2) translator-specific TD, distinguishing two levels, one for beginners and one for advanced translators;
3) text-type-specific TD, covering at least the three particularly transfer-relevant areas of LSP translation, literary translation and Bible translation;
4) single-text-specific TD, motivated by the semantically and/or stylistically complicated manner of expression of the SL author.

Nord (2005, p. 168) made similar distinctions: text-specific difficulties, translator-dependent difficulties, pragmatic difficulties, and technical difficulties. As a term, difficulty is listed in Key Terms in Translation Studies (Palumbo 2009). Since the terms discussed above are embedded in their respective literatures, they are used interchangeably in this review.

7.3 Cognitive Science and Translation Difficulty Research

Cognitive science is a cross-disciplinary enterprise devoted to understanding mind and intelligence from an information-processing perspective. It is concerned with how information is perceived, represented, transformed, stored, and communicated. Cognitive science emerged in the 1970s and draws on a host of disciplines such as philosophy, psychology, neuroscience, artificial intelligence, linguistics, and anthropology (e.g., Frankish and Ramsey 2012). It covers memory, attention, consciousness, reasoning, problem solving, decision making, metacognition, expertise, computational linguistics, cognitive ergonomics, human-computer interaction, machine translation, and so forth (see Wilson and Keil 1999). Among these research fields, cognitive ergonomics overlaps with related disciplines such as human factors, applied psychology, and human-computer interaction (Cara 1999), and according to the International Ergonomics Association (2015), its relevant topics include, among others, mental workload.


Translation process research has incorporated concepts, theories and methods (e.g., metacognition, expertise studies) from the cognitive sciences (see Alves 2015), and there is a need to further integrate translation process research with the cognitive sciences (Shreve and Angelone 2010) and to critically examine our traditional perspectives on translation processes in terms of frameworks from cognitive science (Muñoz 2010).

On the topic of translation difficulty, two lines of research can be identified in the literature: (1) difficulty in human translation; (2) difficulty in machine translation and post-editing. In a way, both can be situated within the broader framework of cognitive science.

7.4 Human Translation

Two essential questions in translation difficulty research are what makes a text difficult to translate and how to measure and predict the difficulty of a translation task. The two questions are complementary, and developing reliable measurement techniques can help advance our understanding of translation difficulty as well as of translation processes. Dragsted (2004), for example, found in her empirical study that professional translators would adopt a more novice-like behavior during the translation of a difficult text than during the translation of an easy text. Thus, translation difficulty is an important variable in translation process research.

Sources of translation difficulty can be divided into two groups: task (i.e., translation) factors and translator factors (Sun 2015). Translation factors include readability (or reading comprehension) problems and translation-specific (or reverbalization) problems, while translator factors concern translation competence (or "ability variables" such as intelligence, aptitude, cognitive style, and working memory capacity), which is relatively permanent, and affect (or "affective variables" such as confidence, motivation, and anxiety), which is more susceptible to change (Robinson 2001, p. 32). Both groups of factors influence a translator's perception of task difficulty. Of the following three subsections, Sect. 7.4.1 approaches the topic essentially from the perspective of translation-specific problems (or target-text characteristics), Sect. 7.4.2 from that of readability (or source-text characteristics), and Sect. 7.4.3 from that of translator factors.

7.4.1 Choice Network Analysis

Campbell and Hale are pioneers in the empirical exploration of translation difficulty. Campbell and Hale (1999) identified several areas of difficulty in lexis and grammar, that is, words low in propositional content, complex noun phrases, abstractness, official terms, and passive verbs, and explored universal translation difficulties as well as language-specific difficulties. Campbell (1999) found that the source text can be an independent source of translation difficulty and that a substantial proportion of its items can be equally difficult to translate into typologically different languages.

The way Campbell and Hale assessed the difficulty of a source text was Choice Network Analysis (Campbell 2000), that is, counting the number of different renditions of specific items in that text made by multiple translators. Their rationale was that "the different renditions represent the options available to the group of subjects, and that each subject is faced with making a selection from those options"; "where there are numerous options, each subject exerts relatively large cognitive effort in making a selection; where there are few options, each subject exerts relatively small cognitive effort" (Hale and Campbell 2002, p. 15).

This rationale has been found problematic. For example, O'Brien (2004) points out that if all translators produce the same solution, the cognitive effort involved might not be less than that required when the translators produce varying target texts; Sun (2015) notes that we cannot assume that the translators are faced with the same number of options in translation, as poor translators usually have fewer (or even no) options compared with good translators. Nonetheless, Dragsted's (2012) empirical study provides evidence for Choice Network Analysis. She found that target text variation was a reliable predictor of difficulty indicators observable in process data, although it was not certain whether high target text variation across participants implied that each individual translator considered several solutions. This finding deserves further exploration.

There seem to be two reasons why Choice Network Analysis may work under some circumstances. One reason (or source of difficulty in translation) concerns equivalence at a series of levels, especially at the word level and above-word level (see Baker 2011). Non-equivalence, one-to-several equivalence, and one-to-part equivalence can create difficulty for translators, especially for novices. For instance, the word "presentation" (as in "Students will give a presentation") has no one-to-one equivalent in Mandarin Chinese; translators have to select from near-synonyms such as lecture, report, talk, or speech. This requires more cognitive effort on the part of translators and also creates considerable target text variation. It has been found in psychology that the more translations a word has, the lower the semantic similarity of the translation pair (Tokowicz et al. 2002). This translation ambiguity phenomenon is relatively common, especially in some genres such as philosophical writing. In the Dictionary of Untranslatables (Cassin 2014), over 350 words (e.g., agency, actor) in various languages are explained; one may find that every language expresses a concept with a difference (see also Schleiermacher 2012).

The other reason involves literal translation as a universal initial default strategy in translation, which is related to the Literal Translation Hypothesis. An oft-cited discussion of this hypothesis is as follows:

The translator begins his search for translation equivalence from formal correspondence, and it is only when the identical-meaning formal correspondent is either not available or not able to ensure equivalence that he resorts to formal correspondents with not-quite-identical meanings or to structural and semantic shifts which destroy formal correspondence altogether. (Ivir 1981, p. 58)


Over the years, translation process researchers have found some experimental evidence in favor of this hypothesis. Englund Dimitrova (2005) observed that translators may use literal translations as provisional solutions in order to minimize cognitive effort, and that "there was a tendency for syntactic revisions to result in structures that were more distant from the structure in the ST than the first version chosen" (p. 121). Tirkkonen-Condit (2005) found that the tendency to translate word for word shows up in novices as well as in professional translators. She argued that literal translation as a default rendering procedure is triggered through automatic processes, and that it goes on until interrupted by a monitor that signals a problem in the outcome and triggers conscious decision-making to solve the problem. Balling et al. (2014), on the basis of three eye-tracking experiments, conclude that literal translation may be a universal initial default strategy in translation. Schaeffer and Carl (2014) also found supporting evidence that more translation choices lead to longer reading and processing times.

When translating, if a literal translation is an acceptable solution, translators do not have to exert much cognitive effort, and target text variation will obviously be small. If translators have to do syntactic reordering and proceed to less literal renditions, the translation involves more cognitive effort, and translation competence also comes into play; as a result, target text variation will be greater. This means that target text variation and translation difficulty may correlate under certain circumstances. Of course, other factors may affect the process; for example, finding a literal translation may not be equally easy for all translators (Schaeffer and Carl 2013). In addition, Choice Network Analysis requires multiple participants and is not suited to measuring translation difficulty for a single translator.
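As a rough illustration of the counting step behind Choice Network Analysis, the sketch below tallies the distinct renditions that a group of translators produced for each source-text item. The sample data (pinyin stand-ins for Chinese renditions) and the function name are invented for illustration only; a real application would also involve segmentation and normalization decisions that this sketch glosses over.

# A minimal sketch of the counting step in Choice Network Analysis: the number
# of distinct renditions of an item across translators serves as a (rough)
# proxy for the cognitive effort of selecting among options. Data are invented.

def rendition_counts(renditions_by_item: dict) -> dict:
    """Map each source item to the number of distinct target renditions."""
    return {item: len(set(r.strip().lower() for r in renditions))
            for item, renditions in renditions_by_item.items()}

if __name__ == "__main__":
    data = {
        "presentation": ["bao gao", "yan jiang", "fa yan", "yan shi"],  # 4 options
        "student":      ["xue sheng", "xue sheng", "xue sheng"],        # 1 option
    }
    for item, n in sorted(rendition_counts(data).items(), key=lambda kv: -kv[1]):
        print(f"{item!r}: {n} distinct rendition(s)")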

7.4.2 A Readability Perspective

Reading comprehension and readability are important topics in reading research. For example, Gray and Leary (1935) in their book What Makes a Book Readable presented a comprehensive empirical study and found that the number of different words in a text, the number of prepositional phrases contained in the text, and the proportionate occurrence of polysyllables bear a significant relationship to the difficulty a reader experiences in reading (pp. 8–9). Since the 1920s, researchers have been working on readability formulas for measuring text readability and, to date, have published over 200 formulas (e.g., the Flesch Reading Ease formula, the Flesch-Kincaid Readability test, the Dale-Chall formula) (Klare 1984). It has been found that vocabulary difficulty and sentence length are the strongest indexes of readability, and that other predictor variables add little to the overall predictions of reading difficulty (Chall and Dale 1995). An explanation for this is that many predictor variables correlate with each other. For example, according to Zipf's law (Zipf 1935), word frequency and word length are inversely related: short words occur with high frequency while longer words occur with lower frequency, as a result of a biological principle of least effort.
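For concreteness, the sketch below computes the Flesch Reading Ease score, whose published form is 206.835 - 1.015 x (words/sentences) - 84.6 x (syllables/words); higher scores indicate easier text. The crude vowel-group syllable counter is this sketch's main simplification, standing in for the dictionary-based counting that real implementations use.

# A minimal sketch of the Flesch Reading Ease formula:
#   206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)
# The vowel-group syllable counter below is a crude approximation.
import re

def count_syllables(word: str) -> int:
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / sentences) - 84.6 * (syllables / len(words))

if __name__ == "__main__":
    easy = "The cat sat. The dog ran. We all laughed."
    hard = ("The proportionate occurrence of polysyllabic terminology "
            "demonstrably diminishes comprehensibility.")
    print(f"easy: {flesch_reading_ease(easy):.1f}")  # high score
    print(f"hard: {flesch_reading_ease(hard):.1f}")  # low score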


However, it should be mentioned that reading difficulty often comes from the ideas rather than the words or sentences. The reason readability formulas work is that difficult passages that express difficult, abstract ideas tend to contain hard words, and vice versa (Rayner and Pollatsek 1989, p. 319). Cheating, that is, "trying to beat the formulas by artificially chopping sentences in half and selecting any short word to replace a long word", may not change the true readability much (Fry 1988, p. 77). This may help explain O'Brien's (2010) empirical finding that the application of controlled language rules increased reading ease only marginally, and only for the text identified as "difficult" by the readability formula.

As mentioned earlier, readability (or reading comprehension) is one of the two translation factors that cause translation difficulty. Moreover, readability-based measurements are objective and can consequently be performed automatically. For these reasons, several translation difficulty researchers have turned to readability formulas for a possible solution. In an effort to find texts of various translation difficulty levels for experimental purposes, Jensen (2009) employed three indicators of translation difficulty in his study: readability indices, word frequency, and non-literalness (that is, the number of occurrences of non-literal expressions, i.e., idioms, metaphors, and metonyms), and argued that these objective measures can help us gauge the degree of difficulty of some types of text. Despite this, he warned that "readability indices cannot give us conclusive evidence of how difficult a translator perceives a text to be" (ibid., p. 67). Mishra et al. (2013) claimed that translation difficulty is mainly caused by three features: sentence length, degree of polysemy of a sentence (i.e., the sum of the senses possessed by each word in WordNet, normalized by the sentence length), and sentence structural complexity (i.e., the total length of dependency links in the dependency structure of the sentence). Their experiment, which was based on 80 sentence translations from English into Spanish, Danish and Hindi by at least two translators, established a relationship between these three sentential properties and a Translation Difficulty Index measured as gaze time and fixation count using eye tracking (see Sect. 7.5.2.3 below for the meaning of these terms). This sentence-based design for measuring translation difficulty might lead to different results than a text-based design; most studies on translation difficulty use short texts as test materials.

Liu and Chiu (2009) aimed at identifying indicators that may be used to predict source material difficulty for consecutive interpreting, and used four methods to estimate and predict the difficulty of three non-technical source texts: the Flesch Reading Ease formula, information density, new concept density, and expert judgment. They found that these measures all failed statistically in predicting source material difficulty, possibly due to the very small sample size of source texts (N = 3). Sun and Shreve (2014) tried to find a method to measure difficulty in a translation task, and found that translation difficulty level and text readability were negatively and weakly related, which means that a text's readability only partially accounts for its translation difficulty level.
A post-translation questionnaire survey in their study showed that 77% of over 600 responses pointed to reverbalization in the target language as more difficult than source text comprehension. The survey result was supported by Hvelplund's (2011) finding that translators allocate more cognitive resources to target text processing than to source text processing, as indicated, for instance, by processing time and processing load (i.e., eye-fixation duration). This implies that a readability-based approach may not work for translation difficulty measurement.

7.4.3 Workload Measures and Related Research

Techniques for measuring mental workload can be classified into three major categories: (1) subjective measures, (2) performance measures, and (3) physiological measures (see Sun 2015). The baseline measure, according to Jex (1988, p. 14), is the individual's subjective workload evaluation in each task, against which all objective measures must be calibrated. Performance measures derive an index of workload from some aspect of the participant's behavior or activity; two commonly used workload indicators are speed (i.e., time-on-task) and accuracy (i.e., number of errors). Alves et al. (2014) in one study used pause duration, drafting time, and number of renditions in microunits as indicators of effort in the translation process. Taking participants' subjective evaluations of translation difficulty with NASA-TLX as the baseline measure, Sun and Shreve (2014) found that translation quality score was an unreliable indicator of translation difficulty level, while time-on-task was significantly, but weakly, related to translation difficulty level. This means that performance measures may not be sensitive to workload manipulations, partly because translation involves solving ill-defined problems (Englund Dimitrova 2005).

As a physiological measure, pupillary responses have been used as an indicator of cognitive load (e.g., Beatty 1982). However, Hvelplund (2011) did not observe a strong relationship between cognitive load and pupillary response in translation. He assumed that larger pupil sizes would reflect heavier cognitive load (p. 71), but found that, for students, "pupils were smaller during translation of complex text than during translation of less complex text" (p. 206).

7.5 Post-editing of Machine Translation

Post-editing of machine translation (MT), which "involves a human editor revising an MT output up to an acceptable level of quality" (Kit and Wong 2015), has recently emerged as a major trend in the language industry. According to a survey by DePalma and Hegde (2010) among nearly 1000 language service providers around the world, two-fifths (41.2%) claimed to offer post-edited machine translation.

There are a few reasons for the emergence of post-editing. The major one is that translation buyers are increasingly turning to machine translation and post-editing in response to surging content volumes and demand for faster turnaround times (Common Sense Advisory 2014). The second reason pertains to the change in translation buyers' expectations with regard to the type and quality of translated material (Allen 2003): high-quality translation is expensive and is not always needed. The third reason involves the increasing quality of machine translation output and the wide availability of computer-aided translation (CAT) tools (e.g., SDL Trados, Memsource), which often offer access to third-party machine translation engines (e.g., Google Translate, Microsoft Translator) via an application programming interface (API) and combine computer-aided human translation with post-editing. The Translation Automation User Society (TAUS 2014) predicts that post-editing may "overtake translation memory leveraging as the primary production process" in the language industry.

Post-editing differs from human translation in several respects. In terms of requirements, for example, according to the post-editing guidelines by TAUS (2010), post-editors are expected to "[u]se as much of the raw MT output as possible" and "[e]nsure that no information has been accidentally added or omitted". Vasconcellos (1987) compared post-editing with traditional human revision and noted that with revision the detection of errors (e.g., mistranslations, omissions) is a discovery process, whereas post-editing is an ongoing exercise of adjusting relatively predictable and recurring difficulties. Thus, post-editing poses specific problems for translators and prompts strategies different from those used in human translation.

In recent years, there has been increased interest in the impact of post-editing on cognitive processes (e.g., O'Brien et al. 2014). The factors involved include productivity gains, cognitive effort, impact on quality, and quality evaluation and estimation, among others (e.g., Arenas 2014). In post-editing, cognitive effort is largely determined by two main criteria: (1) the quality of the raw MT output; (2) the expected end quality of the translation (TAUS 2010). Generally speaking, the higher the quality of the MT output, the less human effort is needed for post-editing, and hence the higher the productivity (Kit and Wong 2015). The expected quality of the final translation can be "good enough" or "similar or equal to human translation", and different quality expectations require different guidelines (TAUS 2010). For example, if the client expects a "good enough" translation, there is no need to implement corrections or restructure sentences simply to improve the style of the text (ibid.). As these factors are usually discussed under the heading of MT evaluation, we discuss MT evaluation in the next section in order to put things into perspective.

7.5.1 Machine Translation Evaluation

MT evaluation is intended to assess the effectiveness and usefulness of existing MT systems and to optimize their performance; at the core of evaluation is the quality of MT output (Dorr et al. 2011; Kit and Wong 2015). The notion of quality is context dependent, and its evaluation is influenced by purpose, criteria, text type, and other factors. For this reason, different types of manual (or human) and automatic evaluation measures have been developed, and various proposals exist for their classification.

According to Kit and Wong (2015), manual evaluation of MT output relies on users' subjective judgments and experiences, and has two aspects: intrinsic and extrinsic. Intrinsic measures focus on judgment of language quality and include quality assessment, translation ranking, and error analysis; extrinsic measures seek to test the usability of MT output with respect to a specific task and involve tasks such as information extraction, comprehension tests (e.g., cloze tests), and post-editing. Automatic evaluation of MT output involves the use of quantitative metrics without human intervention, and includes text similarity metrics and quality estimation. Automatic measures have been developed to overcome the drawbacks of manual evaluation, such as costliness, subjectivity, inconsistency and slowness, and aim to provide an objective and cost-effective means of MT evaluation. Most of them are text similarity metrics: they judge the quality of MT output by comparing the output against a set of human reference translations of the same source sentences. Among the automatic metrics (see e.g., Dorr et al. 2011), BLEU (Bilingual Evaluation Understudy) (Papineni et al. 2002) is one of the most influential, and its central idea is that "[t]he closer a machine translation is to a professional human translation, the better it is" (p. 311). It counts the n-gram (word sequence) matches between the MT output and human reference translations; the more matches, the better the MT output (a minimal sketch of this computation is given at the end of this section).

Different from text similarity metrics, quality estimation is intended to predict the quality of MT output without reference to any human translation. Its rationale is that the "quality of MT output is, to a certain extent, determined by a number of features of the source text and source/target language" (Kit and Wong 2015, p. 230). A study by Felice and Specia (2012) indicated that linguistic features (e.g., number, percentage and ratio of content words and function words, percentage of nouns, verbs and pronouns in the sentence) are complementary to shallow features (e.g., sentence length ratios, type/token ratio variations) for building a quality estimation system. Potential uses of quality estimation include filtering out poor sentence-level translations for human post-editing and selecting the best candidate translation from multiple MT systems (Specia et al. 2010).

Despite being perceived as subjective, manual evaluation has been used as a baseline against which automatic evaluation measures are judged (e.g., Dorr et al. 2011): a strong correlation between automatic evaluation scores and human judgments indicates that the performance of an automatic evaluation metric is satisfactory. BLEU, for example, has a high correlation with human judgments of quality. Quality assessment involves human evaluators who are asked to rate, on a five- or seven-point scale, the quality of a translation (normally presented sentence by sentence) in terms of certain characteristics such as fluency and adequacy; the average score over all sentences and evaluators is the final score of an MT system (Kit and Wong 2015; Liu and Zhang 2015). Recent studies have shown that judgments of fluency and adequacy are closely related, which may indicate that human evaluators have difficulty distinguishing the two criteria (Koehn 2010, p. 220). Compared with scoring-based quality assessment, translation ranking is often easier. It entails ranking a number of candidate translations of the same sentence from best to worst, or picking a preferred version after a pairwise comparison. Since 2008, translation ranking has been the official human evaluation method in the statistical MT workshops organized by the Association for Computational Linguistics (Callison-Burch et al. 2008; Kit and Wong 2015, p. 222). Error analysis is a qualitative process; it involves identifying MT errors and estimating the amount of post-editing work required, and will be discussed in detail in the next section.
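As flagged above, the sketch below illustrates the n-gram matching idea at the heart of BLEU under simplifying assumptions: it computes modified n-gram precision against a single reference, with counts clipped so that repeating one matching word is not rewarded. Full BLEU, as defined by Papineni et al. (2002), combines precisions for n = 1 to 4 geometrically and applies a brevity penalty, both of which this toy version omits.

# A minimal sketch of BLEU's core idea: modified n-gram precision, i.e. how
# many of the candidate's n-grams also occur in a reference, with counts
# clipped so a candidate is not rewarded for repeating one matching word.
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def modified_precision(candidate: str, reference: str, n: int = 2) -> float:
    cand = Counter(ngrams(candidate.split(), n))
    ref = Counter(ngrams(reference.split(), n))
    overlap = sum(min(count, ref[gram]) for gram, count in cand.items())
    return overlap / max(1, sum(cand.values()))

if __name__ == "__main__":
    ref = "the cat is on the mat"
    print(modified_precision("the cat is on the mat", ref))    # 1.0
    print(modified_precision("the the the the the the", ref))  # clipped: 0.0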

7.5.2 Post-editing Effort Measurement

Post-editing effort can be used to evaluate the quality of machine translation (e.g., Aziz et al. 2013) and to develop a suitable pricing model for post-editing. According to a survey by Common Sense Advisory, post-editing pricing ranges widely, "from the unedited but optimized, direct-to-publish price of US$0.03 per word to equivalent human rates of US$0.25" (DePalma and Kelly 2009, p. 25). Hence the need for developing metrics for measuring, and eventually predicting, post-editing effort.

Krings (2001) in his comprehensive study identified three dimensions of post-editing effort: temporal effort, technical effort, and cognitive effort. Temporal effort refers to time on task (so, strictly speaking, it is not an "effort"), while technical effort consists of the deletion, insertion, and reordering operations in post-editing. Cognitive effort involves "the type and extent of those cognitive processes that must be activated in order to remedy a given deficiency" (ibid., p. 179) in MT output. According to Krings (2001, pp. 178–182), both temporal effort and technical operations are determined by cognitive effort, which inevitably is the research focus. In the past decade, research efforts in measuring cognitive effort in post-editing have been made along three intersecting lines: textual characteristics, characteristics of the translator/post-editor, and workload measures. The first two are causal factors, whereas workload measures are effect factors (Meshkati 1988).

7.5.2.1 Characteristics of the Translator/Post-editor

Vieira (2014) investigated the role of individual factors, including translators' working memory capacity (WMC) and source language (SL) proficiency, in predicting cognitive effort in post-editing, and observed a relationship between WMC and post-editing productivity. This merits further attention. Working memory is usually understood as a system that combines temporary storage and executive processing in order to help perform complex cognitive activities (Baddeley et al. 2015). As cognitive load is often defined as "the demand for working memory resources required for achieving goals of specific cognitive activities in certain situations" (Kalyuga 2009, p. 35), working memory is closely related to difficulty.

WMC is an important individual-differences variable for understanding variations in human behavior, and WM span tasks, such as counting span, operation span, and reading span, have been shown to be reliable and valid measures of WMC (Conway et al. 2005). These complex span tasks use "serial recall as a measure of how much [a] person can hold in memory while also performing a secondary task" (Ilkowska and Engle 2010, p. 298). For example, in Daneman and Carpenter's (1980) seminal study, participants read aloud a series of sentences and then recalled the final word of each sentence; the number of final words recalled was the reading span, which varied from two to five. Over the years, it has been found that WMC span measures can predict a broad range of lower-order and higher-order cognitive capabilities, including language comprehension, reasoning, and general fluid intelligence, and that people high in WMC usually outperform those low in WMC in cognitive tasks (see Engle and Kane 2004; Orzechowski 2010). Hummel (2002), for example, noted a significant relation between WMC, measured by an L2 reading span task, and L2 proficiency. In a translation-related study, Silveira (2011) found that WMC was positively related to participants' accuracy in the translation task, though not significantly to their response times.

The strong correlation between WMC scores and higher-order cognition (hence the predictive utility of WMC), however, does not necessarily imply a cause-effect relationship between the two. There have been various explanations and hypotheses regarding the relationship (see Engle and Kane 2004). One explanation is that WMC scores and higher-order cognition both reflect individual differences in speed of processing. Engle and Kane (2004) argue against this view, holding that WMC measures fundamentally tap an attention-control capability. Their argument makes sense in view of the mixed results (e.g., very weak correlations in Conway et al. 2002) reported in the literature on the relationship between speed and WMC span measures. Daneman and Carpenter (1980) attributed individual differences in reading span to the chunking process: "the more concepts there are to be organized into a single chunk, the more working memory will be implicated" (p. 464). In other words, differences in the reading span were caused by differences in reading skills. In contrast, Turner and Engle (1989) suggested that WM may be an individual characteristic independent of the nature of the task (e.g., reading, writing), and that differences in reading skills were caused by differences in the reading span (see also Engle and Kane 2004).

WM is an active research field, and further discussion of it is beyond the scope of this article. Suffice it to say that WM is a central construct in cognitive science, and we believe it is related to other constructs and concepts in translation process research, such as attention, pause, automatization, practice, experience, expertise, and translation competence. It has also been noted that the differences between post-editors may be much larger than the differences between machine translation systems (Koehn and Germann 2014, p. 45).
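To make the span-task logic concrete, the sketch below scores a simplified reading span session in the spirit of Daneman and Carpenter (1980): the span is taken as the largest sentence-set size whose final words the participant recalls perfectly. Scoring conventions vary considerably across labs; the data and function names here are invented for illustration.

# A minimal, simplified sketch of reading span scoring (after Daneman and
# Carpenter 1980): participants recall the final word of each sentence in a
# set; the span is the largest set size recalled perfectly. Data invented.

def set_recalled_perfectly(sentences, recalled):
    targets = [s.rstrip(".?! ").split()[-1].lower() for s in sentences]
    return sorted(w.lower() for w in recalled) == sorted(targets)

def reading_span(trials):
    """trials: list of (sentence_set, recalled_words) pairs."""
    span = 0
    for sentences, recalled in trials:
        if set_recalled_perfectly(sentences, recalled):
            span = max(span, len(sentences))
    return span

if __name__ == "__main__":
    trials = [
        (["The cat chased the mouse.", "Snow fell on the town."],
         ["mouse", "town"]),
        (["He opened the old door.", "Rain hit the window.", "She read the letter."],
         ["door", "window", "book"]),  # one error: this set does not count
    ]
    print(reading_span(trials))  # 2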

7.5.2.2 Textual Characteristics

As with the readability perspective discussed above, textual characteristics of the source text and the MT output are believed to be associated with post-editing effort. One research goal is to identify textual characteristics that can predict PE effort (Vieira 2014). To do so, researchers need to identify MT errors and negative translatability indicators (NTIs), whose presence is supposed to increase PE effort (e.g., O'Brien 2007a). Nonetheless, it should be noted that some source-text features that are not normally counted among NTIs may cause increased cognitive processing, while some that are usually identified as NTIs (e.g., proper nouns, abbreviations, punctuation problems) may not put demands on cognitive processing (O'Brien 2004, 2005).

There are several classifications of MT errors (see Mesa-Lao 2013 for an overview), which vary according to the type of MT engine, language pair, direction, genre and domain. Wisniewski et al. (2014) found, based on a corpus of English-French automatic translations accompanied by post-edited versions and annotated with error labels, that lexical errors accounted for 22%, morphological errors 10%, syntax errors 41%, semantic errors 12%, format errors 5%, and others 10%. Aziz et al. (2014, p. 190) observed that in English-Spanish PE, production units (PUs, i.e., sequences of successive keystrokes that produce a coherent passage of text) involving verbs tended to be slightly more time-consuming, while PUs related to nouns required slightly more typing. Vieira (2014) found that ST prepositional phrases and sentence-level type-token ratio had a significant relationship with cognitive effort in French-English PE, although the effects of source-text linguistic features were small.

In recent years, considerable efforts have been made towards automatic MT error identification and automatic post-editing. For example, through experiments performed on English-to-Brazilian Portuguese MT, Martins and Caseli (2015) found it possible to use a decision tree algorithm to identify wrong segments with around 77% precision and recall. Rosa (2014) developed Depfix, a system for automatic post-editing of phrase-based English-Czech machine translation outputs.
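As a toy illustration of this line of work, the sketch below trains a decision tree to flag MT segments that need editing from two shallow features. This is not Martins and Caseli's (2015) system: the features, the tiny invented dataset, and the helper names are all assumptions made for the example, and a real classifier would use far richer features and data.

# A minimal sketch, in the spirit of automatic MT error identification:
# a decision tree flags segments needing post-editing from shallow features.
# Features, data, and labels below are invented for illustration only.
from sklearn.tree import DecisionTreeClassifier

def features(source: str, mt_output: str):
    src, tgt = source.split(), mt_output.split()
    length_ratio = len(tgt) / max(1, len(src))          # very long/short output is suspect
    type_token_ratio = len(set(tgt)) / max(1, len(tgt))  # low TTR may signal repetition
    return [length_ratio, type_token_ratio]

# Invented training data: (source, MT output, 1 = wrong / needs editing).
train = [
    ("the report is ready", "o relatorio esta pronto", 0),
    ("we will meet tomorrow morning", "vamos nos encontrar amanha de manha", 0),
    ("the cat sleeps", "o gato dorme dorme dorme", 1),
    ("she gave a presentation", "ela", 1),
]
X = [features(s, t) for s, t, _ in train]
y = [label for _, _, label in train]

clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(clf.predict([features("the meeting starts now", "a reuniao comeca agora")]))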

7.5.2.3 Measures of Post-editing Effort

Identifying indices and indicators deemed to reflect or measure how much effort post-editing poses, or may pose, has been a growing topic of interest. Krings (2001) employed Think-Aloud Protocols (TAPs), in which post-editors were asked to verbalize their thoughts in a steady stream while performing post-editing, in order to investigate their cognitive effort. He made some interesting discoveries; for example, verbalization effort (i.e., verbalization volume, as an indicator of cognitive effort) during post-editing of poor MT sentences was about twice as high as that for machine translations evaluated as good (Krings 2001, p. 291). However, as TAP is a method more qualitative than quantitative (Sun 2011) and some "thoughts pass through the mind more quickly than they can be verbalized" (Krings 2001, p. 2), calculating the volume and density of verbalizations may not be an ideal method for measuring post-editing effort. O'Brien (2005) proposed the combined use of Campbell's Choice Network Analysis and Translog (to focus on pauses and hesitations) for measuring post-editing effort. Besides these methods, temporal measures, eye tracking, subjective scales, and automatic and semi-automatic metrics are often used in the measurement. They can be grouped into the aforementioned three categories (subjective, performance, and physiological measures) and are reviewed in the following paragraphs.

(1) Temporal measures

According to Krings (2001), temporal effort "constitutes the most externally visible indicator of post-editing effort and the most important criterion for determining the economic viability of machine translation" (p. 182), and it is determined by both cognitive and technical effort. The question is whether temporal effort can be an index of cognitive effort. Koponen et al. (2012) suggested post-editing time (and its normalized version, seconds per word) as a way to assess post-editing effort, and their experiments indicated that time could be a good metric for understanding post-editing effort. In a study comparing the effort of post-editing English-Portuguese subtitles translated using MT and translation memory systems, Sousa et al. (2011) found a good correlation between their objective way of measuring post-editing effort (i.e., time) and subjective evaluation scores. However, studies by De Almeida (2013) and Arenas (2014) reported no correlation between the participants' levels of experience and the total time taken to complete the post-editing task, and revealed a complex relationship between PE effort, PE performance, and previous experience. These studies did not directly address the correlation between PE time and PE effort; nonetheless, they cast doubt on whether PE time alone is a robust measure of PE effort (Vieira 2014) and call for further studies.

(2) Pause analysis

Pauses have been used as indicators of cognitive processing in research on translation (e.g., Lacruz and Shreve 2014) as well as on speech production and writing (e.g., Schilperoord 1996). They help reduce the load on working memory, and would not occur provided that "(1) sufficient cognitive resources are dedicated to the [...] process; (2) the [process] is sufficiently automated (concerning syntactic, lexical and graphomotor processes); and (3) domain knowledge is sufficiently activated for it to be retrieved at a lesser cost" (Alamargot et al. 2007, p. 16). Pause analysis often focuses on three dimensions: pause duration, position, and proportion. In writing studies, for example, pauses are usually interpreted on the basis of four assumptions, one of which is that "pause duration varies as a function of the complexity of the processes engaged in" (Foulin, cited in Alamargot et al. 2007, p. 14).


In the field of post-editing, O'Brien (2006) investigated the relationship between pauses (recorded using Translog) and PE cognitive effort, which was indicated by differences in negative translatability indicators in the source text and by Choice Network Analysis. She found little correspondence between pause duration and the editing of "difficult" elements, though those "difficult" elements identified by Choice Network Analysis were always preceded by a pause. Based on observations in a case study, Lacruz et al. (2012) introduced the average pause ratio (APR) metric, which was found to be sensitive to the number and duration of pauses, and noted that APR could be a potentially valid measure of cognitive demand. In their follow-up study involving three participants, Lacruz and Shreve (2014) found that the behavioral metrics of average pause ratio and pause-to-word ratio appeared to be strongly associated with PE cognitive effort, which was measured indirectly by computing the number of complete editing events (i.e., the event-to-word ratio, or EWR) in the TT segment from the keystroke log report. A complete editing event refers to "a sequence of actions leading to linguistically coherent and complete output" (ibid., p. 250), and the assumption behind its use was that "each editing event resulted from a coherent expenditure of cognitive effort in post-editing the MT segment" (ibid., pp. 251–252), so that more events would indicate higher cognitive effort. Another assumption was that all complete editing events require the same amount of cognitive effort, which, as mentioned in their article, is problematic. Also, when computing EWR, the judgment of what constitutes a complete editing event is "to some extent subjective" (ibid., p. 269), and this makes the time-consuming manual analysis of keystroke logs difficult to automate.

As noted in both O'Brien's (2006) and Lacruz and Shreve's (2014) studies, the patterns of pause activity vary from one individual to another. Koponen et al. (2012) observed that post-editors adopted different editing strategies. For example,

[S]ome editors maximize the use of MT words and cut-paste operations for reordering, while others appear to prefer writing out the whole corrected passage and then deleting MT words even when they are the same. . . [S]ome editors spend their time planning the corrections first and proceeding in order while others revise their own corrections and move around in the sentence. (Koponen et al. 2012, p. 19)

These different strategies would certainly affect pausing behavior, and further studies in this direction would preferably adopt a within-subject or longitudinal design. Nevertheless, we agree with O'Brien (2006) that pauses on their own are probably not a robust measure of PE effort. As an online method, pause analysis is better used to identify problems in translation or PE and to determine the workload levels associated with those problems.
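As a concrete illustration of the pause metrics discussed above, the sketch below computes the pause-to-word ratio and the average pause ratio for one post-edited segment, assuming the definitions summarized above (PWR = pauses per word; APR = mean pause duration divided by mean time per word, after Lacruz et al. 2012 and Lacruz and Shreve 2014). The pause threshold and the keystroke timestamps are invented; thresholds vary across studies.

# A minimal sketch of two pause metrics (after Lacruz et al.):
#   pause-to-word ratio (PWR) = number of pauses / number of words
#   average pause ratio (APR) = mean pause duration / mean time per word
# A pause is an inter-keystroke interval above a threshold. Data invented.

def pause_metrics(keystroke_times_ms, n_words, threshold_ms=300):
    gaps = [b - a for a, b in zip(keystroke_times_ms, keystroke_times_ms[1:])]
    pauses = [g for g in gaps if g >= threshold_ms]
    total_time = keystroke_times_ms[-1] - keystroke_times_ms[0]
    pwr = len(pauses) / n_words
    apr = (sum(pauses) / len(pauses)) / (total_time / n_words) if pauses else 0.0
    return pwr, apr

if __name__ == "__main__":
    times = [0, 150, 900, 1000, 1200, 2600, 2700, 2850, 4100, 4200]  # ms
    pwr, apr = pause_metrics(times, n_words=5)
    print(f"PWR = {pwr:.2f}, APR = {apr:.2f}")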

(3) Eye tracking

The method of using an eye tracker to record eye movements and pupil size variation has been employed to investigate various cognitive processes in reading (e.g., Rayner 1998), writing (e.g., Alamargot et al. 2006), usability testing (e.g., Poole and Ball 2006), translation (e.g., Gopferich et al. 2008; O'Brien 2007b), and other fields. Common eye-tracking metrics include gaze time, fixation counts, fixation durations, pupil dilation, blink rate, and scanpath similarity (for their meaning, see O'Brien 2011, pp. 238–241).

A fundamental assumption in eye-tracking research is the eye-mind hypothesis (Just and Carpenter 1976, 1980), which posits that "the locus of the eye fixations reflects what is being internally processed" (1976, p. 471) and that "the eye remains fixated on a word as long as the word is being processed" (1980, p. 330). Gaze time thus directly indicates the time it takes to process a fixated word. Of course, this hypothesis may not be valid under certain circumstances, e.g., during mindless reading, in which the eyes continue moving across the page (or screen) even though the mind is thinking about something unrelated to the text (Reichle et al. 2010). For this reason, Just and Carpenter (1976) specified several conditions for their hypothesis to be valid, e.g., asking the participant to work accurately but quickly and specifying the exact task goals (also see Goldberg and Wichansky 2003).

Eye-tracking metrics have been used to measure cognitive load (cf. Tatler et al. 2014), based on such assumptions as: (1) longer gaze time (i.e., the sum of all fixation durations) corresponds to an increased level of cognitive processing; (2) fixation count (i.e., the number of fixations) is related to the number of components that an individual is required to process (Fiedler et al. 2012, p. 26). These assumptions were supported by Doherty et al.'s (2010) study testing the validity of eye tracking as a means of evaluating MT output, in which gaze time and fixation count correlated reasonably well with human evaluation of MT output. However, they observed that average fixation duration and pupil dilation were not reliable indicators of reading difficulty for MT output, which corroborates the finding of Sharmin et al. (2008) that fixations in translation tasks were more frequent, but not longer, when the source text was complex. In translation process research, eye tracking is usually used together with keystroke logging (e.g., Translog, Inputlog), and is supposed to provide data "all through a translation task without interruption or with few interruptions and to be able to 'fill' most of the pauses in keystroke activity with interesting data" (Jakobsen 2011, p. 41). A notable effort in this regard is the EU-funded Eye-to-IT project.
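To show how raw fixations become the two metrics named in assumption (1) and (2) above, the sketch below aggregates fixations into gaze time (the sum of fixation durations) and fixation count per area of interest (AOI), e.g., the source-text versus target-text window. The AOI labels and durations are invented sample data.

# A minimal sketch of aggregating fixations into gaze time and fixation
# count per area of interest (AOI). Data below are invented.

def aoi_metrics(fixations):
    """fixations: list of (aoi_label, duration_ms) tuples."""
    metrics = {}
    for aoi, duration in fixations:
        gaze, count = metrics.get(aoi, (0, 0))
        metrics[aoi] = (gaze + duration, count + 1)
    return {aoi: {"gaze_time_ms": g, "fixation_count": c}
            for aoi, (g, c) in metrics.items()}

if __name__ == "__main__":
    fixations = [("ST", 220), ("ST", 180), ("TT", 310), ("TT", 250), ("TT", 400)]
    for aoi, m in aoi_metrics(fixations).items():
        print(aoi, m)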


Vieira (2014) in his study found that METEOR (Denkowski and Lavie 2011) was significantly correlated with all measures of PE cognitive effort considered, which included ST and MT-output characteristics and individual factors, especially for longer sentences. However, such metrics are measures of technical effort and do not directly measure cognitive effort. Koponen (2012) investigated the relationship between cognitive and technical aspects of post-editing effort by comparing translators’ perceived PE effort (as indicated by scores on a 1–5 scale) to actual edits made (measured by HTER) and found they did not always correlate with each other. Some edits are more difficult than others; certain types of errors require great cognitive effort although they involve few edits, and vice versa (Koponen et al. 2012). (5) Subjective scales Human perceptions of PE effort have been used often in studies measuring PE cognitive effort. In Specia’s study (2011), for instance, after post-editing each sentence, translators were asked to score the original translation according to its post-editing effort on a 4-point scale (with 1 being requiring complete retranslation and 4 being fitting for purpose). For the purpose of measuring cognitive effort, Vieira (2014) used Paas’s (1992) 9-point Likert scale together with average fixation duration and fixation count. Towards the use of subjective scales, De Waard and Lewis-Evans (2014) expressed reservations and argued that since subjective ratings have no actual absolute reference, ratings between conditions can only be compared in withinsubject designs, and that the variation of workload during task performance cannot be reflected in one rating. About the first point, the assumption in Sect. 7.6.2 may provide an explanation for why between-subject designs can also be used although within-subject designs are probably better in most cases. The second point makes sense and that is why many researchers choose to use subjective scales together with eye tracking and/or pause analysis. This section has reviewed the uses of temporal measures, pause analysis, eye tracking, evaluation metrics, and subjective scales in PE effort measurement. Their suitability for this purpose aside, adoption of these methods involves a trade-off between granularity of analysis and volume of analysis (Moran and Lewis 2011). For example, analysis of post-editing using eye tracking usually involves fewer test sentences compared with other methods, but can draw on more highly granular data. In PE studies, PE platforms are often adopted to facilitate research in e.g., keystroke analysis, temporal measurement, edit distance calculation, error annotation, or subjective rating. Such platforms include Blast (Stymne 2011), CASMACAT (Ortiz-Martínez et al. 2012), PET (Aziz and Specia 2012), TransCenter (Denkowski and Lavie 2012), and others.

7.6 Assumptions in Translation Difficulty Research

As mentioned in the previous section, assumptions are inherent in pause analysis and eye-tracking research. According to Kanazawa (1998, p. 196), a scientific theory consists of two major parts, assumptions and hypotheses, and assumptions are “universal axiomatic statements about some part of the empirical world”. Assumptions can be implicit or explicit, and it is important that they be made explicit, especially in research methods used to test theories. Nkwake (2013, pp. 107–109) proposes five categories of assumptions on a continuum of explication: (1) very ambiguously tacit assumptions, (2) tacit but more obvious assumptions, (3) informally explicit assumptions, (4) assumptions that are made explicit, and (5) explicit and tested assumptions. In this section, we make explicit some assumptions in translation difficulty research.

7.6.1 Assumption of Linearity

The assumption of linearity is that there is a straight-line relationship between the two variables of interest (e.g., Onwuegbuzie and Daniel 1999). It is an important assumption in parametric statistics involving two or more continuous variables (e.g., ANOVA, linear regression), and researchers generally assume linear relationships in their data (Nimon 2012). However, linearity is not guaranteed and should be validated (ibid., p. 4). In the workload-related literature, researchers (e.g., Cassenti et al. 2013; O’Donnell and Eggemeier 1986) have proposed an overall nonlinear relationship between workload and performance. Fig. 7.1 shows the workload-performance curve hypothesized by Cassenti and Kelley (2006; see Cassenti et al. 2013).

Fig. 7.1 Cassenti and Kelley’s hypothesized workload-performance curve


This curve has four linear segments. For easy tasks, increased workload may lead to improved performance or is not accompanied by variations in performance; for moderately difficult tasks, participants may not be able to increase their effort enough to meet the task demands, so that increases in workload lead to gradually declining performance; for very difficult tasks that participants perceive as unreasonable, they reduce their effort, bringing the workload back to normal levels, and their performance deteriorates (Charlton 2002; O’Donnell and Eggemeier 1986). Correspondingly, there may be some dissociation between performance and subjective measures under certain conditions, especially when the workload is very low (floor effect) or very high (ceiling effect) (Vidulich and Tsang 2012, p. 259). The implication for translation difficulty research is that experimental tasks should be moderately difficult for the participants.
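The hypothesized inverted-U portion of the curve also shows why validating linearity matters in practice: a linear correlation coefficient can be close to zero even when workload and performance are strongly related. The following Python sketch, using simulated data (illustrative only, not drawn from any of the studies cited), makes the point.

import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(seed=1)

# Simulate an inverted-U workload-performance relationship:
# performance peaks at moderate workload and declines at both ends.
workload = rng.uniform(0, 10, 200)
performance = -(workload - 5) ** 2 + 25 + rng.normal(0, 1.5, 200)

r, _ = pearsonr(workload, performance)
print(f"Pearson r = {r:.2f}")  # close to zero despite a strong relationship

# A near-zero r here does not mean "no relationship": a scatter plot
# (or a quadratic fit) reveals the curve, which is why linearity should
# be checked before correlation coefficients are interpreted.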

7.6.2 The Same Ranking Assumption

This assumption has two readings: (1) if a novice finds Passage A more difficult to translate than Passage B, the same will hold for a professional translator (unless she has worked in the domain of Passage A for a long time); (2) if Passage A is more difficult to translate than Passage B for a translator, it will remain so for her (unless she works in the domain of Passage A for a long time). This assumption is based on the finding that “translation does not become easier with growing experience and expertise” (Sirén and Hakkarainen 2002, p. 71). A ranking is valid as long as “there is a single criterion [by] which the objects are evaluated, or the objects map to a linear scale” (Busse and Buhmann 2011, p. 220). This implies that the assumption of linearity is a prerequisite for the same ranking assumption. In a relevant study, Tomporowski (2003) found that training improved participants’ performance on each of the three cognitive tasks in his experiment; however, training on one task (i.e., the Paced Auditory Serial Addition Task) did not lead to changes in total NASA-TLX workload ratings, whereas training on the other two tasks (i.e., an attentional-switching task and a response-inhibition task) led to decreased ratings of overall workload. That is, the impact of training on workload ratings may be task-dependent. This finding can be explained in terms of the effect of practice on the development of automaticity and expertise. In writing process research, it has been noted that lexical access and graphomotor execution can become automated with practice, while the processes involved in content generation, such as planning and reviewing, do not easily become automatic (Alamargot et al. 2007, p. 15). Thus, features of automaticity in the translation process may need separate investigation (Moors and De Houwer 2006). Obviously, both the linearity assumption and the same ranking assumption require testing. Nonetheless, assumptions are necessarily simplifications and can help “explain phenomena in that part of the empirical world” (Kanazawa 1998, p. 198).
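The same ranking assumption is in principle testable: one could have two groups rank the same set of passages by perceived translation difficulty and quantify their agreement, e.g., with Kendall’s tau. A minimal Python sketch with invented ranks follows; the data and group labels are hypothetical.

from scipy.stats import kendalltau

# Invented difficulty ranks assigned to six passages (1 = easiest)
# by a group of novices and a group of professional translators.
novice_ranks = [1, 2, 3, 4, 5, 6]
professional_ranks = [1, 3, 2, 4, 6, 5]

# A tau close to 1 would be consistent with the same ranking
# assumption; a tau near 0 would speak against it.
tau, p_value = kendalltau(novice_ranks, professional_ranks)
print(f"tau = {tau:.2f}, p = {p_value:.4f}")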

7.7 Discussion and Conclusion

This article has reviewed methods for measuring difficulty in translation and post-editing, and the relevant research. One major reason for measuring difficulty is to avoid cognitive overload and underload and thus help maintain optimal performance. From a pedagogical perspective, one needs deliberate practice in order to become an expert in an area (e.g., translation, post-editing), and one of the conditions is that the task is of appropriate difficulty for the individual (Shreve 2002). In terms of research methods, it is important to find a valid and reliable method for after-the-fact measurement of difficulty for the individual. It seems that the individual’s subjective workload evaluation in each task can serve as a baseline measure in translation and post-editing. Yet, paradoxically, the ultimate objective is to find an objective and automatic way to predict the workload independently of the individuals. Thus, researchers turn to language factors (i.e., source text features, translated/post-edited text features, and their correspondence). A typical research procedure is that researchers count the occurrences of the language factors (or translation errors) in the criterion passages and correlate them with the difficulty values of those passages in order to select the factors that can serve as potential indexes of difficulty in translation or post-editing (Anagnostou and Weir 2007, p. 6).

Translation and post-editing are closely related activities, and post-editors are usually translators. In difficulty research, the two fields share some objectives and can draw on each other’s research methods and findings. Of course, they differ in, among other things, operations and behavior, translation error patterns, translation quality expectations, and research designs. For example, in a translation task, the quality is usually expected to be as good as possible, whereas in post-editing, post-editors are often instructed to use as much of the raw MT output as possible. Differences in quality expectations naturally lead to differences in perceived workload: research indicates that the post-editing effort for all segments examined was lower than the translation effort for those segments (O’Brien 2007a). In terms of test materials, a group of sentences is usually adopted in post-editing studies, whereas full texts are used in translation difficulty research. Reading researchers have found that readers tend to “pause at the location where the difficulty arises...regress to an earlier part of the text, or they postpone solving the problem and move on to the next part of the text hoping to find a solution there” (Vonk and Cozijn 2003). In post-editing studies, however, participants are often presented with one sentence at a time, with revisits not being allowed (Vieira 2014, p. 212).

In this review, we have assumed that all the research findings are equally trustworthy. This is, of course, not the case, although no one would deny that they are all valuable. Many, if not most, studies have a small sample size with only a few participants. Findings from such exploratory studies are difficult to generalize, especially because there are many variables involved, such as text type, domain, language directionality, and professional experience. Hence the need for replication studies.


A significant effort in this direction has been the translation process research database (TPR-DB) developed by the Center for Research and Innovation in Translation and Translation Technology (CRITT) at Copenhagen Business School, which stores Translog-II data from reading, writing, translation, copying and post-editing experiments, as well as CASMACAT translation sessions, for various language combinations (see Carl et al. 2015). There is still some way to go before researchers find an objective way to measure and predict difficulty in translation and post-editing.

Acknowledgment This work was supported by the Young Faculty Research Fund of Beijing Foreign Studies University (Grant No. 2016JT004) and by the Fundamental Research Funds for the Central Universities (Grant No. 2015JJ003).

References Alamargot, D., Chesnet, D., Dansac, C., & Ros, C. (2006). Eye and pen: A new device for studying reading during writing. Behavior Research Methods, 38(2), 287–299. Alamargot, D., Dansac, C., Chesnet, D., & Fayol, M. (2007). Parallel processing before and after pauses: A combined analysis of graphomotor and eye movements during procedural text production. In M. Torrance, L. Van Waes, & D. Galbraith (Eds.), Writing and cognition: Research and applications (pp. 13–29). Amsterdam: Elsevier. Allen, J. (2003). Post-editing. In H. Somers (Ed.), Computers and translation: A translator’s guide (pp. 297–318). Amsterdam: John Benjamins. Alves, F. (2015). Translation process research at the interface. In A. Ferreira & J. W. Schwieter (Eds.), Psycholinguistic and cognitive inquiries into translation and interpreting (pp. 17–39). Amsterdam: John Benjamins. Alves, F., Pagano, A., & da Silva, I. (2014). Effortful text production in translation: A study of grammatical (de)metaphorization drawing on product and process data. Translation and Interpreting Studies, 9(1), 25–51. Anagnostou, N. K., & Weir, G. R. S. (2007). From corpus-based collocation frequencies to readability measure. In G. R. S. Weir & T. Ozasa (Eds.), Texts, textbooks and readability (pp. 34–48). Glasgow: University of Stratchclyde Publishing. Arenas, A. G. (2014). The role of professional experience in post-editing from a quality and productivity perspective. In S. O’Brien, L. W. Balling, M. Carl, M. Simard, & L. Specia (Eds.), Post-editing of machine translation: Processes and applications (pp. 51–76). Newcastle: Cambridge Scholars Publishing. Aziz, W., & Specia, L. (2012). PET: A standalone tool for assessing machine translation through post-editing. Paper presented at the Translating and The Computer 34, London. Aziz, W., Mitkov, R., & Specia, L. (2013). Ranking machine translation systems via post-editing. In I. Habernal & V. Matoušek (Eds.), Text, speech, and dialogue (pp. 410–418). London: Springer. Aziz, W., Koponen, M., & Specia, L. (2014). Sub-sentence level analysis of machine translation post-editing effort. In S. O’Brien, L. W. Balling, M. Carl, M. Simard, & L. Specia (Eds.), Postediting of machine translation: Processes and applications (pp. 170–199). Newcastle: Cambridge Scholars Publishing. Baddeley, A. D., Eysenck, M. W., & Anderson, M. C. (2015). Memory (2nd ed.). London: Psychology Press. Baker, M. (2011). In other words: A coursebook on translation (2nd ed.). New York: Routledge. Balling, L. W., Hvelplund, K. T., & Sjørup, A. C. (2014). Evidence of parallel processing during translation. Meta, 59(2), 234–259. Banerjee, S., & Lavie, A. (2005). METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. Paper presented at the Workshop on Intrinsic and Extrinsic


Evaluation Measures for MT and/or Summarization at the 43rd Annual Meeting of the Association of Computational Linguistics (ACL-2005), Ann Arbor, Michigan. Beatty, J. (1982). Task-evoked pupillary responses, processing load, and the structure of processing resources. Psychological Bulletin, 91(2), 276–292. Bermúdez, J. L. (2014). Cognitive science: An introduction to the science of the mind (2nd ed.). Cambridge: Cambridge University Press. Block, R. A., Hancock, P. A., & Zakay, D. (2010). How cognitive load affects duration judgments: A meta-analytic review. Acta Psychologica, 134(3), 330–343. Bradshaw, J. L. (1968). Load and pupillary changes in continuous processing tasks. British Journal of Psychology, 59(3), 265–271. Broadbent, D. E. (1958). Perception and communication. London: Pergamon Press. Brünken, R. E., Plass, J. L., & Moreno, R. E. (2010). Current issues and open questions in cognitive load research. Cambridge: Cambridge University Press. Busse, L. M., & Buhmann, J. M. (2011). Model-based clustering of inhomogeneous paired comparison data. In M. Pelillo & E. R. Hancock (Eds.), Similarity-based pattern recognition (pp. 207–221). Berlin: Springer. Callison-Burch, C., Fordyce, C., Koehn, P., Monz, C., & Schroeder, J. (2008). Further metaevaluation of machine translation. In Proceedings of the third workshop on statistical machine translation (pp. 70–106). Columbus: Association for Computational Linguistics. Campbell, S. (1999). A cognitive approach to source text difficulty in translation. Target, 11(1), 33–63. Campbell, S. (2000). Choice network analysis in translation research. In M. Olohan (Ed.), Intercultural faultlines: Research models in translation studies: Textual and cognitive aspects (pp. 29–42). Manchester: St. Jerome. Campbell, S., & Hale, S. (1999). What makes a text difficult to translate? Refereed Proceedings of the 23rd Annual ALAA Congress. Retrieved March 1, 2015, from http://www.atinternational. org/forums/archive/index.php/t-887.html Cara, F. (1999). Cognitive ergonomics. In R. A. Wilson & F. C. Keil (Eds.), The MIT encyclopedia of the cognitive sciences (pp. 130–132). Cambridge: MIT Press. Carl, M., Bangalore, S., & Schaeffer, M. (2015). New directions in empirical translation process research: Exploring the CRITT TPR-DB. New York: Springer. Cassenti, D. N., & Kelley, T. D. (2006). Towards the shape of mental workload. Paper presented at the Human Factors and Ergonomics Society Annual Meeting, Boston, MA. Cassenti, D. N., Kelley, T. D., & Carlson, R. A. (2013). Differences in performance with changing mental workload as the basis for an IMPRINT plug-in proposal. Paper presented at the 22nd Annual Conference on Behavior Representation in Modeling and Simulation, Ottawa, Canada. Cassin, B. (2014). Dictionary of untranslatables: A philosophical lexicon. Princeton: Princeton University Press. Chall, J. S., & Dale, E. (1995). Readability revisited: The new Dale-Chall readability formula. Cambridge: Brookline Books. Charlton, S. G. (2002). Measurement of cognitive states in test and evaluation. In S. G. Charlton & T. G. O’Brien (Eds.), Handbook of human factors testing and evaluation (2nd ed., pp. 97–126). Mahwah: Lawrence Erlbaum. Common Sense Advisory. (2014). Ten concepts and data points to remember in 2014. MultiLingual, 1, 37-38. Conway, A. R. A., Cowan, N., Bunting, M. F., Therriault, D. J., & Minkoff, S. R. B. (2002). A latent variable analysis of working memory capacity, short-term memory capacity, processing speed, and general fluid intelligence. 
Intelligence, 30(2), 163–183. Conway, A. R. A., Kane, M. J., Bunting, M. F., Hambrick, D. Z., Wilhelm, O., & Engle, R. W. (2005). Working memory span tasks: A methodological review and user’s guide. Psychonomic Bulletin & Review, 12(5), 769–786. Daneman, M., & Carpenter, P. A. (1980). Individual differences in working memory and reading. Journal of Verbal Learning and Verbal Behavior, 19(4), 450–466.


De Almeida, G. (2013). Translating the post-editor: An investigation of post-editing changes and correlations with professional experience across two Romance languages. PhD thesis. Dublin City University, Dublin. De Waard, D., & Lewis-Evans, B. (2014). Self-report scales alone cannot capture mental workload. Cognition, Technology & Work, 16(3), 303–305. DeKeyser, R. (2003). Implicit and explicit learning. In C. J. Doughty & M. H. Long (Eds.), The handbook of second language acquisition (pp. 313–348). Oxford: Blackwell. Denkowski, M., & Lavie, A. (2011). Meteor 1.3: Automatic Metric for Reliable Optimization and Evaluation of Machine Translation Systems. In Proceedings of the 6th workshop on statistical machine translation (pp. 85–91). Edinburgh: Association for Computational Linguistics. Denkowski, M., & Lavie, A. (2012). TransCenter: Web-based translation research suite. Retrieved April 1, 2015, from https://www.cs.cmu.edu/~mdenkows/pdf/transcenter-amta2012.pdf DePalma, D. A., & Hegde, V. (2010). The market for MT post-editing. Lowell: Common Sense Advisory. DePalma, D. A., & Kelly, N. (2009). The business case for machine translation. Lowell: Common Sense Advisory. Doherty, S., O’Brien, S., & Carl, M. (2010). Eye tracking as an MT evaluation technique. Machine Translation, 24(1), 1–13. Dorr, B., Olive, J., McCary, J., & Christianson, C. (2011). Machine translation evaluation and optimization. In J. Olive, C. Christianson, & J. McCary (Eds.), Handbook of natural language processing and machine translation (pp. 745–843). New York: Springer. Dragsted, B. (2004). Segmentation in translation and translation memory systems: An empirical investigation of cognitive segmentation and effects of integrating a TM system into the translation process. PhD thesis. Copenhagen Business School, Frederiksberg, Denmark. Dragsted, B. (2012). Indicators of difficulty in translation: Correlating product and process data. Across Languages and Cultures, 13(1), 81–98. Embrey, D., Blackett, C., Marsden, P., & Peachey, J. (2006). Development of a human cognitive workload assessment tool: MCA final report. Dalton: Human Reliability Associates. Engle, R. W., & Kane, M. J. (2004). Executive attention, working memory capacity, and a two-factor theory of cognitive control. In B. Ross (Ed.), The psychology of learning and motivation (pp. 145–199). New York: Elsevier. Englund Dimitrova, B. (2005). Expertise and explicitation in the translation process. Amsterdam: John Benjamins. Felice, M., & Specia, L. (2012). Linguistic features for quality estimation. In Proceedings of the 7th workshop on statistical machine translation (pp. 96–103). Montréal: Association for Computational Linguistics. Fiedler, S., Glöckner, A., & Nicklisch, A. (2012). The influence of social value orientation on information processing in repeated voluntary contribution mechanism games: An eye-tracking analysis. In A. Innocenti & A. Sirigu (Eds.), Neuroscience and the Economics of Decision Making (pp. 21–53). London: Routledge. Frankish, K., & Ramsey, W. (Eds.). (2012). The Cambridge handbook of cognitive science. Cambridge: Cambridge University Press. Freeman, G. L., & Giese, W. J. (1940). The relationship between task difficulty and palmar skin resistance. The Journal of General Psychology, 23(1), 217–220. Freixa, J. (2006). Causes of denominative variation in terminology: A typology proposal. Terminology. International Journal of Theoretical and Applied Issues in Specialized Communication, 12(1), 51–77. Fry, E. B. (1988). 
Writeability: The principles of writing for increased comprehension. In B. L. Zakaluk & S. J. Samuels (Eds.), Readability: Its past, present, and future (pp. 77–95). Newark: International Reading Association. Gallupe, R. B., DeSanctis, G., & Dickson, G. W. (1988). Computer-based support for group problem-finding: An experimental investigation. MIS Quarterly, 12(2), 277–296.


Gile, D. (1999). Testing the Effort Models’ tightrope hypothesis in simultaneous interpreting – A contribution. Hermes, 23, 153–172. Gile, D. (2009). Basic concepts and models for interpreter and translator training (Rev. Ed.). Amsterdam: John Benjamin. Goldberg, J. H., & Wichansky, A. M. (2003). Eye tracking in usability evaluation: A practitioner’s guide. In R. Radach, J. Hyona, & H. Deubel (Eds.), The mind’s eye: Cognitive and applied aspects of eye movement research (pp. 493–516). Amsterdam: Elsevier. Göpferich, S., Jakobsen, A. L., & Mees, I. M. (Eds.). (2008). Looking at eyes: Eye-tracking studies of reading and translation processing. Copenhagen: Sammfundslitteratur. Gopher, D. (1994). Analysis and measurement of mental load. In G. d’Ydewalle, P. Eelen, & P. Bertelson (Eds.), International perspectives on psychological science, Vol. II: The state of the art (pp. 265–292). East Sussex: Lawrence Erlbaum. Gray, W. S., & Leary, B. E. (1935). What makes a book readable. Chicago: The University of Chicago Press. Hale, S., & Campbell, S. (2002). The interaction between text difficulty and translation accuracy. Babel, 48(1), 14–33. Hart, S. G., & Staveland, L. E. (1988). Development of NASA-TLX (Task Load Index): Results of empirical and theoretical research. In P. A. Hancock & N. Meshkati (Eds.), Human mental workload (pp. 139–183). Amsterdam: North-Holland. Hummel, K. M. (2002). Second language acquisition and working memory. In F. Fabbro (Ed.), Advances in the neurolinguistics of bilingualism (pp. 95–117). Udine: Forum. Hvelplund, K. T. (2011). Allocation of cognitive resources in translation: An eye-tracking and key-logging study. PhD thesis. Copenhagen Business School, Frederiksberg, Denmark. Ilkowska, M., & Engle, R. W. (2010). Trait and state differences in working memory capacity. In A. Gruszka, G. Matthews, & B. Szymura (Eds.), Handbook of individual differences in cognition (pp. 295–320). New York: Springer. International Ergonomics Association. (2015). Definition and domains of ergonomics. Retrieved March 1, 2015, from http://www.iea.cc/whats/ Ivir, V. (1981). Formal correspondence vs. translation equivalence revisited. Poetics Today, 2(4), 51–59. Jakobsen, A. L. (2011). Tracking translators’ keystrokes and eye movements with Translog. In C. Alvstad, A. Hild, & E. Tiselius (Eds.), Methods and strategies of process research (pp. 37–55). Amsterdam: John Benjamins. Jensen, K. T. (2009). Indicators of text complexity. In S. Göpferich, A. L. Jakobsen, & I. M. Mees (Eds.), Behind the mind: Methods, models and results in translation process research (pp. 61–80). Amsterdam: John Benjamins. Jex, H. R. (1988). Measuring mental workload: Problems, progress, and promises. In P. A. Hancock & N. Meshkati (Eds.), Hman mental workload (pp. 5–38). Amsterdam: North-Holland. Just, M. A., & Carpenter, P. A. (1976). Eye fixations and cognitive processes. Cognitive Psychology, 8(4), 441–480. Just, M. A., & Carpenter, P. A. (1980). A theory of reading: From eye fixations to comprehension. Psychological Review, 87(4), 329–354. Kalsbeek, J. W. H., & Sykes, R. N. (1967). Objective measurement of mental load. Acta Psychologica, 27, 253–261. Kalyuga, S. (2009). Managing cognitive load in adaptive multimedia learning. Hershey: Information Science Reference. Kanazawa, S. (1998). In defense of unrealistic assumptions. Sociological Theory, 16(2), 193–204. Karwowski, W. (2012). The discipline of human factors and ergonomics. In G. Salvendy (Ed.), Handbook of human factors and ergonomics (pp. 1–37). 
Hoboken: Wiley. Kit, C. Y., & Wong, B. T. M. (2015). Evaluation in machine translation and computer-aided translation. In S. W. Chan (Ed.), Routledge encyclopedia of translation technology (pp. 213–236). London: Routledge.


Klare, G. R. (1984). Readability. In P. D. Pearson & R. Barr (Eds.), Handbook of reading research (pp. 681–744). New York: Longman. Koehn, P. (2010). Statistical machine translation. New York: Cambridge University Press. Koehn, P., & Germann, U. (2014). The impact of machine translation quality on human postediting. Paper presented at the Workshop on Humans and Computer-Assisted Translation (HaCaT), Gothenburg, Sweden. Koponen, M. (2012). Comparing human perceptions of post-editing effort with post-editing operations. In Proceedings of the 7th Workshop on Statistical Machine Translation (pp. 181–190). Montreal: Association for Computational Linguistics. Koponen, M., Aziz, W., Ramos, L., & Specia, L. (2012). Post-editing time as a measure of cognitive effort. Paper presented at the AMTA 2012 Workshop on Post-Editing Technology and Practice (WPTP 2012), San Diego. Krings, H. P. (2001). Repairing texts: Empirical investigations of machine translation post-editing processes. (G. Koby, G. Shreve, K. Mischerikow & S. Litzer, Trans.). Kent, Ohio: Kent State University Press. Kuiken, F., & Vedder, I. (2007). Task complexity needs to be distinguished from task difficulty. In M. D. P. GarcíaMayo (Ed.), Investigating tasks in formal language learning (pp. 117–135). Clevedon: Multilingual Matters. Lacruz, I., & Shreve, G. M. (2014). Pauses and cognitive effort in post-editing. In S. O’Brien, L. W. Balling, M. Carl, M. Simard, & L. Specia (Eds.), Post-editing of machine translation: Processes and applications (pp. 246–272). Newcastle: Cambridge Scholars Publishing. Lacruz, I., Shreve, G. M., & Angelone, E. (2012). Average pause ratio as an indicator of cognitive effort in post-editing: A case study. Paper presented at the AMTA 2012 Workshop on PostEditing Technology and Practice (WPTP 2012), San Diego. Liu, M., & Chiu, Y.-H. (2009). Assessing source material difficulty for consecutive interpreting: Quantifiable measures and holistic judgment. Interpreting, 11(2), 244–266. Liu, Q., & Zhang, X. (2015). Machine translation: General. In S. W. Chan (Ed.), Routledge encyclopedia of translation technology (pp. 105–119). London: Routledge. Lusk, M. M., & Atkinson, R. K. (2007). Animated pedagogical agents: Does their degree of embodiment impact learning from static or animated worked examples? Applied Cognitive Psychology, 21(6), 747–764. Martins, D. B., & Caseli, H. (2015). Automatic machine translation error identification. Machine Translation, 29(1), 1–24. Mesa-Lao, B. (2013). Introduction to post-editing–The CasMaCat GUI. Retrieved March 1, 2015 from http://bridge.cbs.dk/projects/seecat/material/hand-out_post-editing_bmesa-lao.pdf Meshkati, N. (1988). Toward development of a cohesive model of workload. In P. A. Hancock & N. Meshkati (Eds.), Human mental workload (pp. 305–314). Amsterdam: North-Holland. Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63, 81–97. Mishra, A., Bhattacharyya, P., & Carl, M. (2013, August 4–9). Automatically predicting sentence translation difficulty. Paper presented at the 51st Annual Meeting of the Association for Computational Linguistics, Sofia, Bulgaria. Moors, A., & De Houwer, J. (2006). Automaticity: A theoretical and conceptual analysis. Psychological Bulletin, 132(2), 297–326. Moran, J., & Lewis, D. (2011). Unobtrusive methods for low-cost manual evaluation of machine translation. Retrieved April 1, 2015 from http://lodel.irevues.inist.fr/tralogy/index.php? 
id=141&format=print Moray, N. (1977). Models and measures of mental workload. In N. Moray (Ed.), Mental workload: Its theory and measurement (pp. 13–21). New York: Springer. Muñoz Martín, R. (2010). Leave no stone unturned: On the development of cognitive translatology. Translation and Interpreting Studies, 5(2), 145–162. Muñoz Martín, R. (2012). Just a matter of scope. Translation Spaces, 1(1), 169–188.


Muñoz Martín, R. (2014). A blurred snapshot of advances in translation process research. MonTI. Special Issue (Minding Translation), 1, 49–84. Newell, W. H. (2001). A theory of interdisciplinary studies. Issues in Integrative Studies, 19, 1–25. Nimon, K. F. (2012). Statistical assumptions of substantive analyses across the general linear model: A mini-review. Frontiers in Psychology, 3, 1–5. Nkwake, A. M. (2013). Working with assumptions in international development program evaluation. New York: Springer. Nord, C. (2005). Text analysis in translation: Theory, methodology, and didactic application of a model for translation-oriented text analysis (2nd ed.). Amsterdam: Rodopi. O’Brien, S. (2004). Machine Translatability and Post-Editing Effort: How do they relate? Paper presented at the 26th Translating and the Computer Conference (ASLIB), London. O’Brien, S. (2005). Methodologies for measuring the correlations between post-editing effort and machine translatability. Machine Translation, 19(1), 37–58. O’Brien, S. (2006). Pauses as indicators of cognitive effort in post-editing machine translation output. Across Languages and Cultures, 7(1), 1–21. O’Brien, S. (2007a). An empirical investigation of temporal and technical post-editing effort. Translation and Interpreting Studies, 2(1), 83–136. O’Brien, S. (2007b). Eye-tracking and translation memory matches. Perspectives, 14(3), 185–205. O’Brien, S. (2010). Controlled language and readability. In G. M. Shreve & E. Angelone (Eds.), Translation and cognition (pp. 143–165). Amsterdam: John Benjamins. O’Brien, S. (2011). Cognitive explorations of translation. London: Continuum. O’Brien, S., Balling, L. W., Carl, M., Simard, M., & Specia, L. (Eds.). (2014). Post-editing of machine translation: Processes and applications. Newcastle: Cambridge Scholars Publishing. O’Donnell, R. D., & Eggemeier, F. T. (1986). Workload assessment methodology. In K. R. Boff, L. Kaufman, & J. P. Thomas (Eds.), Handbook of perception and human performance, Vol. II: Cognitive processes and performance (pp. 42/41–42–49). New York: Wiley. Onwuegbuzie, A. J., & Daniel, L. G. (1999, November 17–19). Uses and misuses of the correlation coefficient. Paper presented at the Annual Meeting of the Mid-South Educational Research Association, Point Clear, AL. Ortiz-Martínez, D., Sanchis-Trilles, G., Casacuberta, F., Alabau, V., Vidal, E., Benedı, J.-M . . . González, J. (2012). The CASMACAT project: The next generation translator’s workbench. Paper presented at the 7th Jornadas en Tecnologıa del Habla and the 3rd Iberian SLTech Workshop (IberSPEECH), Madrid. Orzechowski, J. (2010). Working memory capacity and individual differences in higher-level cognition. In G. Matthews & B. Szymura (Eds.), Handbook of individual differences in cognition (pp. 353–368). New York: Springer. Paas, F. G. W. C. (1992). Training strategies for attaining transfer of problem-solving skill in statistics: A cognitive-load approach. Journal of Educational Psychology, 84(4), 429–434. Paas, F. G. W. C., & Van Merriënboer, J. J. G. (1994a). Instructional control of cognitive load in the training of complex cognitive tasks. Educational Psychology Review, 6(4), 351–371. Paas, F. G. W. C., & Van Merriënboer, J. J. G. (1994b). Variability of worked examples and transfer of geometrical problem-solving skills: A cognitive-load approach. Journal of Educational Psychology, 86(1), 122–133. Paas, F. G. W. C., Ayres, P., & Pachman, M. (2008). Assessment of cognitive load in multimedia learning. In D. H. Robinson & G. 
Schraw (Eds.), Assessment of cognitive load in multimedia learning: Theory, methods and applications (pp. 11–35). Charlotte, NC: Information Age Publishing. Palumbo, G. (2009). Key terms in translation studies. London: Continuum. Papineni, K., Roukos, S., Ward, T., & Zhu, W.-J. (2002). BLEU: A method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics (pp. 311–318). University of Pennsylvania, Philadelphia: Association for Computational Linguistics.


Poole, A., & Ball, L. J. (2006). Eye tracking in HCI and usability research. In C. Ghaoui (Ed.), Encyclopedia of human computer interaction (pp. 211–219). London: Idea Group. Rayner, K. (1998). Eye movements in reading and information processing: 20 years of research. Psychological Bulletin, 124(3), 372–422. Rayner, K., & Pollatsek, A. (1989). Psychology of reading. Hillsdale: Lawrence Erlbaum. Redfield, C. L. (1922). Mental levels. Journal of Education, 95(8), 214–216. Reichle, E. D., Reineberg, A. E., & Schooler, J. W. (2010). Eye movements during mindless reading. Psychological Science, 21(9), 1300–1310. Robinson, P. (2001). Task complexity, cognitive resources, and syllabus design: A triadic framework for examining task influences on SLA. In P. Robinson (Ed.), Cognition and second language instruction (pp. 287–318). Cambridge: Cambridge University Press. Rosa, R. (2014). Depfix, a tool for automatic rule-based post-editing of SMT. The Prague Bulletin of Mathematical Linguistics, 102(1), 47–56. Rost, M. (2006). Areas of research that influence L2 listening instruction. In E. Usó Juan & A. Martínez Flor (Eds.), Current trends in the development and teaching of the four language skills (pp. 47–74). Berlin: Mouton de Gruyter. Schaeffer, M., & Carl, M. (2013). Shared representations and the translation process: A recursive model. Translation and Interpreting Studies, 8(2), 169–190. Schaeffer, M., & Carl, M. (2014). Measuring the cognitive effort of literal translation processes. Paper presented at the 14th Conference of the European Chapter of the Association for Computational Linguistics, Gothenburg, Sweden. Schilperoord, J. (1996). It’s about time: Temporal aspects of cognitive processes in text production. Amsterdam: Rodopi. Schleiermacher, F. (2012). On the different methods of translating (S. Bernofsky, Trans.). In L. Venuti (Ed.), The translation studies reader (3rd ed., pp. 43–63). London: Routledge. Sharmin, S., Špakov, O., Räihä, K.-J., & Jakobsen, A. L. (2008). Where on the screen do translation students look while translating, and for how long? In S. Göpferich, A. L. Jakobsen, & I. M. Mees (Eds.), Looking at eyes: Eye-tracking studies of reading and translation processing (pp. 31–51). Copenhagen: Samfundslitteratur. Shreve, G. M. (2002). Knowing translation: Cognitive and experiential aspects of translation expertise from the perspective of expertise studies. In A. Ruiccardi (Ed.), Translation studies: Perspectives on an emerging discipline (pp. 150–173). Cambridge: Cambridge University Press. Shreve, G. M., & Angelone, E. (Eds.). (2010). Translation and cognition. Amsterdam: John Benjamins. Silveira, F. d. S. D. d. (2011). Working memory capacity and lexical access in advanced students of L2 English. PhD thesis. Universidade Federal do Rio Grande do Sul, Brazil. Retrieved from http://www.lume.ufrgs.br/bitstream/handle/10183/39423/000824076.pdf?sequence¼1 Sirén, S., & Hakkarainen, K. (2002). Expertise in translation. Across Languages and Cultures, 3(1), 71–82. Snover, M., Dorr, B., Schwartz, R., Micciulla, L., & Makhoul, J. (2006). A study of translation edit rate with targeted human annotation. In Proceedings of the 7th Conference of the Association for Machine Translation in the Americas (pp. 223–231). Cambridge, MA. Sousa, S. C. M. d., Aziz, W. F., & Specia, L. (2011). Assessing the post-editing effort for automatic and semi-automatic translations of DVD subtitles. In Proceedings of the International Conference of Recent Advances in Natural Language Processing (pp. 97–103). 
Bulgaria. Specia, L. (2011). Exploiting objective annotations for measuring translation post-editing effort. Paper presented at the 15th Conference of the European Association for Machine Translation, Leuven. Specia, L., Raj, D., & Turchi, M. (2010). Machine translation evaluation versus quality estimation. Machine Translation, 24(1), 39–50.


Stymne, S. (2011). Blast: A tool for error analysis of machine translation output. Paper presented at the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, Portland, Oregon. Sun, S. (2011). Think-aloud-based translation process research: Some methodological considerations. Meta, 56(4), 928–951. Sun, S. (2015). Measuring translation difficulty: Theoretical and methodological considerations. Across Languages and Cultures, 16(1), 29–54. Sun, S., & Shreve, G. M. (2014). Measuring translation difficulty: An empirical study. Target, 26(1), 98–127. Sweller, J. (1988). Cognitive load during problem solving: Effects on learning. Cognitive Science, 12(2), 257–285. Tatler, B. W., Kirtley, C., Macdonald, R. G., Mitchell, K. M., & Savage, S. W. (2014). The active eye: Perspectives on eye movement research. In M. Horsley, M. Eliot, B. A. Knight, & R. Reilly (Eds.), Current trends in eye tracking research (pp. 3–16). London: Springer. TAUS. (2010). MT post-editing guidelines. Retrieved March 1, 2015, from https://www.taus.net/ think-tank/best-practices/postedit-best-practices/machine-translation-post-editing-guidelines TAUS. (2014). Post-editing: Championing MT. Retrieved March 1, 2015 from https://postedit.taus. net/ Thorndike, E. L., Bregman, E. O., Cobb, M. V., & Woodyard, E. (1927). The measurement of intelligence. New York: Bureau of Publications, Columbia University. Tirkkonen-Condit, S. (2005). The monitor model revisited: Evidence from process research. Meta, 50(2), 405–414. Tokowicz, N., Kroll, J. F., De Groot, A. M. B., & Van Hell, J. G. (2002). Number-of-translation norms for Dutch – English translation pairs: A new tool for examining language production. Behavior Research Methods, Instruments, & Computers, 34(3), 435–451. Tomporowski, P. D. (2003). Performance and perceptions of workload among young and older adults: Effects of practice during cognitively demanding tasks. Educational Gerontology, 29(5), 447–466. Turner, M. L., & Engle, R. W. (1989). Is working memory capacity task dependent? Journal of Memory and Language, 28(2), 127–154. Vasconcellos, M. (1987). A comparison of MT post-editing and traditional revision. In K. Kummer (Ed.), Proceedings of the 28th annual conference of the American Translators Association (pp. 409-416). Medford: Learned Information. Vidulich, M. A., & Tsang, P. S. (2012). Mental workload and situation awareness. In G. Salvendy (Ed.), Handbook of human factors and ergonomics (4th ed., pp. 243–273). Hoboken: Wiley. Vieira, L. N. (2014). Indices of cognitive effort in machine translation post-editing. Machine Translation, 28(3-4), 187–216. Vonk, W., & Cozijn, R. (2003). On the treatment of saccades and regressions in eye movement measures of reading time. In J. Hyona, R. Radach, & H. deubel (Eds.), The mind’s eye: Cognitive and applied aspects of eye movement research (pp. 291–312). London: Elsevier. Wierwille, W. W., & Williges, B. H. (1980). An annotated bibliography on operator mental workload assessment (Naval Air Test Center Report No. SY-27R-80). Patuxent River: Naval Air Test Center, System Engineering Test Directorate. Wilson, R. A., & Keil, F. C. (Eds.). (1999). The MIT encyclopedia of the cognitive sciences. Cambridge: MIT Press. Wilss, W. (1982). The science of translation: Problems and methods. Tübingen: Gunter Narr. Wisniewski, G., Kübler, N., & Yvon, F. (2014). A corpus of machine translation errors extracted from translation students exercises. 
Paper presented at the International Conference on Language Resources and Evaluation (LREC), Iceland. http://www.lrec-conf.org/proceedings/ lrec2014/pdf/1115_Paper.pdf Woodrow, H. (1936). The measurement of difficulty. Psychological Review, 43(4), 341–365. Zipf, G. K. (1935). The psycho-biology of language: An introduction to dynamic philology. Boston: Houghton Mifflin.

Chapter 8

Translation Competence as a Cognitive Catalyst for Multiliteracy – Research Findings and Their Implications for L2 Writing and Translation Instruction

Susanne Göpferich

8.1 Introduction

Translating from the L1 into the L2 has been rejected in the foreign-language instruction paradigms that followed the grammar-translation method and has more or less been banned from L2 teaching (cf. Cook 2010; Liu 2009; Turnbull and Dailey-O’Cain 2009a, p. 3 ff).1 One of the reasons is the assumption that maximum exposure to the L2 is the best way to learn it and that the L1, if resorted to, interferes negatively with L2 development. This assumption may explain why the use of the L1 and translation had received little attention in L2 writing research until the 1980s (Rijlaarsdam 2002, p. ix; Liu 2009, p. 12), even though there is a lack of evidence that resorting to the L1 in L2 language production is harmful (Cook 2010, p. 99). Empirical investigations of translation in L2 writing processes conducted since then have revealed, however, that translating from the L1 is a process that occurs naturally in L2 writing (see, e.g., Cohen and Brooks-Carson 2001; Cumming 1989; Liu 2009; Qi 1998; Roca De Larios et al. 1999; Sasaki 2004; Wang and Wen 2002). Moreover, not only negative but also positive transfer from L1 to L2 composing processes has been observed, especially at the stages prior to formulating the actual text, such as idea generation, organization and elaboration (Cumming 1989; Kobayashi and Rinnert 1992; Uzawa 1994; Uzawa and Cumming 1989; Woodall 2002). In a wider context, it has been observed that students rely on their first

1 Fries (1945), for example, recommended avoiding translation or seeking L2 lexical equivalents for L1 expressions until learners had established a direct connection between what they wanted to express and adequate formulations for it in the L2. He was convinced that “translation and ‘word equivalents’, which seem to save time at the beginning, really cause delay in the long run and may if continued even set up such habits and confusion as to thwart any control of the new language” (Fries 1945, p. 6).


language even when learning via a second language (Levi-Keren 2008; Logan-Terry and Wright 2010). These findings warrant taking a closer look at the types of translation that can be observed during L2 writing, the purposes for which they are used and the impact they have on L2 text quality. When following this more differentiated approach, we also have to take into account the different purposes for which texts are composed in the writers’ L2. L2 writing may have the function of writing-to-learn, which should be the case, for example, when students who are non-native speakers of English are expected to write their term papers and theses in their L2 English, for example in programmes of English language, literature and culture. In these cases, the predominant function of writing is – or at least should be – an epistemic function. Writing in the L2, however, may also have the function of learning-to-write, which is the case in foreign language classrooms, whose primary objective is to foster students’ L2 proficiency.2

In this article, research evidence based on a literature review will be provided for four theses:

1. Suppression of L1 use in L2 writing may hamper students’ creativity and be detrimental to the epistemic or knowledge-constructing function of writing (Galbraith 1999). If this thesis holds true, scholars will experience disadvantages when forced to write in an L2, in most cases English as a lingua franca, without having acquired strategies to fruitfully draw on their L1 for this purpose as well.
2. Translation has both advantages and disadvantages for L2 writing pedagogy. These depend on the function for which translation is used and on the writer’s translation competence.
3. Writers of academic texts, even if they only write in one language, must at least be able to read and draw on publications in one or more other languages. This requires a competence termed transliteracy. Transliteracy includes translation competence.
4. With increasing L2 proficiency, translation processes, which occur naturally in L2 writing, decrease in number and are shifted to the language-distant parts of the writing process, i.e., processes that Levelt (1989, p. 9 ff), with regard to speaking, termed “conceptualizing” (in contrast to “formulating”).

In Sect. 8.2, research evidence will be provided for these theses, followed by the didactical implications that the findings have for L2 writing and translation instruction.

2 This is a task at the interface of foreign-language teaching and literacy development, two domains that, as Martin-Jones and Jones (2000) criticize, have not taken account of each other to a sufficient extent.

8.2 Research Evidence for the Four Theses and Their Didactical Implications

8.2.1 The Negative Impact of L1 Suppression and How to Circumvent It3

The potential negative impact of L1 suppression can be analyzed in a wider and a narrower context. The narrower context is limited to writing processes themselves, whereas the wider context includes the communicative environment in which writing is embedded, for example, the educational environment. In this wider context, English is the lingua franca in an increasing number of disciplines and is gaining in importance in tertiary education. Even in countries where English is not an official language, such as Germany, English-medium instruction (EMI), or English as “the language of teaching and learning (LoTL)” (van der Walt 2013), has started to replace programmes taught in the national language(s) (cf. Björkmann 2013).4 According to Wächter and Maiworm (2014), the number of programmes taught entirely in English at European institutions of higher education has increased more than tenfold since 2002 and currently amounts to approximately 8000. In German universities alone, 1711 international programmes were offered in 2015, of which 1430 were taught in English (DAAD 2015). At many universities throughout the world, students pursuing degrees in the fields of English literature, culture and linguistics, to name just one example, are required to write their term papers, even their first ones, and their final theses in English.

This brings us to the narrower context. If English is not their L1, such students are immediately confronted with two concurrent challenges: the challenge of academic writing, which itself requires them to adapt to a specific form of discourse with which they are not yet familiar in either their L1 or English, and the additional challenge of first having to do this in their L2. Against this background, the question arises as to whether the requirement of writing academic texts in the L2, before having mastered this skill in the L1, leads to such an increase in task complexity that it overburdens students, which could have consequences reaching beyond the poorer linguistic quality that L2 compositions inevitably display. Having students write term papers in their L2 may further result in a less profound analysis of the subject matter, not to mention a less profound treatment of the L2 literature associated with the subject matter. These potential consequences of requiring students to write academic texts in their L2 may, in turn, be detrimental to the epistemic function of writing and, ultimately, to cognitive development (Göpferich and Nelezen 2014). This fear is nourished by findings from studies comparing the academic achievement of international students at US colleges and universities who had completed high school in their non-English first-language environments with that of L2 students

3 Part of the research overview presented in Sect. 8.2.1 is based on Göpferich and Nelezen (2014).
4 For reasons, see Knapp (2014, p. 167).


studying in their L2 who had graduated from US high schools. These studies consistently show that the former outperformed the latter (Muchisky and Tangren 1999). As Leki et al. (2008, p. 19) conclude with reference to Bosher and Rowekamp (1992) and Cumming (2001), “the best predictor of academic success in college for these students is the number of years spent in high school in L1 before immigration”.5 Similar observations were made with migrant children (Finnish children in Sweden and Spanish-speaking Mexican children in the USA): those migrant children who had attended school in their country of origin before immigration outperformed migrant children who began school directly in their host country (cf. the review in Cummins 1996).

Against this background, the question arises whether this negative impact can be circumvented by resorting to the L1 for epistemic purposes (cf. e.g., Lange 2012). When answering this question, numerous factors have to be taken into account, among them L1 and L2 proficiency levels, L1 writing ability, the type of the writer’s bilinguality and the level of translation competence that the person writing in the L2, or translating into the L2, has acquired. The relevance of these factors becomes obvious when comparing L1 and L2 writing processes.

Studies comparing L1 and L2 text production have indicated that L2 text production processes, aside from the additional lexical and grammatical challenges associated with foreign-language production in general, are strikingly similar to L1 text production processes. This suggests that there is a “composing competence” (Krapels 1990) which exists across languages and is at least partially independent of L2 language proficiency and transferable between languages (cf. e.g., Cumming 1989; Cummins 1996; Hirose and Sasaki 1994, p. 216 ff.; Sasaki 2000). As Arndt (1987, p. 259) points out: “It is the constraints of the composing activity, or of the discourse type, which creates problems for students writing in L2, not simply difficulties with the mechanics of the foreign language.” However, in a study in which he surveyed university students about their own L2 writing processes, Silva (1992) observed that precisely these difficulties with lexis and grammar, as well as interference between the L1 and L2, are so cognitively demanding that not only the form but also the content of L2 written work, and thus the epistemic function of writing, suffer. This leads to texts that are “less sophisticated” and express the ideas of the writer less effectively (Silva 1992, p. 33). Devine et al. (1993) came to a similar conclusion from the results of their study comparing the written compositions of 20 first-year college students in the United States, half of whom had English as their L1 and half as their L2. These subjects were also required to complete a questionnaire addressing their writing processes in order to investigate the metacognitive writing models used for L1 and L2 composition. The students writing in their L2 reported having to omit certain content from their texts when they felt

5 The fact that international students who had completed high school in their L1 environments outperformed international students who had already completed high school in their L2 in the USA, however, does not necessarily have language-related causes but could also be due to differences in the educational system and the quality of teaching.


they did not possess the linguistic means to express this content correctly, a problem the L1 writers did not have.6 Unsurprisingly, the L1 essays were also rated more highly than their L2 counterparts (see also the literature review by Cumming 2001; Sasaki 2002, p. 51 ff). Such findings support Thesis 1, that the epistemic benefits of writing are less pronounced when this writing takes place in the L2. They also warrant the assumption that the epistemic function of writing can only be fully exploited, both in the L1 and in the L2, if students have achieved a certain minimum fluency with regard to lower-order processes, for example, at the lexical and grammatical levels, when writing in the respective language, because otherwise these lower-level processes will use cognitive capacity at the expense of higher-level processes (Cumming 1989, p. 126). Thus, offering students compensatory writing courses, both in the L1 and the L2, which address lexical and grammatical decisions can be assumed to be useful for preparing them to derive the maximum benefit from discipline-specific courses, where writing, either in the L1 or the L2, is used as a means of more profound reflection.

Several studies have established a correlation between the level of L2 proficiency and the varying amounts of attention given to different aspects of the writing process. From an analysis of English and French texts produced by native English-speaking university students while thinking aloud, Whalen and Menard (1995) found that L2 writers with insufficient L2 competence tend to neglect important macro-level writing processes, including planning, evaluation and revision, in order to focus on lower-level processes. Schoonen et al. (2003) provide further support for this finding from their study in which 281 8th-grade pupils composed texts in both their L1 and L2, the quality of which was then compared with their overall language competency:

The L2 writer may be so much involved in these kinds of ‘lower-order’ problems of word finding and grammatical structures that they may require too much conscious attention, leaving little or no working memory capacity free to attend to higher-level or strategic aspects of writing, such as organizing the text properly or trying to convince the reader of the validity of a certain view. The discourse and metacognitive knowledge that L2 writers are able to exploit in their L1 writing may remain unused, or underused, in their L2 writing. (Schoonen et al. 2003, p. 171)

Roca De Larios et al. (2006) arrived at a similar interpretation after analyzing the L1 and L2 (English) texts and accompanying think-aloud protocols (TAPs) of 21 Spanish-speaking subjects who were separated into three groups based on their levels of English proficiency:

In L2 writing [...] the patterns emerging from the data indicate that the lower the proficiency level of the writer, the more he or she engages in compensating for interlanguage deficits vis-à-vis ideational or textual occupations. (Roca De Larios et al. 2006, p. 110)

6 Cf. also Gantefort and Roth (2014), who, with reference to Börner (1989), speak of a discrepancy between L2 writers’ desire to express something and their ability to do so (“Gefälle zwischen Ausdruckswillen und Ausdrucksfähigkeit”), with their L2 text production output lagging behind the development of their conceptual capacities (“Nachhinken der fremdsprachlichen Produktion hinter der kognitiven Entwicklung”).


In accordance with this, Roca De Larios et al. (1999) also found language-proficiency-dependent differences in the frequencies with which different types of restructuring, i.e., searches for alternative syntactic plans for linguistic, ideational or textual reasons, are used in formulation processes. In their study, more language-proficient writers used restructuring more for stylistic, ideational and textual reasons, whereas less proficient writers had to resort to restructuring more for compensatory purposes due to a lack of linguistic resources in the L2. If a certain minimum level of L2 proficiency has not yet been achieved, this low level of L2 proficiency can be assumed to lead to less precise texts, thus hampering the epistemic function of writing.

The results reported warrant the assumption that L2 writing processes only strongly resemble L1 writing processes after a certain L2 competence threshold level has been reached (cf. Cumming 1989, p. 126; Kohro 2009, p. 16; Roca De Larios et al. 1999; Sasaki and Hirose 1996, p. 156). Below this threshold level, students’ L2 proficiency seems to have a major impact on their L2 writing ability (Hirose and Sasaki 1994, p. 217). In this connection, Sasaki (2004) raised the question of whether the writing processes of highly proficient L2 writers ultimately resemble those of highly proficient L1 writers or whether L2 writing processes generally, even at the highest competence levels, differ from the respective processes in the L1, at least with regard to certain features. Recent studies support multicompetence theory, according to which the acquisition of another language affects the knowledge of each language acquired previously, so that a multilingual mind is not just the sum of two or more monolingual minds (Cook 2002, 2008; Kecskes and Papp 2000). Multicompetence theory thus gives rise to the assumption that L2 writing processes do not gradually become more similar to L1 writing processes but that an increase in L2 writing proficiency also changes L1 writing, leading to a different multilingual system from which epistemic benefits may be derived, but which also involves potential risks for L1 competence. In this context, Ortega and Carson (2010) point out a desideratum:

We need further research that helps us understand how the development of L2 composing competence interacts with, destabilizes, and most likely transforms the nature of L1 composing competence, and how the experiences afforded by different social contexts shape these processes. Perhaps the most salient observation to date in this area is that erosion and even loss of L1 composing capacities may be expected in certain contexts. (Ortega and Carson 2010, p. 63)

What has to be taken into account in many of the above-mentioned studies is that the writers' language proficiency might have been confounded to a certain extent with their writing expertise. Cumming (1989) was among the first to consider writing expertise and L2 language proficiency separately. In a study with 23 participants representing different combinations of writing expertise and L2 proficiency, both of which were controlled for, Cumming (1989, p. 123) came to the conclusion that writing expertise is independent of language proficiency, again once the latter has reached a certain threshold level.7 He found that "writing expertise is a central cognitive ability – with second-language proficiency adding to it, facilitating it in a new domain, and possibly enhancing it". Furthermore, he found that the average writers' performance benefited much more from higher ESL proficiency than that of participants with either high or low levels of writing expertise (Cumming 1989, p. 105).

Differences between the writing processes of more and less proficient L2 writers that can also be observed between more and less proficient L1 writers include that more proficient writers compose longer and more complex texts and write faster and more fluently than novices (Sasaki 2000, pp. 271, 282). More proficient writers also plan more at a global level, whereas less proficient writers plan more at a local level (Sasaki 2000, pp. 273 f., 278). In L2 writing, however, novices often stop to translate ideas they have generated from their L1 into their L2 (English), whereas expert L2 (English) writers rather stop to refine their English expressions (Sasaki 2000, p. 282).

In accordance with Cummins' (1981, 1996) interdependence hypothesis, it can be assumed that a cognitive-academic language proficiency (CALP)8 that has been acquired in one language can be transferred to another if a threshold level of proficiency in that other language has been achieved. If this threshold level has not been achieved, the lack of language proficiency may hamper the exploitation of this CALP in the L2 even in language-distant processes such as planning and organizing texts. Resorting to the L1 for such processes even in L2 writing may help to avoid such shortcomings. Making use of a CALP acquired through the L1 in another language, however, requires active practising of academic writing in this other language, "adequate exposure" to it (Cummins 1981).9 At the same time, academic writing skills in the L1 may decline if writers write exclusively in their L2. This decline may occur in spite of the fact that the CALP is retained, since the decline is limited to the language-close levels of writing (Carson and Kuehn 1992, p. 163). Carson and Kuehn (1992, p. 176 f.) also assume a writing aptitude, which "imposes a ceiling on writing development in the L1". This ceiling, they assume, affects writers in whichever language they compose. For writing skills transfer from the L1 to the L2, the following ensues from this:

Given the appropriate educational context, good L1 writers will be good L2 writers, but poor L1 writers may not rise above the level of their L1 abilities to become better L2 writers. If poor L1 writing results from lack of L1 educational experience and there is writing aptitude, then there is potential for good L2 writing to develop. (Carson and Kuehn 1992, p. 177)

7 Cf. Sasaki and Hirose (1996), who found that L2 proficiency accounted for 52% of their participants' L2 writing ability variance.
8 Carson and Kuehn (1992, p. 159) term this general competence "generalized discourse competence". It is "the ability to produce context-reduced academic prose in both L1 and L2 as a function of common underlying cognitive-academic language proficiency".
9 Cf. Cummins (1996): "[W]riting expertise is common across languages but for effective writing performance in an L2 both expertise and specific knowledge of the L2 are required. As expressed by Cumming: 'the present research has identified the empirical existence of certain cognitive abilities entailed in writing expertise – problem solving strategies, attention to complex aspects of writing while making decisions, and the qualities of content and discourse organization in compositions – which are not related directly to second language proficiency but which appear integral to effective performance in second language writing. (1987, p. 175)'"

The potential risks for the individual academic development of students when they are exposed to an L2 for academic purposes before they have achieved a certain L2 proficiency level are also corroborated by studies which analyzed the effect of English-medium instruction (EMI) in mathematics and engineering. They point to negative effects of abandoning instruction in the local languages in favour of English. One longitudinal study (Klaasen 2001), however, found that this negative effect of learning in English diminishes within 1 year (on the effects of EMI on learning, see the research overview in Björkmann 2013, p. 22 ff). This might point to a threshold level of English proficiency that could be necessary to benefit from EMI, or at least not to suffer from it.

It must be conceded that there are also studies which found no negative effect of EMI. However, these studies did not investigate the impact of EMI on students' language and academic skills themselves but the subjective impressions that persons involved in EMI had of its effects. For example, in a study conducted at the University of the Basque Country, where Basque and Spanish are official languages and English-medium courses are also offered, data from 785 members of the university (teachers, students and administrative staff) were collected on their perceptions of internationalization, multilingualism and the implementation of English-medium courses using questionnaires, discussion groups and interviews. Lasagabaster (2015, p. 59) reports on this study that students "did not consider that EMI has any negative impact on their content learning, despite the fact that they acknowledged that their productive skills suffered from some limitations". If their productive skills suffered, the epistemic function of their writing may have suffered as well. If the epistemic function of their writing suffered, this must have had a negative impact on their content learning, even if the students themselves were not aware of it.

Dafouz et al. (2014) compared Spanish university students' academic performance in an EMI Business Administration programme with that of students taught in Spanish in this same programme on the basis of the grades they obtained on coursework and final exams in their first year. They found no statistically significant difference in performance. From this finding they concluded that the language of instruction did not negatively affect the students' performance in the disciplinary subjects under investigation. However, the lack of differences observed in this study may have been caused by differences in the assessment criteria that were applied to the two cohorts of students. The authors admit that "differences as to the precise criteria applied in the final evaluation of students' performance seem to largely depend on individual teacher decisions" and that, despite faculty meetings to reconcile assessment criteria, "a general consensus on how to assess individual student performance is still far from being a reality" (Dafouz et al. 2014, p. 229). Furthermore, it has to be taken into account that the results of this study reported so far cover the first year of instruction only. An impact of EMI on cognitive-academic development may only show at later stages of the programme of studies, when students need to draw on a larger knowledge base. Since the study will be extended into a longitudinal one, it might shed light on the impact of increasing subject-knowledge complexity and, at the same time, increasing language proficiency on the academic performance of the two cohorts over time.

We do not yet have contrastive studies which provide an answer to the question whether the quality of English academic texts composed by, for example, German students who had acquired academic literacy in German before learning to write academic texts in English exceeds that of German students whose academic writing socialization took place in English directly. The findings from the studies quoted above concerning migrant children and students, whose achievement was better when they were schooled or socialized academically in their L1 first, however, suggest that learning to write academically in one's L1 first might be the better option. Another argument in favour of academic writing socialization in students' L1 is that academic writing in the L1 might never be learned once academic writing in the L2 English has become the default (cf. Casanave 1998; Flowerdew 2000; Shi 2003), thus leading to domain losses in the non-English national languages (cf. also Tang 2012, p. 228).10

10 The differences between L1 and L2 writing addressed so far have developmental origins. Apart from such developmental differences, culture-specific differences have to be taken into account as well, which are beyond the scope of this article. For further information on this aspect, see Göpferich (2015b, p. 225 f).

A distinction that has been neglected in most of the studies reviewed so far is that between non-academic writing (e.g., writing narrative texts) and academic writing. Academic writing, with its epistemic function, places additional cognitive load on the writer. This additional cognitive load might have an effect on the transferability of writing competence acquired in the L1 to L2 writing. Writers who are proficient non-academic writers in their L1 may be able to transfer this non-academic writing proficiency to their L2, whereas such a transfer might not be possible for academic writing, where writers might still need more practice in their L1 before they have achieved a level of competence that allows transfer to the L2. If this hypothesis holds true, at least two threshold levels have to be distinguished: an L1 writing competence threshold that has to be reached in order to be able to transfer one's general writing abilities to the L2, and an L1 academic writing competence threshold that has to be reached in order to benefit from a general academic writing competence in the L2.11 From the studies by Steinhoff (2007), Pohl (2007) and Beaufort (2007), we know that students enter university equipped with a general writing competence which does not yet include academic writing competence (cf. also Knapp and Timmermann 2012, p. 43). The latter only gradually develops in the course of the students' academic socialization, and this might be easier when it happens in the students' L1.

11 These thresholds must not be confused with Cummins' (1979) BICS (Basic Interpersonal Communication Competence) and CALP (Cognitive Academic Language Proficiency). Both thresholds I refer to lie within the range of CALP.

To sum up: with regard to students in EMI, whose English proficiency often lags behind their proficiency in their mother tongue, there is strong evidence that writing in the L2 must have a negative impact on the epistemic function of writing. This negative impact results at least from lower-level processes still requiring a relatively large amount of cognitive capacity in L2 writing, capacity which is then not available for higher-order processes such as goal setting and idea generation (Whalen and Menard 1995; Jones and Tetroe 1987; cf. also the overview in Roca De Larios et al. 2002, p. 32 f.) and thus for fully exploiting the general cognitive-academic writing capacity that may already have been acquired through the L1. Higher-order processes are of utmost importance for lines of argumentation, however, and need special attention, especially if the writers are not yet familiar with the specific requirements of the genre they have to compose.

How can this disadvantage be avoided? The increased cognitive demands that writing tasks in the L2 place on students can be reduced by splitting them up into subtasks whose completion is less cognitively demanding because they can be tackled in sequential order. For the cognitively most demanding subtasks involving higher-order processes, such as idea generation and organizing, in which the danger of an impaired epistemic benefit is highest, students may be allowed to resort to whatever language ideas come to their minds in, i.e., they may be allowed to use either their L2 or their L1 or a mixture of both, and may even resort to further languages if these are of any relevance to them (cf. the list of theoretically possible writing strategies for multilingual contexts suggested by Lange 2012). To further reduce cognitive demands in this phase, students may be freed from writing for a specific audience and from following the conventions of a specific genre by requiring them to write just for their own understanding. For this type of intermediate text, Bräuer and Schindler (2013, p. 34 f.) introduced the concept of the auxiliary text. Furthermore, for material-based writing, i.e. writing in which the writer's own ideas have to be linked to previous research, as is characteristic of academic writing, reading assignments may be provided which help writers to familiarize themselves with the state of the art and thus assist them in finding their own research gap. Allowing students to resort to any language when composing these texts guarantees that ideas will not be suppressed just because they cannot be expressed in the L2.

Once this epistemic process has been completed, the auxiliary texts can be transformed into what Bräuer and Schindler call transfer texts. This is a process in which the ideas developed in the auxiliary texts have to be transformed into reader-friendly language following the conventions of the L2 and of the genre required. This process may be followed by requiring students to also produce what Bräuer and Schindler (2013) call reflective texts. In these, they document what they have learnt from the writing arrangement and in what respects they still feel insecure. Reflecting in this manner may also help students to generalize from the task at hand and make the knowledge they have acquired transferable to future tasks (cf. Perkins and Salomon 1988, n.d.). For teachers, reflective texts provide insight into the success of their teaching strategies and indicate potential for improvement and scaffolding.


8.2.2 The Advantages and Disadvantages of Translation for L2 Writing Pedagogy

In Thesis 2 it was claimed that translation as a means of fostering students' text production competence has both advantages and disadvantages, depending on the function for which translation is used and on the writer's translation competence. Let us first consider the advantages: Writing a text in one's L1 first and then translating it into the L2 may be a means of circumventing the cognitive overload that may occur when composing directly in the L2. This observation was made by Uzawa (1996) in a study in which 22 university students had to complete three writing tasks: one writing task in their L1 (Japanese), one in their L2 (English) and one translation of a completed L1 text into their L2. Their translations were of higher linguistic quality than their L2 texts, which the author attributes to the fact that the translation task relieved the subjects of extensive planning processes, leaving more attention available for linguistic details. The subjects found the translation exercises more helpful than essay writing because they felt that translating forced them to use vocabulary they would not have thought of when composing directly in their L2 (Uzawa 1996). Furthermore, as Uzawa's (1996) students confirmed, requiring students to render specific content provided in their L1 into their L2 may raise their awareness of lacunae in their L2 lexis and grammatical repertoire, a process referred to as noticing (see, e.g., Qi and Lapkin 2001; Schmidt 1990). This noticing may then induce them to try to close these gaps, which might not be the case when they formulate in their L2 directly, where they could try to circumvent expressing ideas, or even forgo generating ideas, that they cannot express in their L2 (on the relevance of noticing for second-language learning, see also the research overview in Uzawa 1996, p. 272 f). In sum, through translation, students can practise writing with a reduction in complexity, particularly on the macro-level, as the source text already provides the contents to be composed in verbalized form, allowing the students to pay greater attention to subtleties on the micro-level that they might otherwise have ignored. The latter advantage of translation is also corroborated by Kim (2011), who found that having her students translate from their L1 into their L2 enabled them to evaluate their L2 texts more critically. Moreover, Manchón et al. (2000) found that backtranslating from the L2 into the L1 during the composition process is used to make the mental text "more resonant with meaning for the writer" (Ransdell and Barbier 2002, p. 8) and that these backward operations not only lead to a reiteration of content but also to elaboration processes, which are in the service of the epistemic function of writing.

Translation, and translation as a subprocess of L2 text production, however, may also have disadvantages. These are related to the danger of source-text fixedness and the interference which may follow from it, a danger whose extent depends on the writers' translation competence. The less translation competence L2 writers have, the more prone they will be to interference from the L1 due to fixedness on the source-text surface structure and a lack of flexibility in departing from surface-level expressions, which may stand in the way of finding idiomatic L2 expressions (Bayer-Hohenwarter 2012; Göpferich 2013; Mandelblit 1995). To put it simply, writers with limited translation competence may feel inclined to translate at the word level, whereas writers with advanced translation competence translate at the more language-distant meaning level and are thus less prone to interference and to the disadvantages that translation exercises may have for L2 writing development. This is in line with the observations Liu (2009) made with regard to translation processes that naturally occur in L2 writing and that are dependent on L2 proficiency (see Sect. 8.2.4).

Both advantages and disadvantages of translation for L2 writing pedagogy could also be observed in a study in which German students of English language and literature had "to produce a German version" of a popular-science article they had first composed in their L2 English (Göpferich and Nelezen 2014). It should be noted that the concept of 'translation' was consciously avoided in the assignment because it might have falsely led the students to assume that a literal translation was required and that defects in the source text would thus have to be taken over into the target text. What the participants were expected to do instead was to produce a functional translation, which allows for deviations from the source text if these contribute to making the target text more suitable for its function (Göpferich and Nelezen 2014). The assumptions underlying these instructions were the following: The participants would experience cognitive relief due to the fact that (a) they were allowed to use their L1, in which they had a more differentiated repertoire of linguistic means available to them to express their ideas, and that (b) the English text, by virtue of existing in externalized form, would allow the participants to take a more critical stance towards the structure and line of argumentation of the text. If these assumptions held true, the German texts should have had a more logical structure and been more differentiated semantically than their English source texts, and they should have contained fewer errors.

In a contrastive analysis in which the English source texts and their German versions were assessed according to the Karlsruhe comprehensibility concept (Göpferich 2009), however, no noteworthy difference between the scores of the English texts and the German texts could be observed for any of the six subjects who participated in the study. Out of a maximum score of 45 points, the discrepancy between the source-text and target-text scores was only found to be between +4 and −2 points; in three cases, the text quality of the English texts was slightly better than that of their German counterparts, and in the three remaining cases, the opposite trend was observed (for the complete results, see Göpferich and Nelezen 2012). This lack of significant change from the English to the German texts arose from the manner in which the subjects composed their German texts: Instead of attempting to make changes on the macro-level, the subjects primarily transferred the contents of the source texts into the target texts on a sentence-by-sentence basis and thus focused on the micro-level, i.e., the sentence level and the level of neighbouring sentences. This was the case although the English source texts had shortcomings at the macro-level. This behaviour is typical of translation novices and may be due to the students' lack of translation competence. The changes made to the texts at the micro-level had little overall effect on macro-level issues such as the functional adequacy of the texts and their appropriateness for their audience. It should also be noted, however, that the similarity of the L1 and L2 texts on the macro-level may simply signal unawareness on the part of the subjects concerning the structural shortcomings of their texts, both in their L2 and in their L1, and thus weaknesses in their general composing competence (Göpferich and Nelezen 2014).

A linguistic error analysis showed that, also contrary to the assumptions specified above, more errors were made in the German texts (227 total errors) than in the English texts (186 total errors). More specifically, the number of errors in the German texts was actually higher in every category (formal errors, lexical errors and grammatical errors) with the exception of text-level errors,12 of which there were fewer in the German texts. The high number of errors in the German texts is likely to have been caused by the translation task itself. Though the term 'translation' was deliberately avoided in the assignment, it is likely that many of the errors were caused by either L2 interference or a strong degree of fixedness on source-text formulations (for more about the phenomenon of fixedness in psychology, see Duncker 1945; for fixedness in translation, see Mandelblit 1995). This is also supported by the fact that students of translation tend to produce errors arising from interference and fixedness at the beginning of their translation training, errors which tend to occur less frequently as translation competence develops (Bayer-Hohenwarter 2012; Göpferich 2013). Considering that the subjects in this study had little or no experience in translation, it is reasonable to assume that their behaviour greatly resembled that of translation novices. By remaining as close to the source text as possible, the subjects may have been implementing a type of cognitive relief strategy: In order to save cognitive capacity for other processes (such as generating appropriate German renderings of English terms), they may have avoided deviating greatly from the source text, especially on the macro-level.

12 Text-level errors were defined as errors that can only be detected by looking beyond sentence boundaries.

With regard to text-level errors, four out of the six participants performed better in their German texts than in their English ones (45 errors in the English texts versus 39 errors in the German ones). This suggests that students, at least at the text level, are better able to express themselves in their L1 than in their L2 and seem to take a more critical stance towards their texts' logical structure and argumentation at the micro-level, i.e. the level not exceeding two adjacent sentences. Another possible explanation for this result is that the subjects may have been able to improve upon these aspects of their German texts because the structure given in the English versions again offered them a certain amount of cognitive relief and thus enabled them to analyze it critically when transferring it into German.
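The kind of category-by-category comparison reported above can be illustrated with a short tallying script. The following sketch is purely illustrative: the annotation records and counts are invented stand-ins, not data from the study, and it merely assumes that each error has been coded as a (text version, category) pair.

from collections import Counter

# Invented stand-in annotations: (version, error category) pairs.
# The study distinguished formal, lexical, grammatical and text-level
# errors in six English/German text pairs.
annotations = [
    ("EN", "text-level"), ("EN", "lexical"), ("DE", "grammatical"),
    ("DE", "lexical"), ("DE", "formal"), ("EN", "grammatical"),
]

def tally(annotations):
    """Print the error count per category for both text versions."""
    counts = Counter(annotations)
    categories = sorted({cat for _, cat in annotations})
    for cat in categories:
        en, de = counts[("EN", cat)], counts[("DE", cat)]
        print(f"{cat:12} EN: {en:3d}  DE: {de:3d}  diff: {de - en:+d}")

tally(annotations)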

The subjects did not, however, make improvements from English to German in every text-level subcategory. While a notable improvement could be witnessed in the subcategories of "sense" and "implicitness",13 there was even an increase in functional sentence perspective (FSP) errors from the English to the German texts (3 errors in English, 6 errors in German). The latter result is likely due to the differences between English and German in inflectional morphology and hence in the ways in which these two languages can achieve certain topic-comment structures. Whereas in English the S-V-O word order is relatively fixed, German allows for a greater degree of syntactic flexibility due to its rich inflectional morphology. Nonetheless, the subjects often seem to have simply imitated the word order used in their English source texts instead of finding an appropriate German alternative, probably due to fixedness on the source-language structure. Example [1] is a case in point. In this extract, an error occurred in the German version where there was none in the English source text. The error seems to have been caused by syntactic interference: The English word order was taken over in the German version, though in German the order in which the information about frequency and location is placed should have been reversed to create the appropriate communicative dynamism (Göpferich and Nelezen 2014).

[1] Melancholia, burnout-syndrome [sic], depression – mental diseases seem to be increasingly common in today's society. (LaSe)
[1′] Melancholie, Burnout-Syndrom sowie Depressionen – physische [sic] Krankheiten treten immer häufiger in unserer Gesellschaft auf.

Example [2] illustrates a case in which a student, in her German version, was able to avoid an implicitness error she had made in her English text:

[2] Though she is seriously ill, her husband and physician John does not trust her opinion and prescribes her a medication [sic] which insidiously worsens her condition. (LaSe)
[2′] Obwohl sie ihrer Meinung nach äußerst krank ist, sind alle ihre Bemühungen [sic] ergebnislos. Sie wird von ihrem Mann, der zugleich auch ihr Arzt ist, einfach nicht wahrgenommen [sic].

Here it seems that the author wanted to express that the protagonist believed that she was seriously ill and made every effort to convince her husband of this, but that he, in spite of all her efforts, did not believe her. The conjunction though should thus not refer to the assertion that she was seriously ill but to her efforts to convince her husband, an assertion left implicit in this sentence. In [2′], we see that the author was aware of the shortcomings of the expression of her ideas in her English sentence and included both ihrer Meinung nach (in her opinion) and Bemühungen (efforts) to make the relationship between the two statements more explicit; ideally, however, these efforts should also have been specified more closely (i.e., efforts to do what?).

13 A sense error was defined as an incomprehensible or nonsensical section longer than a phrase, or a contradictory statement. If it involved less than a phrase, it was counted as a semantic error. An implicitness error was an error due to too much information being left implicit, e.g., if the author did not express something to which a conjunction, etc. referred (e.g., There are three types of birch trees. Therefore, I will describe only one. Here, therefore refers to a sentence that was left implicit, i.e., I cannot cover them all.) (Göpferich and Nelezen 2014).


Out of the six participants in the study, only one subject, a bilingual, transferred implicitness errors into her German text, while the rest of the subjects were able to avoid them. This indicates that the cause of such errors may be the subjects' inability to express themselves as explicitly in a foreign language as they can in their native tongue. As a type of avoidance strategy, perhaps, they may simply omit what they have difficulty expressing in their L2, negatively impacting the comprehensibility of their texts. This exclusion of content also has negative effects on the epistemic function of writing, as students do not practise expressing their ideas precisely and completely (Göpferich and Nelezen 2014).

Since translation has both advantages and disadvantages in L2 writing pedagogy, should it be used in or banned from L2 writing instruction? The advantages mentioned comprise (1) cognitive relief from macro-level processes, liberating more cognitive resources for micro-level decisions, (2) providing students with occasions for noticing their L2 gaps, (3) backtranslation as a means of semantic checking, and (4) the creation of a greater awareness of structural differences between languages, including language-specific requirements, for example with regard to word order from a communicative perspective. The disadvantages are (1) the danger of interference, particularly in students with low levels of translation competence, and (2) the danger of an evolving dependence on the L1 as a bridging language for L2 text production, combined with a neglect of developing paraphrasing and other strategies enabling writers to express themselves even with a limited L2 lexical and grammatical repertoire.

In L2 writing pedagogy, the disadvantages can be circumvented by practising creative translation strategies, which help students to overcome their fixedness on the source text, and by limiting translation assignments to situations in which noticing gaps and creating an awareness of structural differences are the focus. Specially tailored translation tasks can furthermore foster students' awareness of language-specific coherence-generating means; for example, having them translate a source text from which connectors have been systematically deleted forces them, during their translation processes, to think about the logical relations between two parts of a sentence or two sentences and about how to express these appropriately in each language. In this way, students are prompted to express certain logical relationships explicitly in written form, something that is significantly more difficult to elicit and monitor in a free writing task. The cognitive relief at the macro-level involved in translation can also be achieved by splitting up L2 text production assignments into less complex subtasks, including the composition of auxiliary texts prior to transfer texts and the use of planning strategies (see the meta-analysis in Graham and Perin 2007, p. 466 f), which may be the better alternatives to translation assignments for this specific purpose. A minimal sketch of how such connector-deleted source texts could be prepared automatically is given below.
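The following sketch illustrates one way such an exercise text might be generated; it is only an illustration, not part of any published teaching material, and the connector list, the gap marker and the sample sentence are invented for the example.

import re

# Illustrative (and deliberately incomplete) list of English connectors.
CONNECTORS = ["however", "therefore", "moreover", "consequently",
              "nevertheless", "in addition", "as a result"]

def delete_connectors(text, marker="____"):
    """Replace each connector with a gap marker so that learners have to
    reconstruct the logical relation while translating."""
    # Match longer phrases first so "as a result" is not split up.
    pattern = "|".join(sorted((re.escape(c) for c in CONNECTORS),
                              key=len, reverse=True))
    return re.sub(rf"\b({pattern})\b", marker, text, flags=re.IGNORECASE)

sample = ("The subjects were tired. However, they completed the task. "
          "As a result, the data set is complete.")
print(delete_connectors(sample))

Deleting the connectors entirely (marker="") would come closest to the exercise described above; a visible gap marker is simply one design choice, signalling to students where a relation has to be reconstructed.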

8.2.3 Transliteracy and Translation Competence as Integral Components of Academic Writing Proficiency

An essential feature of academic writing is that it is material-based. The language of the material drawn on may differ from the language of text composition. Consequently, academic writers must be able to process information in their writing that they have acquired in a different language, a competence that Gentil (2005), with reference to Baker (2003), refers to as "transliteracy". Transliteracy requires translation competence, the lack of which shows, for example, in term papers when foreign-language expressions from source texts are taken over untranslated or are translated word for word although this does not make sense. Both types of defects may point not only to a lack of translation competence but also to an insufficient comprehension of the source material. To avoid translation-induced errors, at least a minimum of translation competence needs to be developed in all those who, although they may only have to compose texts in one language, must be able to process information made available in different languages.

Translation competence and actively practising translation in the functionalist sense, however, have additional advantages, as has been found in teaching approaches using "translanguaging", also called "co-languaging" by van der Walt (2013). Translanguaging is an English translation of the Welsh term trawysieithu (Baker 2003, p. 81) coined by Williams (1994). He introduced the term to designate a teaching method he successfully employed to foster the acquisition of two languages (in his case English and Welsh) and, simultaneously, a more profound processing of subject-domain knowledge by confronting students with input in both languages and requiring them to use this input in writing about it in the respective other language. Baker (2003, p. 82 f.) points out four advantages of translanguaging, which is covered by the functionalist concept of translation: (1) more profound cognitive processing of subject matter ("While full conceptual reprocessing need not occur, linguistic reprocessing is likely to help in deeper conceptualization and assimilation." (Baker 2003, p. 83)); (2) fostering the language learners' weaker language; (3) improved communication between students and their families about the subject matter taught, if one of the languages is the language spoken at home and the home environment does not speak the language usually used in higher education; and (4) benefits that students with lower proficiency in one of the languages can derive from the higher language proficiency of other students.14

14 The beneficial effect of fostering transfer between languages for students' L2 (in this case English) development has also been found by Göbel and Vieluf (2014). For an overview of its positive effect on the elaboration of subject matter, see Lamsfuß-Schenk (2015, p. 154).

In a small-scale study in which Knapp (2014) investigated the code-switching behaviour of German and international students at the University of Siegen when taking notes during lectures delivered in either German or English, as well as their preferred language for exams, she found that students had a strong inclination to stick, for note-taking, to the language in which the lectures were delivered and, for their exams, to the language of instruction and of the reading assignments, an observation that she terms the "continuity principle of language choice" (Knapp 2014, p. 183). This continuity principle on the one hand and the beneficial effects of translanguaging for deeper cognitive processing on the other are additional reasons for actively involving students in translanguaging exercises. As Knapp found, students rarely translanguage on their own. She explains the rare cases in which students did as indicators that these students had understood the content but were not able to express it in their L2 English under the time pressure involved in note-taking in lectures, and compensated for this by resorting to their L1 German (Knapp 2014, p. 182). While mere reproduction in the language in which the content was conveyed is no guarantee of real comprehension, correct reverbalization in another language is (Knapp 2014, p. 192 f). What is more important is that, in a multicultural society, students are expected to be able to work with the knowledge they have acquired in one language in their other working languages (cf. also Cook 2010, pp. xx, 109).15 If they do not learn to translate between their languages, this competence might not be attained (cf. also Cook 2010, p. 99; Knapp 2014, p. 183). Answers that students provided in Knapp's survey on their preferred language(s) of instruction also indicate that they feel that deeper processing and more understanding tend to be obtained when complex content is at least also presented in their L1 German (Knapp 2014, p. 186 f). Thus, Knapp (2014, p. 194) pleads for "taking advantage of students' multilingual resources" because they "may have beneficial effects on three levels: to clarify meaning [. . .], to enhance the depth of processing, and to make available acquired knowledge in diverse linguistic contexts, inside and outside university".

15 Cf. Cook (2010, p. 109): "In multilingual, multicultural societies (which nowadays means just about everywhere), and in a world of constant cross-linguistic and cross-cultural global communication, there are reasons to see translation as being widely needed in everyday situations, and not as a specialized activity at all. This is true whether we take translation in the established sense of producing texts and utterances which replace 'textual material in one language by equivalent textual material in another language' (Catford 1965, p. 20), or in the looser sense of what is done by 'a bilingual mediating agent between monolingual communication participants in two different language communities' – to use the definition of the translator by House (1977, p. 1). [. . .] translation exists in a broad spectrum between these two poles."

To implement teaching that utilizes students' bi- or plurilingual resources, disciplinary boundaries have to be overcome or, as Gentil (2005, p. 460) concedes: "The current disciplinary divisions of labor among L1 composition, L2 writing, foreign language teaching, and translation constrain possibilities for research on and education for academic biliteracy."

In this context, findings on the impact of CLIL (Content and Language Integrated Learning) in secondary education can also be enlightening. Although these findings seem to indicate that CLIL has a favourable effect on students' language competence development and at least no negative effect on the subject-domain competence they acquire (Bonnet 2015, p. 173; Piske 2015, p. 115 f.; Smit 2015, p. 87), it has to be taken into account (1) that CLIL is a term that encompasses various forms of teaching subject matter and a foreign language in an integrated manner, with or without also integrating the students' L1(s), and that these different forms of CLIL may have different effects (Wolff and Sudhoff 2015), (2) that CLIL takes place in various contexts, and (3) that the control groups and experimental groups compared in the studies conducted so far (may have) differed in variables such as their motivation for learning languages, their social background, their self-esteem and others, which may have confounded the results. In order to determine which forms of CLIL actually best support the acquisition of subject matter, language proficiency, literacy and other competences, we need more fine-grained studies on the impact of CLIL which take all these variables into account (Rüschoff et al. 2015). And we also need investigations which do not just elicit opinions about the impact of CLIL but actually measure students' academic-cognitive development with regard to their subject domains in a more fine-grained manner (cf. also Shohamy 2013).16

16 Shohamy (2013, p. 203) calls for "extensive research to examine empirically the cost and benefits of the use of EMI at HEIs [higher education institutions]; the main goal being the need to explore how much language is being gained by such programs as well as how much academic content is being achieved."

From a pedagogical-didactical perspective, the choice of the language of instruction should be determined by criteria related to how conducive the language(s) of instruction is/are to the promotion of cognitive-academic development. As Gnutzmann et al. (2015, p. 21) point out, however, these criteria seem to be subordinated to the criterion of increasing the economic attractiveness of programmes.17 This is a dangerous development which might benefit the masses in a superficial manner at the expense of cognitive depth for the individual.

17 It is interesting to note that propagating the exclusion of the learners' mother tongue(s) from foreign-language classrooms has also had economic reasons. If only the target language is used in the classroom, there is no need to differentiate between students with different mother tongues and to develop textbooks and other teaching material adapted to the students' first language(s) (cf. Cook 2010, p. 19: "Direct Method [by which Cook means all methods which only use the target language] was in tune with mass production, national building, and imperialism.").

At least three different categories of teaching subject matter in combination with a foreign language can be distinguished: The first category comprises approaches in which subject matter is simply taught through the medium of a foreign language (in most cases English; EMI in a narrow sense). The second category includes dual approaches which aim at both the learning of subject matter and the development of foreign-language skills. For this approach, Smit (2015, p. 76) introduced the term "ICLHE" (Integrating Content and Language in Higher Education); Wolff and Sudhoff (2015, p. 18) refer to it as "Tertiary CLIL" or "CLIL in Tertiary Education". And the third category includes approaches which integrate the development of subject-domain knowledge with L2 competence by also drawing on the students' L1, for example by using translanguaging exercises. At least theoretically, this bi- or plurilingual ICLHE variant should turn out to be the best approach, because the dual coding of subject matter is assumed to lead to a more profound elaboration and thus to a beneficial effect on students' academic development in their subject domains (Lamsfuß-Schenk 2015, p. 154). More research is needed to gain insight into the conditions and prerequisites that have to be fulfilled to optimally combine the development of subject-domain competence with language proficiency (cf. Doiz et al. 2013, p. 217). This research should also shed light on the prerequisites with regard to writing and language proficiency levels that students need to come equipped with, and on the qualification requirements teachers have to fulfil (Gnutzmann et al. 2015, p. 21 f).

8.2.4 The L2-Proficiency-Dependent Functions of Translation in L2 Writing Processes

Liu (2009) was able to show that the functions for which the L1 is used in L2 composing processes depend on the writers' L2 proficiency. In an experimental study, six native speakers of Chinese, who had been educated in Taiwan, had learned English since the age of 12 and had not lived in other countries for more than a year, had to compose a text in their L2 English while thinking aloud. The topic they had to write about was a comparison of American Christmas and Chinese New Year. Liu assumed that her participants had acquired knowledge about this topic basically in their L1 Chinese, so that the writing assignment would evoke more L1 concepts and thus induce the participants to translate18 from their L1 into their L2 during the composing process (Liu 2009, p. 43). The participants were encouraged to take as much time as they needed to complete the assignment and were requested to verbalize anything that crossed their minds during the composing process in whatever language it occurred to them. Three of the six participants had TOEFL scores of 590 and above (high-proficiency group); the other three had TOEFL scores of 570 and less (low-proficiency group).

18 Liu (2009) introduces a concept of translation that is wide enough to also include the process of converting ideas into linguistic representations in the L1, or monolingual text composition, as in Hayes and Flower's (1980) early writing model. She defines translation as follows: "I define translation from a broader perspective. Translation in my research is not 'the replacement of a representation of a text in one language by a representation of an equivalent text in a second language' (Hartmann and Stork 1972, p. 713) at the textual level. It includes the processes of informational or conceptual coding, decoding, and reformulating at the cognitive level. Therefore, the translation process in writing may apply to both monolinguals and multilinguals. For monolinguals, the L1 writing process involves translating conceptual representations into linguistic codes through reorganization, resynthesization, and reconstruction. For multilinguals, especially the bilinguals in this book [i.e., native speakers of Chinese with English as their L2], the L2 writing process involves not only the writing process of the monolinguals as mentioned above but also the cognitive process of language switch. Under this definition, translation in L2 writing involves research areas such as L1 transfer (linguistic and rhetorical), the use of L1, and language switching (LS)." (Liu 2009, p. 11)

Their think-aloud was transcribed into protocols and segmented into units. The units were classified into four categories: (1) thinking aloud in English only (L2 only); (2) thinking aloud in Chinese only, or in Chinese first and then translating into English (L1 only or L1→L2); (3) thinking aloud in English first and then repeating in Chinese (L2→L1); and (4) thinking aloud in unidentifiable chunks (Liu 2009, p. 45). In addition, she conducted cued retrospective interviews.

Liu (2009) found that the low-proficiency group used their L1 significantly more during the L2 composing process than the high-proficiency group. Furthermore, Liu observed that the low-proficiency group also relied more often on their L1 to reconfirm or monitor ideas expressed in their L2 than the high-proficiency group (Liu 2009, p. 54) and that the low-proficiency L2 writers translated significantly more at the syntactic level during the L2 composing process than at the semantic level, whereas more proficient L2 writers translated significantly more at the semantic level than at the syntactic level. In other words, low-proficiency L2 writers were more fixed on L1 syntactic structures, whereas high-proficiency L2 writers more or less just retrieved concepts via their L1 and then went on composing directly in their L2.19 The typical procedure of a low-proficiency L2 writer is reflected in the following statement: "Usually, I use Chinese to generate ideas, and if I like the idea, I will try to translate it into English... If I don't use Chinese to lead the phrase or words, I'll forget about what I want to say in English." (Liu 2009, p. 68). Liu explains this observation as follows:

This quote suggests that the L2 operation consumes too much cognitive energy and produces too much mental load for the unskilled writers to conceive of semantic formulations as well as to organize them with syntactic structures for textual production. Therefore, unskilled writers tend to rely on L1 to generate and form ideas in words and phrases. Once the idea has been well formulated semantically and has been represented by L1 syntactic structures, unskilled L2 writers may finally translate the L1 idea into L2 with L2 syntax. In other words, the unskilled L2 writers use L1 to take care of as many cognitive subprocesses as possible to reduce their mental loads. As a result, the L1-L2 code translation may take place at the level close to the textual output, i.e., the syntactic level. Since most of the semantic-level concerns have been taken care of by L1, unskilled L2 writers may primarily pay attention to the use of L2 for the syntactic and lower level activities, such as orthography, grammar, equivalent lexical choices, and local changes. In a nutshell, skilled L2 writers tend to have more semantic transformation, whereas unskilled L2 writers tend to have more syntactic translation. (2009, p. 68)

19 A potential explanation for this could be that in the brains of bilinguals with low language proficiency, the L1 and L2 lexica are stored independently, whereas in the brains of highly proficient bilinguals this is not the case (Perani et al. 1996). Those writers whose L1 and L2 lexica are stored in the same cortical structures may be able to access L2 lexical entries directly via the concept, whereas those L2 writers who have separate L1 and L2 mental lexica may only be able to access L2 lexical items via L1 lexical entries and thus through translation (Liu 2009, p. 24).

Liu (2009, p. 69) also observed that skilled L2 writers may resort to the strategies of unskilled writers whenever they encounter difficulties, and that unskilled writers make use of the strategies of skilled writers when they are capable of doing so.

The participants of the high-proficiency group mainly used their L1 for higher-order processes such as planning, for controlling the incoming information and for editing the written text, whereas the intended meaning was expressed directly in the L2 (Liu 2009, p. 58 f).

Wang and Wen (2002) also analyzed L1 and L2 use in different subprocesses of writing by means of think-aloud protocols of 16 Chinese EFL students. They differentiated between process controlling, idea organizing, idea generating, task examining and text generating. For each subprocess, they determined the ratio of the number of words in their participants' think-aloud that were uttered in connection with this subprocess in each language to the entire number of words uttered in connection with this subprocess in both languages. In process-controlling processes, L1 use dominated, with on average 81.5%. For idea organizing, their participants used their L1 to an extent of on average 70%, and for idea generating, to an extent of on average 61.5%. Task examining was carried out in the L1 to an extent of on average only 21%, and text generating, the most language-close process, to an extent of on average only 13.5% (Wang and Wen 2002, p. 234). In line with Liu's findings, Wang and Wen also found that the language of text generation depends on the writers' L2 proficiency: "less proficient writers construct sentences through L1-to-L2 translation, while proficient writers generate text directly in L2" (Wang and Wen 2002, p. 240). In the other subprocesses examined, the decline in the use of the L1 that could be observed with increasing L2 proficiency was less salient among their participants (Wang and Wen 2002, p. 241). With regard to these subprocesses, however, the question remains whether L1 use declines to a more considerable extent as well once the participants have exceeded a certain L2 proficiency threshold level (Wang and Wen 2002, p. 241).20

20 Process-oriented longitudinal studies of writing skills development are a desideratum. Apart from Steinhoff's (2007) and Pohl's (2007) corpus-based longitudinal studies of academic writing skills development, one of the few writing skills development studies that have been conducted to date is the one by Sasaki (2004), which covers a time span of 3.5 years.
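The ratio computation just described can be made concrete with a few lines of code. The sketch below only illustrates the arithmetic; the segment data, labels and numbers are invented for the example, on the assumption that every protocol segment has been coded for subprocess, language and word count.

from collections import defaultdict

# Invented example segments: (subprocess, language, word count).
segments = [
    ("process controlling", "L1", 40), ("process controlling", "L2", 9),
    ("idea generating", "L1", 25), ("idea generating", "L2", 16),
    ("text generating", "L1", 7), ("text generating", "L2", 45),
]

def l1_ratios(segments):
    """Share of L1 words among all words uttered for each subprocess."""
    l1_words = defaultdict(int)
    all_words = defaultdict(int)
    for subprocess, language, words in segments:
        all_words[subprocess] += words
        if language == "L1":
            l1_words[subprocess] += words
    return {s: l1_words[s] / all_words[s] for s in all_words}

for subprocess, ratio in l1_ratios(segments).items():
    print(f"{subprocess:20} L1 share: {ratio:.1%}")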

The more complex cognitive operations are, the more inclined writers seem to be to resort to their L1. This is also confirmed by a think-aloud pilot study conducted by Qi (1998). In this study, a single participant had to complete two L2 writing tasks and two translations from her L1 (Chinese) into her L2 (English), as well as to solve two mathematical problems, with one task in each set involving high cognitive demand and the other a lower one. Qi (1998, p. 423) found a positive correlation between L1 use and the task demands.

Her data strongly indicate that whenever the participant intuitively anticipated that the load of a task she faced would exceed the limit of her working memory span, she automatically switched to L1, her stronger language, to process the information in order to minimize the load to which the use of a weaker language might otherwise add. (Qi 1998, p. 428)

As factors that may influence switching from the L2 to the L1, Qi identified

an implicit need to encode efficiently a non-linguistic thought in the L1 to initiate a thinking episode; a need to facilitate the development of a thought; a need to verify lexical choices; and a need to avoid overloading the working memory. (Qi 1998, p. 428 f)

Other factors that might induce L1 use in L2 writing are the language in which the knowledge required for completing the writing assignment was acquired, as well as the language of the writing prompt (Wang and Wen 2002, pp. 240, 244). In an empirical study involving 28 Chinese students who had to compose texts in their L2 English on topics involving either knowledge they had acquired in their L1 Chinese or knowledge they had acquired in their L2 English, and in which they had to do their planning in Chinese or English, Friedlander (1990) found that students planned better and produced better contents when the language of knowledge acquisition and the language of planning matched. Qi (1998, p. 430) hypothesizes that this might only be the case if the knowledge acquired in one language has never been reprocessed in the other.

What are the didactical implications of these findings? Resorting to one's L1 seems to be a natural reaction of writers, and of any other type of cognitive problem solver, whenever cognitive overload threatens to occur. Suppressing L1 use in L2 writing, or in problem solving in general, may thus indeed have negative effects on problem solving and thus on the epistemic function of writing. The potential detrimental effects that resorting to one's L1 might have on L2 text quality can be assumed to be smaller the higher up in the text generation process the switch to the L1 is performed, i.e. the more language-remote the level at which it occurs, because the danger of interference is smaller at the more language-remote levels. The findings from translation studies referred to in Sect. 8.2.2 suggest that, with increasing translation competence and thus interference resistance of the writer, the potential detrimental effects of resorting to translation at the language-close level decrease. This is an argument in favour of fostering translation competence not only in professional translators but in everybody wishing to communicate successfully in a multilingual society, where trans- and multiliteracy are a requirement that encompasses the ability to translanguage, a form of translation in the functionalist sense (cf. e.g., Holz-Mänttäri 1984; Nord 1993; Reiss and Vermeer 1984; Vermeer 1978).

Further implications of the findings reported for writing pedagogy depend on the function for which texts are produced. In writing-to-learn assignments, i.e. assignments whose function is an epistemic one, which involve knowledge transformation in the sense of Bereiter and Scardamalia (1987) and which are more cognitively demanding than simple knowledge-telling assignments, L1 use should not be suppressed or discouraged, because this suppression is likely to impair the epistemic function of writing. In learning-to-write exercises, however, in which knowledge telling plays a more prominent role and which focus rather on linguistic fluency and flexibility, suppression of L1 use may be a useful pedagogical strategy. In this type of assignment, the suppression of the L1 fosters the development of problem-avoidance strategies and other strategies needed for making oneself understood even with a limited linguistic repertoire, for example through paraphrasing. The importance of increasing L2 writers' repertoire of problem-solving strategies was illustrated by Macaro (2014) in an exploratory study focusing on the impact of writing strategies on the writing output of two students with comparable L2 proficiency. He argues:


Both linguistic knowledge and strategic behaviour are involved in the completion of a language task. It is in the effective combination of strategies with the linguistic knowledge currently at the learner’s disposal that tasks are successfully completed and, if success is measured not only in terms of task completion but also in terms of quality of language, that levels of proficiency can be determined. Put differently, I am arguing that proficiency in a writing task is the sum of linguistic knowledge and strategic behaviour. (Macaro 2014, p. 60)

Consequently, teachers must develop not only students’ linguistic competence but also their repertoire of problem-solving strategies. Suppressing the L1 may be one useful didactic measure for these purposes. L2 academic writing courses, however, should not be limited to this type of writing assignment but should also include writing-to-learn assignments with an epistemic function. If the epistemic function is neglected, students may learn to write fluently and to employ problem-solving and problem-avoidance strategies, but they may easily feel out of their depth when knowledge transforming is involved, for which they need appropriate strategies of cognitive relief, such as switching to the L1 when they begin to feel cognitively overburdened. For knowledge telling tasks, students may be able to stick to their L2 throughout the entire writing process, but they may have to resort to their L1 for higher-level writing processes such as planning and structuring when completion of the assignment also involves knowledge transforming with an epistemic function.

8.3 Summary and Conclusion

The research reviewed in this article has provided evidence that translation competence, understood in the functionalist sense as including the ability to translanguage, can not only serve as a cognitive catalyst for trans- and multiliteracy but even represents a requirement for them. The question of the phases of the writing process in which, and the purposes for which, translation from the L1 (or other languages) into the language of text composition turns out to be useful or detrimental to L2 text quality, however, has to be answered in a differentiated manner. Resorting to the L1 is a means of avoiding cognitive overload and of taking full advantage of the epistemic function of writing, which might be hampered if L1 use is suppressed in L2 writing. In this respect, L1 use in L2 writing, or more generally in L2 cognitive problem solving, is a catalyst. L1 use in L2 writing, however, may also hamper L2 writing because it may prevent students from developing the problem-avoidance and problem-solving strategies they need to express themselves fluently in the L2 even with a limited repertoire of L2 lexis and grammatical constructions at their disposal. In addition, resorting to the L1, especially at language-close levels rather than only at the more abstract levels of idea generation and organization, involves the danger of interference. This danger can be overcome through the acquisition of translation competence and thus of interference resistance.


Though translation itself involves certain disadvantages for writing pedagogy, a number of potential advantages have been pointed out as well. These include: (1) cognitive relief from macro-level writing processes, liberating more cognitive resources for micro-level decisions; (2) occasions for students to notice their L2 gaps; (3) backtranslation as a means of semantic checking; and (4) greater awareness of structural differences between languages, such as those connected to word order, from a communicative or functional perspective.

Two further pedagogical strategies for L2 writing development have been addressed. The first is reducing the complexity of L2 writing assignments by splitting them up into less complex subtasks so as to reduce the cognitive load that has to be coped with at once. An example of such a subtask is the production of an auxiliary text which precedes the production of the L2 transfer text and in which resorting to the L1 and mixing the L1 and L2 are allowed. This is a useful alternative to translation when the latter is employed merely to achieve cognitive relief at the macro-level (see advantage 1 of translation above). The second strategy concerns study skills language courses which focus on developing lower-level language skills such as vocabulary building and refreshing grammar. Such courses should assist students in gaining more fluency at these lower levels so that more cognitive capacity is available for higher-level writing processes. Whereas study skills language courses can be taught as add-on courses, writing for epistemic purposes is closely related to the disciplines and should be taught in a content-and-skills-integrated manner, for example in writing-intensive seminars (for Content and Skills Integrated Teaching, see Göpferich 2015a).

In a wider context, this article can be considered a plea for developing translation competence not only in future professional translators but in students of all disciplines, for whom it represents a soft skill in our multilingual and multicultural societies (Cook 2010, p. 109). This competence can be acquired through translanguaging, which prepares students to benefit from all their language resources for the purposes of knowledge construction and deep learning. It also has the positive side effect that domain losses in students’ L1s are avoided and that the L1s can be resorted to whenever students’ English proficiency is insufficient for deep understanding.

References

Arndt, V. (1987). Six writers in search of texts: A protocol-based study of L1 and L2 writing. ELT Journal, 41(4), 257–267.
Baker, C. (2003). Biliteracy and transliteracy in Wales: Language planning and the Welsh national curriculum. In N. H. Hornberger (Ed.), Continua of biliteracy: An ecological framework for educational policy, research and practice in multilingual settings (pp. 71–90). Clevedon: Multilingual Matters.
Bayer-Hohenwarter, G. (2012). Translatorische Kreativität: Definition – Messung – Entwicklung. Tübingen: Narr Verlag.
Beaufort, A. (2007). College writing and beyond: A new framework for university writing instruction. Logan: Utah State University Press.
Bereiter, C., & Scardamalia, M. (1987). The psychology of written composition. Hillsdale: Lawrence Erlbaum.
Björkman, B. (2013). English as an academic lingua franca: An investigation of form and communicative effectiveness. Berlin: Walter de Gruyter.
Bonnet, A. (2015). Sachfachlicher Kompetenzerwerb in naturwissenschaftlichen CLIL-Kontexten. In B. Rüschoff, J. Sudhoff, & D. Wolff (Eds.), CLIL revisited: Eine kritische Analyse zum gegenwärtigen Stand des bilingualen Sachfachunterrichts (pp. 165–182). Frankfurt: Peter Lang.
Börner, W. (1989). Didaktik schriftlicher Textproduktion in der Fremdsprache. In G. Antos & H. P. Krings (Eds.), Textproduktion (pp. 348–376). Tübingen: Niemeyer.
Bosher, S., & Rowekamp, J. (1992). Language proficiency and academic success: The refugee/immigrant in higher education (ERIC Document ED 353 914). Retrieved from http://files.eric.ed.gov/fulltext/ED353914.pdf
Bräuer, G., & Schindler, K. (2013). Authentische Schreibaufgaben – ein Konzept. In G. Bräuer & K. Schindler (Eds.), Schreibarrangements für Schule, Hochschule und Beruf (pp. 12–63). Stuttgart: Fillibach bei Klett.
Carson, J. E., & Kuehn, P. A. (1992). Evidence of transfer and loss in developing second language writers. Language Learning, 42(2), 157–179.
Casanave, C. P. (1998). Transitions: The balancing act of bilingual academics. Journal of Second Language Writing, 7(2), 175–203.
Catford, J. C. (1965). A linguistic theory of translation: An essay in applied linguistics. New York: Oxford University Press.
Cohen, A. D., & Brooks-Carson, A. (2001). Research on direct versus translated writing: Students’ strategies and their results. The Modern Language Journal, 85(2), 169–188.
Cook, V. (2002). Background to the L2 user. In V. Cook (Ed.), Portraits of the L2 user (pp. 1–31). Clevedon: Multilingual Matters.
Cook, V. (2008). Multi-competence: Black hole or wormhole for second language acquisition research. In Z. H. Han (Ed.), Understanding second language process (pp. 16–26). Clevedon: Multilingual Matters.
Cook, G. (2010). Translation in language teaching: An argument for reassessment. Oxford: Oxford University Press.
Cumming, A. (1987). Writing expertise and second-language proficiency in ESL writing performance. PhD thesis. University of Toronto, Toronto.
Cumming, A. (1989). Writing expertise and second-language proficiency. Language Learning, 39(1), 81–141.
Cumming, A. (2001). Learning to write in a second language: Two decades of research. International Journal of English Studies, 1(2), 1–23.
Cummins, J. (1979). Cognitive/academic language proficiency, linguistic interdependence, the optimum age question and some other matters. Working Papers on Bilingualism, 19, 121–129.
Cummins, J. (1981). The role of primary language development in promoting educational success for language minority students. In Schooling and language minority students: A theoretical framework (pp. 3–49). Los Angeles: California State University, Evaluation, Dissemination and Assessment Center.
Cummins, J. (1996). Interdependence of first- and second-language proficiency in bilingual children. In E. Bialystok (Ed.), Language processing in bilingual children (pp. 70–89). Cambridge: Cambridge University Press.
DAAD. (2015). International programmes in Germany 2015. Retrieved July 30, 2015, from https://www.daad.de/deutschland/studienangebote/international-programs/en/
Dafouz, E., Camacho, M., & Urquia, E. (2014). ‘Surely they can’t do as well’: A comparison of business students’ academic performance in English-medium and Spanish-as-first-language-medium programmes. Language and Education, 28(3), 223–236.
Devine, J., Railey, K., & Boshoff, P. (1993). The implications of cognitive models in L1 and L2 writing. Journal of Second Language Writing, 2(3), 203–225.
Doiz, A., Lasagabaster, D., & Sierra, J. M. (Eds.). (2013). English-medium instruction at universities: Global challenges. Bristol: Multilingual Matters.
Duncker, K. (1945). On problem-solving. Psychological Monographs, 58(5), 1–114.
Flowerdew, L. (2000). Using a genre-based framework to teach organizational structure in academic writing. ELT Journal, 54(4), 369–378.
Friedlander, A. (1990). Composing in English: Effects of a first language on writing in English as a second language. In B. Kroll (Ed.), Second language writing: Research insights for the classroom (pp. 109–125). New York: Cambridge University Press.
Fries, C. C. (1945). Teaching and learning English as a foreign language. Ann Arbor: Michigan University Press.
Galbraith, D. (1999). Writing as a knowledge-constituting process. In M. Torrance & D. Galbraith (Eds.), Knowing what to write: Conceptual processes in text production (pp. 139–164). Amsterdam: Amsterdam University Press.
Gantefort, C., & Roth, H.-J. (2014). Schreiben unter den Bedingungen individueller Mehrsprachigkeit. In D. Knorr & U. Neumann (Eds.), Mehrsprachige Studierende schreiben: Schreibwerkstätten an deutschen Hochschulen (pp. 54–73). Münster: Waxmann.
Gentil, G. (2005). Commitments to academic biliteracy: Case studies of francophone university writers. Written Communication, 22(4), 421–471.
Gnutzmann, C., Jakisch, J., & Rabe, F. (2015). Englisch im Studium: Ergebnisse einer Interviewstudie mit Lehrenden. In A. Knapp & K. Aguado (Eds.), Fremdsprachen in Studium und Lehre – Chancen und Herausforderungen für den Wissenserwerb (pp. 17–45). Frankfurt: Peter Lang.
Göbel, K., & Vieluf, S. (2014). The effects of language transfer as a resource in instruction. In P. Grommes & A. Hu (Eds.), Plurilingual education: Policies – practices – language development (pp. 181–195). Amsterdam: John Benjamins.
Göpferich, S. (2009). Comprehensibility assessment using the Karlsruhe comprehensibility concept. The Journal of Specialised Translation, 11, 31–52.
Göpferich, S. (2013). Translation competence: Explaining development and stagnation from a dynamic systems perspective. Target, 25(1), 61–76.
Göpferich, S. (2015a). Sich Fachliches erschreiben: Förderung literaler Kompetenzen als Förderung des Denkens im Fach. Vortrag anlässlich des 5-jährigen Jubiläums des Zentrums für fremdsprachliche und berufsfeldorientierte Kompetenzen (ZfbK) der Justus-Liebig-Universität Gießen, Gießen.
Göpferich, S. (2015b). Text competence and academic multiliteracy: From text linguistics to literacy development. Tübingen: Narr.
Göpferich, S., & Nelezen, B. (2012). Data documentation for the article “Die Sprach(un)abhängigkeit von Textproduktionskompetenz: Translation als Werkzeug der Schreibprozessforschung und Schreibdidaktik”. Retrieved August 17, 2014, from http://www.susanne-goepferich.de/Data_Documentation_Writing_L1_L2
Göpferich, S., & Nelezen, B. (2014). The language-(in)dependence of writing skills: Translation as a tool in writing process research and writing instruction. MonTI, 1, 117–149.
Graham, S., & Perin, D. (2007). A meta-analysis of writing instruction for adolescent students. Journal of Educational Psychology, 99, 445–476.
Hartmann, R. R. K., & Stork, F. C. (Eds.). (1972). Dictionary of language and linguistics. Amsterdam: Applied Science.
Hayes, J. R., & Flower, L. S. (1980). Identifying the organization of writing processes. In L. W. Gregg & E. R. Steinberg (Eds.), Cognitive processes in writing (pp. 3–30). Hillsdale: Lawrence Erlbaum.
Hirose, K., & Sasaki, M. (1994). Explanatory variables for Japanese students’ expository writing in English: An exploratory study. Journal of Second Language Writing, 3(3), 203–229.
Holz-Mänttäri, J. (1984). Translatorisches Handeln: Theorie und Methode. Helsinki: Suomalainen Tiedeakatemia.
House, J. (1977). A model for translation quality assessment. Tübingen: Narr.
Jones, S., & Tetroe, J. (1987). Composing in a second language. In A. Matsuhashi (Ed.), Writing in real time: Modelling production processes (pp. 34–57). Norwood: Ablex.
Kecskes, I., & Papp, T. (2000). Foreign language and mother tongue. Mahwah: Lawrence Erlbaum.
Kim, E.-Y. (2011). Using translation exercises in the communicative EFL writing classroom. ELT Journal, 65(2), 154–160.
Klaassen, R. G. (2001). The international university curriculum: Challenges in English-medium instruction. PhD thesis. Delft University of Technology, Delft.
Knapp, A. (2014). Language choice and the construction of knowledge in higher education. European Journal of Applied Linguistics, 2(2), 165–203.
Knapp, A., & Timmermann, S. (2012). UniComm Englisch – Ein Formulierungswörterbuch für die Lehrveranstaltungskommunikation. FLuL – Fremdsprachen Lehren und Lernen, 41(2), 42–59.
Kobayashi, H., & Rinnert, C. (1992). Effects of first language on second language writing: Translation versus direct composition. Language Learning, 42(2), 183–209.
Kohro, Y. (2009). A contrastive study between L1 and L2 compositions: Focusing on global text structure, composition quality, and variables in L2 writing. Dialogue, 8, 1–19.
Krapels, A. R. (1990). An overview of second language writing process research. In B. Kroll (Ed.), Second language writing: Research insights for the classroom (pp. 37–56). New York: Cambridge University Press.
Lamsfuß-Schenk, S. (2015). Sachfachlicher Kompetenzerwerb in gesellschaftlichen CLIL-Kontexten. In B. Rüschoff, J. Sudhoff, & D. Wolff (Eds.), CLIL revisited: Eine kritische Analyse zum gegenwärtigen Stand des bilingualen Sachfachunterrichts (pp. 151–164). Frankfurt: Peter Lang.
Lange, U. (2012). Strategien für das wissenschaftliche Schreiben in mehrsprachigen Umgebungen: Eine didaktische Analyse. In D. Knorr & A. Verhein-Jarren (Eds.), Schreiben unter Bedingungen von Mehrsprachigkeit (pp. 139–155). Frankfurt: Peter Lang.
Lasagabaster, D. (2015). Multilingualism at tertiary level: Achievements and challenges. In A. Knapp & K. Aguado (Eds.), Fremdsprachen in Studium und Lehre – Chancen und Herausforderungen für den Wissenserwerb (pp. 47–68). Frankfurt: Peter Lang.
Leki, I., Cumming, A., & Silva, T. (2008). A synthesis of research on second language writing in English. New York: Routledge.
Levelt, W. J. M. (1989). Speaking: From intention to articulation. Cambridge: MIT Press.
Levi-Keren, M. (2008). Factors explaining biases in mathematics tests among immigrant students in Israel. PhD thesis (in Hebrew). Tel Aviv University, Tel Aviv.
Liu, Y. (2009). Translation in second language writing: Exploration of cognitive process of translation. Saarbrücken: VDM Publishing.
Logan-Terry, A., & Wright, L. (2010). Making thinking visible: An analysis of English language learners’ interactions with access-based science assessment items. AccELLerate!, 2(4), 11–14.
Macaro, E. (2014). Reframing task performance: The relationship between tasks, strategic behaviour, and linguistic knowledge in writing. In H. Byrnes & R. Manchón (Eds.), Task-based language learning: Insights from and for L2 writing (pp. 53–77). Amsterdam: John Benjamins.
Manchón, R., Roca de Larios, J., & Murphy, L. (2000). An approximation to the study of backtracking in L2 writing. Learning and Instruction, 10(1), 13–35.
Mandelblit, N. (1995). The cognitive view of metaphor and its implications for translation theory. In B. Lewandowska-Tomaszczyk & M. Thelen (Eds.), Translation and meaning (pp. 483–495). Maastricht: Hoogeschool Maastricht.
Martin-Jones, M., & Jones, K. (Eds.). (2000). Multilingual literacies: Comparative perspectives on research and practice. Amsterdam: John Benjamins.
Muchisky, D., & Tangren, N. (1999). Immigrant student performance in an academic intensive English program. In L. Harklau, K. Losey, & M. Siegal (Eds.), Generation 1.5 meets college composition (pp. 211–234). Mahwah: Lawrence Erlbaum.
Nord, C. (1993). Einführung in das funktionale Übersetzen: Am Beispiel von Titeln und Überschriften. Tübingen: Francke.
Ortega, L., & Carson, J. (2010). Multicompetence, social context, and L2 writing research praxis. In T. Silva & P. K. Matsuda (Eds.), Practicing theory in second language writing (pp. 48–71). West Lafayette: Parlor Press.
Perani, D., Dehaene, S., Grassi, F., Cohen, L., Cappa, S. F., Dupoux, E., . . . Mehler, J. (1996). Brain processing of native and foreign languages. NeuroReport, 7(15–17), 2439–2444.
Perkins, D. N., & Salomon, G. (1988). Teaching for transfer. Educational Leadership, 46(1), 22–32.
Perkins, D. N., & Salomon, G. (n.d.). The science and art of transfer. Retrieved August 17, 2014, from http://learnweb.harvard.edu/alps/thinking/docs/trancost.htm
Piske, T. (2015). Zum Erwerb der CLIL-Fremdsprache. In B. Rüschoff, J. Sudhoff, & D. Wolff (Eds.), CLIL revisited: Eine kritische Analyse zum gegenwärtigen Stand des bilingualen Sachfachunterrichts (pp. 101–125). Frankfurt: Peter Lang.
Pohl, T. (2007). Studien zur Ontogenese wissenschaftlichen Schreibens. Tübingen: Niemeyer.
Qi, D. S. (1998). An inquiry into language-switching in second language composing processes. Canadian Modern Language Review, 54(3), 413–435.
Qi, D. S., & Lapkin, S. (2001). Exploring the role of noticing in a three-stage second language writing task. Journal of Second Language Writing, 10(4), 277–303.
Ransdell, S., & Barbier, M.-L. (Eds.). (2002). New directions for research in L2 writing. Dordrecht: Kluwer Academic.
Reiss, K., & Vermeer, H. J. (1984). Grundlegung einer allgemeinen Translationstheorie. Tübingen: Niemeyer.
Rijlaarsdam, G. (2002). Preface. In S. Ransdell & M.-L. Barbier (Eds.), New directions for research in L2 writing (p. ix). Dordrecht: Kluwer Academic.
Roca de Larios, J., Murphy, L., & Manchón, R. M. (1999). The use of restructuring strategies in EFL writing: A study of Spanish learners of English as a foreign language. Journal of Second Language Writing, 8(1), 13–44.
Roca de Larios, J., Murphy, L., & Marín, J. (2002). A critical examination of L2 writing process research. In S. Ransdell & M.-L. Barbier (Eds.), New directions for research in L2 writing (pp. 11–47). Dordrecht: Kluwer Academic.
Roca de Larios, J., Manchón, R. M., & Murphy, L. (2006). Generating text in native and foreign language writing: A temporal analysis of problem-solving formulation processes. The Modern Language Journal, 90(1), 100–114.
Rüschoff, B., Sudhoff, J., & Wolff, D. (Eds.). (2015). CLIL revisited: Eine kritische Analyse zum gegenwärtigen Stand des bilingualen Sachfachunterrichts. Frankfurt: Peter Lang.
Sasaki, M. (2000). Toward an empirical model of EFL writing processes: An exploratory study. Journal of Second Language Writing, 9(3), 259–291.
Sasaki, M. (2002). Building an empirically-based model of EFL learners’ writing processes. In S. Ransdell & M.-L. Barbier (Eds.), New directions for research in L2 writing (pp. 49–80). Dordrecht: Kluwer Academic.
Sasaki, M. (2004). A multiple-data analysis of the 3.5-year development of EFL student writers. Language Learning, 54(3), 525–582.
Sasaki, M., & Hirose, K. (1996). Explanatory variables for EFL students’ expository writing. Language Learning, 46(1), 137–174.
Schmidt, R. W. (1990). The role of consciousness in second language learning. Applied Linguistics, 11(2), 129–158.
Schoonen, R., van Gelderen, A., de Glopper, K., Hulstijn, J., Simis, A., Snellings, P., & Stevenson, M. (2003). First language and second language writing: The role of linguistic knowledge, speed of processing, and metacognitive knowledge. Language Learning, 53(1), 165–202.
Shi, L. (2003). Writing in two cultures: Chinese professors return from the West. Canadian Modern Language Review, 59(3), 369–392.
Shohamy, E. (2013). A critical perspective on the use of English as a medium of instruction at universities. In A. Doiz, D. Lasagabaster, & J. M. Sierra (Eds.), English-medium instruction at universities: Global challenges (pp. 196–210). Bristol: Multilingual Matters.
Silva, T. (1992). L1 vs L2 writing: ESL graduate students’ perceptions. TESL Canada Journal, 10(1), 27–47.
Smit, U. (2015). CLIL und der tertiäre Sektor. In B. Rüschoff, J. Sudhoff, & D. Wolff (Eds.), CLIL revisited: Eine kritische Analyse zum gegenwärtigen Stand des bilingualen Sachfachunterrichts (pp. 75–98). Frankfurt: Peter Lang.
Steinhoff, T. (2007). Wissenschaftliche Textkompetenz: Sprachgebrauch und Schreibentwicklung in wissenschaftlichen Texten von Studenten und Experten. Tübingen: Niemeyer.
Tang, R. (2012). Two sides of the same coin: Challenges and opportunities for scholars from EFL backgrounds. In R. Tang (Ed.), Academic writing in a second or foreign language: Issues and challenges facing ESL/EFL academic writers in higher education contexts (pp. 204–232). London: Bloomsbury.
Turnbull, M., & Dailey-O’Cain, J. (2009a). Concluding reflections: Moving forward. In M. Turnbull & J. Dailey-O’Cain (Eds.), First language use in second and foreign language learning. Bristol: Multilingual Matters.
Turnbull, M., & Dailey-O’Cain, J. (2009b). Introduction. In M. Turnbull & J. Dailey-O’Cain (Eds.), First language use in second and foreign language learning. Bristol: Multilingual Matters.
Uzawa, K. (1994). Translation, L1 writing, and L2 writing of Japanese ESL learners. Journal of the Canadian Association of Applied Linguistics, 16(2), 119–134.
Uzawa, K. (1996). Second language learners’ processes of L1 writing, L2 writing, and translation from L1 into L2. Journal of Second Language Writing, 5(3), 271–294.
Uzawa, K., & Cumming, A. (1989). Writing strategies in Japanese as a foreign language: Lowering or keeping up the standards. Canadian Modern Language Review, 46(1), 178–194.
van der Walt, C. (2013). Multilingual higher education: Beyond English medium orientations. Bristol: Multilingual Matters.
Vermeer, H. J. (1978). Ein Rahmen für eine allgemeine Translationstheorie. Lebende Sprachen, 3, 99–102.
Wächter, B., & Maiworm, F. (Eds.). (2014). English-taught programmes in European higher education: The state of play. Bonn: Lemmens Medien.
Wang, W., & Wen, Q. (2002). L1 use in the L2 composing process: An exploratory study of 16 Chinese EFL writers. Journal of Second Language Writing, 11(3), 225–246.
Whalen, K., & Menard, N. (1995). L1 and L2 writers’ strategic and linguistic knowledge: A model of multiple-level discourse processing. Language Learning, 45(3), 381–418.
Williams, C. (1994). Arfarniad o ddulliau dysgu ac addysgu yng nghyd-destun addysg uwchradd ddwyieithog. PhD thesis. University of Wales, Bangor.
Wolff, D., & Sudhoff, J. (2015). Zur Definition des bilingualen Lehrens und Lernens. In B. Rüschoff, J. Sudhoff, & D. Wolff (Eds.), CLIL revisited: Eine kritische Analyse zum gegenwärtigen Stand des bilingualen Sachfachunterrichts (pp. 9–39). Frankfurt: Peter Lang.
Woodall, B. R. (2002). Language-switching: Using the first language while writing in a second language. Journal of Second Language Writing, 11(1), 7–28.

Index

A
Activation threshold, 16, 29, 30
Annotation, 37, 53, 122, 157
Automatic processes, 7, 51, 63, 146
Automatic speech recognition, 52

B
Bilingual brain, 7, 15, 16, 114
Bilingual knowledge, 40, 41, 45
Bilingual lexicon, 26–28
Bilingual mental lexicon, 25, 26, 53, 55
Bilingual mind, 8, 52, 55
Bilingual processing, 15–17, 24, 26, 31, 37, 40, 41, 44–46
Brain regions, 26, 43, 44, 110, 115, 128
Brain structure, 42

C
CASMACAT, 157, 161
Center for Research and Innovation in Translation and Translation Technology (CRITT), 75, 161
Choice Network Analysis, 58, 145, 146, 154, 155
Co-activation, 11, 51
Cognitive effort, 49, 122, 139, 140, 142, 145, 146, 149, 151, 153–155, 157
Cognitive processes, 31
Cognitive processing, 40, 63, 153, 154, 156, 184, 185
Computational model, 49
Computational theory, 16, 20, 50
Conceptual encodings, 63
Conceptual mediation, 34, 37, 40
Conceptual node, 53
Continuity Hypothesis, 26
Corpus technology, 15
Co-text, 56, 65
Cross-cultural, 11, 185
Cross-linguistic, 11, 20, 185

D
Default strategy, 145, 146
Domain competence, 62, 185, 187
Domain knowledge, 154, 184, 186
Drafting, 148

E
Electroencephalography (EEG), 15, 110
Encyclopaedic knowledge, 8, 24, 29
Error analysis, 150, 181
External resources, 50, 63
Eye-mind, 156
Eye-to-IT, 156
Eye tracking, 6, 121, 146, 147, 154, 156–158

F
First language, 169, 186
Fixation count, 147, 156, 157
Fixation duration, 148, 156, 157
Formal correspondence, 62, 145
From-scratch translation, 50, 57, 59
Functional magnetic resonance imaging (fMRI), 7, 15, 109, 110, 122, 123, 134
Functional near-infrared spectroscopy (fNIRS), 109–115

G
Gaze time, 6, 147, 155, 156

H
Hidden Markov models, 51
Horizontal translation, 39, 40
Human-computer interaction, 143
Human translation processes, 50, 52, 53
Hybrid encodings, 63

I
Idiomatic expressions, 23, 55, 56
Information density, 147
Inputlog, 156
Interpretability Hypothesis, 26

K
Key-logging, 6, 15, 122
Keystroke analysis, 157
Keystroke-based segmentation, 73, 86, 89

L
Language Faculty, 16–21, 24, 25, 28, 30, 31, 35, 36, 39, 43–45
Lexical choice, 58, 65
Lexical memory, 21, 45
Linguistic-cognitive orientation, 4, 12
Literal translation, 56, 145, 146, 180

M
Machine translation output, 54, 149
Manual analysis, 75, 155
Mental effort, 74, 127, 140–143
Minimal translatable units, 105
Monolingual lexicon, 27
Monolingual processing, 30, 38
MT systems, 64, 149, 150
Multi-channel systems, 112, 113
Multi-competence theory, 174

N
NASA-TLX, 141, 142, 148, 159
Neurocognitive bilingualism, 15, 16, 24, 27
Neuroimaging, 42, 43, 109–112, 122
Neuroscience of translation, 135
n-gram, 150
Noisy channel model, 50, 52–55, 64
Non-invasive, 109–111
Non-language, 30
Non-literalness, 147
Non-structure-routed, 37, 38

P
Parallel corpus, 41
Pause duration, 74, 75, 148, 154, 155
Pausing, 155
Peak performance, 71
Positron emission tomography, 7, 109
Post-editing effort, 151, 153, 154, 157, 160
Post-editors, 50, 52, 55, 57, 61, 64, 149, 152, 153, 155, 160
Priming effects, 55–58, 64
Principle of least effort, 146
Procedural encodings, 50, 63
Process data, 50, 51, 63, 145
Processing economy, 16, 36, 40
Processing effort, 60, 62, 63, 127
Processing segment, 73
Processing time, 146, 148
Processing units, 74, 89, 106, 107, 122
Professional interpreters, 115
Professional translators, 46, 56, 72, 131, 144, 146, 156, 190, 192

Q
Quality estimation, 150

R
Readability, 144, 146, 147, 153
Relevance theory (RT), 49, 60, 64, 123
Revision, 43, 52, 139, 149, 173

S
Screen recording, 6, 121
Second language, 25, 170, 175, 187
Segment boundary, 73, 74, 77, 78, 82
Semantic representation, 6, 17, 37
Shallow Structure Hypothesis, 26
Source language, 9, 20, 26, 31, 32, 52, 56, 60, 61, 151
Statistical model, 51, 52
Structural reconstruction, 21, 33
Syntactic entropy, 59
Syntactic parsing, 30, 53
Syntactic priming, 60, 107
Syntactic processing, 25, 26, 30, 43
Syntactic structure, 43, 58, 60

T
Target language, 9, 10, 20, 26, 31, 33, 45, 46, 51, 53, 56–59, 61, 62, 64, 121, 147, 150
Technical effort, 151, 157
Temporal effort, 151, 154
Theory of mind, 123, 128
Think-aloud protocols, 121, 135, 173, 189
Three-Store Hypothesis, 27
Tightrope hypothesis, 142
Total reading time, 58, 59
Translation Edit Rate (TER), 156
Translation error, 160
Translation literality, 56, 64
Translation model, 11, 53, 54
Translation problem, 62
Translation process research database (TPR-DB), 160
Translation process research (TPR), 4–6, 62, 121, 123, 139, 144, 152, 156, 160
Translation strategies, 9, 50, 52, 64, 183
Translation students, 56, 131
Translation studies, 3, 4, 6–8, 11, 12, 134, 135, 190
Translator’s mind, 6, 7, 51, 74, 78, 100
Translog, 122, 154–156, 161
Typing pause, 81, 105

V
Vertical translation, 31, 40

W
Word order, 56, 58, 63, 182, 183, 192
Word retrieval, 43, 44
Writing expertise, 174, 175
