[New publication] Linguistica Antverpiensia, Vol 17 (2018): Methods for the Study of Multimodality in Translation

Methods for the Study of Multimodality in Translation

Edited by Anne Maria Ketola, Tiina Tuominen, Catalina Jiménez Hurtado


Introduction – Why methods matter: Approaching multimodality in translation research, by Anne Maria Ketola, Tiina Tuominen, Catalina Jiménez Hurtado

Abstract: The study of multimodal phenomena calls upon translation scholars to cross disciplinary boundaries and adopt a range of theoretical and methodological approaches. The diversity of the multimodal landscape brings about research challenges that must be carefully addressed to ensure that these research efforts yield useful and credible results. This special issue is dedicated to a discussion on how to engage in multimodal translation research: how traditional research methods can be adapted and what kinds of novel approach can be adopted or developed in order to deal with a diversity of multimodal data. In this introduction, we first discuss definitions of mode and multimodality and reflect on the nature of multimodality as a topic of research within Translation Studies. We then explain our rationale for dedicating the special issue to research methods and introduce three areas of multimodal translation research that, in our view, merit particular attention from a methodological point of view. Finally, we introduce the articles contained in this special issue.

Tracking multimodal cohesion in Audio Description: Examples from a Dutch audio description corpus, by Nina Reviers

Abstract: One of the main questions addressed by multimodality research, one of the main conceptual frameworks for analysing audiovisual texts, is how the different modes of audiovisual texts (visual, verbal and aural) combine to create supplementary meaning, over and above the meanings conveyed by the individual constituents. Ensuring that this multimodal interaction, or multimodal cohesion, remains intact is a key challenge in the practice of audiovisual translation (AVT), and particularly in Audio Description (AD) for the blind and visually impaired. The present article therefore studies the functioning of multimodal cohesion in audio-described texts by analysing the types of interaction between descriptive units and sound effects in a selection of Dutch audio-described films and series. The article begins with a detailed description of the methodology, which is based on multimodal transcription, and concludes with an overview of the types of multimodal cohesive relations identified.

A multimodal analysis of political satire: Webcomics and GIFs in Post-Arab Spring Egypt, by Hanem El-Farahaty

Abstract: The Egyptian Revolution of 2011 and the protests known as “the Arab Spring” resulted in an increase in the number of comic writers and amateurs who use specialist software to mix modes digitally for novel sign-making (Gursimesk, 2016; Lankshear & Knobel, 2008). These people draw creative forms of satire, including but not limited to political webcomics and Graphics Interchange Format images (GIFs), which exist alongside political cartoons. This article applies a multimodal analysis to a sample of Egyptian webcomics and GIFs in order to highlight how visual design elements complement each other, in an effort to support translators and/or readers/viewers in their interpretation of the images and, by extension, in their ability to make sense of reality. With this aim in mind, the article combines Serafini’s (2010) tripartite approach to perception, structure and ideology; Kress and van Leeuwen’s (2006) discussion of the elements of visual design; and Genette and Maclean’s (1991) approach to paratext. The analysis is also informed by interviews with the founders and co-founders of two Facebook pages, and with a cartoonist working for the Almasry Alyoum newspaper. The article first briefly discusses the translation of comics, webcomics and GIFs, particularly the challenges of translating political webcomics and GIFs into English. The theoretical framework and a case study are then discussed in the second part.

Communicative functions and characterization in the Chinese graphic novel adaptations of “Little Women”, by Tzu-yi Lee

Abstract: This article attempts to use an integrated theoretical framework to examine three graphic novels published in Taiwan that are based on Louisa May Alcott’s Little Women. Following the concept of intertextuality (Kristeva, 1980), Kress and van Leeuwen’s (1996) visual grammar helps us to explore how the visual semiotic resources in the three Taiwanese graphic novel editions of Little Women ascribe meanings, achieve functions and communicate ideologies to enhance the characterization of Jo March. Progressing from the images, image–text relationships are discussed in terms of how these signifiers shape signs and meaning. Finally, by exploring speech or thought representation, the researcher investigates the translators’ awareness of the readers and situates their use of signs socially and culturally. In this study, the integrated framework reveals how these semiotic resources characterize Jo March differently, and the double meanings (Kristeva, 1980, pp. 65–66) they present. These characterizations may, in turn, have affected the communicative function of each version. The study concludes with a discussion of the strengths and weaknesses of these applications.

From paratext to polysemiotic network: A holistic approach to the study of subtitled films, by Hannah Silvester

Abstract: This article proposes a framework for the study of interlingually subtitled films which draws on aspects of existing linguistic- and multimodal-analytical approaches. The methodology comprises contextual analysis at three levels. It proposes an initial study of the subtitling situation and the technical aspects of the films. It then recommends an analysis of the plot, director interviews, and reviews or articles written upon the release of the films in order to identify their key themes and skopos. Finally, in the light of the information revealed in these stages of analysis, it proposes a close examination of the subtitles in context, in relation to the skopos of the films and taking into account their key themes and the cultural context(s) of their release. This allows for an analysis of subtitles which incorporates a variety of factors affecting the context of reception, integrating the multimodal nature of subtitling and the significance of cultural context and readability. Illustrative examples are taken from a case study of an English-subtitled version of the French film “La Squale”.

Translating multimodal texts in space: A case study of St Mungo Museum of Religious Life and Art, by Min-Hsiu Liao

Abstract: Multimodality has received considerable critical attention in Translation Studies over the past decades. However, how translations interact with or within three-dimensional material space is still under-researched. This article proposes to use the study of geosemiotics (Scollon & Scollon, 2003) as the theoretical framework within which to explore this new territory. The case study was carried out at the St Mungo Museum of Religious Life and Art in Glasgow. The multimodal analysis divides museum space into four ranks: the museum surroundings, the museum building, the museum exhibition and the museum objects. The findings reveal that the translated exhibition texts interact with the four ranks of spaces to consistently minimize the narratives of Christian heritage in Glasgow and manifest a multi-religious and multi-ethnic Scottish identity. This study demonstrates how the (non-)provision of translations has the potential to influence the construction of in-place meaning in the multimodal museum space.

Multimodality and dubbing in video games: A research approach, by Laura Mejías-Climent

Abstract: This article discusses a new taxonomy of dubbing synchronies in the Spanish localized version of the video game Batman: Arkham Knight. To do so, the concept of the video game is first reviewed as the most sophisticated example of audiovisual text in terms of its multimodal nature. The article also describes how the message is conveyed through the different communication channels that form part of its semiotic structure. Next, the proposed research approach, based on the multimodal configuration of video games, is presented: game situations serve as the unit of analysis in this approach, since they are a basic unit with which to organize the content of a game and they distinguish video games as a unique type of audiovisual text. These game situations are used to analyse the specific features of dubbing and the types of synchrony. Finally, some results are offered to confirm that the taxonomy of dubbing in video games discussed here applies to this particular case.

Concept Selection and Translation Strategy: Subtitling for the Deaf based on Corpus Analysis, by Catalina Jiménez Hurtado, Silvia Martínez Martínez

Abstract: In the past 20 years, corpus analysis has been applied to different translation modalities. This study used an annotated multimodal corpus of 52 international films of different genres, which had been dubbed into Spanish and subtitled for Spanish Deaf and Hard-of-Hearing (DHH) viewers according to the AENOR UNE 153010 (2012) standard. The corpus was annotated at two levels. At the first level, we annotated the information that professional subtitlers selected from the audio mode of the source text to translate into subtitles. At the second level, we captured information regarding the translation strategies used. This allowed us to analyse the translation process and to identify the translation preferences of professional subtitlers. Our first objective was to show how corpus analysis can be applied to the study of multimodal texts. The second objective was to provide valuable insights into the understanding, description and specification of the conceptual and epistemological nature of subtitling for the DHH.

Paintings to my ears: A method of studying subjectivity in audio description for art museums, by M. Olalla Luque, Silvia Soler Gallego

Abstract: An increasing number of museums are using audio description (AD) to provide access for visually impaired visitors to their collections by means of a wide range of mobile devices and apps. However, research in this field is still limited. The present article proposes a methodological approach that combines interviews and corpus analysis for studying audio-descriptive guides of art museums. More specifically, the proposed methodology focuses on the study of subjectivity, which has become a controversial aspect of AD in the past few years and the focus of a number of studies. The article offers a step-by-step description of the methodology and illustrates it with data from a study carried out by the authors on this type of multimodal communication.

How are translations created? Using multimodal conversation analysis to study a team translation process, by Liisa Tiittula, Maija Hirvonen

Abstract: This article demonstrates a methodology for studying the translation process from the perspective of multimodal social interaction and applies this methodology to a case analysis of collaborative audio description. The methodology is multimodal conversation analysis, which aims to uncover the way in which multimodal communication resources (e.g., talk, gaze, gestures) are used in a holistic and situated manner in building human action. Being empirical and data-driven, multimodal conversation analysis observes human conduct in its natural setting. This article analyses video data from an authentic audio-description process and presents the multimodal constitution of problem-solving sequences during translation. In addition, the article discusses issues regarding the methodological choices facing researchers who are interested in human interaction in translation. The article shows that applying multimodal conversation analysis opens new avenues for research into the translation process and collaborative translation.

Visual metaphors in medical knowledge representation, by Maribel Tercedor Sánchez, Alicia Casado Valenzuela

Abstract: Understanding the role of mental images and embodiment in metaphorical thought is fundamental to the study of metaphor in science in general, and in medicine in particular. Analysing typologies of metaphorical images and their functions is likewise a key issue in assessing their success in disseminating knowledge, since knowledge construction depends on the interaction between verbal and visual information (Ketola, 2016). In this article, we describe the premises and methodological steps followed in analysing and describing visual metaphorical information in medical texts for lay audiences. We follow a data-driven approach in which images were extracted from the VariMed database, a multimodal terminographical tool for translation, linguistic research and knowledge dissemination. An experiment was carried out to explore how successfully metaphorical images used as illustrations of specific medical concepts were identified and understood. We conclude that metaphorical images were preferred over non-metaphorical illustrations of medical concepts, which may point to lay audiences’ familiarity with metaphorical multimodal references.

Encouraging legibility and comprehensibility through multimodal patient information guides, by Juan Antonio Prieto-Velasco, Vicent Montalt-Resurrecció

Abstract: Medical texts intended for patients are a key instrument in doctor–patient communication. Through a process of heterofunctional translation, they can be adapted to the needs and expectations of their recipients. Most techniques aimed at making reading and understanding easier are linguistic in nature, and little attention has been paid to the role played by visuals. Through a questionnaire and a focus group, this pilot study explored patients’ perception and reception of images in a patient information guide. Our main finding was that visuals depicting medical concepts can provide graphical support for unfamiliar concepts and encourage the comprehension of texts aimed at patients. The most useful type of visual appears to be simplified images that describe concepts clearly and prevent recipients from recalling unpleasant experiences.

The relevance of redundancy in multimodal documents, by Olli Philippe Lautenbacher

Abstract: The aim of this article is to refine the role of redundancy in deferred multimodal communication, from the standpoint of both communicators and their audiences and, by extension, translators. What is advocated here is the idea that reading is a recursive process consisting of three phases (perception, construction and integration) and that this process is based on the detection of a series of salient trigger stimuli that the communicator offers as incentives. The shared ground of significance of these trigger compounds actually reveals core meanings in the document, especially when there is exophoric reference. In the translation process, any change within this redundancy system, such as a modification in the balance between endophora and exophora, might alter the overall reception experience.

Images without words: The focus group as a method of examining the reading experience of comics, by Eliisa Pitkäsalo

Abstract: This article presents a pilot study conducted in Hungary and Italy in the autumn of 2015. The study had two goals: to test focus-group discussion as a method of collecting data on a multimodal reading experience and to collect information on participants’ reading experience with comics, with an emphasis on their ability to understand the visual language of comics. The study examined the importance of the verbal mode in understanding non-verbal messages, and the role that cultural background and the competence to read comics play in participants’ interpretations of the meanings of the visual content of comics. In addition to answering the main research questions, the article reflects on focus groups as a data-collection method. The study indicated that cultural background, including a tradition of reading comics, may enhance participants’ understanding of the visual content of comics, which is also an important aspect that the translator has to take into consideration.

Methodological contributions of Social Neuroscience for the study of multimodality in translation, by Antonio Javier Chica Núñez

Abstract: This study analyses the interplay of various communication modes that enables emotions to be transmitted efficiently from the source text (ST) to the target text (TT) in audio description (AD) as a multimodal text. It draws on existing experimental designs, including neutral or emotional conditions based on the congruency of stimuli from various modes (images, dialogue semantic content or prosody in a film, together with the semantic content of the AD). The article reviews the methodological contribution that Social Neuroscience could make to the study of multimodal translation. To this end, some neurobiological models and studies are cited regarding multimodal emotional information processing (Brück, Kreifelts, & Wildgruber, 2011), the impact of multimodal emotional processing on subjects’ empathy (Regenbogen et al., 2012) and the dynamics of the neural networks involved in human empathy and communication through the presentation of multimodal stimuli (Regenbogen, Habel, & Kellerman, 2013). Finally, an experimental design that focuses on the transfer of feelings and emotions in film AD, and which would be suitable for a potential pilot study, is presented.