Sunday, June 2, 2013

I can't believe I didn't post this here earlier. This is a video I created to help interpreters, coaches, parents, and players involved with Deaf rugby. Whether someone is coming to work with an all-Deaf team like the All Deaf or working with one Deaf player on a hearing team, my hope is that this tutorial will help with rugby-specific vocabulary. The video has voiceover for the ASL impaired.

Wednesday, January 16, 2013

Differences in Construal of Constructed Dialogue Between an ASL Lecture and an English Interpretation: A squib

I wrote this as a project for a class on cognitive linguistics. I think cognitive linguistics has a significant application to interpreting studies in terms of cognitive processing. Because this was written for an audience already familiar with cog-lin and other ASL linguistic concepts, I don't do a ton of explaining in this paper. There are a lot of good online resources covering the basic concepts of construal, and many of the texts in the bibliography are available as free Google books. One thing to note is that words between vertical bars, like | this |, represent concepts and depictions evoked by the word rather than the word itself. For example, | conversation | indicates a depicted construal of a conversation rather than an actual conversation.

  Introduction


            One of the basic theories of cognitive grammar (Langacker, 2008; Taylor, 2002; Croft and Cruse, 2004) is that of content and construal, where content represents the truth conditions of a circumstance and construal is "our ability to conceive and portray the same situation in alternate ways" (Langacker, 2008, p. 43). In language, constructed dialogue (CD) is used to report words spoken by another person, or words spoken at a time other than the present. Different languages use CD differently. Constructed dialogue has been shown to be an integral feature in all types of American Sign Language discourse (Quinto-Pozos, 2007), while it is less common in English. This difference in the use of CD is especially stark in formal lectures, where CD is still commonly used in ASL (Roy, 1989) but is rarely used in English (Tannen, 1986; Shaw, 1987). The brief study below examines one interpretation of an ASL lecture into English and applies the theory of construal in examining how the interpreter renders the content of the lecture when it is presented as CD in ASL.

The Study


Data

            The data for this study consisted of one 14-minute lecture presented in ASL and interpreted into English. The presenter is a Gallaudet University employee. The interpreter is a master's student in the Department of Interpretation in her fourth semester of study. The lecture was about mentoring. This is important in that most of the constructed dialogue from the Deaf presenter involved | participants | of whom one had more power (the mentor) and one had less power (the person being mentored). How this may have impacted the interpreter's decisions is discussed below. Another aspect to keep in mind is that all of the constructed dialogue in the data represents hypothetical reported speech. That is, in all cases the | participants | are not meant to represent actual named people, and their | conversation | is not meant to be taken as anything actually uttered by any real-world referential entities.

Analysis

            The data was analyzed for general instances of constructed dialogue. I did not attempt a detailed analysis, as I was looking for general trends. For example, each segment of constructed dialogue was coded as one instance from when the presenter started the depiction until she returned to her presenter role, regardless of how many turns of conversation were depicted between | participants |.

Results

During the presentation the presenter produced 20 instances of constructed dialogue. Of these 20, the interpreter used constructed dialogue in her interpretation 8 times. In 6 of these cases the interpreter presented the constructed dialogue from the perspective of the person in the less powerful position. In the other 12 instances the interpreter either omitted the constructed dialogue, producing an interpretation that made no mention of the fact that CD had been present in the source message (4 times), or used indirect reported speech, relaying the result of the depicted conversation (8 times).

CD in ASL: 20
CD in English: 8
Omission: 4
3rd Party Reported Speech: 8
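
As a quick illustration of how this tally could be reproduced, here is a minimal Python sketch. The label names are invented shorthand for the three renderings described above, not coding labels from the study; the counts are simply those reported in the table.

```python
from collections import Counter

# One label per source-language instance of constructed dialogue (CD),
# recording how the interpreter rendered it. Counts mirror the table above;
# the label names are invented shorthand.
renderings = (
    ["CD"] * 8            # rendered as constructed dialogue in English
    + ["omission"] * 4    # CD omitted from the interpretation
    + ["indirect"] * 8    # relayed as indirect (3rd party) reported speech
)

counts = Counter(renderings)
total = len(renderings)

for category in ("CD", "omission", "indirect"):
    n = counts[category]
    print(f"{category}: {n}/{total} ({n / total:.0%})")
```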

Discussion

            The results showing that the interpreter used constructed dialogue are not a surprise. Previous research (Nilsson, 2010; DeMeo, 2012), along with knowledge of some interpreter training curricula, leads us to expect this result. I would like to examine how the differences between the two texts can be explained in terms of construal. There are differences in specificity, in focusing, and in foreground vs. background.

Difference in Specificity

            Specificity refers to the level of detail in an expression (Langacker, 2008). In this case the ASL lecture contains greater specificity in that it shows actual dialogue between two | participants |. In contrast, the interpreted text shows less specificity, either by omitting the dialogue entirely in extreme cases, or by indicating that a conversation takes place between | participants | but, instead of providing the same type of dialogue, telling the audience the result of the | conversation |. This difference in specificity is diagrammed below.

[Diagram: the difference in specificity between the ASL source construal and the English interpretation]

Focusing and Foreground vs. Background

Other concepts that I find relevant to this discussion are those of focusing and backgrounding vs. foregrounding. According to Langacker, "Through linguistic expressions, we access particular portions of our conceptual universe. The dimension of construal referred to here as focusing includes the selection of conceptual content for linguistic presentation, as well as its arrangement into what can broadly be described (metaphorically) as foreground vs. background" (2008, p. 53). In the case of constructed dialogue in ASL the focus is not only on the content of the | conversation | but also on the | participants |. This means that how the | conversation | is relayed is nearly as important as the end result. An audience fluent in ASL sees a depiction of the manner in which the | participants | interact, which tells the audience something about the speaker's perception of both the | participants | and the | conversation |. Conversely, when the interpreter chooses to relay only the result of the conversation, the result becomes the focus rather than the content and the | participants |. When this happens the content is pushed farther into the background and the | participants | become more peripheral.

Possible Reasons for the Interpreter’s Decisions

This relates back to Nilsson's (2010) assertion that instances of constructed dialogue are so densely packed with information that it is difficult for an interpreter working into a linear spoken language to convey all of it under the time pressure found in simultaneous interpretation. It seems that when there are multiple turns of conversation within an instance of constructed dialogue, an interpreter is more likely to shift to reporting the outcome of the conversation. In the data for this study the interpreter never relays more than one turn of constructed dialogue while interpreting any one instance of CD.
There are several possible reasons for this. First, it can be difficult for interpreters to relay multiple turns of constructed dialogue because of the difference in time needed to introduce each speaker. In ASL the switch can be done with shifts of body position, head position, or eye gaze. In English, establishing who is speaking generally calls for an introductory phrase like, "Then the supervisor says…" If there are multiple turns of constructed dialogue it may not be feasible for the interpreter to keep up, given the inherent lag coupled with the longer production time. In these cases it is more expedient, in terms of both production and processing, to introduce the | participants | and then provide only the result of the conversation.
Another possibility is that while constructed dialogue is often found in spoken English, it is typically found in narrative storytelling and is not as heavily employed in formal registers (Tannen, 1986; Shaw, 1987). The data for this study comprised a mock formal setting of a lecture within the actual formal setting of a comprehensive exam. Through both frames (as described in Metzger, 1999) the interpreter could have felt influenced to produce an interpretation that adheres to the expectations of formal English discourse.


Possible Influence of Power Relationships

            As noted above, when the interpreter did employ constructed dialogue she relayed the dialogue of the person with less power 6 of 8 times. While there are many possible explanations for this pattern, the observations above about the perceived formality of constructed dialogue in formal English seem plausible here. In talking with faculty and students in the Department of Interpretation, I found that some English speakers associate the use of CD in English with younger speakers engaged in narratives; the image of the chatty teenager was raised by several informants. With this in mind it is worth noting that the interpreter overwhelmingly used CD to represent the speech of the | person being mentored |. In terms of conceptualization, for both the general American public and for the Deaf presenter, the person being mentored is seen as younger. Also, according to Tannen (1995), the person seeking information is often seen as being in a less powerful, one-down position. This conceptualization of the | person being mentored | could lead the interpreter to allow herself to employ CD to represent this | person's | speech.

Possible Net Result of Construal on an Audience

            In examining the differences in construal of constructed dialogue between the ASL presentation and the interpretation, it is helpful to think of the impact of construal on a mixed audience of people who know ASL and people who do not. Assuming, as is the case in the data for this study, that the interpretation reaches a certain threshold of factual equivalence, the two groups that comprise the audience are exposed to the same truth conditions. It is plausible to think that if they were given a questionnaire on the topic and facts of the lecture, the groups would score similarly. However, their experiences of the lecture will have been different. This is not only because of the differences in the linguistic resources available to the two languages, but also because of how those resources are typically employed in a given setting. As we have seen above, ASL appears to require the use of constructed dialogue, while English is more limited in terms of how, where, and how much constructed dialogue is employed. Thus the non-signing audience receives the result of the depicted interaction but little if any of the manner in which that result is derived. The non-signing audience does not get the same sense of how accommodating the | mentor | is, nor does it see the level of trepidation on the part of the | person being mentored |. While some of these affectual features are evident in the interpreter's use of constructed dialogue through ventriloquizing (Tannen, 2010), the fact that the interpreter employs this tactic far less than the presenter creates this difference in construal. Even if the interpreter did endeavor to present more formal equivalence in her product, the experience of the lecture for the non-signing audience would still not match that of the ASL-fluent audience. As noted above, frequent use of constructed dialogue is not common in a formal English presentation and could skew the contextual force (Isham, 1986) for the non-signers. In this case their experience of the speaker's approach to the discourse would be more similar in terms of form, but their perception of the speaker herself (in terms of formality, appropriateness, adherence to convention, etc.) would likely be different. Thus, the interpreter's goal of providing a text that matches audience expectations draws the construal of the lecture further from that of the original source.


Conclusion
           
            Though audiences attending to the ASL lecture or the English interpretation are exposed to the same truth conditions, they have different experiences of the presentation and are left with different impressions of the speaker's style. Future research in this area could explore differences in audience perception of the presenter when the interpreter tries to produce a product closer in formal equivalence than they usually would. I say "closer" because, in terms of construal, no two descriptions of any event will be exactly the same even if they are produced in the same language; it is therefore impossible to have the same construal of any content in a dual-language situation. Other research might seek to explore whether there is a pattern of construal that can be found across multiple interpretations of a text. For example, do specific ASL expressions of CD trigger a CD or non-CD interpretation more often than others? This type of study could help identify discourse features that are commonly accepted as appropriate for CD in an English interpretation. Such knowledge could aid future interpreter education.
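
One way that last question could be operationalized, purely as an illustration: tabulate, across several interpretations, how often each type of ASL CD expression is rendered as CD versus non-CD, and test the association. The sketch below uses Python with SciPy and entirely invented counts; the row categories (single-turn vs. multi-turn CD) are assumptions for the example, not findings from this study.

```python
from scipy.stats import chi2_contingency

# Invented counts purely for illustration. Rows: hypothetical types of
# ASL CD expression; columns: how the interpretation rendered them.
#               CD   non-CD
observed = [
    [12, 3],   # single-turn depicted dialogue
    [5, 10],   # multi-turn depicted dialogue
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.3f}")
```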



References:

Croft, W., & Cruse, D. A. (2004). Cognitive Linguistics. Cambridge: Cambridge University Press.

DeMeo, M. (2012). Interpreting Constructed Action and Constructed Dialogue in American Sign Language into Spoken English (Unpublished master's thesis). Gallaudet University, Washington, DC.

Isham, W. (1986). The role of message analysis in interpretation. In Proceedings of the 9th National Convention of RID.

Langacker, R. (2008). Cognitive Grammar: A Basic Introduction. Oxford: Oxford University Press.

Metzger, M. (1999). Sign Language Interpreting: Deconstructing the Myth of Neutrality. Washington, DC: Gallaudet University Press.

Nilsson, A. (2010). Studies in Swedish Sign Language: Reference, real space blending, and interpretation (Doctoral dissertation). Stockholm University, Sweden.

Quinto-Pozos, D. (2007). Can constructed action be considered obligatory? Lingua, 117, 1285-1314.

Shaw, R. (1987). Determining register in sign-to-English interpreting. Sign Language Studies Monographs. Burtonsville, MD: Linstock Press.

Tannen, D. (1986). Introducing constructed dialogue in Greek and American conversational and literary narratives. In Direct and Indirect Speech (pp. 311-322). Berlin: Mouton.

Tannen, D. (1995). Talking from 9 to 5: Women and Men in the Workplace: Language, Sex and Power. New York, NY: Avon Books.

Tannen, D. (2010). Abduction and identity in family interaction: Ventriloquizing as indirectness. Journal of Pragmatics, 42, 307-316.

Taylor, J. (2002). Cognitive Grammar. Oxford: Oxford University Press.

Tuesday, January 10, 2012

A Proposed Examination of Co-Speech Gesture in American Sign Language to English Interpreting

Introduction


Ten years ago I went on a trip to the promotion and production studio of a major television network.  At the time I was considering putting together an audition reel for commercial voiceover work.  While at the studio I had the opportunity to watch several voiceover actors recording promos for upcoming programming.  What struck me about their performances was their conspicuous use of co-speech gesture (CSG), presumably to help guide their inflection and intonation.  If an actor wanted to hold inflection steady during a line he would move his hand along a plane extending from just below eye level.  If the line called for him to inflect down and drive a sentence he would thrust his hand down while making a fist.  If the script called for an upward inflection another actor might send her hand upward like a Roman or Bellamy salute.
A brief review of training literature for voice actors turned up only one published source (Alburger, 2011) that explicitly suggests using gesture to enhance performance while working; however, interviews I conducted with voice actors confirmed that co-speech gesture in voice acting is an important aspect of the performance that is both intentional and largely spontaneous.  Alburger, as well as the actors interviewed, notes that for naturalistic copy, for example scripts that portray normal conversational speech, CSG usually mimics normal conversational CSG.  The kind of CSG I witnessed at the television studio is used for "sports promos and hard hitting 'news-y' stuff" (Rodd, personal communication).  "It [CSG] does seem to help push the copy along and honor the punctuation" (Rodd, personal communication, 2011).  Another actor related a comment that also touches on similarities between voice actors and interpreters: "Sometimes when I really get into a read, especially a longer narrative read, my hands almost work like a conductor leading an orchestra. They help me keep the timing, and when I need to go big, my movements are bigger. When I need to go small my movements are smaller" (Hutchinson, personal communication, 2011).

These observations were consigned to memory until recently when, on an interpreting assignment, a consumer's comment made me conscious of my own use of co-speech gesture while interpreting.  The consumer noted that I gestured more when interpreting a particularly dense concept from American Sign Language (ASL) into English.  This comment caused me to recall the voiceover actors I had seen in the past and to wonder whether interpreters and actors were using similar, largely unconscious, techniques to aid their inflection and intonation.  The link between gesture, cohesion, prosody, and emotive production is discussed by Eidsvik (2006) in examining mirror neurons and their link to language production.  Eidsvik notes that many cognitive language processes, including gesture, are knitted together in the compact part of the brain known as Broca's area, indicating that the link between language production and gesture is inherent.  Eidsvik goes on to say that mirror neurons respond to viewing a gesture by triggering an empathic response that mimics that of the person making the gesture.  It then stands to reason that when trying to convey the affect found in a script, one would feel compelled to use gestures that are, in memory, associated with that message.  Like voiceover work, interpreting involves a kind of performance.  In both cases the person speaking is conveying someone else's words and is tasked with doing so while conveying tone and inflection consistent with the original intent.  I wondered whether interpreters' use of CSG served a cohesive function similar to the one it appeared to serve for actors.  Like voice actors, interpreters are not trained to use particular co-speech gestures to elicit particular results.  Instead they are left to do what comes naturally.  Based on subsequent observation of interpreters at work, I believe that co-speech gesture may be consciously produced and serve three functions: 1) to help with "punctuation" and timing in the target language output, 2) to facilitate the retrieval of lexical items, and 3) to elicit feedback from communication participants.
This research proposal aims to examine the production of co-speech gesture in ASL-to-English interpreting and its function in the rendering of a target message.  In this study, I will follow work done by Casey and Emmorey (2009) in examining whether the co-speech gestures of interpreters working from American Sign Language to English pattern like the gestures that emerged in the spontaneous English language production of bimodal bilinguals (people who are fluent in both a manual and a spoken language) conversing in a non-interpreting context. If the CSG patterns found in the English target language production of bimodal bilingual interpreters differ from those found by Casey and Emmorey, it is possible that there is an effect of fluency, or of comfort level, on the inhibition of gesture. There may also be evidence that the highly demanding task of interpretation results in different patterns of co-speech gesture than what emerges during spontaneous discourse.

Literature Review

            To date no research has been published regarding the use of co-speech gesture during interpretation; however, there is ample research on the function of co-speech gesture in cognition (Cassell, 1998; Wesp et al, 2001; Feyereisen, 2006; Casey and Emmorey, 2009).  Cassell (1998) lays a foundation for the study of CSG, saying,

A growing body of evidence shows that people unwittingly produce gestures along with speech in many different communicative situations.  These gestures have been shown to elaborate upon and enhance the content of accompanying speech… Gestures have also been shown to identify underlying reasoning processes that the speaker did not or could not articulate (Cassell, 1998: 191).

It is this identification of unarticulated cognitive processes that is of interest here; specifically, I would like to determine whether interpreters' use of CSG is quantitatively different from that of general language users in terms of either amount or function.  Since interpreting is a cognitively different task than spontaneous language production (Christoffels & de Groot, 2005; Grosjean, 2011), we may expect to see differences in interpreters' use of CSG.  Of further interest is whether the amount of CSG produced during interpretation is correlated with the cohesiveness of the target language output. That CSG could impact quality, or give insight into the cognitive act of interpreting, is supported by Casey et al (forthcoming), who note that "gesture creation…affects both language production and comprehension" (3). If the use of CSG impacts speech production and comprehension, it is worthy of study for what it may reveal about non-linguistic aspects of successful interpretation.  An understanding of the role of gesture during interpretation could inform both the practice and training of interpreters.  Conversely, inhibition of CSG may lead to disfluencies and/or interfere with lexical search (Wesp et al, 2001).  If this is true, research on CSG use by interpreters may indicate that student and novice interpreters could benefit from information on the role of CSG while interpreting.
            Co-speech gesture has also been shown to occur during specific cognitive tasks, including lexical search, recall of concepts, supporting the rhythm of language production, engagement, and, as mentioned above, comprehension and production of language (Wesp et al, 2001).  All of these individual cognitive tasks are involved in the larger task of interpretation.  Further, Nagpal et al (2011) note that research shows CSG is primarily used to help speakers access language to aid production, and that gestures are produced even when there is no audience able to see them.  They further posit that people will gesture more when producing their L2, since it is harder to produce the language in which the speaker is less proficient.  This concept of gesture as related to effort of formulation and production also implies that interpreters' use of CSG could relate to the richness of their interpretations.
Wesp et al (2001) note that "spatial imagery serves a short term memory function during lexical search…gesture keeps the pre-lexical concept in memory while lexical search is happening" (p. 591).  Streeck (2009) describes how CSG is used to engage interlocutors via two phases of lexical search gesture: one in which the searching party discourages participation by other parties, often accompanied by a shift in eye gaze, and a second in which "collaboration is sought" (Streeck, 2009: 108).  Interpreters rarely shift their gaze from the signer during ASL-English interpreting, so one might predict that the two phases described by Streeck would involve smaller and then larger gestures, with the seeking of collaboration directed at either the signer or the team interpreter.  Lexical search behavior may also take the form of deictics, which can reference the immediate physical space, an unseen real space, or a conceptual space (Streeck, 2009).  Feyereisen (2006) describes two types of co-speech gestures.  The first type, representational or iconic gestures, represent "visual or dynamic features of the referent" and evoke mental activation of images, both visual and motor; representational gestures depict an image when produced. The second type, nonrepresentational gestures, do not depict a particular referent; rather, they are produced with a single non-representational form regardless of the content of the message. Non-representational gestures are sometimes called beats because they are tied to the rhythm and stress that occur during speech production.  These nonrepresentational beat gestures are represented, in an extreme form, by the actions of the voice actors discussed earlier in this paper.  These are also the gestures I predict will serve a cohesive function for interpreters.
Casey & Emmorey (2009) hypothesize that representational gesture use by balanced bimodal bilinguals is influenced by activation of the right parietal cortex, which may be involved in processing spatial information.  The authors support this by citing research showing that in bimodal bilinguals this area of the brain is activated when producing spatial prepositions in English, whereas monolingual English speakers did not activate the right parietal cortex when using the same prepositions.  The authors also hypothesize that bimodal bilinguals may produce more deictic gestures than monolinguals when discussing route, mapping, or other spatial information.
Casey et al (forthcoming) further note that use of CSG helps with recall and spatial cognition.  The authors cite research finding that adults' use of CSG while describing events helps with their recall of those events in both the short and long term.  The authors also posit that the gesture rates associated with learning a manual language may improve cognitive abilities by adding a manual component to the encoding of events in memory. These hypotheses suggest that interpreters who are receiving manual language input and producing spoken language output may also use representational and deictic gestures as cohesive aides when discussing spatial information.  If this is the case, then research on co-speech gesture as it relates to working memory during simultaneous interpretation could be another viable research topic that builds on the research proposed here.
Studies describe a difference between representational and nonrepresentational gestures in terms of the concepts these gestures help language users produce.  Representational gestures are more often used to recall and describe spatial concepts, as in Wesp et al's (2001) examination of participants describing a painting to another person.  In that study the authors suggest that CSG increases when the speaker is attempting to describe something visual but that "…in search for synonyms or lexical search for nonspatial concepts, the need for gesturing is reduced" (Wesp et al, 2001: 593). Feyereisen (2006) reinforces this concept, saying that, since they lack content, nonrepresentational gestures likely aid memory by emphasizing the sentence they co-occur with, adding a visual and motor component to the spoken language.  Feyereisen's (2006) study examined how a speaker's use of CSG impacted sentence recall by an observer. Feyereisen found that sentence recall was enhanced most, compared to recall without gesture, when sentences were presented with representational gestures, but recall was also improved when they were presented with nonrepresentational gestures. Feyereisen highlights an especially germane concept related to ASL-English interpreting:
 It is now well established that sentences that refer to actions like shaking a bottle or stirring a cup of coffee  are recalled and recognised in higher proportion if the subjects perform the action during verbal presentation (subject-performed tasks or SPTs), by comparison with merely reading or listening to the sentences (so-called verbal tasks or VTs). Sentences are also better recalled if subjects only see the experimenter performing the action (experimenter-performed tasks or EPTs). (198)

This relates to cross modality interpreting in that the way ASL presents many of the types of concepts Feyereisen describes is through the use of iconic signs, which may be considered representational gestures or “classifier constructions” (Casey and Emmorey, 2009).  It is possible that comprehension of an ASL source message is easier due to the representational gesture and the semantic/lexical information coexisting as one item.
Two of the studies discussed above, Casey and Emmorey (2009) and Casey et al (forthcoming), comprise the primary foundation of the study proposed below.  Casey and Emmorey (2009) found that the co-speech gesture rates among native bimodal bilinguals, people who grew up with both a manual language (ASL in this case) and a spoken language (English), were statistically similar to the CSG rates of English monolinguals.  They found that the bimodal bilinguals used more iconic gestures, more character viewpoint gestures, and a greater variety of handshapes.  The two groups were equal in their use of deictics and of two-handed gestures in which each hand represented a different entity (i.e. two people juxtaposed in space). The monolingual control group used more beat gestures.
            Casey and Emmorey (2009) posit that the equal gesture rates suggest that ASL signs occur in the place of co-speech gestures, rather than in addition to them.  Further, they suggest that the use of ASL signs when speaking to a monolingual non-user of ASL is not part of the normal CSG system but rather an inability to suppress ASL, the participant’s L1, while speaking English.  The authors note,

…our findings are consistent with Emmorey et al.'s hypothesis that the locus of lexical selection for all bilinguals is relatively late in language production. If the architecture of the bilingual language production system required that a single lexical representation be selected at the preverbal message level or at the lemma level, we would expect no ASL signs to be produced when bilinguals talk with non-signing English speakers (Casey & Emmorey, 2009).

If this is true for all bilinguals, second language learners of ASL could exhibit behaviors similar to those of the experimental group analyzed by Casey and Emmorey.
In the discussion of their findings Casey and Emmorey (2009) suggest that "late acquisition of a second language may affect co-speech gesture in ways that differ from simultaneous acquisition of two native languages" (304). Casey et al (forthcoming) also report that the co-speech gesture rate increases for second language learners of ASL after one year of ASL instruction.  Along with this increase in overall use of CSG, the forthcoming paper specifies that these new ASL users increased their use of representational gestures and used at least one ASL sign during their language production.  The authors suggest that the reasons for these findings involve cognitive processes associated with learning a manual L2 and an inability to suppress that L2.  Though they do not mention the possibility that these findings are due simply to exuberance exhibited by L2 ASL users after one year of instruction, they do acknowledge the following:
Another possibility is that sign production while speaking does not reflect a failure to suppress ASL, but rather an increase in the repertoire of conventional gestures. Under this hypothesis, ASL learners have not begun to acquire a new lexicon (as have the Romance language learners), but instead have learned new emblematic gestures (akin to “quiet” (finger to lips), “stop” (palm outstretched), “good luck” (fingers crossed), or “thumbs up”) (Casey et al, forthcoming: 19).

Research comparing early L2 ASL learners to monolingual English-speaking controls is in its early stages. I believe that comparing all of the groups discussed so far, native bimodal bilinguals, early L2 learners of ASL, monolingual English speakers, as well as bimodal bilinguals who acquired ASL as an L2 in adulthood, would be descriptive of CSG use by bimodal bilinguals.  Examining the CSG rates and types used by the bilingual adult L2 acquisition group would make a good lead-in to the study I am proposing and would fill the gap between the groups discussed when Casey et al say, "It appears that both life-long and short-term exposure to ASL increases the use of meaningful, representational gestures when speaking" (16).
Finally, Casey et al (forthcoming) close by noting that learning a manual language may stimulate a stronger link between language and gesture.  This, along with the research on the functions and cognitive effects of CSG on recall, cohesion, and affect (tone and inflection), suggests that a study of CSG use by interpreters could provide insight into how the cognitive process of interpreting manifests in CSG.

Methodology
Participants
Data from samples of two groups (students and experienced interpreters) will be examined under two conditions.  The first participant group is interpreting students, specifically senior undergraduates and second-year master's students in interpreting programs. Interpreting students are unique in the field in terms of their ability to film themselves during live interpreting scenarios.  Interpreting programs offer students opportunities for interpreting practice with live participants, which lends a measure of ecological validity to the data; Metzger (1999) suggests that mock interpreting scenarios with live participants provide viable data for the study of interpreters.  Another advantage of using students lies in their experience with being recorded.  Students in many interpreting programs regularly record themselves, which may help overcome issues arising from the observer's paradox.  Finally, students present a sample for two possible comparisons.  The first is a comparison of students' use of CSG with that of experienced interpreters.  The second, if interpreters' CSG rates compare favorably with the rates found in spontaneous language production, is a comparison of advanced (near-graduation) interpreting students' use of CSG with Casey et al's (forthcoming) finding that one year of ASL instruction increased students' rate of CSG.
The second group of participants will consist of experienced interpreters.  All participants in this sample would be nationally certified and have at least five years of professional experience.  These characteristics are consistent with the generally accepted notion of "experienced" in the North American ASL-English interpreting community.  This sample can present some challenges in terms of data collection.  For one, the confidential nature of interpreting often precludes recording of actual job situations.  It is possible that permission to record interpreters at some public events may be obtained.  Additionally, there may be reluctance on the part of the interpreters to be publicly videotaped.
Both the student and experienced groups will be comprised of second language learners of ASL.  The reason for selecting this group is to remove the possible confounding factor of L1 suppression.  Casey and Emmorey (2009), looking at the use of CSG by bimodal bilinguals for whom ASL is the first language, posit that ASL signs produced by bimodal bilinguals during spoken language production are separate from the cognitive process that produces co-speech gesture and are instead intrusions caused by a failure to suppress the speaker's manual language.  Using only interpreters for whom ASL is their L2 would avoid this L1 intrusion factor, though it is possible that, with less experience suppressing the manual modality, L2 ASL users may show more lexical items in their CSG than L1 ASL users do during spontaneous language production.

Conditions
            In the proposed study, I will examine interpreters' use of gestures under two types of interpreting conditions: authentic interpreting situations and interpretations produced from a recorded source text in a lab.  While looking at various types of texts, with interpreters working both in teams and alone, would be interesting and provide the most realistic data, for this study I propose looking at interpreters working in teams of two, interpreting monologues by deaf ASL users.  This condition best replicates interpreters' normal working conditions.  As such it would be the easiest condition for which to obtain real-world recorded data, and it could also be replicated under lab conditions.  Recordings of interpreters working with live source texts would lend ecological validity to the data as well as an opportunity to examine interpreters' use of CSG in authentic settings.  Use of a recorded source text would allow for control of the source message and a reduction in variables as a means of standardizing test conditions in a way that is more scientifically valid.  Also, use of a recorded source text in lab conditions may allow for easier data collection, given the difficulties of scheduling and obtaining the consent of multiple real-world parties when recording real-world data.

Coding
            Gesture coding will be based on the coding scheme devised by Casey and Emmorey (2009), who coded for "…ASL signs versus non-sign gestures; iconic, deictic, and beat types; character and observer viewpoints; and handshape form" (2009: 296).  Casey and Emmorey (2009) define ASL signs as "identifiable lexical signs (e.g., CAT or BIRD) or classifier constructions that a non-signer would be unlikely to produce. For example, a bent V handshape is used in ASL as a classifier for animals" (297).  Iconic gestures are any gestures that look like what they represent, i.e. mimicking driving a car or tracing the outline of an object.  Deictic gestures point to a referent that is either present in the physical space where the language production is taking place or is a conceptual referent, for example pointing down when referring to China.  Character/observer viewpoint gestures are "produced from the perspective of a character, i.e., produced as if the gesturer were the character. For example, moving two fists outward to describe Sylvester swinging on a rope" (Casey and Emmorey, 2009: 297).  Handshape form examines whether the gesture is made with a handshape typically associated with ASL (i.e. the "ILY" handshape) or with a gesture common among North American English speakers (i.e. the "rock on" gesture).  As Casey and Emmorey (2009) note, these categories are not mutually exclusive, and one gesture may fall into more than one category; a coding record therefore needs to allow multiple tags per gesture, as sketched below.
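
Because the categories overlap, a coded gesture is most naturally represented as a set of tags rather than a single label. The Python sketch below is one way such a record could be structured; the field names are my own invention, not part of Casey and Emmorey's instrument.

```python
from dataclasses import dataclass, field

# Category labels adapted from Casey and Emmorey's (2009) scheme.
GESTURE_TYPES = {"ASL_sign", "iconic", "deictic", "beat"}
VIEWPOINTS = {"character", "observer", "none"}

@dataclass
class CodedGesture:
    """One gesture token; `types` is a set because categories overlap."""
    time_sec: float                          # position in the recording
    types: set = field(default_factory=set)  # subset of GESTURE_TYPES
    viewpoint: str = "none"                  # one of VIEWPOINTS
    asl_handshape: bool = False              # ASL-typical handshape, e.g. "ILY"?

# A single gesture can carry more than one tag, e.g. an iconic gesture
# that also beats out the rhythm of the accompanying speech:
example = CodedGesture(time_sec=73.2,
                       types={"iconic", "beat"},
                       viewpoint="character")
```
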
            Beat gestures are commonly found in spoken language at points of emphasis, to provide or illustrate some rhythmic feature of the discourse, and in word search behaviors (Casey and Emmorey, 2009; Wesp et al, 2001; Feyereisen, 2006; Streeck, 2009).  A preliminary analysis I conducted on one interpreting sample showed the use of deictic gestures, character viewpoint gestures, ASL signs and fingerspelling, and beat gestures.  The most common were beat gestures and ASL signs and fingerspelling.  The ASL signs and fingerspelling seemed to be used exclusively between the members of the interpreting team as a means of getting clarification or confirmation of the signed source message.  I believe the beat gestures are a candidate for further analysis.  The interpreting sample I examined suggests four subtypes: simple beat gestures, emphatic beat gestures, lexical search gestures, and cohesive beat gestures.  Simple beat gestures are the kind of simple rhythm gestures associated with everyday spoken language production.  Emphatic beat gestures seem to serve a function similar to the gestures used by voiceover actors.  Lexical search gestures accompany parts of the interpretation where the interpreter is clearly searching for a word or phrase to match a SL concept that they comprehend but are struggling to reformulate.  Cohesive beat gestures appear to help guide the interpreter through transitions and relationships between propositions; an example would be an interpreter flipping their palm orientation while moving their hand from one side of neutral space to the other while interpreting a SL message about contrasting concepts. Casey and Emmorey (2009) and Casey et al (forthcoming) decline to speculate as to the impetus for the CSG found in the bimodal bilinguals they studied, even when their data might suggest that a beat gesture is associated with something like a lexical search behavior, as in this example of an iconic-beat gesture: "one participant held out his hand imitating Sylvester holding out a tin cup and bounced his hand with the accompanying speech 'and takes his um tin cup'" (Casey & Emmorey, 2009: 297).  Such an analysis may prove to be outside the scope of the proposed project as well.  However, I believe it presents fertile ground for research and the next step in this branch of study.
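
If the four beat subtypes proposed above were added to the coding scheme, they could be represented as an extra tag on beat-coded gestures. A hypothetical sketch, continuing the Python representation above (the subtype names are mine, not established coding categories):

```python
from enum import Enum

class BeatSubtype(Enum):
    """Proposed subtypes of beat gesture from the preliminary sample."""
    SIMPLE = "simple"                  # everyday rhythm-keeping beats
    EMPHATIC = "emphatic"              # voice-actor-style emphasis beats
    LEXICAL_SEARCH = "lexical_search"  # accompany word/phrase retrieval
    COHESIVE = "cohesive"              # mark transitions between propositions

# Example: the palm-flip across neutral space described above would be
# coded as a beat gesture carrying the COHESIVE subtype.
palm_flip_subtype = BeatSubtype.COHESIVE
```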



Sources:

Casey, S., & Emmorey, K. (2009). Co-speech gesture in bimodal bilinguals. Language and Cognitive Processes, 24(2), 290-312.

Casey, S., Emmorey, K., & Larrabee, H. (forthcoming). The effects of learning American Sign Language on co-speech gesture.

Cassell, J. (1998). A framework for gesture generation and interpretation. In Computer Vision for Human-Machine Interaction. Cambridge: Cambridge University Press.

Eidsvik, C. (2006). Voice and gesture within the context of mirror neuron research. The Journal of Moving Image Studies, 5.

Feyereisen, P. (2006). Further investigation on the mnemonic effect of gestures: Their meaning matters. European Journal of Cognitive Psychology, 18(2), 185-205.

Grosjean, F. (2011). Those incredible interpreters. Psychology Today, September 14, 2011.

Nagpal, J., Nicoladis, E., & Marentette, P. (2011). Predicting individual differences in L2 speakers' gestures [Electronic version]. International Journal of Bilingualism.

Streeck, J. (2009). Gesturecraft: The Manu-facture of Meaning. John Benjamins Publishing Company.

Wesp, R., Hesse, J., Keutmann, D., & Wheaton, K. (2001). Gestures maintain spatial imagery. The American Journal of Psychology, 114(4), 591.

Saturday, April 23, 2011

Interpreting Acronyms

The following is another presentation video I wanted to post somewhere prettier than the dotSUB website.  I'm using it for a teaching project for my pedagogy class.  I'm hoping that sometime this summer I can post a write-up to go with it, maybe something that can be used as a short training exercise or skill builder.

The video is a corporate advertisement for an HSPD-12 compliant CAC reader for networked peripherals by a company called Cryptek.  They made the video; I'm using it because it's a nice short piece showing how acronyms are used in government discourse.

As always, thanks for reading and please check back for more articles and skill development tools.

Friday, March 18, 2011

Kinda Fun

I'm developing a training program for a class. The program is about making trainings and training materials accessible. One of the inspirations for developing this training is my experience having to interpret non-captioned videos during training classes. The fact is that interpreting a video is not equal access. To demonstrate to hearing people the effect of lag time when interpreting a video, I edited the audio on this training video I found on YouTube.


Saturday, March 5, 2011

Interpreters as Possible Agents of Standardization of ASL



I feel like I promised more than just YouTube videos, so here's something more academic. Like most of what I had posted on the old site, this was originally written as a class assignment. The assignment was to come up with a research proposal. In this case I think the lit review is the most interesting part of the project. Hopefully it interests you as well.

I recently found out I'm not alone in my interest in this project.  As it turns out, a group from Gallaudet is doing actual true-biz research on this very topic.  They will be presenting their paper as part of a panel at the Georgetown University Round Table on Languages and Linguistics on Friday, March 11, 2011.  In fact they are on the same panel I'm on, so I was pretty surprised to see their topic.  I'm excited to see what they came up with.


Cheers,

-Roberto





Intro

One truism that the field of linguistics has given the world is that languages change. Any introductory linguistics class, or at least any introductory linguistics class I have ever come across, includes the idea that languages are not static; they are ever evolving as the people and cultures they inhabit grow and evolve. Languages used by diverse speakers over varied geographical space develop dialects: variations, particular to a group of speakers, that stand apart from but are generally mutually intelligible with the language of the greater body of speakers. For example, many speakers of English in North America can tell you that people from the southern United States speak differently than people in the Ozark mountains. Still, people from these two regions are likely to be able to converse with each other without tremendous difficulty. While a language may have many dialects spoken by various groups of speakers, it often also has a standard variety. This standard is the variety usually taught in schools and to people learning the language. Often this standard exists more as an idea than as a reality, as speakers may use non-standard forms more often than standard forms in their day-to-day communication. One example of this, which I will discuss in more detail later, can be found in the work of Lucas et al (2001), who found that non-standard forms of some ASL signs appeared more often than standard forms in a corpus of recorded ASL conversations.

As time goes on, dialects of a language may move closer to, or farther from, the standard for a variety of reasons. Among these are degrees of isolation or contact. Chambers (2009) quotes Weinreich in explaining the effects of isolation and contact this way:

"Contact breeds imitation and imitation breeds linguistic convergence. Linguistic divergence results from secession, estrangement, loosening of contact" (Weinreich [1953] 1963: viii, quoted in Chambers, 2009: 73).


Fromkin and Rodman (1998) provide an example of this, noting that English speakers on the island of Ocracoke are exhibiting a move towards Standard American English as natives leave the island and retirees from the mainland move into the community. Economic mobility also plays a part in dialect leveling. Chambers (2009) summarizes several studies showing that as people move upwards in socio-economic status they tend to hyper-correct in an attempt to sound like the class they are joining. This often results in upwardly mobile speakers using more standard forms than the members of the class they are joining, for whom class is static. Economic mobility becomes entwined with geographic mobility: when people have more money they have more opportunity to travel, greater access to educational opportunities, and greater freedom to live and work in different areas.

Another factor in language change is the addition, subtraction, and change in meaning of lexical items. As new concepts or objects are introduced, new words are added, or the meanings of existing words are changed to describe these new ideas. The latter process is known as broadening (Fromkin and Rodman, 1998). How quickly new terms are adopted differs at different levels. For example, language change may be rapid for an individual who adopts new terms or a new way of speaking as a result of contact or mobility, but slow for a group, where acceptance and agreement may need to be negotiated.

Anecdotal Assertions

As a PhD student in the Department of Interpretation at Gallaudet University I encountered a common sentiment from both professors and students that American Sign Language (ASL) is becoming standardized due to the influence of readily available video technology. On the surface this seems like a reasonable assumption. The decade between the turn of the century and today has seen an explosion of technology and formats for video communication. As videophones and web-cams become more advanced and less expensive, and as bandwidth becomes cheaper, Deaf people have greater access to remote communication than ever before. The FaceTime feature on Apple's iPhone 4 brings the promise of mobile video communication in the near future, and with it the full participation of Deaf people in the cell phone culture that has spread throughout the United States. Further, websites like YouTube, Facebook, and free web-hosting sites have allowed ASL users to post video-logs (vlogs) in what is, for many of them, their native language. This phenomenon should be given a full measure of significance, as it allows Deaf people to break free of English, which for many of them is a second language and thus an imperfect medium for self-expression. Vlogs allow Deaf people to both express and receive content in their native language on a footing roughly equal to what native English users have had for the last decade. It is not unreasonable to look at this cultural and communicative technological renaissance and think that there must be a standardizing effect on ASL. After all, we can now see people from any part of the country using ASL and incorporate their variants into our own language use.

Despite these facts, I have not found any empirical evidence that this type of standardization is taking place. Beyond that, I have not found much evidence to support the idea that this kind of standardization is likely to take place outside of the creation and standardization of new or novel lexical items. By this I mean agreement within the Deaf community on lexical items that parallel the word-creation patterns seen in English, primarily lexical items that refer to new phenomena arising directly from new technology, for example words like "blog" or the lexicalization of the acronym LOL (Laughing Out Loud). In the 2005 PBS documentary "Do You Speak American?" linguist William Labov indicates that the Northern Cities Vowel Shift is moving the dialect of the Great Lakes region farther from Standard American English despite the fact that as a nation most of us have access to the same forms of media. The implication is that even though we share a knowledge of and exposure to Broadcast English or Standard American English through television, film, and radio, we have not standardized our pronunciation or vocabulary in our everyday speech. So why would we think ASL would standardize as a result of increased access to video communication?

What is “Non-Standard” ASL?

Lucas et al (2001) provide a thorough rundown of variation in ASL. The authors note that there is a concept of Standard ASL: the citation-form signs found in ASL textbooks. The authors go on to provide myriad examples of how ASL users produce language that differs from this standard. For example, in their discussion of the "location variable" the authors note that signs produced at or on the forehead in their citation forms are often produced lower on the head, on the face, or in space. Another anecdotal report that seems to have become commonly accepted among people who work in fields related to deafness is the notion that Deaf children rarely receive the kind of instruction in ASL that their hearing counterparts receive in regard to English. That is, while hearing children grow up learning the conventions of standard English in structured classroom activities, Deaf children, and even Deaf adults, do not receive similar instruction about standard ASL.

Videophones

As I mentioned above, one of the roots of standardization often cited in the halls of Gallaudet University is the increased access to videophones. This increased availability allows Deaf people to communicate with other ASL users across North America, and the logic is that this exposure leads to standardization. However, hearing people have had the telephone for roughly 100 years, and the impact of the device on the standardization of spoken language has been minimal. I suggest that this is because we do not generally call random people from around the country. Most of our calls are to people we already know, or to people in our localities. One exception is calls to customer service centers that use call centers around the country. If a customer calls a service center in a different location, it is possible that they will encounter a representative who speaks a different dialect. How much impact this interaction may have, even over a lifetime of such interactions, is unknown but can be presumed to be minimal. Added to this is the fact that customer service telephone representatives are encouraged to speak as close to the standard dialect as possible, minimizing the possible influence of their dialect. There is a corollary to this in the Deaf community in the form of Video Relay Interpreters (VI). Much like a hearing person calling a customer service call center, a Deaf person accessing an interpreter through the Video Relay Service (VRS) can get an interpreter from any part of the country. I will discuss this more in depth below.

Interpreters

Before discussing contact between Deaf people and Video Interpreters, I would like to discuss the general demographics of interpreters past and present. One of the early demographic surveys of interpreters was done by Dennis Cokely (Cokely, 1981). The data Cokely reports indicate that 65% of the respondents did not have Deaf parents and did not grow up using ASL as their first language. Later studies have put the number closer to 50%, but indicated that the proportion of non-CODA interpreters was expected to rise as rapid professionalization of the field led to a proliferation of interpreter training programs (Alcorn & Humphries, 1995). Since more interpreters are joining the field as non-native signers, they have to learn ASL somewhere else. Historically, non-native signers learned ASL primarily through exposure to and socialization with the Deaf community (Alcorn & Humphries, 1995; Brunson, ????; Cokely, 2005). With more interpreters coming up through ITPs, many if not most of them are learning the language primarily in the classroom and refining their language use through limited and often structured contact with the Deaf community. Cokely (2007) notes that a great majority of interpreters he surveyed reported spending less than 10% of their free time socializing with Deaf people. With this in mind, it stands to reason that one of two things may be evident in the ASL production of these interpreters: they will produce either primarily citation forms (the form of lexical items as presented in textbooks) or forms particular to the dialect of their local Deaf community. The most likely situation is a combination of citation forms and local dialect. In addition to finding that most interpreters are second language learners, the demographic studies noted above also state that a majority of interpreters are women. This is relevant to the current topic in that research (Mulrooney, 2000; Trudgill, 1972; Cheshire, 1978) shows that women are more likely to use overt "prestige forms," which often closely parallel citation-form production of language. Essentially, women are more likely to produce language that is acknowledged by the greater society as "correct" or "proper."

Other Factors in Standardization

Before examining the possibility that interpreters are agents of standardization in ASL, I would like to note some of the other factors involved in standardization, if indeed ASL is standardizing. Cokely (2005) notes that Deaf people are now upwardly mobile in a way that they were not in the past. The passage of the Americans with Disabilities Act, along with previous legislation, has opened up more educational and employment opportunities for Deaf people over the last twenty years. The result is advancing social, economic, and geographic mobility for Deaf people. This mobility brings Deaf people into contact with other Deaf people from other regions and backgrounds more often than in the past. Examples of this mobility can be seen in the attendance of Deaf people at national conventions like the World Deaf Expo, which drew over 20,000 visitors.

Another standardizing factor is the existence of a prestige dialect of ASL. In yet another example of anecdotal “common wisdom,” it has long been held that Gallaudet University has had a standardizing effect on the language. The commonly accepted notion is that, as the educational and cultural Mecca of the Deaf World, Gallaudet lends the signs used there higher esteem, giving them a greater chance of becoming widely used than signs viewed as regional variants from other places. So, if people at Gallaudet start to use a sign in a certain way or adopt a certain variant, that variant will be carried across the country and, by dint of being a “Gallaudet sign,” will be accepted and adopted by signers in other regions. Whether this actually happens is unresolved from a research standpoint, but it is held as folk wisdom in the Deaf community.

Though the Gallaudet effect has not been fully researched, a recent study of regional variation in the ASL used in Vermont has turned up another factor in standardization. Palmer and Morris (2010) found that in the three age groups they studied, knowledge of Vermont variants showed a negative correlation with age. The authors offered a couple of theories as to why younger signers knew fewer of the Vermont variants. One theory was that since the local Deaf school had only recently (in the 1970s) switched to a philosophy that embraced ASL, the teachers there placed an emphasis on “real ASL” rather than on the regional variants. The second theory, which relates to the first, involves a story related by a Deaf ASL teacher in Vermont. According to this teacher, the instructor of her ASL-teacher training emphasized teaching ASL as presented in a specific curriculum rather than teaching the local dialect (Palmer & Morris, 2010).

Another possible factor in the reported standardizing of ASL is the increased mainstreaming of Deaf children in public schools (Cokely, 2005). As Deaf residential school enrollment falls, more Deaf children are being placed in classes with hearing peers and hearing teachers, accessing the environment through an interpreter. Depending on the circumstances of the Deaf child, the interpreter may be their primary language model; that is, the child may be learning ASL from the interpreter. If the interpreter is a graduate of an ITP and learned ASL through one of the popular curriculums, it is likely that the interpreter is presenting the child with standard forms, which the child will then incorporate into their own language production.

Factors Working Against Standardization

Despite this greater mobility, Cokely’s keynote address at the 2010 PCRID conference noted that Deaf people are still unemployed and underemployed at a greater rate than other Americans. In the same address he stated that English literacy for Deaf people still lags far behind that of their hearing peers. So, while Deaf people are experiencing greater mobility, they remain behind their hearing counterparts.

Another barrier to standardization is the concept of covert prestige. Chambers (2009) cites several studies indicating that some non-standard dialects persist because they carry covert prestige, which shows the speaker’s connection to a particular group. Preston (2002) discusses prestige vs. solidarity forms, noting that women often prefer the former and men the latter.

Linguistic Theoretical Basis for Interpreters as Possible Agents of Standardization

Milroy (2002) summarizes and explains the utility of the “social network” concept in sociolinguistics. She notes several characteristics of networks and how they constrain language use and provide opportunities for language innovation and change. She describes how dense networks provide both support and pressure, creating an environment resistant to language change; if network ties weaken, a situation more open to language change is created. Variation research is primarily interested in first order network ties (people who associate closely). Milroy notes three kinds of ties: “exchange” ties (strong), “interactive” ties (non-supportive acquaintances), and “passive” ties (distant but influential, e.g., extended family). Localized speech styles are constructed through close knit networks: communities where members live, work, and socialize largely with the same people. She also notes that network analysis is useful where speakers cannot be separated by traditional social categories like class, race, or education, and that it can help account for differences between individual speakers rather than “classes.” Finally, immigrant communities represent good places for network studies of bilinguals, in that such studies can examine not only specific language shifts but also code switching patterns.

Rickford (1985) looked at two speakers from the same community, one White and one Black. He references and ultimately supports Labov’s work in Philadelphia, which found that White and Black non-standard dialects do not converge toward each other, except insofar as both move towards the national standard. Also, Black speakers tend to converge with White norms when they have high contact with White speakers, but not vice versa. Ultimately, anatomy, geography, and socioeconomic status are not factors in Black/White speech differences, which are attributable in part to social contact and mobility. This applies to the Deaf community in that it is possible that when Deaf people from disparate backgrounds come into contact, both will move their language production towards the standard.

Meyerhoff (2002) examines Eckert’s study of Detroit high school students, which showed that some of the most socially outcast members of a community used that very freedom from social pressure to change their language use in ways that ended up driving linguistic change in their community.

Chambers (2003) provides a summary of the use of network studies in sociolinguistics. Network studies allow researchers to study variation with finer granularity than they can by applying general sociolinguistic categories like class, age, and gender. While these “macroscopic” (Chambers, 2003: 75) classifications do apply in network studies, examining and defining networks allows researchers to parse out why “some social groups are not class-differentiated and nevertheless show linguistic differentiation” (Chambers, 2003: 74). Also, networks, and thus network studies, are largely defined by the people being studied. As Chambers points out, while individuals cannot change where they were born, their gender, or their parents’ economic status, they can to some extent choose where and with whom they spend their time. Two of the measures used in network studies are “network density” (Chambers, 2003: 79) and network integration (Chambers, 2003: 83). Density describes how many people in the network are known by any individual; in a dense network, many of the individuals will know each other. Network integration measures the depth of the connections between members by developing and applying a set of integration criteria. Chambers (2003) provides a few examples and highlights work done by Milroy (1980). In her study, Milroy classified subjects according to five criteria, including ties of kinship, workplace, gender, and neighborhood participation (including what kind of participation and with whom). Subjects who scored high on the integration scale were shown to also use the most non-standard or local vernacular forms.
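The density measure has a simple arithmetic core: density is the number of acquaintance ties actually observed divided by the number of ties possible among the members. Below is a minimal sketch in Python; the four-person network is hypothetical and not drawn from any study cited here.

from itertools import combinations

def network_density(members, ties):
    """Return observed ties divided by possible ties in an undirected network."""
    n = len(members)
    possible = n * (n - 1) / 2  # every unordered pair of members
    return len(ties) / possible if possible else 0.0

# Hypothetical four-person network: everyone knows everyone except C and D.
people = ["A", "B", "C", "D"]
links = {frozenset(pair) for pair in combinations(people, 2)} - {frozenset(("C", "D"))}
print(network_density(people, links))  # 5 of 6 possible ties -> 0.833...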

Network studies are often used to reveal why members of a geographically defined and in some ways homogenous (i.e., all members are from the same social class) community show evidence of linguistic variation. As mentioned above, this is usually a micro-level analysis that works best when the subjects are “localized and close knit” (Chambers, 2003: 76). At first glance it seems that a network study approach would not meet with much success when applied to a population like the American Deaf community, which is geographically dispersed and varied in terms of social class. However, as Croneberg (1965) shows, the American Deaf community is a linguistic and cultural entity that exists both within and separate from mainstream American society. In this way its linguistic and cultural “isolation” might be compared to the alpine villagers or inner-city gang members found in other network studies. The use of American Sign Language and degree of separation from mainstream American media could stand in for geographic proximity and homogenous social class when justifying the use of a network study approach with the American Deaf population. Indeed, Croneberg’s (1965) description of the Deaf community would likely sound familiar to researchers like Labov and Lippi-Green as described in Chambers (2003).

If one were to use a network approach to examine variation in the American Deaf community, it would be necessary to develop a network integration scale. The scale I present below draws on Milroy’s (1980) scale, accounting for factors of kinship, occupational environment, and participation in activities and associations. I also rely on Croneberg’s (1965) description of the Deaf community in terms of their general upbringing, associations with peer groups, and values. Each of the seven proposed network categories includes brief points on why the measure is significant.

Proposed Network
(1) Deaf parents (kinship)
a. Early acquisition of ASL and Deaf culture
b. Family reinforcement of Deaf identity

(2) Attended Deaf Residential School
a. Socialization and acquisition of ASL and Deaf culture at the next possible intervention point
b. 2 points for resident, 1 point for day student

(3) Attended Gallaudet University
a. Acculturation at third possible intervention point
b. Early adult opportunity to choose Deaf identity over attempting to enter the mainstream

(4) Member of deaf social/school organization (participation)
a. E.g. fraternity/sorority, Deaf student association, Deaf club, NAD, Deaf sports team, or other organized activity

(5) Work in “high Deaf” employment (occupation)
a. Job site has higher concentration of Deaf employees than found in general population
b. Job is in a deafness centered field

(6) Socialize primarily with Deaf friends (association/participation)
a. Ties to other Deaf people

(7) Uses ASL as primary means of communication
a. Important to distinguish deaf members of the Deaf community from others who may also be considered part of the community (e.g., CODAs) and who may score high on some parts of the scale yet may not be core members of the community.

I considered including an eighth measure, having a Deaf spouse or partner, but decided to exclude it on the grounds that the choice of a partner may not be a social statement, despite anecdotal evidence from my own experience. I cannot claim with any certainty whether choosing an in-group partner reflects a value-based choice (I want someone like me) or is a product of environment (I am most often around people like me). That is, is the pairing of core members based on a conscious choice, or the product of mostly associating with other core members? I have no suggestion on this point.

In order to provide a preliminary test of my proposed measures I applied the scale to members of a Deaf family.

      1   2   3   4   5   6   7
D     +   +   +   +   +   +   +
R     +   +   +   +   +   +   +
L     -   +   +   +   -   +   +
T     +   -   +   -   +   -   -
B     -   -   +   +   +   -   -

In this small sample population R and D are siblings, L is R’s spouse, T is R and L’s child, and B is T’s spouse. R worked in the printing department at the Washington Post, a well-known employer of Deaf people until the department closed with the advent of electronic publishing. D was an administrator at a school for the Deaf. L held various jobs but was primarily a stay-at-home parent. T and B are both hearing and work as ASL-English interpreters. All members of the population are graduates of Gallaudet University.

According to the scale, and adopting Milroy’s (1980) terms, R and D would be considered core members of the Deaf community, L would be secondary, and T and B would be periphery. I believe this paints an accurate picture of the roles of these people in the Deaf community. D and R have been core members of the Deaf community since birth and have never strayed. Both married Deaf partners, and D has Deaf children and grandchildren. They both have few or no hearing friends, and the hearing friends they do have are former work associates who know ASL. While L did attend a residential school, it was an oral program, and she strayed from the Deaf community as a young adult, associating mostly with hearing people, working as the only deaf person in her workplace, and pairing up with a hearing partner. Her subsequent marriage to R was a driving force in her re-assimilation into the Deaf community. T and B represent common periphery members of the community. Though both are hearing, they are fluent in ASL, spend all of their occupational time and some of their leisure time with Deaf people, have Deaf family members and, in this case, have educational ties to the Deaf community through their affiliation with Gallaudet University. T and B will never be “language leaders” (Chambers, 2003: 112) but may still be linguistically important. Due to their frequent contact with Deaf people around the country by way of their work as interpreters, it is possible that they function as linguistic innovators, at least as far as introducing variables across traditional sociolinguistic borders.
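To make the proposed scale concrete, here is a minimal sketch in Python of how the scoring and classification might be operationalized. The core/secondary/periphery thresholds are my own illustrative choices fitted to this sample, not values taken from Milroy (1980) or Chambers (2003).

# Score each member on the seven proposed measures. Measure 2 is worth
# up to 2 points (2 for residential student, 1 for day student); the
# rest are worth 1 point each, for a maximum of 8.
def integration_score(points):
    """points: dict mapping measure number (1-7) to points earned."""
    return sum(points.values())

def classify(score):
    """Illustrative thresholds only, chosen to fit this sample."""
    if score >= 7:
        return "core"
    if score >= 5:
        return "secondary"
    return "periphery"

# The family from the table above; L's residential school attendance
# (an oral program) is scored under measure 2 as a residential student.
family = {
    "D": {1: 1, 2: 2, 3: 1, 4: 1, 5: 1, 6: 1, 7: 1},
    "R": {1: 1, 2: 2, 3: 1, 4: 1, 5: 1, 6: 1, 7: 1},
    "L": {1: 0, 2: 2, 3: 1, 4: 1, 5: 0, 6: 1, 7: 1},
    "T": {1: 1, 2: 0, 3: 1, 4: 0, 5: 1, 6: 0, 7: 0},
    "B": {1: 0, 2: 0, 3: 1, 4: 1, 5: 1, 6: 0, 7: 0},
}
for name, points in family.items():
    score = integration_score(points)
    print(name, score, classify(score))
# D 8 core, R 8 core, L 6 secondary, T 3 periphery, B 3 periphery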

Interactions in VRS

With all of the above information in mind, I call into question the idea that video technology, whether in the form of videophones or vlogs, is truly behind any perceived standardization of American Sign Language. At the very least I would like to propose that if there is a sort of technological leveling going on, it is not because Deaf people are in greater contact with each other through video. After all, just because you can call anyone anywhere, will you? I contend that, as Labov indicates in the quote above, languages evolve away from the standard in spite of technology. Hearing people have had telephones, television, radio, and film for 100 years or more, and yet we do not attribute dialect leveling in North American English to these technologies. I propose that Deaf people, like hearing people, primarily use their videophones to call their friends and family, people who likely share the same linguistic variants as the person calling them. However, there is one video interaction that may have a leveling effect.

In her 1997 article, “Who Speaks for the Deaf Community? Not Who You Would Think!” Elizabeth Broecker provides a generally scathing critique of the role interpreters play in the Deaf community. Broecker notes that interpreters have become the face of the Deaf community in America because Deaf people passively allow these interpreters to represent them. She suggests that hearing people in Deaf Studies programs are often taught more about Deaf history, culture, the ADA, and the linguistics of ASL than Deaf people are taught in the course of their own schooling, thus making hearing people the authoritative voice of Deaf culture. Broecker’s assertions provide the first motivation for an exploration of interpreters as possible agents of dialect leveling in ASL. If Cokely and Broecker are correct, and interpreters by and large are second language learners who have an understanding of standard ASL and linguistics, then it is reasonable to assume that interpreters produce more standard forms than people who learn ASL organically.

Interpreters are also now nearly unavoidable in the daily lives of Deaf Americans. Broecker presents four criteria that someone purporting to represent the Deaf community must meet in order to make an impact on hearing society: high visibility, high credibility, occasion to interact with people in all aspects of American life, and the ability to influence widespread public opinion again and again (1997, 7). Though Broecker was talking about interpreters representing Deaf people to the hearing world, the same factors apply to interpreters’ interactions with Deaf people. Video relay interpreters are highly visible in that Deaf people as a population make several thousand VRS calls each day. They have high credibility in that VRS interpreters are supposed to be the best interpreters available. They have occasion to interact with Deaf people from all over the country in a variety of communicative contexts. They also have the ability to influence opinion again and again because they serve dozens of customers over the course of a day. Thus, interpreters may be exposing Deaf people to more standard forms than the Deaf callers see outside of interpreted interaction.

In their examination of the effect of video on ASL and interpreting, Weisenberg and Garcia (2007) suggest that “the technology has allowed for an efficiency and speed of communication that is so important to deaf callers that they are willing to drastically change their language…” (Weisenberg and Garcia, 2007: 32). Weisenberg and Garcia were building on the work of Keating and Mirus (2003), who performed an early study of how communicating through video has influenced Deaf people’s signing. They note that the participants in their study changed their signing in order to make sure they were understood: “Signers show multiple ways to adjust their sign production in order to maximize the communicative potential of the computer-mediated signing space” (Keating and Mirus, 2003: 704). Keating and Mirus cite disruptions to video picture quality, reduced signing space, and a lack of shared referential space as constraints that cause signers to adjust their language production in the hopes of greater clarity. One of the clarifying strategies they note is an increased use of citation form signs.

One objection to the idea of interpreters as standard bearers (pun intended) is that second language learners do not grasp the language fully enough to influence natural users. For example, Alley (forthcoming) examines Deaf callers’ propensity to shift to contact sign in VRS, and indeed both Keating and Mirus (2003) and Weisenberg and Garcia (2007) found that Deaf people often switch to contact sign when conversing over video. However, I suggest that the switch to contact sign is primarily a syntactic phenomenon, whereas the kind of dialect leveling referred to here is primarily lexical and phonological, and so may not be affected by that syntactic shift. Therefore, if there is a move towards citation form or standard signing over video, and interpreters are prone to using standard forms, it is possible that any technological leveling is due to Deaf people’s interaction with interpreters. Because a VRS call can originate anywhere in the country and the Deaf caller can connect to an interpreter anywhere in the country, there is a much greater chance that the interpreter and the Deaf caller will encounter a person who uses a dialect different from their own. It is also possible that, as periphery members of the network, interpreters are more open to adopting new variants than core members of the Deaf community. In keeping with Fromkin and Rodman (1998), interpreters may quickly put these variants to use and then, in keeping with Eckert (2001), pass them on to other segments of the Deaf community, who may in turn adopt the variant in question. The final section of this paper will address a proposed pilot study to examine this idea.

Pilot Study Idea

In order to study how much influence interpreters may have over standardization, we have to know how much contact video interpreters have with Deaf consumers. We have to know whom Deaf people call, and for how long, in order to gauge how much of their time is spent in contact with signers who use dialects other than their own. We also need to know who the interpreters are and where, both geographically and institutionally, they learned ASL. I propose a survey of both interpreters and Deaf people asking the following.

-Online survey of interpreters. IRB approval and RID assistance with recruitment.
-Age (0-17 survey terminates, thank you for your time), 18-29, 30-39, 40-49, 50+
-Years interpreting
-Certifications
-Year of first certification
-Age of start of ASL acquisition
-CODA Y/N
-ITP Grad Y/N
-Community College Grad Y/N
-Major
-University Grad Y/N
-Major
-Graduate School Y/N
-Major
-Do you work in VRS Y/N
-What % of your work is in VRS each month?
-Where (city/state) did most of your ASL acquisition take place?
-(Answer mapped to Lucas et al. dialect map/RID regions)
-Did your ASL acquisition primarily use any of the following curriculums?
-Signing Naturally (Smith, Lentz, Mikos)
-American Sign Language aka The Green Books (Cokely, Baker-Shenk)
-The Joy of Signing (Riekehof)
-Another ASL curriculum
-Primarily learned ASL from Deaf people

-Survey of Deaf people. IRB approval and NAD assistance (possible to administer in ASL online?)

-Age (0-17 survey terminates, thank you for your time), 18-29, 30-39, 40-49, 50+
-Do you own a videophone?
-Do you use any other video chat or video calling device?
-Do you use VRS?
-How many calls do you make through VRS each day?
-How many calls do you make through VRS each week?
-What types of calls do you make most often?
-How long in minutes do you think your average call through VRS lasts?
-Do you video chat with other Deaf people?
-How many calls do you make to other Deaf people each day?
-How many calls do you make to other Deaf people each week?
-Are these calls usually:
-To family and friends?
-For work or business?
-How long in minutes do you think your average call to another Deaf person lasts?
-How did you learn ASL?
-From my parents
-At a school for the Deaf
-From Deaf people later in life
-Other (Please specify)

-Where did you attend K-12 school?
-State
-Deaf school or hearing school?
-Did you attend college?
-Were there other Deaf students at your college or university?
-Were there Deaf professors?
-Did you attend Gallaudet University?

Additionally, I would like to recruit Deaf informants to keep a “call journal” near their videophone (VP) and log their calls for one month. If the results show that Deaf people spend most of their videophone time on interpreted calls and very little time talking to other Deaf people from different areas or backgrounds, it would suggest that the perceived technological leveling could be due to contact with interpreters who employ the standard forms of ASL.
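Summarizing the call journals would be straightforward. Below is a minimal sketch in Python; the field names and entries are hypothetical, not real data or a committed coding scheme.

# One month of hypothetical call-journal entries. Each entry records
# whether the call was interpreted (VRS) or direct to another Deaf
# person, plus its length in minutes.
calls = [
    {"kind": "vrs", "minutes": 12},
    {"kind": "deaf_direct", "minutes": 45},
    {"kind": "vrs", "minutes": 8},
]

def minutes_by_kind(entries):
    """Total minutes per call type."""
    totals = {}
    for call in entries:
        totals[call["kind"]] = totals.get(call["kind"], 0) + call["minutes"]
    return totals

totals = minutes_by_kind(calls)
grand_total = sum(totals.values())
for kind, minutes in totals.items():
    print(f"{kind}: {minutes} min ({minutes / grand_total:.0%} of VP time)")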

Bibliography

-Alcorn, B. and Humphries, J., 1995, So You Want to be an Interpreter, H & H Publishing Company, 2nd edition
-Alley, E., (forthcoming), Deaf Perspective on the Use of American Sign Language or Contact Sign When Using Video Relay Services, Gallaudet University, Washington, D.C.
-Broecker, E. L., 1997, “Who Speaks for the Deaf Community? Not Who You Would Think!” in Who Speaks for the Deaf Community? A Deaf American Monograph, Farb, Anita B., ed., National Association of the Deaf, Silver Spring, MD
-Brunson, J., (????), Sign Language Interpreting: Moving Towards Professionalization, Gallaudet University.
-Chambers, J.K., 2003/2009, Sociolinguistic Theory, Blackwell Publishers, Malden, MA
-Cheshire, J., 1978, Present Tense Verbs in Reading English, in Sociolinguistic Patterns in British English, Trudgill, P., ed., Edward Arnold, London
-Cokely, D., 1981, “Sign Language Interpreters: A Demographic Study,” in Sign Language Studies, Stokoe, William C., ed., Linstock Press Inc, Silver Spring, MD
-Cokely, D., 2005, (The book you lent me the other day. I can’t find it online.)
-Cokely, D., 2010, What’s in Our Backpack?, Keynote address at 2010 PCRID Conference.
-Croneberg, C., 1965, Appendix C and Appendix D in A Dictionary of American Sign Language, Stokoe, W., Casterline, D., Croneberg, C., Linstock Press.
-Eckert, P., 2001, Style and Social Meaning, in Style and Sociolinguistic Variation, Eckert, P. and Rickford, J., eds., Cambridge University Press
-Fromkin, V. and Rodman, R., 1998, An Introduction to Language, Harcourt Brace College Publishers, Orlando, FL
-Keating, E. and Mirus, G., 2003, “American Sign Language in virtual space: Interactions between deaf users of computer-mediated video communication and the impact of technology on language practices,” Language in Society 32, 693-714.
-Lucas, C. et al., 2001, Sociolinguistics in Deaf Communities, Gallaudet University Press, Washington, D.C.
-MacNeil, R., 2005, “Do You Speak American?” PBS documentary
-Meyerhoff, M., 2002, Communities of Practice, in The Handbook of Language Variation and Change, Chambers, J.K., ed., Blackwell Publishers
-Milroy, L., 2002, Social Networks, in The Handbook of Language Variation and Change, Chambers, J.K., ed., Blackwell Publishers
-Mulrooney, K., 2000, Variation in Fingerspelling in American Sign Language, Gallaudet University
-Preston, D., 2002, Language with an Attitude, in The Handbook of Language Variation and Change, Chambers, J.K., ed., Blackwell Publishers
-Rickford, J., 1985, Ethnicity as a Sociolinguistic Boundary, in American Speech, Vol. 60, No. 2, 99-125.
-Trudgill, P., 1972, Sex, Covert Prestige and Linguistic Change in the Urban British English of Norwich, in Language in Society, 1: 179-195
-Weisenberg, J. and Garcia, E., 2007, From Telephone to Dial Tone: A Look at Video Interpreting, RID Views, June 2007, 10 & 32.