
robertogreco : signlanguage   10

Ask Dr. Time: Orality and Literacy from Homer to Twitter
"So, as to the original question: are Twitter and texting new forms of orality? I have a simple answer and a complex one, but they’re both really the same.

The first answer is so lucid and common-sense, you can hardly believe that it’s coming from Dr. Time: if it’s written, it ain’t oral. Orality requires speech, or song, or sound. Writing is visual. If it’s visual and only visual, it’s not oral.

The only form of genuine speech that is visual rather than auditory is sign language. And sign language is speech-like in pretty much every way imaginable: it’s ephemeral, it’s interactive, there’s no record, the signs are fluid. But even most sign language is at least in part chirographic, i.e., dependent on writing and written symbols. At least, that’s true of the sign languages we use today; then again, our spoken/vocal languages are pretty chirographic too.

Writing, especially writing in a hyperliterate society, involves a transformation of the sensorium that privileges vision at the expense of hearing, and privileges reading (especially alphabetic reading) over other forms of visual interpretation and experience. It makes it possible to take in huge troves of information in a limited amount of time. We can read teleprompters and ticker-tape, street signs and medicine bottles, tweets and texts. We can read things without even being aware we’re reading them. We read language on the move all day long: social media is not all that different.

Now, for a more complicated explanation of that same idea, we go back to Father Ong himself. For Ong, there’s a primary orality and a secondary orality. The primary orality, we’ve covered; secondary orality is a little more complicated. It’s not just the oral culture of people who’ve got lots of experience with writing, but of people who’ve developed technologies that allow them to create new forms of oral communication that are enabled by writing.

The great media forms of secondary orality are the movies, television, radio, and the telephone. All of these are oral, but they’re also modern media, which means each medium reshapes orality in its own image: it squeezes your toothpaste through its tube. But they’re also transformative forms of media in a world that’s dominated by writing and print, because they make it possible to get information in new ways, according to new conventions, and along different sensory channels.

Walter Ong died in 2003, so he never got to see social media in full flower, but he definitely was able to see where electronic communication was headed. Even in the 1990s, people were beginning to wonder whether interactive chats on computers fell under Ong’s heading of “secondary orality.” He gave an interview where he tried to explain how he saw things — as far as I know, relatively few people have paid attention to it (and the original online source has sadly linkrotted away):
“When I first used the term ‘secondary orality,’ I was thinking of the kind of orality you get on radio and television, where oral performance produces effects somewhat like those of ‘primary orality,’ the orality using the unprocessed human voice, particularly in addressing groups, but where the creation of orality is of a new sort. Orality here is produced by technology. Radio and television are ‘secondary’ in the sense that they are technologically powered, demanding the use of writing and other technologies in designing and manufacturing the machines which reproduce voice. They are thus unlike primary orality, which uses no tools or technology at all. Radio and television provide technologized orality. This is what I originally referred to by the term ‘secondary orality.’

I have also heard the term ‘secondary orality’ lately applied by some to other sorts of electronic verbalization which are really not oral at all—to the Internet and similar computerized creations for text. There is a reason for this usage of the term. In nontechnologized oral interchange, as we have noted earlier, there is no perceptible interval between the utterance of the speaker and the hearer’s reception of what is uttered. Oral communication is all immediate, in the present. Writing, chirographic or typed, on the other hand, comes out of the past. Even if you write a memo to yourself, when you refer to it, it’s a memo which you wrote a few minutes ago, or maybe two weeks ago. But on a computer network, the recipient can receive what is communicated with no such interval. Although it is not exactly the same as oral communication, the network message from one person to another or others is very rapid and can in effect be in the present. Computerized communication can thus suggest the immediate experience of direct sound. I believe that is why computerized verbalization has been assimilated to secondary ‘orality,’ even when it comes not in oral-aural format but through the eye, and thus is not directly oral at all. Here textualized verbal exchange registers psychologically as having the temporal immediacy of oral exchange. To handle [page break] such technologizing of the textualized word, I have tried occasionally to introduce the term ‘secondary literacy.’ We are not considering here the production of sounded words on the computer, which of course are even more readily assimilated to ‘secondary orality’” (80-81).

So tweets and text messages aren’t oral. They’re secondarily literate. Wait, that sounds horrible! How’s this: they’re artifacts and examples of secondary literacy. They’re what literacy looks like after television, the telephone, and the application of computing technologies to those communication forms. Just as orality isn’t the same after you’ve introduced writing, and manuscript isn’t the same after you’ve produced print, literacy isn’t the same once you have networked orality. In this sense, Twitter is the necessary byproduct of television.

Now, where this gets really complicated is with stuff like Siri and Alexa, and other AI-driven, natural-language computing interfaces. This is almost a tertiary orality, voice after texting, and certainly voice after interactive search. I’d be inclined to lump it in with secondary orality in that broader sense of technologically mediated orality. But it really does depend how transformative you think client- and cloud-side computing, up to and including AI, really are. I’m inclined to say that they are, and that Alexa is doing something pretty different from what the radio did in the 1920s and 30s.

But we have to remember that we’re always much more able to make fine distinctions about technology deployed in our own lifetimes than about what develops over epochs of human culture. Compared to that collision of oral and literate cultures in the Eastern Mediterranean that gave us poetry, philosophy, drama, and rhetoric in the classical period, or the nexus of troubadours, scholastics, printers, scientific meddlers and explorers that gave us the Renaissance, our own collision of multiple media cultures is probably quite small.

But it is genuinely transformative, and it is ours. And some days it’s as charming to think about all the ways in which our heirs will find us completely unintelligible as it is to imagine the complex legacy we’re bequeathing them."
2018  timcarmody  classics  homer  literature  poetry  literacy  orality  odyssey  walterong  secondaryorality  writing  texting  sms  twitter  socialmedia  technology  language  communication  culture  oraltradition  media  film  speech  signlanguage  asl  tv  television  radio  telephones  phones 
january 2018 by robertogreco
Chirologia, or The Natural Language of the Hand (1644) | The Public Domain Review
"Is gesture a universal language? When lost for words, we point, wave, motion and otherwise use our hands to attempt to indicate meaning. However, much of this form of communication is intuitive and is not generally seen to be, by itself, an effective substitute for speech.

John Bulwer (1606 – 1656), an English doctor and philosopher, attempted to record the vocabulary contained in hand gestures and bodily motions and, in 1644, published Chirologia, or the Naturall Language of the Hand alongside a companion text Chironomia, or the Art of Manual Rhetoric, an illustrated collection of hand and finger gestures that were intended for an orator to memorise and perform whilst speaking.

For Bulwer, gesture was the only form of speech that was inherently natural to mankind, and he saw it as a language with expressions as definable as written words. He describes some recognisable hand gestures, such as stretching out hands as an expression of entreaty or wringing them to convey grief, alongside more unusual movements, including pretending to wash your hands as a way to protest innocence, and to clasp the right fist in the left palm as a way to insult your opponent during an argument. Although Bulwer’s theory has its roots in classical civilisation, from the works of Aristotle, he was inspired by hundreds of different works, including biblical verses, medical texts, histories, poems and orations, in order to demonstrate his conclusions.

The language of gesture proved a popular subject in the age of eloquence, and inspired many similar works. Bulwer’s work was primarily meant for the pulpit, but also had applications for the stage. Although we do not know if these hand gestures were ever used by public speakers as they were intended, there is some evidence of the book’s impact on popular culture. Laurence Sterne’s novel Tristram Shandy (completed in 1767) features characters who clasp their hands together in the heat of argument, one who dramatically holds his left index finger between his right thumb and forefinger to signal a dispute, and another who folds his hands as a gesture of idleness.

This was not the end for the Chirologia, however. Some years after publishing the book, Bulwer became one of the first people in England to propose educating deaf people. Although the link to deaf studies seems evident, the Chirologia makes only passing reference to deafness; nevertheless, it may have inspired Bulwer’s further research in the area, and into how fingerspelling and gesture can be used as a form of communication in themselves. The hand shapes described in the Chirologia are still used in British Sign Language today."

gestures  1644  books  hands  chirologia  communication  signlanguage  johnbulwer  universality  meaning  expression  speech 
november 2016 by robertogreco
The 'Not Face' Is Universally Understood - D-brief
"When your boss strolls up to your desk at 5 p.m. on a Friday and asks you to work on Saturday, your facial expression tells the whole story. And, according to a new study from researchers at Ohio State University, no matter if your boss comes from Nigeria, Nepal or Nebraska, the look on your face will still come across loud and universally clear.

How Many Ways to Say No?

The study, led by Aleix Martinez, a professor of electrical and computer engineering at OSU, looked at the facial expressions of 158 students with a range of native languages as they expressed “I don’t want to.”

Speakers of English, Spanish, Mandarin Chinese and American Sign Language (ASL) were filmed while reciting a sentence with a negative valence, or responding to a question that they were likely to disagree with. The researchers manually selected the telltale signs of what they called the “Not Face” — furrowed brows, raised chin and compressed lips — from the images and set a computer algorithm to work sorting out “Not Faces” from others. They published their results Monday in the journal Cognition.

The Universally Understood ‘Not Face’

They found that the “Not Faces” appeared with the same frequency as spoken syllables, indicating that it was a genuine mode of communication, as opposed to a random occurrence. What’s more, the expression translated almost perfectly across languages, implying that the genesis of this particular expression extends far back into the past. While our words may differentiate us, our expressions remain a global unifier.

Martinez has done research into facial expressions before. In a 2014 study, he categorized 21 unique emotions, including “happily disgusted,” and “sadly angry,” for use in cognitive analysis. The new research builds on his previous findings by definitively linking a facial expression to language. While most of us recognize nonverbal modifiers with ease, proving that one of these modifiers exists across cultures and languages will allow for more accurate facial recognition software, as well as insights into the beginnings of communication and language.

Words and sentences make up only a part of human communication — as anyone who has ever succeeded in obtaining directions in a foreign country by sole use of hand movements can attest. These arm-flailing conversations may look ridiculous, but they nevertheless succeed in getting the basic concept across. Even in normal conversation, our faces and bodies convey subtle shades of nuance that can add up to distinctly alter the meaning of a sentence.

Crucial for Sign Language

In certain languages, the unspoken cues hold much more significance. Sign language, for example, is based on hand and body movements, but also relies heavily on a diverse array of facial expressions. For proof, look no further than ASL translator Lydia Callis, who became an Internet sensation during Hurricane Sandy for her virtuosic use of facial expressions while signing about the impending storm.

In his study, Martinez found that ASL users also deploy the “Not Face,” but do so to even greater effect than verbal language users. While those speaking English, Spanish and Chinese used the expression to strengthen the stated emotion, ASL users would replace the sign for “not” entirely, using only the “Not Face” to convey the same statement.

Martinez says that this is the first documented instance of ASL speakers completely replacing a word with a facial expression. Such a discovery highlights the crucial role facial expressions play in fully communicating how we feel to others.

Martinez hopes to expand his library of faces by teaching computer algorithms to recognize different expressions without the need for manual selection. Once they have that ability, he plans to use thousands of hours of YouTube videos to train them and hopefully compile a database of human expressions.

Such a database of expressions might be of interest to robots like Sophia, whose accurate but still creepy impressions made headlines at this year’s SXSW."
asl  expression  communication  via:anne  2016  disagreement  aleixmartinez  spanish  español  mandarin  negativevalence  notface  translation  universality  signlanguage  signing 
march 2016 by robertogreco
Sign language that African Americans use is different from that of whites - The Washington Post
"Carolyn McCaskill remembers exactly when she discovered that she couldn’t understand white people. It was 1968, she was 15 years old, and she and nine other deaf black students had just enrolled in an integrated school for the deaf in Talladega, Ala.

When the teacher got up to address the class, McCaskill was lost.

“I was dumbfounded,” McCaskill recalls through an interpreter. “I was like, ‘What in the world is going on?’ ”

The teacher’s quicksilver hand movements looked little like the sign language McCaskill had grown up using at home with her two deaf siblings and had practiced at the Alabama School for the Negro Deaf and Blind, just a few miles away. It wasn’t a simple matter of people at the new school using unfamiliar vocabulary; they made hand movements for everyday words that looked foreign to McCaskill and her fellow black students.

So, McCaskill says, “I put my signs aside.” She learned entirely new signs for such common nouns as “shoe” and “school.” She began to communicate words such as “why” and “don’t know” with one hand instead of two as she and her black friends had always done. She copied the white students who lowered their hands to make the signs for “what for” and “know” closer to their chins than to their foreheads. And she imitated the way white students mouthed words at the same time as they made manual signs for them.

Whenever she went home, McCaskill carefully switched back to her old way of communicating.

What intrigues McCaskill and other experts in deaf culture today is the degree to which distinct signing systems — one for whites and another for blacks — evolved and continue to coexist, even at Gallaudet University, where black and white students study and socialize together and where McCaskill is now a professor of deaf studies."

"Another widely held but erroneous belief is that sign languages are direct visual translations of spoken languages, which would mean that American signers could communicate fairly freely with British or Australian ones but would have a hard time understanding an Argentinian or Armenian’s signs.

Neither is true, explains J. Archer Miller, a Baltimore-based lawyer who specializes in disability rights and has many deaf clients. There are numerous signing systems, and American Sign Language is based on the French system that Gallaudet and his teacher, Laurent Clerc, imported to America in the early 19th century.

“I find it easier to understand a French signer” than a British or Australian one, Miller says, “because of the shared history of the American and French systems.”

In fact, experts say, ASL is about 60 percent the same as French, and unintelligible to users of British sign language.

Within signing systems, just as within spoken languages, there are cultural and regional variants, and Miller explains that he can sometimes be stumped by a user’s idiosyncrasies. He remembers in Philadelphia coming across an unfamiliar sign for “hospital” (usually depicted by making a cross on the shoulder, but in this case with a sign in front of the signer’s forehead).

What’s more, Miller says, signing changes over time: The sign for “telephone,” for example, is commonly made by spreading your thumb and pinkie and holding them up to your ear and mouth. An older sign was to put one fist to your ear and the other in front of your mouth to look like an old-fashioned candlestick phone.

So it’s hardly surprising, Miller says, that Americans’ segregated pasts led to the development of different signing traditions — and that contemporary cultural differences continue to influence the signing that black and white Americans use.

Some differences result from a familiar history of privation in black education. Schools for black deaf children — the first of them opened some 50 years after the Hartford school was founded, and most resisted integration until well after the Brown v. Board of Education decision of 1954 — tended to have fewer resources. Students were encouraged to focus on vocational careers — repairing shoes or working in laundries — rather than pursuing academic subjects, Lucas says, and some teachers had poor signing skills.

But a late-19th-century development in the theory of how to teach deaf children led, ironically, to black students’ having a more consistent education in signing. The so-called oralism movement, based on the now controversial notion that spoken language is inherently superior to sign language, placed emphasis on teaching deaf children how to lip-read and speak.

Driven by the slogan “the gesture kills the word,” the oralism theory was put into practice in the United States predominantly in white schools. Black students, Lucas says, were left to manage with their purely manual form of communication.

Ultimately rejected by people who felt it prevented deaf people from developing their “natural,” manual language, oralism fell out of favor in the 1970s and ’80s, but white signers continued to mouth words. That was one of the key differences McCaskill noted when she joined the integrated Alabama School for the Deaf. And the distinction is still evident today, Lucas says, among older signers."

"There’s little evidence of Black ASL in the Gallaudet University classroom when McCaskill leads a diverse group of about 20 students in a discussion of “The Dynamics of Oppression,” a course that examines oppression across different cultures and explores parallels in the deaf community. In the classroom, just as in a professional setting, Lucas says, students and teachers generally employ a formal, academic norm, much as would be the case with spoken English.

But as students break into smaller discussion groups, their signing becomes more colloquial. They refer to regional differences in signing and occasionally stop to discuss a sign that is unfamiliar to one of them.

And when a smaller group of black students meets to describe and demonstrate the distinctive flavor of Black ASL, they refer emotionally to their attachment to their own brand of signing and how it reflects their identities as African American members of the deaf community.

“It shows our personality,” says Dominique Flagg, through an interpreter.

“Our signing is louder, more expressive,” explains Teraca Florence, a former president of the Black Deaf Student Union at the university, where 8 percent of the student body is African American. “It’s almost poetic.”

Proud as they are of its distinctive rhythm and style, Flagg and the other students say they worry about assumptions others make about their signing. “People sometimes think I am mad or have an attitude when I am just chatting with my friends, professors and other people,” Flagg says.

Others express concern that Black ASL is sometimes seen as less correct or even stereotyped as street language, echoing a sentiment expressed by some African American signers interviewed for the book who describe the ASL used by white people as “cleaner” and “superior.”

It’s a familiar feeling for McCaskill, who remembers how she had to learn to fit in with the white kids at her integrated school all those years ago.

“I would pick up their signs,” McCaskill says.

And when she went home, she remembers, “friends and family would say, ‘Wait a minute, you’re signing like the white students. You think you’re smart. You think you're better than us.’ ”"
asl  signlanguage  communication  deaf  deafculture  2016  gallaudetuniversity  carolynmccaskill 
january 2016 by robertogreco
Internet slang meets American Sign Language — Hopes&Fears — flow "Internet"
"How do you sign "new" words? The Deaf community works as a network, collectively brainstorming new sign language terms over the web, until dominant signs emerge."
language  signlanguage  signing  asl  2015  slang  words  deaf  mikesheffield  change 
february 2015 by robertogreco
Signed languages can do so many things spoken languages can’t | Sarah Klenbort | Comment is free |
"I also used to assume all deaf people would prefer to be hearing.

The deaf community is no utopia, but it does offer an alternative language, culture and social life to those who choose to be a part of it. In fact, signed languages can do many things spoken languages can’t. Indeed, here’s a list of ways in which visual languages are superior to the spoken word:

10. You can carry on a complex conversation in the loudest pub or club, while people all around you scream into each other’s ears, trying to convey something as simple as, I’m going to the toilet now.

9. Visual languages are more accessible, not only for people who are fully deaf, but (in theory) for the 1 in 6 Australians who have a hearing loss.

8. You can ask your partner to pick up the mail from the balcony when he’s standing in the parking lot, four floors down, without disturbing the neighbours.

7. You can talk underwater.

6. Storytelling is more engaging and detailed in visual languages. Because they are visual-spatial, signed languages are particularly adept at describing space and movement.

5. You can talk through car windows. It’s easy to give directions to a signing friend driving behind or in front.

4. Deaf people who sign have been proven to be more “multilingual”. In a fascinating study led by UK academic and researcher Sibaji Panda, it was found that if you put two deaf people in a room who have no shared language, it’s only a matter of hours before they find a way to communicate (imagine trying that with hearing people). Because signed languages have shorter histories, their grammars typically share certain features, which means that even if two deaf people have no common vocabulary, it takes only a short time before they can figure out a way to communicate.

3. You can critique a terrible lecture/performance/reading without anyone in the audience hearing you.

2. Unlike Esperanto, that failed international spoken language, International Sign has taken off since the advent of social media. Deaf people often learn and use IS when they travel overseas, skype, and/or present at international deaf conferences and events.

1. A signed language, often referred to as the “natural language of the deaf”, offers deaf people a sense of belonging and a positive identity.

I can’t speak for the deaf community – I’m not deaf – but I can share what I’ve learned from my daughter’s experience. She speaks clearly, but she doesn’t hear well. She loves Auslan and is proud of her deaf identity. What’s more of a loss for her than any hearing loss is the fact that she has so few peers to sign with; the majority of deaf children in Australia have no exposure to Auslan.

Auslan is not taught in government schools or early intervention programs.

Over 95% of deaf children are born to hearing parents, who are often told not to sign by medical professionals and speech therapists—they claim it will impede spoken language development, though studies show the opposite is true. And my six-year-old proved this in a speech competition last month. It feels appropriate to end with her words.

My daughter’s art teacher recently asked her to paint what she most loved about herself. “What’s that?” I asked.

“That I’m deaf!” she said as if I was stupid. “I painted myself signing.”"
signlanguage  via:ablerism  visuallanguages  language  languages  communication  deaf  2014 
october 2014 by robertogreco
ASL + Sign Language + Culture
"Free sign language, self-study resources and extracurricular materials for ASL students, instructors and teachers, homeschoolers, parents, sign language interpreters, and language enthusiasts who are interested in learning how to sign language online and/or beyond classes for practice or self-study."

asl  language  signlanguage  srg  references  video 
june 2013 by robertogreco
Sign Language Researchers Broaden Science Lexicon -
"Scientific terms like “organism” and “photosynthesis” have no widely accepted equivalent in sign language, so deaf students and professionals have unexpected hurdles when talking about science. Here, Lydia Callis, a professional sign language interpreter, translates a shortened version of an article by Douglas Quenqua, explaining how new signs are being developed that may enhance scientific learning and communication."

"Surprisingly, some deaf students say that relying on sign language gives them an advantage over hearing students. Because it is acted out, with everything from facial expressions to speed of motion available as tools to convey meaning, and because it is in many ways less codified than written language, sign language can illustrate difficult scientific principles better than traditional languages can."

lexicon  douglasquenqua  lydiacallas  learning  communication  2012  asl  signing  language  signlanguage 
december 2012 by robertogreco
