
robertogreco : turingtest   5

Facebook, communication, and personhood - Text Patterns - The New Atlantis
"William Davies tells us about Mark Zuckerberg's hope to create an “ultimate communication technology,” and explains how Zuckerberg's hopes arise from a deep dissatisfaction with and mistrust of the ways humans have always communicated with one another. Nick Carr follows up with a thoughtful supplement:
If language is bound up in living, if it is an expression of both sense and sensibility, then computers, being non-living, having no sensibility, will have a very difficult time mastering “natural-language processing” beyond a certain rudimentary level. The best solution, if you have a need to get computers to “understand” human communication, may be to avoid the problem altogether. Instead of figuring out how to get computers to understand natural language, you get people to speak artificial language, the language of computers. A good way to start is to encourage people to express themselves not through messy assemblages of fuzzily defined words but through neat, formal symbols — emoticons or emoji, for instance. When we speak with emoji, we’re speaking a language that machines can understand.

People like Mark Zuckerberg have always been uncomfortable with natural language. Now, they can do something about it.

I think we should be very concerned about this move by Facebook. In these contexts, I often think of a shrewd and troubling comment by Jaron Lanier: “The Turing test cuts both ways. You can't tell if a machine has gotten smarter or if you've just lowered your own standards of intelligence to such a degree that the machine seems smart. If you can have a conversation with a simulated person presented by an AI program, can you tell how far you've let your sense of personhood degrade in order to make the illusion work for you?” In this sense, the degradation of personhood is one of Facebook's explicit goals, and Facebook will increasingly require its users to cooperate in lowering their standards of intelligence and personhood."
williamdavies  markzuckerberg  communication  technology  2015  facebook  alanjacobs  jaronlanier  turingtest  ai  artificialintelligence  personhood  dehumanization  machines 
september 2015 by robertogreco
Teaching Machines and Turing Machines: The History of the Future of Labor and Learning
"In all things, all tasks, all jobs, women are expected to perform affective labor – caring, listening, smiling, reassuring, comforting, supporting. This work is not valued; often it is unpaid. But affective labor has become a core part of the teaching profession – even though it is, no doubt, “inefficient.” It is what we expect – stereotypically, perhaps – teachers to do. (We can debate, I think, if it’s what we reward professors for doing. We can interrogate too whether all students receive care and support; some get “no excuses,” depending on race and class.)

What happens to affective teaching labor when it runs up against robots, against automation? Even the tasks that education technology purports to now be able to automate – teaching, testing, grading – are shot through with emotion when done by humans, or at least when done by a person who’s supposed to have a caring, supportive relationship with their students. Grading essays isn’t necessarily burdensome because it’s menial, for example; grading essays is burdensome because it is affective labor; it is emotionally and intellectually exhausting.

This is part of our conundrum: teaching labor is affective not simply intellectual. Affective labor is not valued. Intellectual labor is valued in research. At both the K12 and college level, teaching of content is often seen as menial, routine, and as such replaceable by machine. Intelligent machines will soon handle the task of cultivating human intellect, or so we’re told.

Of course, we should ask what happens when we remove care from education – this is a question about labor and learning. What happens to thinking and writing when robots grade students’ essays, for example? What happens when testing is standardized, automated? What happens when the whole educational process is offloaded to the machines – to “intelligent tutoring systems,” “adaptive learning systems,” or whatever the latest description may be? What sorts of signals are we sending students?

And what sorts of signals are the machines gathering in turn? What are they learning to do?

Often, of course, we do not know the answer to those last two questions, as the code and the algorithms in education technologies (most technologies, truth be told) are hidden from us. We are becoming, as law professor Frank Pasquale argues, a “black box society.” And the irony is hardly lost on me that one of the promises of massive collection of student data under the guise of education technology and learning analytics is to crack open the “black box” of the human brain.

We still know so little about how the brain works, and yet, we’ve adopted a number of metaphors from our understanding of that organ to explain how computers operate: memory, language, intelligence. Of course, our notion of intelligence – its measurability – has its own history, one wrapped up in eugenics and, of course, testing (and teaching) machines. Machines now both frame and are framed by this question of intelligence, with little reflection on the intellectual and ideological baggage that we carry forward and hard-code into them."



"We’re told by some automation proponents that instead of a future of work, we will find ourselves with a future of leisure. Once the robots replace us, we will have immense personal freedom, so they say – the freedom to pursue “unproductive” tasks, the freedom to do nothing at all even, except, I imagine, to continue to buy things.

On one hand, that means that we must address questions of unemployment. What will we do without work? How will we make ends meet? How will this affect identity, intellectual development?

Yet despite predictions about the end of work, we are all working more. As games theorist Ian Bogost and others have observed, we seem to be in a period of hyper-employment, where we find ourselves not only working numerous jobs, but working all the time on and for technology platforms. There is no escaping email, no escaping social media. Professionally, personally – no matter what you say in your Twitter bio that your Tweets do not represent the opinions of your employer – we are always working. Computers and AI do not (yet) mark the end of work. Indeed, they may mark the opposite: we are overworked by and for machines (for, to be clear, their corporate owners).

Often, we volunteer to do this work. We are not paid for our status updates on Twitter. We are not compensated for our check-ins on Foursquare. We don’t get kickbacks for leaving a review on Yelp. We don’t get royalties from our photos on Flickr.

We ask our students to do this volunteer labor too. They are not compensated for the data and content that they generate that is used in turn to feed the algorithms that run TurnItIn, Blackboard, Knewton, Pearson, Google, and the like. Free labor fuels our technologies: Forum moderation on Reddit – done by volunteers. Translation of the courses on Coursera and of the videos on Khan Academy – done by volunteers. The content on pretty much every “Web 2.0” platform – done by volunteers.

We are working all the time; we are working for free.

It’s being framed, as of late, as the “gig economy,” the “freelance economy,” the “sharing economy” – but mostly it’s the service economy that now comes with an app and that’s creeping into our personal not just professional lives thanks to billions of dollars in venture capital. Work is still precarious. It is low-prestige. It remains unpaid or underpaid. It is short-term. It is feminized.

We all do affective labor now, cultivating and caring for our networks. We respond to the machines, the latest version of ELIZA, typing and chatting away hoping that someone or something responds, that someone or something cares. It’s a performance of care, disguising what is the extraction of our personal data."



"Personalization. Automation. Management. The algorithms will be crafted, based on our data, ostensibly to suit us individually, but more likely to suit power structures that are, in turn, increasingly opaque.

Programmatically, the world’s interfaces will be crafted for each of us, individually, alone. As such, I fear, we will lose our capacity to experience collectivity and resist together. I do not know what the future of unions looks like – pretty grim, I fear; but I do know that we must enhance collective action in order to resist a future of technological exploitation, dehumanization, and economic precarity. We must fight at the level of infrastructure – political infrastructure, social infrastructure, and, yes, technical infrastructure.

It isn’t simply that we need to resist “robots taking our jobs”; we need to challenge the ideologies, the systems that loathe collectivity, care, and creativity, and that champion some sort of Randian individual. And I think the three strands at this event – networks, identity, and praxis – can and should be leveraged to precisely those ends.

A future of teaching humans not teaching machines depends on how we respond, how we design a critical ethos for ed-tech, one that recognizes, for example, the very gendered questions at the heart of the Turing Machine’s imagined capabilities, a parlor game that tricks us into believing that machines can actually love, learn, or care."
2015  audreywatters  education  technology  academia  labor  work  emotionallabor  affect  edtech  history  highered  highereducation  teaching  schools  automation  bfskinner  behaviorism  sexism  howweteach  alanturing  turingtest  frankpasquale  eliza  ai  artificialintelligence  robots  sharingeconomy  power  control  economics  exploitation  edwardthorndike  thomasedison  bobdylan  socialmedia  ianbogost  unemployment  employment  freelancing  gigeconomy  serviceeconomy  caring  care  love  loving  learning  praxis  identity  networks  privacy  algorithms  freedom  danagoldstein  adjuncts  unions  herbertsimon  kevinkelly  arthurcclarke  sebastianthrun  ellenlagemann  sidneypressey  matthewyglesias  karelčapek  productivity  efficiency  bots  chatbots  sherryturkle 
august 2015 by robertogreco
Eyeo 2014 - Claire Evans on Vimeo
"Science Fiction & The Synthesized Sound – Turn on the radio in the year 3000, and what will you hear? When we make first contact with an alien race, will we—as in "Close Encounters of the Third Kind"—communicate through melody? If the future has a sound, what can it possibly be? If science fiction has so far failed to produce convincing future music, it won’t be for lack of trying. It’s just that the problem of future-proofing music is complex, likely impossible. The music of 1,000 years from now will not be composed by, or even for, human ears. It may be strident, seemingly random, mathematical; like the “Musica Universalis” of the ancients, it might not be audible at all. It might be the symphony of pure data. It used to take a needle, a laser, or a magnet to reproduce sound. Now all it takes is code. The age of posthuman art is near; music, like mathematics, may be a universal language—but if we’re too proud to learn its new dialects, we’ll find ourselves silent and friendless in a foreign future."
claireevans  sciencefiction  scifi  music  future  sound  audio  communication  aesthetics  robertscholes  williamgibson  code  composition  2014  johncage  film  history  ai  artificialintelligence  machines  universality  appreciation  language  turingtest 
february 2015 by robertogreco
Furbidden Knowledge - Radiolab
"In 1999, Freedom Baird was in grad school, and Furbies--those furry little robot toys that talk to you and tell you to play with them--were all the rage. So Freedom, who was thinking about becoming a mom someday, decided to get a little practice by adopting two gerbils and one Furby. And that led to a chance discovery...and an idea for an experiment that Freedom believed could serve as a kind of emotional Turing test, a way to ask whether machines are more alive than dolls.

In order to test Freedom's idea, we gathered up a Barbie, a hamster named Gerbie, and a Furby. Then, we invited five brave kids into the studio: Taro Higashi Zimmerman, Luisa Tripoli-Krasnow, Sadie Kathryn McGearey, Olivia Tate McGearey, Lila Cipolla, and Turin Cipolla.

We ran our results by Caleb Chung, the man who created Furby. And according to Caleb, the reason Furby gets under our skin is simple...but Jad and Robert aren't ready to buy his explanation. Sherry Turkle returns to help us think about what's going on."

[Complete show: http://www.radiolab.org/story/137407-talking-to-machines/ ]
furby  furbies  machines  behavior  interaction  2011  1999  freedombaird  toys  robots  turingtest  calebchung  sherryturkle  radiolab 
december 2013 by robertogreco
An Essay on the New Aesthetic | Beyond The Beyond | Wired.com
[New URL: http://www.wired.com/2012/04/an-essay-on-the-new-aesthetic/
See also: http://booktwo.org/notebook/sxaesthetic/
http://www.aaronland.info/weblog/2012/03/13/godhelpus/#sxaesthetic
http://www.joannemcneil.com/new-aesthetic-at-sxsw/
http://noisydecentgraphics.typepad.com/design/2012/03/sxsw-the-new-aesthetic-and-commercial-visual-culture.html
http://russelldavies.typepad.com/planning/2012/03/sxsw-the-new-aesthetic-and-writing.html ]

"The “New Aesthetic” is a native product of modern network culture. It’s from London, but it was born digital, on the Internet. The New Aesthetic is a “theory object” and a “shareable concept.”

The New Aesthetic is “collectively intelligent.” It’s diffuse, crowdsourcey, and made of many small pieces loosely joined. It is rhizomatic, as the people at Rhizome would likely tell you. It’s open-sourced, and triumph-of-amateurs. It’s like its logo, a bright cluster of balloons tied to some huge, dark and lethal weight.

There are some good aspects to this modern situation, and there are some not so good ones."

"That’s the big problem, as I see it: the New Aesthetic is trying to hack a modern aesthetic, instead of thinking hard enough and working hard enough to build one. That’s the case so far, anyhow. No reason that the New Aesthetic has to stop where it stands at this moment, after such a promising start. I rather imagine it’s bound to do otherwise. Somebody somewhere will, anyhow."
machinevision  glitches  digitalaccumulation  walterbenjamin  socialmedia  bots  uncannyvalley  surveillance  turingtest  renderghosts  imagerecognition  imagery  beauty  cern  postmodernity  hereandnow  temporality  pixels  culturalagnosticism  london  theory  networkculture  theoryobjects  smallpieceslooselyjoined  collectiveintelligence  digitalage  digital  modernism  aesthetics  vision  robots  cubism  impressionism  history  artmovements  machine-readableworld  russelldavies  benterrett  siliconrounsabout  art  marcelduchamp  joannemcneil  jamesbridle  sxsw  brucesterling  2012  newaesthetic  crowdsourcing  rhizome  aaronstraupcope  thenewaesthetic 
april 2012 by robertogreco