
robertogreco : alanturing   13

Doug Engelbart, transcontextualist | Gardner Writes
"I’ve been mulling over this next post for far too long, and the results will be brief and rushed (such bad food, and such small portions!). You have been warned.

The three strands, or claims I’m engaging with (EDIT: I’ve tried to make things clearer and more parallel in the list below):

1. The computer is “just a tool.” This part’s in partial response to the comments on my previous post. [http://www.gardnercampbell.net/blog1/?p=2158]

2. Doug Engelbart’s “Augmenting Human Intellect: A Conceptual Framework” [http://www.dougengelbart.org/pubs/augment-3906.html] is “difficult to understand” or “poorly written.” This one’s a perpetual reply. 🙂 It was most recently triggered by an especially perplexing Twitter exchange shared with me by Jon Becker.

3. Engelbart’s ideas regarding the augmentation of human intellect aim for an inhuman and inhumane parsing of thought and imagination, an “efficiency expert” reduction of the richness of human cognition. This one tries to think about some points raised in the VCU New Media Seminar this fall.

These are the strands. The weave will be loose. (Food, textiles, textures, text.)

1. There is no such thing as “just a tool.” McLuhan wisely notes that tools are not inert things to be used by human beings, but extensions of human capabilities that redefine both the tool and the user. A “tooler” results, or perhaps a “tuser” (pronounced “TOO-zer”). I believe those two words are neologisms but I’ll leave the googling as an exercise for the tuser. The way I used to explain this in my new media classes was to ask students to imagine a hammer lying on the ground and a person standing above the hammer. The person picks up the hammer. What results? The usual answers are something like “a person with a hammer in his or her hand.” I don’t hold much with the elicit-a-wrong-answer-then-spring-the-right-one-on-them school of “Socratic” instruction, but in this case it was irresistible and I tried to make a game of it so folks would feel excited, not tricked. “No!” I would cry. “The result is a HammerHand!” This answer was particularly easy to imagine inside Second Life, where metaphors become real within the irreality of a virtual landscape. In fact, I first came up with the game while leading a class in Second Life–but that’s for another time.

So no “just a tool,” since a HammerHand is something quite different from a hammer or a hand, or a hammer in a hand. It’s one of those small but powerful points that can make one see the designed built world, a world full of builders and designers (i.e., human beings), as something much less inert and “external” than it might otherwise appear. It can also make one feel slightly deranged, perhaps usefully so, when one proceeds through the quotidian details (so-called) of a life full of tasks and taskings.

To complicate matters further, the computer is an unusual tool, a meta-tool, a machine that simulates any other machine, a universal machine with properties unlike any other machine. Earlier in the seminar this semester a sentence popped out of my mouth as we talked about one of the essays–“As We May Think”? I can’t remember now: “This is your brain on brain.” What Papert and Turkle refer to as computers’ “holding power” is not just the addictive cat videos (not that there’s anything wrong with that, I imagine), but something weirdly mindlike and reflective about the computer-human symbiosis. One of my goals continues to be to raise that uncanny holding power into a fuller (and freer) (and more metaphorical) (and more practical in the sense of able-to-be-practiced) mode of awareness so that we can be more mindful of the environment’s potential for good and, yes, for ill. (Some days, it seems to me that the “for ill” part is almost as poorly understood as the “for good” part, pace Morozov.)

George Dyson writes, “The stored-program computer, as conceived by Alan Turing and delivered by John von Neumann, broke the distinction between numbers that mean things and numbers that do things. Our universe would never be the same” (Turing’s Cathedral: The Origins of the Digital Universe). This is a very bold statement. I’ve connected it with everything from the myth of Orpheus to synaesthetic environments like the one @rovinglibrarian shared with me in which one can listen to, and visualize, Wikipedia being edited. Thought vectors in concept space, indeed. The closest analogies I can find are with language itself, particularly the phonetic alphabet.

The larger point is now at the ready: in fullest practice and perhaps even for best results, particularly when it comes to deeper learning, it may well be that nothing is just anything. Bateson describes the moment in which “just a” thing becomes far more than “just a” thing as a “double take.” For Bateson, the double take bears a thrilling and uneasy relationship to the double bind, as well as to some kinds of derangement that are not at all beneficial. (This is the double-edged sword of human intellect, a sword that sometimes has ten edges or more–but I digress.) This double take (the kids call it, or used to call it, “wait what?”) indicates a moment of what Bateson calls “transcontextualism,” a paradoxical level-crossing moment (micro to macro, instance to meta, territory to map, or vice-versa) that initiates or indicates (hard to tell) deeper learning.
It seems that both those whose life is enriched by transcontextual gifts and those who are impoverished by transcontextual confusions are alike in one respect: for them there is always or often a “double take.” A falling leaf, the greeting of a friend, or a “primrose by the river’s brim” is not “just that and nothing more.” Exogenous experience may be framed in the contexts of dream, and internal thought may be projected into the contexts of the external world. And so on. For all this, we seek a partial explanation in learning and experience. (“Double Bind, 1969,” in Steps to an Ecology of Mind, U Chicago Press, 2000, p. 272). (EDIT: I had originally typed “eternal world,” but Bateson writes “external.” It’s an interesting typo, though, so I remember it here.)


It does seem to me, very often, that we do our best to purge our learning environments of opportunities for transcontextual gifts to emerge. This is understandable, given how bad and indeed “unproductive” (by certain lights) the transcontextual confusions can be. No one enjoys the feeling of falling, unless there are environments and guides that can make the falling feel like flying–more matter for another conversation, and a difficult art indeed, and one that like all art has no guarantees (pace Madame Tussaud).

2. So now the second strand, regarding Engelbart’s “Augmenting Human Intellect: A Conceptual Framework.” Much of this essay, it seems to me, is about identifying and fostering transcontextualism (transcontextualization?) as a networked activity in which both the individual and the networked community recognize the potential for “bootstrapping” themselves into greater learning through the kind of level-crossing Bateson imagines (Douglas Hofstadter explores these ideas too, particularly in I Am A Strange Loop and, it appears, in a book Tom Woodward is exploring and brought to my attention yesterday, Surfaces and Essences: Analogy as the Fuel and Fire of Thinking. That title alone makes the recursive point very neatly). So when Engelbart switches modes from engineering-style-specification to the story of bricks-on-pens to the dialogue with “Joe,” he seems to me not to be willful or even prohibitively difficult (though some of the ideas are undeniably complex). He seems to me to be experimenting with transcontextualism as an expressive device, an analytical strategy, and a kind of self-directed learning, a true essay: an attempt:

And by “complex situations” we include the professional problems of diplomats, executives, social scientists, life scientists, physical scientists, attorneys, designers–whether the problem situation exists for twenty minutes or twenty years.

A list worthy of Walt Whitman, and one that explicitly (and for me, thrillingly) crosses levels and enacts transcontextualism.

Here’s another list, one in which Engelbart tallies the range of “thought kernels” he wants to track in his formulative thinking (one might also say, his “research”):

The “unit records” here, unlike those in the Memex example, are generally scraps of typed or handwritten text on IBM-card-sized edge-notchable cards. These represent little “kernels” of data, thought, fact, consideration, concepts, ideas, worries, etc., that are relevant to a given problem area in my professional life.

Again, the listing enacts a principle: we map a problem space, a sphere of inquiry, along many dimensions–or we should. Those dimensions cross contexts–or they should. To think about this in terms of language for a moment, Engelbart’s idea seems to be that we should track our “kernels” across the indicative, the imperative, the subjunctive, the interrogative. To put it another way, we should be mindful of, and somehow make available for mindful building, many varieties of cognitive activity, including affect (which can be distinguished but not divided from cognition).

3. I don’t think this activity increases efficiency, if efficiency means “getting more done in less time.” (A “cognitive Taylorism,” as one seminarian put it.) More what is always the question. For me, Engelbart’s transcontextual gifts (and I’ll concede that there are likely transcontextual confusions in there too–it’s the price of transcontextualism, clearly) are such that the emphasis lands squarely on effectiveness, which in his essay means more work with positive potential (understanding there’s some disagreement but not total disagreement about… [more]
dougengelbart  transcontextualism  gardnercampbell  2013  gregorybateson  marshallmcluhan  socraticmethod  education  teaching  howweteach  howwelearn  learning  hammerhand  technology  computers  computing  georgedyson  food  textiles  texture  text  understanding  tools  secondlife  seymourpapert  sherryturkle  alanturing  johnvonneumann  doublebind  waltwhitman  memex  taylorism  efficiency  cognition  transcontextualization 
july 2017 by robertogreco
Networked Learning as Experiential Learning | EDUCAUSE
"No one believes that knowing the alphabet and sounding out words mean that a person possesses the deep literacy needed for college-level learning. Yet our ideas about digital literacy are steadily becoming more impoverished, to the point that many of my current students, immersed in a "walled garden" world of apps and social media, know almost nothing about the web or the Internet. For the first time since the emergence of the web, this past year I discovered that the majority of my sophomore-level students did not understand the concept of a URL and thus struggled with the effective use and formation of hyperlinks in the networked writing class that VCU's University College affectionately calls "Thought Vectors in Concept Space"—a phrase attributed by Kay to Engelbart and one that describes the fundamentally experiential aspect of networked learning.5 My students appeared not to be able to parse the domains in which they published their work, which meant that they could not consistently imagine how to locate or link to each other's work by simply examining the structure of the URLs involved. If one cannot understand the organizing principles of a built environment, one cannot contribute to the building. And if one cannot contribute to the building, certain vital modes of knowing will be forever out of reach.

Yet educators seeking to provide what Carl Rogers called the "freedom to learn" continue to work on those digital high-impact practices.6 It is a paradoxical task, to be sure, but it is one worth attempting—particularly now, when "for the first time in the still-short span of human history, the experience of creating media for a potentially large public is available to a multitude."7 Students' experience of what Henry Jenkins has articulated as the networked mediation of "participatory culture" must extend their experience to school as well.8 School as a site of the high-impact practice of learner-built, instructor-facilitated, digitally networked learning can transform the experience of education even as it preserves, and scales, our commitment to the education of the whole person.

The web was designed for just this kind of collaboration. One does not need permission to make a hyperlink. Yet one does need "the confident insight, the authority of media-making" to create meaning out of those links. Such confidence and authority should be among the highest learning outcomes available to our students within what Mimi Ito and others have described as "connected learning."9 Learner-initiated connections that identify both the nodes and the lines between them, instead of merely connecting the dots that teachers have already established (valuable as that might be), co-create what Lawrence Stenhouse argues is "the nature of knowledge . . . as distinct from information"—"a structure to sustain creative thought and provide frameworks for judgment." Such structures can encourage an enormously beneficial flowering of human diversity, one that lies beyond the reach of prefabricated outcomes: "Education as induction into knowledge is successful to the extent that it makes the behavioural outcomes of the students unpredictable."10

Offering students the possibility of experiential learning in personal, interactive, networked computing—in all its gloriously messy varieties—provides the richest opportunity yet for integrative thinking within and beyond "schooling." If higher education can embrace the complexity of networked learning and can value the condition of emergence that networked learning empowers, there may still be time to encourage networked learning as a structure and a disposition, a design and a habit of being."
networkedlearning  2016  gardnercampbell  jeromebruner  georgekuh  experientiallearning  experience  learning  howwelearn  education  carlrogers  hypertext  web  online  internet  literacy  alankay  dougengelbart  adelegoldberg  tednelson  vannevarbush  jcrlicklider  georgedyson  alanturing  johnvonneumann  self-actualization  unschooling  deschooling  progressive  networks  social 
february 2016 by robertogreco
Teaching Machines and Turing Machines: The History of the Future of Labor and Learning
"In all things, all tasks, all jobs, women are expected to perform affective labor – caring, listening, smiling, reassuring, comforting, supporting. This work is not valued; often it is unpaid. But affective labor has become a core part of the teaching profession – even though it is, no doubt, “inefficient.” It is what we expect – stereotypically, perhaps – teachers to do. (We can debate, I think, if it’s what we reward professors for doing. We can interrogate too whether all students receive care and support; some get “no excuses,” depending on race and class.)

What happens to affective teaching labor when it runs up against robots, against automation? Even the tasks that education technology purports to now be able to automate – teaching, testing, grading – are shot through with emotion when done by humans, or at least when done by a person who’s supposed to have a caring, supportive relationship with their students. Grading essays isn’t necessarily burdensome because it’s menial, for example; grading essays is burdensome because it is affective labor; it is emotionally and intellectually exhausting.

This is part of our conundrum: teaching labor is affective not simply intellectual. Affective labor is not valued. Intellectual labor is valued in research. At both the K12 and college level, teaching of content is often seen as menial, routine, and as such replaceable by machine. Intelligent machines will soon handle the task of cultivating human intellect, or so we’re told.

Of course, we should ask what happens when we remove care from education – this is a question about labor and learning. What happens to thinking and writing when robots grade students’ essays, for example. What happens when testing is standardized, automated? What happens when the whole educational process is offloaded to the machines – to “intelligent tutoring systems,” “adaptive learning systems,” or whatever the latest description may be? What sorts of signals are we sending students?

And what sorts of signals are the machines gathering in turn? What are they learning to do?
Often, of course, we do not know the answer to those last two questions, as the code and the algorithms in education technologies (most technologies, truth be told) are hidden from us. We are becoming, as law professor Frank Pasquale argues, a “black box society.” And the irony is hardly lost on me that one of the promises of massive collection of student data under the guise of education technology and learning analytics is to crack open the “black box” of the human brain.

We still know so little about how the brain works, and yet, we’ve adopted a number of metaphors from our understanding of that organ to explain how computers operate: memory, language, intelligence. Of course, our notion of intelligence – its measurability – has its own history, one wrapped up in eugenics and, of course, testing (and teaching) machines. Machines now both frame and are framed by this question of intelligence, with little reflection on the intellectual and ideological baggage that we carry forward and hard-code into them."



"We’re told by some automation proponents that instead of a future of work, we will find ourselves with a future of leisure. Once the robots replace us, we will have immense personal freedom, so they say – the freedom to pursue “unproductive” tasks, the freedom to do nothing at all even, except I imagine, to continue to buy things.
On one hand that means that we must address questions of unemployment. What will we do without work? How will we make ends meet? How will this affect identity, intellectual development?

Yet despite predictions about the end of work, we are all working more. As games theorist Ian Bogost and others have observed, we seem to be in a period of hyper-employment, where we find ourselves not only working numerous jobs, but working all the time on and for technology platforms. There is no escaping email, no escaping social media. Professionally, personally – no matter what you say in your Twitter bio that your Tweets do not represent the opinions of your employer – we are always working. Computers and AI do not (yet) mark the end of work. Indeed, they may mark the opposite: we are overworked by and for machines (for, to be clear, their corporate owners).

Often, we volunteer to do this work. We are not paid for our status updates on Twitter. We are not compensated for our check-ins in Foursquare. We don’t get kick-backs for leaving a review on Yelp. We don’t get royalties from our photos on Flickr.

We ask our students to do this volunteer labor too. They are not compensated for the data and content that they generate that is used in turn to feed the algorithms that run TurnItIn, Blackboard, Knewton, Pearson, Google, and the like. Free labor fuels our technologies: Forum moderation on Reddit – done by volunteers. Translation of the courses on Coursera and of the videos on Khan Academy – done by volunteers. The content on pretty much every “Web 2.0” platform – done by volunteers.

We are working all the time; we are working for free.

It’s being framed, as of late, as the “gig economy,” the “freelance economy,” the “sharing economy” – but mostly it’s the service economy that now comes with an app and that’s creeping into our personal not just professional lives thanks to billions of dollars in venture capital. Work is still precarious. It is low-prestige. It remains unpaid or underpaid. It is short-term. It is feminized.

We all do affective labor now, cultivating and caring for our networks. We respond to the machines, the latest version of ELIZA, typing and chatting away hoping that someone or something responds, that someone or something cares. It’s a performance of care, disguising what is the extraction of our personal data."



"Personalization. Automation. Management. The algorithms will be crafted, based on our data, ostensibly to suit us individually, more likely to suit power structures in turn that are increasingly opaque.

Programmatically, the world’s interfaces will be crafted for each of us, individually, alone. As such, I fear, we will lose our capacity to experience collectivity and resist together. I do not know what the future of unions looks like – pretty grim, I fear; but I do know that we must enhance collective action in order to resist a future of technological exploitation, dehumanization, and economic precarity. We must fight at the level of infrastructure – political infrastructure, social infrastructure, and yes technical infrastructure.

It isn’t simply that we need to resist “robots taking our jobs,” but we need to challenge the ideologies, the systems that loath collectivity, care, and creativity, and that champion some sort of Randian individual. And I think the three strands at this event – networks, identity, and praxis – can and should be leveraged to precisely those ends.

A future of teaching humans not teaching machines depends on how we respond, how we design a critical ethos for ed-tech, one that recognizes, for example, the very gendered questions at the heart of the Turing Machine’s imagined capabilities, a parlor game that tricks us into believing that machines can actually love, learn, or care."
2015  audreywatters  education  technology  academia  labor  work  emotionallabor  affect  edtech  history  highered  highereducation  teaching  schools  automation  bfskinner  behaviorism  sexism  howweteach  alanturing  turingtest  frankpasquale  eliza  ai  artificialintelligence  robots  sharingeconomy  power  control  economics  exploitation  edwardthorndike  thomasedison  bobdylan  socialmedia  ianbogost  unemployment  employment  freelancing  gigeconomy  serviceeconomy  caring  care  love  loving  learning  praxis  identity  networks  privacy  algorithms  freedom  danagoldstein  adjuncts  unions  herbertsimon  kevinkelly  arthurcclarke  sebastianthrun  ellenlagemann  sidneypressey  matthewyglesias  karelčapek  productivity  efficiency  bots  chatbots  sherryturkle 
august 2015 by robertogreco
Ed-Tech's Monsters #ALTC
[video here: https://www.youtube.com/watch?v=Kiotl4G6fMw ]

"No doubt, we have witnessed in the last few years an explosion in the ed-tech industry and a growing, a renewed interest in ed-tech. Those here at ALT-C know that ed-tech is not new by any means; but there is this sense from many of its newest proponents (particularly in the States) that ed-tech has no history; there is only now and the future.

Ed-tech now, particularly that which is intertwined with venture capital, is boosted by powerful forms of storytelling: a disruptive innovation mythology, entrepreneurs' hagiography, design fiction, fantasy.

A fantasy that wants to extend its reach into the material world.

Society has been handed a map, if you will, by the technology industry in which we are shown how these brave ed-tech explorers have and will conquer and carve up virtual and physical space.

Fantasy.

We are warned of the dragons in dangerous places, the unexplored places, the over explored places, the stagnant, the lands of outmoded ideas — all the places where we should no longer venture. 

Hic Sunt Dracones. There be dragons.

Instead, I’d argue, we need to face our dragons. We need to face our monsters. We need to face the giants. They aren’t simply on the margins; they are, in many ways, central to the narrative."



"I’m in the middle of writing a book called Teaching Machines, a cultural history of the science and politics of ed-tech. An anthropology of ed-tech even, a book that looks at knowledge and power and practices, learning and politics and pedagogy. My book explores the push for efficiency and automation in education: “intelligent tutoring systems,” “artificially intelligent textbooks,” “robo-graders,” and “robo-readers.”

This involves, of course, a nod to “the father of computer science” Alan Turing, who worked at Bletchley Park of course, and his profoundly significant question “Can a machine think?”

I want to ask in turn, “Can a machine teach?”

Then too: What will happen to humans when (if) machines do “think"? What will happen to humans when (if) machines “teach”? What will happen to labor and what happens to learning?

And, what exactly do we mean by those verbs, “think” and “teach”? When we see signs of thinking or teaching in machines, what does that really signal? Is it that our machines are becoming more “intelligent,” more human? Or is it that humans are becoming more mechanical?

Rather than speculate about the future, I want to talk a bit about the past."



"To oppose technology or to fear automation, some like The Economist or venture capitalist Marc Andreessen argue, is to misunderstand how the economy works. (I’d suggest perhaps Luddites understand how the economy works quite well, thank you very much, particularly when it comes to questions of “who owns the machinery” we now must work on. And yes, the economy works well for Marc Andreessen, that’s for sure.)"



"But even without machines, Frankenstein is still read as a cautionary tale about science and about technology; and Shelley’s story has left an indelible impression on us. Its references are scattered throughout popular culture and popular discourse. We frequently use part of the title — “Franken” — to invoke a frightening image of scientific experimentation gone wrong. Frankenfood. Frankenfish. The monster, a monstrosity — a technological crime against nature.

It is telling, very telling, that we often confuse the scientist, Victor Frankenstein, with his creation. We often call the monster Frankenstein.

As the sociologist Bruno Latour has argued, we don’t merely mistake the identity of Frankenstein; we also mistake his crime. It "was not that he invented a creature through some combination of hubris and high technology,” writes Latour, "but rather that he abandoned the creature to itself.”

The creature — again, a giant — insists in the novel that he was not born a monster, but he became monstrous after Frankenstein fled the laboratory in horror when the creature opened his “dull yellow eye,” breathed hard, and convulsed to life.

"Remember that I am thy creature,” he says when he confronts Frankenstein, "I ought to be thy Adam; but I am rather the fallen angel, whom thou drivest from joy for no misdeed. Everywhere I see bliss, from which I alone am irrevocably excluded. I was benevolent and good— misery made me a fiend.”

As Latour observes, "Written at the dawn of the great technological revolutions that would define the 19th and 20th centuries, Frankenstein foresees that the gigantic sins that were to be committed would hide a much greater sin. It is not the case that we have failed to care for Creation, but that we have failed to care for our technological creations. We confuse the monster for its creator and blame our sins against Nature upon our creations. But our sin is not that we created technologies but that we failed to love and care for them. It is as if we decided that we were unable to follow through with the education of our children.”

Our “gigantic sin”: we failed to love and care for our technological creations. We must love and educate our children. We must love and care for our machines, lest they become monsters.

Indeed, Frankenstein is also a novel about education. The novel is structured as a series of narratives — Captain Walton’s story — a letter he sends to his sister as he explores the Arctic — which then tells Victor Frankenstein’s story through which we hear the creature tell his own story, along with that of the De Lacey family and the arrival of Safie, “the lovely Arabian." All of these are stories about education: some self-directed learning, some through formal schooling.

While typically Frankenstein is interpreted as a condemnation of science gone awry, the novel can also be read as a condemnation of education gone awry. The novel highlights the dangerous consequences of scientific knowledge, sure, but it also explores how knowledge — gained inadvertently, perhaps, gained surreptitiously, gained without guidance — might be disastrous. Victor Frankenstein, stumbling across the alchemists and then having their work dismissed outright by his father, stoking his curiosity. The creature, learning to speak by watching the De Lacey family, learning to read by watching Safie do the same, his finding and reading Volney's Ruins of Empires and Milton’s Paradise Lost."



"To be clear, my nod to the Luddites or to Frankenstein isn’t about rejecting technology; but it is about rejecting exploitation. It is about rejecting an uncritical and unexamined belief in progress. The problem isn’t that science gives us monsters, it's that we have pretended like it is truth and divorced from responsibility, from love, from politics, from care. The problem isn’t that science gives us monsters, it’s that it does not, despite its insistence, give us “the answer."

And that is problem with ed-tech’s monsters. That is the problem with teaching machines.

In order to automate education, must we see knowledge in a certain way, as certain: atomistic, programmable, deliverable, hierarchical, fixed, measurable, non-negotiable? In order to automate that knowledge, what happens to care?"



"I’ll leave you with one final quotation, from Hannah Arendt who wrote,
"Education is the point at which we decide whether we love the world enough to assume responsibility for it and by the same token save it from that ruin which, except for renewal, except for the coming of the new and young, would be inevitable. And education, too, is where we decide whether we love our children enough not to expel them from our world and leave them to their own devices, nor to strike from their hands their chance of undertaking something new, something unforeseen by us, but to prepare them in advance for the task of renewing a common world.”

Our task, I believe, is to tell the stories and build the society that would place education technology in that same light: “renewing a common world.”

We in ed-tech must face the monsters we have created, I think. These are the monsters in the technologies of war and surveillance a la Bletchley Park. These are the monsters in the technologies of mass production and standardization. These are the monsters in the technologies of behavior modification a la BF Skinner.

These are the monsters ed-tech must face. And we must all consider what we need to do so that we do not create more of them."
audreywatters  edtech  technology  education  schools  data  monsters  dragons  frankenstein  luddites  luddism  neoluddism  alanturing  thomaspynchon  society  bfskinner  standardization  surveillance  massproduction  labor  hannaharendt  brunolatour  work  kevinkelly  technosolutionism  erikbrynjolfsson  lordbyron  maryshelley  ethics  hierarchy  children  responsibility  love  howwelearn  howweteach  teaching  learning  politics  policy  democracy  exploitation  hierarchies  progress  science  scientism  markets  aynrand  liberarianism  projectpigeon  teachingmachines  personalization  individualization  behavior  behaviorism  economics  capitalism  siliconvalley 
september 2014 by robertogreco
There is no Such Thing as Invention — I.M.H.O. — Medium
"I remember the very instant that I learned to be creative, to ‘invent’ things, to do things in an interesting and unusual way, and it happened by accident, literally.

I created mess around myself, the kind of chaos that would be very dangerous in an operating theater but which is synonymous with artists’ studios, and in that mess I edited the accidents. By increasing the amount of mess I had freed things up and increased the possibilities, I had maximised the adjacent possible and was able to create the appearance of inventing new things by editing the mistakes which appeared novel and interesting.

[photo with caption "Francis Bacon's studio did not look like a clinical laboratory."]

If you really think about it, there is no other way. Whether this mess is internal in our brains, or external in our environment, we can only select things that are possible; invention is merely when the possible is new. Real invention, out of nowhere, not selecting from the possible, is impossible, by definition."

[via: http://kottke.org/13/06/how-to-invent-things-edit-your-mess ]
davidgalbraith  creativity  invention  messiness  adjacentpossible  2013  francisbacon  howwework  reynerbanham  alanturing  claudeshannon  jazz  harlem  richarddawkins  theselfishgene  stuartkauffman  naturalselection  siliconvalley  freedom  autonomy  burningman  openstudioproject  lcproject  environment  innovation  critical-messtheory  criticalmesses 
june 2013 by robertogreco
The Great Pretender: Turing as a Philosopher of Imitation - Ian Bogost - The Atlantic
"Such is Turing's legacy: that of a nested chain of pretenses, each pointing not to reality, but to the caricature of another idea, device, individual, or concept. In the inquest on his death, Turing's coroner wrote, "In a man of his type, one never knows what his mental processes are going to do next." It's easy to take this statement as a slight, an insult against a national hero whose culture took him as a criminal just for being a gay man. But can't you also see it differently, more generously? Everyone--everything--is one of his or her or its own type, its internal processes forever hidden from view, its real nature only partly depicted through its behavior. As heirs to Turing's legacy, the best we can do is admit it. Everyone pretends. And everything is more than we can ever see of it."
history  technology  alanturing  2012  ianbogost  computing  via:ayjay 
july 2012 by robertogreco
Bruce Sterling's Turing Centenary Speech | Beyond The Beyond | Wired.com
Discussed: weirdness, femininity, AI skepticism, the aesthetics of computational art. Sort of a mess but consistently interesting.
ai  technology  gender  via:jbushnell  brucesterling  newaesthetic  art  alanturing 
july 2012 by robertogreco
Brute Force Architecture and its Discontents - etc
"More so than cardboard or other model making materials, blue foam erases the signature of its creator allowing for an easier ‘apples to apples’ comparison. The anonymizing uniformity of the cut surfaces and alien blueness of the foam itself allowed multiple workers to prepare options in parallel without the differences of personal craft becoming an element of distraction during moments of evaluation. The cumulative effect means that a table covered in foam models all produced by different individuals can be assessed for their ideas rather than the quirks of who made them or how they were created. What’s on display are the ideas themselves, without any distracting metadata or decoration. This is the model making equivalent of Edward Tufte’s quest to eliminate chartjunk."
bryanboyer  thermalpaper  smlxl  flatness  hierarchy  computation  computing  alanturing  ideation  oma  mvrdv  rex  big  howwework  thinking  making  bruteforcearchitecture  2012  zahahadid  collaboration  chartjunk  edwardtufte  process  remkoolhaas  architecture  design  horizontality  horizontalidad 
june 2012 by robertogreco
Q&A: Hacker Historian George Dyson Sits Down With Wired's Kevin Kelly | Wired Magazine | Wired.com
"In some creation myths, life arises out of the earth; in others, life falls out of the sky. The creation myth of the digital universe entails both metaphors. The hardware came out of the mud of World War II, and the code fell out of abstract mathematical concepts. Computation needs both physical stuff and a logical soul to bring it to life…"

"…When I first visited Google…I thought, my God, this is not Turing’s mansion—this is Turing’s cathedral. Cathedrals were built over hundreds of years by thousands of nameless people, each one carving a little corner somewhere or adding one little stone. That’s how I feel about the whole computational universe. Everybody is putting these small stones in place, incrementally creating this cathedral that no one could even imagine doing on their own."
artificialintelligence  ai  software  nuclearbombs  stanulam  hackers  hacking  alanturing  coding  klarivanneumann  nilsbarricelli  MANIAC  digitaluniverse  biology  digitalorganisms  computers  computing  freemandyson  johnvanneumann  interviews  creation  kevinkelly  turing'smansion  turing'scathedral  turing  wired  history  georgedyson 
february 2012 by robertogreco
russell davies: again with the post digital
"And then, this morning, when struggling to think of a good ending to this, I heard a brilliant talk by George Dyson – describing the early history of computing unearthed from correspondence between Turing and Von Neumann. And I thought I heard him cite this quote from Turing. I wasn’t quite fast enough with my pen to be 100% sure and I can’t find it on Google, but I think this is what he said. And, if it is, it’s exactly what I mean and we can leave it at that. What I think he said is this: “being digital should be more interesting than just being electronic”. I’m sure that meant something slightly different in the middle of the last century but the words are useful and simple now, they’ll do for me as a tiny rallying cry; being digital should be more interesting than just being electronic."
russelldavies  2011  alanturing  georgedyson  andyhuntington  papernet  internetofthings  brucesterling  mattjones  screenfatigue  newspaperclub  boredom  materials  physical  digital  embodiment  embodieddata  spimes  post-digital  iot 
november 2011 by robertogreco
Kevin Kelly -- The Technium - Turing'd
"Once you are Turing'd it is much easier to believe other occupations which we humans used to do uniquely, can be done by computers. You tend to be open to disruptive technology in all parts of your life."
kevinkelly  technology  society  work  luddites  turing  computing  education  future  business  software  alanturing 
march 2008 by robertogreco