robertogreco : sidneypressey   4

The History of Ed-Tech: What Went Wrong?
"There’s a popular origin story about education technology: that, it was first developed and adopted by progressive educators, those interested in “learning by doing” and committed to schools as democratic institutions. Then, something changed in the 1980s (or so): computers became commonplace, and ed-tech became commodified – built and sold by corporations, not by professors or by universities. Thus the responsibility for acquiring classroom technology and for determining how it would be used shifted from a handful of innovative educators (often buying hardware and software with their own money) to school administration; once computers were networked, the responsibility shifted to IT. The purpose of ed-tech shifted as well – from creative computing to keyboarding, from projects to “productivity.” (And I’ll admit. I’m guilty of having repeated some form of this narrative myself.)

[tweet: "What if the decentralized, open web was a historical aberration, an accident between broadcast models, not an ideal that was won then lost?"
https://twitter.com/ibogost/status/644994975797805056 ]

But what if, to borrow from Ian Bogost, “progressive education technology” – the work of Seymour Papert, for example – was a historical aberration, an accident between broadcast models, not an ideal that was won then lost?

There’s always a danger in nostalgia, when one invents a romanticized past – in this case, a once-upon-a-time when education technology was oriented towards justice and inquiry before it was re-oriented towards test scores and flash cards. But rather than think about “what went wrong,” it might be useful to think about what was wrong all along.

Although Papert was no doubt a pioneer, he wasn’t the first person to recognize the potential for computers in education. And he was hardly alone in the 1960s and 1970s in theorizing or developing educational technologies. There was Patrick Suppes at Stanford, for example, who developed math instruction software for IBM mainframes and who popularized what became known as “computer-assisted instruction.” (Arguably, Papert refers to Suppes’ work in Mindstorms when he refers to “the computer being used to program the child” rather than his own vision of the child programming the computer.)

Indeed, as I’ve argued repeatedly, the history of ed-tech dates at least as far back as the turn of the twentieth century and the foundation of the field of educational psychology. Much of what we see in ed-tech today reflects those origins – the work of psychologist Sidney Pressey, the work of psychologist B. F. Skinner, the work of psychologist Edward Thorndike. It reflects those origins because, as historian Ellen Condliffe Lagemann has astutely observed, “One cannot understand the history of education in the United States during the twentieth century unless one realizes that Edward L. Thorndike won and John Dewey lost.”

Ed-tech has always been more Thorndike than Dewey because education has been more Thorndike than Dewey. That means more instructivism than constructionism. That means more multiple choice tests than projects. That means more surveillance than justice.
(How Thorndike’s ed-tech is now being rebranded as “personalization” (and by extension, as progressive education) – now that’s an interesting story…)"

[via: ""Edward L. Thorndike won and John Dewey lost" is pretty much the perfect tl;dr version of the history of education."
https://twitter.com/jonbecker/status/884460561584594944

See also: "Or David Snedden won. People forget about him."
https://twitter.com/doxtdatorb/status/884520604287860736 ]
audreywatters  ianbogost  johndewey  seymourpapert  edtech  computers  technology  education  ellencondliffe  edwardthorndike  bfskinner  sidneypressey  psychology  management  administration  it  patricksuppes  constructivism  constructionism  progressive  mindstorms  progressiveeducation  standardization  personalization  instructivism  testing  davidsnedden  history 
july 2017 by robertogreco
Teaching Machines and Turing Machines: The History of the Future of Labor and Learning
"In all things, all tasks, all jobs, women are expected to perform affective labor – caring, listening, smiling, reassuring, comforting, supporting. This work is not valued; often it is unpaid. But affective labor has become a core part of the teaching profession – even though it is, no doubt, “inefficient.” It is what we expect – stereotypically, perhaps – teachers to do. (We can debate, I think, if it’s what we reward professors for doing. We can interrogate too whether all students receive care and support; some get “no excuses,” depending on race and class.)

What happens to affective teaching labor when it runs up against robots, against automation? Even the tasks that education technology purports to now be able to automate – teaching, testing, grading – are shot through with emotion when done by humans, or at least when done by a person who’s supposed to have a caring, supportive relationship with their students. Grading essays isn’t necessarily burdensome because it’s menial, for example; grading essays is burdensome because it is affective labor; it is emotionally and intellectually exhausting.

This is part of our conundrum: teaching labor is affective, not simply intellectual. Affective labor is not valued. Intellectual labor is valued in research. At both the K12 and college level, the teaching of content is often seen as menial, routine, and, as such, replaceable by machine. Intelligent machines will soon handle the task of cultivating human intellect, or so we’re told.

Of course, we should ask what happens when we remove care from education – this is a question about labor and learning. What happens to thinking and writing when robots grade students’ essays, for example? What happens when testing is standardized, automated? What happens when the whole educational process is offloaded to the machines – to “intelligent tutoring systems,” “adaptive learning systems,” or whatever the latest description may be? What sorts of signals are we sending students?

And what sorts of signals are the machines gathering in turn? What are they learning to do?

Often, of course, we do not know the answer to those last two questions, as the code and the algorithms in education technologies (most technologies, truth be told) are hidden from us. We are becoming, as law professor Frank Pasquale argues, a “black box society.” And the irony is hardly lost on me that one of the promises of massive collection of student data under the guise of education technology and learning analytics is to crack open the “black box” of the human brain.

We still know so little about how the brain works, and yet, we’ve adopted a number of metaphors from our understanding of that organ to explain how computers operate: memory, language, intelligence. Of course, our notion of intelligence – its measurability – has its own history, one wrapped up in eugenics and, of course, testing (and teaching) machines. Machines now both frame and are framed by this question of intelligence, with little reflection on the intellectual and ideological baggage that we carry forward and hard-code into them."



"We’re told by some automation proponents that instead of a future of work, we will find ourselves with a future of leisure. Once the robots replace us, we will have immense personal freedom, so they say – the freedom to pursue “unproductive” tasks, the freedom to do nothing at all even, except I imagine, to continue to buy things.
On one hand that means that we must address questions of unemployment. What will we do without work? How will we make ends meet? How will this affect identity, intellectual development?

Yet despite predictions about the end of work, we are all working more. As games theorist Ian Bogost and others have observed, we seem to be in a period of hyper-employment, where we find ourselves not only working numerous jobs, but working all the time on and for technology platforms. There is no escaping email, no escaping social media. Professionally, personally – no matter what you say in your Twitter bio that your Tweets do not represent the opinions of your employer – we are always working. Computers and AI do not (yet) mark the end of work. Indeed, they may mark the opposite: we are overworked by and for machines (for, to be clear, their corporate owners).

Often, we volunteer to do this work. We are not paid for our status updates on Twitter. We are not compensated for our check-ins on Foursquare. We don’t get kickbacks for leaving a review on Yelp. We don’t get royalties from our photos on Flickr.

We ask our students to do this volunteer labor too. They are not compensated for the data and content that they generate that is used in turn to feed the algorithms that run TurnItIn, Blackboard, Knewton, Pearson, Google, and the like. Free labor fuels our technologies: Forum moderation on Reddit – done by volunteers. Translation of the courses on Coursera and of the videos on Khan Academy – done by volunteers. The content on pretty much every “Web 2.0” platform – done by volunteers.

We are working all the time; we are working for free.

It’s being framed, as of late, as the “gig economy,” the “freelance economy,” the “sharing economy” – but mostly it’s the service economy that now comes with an app and that’s creeping into our personal not just professional lives thanks to billions of dollars in venture capital. Work is still precarious. It is low-prestige. It remains unpaid or underpaid. It is short-term. It is feminized.

We all do affective labor now, cultivating and caring for our networks. We respond to the machines, the latest version of ELIZA, typing and chatting away, hoping that someone or something responds, that someone or something cares. It’s a performance of care, disguising what is, in fact, the extraction of our personal data."



"Personalization. Automation. Management. The algorithms will be crafted, based on our data, ostensibly to suit us individually, more likely to suit power structures in turn that are increasingly opaque.

Programmatically, the world’s interfaces will be crafted for each of us, individually, alone. As such, I fear, we will lose our capacity to experience collectivity and resist together. I do not know what the future of unions looks like – pretty grim, I fear; but I do know that we must enhance collective action in order to resist a future of technological exploitation, dehumanization, and economic precarity. We must fight at the level of infrastructure – political infrastructure, social infrastructure, and yes technical infrastructure.

It isn’t simply that we need to resist “robots taking our jobs”; we need to challenge the ideologies, the systems that loathe collectivity, care, and creativity, and that champion some sort of Randian individual. And I think the three strands at this event – networks, identity, and praxis – can and should be leveraged to precisely those ends.

A future of teaching humans, not teaching machines, depends on how we respond, how we design a critical ethos for ed-tech – one that recognizes, for example, the very gendered questions at the heart of the Turing Machine’s imagined capabilities, a parlor game that tricks us into believing that machines can actually love, learn, or care."
2015  audreywatters  education  technology  academia  labor  work  emotionallabor  affect  edtech  history  highered  highereducation  teaching  schools  automation  bfskinner  behaviorism  sexism  howweteach  alanturing  turingtest  frankpasquale  eliza  ai  artificialintelligence  robots  sharingeconomy  power  control  economics  exploitation  edwardthorndike  thomasedison  bobdylan  socialmedia  ianbogost  unemployment  employment  freelancing  gigeconomy  serviceeconomy  caring  care  love  loving  learning  praxis  identity  networks  privacy  algorithms  freedom  danagoldstein  adjuncts  unions  herbertsimon  kevinkelly  arthurcclarke  sebastianthrun  ellenlagemann  sidneypressey  matthewyglesias  karelčapek  productivity  efficiency  bots  chatbots  sherryturkle 
august 2015 by robertogreco
The Invented History of 'The Factory Model of Education'
[Follow-up notes here: http://www.aud.life/2015/notes-on-the-invented-history-of-the-factory-model-of ]

"Sal Khan is hardly the only one who tells a story of “the factory of model of education” that posits the United States adopted Prussia’s school system in order to create a compliant populace. It’s a story cited by homeschoolers and by libertarians. It’s a story told by John Taylor Gatto in his 2009 book Weapons of Mass Instruction. It’s a story echoed by The New York Times’ David Brooks. Here he is in 2012: “The American education model…was actually copied from the 18th-century Prussian model designed to create docile subjects and factory workers.”

For what it’s worth, Prussia was not highly industrialized when Frederick the Great formalized its education system in the late 1700s. (Very few places in the world were back then.) Training future factory workers, docile or not, was not really the point.

Nevertheless, industrialization is often touted as both the model and the rationale for the public education system, past and present. And by extension, it’s part of a narrative that now contends that schools are no longer equipped to address the needs of a post-industrial world."



"Despite these accounts offered by Toffler, Brooks, Khan, Gatto, and others, the history of schools doesn’t map so neatly onto the history of factories (and visa versa). As education historian Sherman Dorn has argued, “it makes no sense to talk about either ‘the industrial era’ or the development of public school systems as a single, coherent phase of national history.”"



"As Dorn notes, phrases like “the industrial model of education,” “the factory model of education,” and “the Prussian model of education” are used as a “rhetorical foil” in order make a particular political point – not so much to explain the history of education, as to try to shape its future."



"Many education reformers today denounce the “factory model of education” with an appeal to new machinery and new practices that will supposedly modernize the system. That argument is now and has been for a century the rationale for education technology. As Sidney Pressey, one of the inventors of the earliest “teaching machines” wrote in 1932 predicting "The Coming Industrial Revolution in Education,"
Education is the one major activity in this country which is still in a crude handicraft stage. But the economic depression may here work beneficially, in that it may force the consideration of efficiency and the need for laborsaving devices in education. Education is a large-scale industry; it should use quantity production methods. This does not mean, in any unfortunate sense, the mechanization of education. It does mean freeing the teacher from the drudgeries of her work so that she may do more real teaching, giving the pupil more adequate guidance in his learning. There may well be an “industrial revolution” in education. The ultimate results should be highly beneficial. Perhaps only by such means can universal education be made effective.

Pressey, much like Sal Khan and other education technologists today, believed that teaching machines could personalize and “revolutionize” education by allowing students to move at their own pace through the curriculum. The automation of the menial tasks of instruction would enable education to scale, Pressey – presaging MOOC proponents – asserted.

We tend not to see automation today as mechanization so much as algorithmization – the promise and potential in artificial intelligence and virtualization, as if this magically makes these new systems of standardization and control lighter and liberatory.

And so too we’ve invented a history of “the factory model of education” in order to justify an “upgrade” – to new software and hardware that will do much of the same thing schools have done for generations now, just (supposedly) more efficiently, with control moved out of the hands of labor (teachers) and into the hands of a new class of engineers, out of the realm of the government and into the realm of the market."
factoryschools  education  history  2015  audreywatters  edtech  edreform  mechanization  automation  algorithms  personalization  labor  teaching  howweteach  howwelearn  mooc  moocs  salkhan  sidneypressey  1932  prussia  horacemann  lancastersystem  frederickjohngladman  mikecaulfield  jamescordiner  prussianmodel  frederickengels  shermandorn  alvintoffler  johntaylorgatto  davidbrooksm  monitorialsystem  khanacademy  stevedenning  rickhess  us  policy  change  urgency  futureshock  1970  bellsystem  madrassystem  davidstow  victorcousin  salmankhan 
april 2015 by robertogreco
Should We Automate Education? | EdTech Magazine
"In 1962, Raymond Callahan published Education and the Cult of Efficiency, a historical account of the influence that “scientific management” (also known as “Taylorism,” after its developer, Frederick Taylor) had on American schools in the early 20th century — that is, the push to run schools more like factories, where the productivity of workers was measured, controlled and refined.

Callahan’s main argument was that the pressures on the education system to adopt Taylorism resulted neither in more refined ways to teach nor in better ways to learn, but rather, in an emphasis on cost cutting. Efficiency, he argued, “amounted to an analysis of the budget. … Decisions on what should be taught were not made on educational, but on financial grounds.”

Fifty years later, we remain obsessed with creating a more “efficient” educational system (although ironically, we object to schools based on that very “factory model”). Indeed, this might be one of the major promises that educational technologies make: to deliver a more efficient way to teach and learn, and a more efficient way to manage schooling.

Deciding What We Want From Education

Adaptive learning — computer-based instruction and assessment that allows each student to move at her or his pace — is perhaps the latest in a series of technologies that promise more efficient education. The efficiency here comes, in part, from the focus on the individual — personalization — instead of on an entire classroom of students.

But it’s worth noting that adaptive learning isn’t new. “Intelligent tutoring systems” have been under development for decades now. The term “intelligent tutoring” was coined in the 1980s; research into computer-assisted instruction dates to the 1960s; and programmed instruction predates the computer altogether, with Sidney Pressey’s and B. F. Skinner’s “teaching machines” of the 1920s and 1950s, respectively.

“Education must become more efficient,” Skinner insisted. “To this end, curricula must be revised and simplified, and textbooks and classroom techniques improved.”

Rarely do we ask what exactly “efficiency” in education or ed tech entails. Does it mean a reduction in errors? Faster learning? Reshaping the curriculum based on market demands? Does it mean cutting labor costs — larger classroom sizes, perhaps, or teachers replaced by machines?

We also often fail to ask why efficiency would be something we would value in education at all. Schools shouldn’t be factories. Students aren’t algorithms.

What happens if we prioritize efficiency in education? By doing so, are we simply upgrading the factory model of schooling with newer technologies? What happens to spontaneity and messiness? What happens to contemplation and curiosity?

There’s danger, I’d argue, in relying on teaching machines — on a push for more automation in education. We forget that we’re teaching humans."
audreywatters  automation  education  edtech  learning  children  humanism  humans  efficiency  2014  1962  raymondcallahan  management  taylorism  factoryschools  schools  industrialeducation  schooling  adaptivelearning  bfskinner  sidneypressey  computers  computing  technology  curiosity  messiness  spontaneity  unschooling  deschooling 
april 2014 by robertogreco
