
robertogreco : cognitivescience   6

▶ Audrey Watters | Gettin' Air with Terry Greene
"Audrey Watters (@audreywatters) is an ed-tech folk hero who writes at Hack Education @hackeducation where, for the past nine years, she has taken the lead in keeping the field on its toes in regards to educational technology's "progress". Her long awaited and much anticipated book, "Teaching Machines", will be out in the new year."
2019  audreywatters  edtech  terrygreene  bfskinner  technology  schools  education  turnitin  history  learning  behaviorism  cognition  cognitivescience  psychology  automation  standardization  khanacademy  howweteach  liberation  relationships  agency  curiosity  inquiry  justice  economics  journalism  criticism  vr  facebook  venturecapital  capitalism  research  fabulism  contrafabulism  siliconvalley  archives  elonmusk  markzuckerberg  gatesfoundation  billgates 
11 weeks ago by robertogreco
Why books don’t work | Andy Matuschak
"Books are easy to take for granted. Not any specific book, I mean: the form of a book. Paper or pixels—it hardly matters. Words in lines on pages in chapters. And at least for non-fiction books, one implied assumption at the foundation: people absorb knowledge by reading sentences. This last idea so invisibly defines the medium that it’s hard not to take for granted, which is a shame because, as we’ll see, it’s quite mistaken.

Picture some serious non-fiction tomes. The Selfish Gene; Thinking, Fast and Slow; Guns, Germs, and Steel; etc. Have you ever had a book like this—one you’d read—come up in conversation, only to discover that you’d absorbed what amounts to a few sentences? I’ll be honest: it happens to me regularly. Often things go well at first. I’ll feel I can sketch the basic claims, paint the surface; but when someone asks a basic probing question, the edifice instantly collapses. Sometimes it’s a memory issue: I simply can’t recall the relevant details. But just as often, as I grasp about, I’ll realize I had never really understood the idea in question, though I’d certainly thought I understood when I read the book. Indeed, I’ll realize that I had barely noticed how little I’d absorbed until that very moment.

I know I’m not alone here. When I share this observation with others—even others, like myself, who take learning seriously—it seems that everyone has had a similar experience. The conversation often feels confessional: there’s some bashfulness, almost as if these lapses reveal some unusual character flaw. I don’t think it’s a character flaw, but whatever it is, it’s certainly not unusual. In fact, I suspect this is the default experience for most readers. The situation only feels embarrassing because it’s hard to see how common it is.

Now, the books I named aren’t small investments. Each takes around 6–9 hours to read. Adult American college graduates read 24 minutes a day on average, so a typical reader might spend much of a month with one of these books. Millions of people have read each of these books, so that’s tens of millions of hours spent. In exchange for all that time, how much knowledge was absorbed? How many people absorbed most of the knowledge the author intended to convey? Or even just what they intended to acquire? I suspect it’s a small minority. (Unfortunately, my literature reviews have turned up no formal studies of this question, so I can only appeal to your intuition.)

I’m not suggesting that all those hours were wasted. Many readers enjoyed reading those books. That’s wonderful! Certainly most readers absorbed something, however ineffable: points of view, ways of thinking, norms, inspiration, and so on. Indeed, for many books (and in particular most fiction), these effects are the point.

This essay is not about that kind of book. It’s about explanatory non-fiction like the books I mentioned above, which aim to convey detailed knowledge. Some people may have read Thinking, Fast and Slow for entertainment value, but in exchange for their tens of millions of collective hours, I suspect many readers—or maybe even most readers—expected to walk away with more. Why else would we feel so startled when we notice how little we’ve absorbed from something we’ve read?

All this suggests a peculiar conclusion: as a medium, books are surprisingly bad at conveying knowledge, and readers mostly don’t realize it.

The conclusion is peculiar, in part, because books are shockingly powerful knowledge-carrying artifacts! In the Cosmos episode, “The Persistence of Memory,” Carl Sagan exalts:

What an astonishing thing a book is. It’s a flat object made from a tree with flexible parts on which are imprinted lots of funny dark squiggles. But one glance at it and you’re inside the mind of another person, maybe somebody dead for thousands of years. Across the millennia, an author is speaking clearly and silently inside your head, directly to you. Writing is perhaps the greatest of human inventions, binding together people who never knew each other, citizens of distant epochs. Books break the shackles of time. A book is proof that humans are capable of working magic.
Indeed: books are magical! Human progress in the era of mass communication makes clear that some readers really do absorb deep knowledge from books, at least some of the time. So why do books seem to work for some people sometimes? Why does the medium fail when it fails?

In these brief notes, we’ll explore why books so often don’t work, and why they succeed when they do. (Let’s get it out of the way: I’m aware of the irony here, using the written medium to critique the written medium! But if the ideas I describe here prove successful, then future notes on this subject won’t have that problem. This note is mere kindling, and I’ll be very happy if it’s fully consumed by the blaze it ignites.) Armed with that understanding, we’ll glimpse not only how we might improve books as a medium, but also how we might weave unfamiliar new forms—not from paper, and not from pixels, but from insights about human cognition."



"Why lectures don’t work"



"Why books don’t work"



"What about textbooks?"



"What to do about it

How might we make books actually work reliably? At this point, the slope before us might feel awfully steep. Some early footholds might be visible—a few possible improvements to books, or tools one might make to assist readers—but it’s not at all clear how to reach the summit. In the face of such a puzzle, it’s worth asking: are we climbing the right hill? Why are we climbing this particular hill at all?

I argued earlier that books, as a medium, weren’t built around any explicit model of how people learn. It’s possible that, in spite of this “original sin,” iterative improvements to the form, along with new tools to support readers, can make books much more reliable. But it’s also possible that we’ll never discover the insights we need while tethered to the patterns of thought implicit in this medium.

Instead, I propose: we don’t necessarily have to make books work. We can make new forms instead. This doesn’t have to mean abandoning narrative prose; it doesn’t even necessarily mean abandoning paper—rather, we can free our thinking by abandoning our preconceptions of what a book is. Maybe once we’ve done all this, we’ll have arrived at something which does indeed look much like a book. We’ll have found a gentle path around the back of that intimidating slope. Or maybe we’ll end up in different terrain altogether.

So let’s reframe the question. Rather than “how might we make books actually work reliably,” we can ask: How might we design mediums which do the job of a non-fiction book—but which actually work reliably?

I’m afraid that’s a research question—probably for several lifetimes of research—not something I can directly answer in these brief notes. But I believe it’s possible, and I’ll now try to share why.

To begin, it’s important to see that mediums can be designed, not just inherited. What’s more: it is possible to design new mediums which embody specific ideas. Inventors have long drawn on this unintuitive insight (see e.g. Douglas Engelbart’s 1962 “Augmenting Human Intellect” for a classic primary source, or Michael Nielsen’s 2016 “Thought as a Technology” for a synthesis of much work in this space), but I’ll briefly review it in case it’s unfamiliar. Mathematical proofs are a medium; the step-by-step structure embodies powerful ideas about formal logic. Snapchat Stories are a medium; the ephemerality embodies powerful ideas about emotion and identity. The World Wide Web is a medium (or perhaps many mediums); the pervasive hyperlinks embody powerful ideas about the associative nature of knowledge.

Perhaps most remarkably, the powerful ideas are often invisible: it’s not like we generally think about cognition when we sprinkle a blog post with links. But the people who created the Web were thinking about cognition. They designed its building blocks so that the natural way of reading and writing in this medium would reflect the powerful ideas they had in mind. Shaped intentionally or not, each medium’s fundamental materials and constraints give it a “grain” which makes it bend naturally in some directions and not in others.

This “grain” is what drives me when I gripe that books lack a functioning cognitive model. It’s not just that it’s possible to create a medium informed by certain ideas in cognitive science. Rather, it’s possible to weave a medium made out of those ideas, in which a reader’s thoughts and actions are inexorably—perhaps even invisibly—shaped by those ideas. Mathematical proofs, as a medium, don’t just consider ideas about logic; we don’t attach ideas about logic to proofs. The form is made out of ideas about logic.

How might we design a medium so that its “grain” bends in line with how people think and learn? So that by simply engaging with an author’s work in the medium—engaging in the obvious fashion; engaging in this medium’s equivalent of books’ “read all the words on the first page, then repeat with the next, and so on”—one would automatically do what’s necessary to understand? So that, in some deep way, the default actions and patterns of thought when engaging with this medium are the same thing as “what’s necessary to understand”?

That’s a tall order. Even on a theoretical level, it’s not clear what’s necessary for understanding. Indeed, that framing’s too narrow: there are many paths to understanding a topic. But cognitive scientists and educators have mapped some parts of this space, and they’ve distilled some powerful ideas we can use as a starting point.

For example, people struggle to absorb new material when their working memory is already overloaded. More concretely: if you’ve just been introduced to a zoo of new terms, you … [more]
books  learning  howwelearn  text  textbooks  andymatuschak  2019  canon  memory  understanding  lectures  cognition  cognitivescience  web  internet  howweread  howwewrite  reading  writing  comprehension  workingmemory  michaelnielsen  quantumcountry  education  unschooling  deschooling 
june 2019 by robertogreco
On how to grow an idea – The Creative Independent
"In the 1970s, a Japanese farmer discovered a better way to do something—by not doing it. In the introduction to Masasobu Fukuoka’s One-Straw Revolution, Frances Moore Lappé describes the farmer’s moment of inspiration:
The basic idea came to him one day as he happened to pass an old field which had been left unused and unplowed for many years. There he saw a tangle of grasses and weeds. From that time on, he stopped flooding his field in order to grow rice. He stopped sowing rice seed in the spring and, instead, put the seed out in the autumn, sowing it directly onto the surface of the field when it would naturally have fallen to the ground… Once he has seen to it that conditions have been tilted in favor of his crops, Mr. Fukuoka interferes as little as possible with the plant and animal communities in his fields.


Fukuoka’s practice, which he perfected over many years, eventually became known as “do-nothing farming.” Not that it was easy: the do-nothing farmer needed to be more attentive and sensitive to the land and seasons than a regular farmer. After all, Fukuoka’s ingenious method was hard-won after decades of his own close observations of weather patterns, insects, birds, trees, soil, and the interrelationships among all of these.

In One-Straw Revolution, Fukuoka is rightly proud of what he has perfected. Do-nothing farming not only required less labor, no machines, and no fertilizer—it also enriched the soil year by year, while most farms depleted their soil. Despite the skepticism of others, Fukuoka’s farm yielded a harvest equal to or greater than that of other farms. “It seems unlikely that there could be a simpler way of raising grain,” he wrote. “The proof is ripening right before your eyes.”

One of Fukuoka’s insights was that there is a natural intelligence at work in existing ecosystems, and therefore the most intelligent way to farm was to interfere as little as possible. This obviously requires a reworking not only of what we consider farming, but maybe even what we consider progress.

“The path I have followed, this natural way of farming, which strikes most people as strange, was first interpreted as a reaction against the advance and reckless development of science. But all I have been doing, farming out here in the country, is trying to show that humanity knows nothing. Because the world is moving with such furious energy in the opposite direction, it may appear that I have fallen behind the times, but I firmly believe that the path I have been following is the most sensible one.”

The One-Straw Revolution by Masanobu Fukuoka

✶✶

In my view, Fukuoka was an inventor. Typically we associate invention and progress with the addition or development of new technology. So what happens when moving forward actually means taking something away, or moving in a direction that appears (to us) to be backward? Fukuoka wrote: “This method completely contradicts modern agricultural techniques. It throws scientific knowledge and traditional farming know-how right out the window.”

This practice of fitting oneself into the greater ecological scheme of things is almost comically opposite to the stories in John McPhee’s The Control of Nature. There, we find near-Shakespearean tales of folly in which man tries and fails to master the sublime powers of his environment (e.g. the decades-long attempt to keep the Mississippi River from changing course).

Any artist or writer might find this contrast familiar. Why is it that when we sit down and try to force an idea, nothing comes—or, if we succeed in forcing it, it feels stale and contrived? Why do the best ideas appear uninvited and at the strangest times, darting out at us like an impish squirrel from a shrub?

The key, in my opinion, has to do with what you think it is that’s doing the producing, and where. It’s easy for me to say that “I” produce ideas. But when I’ve finished something, it’s often hard for me to say how it happened—where it started, what route it took, and why it ended where it did. Something similar is happening on a do-nothing farm, where transitive verbs seem inadequate. It doesn’t sound quite right to say that Fukuoka “farmed the land”—it’s more like he collaborated with the land, and through his collaboration, created the conditions for certain types of growth.

“A great number, if not the majority, of these things have been described, inventoried, photographed, talked about, or registered. My intention in the pages that follow was to describe the rest instead: that which is generally not taken note of, that which is not noticed, that which has no importance: what happens when nothing happens other than the weather, people, cars, and clouds.”

An Attempt at Exhausting a Place in Paris by Georges Perec

✶✶

I’ve known for my entire adult life that going for a walk is how I can think most easily. Walking is not simply moving your thinking mind (some imagined insular thing) outside. The process of walking is thinking. In fact, in his book The Spell of the Sensuous: Perception and Language in a More-than-Human World, David Abram proposes that it is not we who are thinking, but rather the environment that is thinking through us. Intelligence and thought are things to be found both in and around the self. “Each place is a unique state of mind,” Abram writes. “And the many owners that constitute and dwell within that locale—the spiders and the tree frogs no less than the human—all participate in, and partake of, the particular mind of the place.”

This is not as hand-wavy as it sounds. Studies in cognitive science have suggested that we do not encounter the environment as a static thing, nor are we static ourselves. As Francisco Varela, Evan Thompson, and Eleanor Rosch put it in The Embodied Mind (a study of cognitive science alongside Buddhist principles): “Cognition is not the representation of a pre-given world by a pre-given mind but is rather the enactment of a world and a mind… “ (emphasis mine). Throughout the book, the authors build a model of cognition in which mind and environment are not separate, but rather co-produced from the very point at which they meet.

[image]

“The Telegarden is an art installation that allows web users to view and interact with a remote garden filled with living plants. Members can plant, water, and monitor the progress of seedlings via the tender movements of an industrial robot arm.”

✶✶

Ideas are not products, as much as corporations would like them to be. Ideas are intersections between ourselves and something else, whether that’s a book, a conversation with a friend, or the subtle suggestion of a tree. Ideas can literally arise out of clouds (if we are looking at them). That is to say: ideas, like consciousness itself, are emergent properties, and thinking might be more participation than it is production. If we can accept this view of the mind with humility and awe, we might be amazed at what will grow there.


breathing [animation]

✶✶

To accompany this essay, I’ve created a channel on Are.na called “How to grow an idea.” There you’ll find some seeds for thought, scattered amongst other growths: slime molds, twining vines, internet gardens, and starling murmurations. The interview with John Cage, where he sits by an open window and rejoices in unwritten music, might remind you a bit of Fukuoka, as might Scott Polach’s piece in which an audience applauds the sunset. The channel starts with a reminder to breathe, and ends with an invitation to take a nap. Hopefully, somewhere in between, you might encounter something new."
intelligence  methodology  ideas  jennyodell  2018  are.na  masasobufukuoka  francesmoorelappé  farming  slow  nothing  idleness  nature  time  patience  productivity  interdependence  multispecies  morethanhuman  do-nothingfarming  labor  work  sustainability  ecosystems  progress  invention  technology  knowledge  johnmcphee  collaboration  land  growth  georgesperec  walking  thinking  slowthinking  perception  language  davidabram  cognitivescience  franciscovarela  evanthompson  eleanorrosch  buddhism  cognition  johncage  agriculture 
april 2018 by robertogreco
How do Smartphones Affect Human Thought? » Cyborgology
"Actually, they tested more than intuitiveness, but also ability, yet I digress. This hypothesis implies (though does not state) a research question: How does smartphone usage affect cognitive processes? This is an important question, but one the research was never prepared to answer thoughtfully. Rather, the authors recast this question as a prediction, embedded in a host of assumptions which privilege unmediated thought.

This approach is inherently flawed. It defines cognitive functioning (incorrectly) as a raw internal process, untouched by technology in its purest state. This approach pits the brain against the device, as though tools are foreign intruders upon the natural body. This is simply not the case. Humans’ defining characteristic is our need for tools. Our brains literally developed with and through technology. This continues to be true. Brains are highly plastic, and new technologies change how cognition works. Our thought processes are, and always have been, mediated.

With a changing technological landscape, this means that cognitive tests quickly become outdated and fail to make sense as ‘objective’ measures of skill and ability. In other words, definitions of high functioning cognition are always in flux. Therefore, in reading cognitive research that makes evaluative claims, we should critically examine which forms of cognition the study privileges. In turn, authors should make their assumptions clear. In this case, we can discern that the authors define high cognitive functioning as digitally unmediated.

Certainly, it is useful to understand how cognition is changing, and traditional measures are good baselines to track that change. But change does not indicate laziness, stupidity, or, as the authors claim, no thinking at all. It indicates, instead, the need for new measures.

A more interesting question, for me, is: how are intelligence and thoughtfulness changing? Rather than understand the brain and the device as separate sources of thought, could we instead render them connected nodes within a thought ecology? Such a rendering, first, recognizes the increasing presence of digital devices in everyday life and, second, explicitly accounts for the relationship between structural inequalities and definitions of intelligence.

Definitions of intelligence have a long history of privileging the skills and logics of dominant groups. If cognitive function is tied to digital devices, then digital inequality—rather than human deficiency—becomes a key variable in understanding variations. At some level, I think people already understand this. After all, is it not the underlying driver of digital literacy movements?

This was not the study I wanted it to be. It does, however, tell us something interesting. People are changing. Our thought processes are changing. This is a moment of cognitive flux, and mobile digital technologies are key players in the future of thinking."
technology  2015  humans  research  cognition  cognitivescience  tools  jannydavis  change  flux  cognitiveflux  mobile  phones  smartphones  intuitiveness  thinking  howwethink  brain  skill  ability  laziness  stupidity  measurement  behavior  humancognition 
march 2015 by robertogreco
S’More Inequality
"Cognitive psychology—“the mind’s new science” of the last several decades—has directed both popular and scholarly attention to the cultivation of individual willpower as a tool of personal maximization. The Stanford marshmallow experiment on delayed gratification among preschoolers serves as a widely recognized touchstone for this revivification of interest in the will. But the marshmallow test is more than a handy synecdoche for the cold new logic behind shrinking public services and the burgeoning apparatus of surveillance and accountability. It also shows how the sciences of the soul can be deployed to create the person they purport to describe, by willing political transformation. The individual agent of willpower—“executive function,” in the argot of the cognitive sciences—becomes both the means and the end of school privatization. This body of work offers a way to read savage social inequality and a bifurcated labor market as individual mental functions whose ideal type is corporate decision making; it also aids the transition to corporate control of education itself. Following this trope from the realm of cultural logic to public policy allows us to watch neoliberalism operating simultaneously as ideology and agenda and to recognize the consistent denial of reproductive labor that gives the lie to its pretensions."

[via: https://twitter.com/yayitsrob/status/517691187516280832
https://twitter.com/yayitsrob/status/517691603280879616 ]
executivefunction  via:robinsonmeyer  cognitivescience  psychology  cognitivepsychology  willpower  self-control  marshmallowtest  delayedgratification  control  neoliberalism  bathanymoreton  2014  children  schools  schooling  edreform  policy  education  preschool  privatization  ideology  popscience  capitalism  latecapitalism  labor  behavior 
october 2014 by robertogreco
Fundamentals of learning: the exploration-exploitation trade-off
"This shows that people who are most inconsistent when they start to learn perform best towards the end of learning. Usually inconsistency is a bad sign, so it is somewhat surprising that it predicts better performance later on. The obvious interpretation is in terms of the exploration-exploitation trade-off. The inconsistent people are trying out more things at the beginning, learning more about what works and what doesn’t. This provides them with the foundation to perform well later on. This pattern holds when comparing across individuals, but it also holds for comparing across trials (so for the same individual, their later performance is better for targets on which they are most inconsistent on early in learning)."
learning  tomstafford  psychology  cognitivescience  inconsistency  2012  howwelearn 
january 2014 by robertogreco
