

Ethan Zuckerman: Solving Other People's Problems With Technology - The Atlantic
"In other words, is it possible to get beyond both a naïve belief that the latest technology will solve social problems—and a reaction that rubbishes any attempt to offer novel technical solutions as inappropriate, insensitive, and misguided? Can we find a synthesis in which technologists look at their work critically and work closely with the people they’re trying to help in order to build sociotechnical systems that address hard problems?

Obviously, I think this is possible — if really, really hard — or I wouldn’t be teaching at an engineering school. But before considering how we overcome a naïve faith in technology, let’s examine Snow’s suggestion. It’s a textbook example of a solution that’s technically sophisticated, simple to understand, and dangerously wrong."

"The problem with the solutionist critique, though, is that it tends to remove technological innovation from the problem-solver’s toolkit. In fact, technological development is often a key component in solving complex social and political problems, and new technologies can sometimes open a previously intractable problem. The rise of inexpensive solar panels may be an opportunity to move nations away from a dependency on fossil fuels and begin lowering atmospheric levels of carbon dioxide, much as developments in natural gas extraction and transport technologies have lessened the use of dirtier fuels like coal.

But it’s rare that technology provides a robust solution to a social problem by itself. Successful technological approaches to solving social problems usually require changes in laws and norms, as well as market incentives to make change at scale."

"Among the many wise things my Yale students said during our workshop, one student wondered if he should be participating at all. “I don’t know anything about prisons, I don’t have family in prison. I don’t know if I understand these problems well enough to solve them, and I don’t know if these problems are mine to solve.”

Talking about the workshop with my friend and colleague Chelsea Barabas, she asked the wonderfully deep question, “Is it ever okay to solve another person’s problem?”

On its surface, the question looks easy to answer. We can’t ask infants to solve problems of infant mortality, and by extension, it seems unwise to let kindergarten students design educational policy or demand that the severely disabled design their own assistive technologies.

But the argument is more complicated when you consider it more closely. It’s difficult if not impossible to design a great assistive technology without working closely, iteratively, and cooperatively with the person who will wear or use it. My colleague Hugh Herr designs cutting-edge prostheses for U.S. veterans who’ve lost legs, and the centerpiece of his lab is a treadmill where amputees test his limbs, giving him and his students feedback about what works, what doesn’t, and what needs to change. Without the active collaboration with the people he’s trying to help, he’s unable to make technological advances.

Disability rights activists have demanded “nothing about us without us,” a slogan insisting that policies not be developed without the participation of the people they are intended to benefit.

Design philosophies like participatory design and codesign bring this concept to the world of technology, demanding that technologies designed for a group of people be designed and built, in part, by those people. Codesign challenges many of the assumptions of engineering, requiring people who are used to working in isolation to build broad teams and to understand that those most qualified to offer a technical solution may be least qualified to identify a need or articulate a design problem. This method is hard and frustrating, but it’s also one of the best ways to ensure that you’re solving the right problem, rather than imposing your preferred solution on a situation."

"It is unlikely that anyone is going to invite Shane Snow to redesign a major prison any time soon, so spending more than 3,000 words urging you to reject his solution may be a waste of your time and mine. But the mistakes Snow makes are those that engineers make all the time when they turn their energy and creativity to solving pressing and persistent social problems. Looking closely at how Snow’s solutions fall short offers some hope for building better, fairer, and saner solutions.

The challenge, unfortunately, is not in offering a critique of how solutions go wrong. Excellent versions of that critique exist, from Morozov’s war on solutionism to Courtney Martin’s brilliant “The Reductive Seduction of Other People’s Problems.” If it’s easy to design inappropriate solutions to problems you don’t fully understand, it’s not much harder to criticize the inadequacy of those solutions.

What’s hard is synthesis — learning to use technology as part of well-designed sociotechnical solutions. These solutions sometimes require profound advances in technology. But they virtually always require people to build complex, multifunctional teams that work with and learn from the people the technology is supposed to benefit.

Three students at the MIT Media Lab taught a course last semester called “Unpacking Impact: Reflecting as We Make.” They point out that the Media Lab prides itself on teaching students how to make anything, and how to turn what you make into a business, but rarely teaches reflection about what we make and what it might mean for society as a whole. My experience with teaching this reflective process to engineers is that it’s both important and potentially paralyzing, that once we understand the incompleteness of technology as a path for solving problems and the ways technological solutions relate to social, market, and legal forces, it can be hard to build anything at all.

I’m going to teach a new course this fall, tentatively titled “Technology and Social Change.” It’s going to include an examination of the four levers of social change Larry Lessig suggests in Code, and which I’ve been exploring as possible paths to civic engagement. The course will include deep methodological dives into codesign, and will examine using anthropology as a tool for understanding user needs. It will look at unintended consequences, cases where technology’s best intentions fail, and cases where careful exploration and preparation led to technosocial systems that make users and communities more powerful than they were before.

I’m “calling my shot” here for two reasons. One, by announcing it publicly, I’m less likely to back out of it, and given how hard these problems are, backing out is a real possibility. And two, if you’ve read this far in this post, you’ve likely thought about this issue and have suggestions for what we should read and what exercises we should try in the course of the class — I hope you might be kind enough to share those with me.

In the end, I’m grateful for Shane Snow’s surreal, Black Mirror vision of the future prison both because it’s a helpful jumping-off point for understanding how hard it is to make change well by using technology, and because the U.S. prison system is a broken and dysfunctional system in need of change. But we need to find ways to disrupt better, to challenge knowledgeably, and to bring the people we hope to benefit into the process. If you can, please help me figure out how we teach these ideas to the smart, creative people I work with—people who want to change the world, and are afraid of breaking it in the process."
technology  technosolutionism  solutionism  designimperialism  humanitariandesign  problemsolving  2016  ethanzuckerman  design  blackmirror  shanesnow  prisons  socialchange  lawrencelessig  anthropology  medialab  courtneymartin  nutraloaf  soylent  codesign  evgenymorozov  olpc  wikipedia  bias  racism  empathy  suziecagle  mitmedialab  mit  systems  systemsthinking  oculusrift  secondlife  vr  virtualreality  solitaryconfinement  incarceration  change  changemaking  ethnography  chelseabarabas  participatory  participatorydesign 
july 2016 by robertogreco
Solving All the Wrong Problems - The New York Times
"We are overloaded daily with new discoveries, patents and inventions all promising a better life, but that better life has not been forthcoming for most. In fact, the bulk of the above list targets a very specific (and tiny!) slice of the population. As one colleague in tech explained it to me recently, for most people working on such projects, the goal is basically to provide for themselves everything that their mothers no longer do.

He was joking — sort of — but his comment made me think hard about who is served by this stuff. I’m concerned that such a focus on comfort and instant gratification will reduce us all to those characters in “Wall-E,” bound to their recliners, Big Gulps in hand, interacting with the world exclusively through their remotes.

Too many well-funded entrepreneurial efforts turn out to promise more than they can deliver (i.e., Theranos’ finger-prick blood test) or read as parody (but, sadly, are not — such as the $99 “vessel” that monitors your water intake and tells you when you should drink more water).

When everything is characterized as “world-changing,” is anything?

Clay Tarver, a writer and producer for the painfully on-point HBO comedy “Silicon Valley,” said in a recent New Yorker article: “I’ve been told that, at some of the big companies, the P.R. departments have ordered their employees to stop saying ‘We’re making the world a better place,’ specifically because we have made fun of that phrase so mercilessly. So I guess, at the very least, we’re making the world a better place by making these people stop saying they’re making the world a better place.”

O.K., that’s a start. But the impulse to conflate toothbrush delivery with Nobel Prize-worthy good works is not just a bit cultish, it’s currently a wildfire burning through the so-called innovation sector. Products and services are designed to “disrupt” market sectors (a.k.a. bringing to market things no one really needs) more than to solve actual problems, especially those problems experienced by what the writer C. Z. Nnaemeka has described as “the unexotic underclass” — single mothers, the white rural poor, veterans, out-of-work Americans over 50 — who, she explains, have the “misfortune of being insufficiently interesting.”

If the most fundamental definition of design is to solve problems, why are so many people devoting so much energy to solving problems that don’t really exist? How can we get more people to look beyond their own lived experience?

In “Design: The Invention of Desire,” a thoughtful and necessary new book by the designer and theorist Jessica Helfand, the author brings to light an amazing kernel: “hack,” a term so beloved in Silicon Valley that it’s painted on the courtyard of the Facebook campus and is visible from planes flying overhead, is also prison slang for “horse’s ass carrying keys.”

To “hack” is to cut, to gash, to break. It proceeds from the belief that nothing is worth saving, that everything needs fixing. But is that really the case? Are we fixing the right things? Are we breaking the wrong ones? Is it necessary to start from scratch every time?

Empathy, humility, compassion, conscience: These are the key ingredients missing in the pursuit of innovation, Ms. Helfand argues, and in her book she explores design, and by extension innovation, as an intrinsically human discipline — albeit one that seems to have lost its way. Ms. Helfand argues that innovation is now predicated less on creating and more on the undoing of the work of others.

“In this humility-poor environment, the idea of disruption appeals as a kind of subversive provocation,” she writes. “Too many designers think they are innovating when they are merely breaking and entering.”

In this way, innovation is very much mirroring the larger public discourse: a distrust of institutions combined with unabashed confidence in one’s own judgment shifts solutions away from fixing, repairing or improving and shoves them toward destruction for its own sake. (Sound like a certain presidential candidate? Or Brexit?)

Perhaps the main reason these frivolous products and services frustrate me is because of their creators’ insistence that changing lives for the better is their reason for being. To wit, the venture capitalist Marc Andreessen, who has invested in companies like Airbnb and Twitter but also in services such as LikeALittle (which started out as a flirting tool among college students) and Soylent (a sort of SlimFast concoction for tech geeks), tweeted last week: “The perpetually missing headline: ‘Capitalism worked okay again today and most people in the world got a little better off.’ ”

Meanwhile, in San Francisco, where such companies are based, sea level rise is ominous, the income gap between rich and poor has been growing faster than in any other city in the nation, a higher percentage of people send their kids to private school than in almost any other city, and a minimum salary of $254,000 is required to afford an average-priced home. Who exactly is better off?

Ms. Helfand calls for a deeper embrace of personal vigilance: “Design may provide the map,” she writes, “but the moral compass that guides our personal choices resides permanently within us all.”

Can we reset that moral compass? Maybe we can start by not being a bunch of hacks."
2016  allisonarieff  siliconvalley  problemsolving  disruption  claytarver  sanfrancisco  capitalism  jessicahelfand  books  invention  narcissism  theranos  comfort  instantgratification  hacking  innovation  publicdiscourse  publicgood  inequality  marcandreessen  morality  moralcompass  soylent  venturecapitalism  brexit  us  priorities 
july 2016 by robertogreco