
danhon

No one’s coming. It’s up to us. – Dan Hon – Medium
"Getting from here to there

This is all very well and good. But what can we do? And more precisely, what “we”? There’s increasing acceptance of the reality that the world we live in is intersectional and we all play different and simultaneous roles in our lives. The society of “we” includes technologists who have a chance of affecting the products and services, it includes customers and users, it includes residents and citizens.

I’ve made this case above, but I feel it’s important enough to make again: at a high level, I believe that we need to:

1. Clearly decide what kind of society we want; and then

2. Design and deliver the technologies that forever get us closer to achieving that desired society.

This work is hard and, arguably, will never be completed. It necessarily involves compromise. Attitudes, beliefs and what’s considered just change over time.

That said, the above are two high level goals, but what can people do right now? What can we do tactically?

What we can do now

I have two questions that I think can be helpful in guiding our present actions, in whatever capacity we might find ourselves.

For all of us: What would it look like, and how might our societies be different, if technology were better aligned to society’s interests?

At the most general level, we are all members of a society, embedded in existing governing structures. It certainly feels like, in the recent past, those governing structures have come under increasing strain, and part of the blame is being laid at the feet of technology.

One of the most important things we can do collectively is to produce clarity and prioritization where we can. Only by being clearer and more intentional about the kind of society we want and accepting what that means, can our societies and their institutions provide guidance and leadership to technology.

These are questions that cannot and should not be left to technologists alone. Advances in technology mean that encryption is a societal issue. Content moderation and censorship are societal issues. Ultimately, it should be for governments (of the people, by the people) to set expectations and standards at the societal level, not organizations accountable only to a board of directors and shareholders.

But to do this, our governing institutions will need to evolve and improve. It is easier, and faster, for platforms to react to changing social mores. For example, platforms are responding to society’s reaction to “AI-generated fake porn” faster than governing and enforcing institutions can.

Prioritizations may necessarily involve compromise, too: the world is not so simple, and we are not so lucky, that it can be easily and always divided into A or B, or good or not-good.

Some of my perspective in this area is reflective of the schism American politics is currently experiencing. In a very real way, America, my adoptive country of residence, is having to grapple with revisiting the idea of what America is for. The same is happening in my country of birth with the decision to leave the European Union.

These are fundamental issues. Technologists, as members of society, have a point of view on them. But in the way that post-enlightenment governing institutions were set up to protect against asymmetric distribution of power, technology leaders must recognize that their platforms are now an undeniable, powerful influence on society.

As a society, we must do the work to have a point of view. What does responsible technology look like?

For technologists: How can we be humane and advance the goals of our society?

As technologists, we can be excited about re-inventing approaches from first principles. We must resist that impulse here, because there are things that we can do now, that we can learn now, from other professions, industries and areas to apply to our own. For example:

* We are better and stronger when we are together than when we are apart. If you’re a technologist, consider this question: what are the pros and cons of unionizing? As the product of a linked network, consider the question: what is gained and who gains from preventing humans from linking up in this way?

* Just as we create design patterns that represent best practices, there are also patterns that are undesirable from our society’s point of view, known as dark patterns. We should familiarise ourselves with them, and each work to understand why and when they’re used and why their usage is contrary to the ideals of our society.

* We can do a better job of advocating for and doing research to better understand the problems we seek to solve, the context in which those problems exist and the impact of those problems. Only through disciplines like research can we discover in the design phase — instead of in production, when our work can affect millions — negative externalities or unintended consequences that we genuinely and unintentionally may have missed.

* We must compassionately accept the reality that our work has real effects, good and bad. We can wish that bad outcomes don’t happen, but bad outcomes will always happen because life is unpredictable. The question is what we do when bad things happen, and whether and how we take responsibility for those results. For example, Twitter’s leadership must make clear what behaviour it considers acceptable, and do the work to be clear and consistent without dodging the issue.

* In America especially, technologists must face the issue of free speech head-on without avoiding its necessary implications. I suggest that one of the problems culturally American technology companies (i.e., companies that seek to emulate American culture) face can be explained in software terms. To use agile user story terminology, the problem may be due to focusing on a specific requirement (“free speech”) rather than the full user story (“As a user, I need freedom of speech, so that I can pursue life, liberty and happiness”). Free speech is a means to an end, not an end, and accepting that free speech is a means involves the hard work of considering and taking a clear, understandable position as to what ends.

* We have been warned. Academics — in particular, sociologists, philosophers, historians, psychologists and anthropologists — have been warning of issues such as large-scale societal effects for years. Those warnings have, bluntly, been ignored. In the worst cases, those same academics have been accused of not helping to solve the problem. Moving on from the past, is there not something that we technologists can learn? My intuition is that after the 2016 American election, middle-class technologists are now afraid. We’re all in this together. Academics are reaching out, have been reaching out. We have nothing to lose but our own shame.

* Repeat to ourselves: some problems don’t have fully technological solutions. Some problems can’t just be solved by changing infrastructure. Who else might help with a problem? What other approaches might be needed as well?

There’s no one coming. It’s up to us.

My final point is this: no one will tell us or give us permission to do these things. There is no higher organizing power working to put systemic changes in place. There is no top-down way of nudging the arc of technology toward one better aligned with humanity.

It starts with all of us.

Afterword

I’ve been working on the bigger themes behind this talk since …, and an invitation to 2017’s Foo Camp was a good opportunity to try to clarify and improve my thinking so that it could fit into a five minute lightning talk. It also helped that Foo Camp has the kind of (small, hand-picked — again, for good and ill) influential audience who would be a good litmus test for the quality of my argument, and would be instrumental in taking on and spreading the ideas.

In the end, though, I nearly didn’t do this talk at all.

Around 6:15pm on Saturday night, just over an hour before the lightning talks were due to start, after the unconference’s sessions had finished and just before dinner, I burst into tears talking to a friend.

While I won’t break the societal convention of confidentiality that helps an event like Foo Camp be productive, I’ll share this: the world felt too broken.

Specifically, the world felt broken like this: I had the benefit of growing up as a middle-class educated individual (albeit, not white) who believed he could trust that institutions were a) capable and b) would do the right thing. I now live in a country where a) the capability of those institutions has consistently eroded over time, and b) those institutions are now being systematically dismantled, to add insult to injury.

In other words, I was left with the feeling that there’s nothing left but ourselves.

Do you want the poisonous lead removed from your water supply? Your best bet is to try to do it yourself.

Do you want a better school for your children? Your best bet is to start it.

Do you want a policing policy that genuinely rehabilitates rather than punishes? Your best bet is to…

And it’s just. Too. Much.

Over the course of the next few days, I managed to turn my outlook around.

The answer, of course, is that it is too much for one person.

But it isn’t too much for all of us."
danhon  technology  2018  2017  johnperrybarlow  ethics  society  calltoaction  politics  policy  purpose  economics  inequality  internet  web  online  computers  computing  future  design  debchachra  ingridburrington  fredscharmen  maciejceglowski  timcarmody  rachelcoldicutt  stacy-marieishmael  sarahjeong  alexismadrigal  ericmeyer  timmaughan  mimionuoha  jayowens  jayspringett  stacktivism  georginavoss  damienwilliams  rickwebb  sarawachter-boettcher  jamebridle  adamgreenfield  foocamp  timoreilly  kaitlyntiffany  fredturner  tomcarden  blainecook  warrenellis  danhill  cydharrell  jenpahljka  robinray  noraryan  mattwebb  mattjones  danachisnell  heathercamp  farrahbostic  negativeexternalities  collectivism  zeyneptufekci  maciejcegłowski 
february 2018 by robertogreco
s5e06: Filtered for wide spread, low accuracy
Which comes back to the idea that sufficiently non-transparent advanced algorithms are indistinguishable from malice.
danhon  státusz  facebook  internet 
january 2018 by kelt
s4e10: Stages of Transformation
Dan Hon: "It doesn't matter *how* we got here, all that matters is that we're *here*. For those people who were involved in getting to our current location - maybe they had written policy, maybe they had been involved with the design or management of certain processes - this isn't a judgment on them or their work. We just have to accept that we're here now, without blame or judgment."
*  transformation  government  danhon  yes 
april 2017 by gilest
Dan Hon s3e30: The Generative Surplus
Dan on neural networks and generative stuff. Lots of good thinks in here.
danhon  neural_networks  creativity  generative 
october 2016 by peteashton
s3e27: It's Difficult
On tech companies saying some things are difficult when they do difficult things all the time and pride themselves on it.

"And lack of trying looks like lack of caring"
facebook  danhon  silicon-valley-groupthink 
september 2016 by mr_stru
The Beginning of the End of Big Government IT | MetaFilter
Good discussion of this stuff, with calm defences by danhon and migurski in the face of (understandable) cynicism that sees the private sector as either big, dumb corporates or small, dumb Silicon Valley techbros. (via @neb)
via:neb  danhon  michalmigurski  holgate  government  cfa  gds  metafilter 
december 2015 by philgyford
Metafoundry 15: Scribbled Leatherjackets
[Update 23 Jan 2015: a new version of this is now at The Atlantic: http://www.theatlantic.com/technology/archive/2015/01/why-i-am-not-a-maker/384767/ ]

"HOMO FABBER: Every once in a while, I am asked what I ‘make’. When I attended the Brighton Maker Faire in September, a box for the answer was under my name on my ID badge. It was part of the XOXO Festival application for 2013; when I saw the question, I closed the browser tab, and only applied later (and eventually attended) because of the enthusiastic encouragement of friends. I’m always uncomfortable identifying myself as a maker. I'm uncomfortable with any culture that encourages you to take on an entire identity, rather than to express a facet of your own identity (‘maker’, rather than ‘someone who makes things’). But I have much deeper concerns.

Walk through a museum. Look around a city. Almost all the artifacts that we value as a society were made by or at the order of men. But behind every one is an invisible infrastructure of labour—primarily caregiving, in its various aspects—that is mostly performed by women. As a teenager, I read Ayn Rand on how any work that needed to be done day after day was meaningless, and that only creating new things was a worthwhile endeavour. My response to this was to stop making my bed every day, to the distress of my mother. (While I admit the possibility of a misinterpretation, as I haven’t read Rand’s writing since I was so young my mother oversaw my housekeeping, I have no plans to revisit it anytime soon.) The cultural primacy of making, especially in tech culture—that it is intrinsically superior to not-making, to repair, analysis, and especially caregiving—is informed by the gendered history of who made things, and in particular, who made things that were shared with the world, not merely for hearth and home.

Making is not a rebel movement, scrappy individuals going up against the system. While the shift might be from the corporate to the individual (supported, mind, by a different set of companies selling things), and from what Ursula Franklin describes as prescriptive technologies to ones that are more holistic, it mostly reinscribes familiar values, in slightly different form: that artifacts are important, and people are not.

In light of this history, it’s unsurprising that coding has been folded into ‘making’. Consider the instant gratification of seeing ‘hello, world’ on the screen; it’s nearly the easiest possible way to ‘make’ things, and certainly one where failure has a very low cost. Code is 'making' because we've figured out how to package it up into discrete units and sell it, and because it is widely perceived to be done by men. But you can also think about coding as eliciting a specific, desired set of behaviours from computing devices. It’s Searle’s 'Chinese room' take on the deeper, richer, messier, less reproducible, immeasurably more difficult version of this that we do with people—change their cognition, abilities, and behaviours. We call the latter 'education', and it’s mostly done by underpaid, undervalued women.

When new products are made, we hear about exciting technological innovation, which is widely seen as worth paying (more) for. In contrast, policy and public discourse around caregiving—besides education, healthcare comes immediately to mind—are rarely about paying more to do better, and are instead mostly about figuring out ways to lower the cost. Consider the economics term ‘Baumol's cost disease’: it suggests that it is somehow pathological that the time and energy taken by a string quartet to prepare for a performance—and therefore the cost—has not fallen in the same way as the cost of goods, as if somehow people and what they do should get less valuable with time (to be fair, given the trajectory of wages in the US over the last few years in real terms, that seems to be exactly what is happening).

It's not, of course, that there's anything wrong with making (although it’s not all that clear that the world needs more stuff). It's that the alternative to making is usually not doing nothing—it's nearly always doing things for and with other people, from the barista to the Facebook community moderator to the social worker to the surgeon. Describing oneself as a maker—regardless of what one actually or mostly does—is a way of accruing to oneself the gendered, capitalist benefits of being a person who makes products.

I am not a maker. In a framing and value system that is about creating artifacts, specifically ones you can sell, I am a less valuable human. As an educator, the work I do is, at least superficially, the same year after year. That's because all of the actual change is at the interface between me, my students, and the learning experiences I design for them. People have happily informed me that I am a maker because I use phrases like 'design learning experiences', which is mistaking what I do for what I’m actually trying to elicit and support. The appropriate metaphor for education, as Ursula Franklin has pointed out, is a garden, not the production line.

My graduate work in materials engineering was all about analysing and characterizing biological tissues, mostly looking at disease states and interventions and how they altered the mechanical properties of bone, including addressing a public health question for my doctoral research. My current education research is mostly about understanding the experiences of undergraduate engineering students so we can do a better job of helping them learn. I think of my brilliant and skilled colleagues in the social sciences, like Nancy Baym at Microsoft Research, who does interview after interview followed by months of qualitative analysis to understand groups of people better. None of these activities are about ‘making’.

I educate. I analyse. I characterize. I critique. Almost everything I do these days is about communicating with others. To characterize what I do as 'making' is either to mistake the methods—the editorials, the workshops, the courses, even the materials science zine I made—for the purpose. Or, worse, to describe what I do as 'making' other people, diminishing their own agency and role in sensemaking, as if their learning is something I impose on them.

In a recent newsletter, Dan Hon wrote, "But even when there's this shift to Makers (and with all due deference to Getting Excited and Making Things), even when "making things" includes intangibles now like shipped-code, there's still this stigma that feels like it attaches to those-who-don't-make. Well, bullshit. I make stuff." I understand this response, but I'm not going to call myself a maker. Instead, I call bullshit on the stigma, and the culture and values behind it that reward making above everything else. Instead of calling myself a maker, I'm proud to stand with the caregivers, the educators, those that analyse and characterize and critique, everyone who fixes things and all the other people who do valuable work with and for others, that doesn't result in something you can put in a box and sell."

[My response on Twitter:

Storified version: https://storify.com/rogre/on-the-invisible-infrastructure-of-often-intangibl

and as a backup to that (but that doesn't fit the container of what Pinboard will show you)…

“Great way to start my day: @debcha on invisible infrastructure of (often intangible) labor, *not* making, & teaching.”
https://twitter.com/rogre/status/536601349756956672

“[pause to let you read and to give you a chance to sign up for @debcha’s Metafoundry newsletter http://tinyletter.com/metafoundry ]”
https://twitter.com/rogre/status/536601733791633408

““behind every…[maker] is an invisible infrastructure of labour—primarily caregiving, in…various aspects—…mostly performed by women” —@debcha”
https://twitter.com/rogre/status/536602125107605505

“See also Maciej Cegłowski on Thoreau. https://static.pinboard.in/xoxo_talk_thoreau.htm https://www.youtube.com/watch?v=eky5uKILXtM”
https://twitter.com/rogre/status/536602602431995904

““Thoreau had all these people, mostly women, who silently enabled the life he thought he was heroically living for himself.” —M. Cegłowski”
https://twitter.com/rogre/status/536602794786963458

“And this reminder from @anotherny [Frank Chimero] that we should acknowledge and provide that support: “Make donuts too.”” http://frankchimero.com/blog/the-inferno-of-independence/
https://twitter.com/rogre/status/536603172244967424

“small collection of readings (best bottom up) on emotional labor, almost always underpaid, mostly performed by women https://pinboard.in/u:robertogreco/t:emotionallabor”
https://twitter.com/rogre/status/536603895087128576

““The appropriate metaphor for education, as Ursula Franklin has pointed out, is a garden, not the production line.” —@debcha”
https://twitter.com/rogre/status/536604452065513472

““to describe what I do as 'making' other people, diminish[es] their own agency & role in sensemaking” —@debcha”
https://twitter.com/rogre/status/536604828705648640

“That @debcha line gets at why Taylor Mali’s ever-popular “What Teachers Make” has never sat well with me. https://www.youtube.com/watch?v=RxsOVK4syxU”
https://twitter.com/rogre/status/536605134185177088

““I call bullshit on the stigma, and the culture and values behind it that reward making above everything else.” —@debcha”
https://twitter.com/rogre/status/536605502805798912

“This all brings me back to Margaret Edson’s 2008 Commencement Address at Smith College. http://www.smith.edu/events/commencement_speech2008.php + https://vimeo.com/1085942”
https://twitter.com/rogre/status/536606045200588803

“Edson’s talk is about classroom teaching. I am forever grateful to @CaseyG for pointing me there (two years ago on Tuesday).”
https://twitter.com/rogre/status/536606488144248833

““Bringing nothing, producing nothing, expecting nothing, withholding … [more]
debchachra  2014  making  makers  makermovement  teaching  howweteach  emotionallabor  labor  danhon  scubadiving  support  ursulafranklin  coding  behavior  gender  cv  margaretedson  caseygollan  care  caretaking  smithcollege  sensemaking  agency  learning  howwelearn  notmaking  unproduct  frankchimero  maciejceglowski  metafoundry  independence  interdependence  canon  teachers  stigma  gratitude  thorough  infrastructure  individualism  invisibility  critique  criticism  fixing  mending  analysis  service  intangibles  caregiving  homemaking  maciejcegłowski 
november 2014 by robertogreco
Episode Nineteen - Not Trying Is A Signal, Peak Game, EasyHard, SnapChat
“I wrote yesterday, in my bit about Fax Your GP (which, maybe not on its own, but certainly appears to have contributed to this recent decision to push back rollout of the care.data database a little[1]), that with ever increasing emphasis on customer service and user experience, the delta between what's good and what's intolerable inexorably decreases. That's to say: once you've seen something with a good user experience, it's hard to justify other experiences in the same category having a shitty user experience. 

Sometimes, this makes sense: you can pay a little more to fly Virgin America and get a super-good user experience, compared to when you fly United and you're wondering why they're not paying you instead. That can kind of make sense when you're flying because you don't really have that many options. 

But in other cases, where the goods or services are highly substitutable, the distinction between one option which has a pleasurable service (especially one rendered digitally) and one that doesn't is just going to lead to instances of nerd rage. A slightly more mellow than nerd rage case in point: Russell Davies (again) on Sony's new product[2], and the fact that while they're entirely capable of a singularly impressive engineering feat, everything apart from shipping the product and making it falls down from a new customer's point of view. And that's despite the fun stuff[3] you can do with it.

And all of this in spite of the fact that you just know there's someone in management, somewhere, saying that they know a fifteen year old kid (hopefully, equally likely to be a girl than a boy) who could 'knock up a website' that would do that in a few days.

Nick Sweeney helped me articulate this, in the context of government, a little better in a series of emails: 

* If government can't produce a good, digital user experience, when other entities can, then government looks bad (see: Healthcare.gov)

* If government *can* produce a good, digital user experience (see: the work of GDS in the UK), and then for some reason a good digital user experience isn't produced (see: care.data opt-out), then suspicion as to failure of implementation includes policy reasons (i.e. malice) as well as incompetence. 

So: companies and governments. You're on notice. It isn't hard to do this kind of thing. It isn't easy, either. It's just simply doable. The fact that you're not doing it is now a valid signal that you're not doing it for a reason.”
everyoneiknowisdoingawesomeshit  friends  writing  essay  effort  gds  govuk  danhon  government  quality  work  2015fellowshipreader 
july 2014 by migurski
Episode One Hundred: Taking Stock; And The New
"It took a while, but one of the early themes that emerged was that of the Californian Ideology. That phrase has become a sort of short-hand for me to take a critical look at what's coming out of the west coast of the USA (and what that west coast is inspiring in the rest of the world). It's a conflicting experience for me, because I genuinely believe in the power of technology to enhance the human experience and to pull everyone, not just some people, up to a humane standard of living. But there's a particular heady mix that goes into the Ideology: one of libertarianism, of the power of the algorithm and an almost-blind belief in a purity of an algorithm, of the maths that goes into it, of the fact that it's executed in and on a machine substrate that renders the algorithm untouchable. But the algorithms we design reflect our intentions, our beliefs and our predispositions. We're learning so much about how our cognitive architecture functions - how our brains work, the hacks that evolution "installed" in us that are essentially zero-day back-door unpatched vulnerabilities - that I feel like someone does need to be critical about all the ways software is going to eat the world. Because software is undeniably eating the world, and it doesn't need to eat it in a particular way. It can disrupt and obsolete the world, and most certainly will, but one of the questions we should be asking is: to what end? 

This isn't to say that we should ask these questions to impede progress just as a matter of course: just that if we're doing these things anyway, we should also (because we *do* have the ability to) feel able to examine the long term consequences and ask: is this what we want?"
danhon  2014  californianideology  howwethink  brain  algorithms  libertarianism  progress  technology  technosolutionism  ideology  belief  intention 
june 2014 by robertogreco
Episode Eighty Six: Solid 2 of 2; Requests - GOV.UK 2018; Next
"Today, reading LinkedIn recommendations as they came in felt like reading eulogies. Apart from me not quite being dead. Not yet, at least. Or, I was dead and I hadn't realised it yet. It doesn't matter, anyway: all the recommendations from people I've enjoyed working with over the past three years just feel, unfortunately, like double-edged knives - ultimately good but only really readable with a twist.

Right now is a bad time, one of those terrible times when it doesn't even really matter that one of my good friends has pulled me aside, insisted that I have something to eat and sat patiently with me in a pizza joint while I stare off into space and mumble. It doesn't matter that he's great and doing these things for me and telling me that this too will pass: I am hearing all of the words that he's saying, the sounds he's making that make all the little bits of air vibrate and hit my ear and undergo some sort of magic transformation as they get understood in my brain. But they don't connect. Understanding is different from feeling. And right now, I'm feeling useless and broken and disconnected and above all, sad. But I can't feel those things. I have meetings to go to. Hustle to hust. Against what felt at times like the relentless optimism of an O'Reilly conference I had to finally hide away for a while, behind a Diet Coke and a slice of cheesecake, because dealing with that much social interaction was just far too draining.

And so I'm hiding again tonight, instead of out with friends, because it's just too hard to smile and pretend that everything's OK when it's demonstrably not."



"Over the past couple of days at Solid it's become almost painfully apparent that the Valley, in broad terms, is suffering from a chronic lack of empathy in terms of how it both sees and deals with the rest of the world, not just in communicating what it's doing and what it's excited about, but also in its acts. Sometimes these are genuine gaffes - mistakes that do not betray a deeper level of consideration, thinking or strategy. Other times, they *are* genuine, and they betray at the very least a naivety as to consequence or second-order impact (and I'm prepared to accept that without at least a certain level of naivety or lack of consideration for impact we'd find it pretty hard as a species to ever take advantage of any technological advance), but let me instead perhaps point to a potential parallel. 

There are a bunch of people worried about what might happen if, or when, we finally get around to a sort of singularity event and we have to deal with a genuine superhuman artificial intelligence that can think (and act) rings around us, never mind improving its ability at a rate greater than O(n). 

One of the reasons to be afraid of such a strong AI was explained by Eliezer Yudkowsky:

"The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else."

And here's how the rest of the world, I think, can unfairly perceive Silicon Valley: Silicon Valley doesn't care about humans, really. Silicon Valley loves solving problems. It doesn't hate you and it doesn't love you, but you do things that it can use for something else. Right now, those things include things-that-humans-are-good-at, like content generation and pointing at things. Right now, those things include things like getting together and making things. But solving problems is more fun than looking after people, and sometimes solving problems can be rationalised away as looking after people because hey, now that $20bn worth of manufacturing involved in making planes has gone away, people can go do stuff that they want to do, instead of having to make planes!

Would that it were that easy.

So anyway. I'm thinking about the Internet of Things and how no one's done a good job of branding it or explaining it or communicating it to Everyone Else. Because that needs doing.

--

As ever, thanks for the notes. Keep them coming in. If you haven't said hi already, please do and let me know where you heard about my newsletter. And if you like the newsletter, please consider telling some friends about it."
danhon  2014  siliconvalley  ai  empathy  problemsolving  society  californianideology  unemplyment  capitalism  depression  elizeryudkowsky  humans  singularity 
may 2014 by robertogreco
