robertogreco : danhon   16

No one’s coming. It’s up to us. – Dan Hon – Medium
"Getting from here to there

This is all very well and good. But what can we do? And more precisely, what “we”? There’s increasing acceptance of the reality that the world we live in is intersectional and we all play different and simultaneous roles in our lives. The society of “we” includes technologists who have a chance of affecting the products and services, it includes customers and users, it includes residents and citizens.

I’ve made this case above, but I feel it’s important enough to make again: at a high level, I believe that we need to:

1. Clearly decide what kind of society we want; and then

2. Design and deliver the technologies that forever get us closer to achieving that desired society.

This work is hard and, arguably, will never be completed. It necessarily involves compromise. Attitudes, beliefs, and what’s considered just all change over time.

That said, the above are two high level goals, but what can people do right now? What can we do tactically?

What we can do now

I have two questions that I think can be helpful in guiding our present actions, in whatever capacity we might find ourselves.

For all of us: What would it look like, and how might our societies be different, if technology were better aligned to society’s interests?

At the most general level, we are all members of a society, embedded in existing governing structures. It certainly feels as if, in the recent past, those governing structures have come under increasing strain, and part of the blame is being laid at the feet of technology.

One of the most important things we can do collectively is to produce clarity and prioritization where we can. Only by being clearer and more intentional about the kind of society we want and accepting what that means, can our societies and their institutions provide guidance and leadership to technology.

These are questions that cannot and should not be left to technologists alone. Advances in technology mean that encryption is a societal issue. Content moderation and censorship are societal issues. Ultimately, it should be for governments (of the people, by the people) to set expectations and standards at the societal level, not organizations accountable only to a board of directors and shareholders.

But to do this, our governing institutions will need to evolve and improve. It is easier, and faster, for platforms to react to changing social mores. For example, platforms are responding to society’s reaction to “AI-generated fake porn” faster than governing and enforcement institutions are.

Prioritizations may necessarily involve compromise, too: the world is not so simple, and we are not so lucky, that it can be easily and always divided into A or B, or good or not-good.

Some of my perspective in this area is reflective of the schism American politics is currently experiencing. In a very real way, America, my adoptive country of residence, is having to grapple with revisiting the idea of what America is for. The same is happening in my country of birth with the decision to leave the European Union.

These are fundamental issues. Technologists, as members of society, have a point of view on them. But just as post-enlightenment governing institutions were set up to protect against asymmetric distributions of power, technology leaders must recognize that their platforms are now an undeniable, powerful influence on society.

As a society, we must do the work to have a point of view. What does responsible technology look like?

For technologists: How can we be humane and advance the goals of our society?

As technologists, we can be excited about re-inventing approaches from first principles. We must resist that impulse here, because there are things that we can do now, that we can learn now, from other professions, industries and areas to apply to our own. For example:

* We are better and stronger when we are together than when we are apart. If you’re a technologist, consider this question: what are the pros and cons of unionizing? As the product of a linked network, consider the question: what is gained and who gains from preventing humans from linking up in this way?

* Just as we create design patterns that are best practices, there are also those that represent undesired patterns from our society’s point of view known as dark patterns. We should familiarise ourselves with them and each work to understand why and when they’re used and why their usage is contrary to the ideals of our society.

* We can do a better job of advocating for and doing research to better understand the problems we seek to solve, the context in which those problems exist and the impact of those problems. Only through disciplines like research can we discover in the design phase — instead of in production, when our work can affect millions — negative externalities or unintended consequences that we genuinely and unintentionally may have missed.

* We must compassionately accept the reality that our work has real effects, good and bad. We can wish that bad outcomes don’t happen, but bad outcomes will always happen because life is unpredictable. The question is what we do when bad things happen, and whether and how we take responsibility for those results. For example, Twitter’s leadership must make clear what behaviour it considers acceptable, and do the work to be clear and consistent without dodging the issue.

* In America especially, technologists must face the issue of free speech head-on without avoiding its necessary implications. I suggest that one of the problems culturally American technology companies (i.e., companies that seek to emulate American culture) face can be explained in software terms. To use agile user story terminology, the problem may be due to focusing on a specific requirement (“free speech”) rather than the full user story (“As a user, I need freedom of speech, so that I can pursue life, liberty and happiness”). Free speech is a means to an end, not an end, and accepting that free speech is a means involves the hard work of considering and taking a clear, understandable position as to what ends.

* We have been warned. Academics — in particular, sociologists, philosophers, historians, psychologists and anthropologists — have been warning of issues such as large-scale societal effects for years. Those warnings have, bluntly, been ignored. In the worst cases, those same academics have been accused of not helping to solve the problem. Moving on from the past, is there not something that we technologists can learn? My intuition is that, after the 2016 American election, middle-class technologists are now afraid. We’re all in this together. Academics are reaching out, have been reaching out. We have nothing to lose but our own shame.

* Repeat to ourselves: some problems don’t have fully technological solutions. Some problems can’t just be solved by changing infrastructure. Who else might help with a problem? What other approaches might be needed as well?

There’s no one coming. It’s up to us.

My final point is this: no one will tell us or give us permission to do these things. There is no higher organizing power working to put systemic changes in place. There is no top-down way of nudging the arc of technology toward one better aligned with humanity.

It starts with all of us.

Afterword

I’ve been working on the bigger themes behind this talk since …, and an invitation to 2017’s Foo Camp was a good opportunity to try to clarify and improve my thinking so that it could fit into a five minute lightning talk. It also helped that Foo Camp has the kind of (small, hand-picked — again, for good and ill) influential audience who would be a good litmus test for the quality of my argument, and would be instrumental in taking on and spreading the ideas.

In the end, though, I nearly didn’t do this talk at all.

Around 6:15pm on Saturday night, just over an hour before the lightning talks were due to start, after the unconference’s sessions had finished and just before dinner, I burst into tears talking to a friend.

While I won’t break the societal convention of confidentiality that helps an event like Foo Camp be productive, I’ll share this: the world felt too broken.

Specifically, the world felt broken like this: I had the benefit of growing up as a middle-class educated individual (albeit, not white) who believed he could trust that institutions were a) capable and b) would do the right thing. I now live in a country where a) the capability of those institutions has consistently eroded over time, and b) those institutions are now being systematically dismantled, to add insult to injury.

In other words, I was left with the feeling that there’s nothing left but ourselves.

Do you want the poisonous lead removed from your water supply? Your best bet is to try to do it yourself.

Do you want a better school for your children? Your best bet is to start it.

Do you want a policing policy that genuinely rehabilitates rather than punishes? Your best bet is to…

And it’s just. Too. Much.

Over the course of the next few days, I managed to turn my outlook around.

The answer, of course, is that it is too much for one person.

But it isn’t too much for all of us."
danhon  technology  2018  2017  johnperrybarlow  ethics  society  calltoaction  politics  policy  purpose  economics  inequality  internet  web  online  computers  computing  future  design  debchachra  ingridburrington  fredscharmen  maciejceglowski  timcarmody  rachelcoldicutt  stacy-marieishmael  sarahjeong  alexismadrigal  ericmeyer  timmaughan  mimionuoha  jayowens  jayspringett  stacktivism  georginavoss  damienwilliams  rickwebb  sarawachter-boettcher  jamebridle  adamgreenfield  foocamp  timoreilly  kaitlyntiffany  fredturner  tomcarden  blainecook  warrenellis  danhill  cydharrell  jenpahljka  robinray  noraryan  mattwebb  mattjones  danachisnell  heathercamp  farrahbostic  negativeexternalities  collectivism  zeyneptufekci  maciejcegłowski 
february 2018 by robertogreco
Metafoundry 15: Scribbled Leatherjackets
[Update 23 Jan 2015: a new version of this is now at The Atlantic: http://www.theatlantic.com/technology/archive/2015/01/why-i-am-not-a-maker/384767/ ]

"HOMO FABBER: Every once in a while, I am asked what I ‘make’. When I attended the Brighton Maker Faire in September, a box for the answer was under my name on my ID badge. It was part of the XOXO Festival application for 2013; when I saw the question, I closed the browser tab, and only applied later (and eventually attended) because of the enthusiastic encouragement of friends. I’m always uncomfortable identifying myself as a maker. I'm uncomfortable with any culture that encourages you take on an entire identity, rather than to express a facet of your own identity (‘maker’, rather than ‘someone who makes things’). But I have much deeper concerns.

Walk through a museum. Look around a city. Almost all the artifacts that we value as a society were made by or at the order of men. But behind every one is an invisible infrastructure of labour—primarily caregiving, in its various aspects—that is mostly performed by women. As a teenager, I read Ayn Rand on how any work that needed to be done day after day was meaningless, and that only creating new things was a worthwhile endeavour. My response to this was to stop making my bed every day, to the distress of my mother. (While I admit the possibility of a misinterpretation, as I haven’t read Rand’s writing since I was so young my mother oversaw my housekeeping, I have no plans to revisit it anytime soon.) The cultural primacy of making, especially in tech culture—that it is intrinsically superior to not-making, to repair, analysis, and especially caregiving—is informed by the gendered history of who made things, and in particular, who made things that were shared with the world, not merely for hearth and home.

Making is not a rebel movement, scrappy individuals going up against the system. While the shift might be from the corporate to the individual (supported, mind, by a different set of companies selling things), and from what Ursula Franklin describes as prescriptive technologies to ones that are more holistic, it mostly reinscribes familiar values, in slightly different form: that artifacts are important, and people are not.

In light of this history, it’s unsurprising that coding has been folded into ‘making’. Consider the instant gratification of seeing ‘hello, world’ on the screen; it’s nearly the easiest possible way to ‘make’ things, and certainly one where failure has a very low cost. Code is 'making' because we've figured out how to package it up into discrete units and sell it, and because it is widely perceived to be done by men. But you can also think about coding as eliciting a specific, desired set of behaviours from computing devices. It’s Searle’s 'Chinese room' take on the deeper, richer, messier, less reproducible, immeasurably more difficult version of this that we do with people—change their cognition, abilities, and behaviours. We call the latter 'education', and it’s mostly done by underpaid, undervalued women.

When new products are made, we hear about exciting technological innovations, which are widely seen as worth paying (more) for. In contrast, policy and public discourse around caregiving—besides education, healthcare comes immediately to mind—are rarely about paying more to do better, and are instead mostly about figuring out ways to lower the cost. Consider the economics term ‘Baumol's cost disease’: it suggests that it is somehow pathological that the time and energy taken by a string quartet to prepare for a performance--and therefore the cost--has not fallen in the same way as goods, as if somehow people and what they do should get less valuable with time (to be fair, given the trajectory of wages in the US over the last few years in real terms, that seems to be exactly what is happening).

It's not, of course, that there's anything wrong with making (although it’s not all that clear that the world needs more stuff). It's that the alternative to making is usually not doing nothing—it's nearly always doing things for and with other people, from the barista to the Facebook community moderator to the social worker to the surgeon. Describing oneself as a maker—regardless of what one actually or mostly does—is a way of accruing to oneself the gendered, capitalist benefits of being a person who makes products.

I am not a maker. In a framing and value system that is about creating artifacts, specifically ones you can sell, I am a less valuable human. As an educator, the work I do is, at least superficially, the same year after year. That's because all of the actual change is at the interface between me, my students, and the learning experiences I design for them. People have happily informed me that I am a maker because I use phrases like 'design learning experiences', which is mistaking what I do for what I’m actually trying to elicit and support. The appropriate metaphor for education, as Ursula Franklin has pointed out, is a garden, not the production line.

My graduate work in materials engineering was all about analysing and characterizing biological tissues, mostly looking at disease states and interventions and how they altered the mechanical properties of bone, including addressing a public health question for my doctoral research. My current education research is mostly about understanding the experiences of undergraduate engineering students so we can do a better job of helping them learn. I think of my brilliant and skilled colleagues in the social sciences, like Nancy Baym at Microsoft Research, who does interview after interview followed by months of qualitative analysis to understand groups of people better. None of these activities are about ‘making’.

I educate. I analyse. I characterize. I critique. Almost everything I do these days is about communicating with others. To characterize what I do as 'making' is either to mistake the methods—the editorials, the workshops, the courses, even the materials science zine I made—for the purpose. Or, worse, to describe what I do as 'making' other people, diminishing their own agency and role in sensemaking, as if their learning is something I impose on them.

In a recent newsletter, Dan Hon wrote, "But even when there's this shift to Makers (and with all due deference to Getting Excited and Making Things), even when "making things" includes intangibles now like shipped-code, there's still this stigma that feels like it attaches to those-who-don't-make. Well, bullshit. I make stuff." I understand this response, but I'm not going to call myself a maker. Instead, I call bullshit on the stigma, and the culture and values behind it that reward making above everything else. Instead of calling myself a maker, I'm proud to stand with the caregivers, the educators, those that analyse and characterize and critique, everyone who fixes things and all the other people who do valuable work with and for others, that doesn't result in something you can put in a box and sell."

[My response on Twitter:

Storified version: https://storify.com/rogre/on-the-invisible-infrastructure-of-often-intangibl

and as a backup to that (but that doesn't fit the container of what Pinboard will show you)…

“Great way to start my day: @debcha on invisible infrastructure of (often intangible) labor, *not* making, & teaching.”
https://twitter.com/rogre/status/536601349756956672

“[pause to let you read and to give you a chance to sign up for @debcha’s Metafoundry newsletter http://tinyletter.com/metafoundry ]”
https://twitter.com/rogre/status/536601733791633408

““behind every…[maker] is an invisible infrastructure of labour—primarily caregiving, in…various aspects—…mostly performed by women” —@debcha”
https://twitter.com/rogre/status/536602125107605505

“See also Maciej Cegłowski on Thoreau. https://static.pinboard.in/xoxo_talk_thoreau.htm https://www.youtube.com/watch?v=eky5uKILXtM”
https://twitter.com/rogre/status/536602602431995904

““Thoreau had all these people, mostly women, who silently enabled the life he thought he was heroically living for himself.” —M. Cegłowski”
https://twitter.com/rogre/status/536602794786963458

“And this reminder from @anotherny [Frank Chimero] that we should acknowledge and provide that support: “Make donuts too.”” http://frankchimero.com/blog/the-inferno-of-independence/
https://twitter.com/rogre/status/536603172244967424

“small collection of readings (best bottom up) on emotional labor, almost always underpaid, mostly performed by women https://pinboard.in/u:robertogreco/t:emotionallabor”
https://twitter.com/rogre/status/536603895087128576

““The appropriate metaphor for education, as Ursula Franklin has pointed out, is a garden, not the production line.” —@debcha”
https://twitter.com/rogre/status/536604452065513472

““to describe what I do as 'making' other people, diminish[es] their own agency & role in sensemaking” —@debcha”
https://twitter.com/rogre/status/536604828705648640

“That @debcha line gets at why Taylor Mali’s ever-popular “What Teachers Make” has never sat well with me. https://www.youtube.com/watch?v=RxsOVK4syxU”
https://twitter.com/rogre/status/536605134185177088

““I call bullshit on the stigma, and the culture and values behind it that reward making above everything else.” —@debcha”
https://twitter.com/rogre/status/536605502805798912

“This all brings me back to Margaret Edson’s 2008 Commencement Address at Smith College. http://www.smith.edu/events/commencement_speech2008.php + https://vimeo.com/1085942”
https://twitter.com/rogre/status/536606045200588803

“Edson’s talk is about classroom teaching. I am forever grateful to @CaseyG for pointing me there (two years ago on Tuesday).”
https://twitter.com/rogre/status/536606488144248833

““Bringing nothing, producing nothing, expecting nothing, withholding … [more]
debchachra  2014  making  makers  makermovement  teaching  howweteach  emotionallabor  labor  danhon  scubadiving  support  ursulafranklin  coding  behavior  gender  cv  margaretedson  caseygollan  care  caretaking  smithcollege  sensemaking  agency  learning  howwelearn  notmaking  unproduct  frankchimero  maciejceglowski  metafoundry  independence  interdependence  canon  teachers  stigma  gratitude  thorough  infrastructure  individualism  invisibility  critique  criticism  fixing  mending  analysis  service  intangibles  caregiving  homemaking  maciejcegłowski 
november 2014 by robertogreco
Episode One Hundred: Taking Stock; And The New
"It took a while, but one of the early themes that emerged was that of the Californian Ideology. That phrase has become a sort of short-hand for me to take a critical look at what's coming out of the west coast of the USA (and what that west coast is inspiring in the rest of the world). It's a conflicting experience for me, because I genuinely believe in the power of technology to enhance the human experience and to pull everyone, not just some people, up to a humane standard of living. But there's a particular heady mix that goes into the Ideology: one of libertarianism, of the power of the algorithm and an almost-blind belief in a purity of an algorithm, of the maths that goes into it, of the fact that it's executed in and on a machine substrate that renders the algorithm untouchable. But the algorithms we design reflect our intentions, our beliefs and our predispositions. We're learning so much about how our cognitive architecture functions - how our brains work, the hacks that evolution "installed" in us that are essentially zero-day back-door unpatched vulnerabilties - that I feel like someone does need to be critical about all the ways software is going to eat the world. Because software is undeniably eating the world, and it doesn't need to eat it in a particular way. It can disrupt and obsolete the world, and most certainly will, but one of the questions we should be asking is: to what end? 

This isn't to say that we should ask these questions to impede progress just as a matter of course: just that if we're doing these things anyway, we should also (because we *do* have the ability to) feel able to examine the long term consequences and ask: is this what we want?"
danhon  2014  californianideology  howwethink  brain  algorithms  libertarianism  progress  technology  technosolutionism  ideology  belief  intention 
june 2014 by robertogreco
Episode Eighty Six: Solid 2 of 2; Requests - GOV.UK 2018; Next
"Today, reading LinkedIn recommendations as they came in felt like reading eulogies. Apart from me not quite being dead. Not yet, at least. Or, I was dead and I hadn't realised it yet. It doesn't matter, anyway: all the recommendations from people I've enjoyed working with over the past three years just feel, unfortunately, like double-edged knives - ultimately good but only really readable with a twist.

Right now is a bad time, one of those terrible times when it doesn't even really matter that one of my good friends has pulled me aside, insisted that I have something to eat and sat patiently with me in a pizza joint while I stare off into space and mumble. It doesn't matter that he's great and doing these things for me and telling me that this too will pass: I am hearing all of the words that he's saying, the sounds he's making as they make all the little bits of air vibrate and hit my ear and undergo some sort of magic transformation as they get understood in my brain. But they don't connect. Understanding is different from feeling. And right now, I'm feeling useless and broken and disconnected and above all, sad. But I can't feel those things. I have meetings to go to. Hustle to hust. Against what felt at times like the relentless optimism of an O'Reilly conference I had to finally hide away for a while, behind a Diet Coke and a slice of cheesecake, because dealing with that much social interaction was just far too draining.

And so I'm hiding again tonight, instead of out with friends, because it's just too hard to smile and pretend that everything's OK when it's demonstrably not."



"Over the past couple of days at Solid it's become almost painfully apparent that the Valley, in broad terms, is suffering from a chronic lack of empathy in terms of how it both sees and deals with the rest of the world, not just in communicating what it's doing and what it's excited about, but also in its acts. Sometimes these are genuine gaffes - mistakes that do not betray a deeper level of consideration, thinking or strategy. Other times, they *are* genuine, and they betray at the very least a naivety as to consequence or second-order impact (and I'm prepared to accept that without at least a certain level of naivety or lack of consideration for impact we'd find it pretty hard as a species to ever take advantage of any technological advance), but let me instead perhaps point to a potential parallel. 

There are a bunch of people worried about what might happen if, or when, we finally get around to a sort of singularity event and we have to deal with a genuine superhuman artificial intelligence that can think (and act) rings around us, never mind improving its ability at a rate greater than O(n). 

One of the reasons to be afraid of such a strong AI was explained by Eliezer Yudkowsky:

"The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else."

And here's how the rest of the world, I think, can unfairly perceive Silicon Valley: Silicon Valley doesn't care about humans, really. Silicon Valley loves solving problems. It doesn't hate you and it doesn't love you, but you do things that it can use for something else. Right now, those things include things-that-humans-are-good-at, like content generation and pointing at things. Right now, those things include things like getting together and making things. But solving problems is more fun than looking after people, and sometimes solving problems can be rationalised away as looking after people because hey, now that $20bn worth of manufacturing involved in making planes has gone away, people can go do stuff that they want to do, instead of having to make planes!

Would that it were that easy.

So anyway. I'm thinking about the Internet of Things and how no one's done a good job of branding it or explaining it or communicating it to Everyone Else. Because that needs doing.

--

As ever, thanks for the notes. Keep them coming in. If you haven't said hi already, please do and let me know where you heard about my newsletter. And if you like the newsletter, please consider telling some friends about it."
danhon  2014  siliconvalley  ai  empathy  problemsolving  society  californianideology  unemplyment  capitalism  depression  elizeryudkowsky  humans  singularity 
may 2014 by robertogreco
Episode Seventy Two: Symptom Masquerading As Disruption (2); The Model Is The Modeled; Labour Not Employment; Superstar Ratings, Here We Go; Not Swarm
"John V Willshire's observation, that I mentioned on Twitter kind of blew my mind. Now, John *has* studied economics, and the point he made was this: this "stack" view of people - that there are those who now think of people as virtualised substitutable AWS EC2 instances that can be activated, spun up, assigned a parcel of work, and then demobilised, "is the way that economists have always liked to think of people anyway - little atoms of meat who must behave in predictable ways."

Yes, OK, so what we have is our humans as rational actors and, in a sense, what Uber and Airbnb have done is not necessarily produced an API that controls the world, but an API that instead controls other humans. We reach out and use these services, and our requests get translated, mediated, into instructions for other humans to perform for us. You can see a sort of spectrum-disorder response to this in Hacker News comments where occasionally someone will call for an even better version of Uber where there is literally no need to interact or converse with your driver at all, and essentially the human is totally abstracted away behind a piece of glass-fronted interface.

But John's *best* point for me, was when he said:

"What if rather than being a way to describe the world, economics has unwittingly become a way to proscribe the world. Then we're fucked."

Abstract it away and it's kind of saying this: a model of a subject that is so successful at describing the subject that the subject takes on the attributes of the model. The model becomes the thing being modeled.

This is a thing, now. Seeing the world as addressable stacks. A kind of mankind's dominion over a computer-addressable, instructable, directable world. There was someone at work who got super excited about "an API for the world!" and I think that's kind of the problem for me: an API for the world abstracts the world so that you can deal with it and manipulate it, which is great, but the thing is we have a super high bandwidth low-latency interface for the world that's super multi-modal. And I think it's fair to say that our APIs for the world right now are really coarse and in that way, treat the objects (note! objects! Not people!) that they interact with in a necessarily coarse way. And humans aren't coarse. Humans are many splendored things.

And maybe this is part of the whole "design with empathy" mini-crusade that I'm on. Sure, APIs that allow you to instruct humans to do things like Uber and Airbnb are successful right now, but I'm questioning whether they're successful good, or successful because of a symptom of changes in the labour market, or, honestly, a combination of the two. And, you know, first attempt at providing an API layer for humans that's more nuanced, I think, than Mechanical Turk, which I should've referenced earlier. But I like to think that an empathic API that's more considerate of humans will do better than one that is less considerate. Remember this, hackers of the Bay Area: you do not like being thought of as replaceable resource units, and there aren't many people who think "yeah, Human Resources is totally the best name for that department". "
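
[A minimal sketch, in Python, of the "stack view of people" described above: humans modelled like substitutable compute instances behind an API that mediates a customer request into an instruction for another person. Everything here is hypothetical and illustrative; the names do not correspond to any real service's code.

from dataclasses import dataclass

@dataclass
class HumanInstance:
    # A person, reduced to the handful of fields the platform cares about.
    worker_id: str
    rating: float        # traffic-light feedback, collapsed into a single number
    available: bool = True

def spin_up(pool):
    # "Activate" the highest-rated available person, as if provisioning a server.
    candidates = [h for h in pool if h.available]
    return max(candidates, key=lambda h: h.rating)

def dispatch(task, pool):
    # Translate a customer request into an instruction for another human,
    # who stays abstracted away behind the glass-fronted interface.
    worker = spin_up(pool)
    worker.available = False  # assigned a parcel of work; demobilised until it's done
    return "%s -> %s" % (task, worker.worker_id)

pool = [HumanInstance("driver-042", 4.9), HumanInstance("driver-108", 4.7)]
print(dispatch("pickup at the corner, no conversation required", pool))
]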
danhon  johnwillshire  2014  economics  obseroreffect  modeling  empathy  humans  dehumanization  systemsthinking  systems  capitalism  worldbuilding  internet  humanresources  gr  uber  airbnb  abstraction  scale  disruption  models  shrequest1  sharingeconomy 
may 2014 by robertogreco
18. Webstock 2014 Talk Notes and References - postarchitectural
[Direct link to video: https://vimeo.com/91957759 ]
[See also: http://www.webstock.org.nz/talks/the-future-happens-so-much/ ]

"I was honored to be invited to Webstock 2014 to speak, and decided to use it as an opportunity to talk about startups and growth in general.

I prepared for this talk by collecting links, notes, and references in a flat text file, like I did for Eyeo and Visualized. These references are vaguely sorted into the structure of the talk. Roughly, I tried to talk about the future happening all around us, the startup ecosystem and the pressures for growth that got us there, and the dangerous sides of it both at an individual and a corporate level. I ended by talking about ways for us as a community to intervene in these systems of growth.

The framework of finding places to intervene comes from Leverage Points by Donella Meadows, and I was trying to apply the idea of 'monstrous thoughts' from Just Asking by David Foster Wallace. And though what I was trying to get across is much better said and felt through books like Seeing like a State, Debt, or Arctic Dreams, here's what was in my head."
shahwang  2014  webstock  donellameadows  jamescscott  seeinglikeastate  davidgraeber  debt  economics  barrylopez  trevorpaglen  google  technology  prism  robotics  robots  surveillance  systemsthinking  growth  finance  venturecapital  maciejceglowski  millsbaker  mandybrown  danhon  advertising  meritocracy  democracy  snapchat  capitalism  infrastructure  internet  web  future  irrationalexuberance  github  geopffmanaugh  corproratism  shareholders  oligopoly  oligarchy  fredscharmen  kenmcleod  ianbanks  eleanorsaitta  quinnorton  adamgreenfield  marshallbrain  politics  edwardsnowden  davidsimon  georgepacker  nicolefenton  power  responsibility  davidfosterwallace  christinaxu  money  adamcurtis  dmytrikleiner  charlieloyd  wealth  risk  sarahkendxior  markjacobson  anildash  rebeccasolnit  russellbrand  louisck  caseygollan  alexpayne  judsontrue  jamesdarling  jenlowe  wilsonminer  kierkegaard  readinglist  startups  kiev  systems  control  data  resistance  obligation  care  cynicism  snark  change  changetheory  neoliberalism  intervention  leveragepoints  engagement  nonprofit  changemaki 
april 2014 by robertogreco
Episode Sixty: We Have Always Been At War; Our Independence Day; Spimes, Duh
"Last episode I talked about the Chief Empathy Officer, and in case I wasn't clear, I want to make it abundantly so this time: I think having a chief empathy officer is a stupid idea, exactly the kind of tactic that makes it look like you're jumping on a bandwagon and fixing something without fixing anything at all. It's almost as bad as having a hived-off UX team and exactly the kind of thing where, as Matt Locke points out, a general good practice in business is promoted up to the C-level suite so that you don't have to do deal with it anymore.

Let me put it clearly: no one person in an organisation should have sole responsibility for "empathy", especially in a manner that's going to make it easy for detractors to make fun of it. Instead, customer-centricity is something that needs to be distributed throughout, from the bottom-up as well as top-down.

Leisa Reichelt tweeted at me in response to that episode the concept of 'exposure hours'[1], which is such a blindingly simple idea that you're kind of surprised (and then, when you think about it, understand why) more companies or organisations don't use it. It's just this: the more time your designers or product owners spend with end-users, the better designed those products or services tend to be: "There is a direct correlation between this exposure and the improvements we see in the designs that team produces." And this isn't just for design personnel - as soon as non-design personnel were included in the contact hours, the entire group would fall together. This is as much an argument for audience/customer contact across each functional unit or team across an organisation.

An aside: there's a wonderful tv series (it's true! Such things exist!) called Back To The Floor[2] which started in the UK in which, for entertainment purposes (and the occasional tear-jerker), C-level executives are forced to take entry level jobs in their organisations and are bluntly confronted with the humanity of their employees. Because, you know, living in a bubble.

At this point my brain wanders off and looks at the anti-pattern. Capitalism is all too often thought of as being combative and the American strand in particular borrows heavily from sports metaphors (crushing it, home run, left field, sprint). It's all anyone can do to try and impress that often capitalism doesn't necessarily have to be a zero-sum game, and that type of thinking feels like it's at odds with a customer-centric or empathy-driven organisation.

The anti-pattern, of course, is dehumanising your enemies so you can make it easier to kill them. Losing shopkeepers with face-to-face interaction dehumanises customers. Interchangeable call-center workers dehumanise customers. Reducing a customer to a statistic and traffic-light feedback mechanisms. In essence, putting up barriers and abstracting away difficult-to-quantise or measure or digitise measures that seek to make the customer experience more predictable and scaleable.

In some ways, you can get at this empathy intuitively and by having strong direction - if you're lucky. And by lucky, I mean *really* lucky - you're the kind of person who's a one in a million Steve Jobs type, and remember even *he* got it wrong with things like the hockey puck mouse and, well, iTunes, where the strategy was right and the initial user experience (plug in a first gen iPod, FireWire your songs over) was great but then degraded over time with lack of focus. And Jobs, well, Jobs was just making sure that he understood *himself* really well and appeared to be pretty true to that and wouldn't stand for any shit. So at least you get clarity of vision for products like iPhone or iPad that way.

But for everyone else, chances are blindingly high that you're not Steve Jobs, in which case research to understand the audience and the user need is absolutely critical. So the question is: why do hardly any organisations do this?

It's interesting because for engineers and entrepreneurs the first product is often the "scratch your own itch", which makes sense, because you understand your own itch and you know exactly where it's itching and what you might need to un-itch yourself. But when that product or service starts to grow outside of that market or that population, then having the ability to understand the people you're interacting with becomes super important, I think.

There are ways to mitigate needing to have a super-developed corporate sense of empathy, though. You can use network effects to tie people in social applications, you can use local monopolies like in fixed-line telecommunications, and plain-old regulation of competitors and limited service in air travel. But the flip side of Moore's Law is that communication and computation have gotten ever cheaper, so all of these organisations got "social", which the consultants remind us is all about having "conversations". And the thing about having conversations with an organisation that lacks empathy, or lacks the ability to act upon empathy, is that over time, they end up feeling like a sociopath.

For those of you who have been following along at home, the protracted amount of thinking in this area may or may not have something to do with one of my side projects.

[1] http://www.uie.com/articles/user_exposure_hours/
[2] http://en.wikipedia.org/wiki/Back_to_the_Floor_(UK_TV_series) "
danhon  empathy  titles  culture  ux  organizations  administration  leadership  management  tcsnmy  knowing  leisareichelt  exposurehours  exposure  attention  fieldwork  fieldvisits  ethnography  listening  noticing 
april 2014 by robertogreco
Episode Fifty Two: Disintermediation & Externalisation; Housing; Odds
"Would Amazon test, for example, an Amazon Express option for browsing their site where they *only* show you the highest-rated, most-recommended items against your search? Because even the presence or indication of hundreds or thousands of other options can be stress-inducing. And perhaps pair it with a liberal return policy?

I guess the thing is this. Disintermediation and increased consumer choice rely upon the assumption (if you care, I suppose) that your consumer or audience is a rational economic actor who *has the time and the resources to be a rational actor*. I get pissed off at stereotypical, straw-man privileged engineers who just want a spreadsheet with a table and do all the research and go buy the best thing because seriously: who has the time for that?

And there's a significant, vulnerable section of the population that *doesn't* have time for that, that doesn't have time to be a rational actor. And who are you supposed to trust? Do you trust the financial advisor who is getting kickbacks? Do you trust your health insurance company? Do you even have health insurance to trust? This isn't a mere case of "information overload" in terms of a firehose of stuff coming to you. This is: how do I make a basic decision and ensure I am informed to make a rational, appropriate choice.

The other side of this is the fantastic one for businesses that get to externalise formerly internal costs under the guise of consumer choice. So instead of having knowledgeable salespeople (for example), "the information is available on our website". This is perhaps the difference with a retailer like Apple where I believe their store employees are taught to listen to user needs (there's that phrase again) and help them accomplish them rather than being commission-driven and pushing everyone to buy the most expensive model, for example. And so a place where retail can be improved both offline and online: Amazon doesn't have salespeople that understand their products to help you choose one, they externalise that cost by having you write reviews and lists instead."
danhon  paradoxofchoiuce  choice  2014  decisionmaking  comparison  retail  amazon  wirecutter  travelagents  housing 
april 2014 by robertogreco
Internet of Newsletters | Artisanal Internet Pen Pals
"A WEBRING?

Yes! A Webring! Not a We Bring, though, a Web Ring!

Why a web ring? Because newsletters are old-school! And so are web rings!"



"The internet. It’s made up of bitty-bytey type things, all swirling around in one giant TCP/IP melting pot of loveliness. And in all of this, all of those blogs and video channels and tweets and posts and likes, there’s always been one constant.

The newsletter.

And now they’re back! A whole bunch of people are writing newsletters – daily, weekly, monthly, erratically! And they’re using them to talk to people. Because newsletters are special. They use email. And we don’t get distracted when we read email. So we pay attention to them.

What else is awesome? Webrings! Webrings have been around forever! You might have one too, if you look beneath the sofa or in that cupboard in your kitchen. It’s probably gathering dust. But never fear, they’re coming back too! Because what better way to find out about and explore newsletters than a web ring!

What are you waiting for?

Sign up by filling in this Google form here. Laura and Dan will get it and add your submission to the web ring.

When you receive your webring code, edit the settings of your tinyletter to insert it at the bottom of your description.

Artisanal internet pen pals."
newsletters  webrings  2014  danhon  laurahall  oldschoolweb  web  internet 
april 2014 by robertogreco
Episode Forty Four: Snow Crashing; danah boyd; Facebook and Oculus Rift
"It looks like Facebook's leadership is waking up to this (in fairness to them, the rest of the industry is waking up to this, too). With mobile, there isn't (and doesn't have to be) a one-size-fits-all communication/social networking utility or app. Facebook may well be the thing that everyone ends up having an account on, but in their latest earnings call, they reiterated their strategy to build more mobile apps and with the acquisition of WhatsApp alongside Instagram it seems clear to me (without my work hat on) that Facebook's goal to connect the world is through Facebook the holding company, not just through Facebook the product/platform. 

You can contrast boyd's work with that of Paul Adams in his book Grouped[2], the result of which was Google Plus Circles shortly after he left Google for Facebook. Circles (and Google Plus) appears to me to be the sort of social network you end up building where you want everyone *and* you want to solve the problem of having different spaces and contexts. But we don't work like that, not as people: Google Plus is the place and it doesn't matter how many different circles I might have there - the cognitive overhead involved in placing people in circles is just too great and causes too much friction as opposed to just using a different app like Snapchat or WhatsApp or Twitter or Secret that comes with intrinsic contextual cues to being another place.

Adams' research was right - people don't like inadvertently sharing different facets of themselves with the wrong audience. No product has successfully catered for multiple facets, I don't think, and trying to build it into a one-size-fits-all product has failed so far. Mobile, which has reduced context-switching to near-negligible levels, as well as provided a new social graph through the address book, has finally let a thousand social flowers bloom at scale.



"So when you're vision driven, look at Facebook the way you look at Google. One way of looking at Google is that they want to organise the world's information and make it freely available. One way of looking at Facebook is that they literally want to connect the world and enable every living person to communicate as frictionlessly as possible with everyone else.
Like I said, the devil is in the detail.

Facebook - the product you and I use, the one with the newsfeed - is just one way Facebook the holding company is connecting the world. Instagram is another. WhatsApp is another.

Some of those products are ad-funded, some others aren't. And if you're thinking about an end-goal of connecting the world, what's going to connect more people more quickly? Them paying for it, or the connection being available for free?

This might sound like having drunk the kool-aid, but try crediting Zuckerberg with more intelligence and think of him as the prototypical smart nerd: optimize for a connected world. What do you build? How do you deploy it?

It's against this background that they buy Oculus Rift. And don't think agency people have any knowledge - I'm in a plane at 30k feet, and when the news broke about WhatsApp, we were in a meeting *with our clients* - we find out about this stuff when you do, when Twitter explodes.

Like everyone apart from Apple, Facebook missed the boat. But Oculus as display technology - as another way to augment the human social experience - is provocative and interesting. In the PR, Zuckerberg is quoted as saying:

"Oculus has the chance to create the most social platform ever, and change the way we work, play and communicate."

He's not wrong. You are always going to be able to meet more people through mediated experiences than physically. Physicality doesn't scale. Is this a terrible harbinger of the replacement of physical social contact? Probably not. We have always invented and looked for more ways to connect with people. boyd says in her book that teenagers aren't addicted to Facebook, just as they were never addicted to texting or tying up the house landline for hours. They're addicted to *people*. And if Oculus genuinely has a way to change the way people connect, then that makes perfect strategic sense for Facebook.

It turns out that today, people are still using Snow Crash as a business plan."
personas  diversity  facebook  occulusrift  personality  pauladams  danahboyd  google  google+  circles  toolbelttheory  onlinetoolkit  multitools  killerapps  instagram  whatsapp  spaces  socialnetworking  socialnetworks  communication  multiplefacets  contextswitching  danhon  markzuckerberg  snowcrash  nealstephenson  googleplus 
march 2014 by robertogreco
Episode Forty One: When You're Part Of A Team; The Dabbler
"The thing is, a lot of this behaviour is very easy to mistake for cult-like behaviour from the outside. Apple frequently gets described as a cult - not only are its employees members of the cult, but its customers are described in terms of being followers, too. And you see this cult behaviour in terms of the reverence expressed toward dear leaders (Messrs Wieden and Kennedy, for example, or the brain trust at Pixar, or Steve at Apple) but also in terms of the transmission of the values of those leaders. Wieden prides itself on a number of maxims ranging from a thousands-of-thumbtacks installation done by members of its advertising school of the slogan FAIL HARDER (with requisite misplaced thumbtack) to pretty much every employee being able to understand what's meant by "the work comes first" even if they do need a bit of re-education as to how, exactly, the work comes first (ie: it is not a get out of jail free card when you disagree with the client about what counts as good work). Then there are the Other Rules, the ones practically handed down from the mount (or, more accurately, discovered in an office scribbled in pen) that state:

1. Don't act big
2. No sharp stuff
3. Follow directions
4. Shut up when someone is talking to you

and turned out to be a parent's note to their child but actually not that bad advice when you think about it.

[See also: http://wklondon.typepad.com/welcome_to_optimism/2005/02/words_from_wied.html ]

And now, another nascent organisation, another one that I constantly harp on about: the UK's Government Digital Service. I don't think it's a coincidence that from the outside two of the people (but certainly by no means the only people) influential in the success of GDS and its culture are Russell Davies and Ben Terrett, both of whom have been through the Wieden+Kennedy, er, experience.

Russell is an exceedingly smart, unassuming and humble person who has a singularly incredible ability to be almost devastatingly insightful and plain-speaking at the same time. It feels rare to see both at the same time. But what he's articulating at the moment in terms of GDS strategy and implementation is the thought that "the unit of delivery is the team" and when you're building a new organisation from the ground up, and one whose success is tied directly to its ability to embed within and absorb the culture of an existing massive entity, the UK civil service, it feels like watching a (so far successful) experiment in sociology and anthropology being deployed in realtime. A note (and thanks to Matthew Solle for the clarification because it's an important one): while the GDS works with the civil service, it's not actually a part of it, instead being a part of the cabinet office and being more tied to the government of the day.

So there are macro-level observations about Pixar that you glean from books and other secondary sources, but it's not until you visit the place and start to talk to the people who work there that understanding starts to feel like it unlocks a little more. I'm lucky enough to know one person at Pixar who's been gracious enough to host me a few times and while we were talking about the culture of the place and how, exactly, they get done what they get done, one thing that struck me was the role of the individual and the individual's place in the team.

You see, one of the things it felt like they concentrated on was empowerment and responsibility but also those two things set against context. My friend would talk about how every person on his team would know what their superpower was - the thing they were good at, the thing that they were expert at - and everyone else would know what that superpower was, too. And the culture thus fostered was one where everyone was entitled to have a reckon or an opinion about something and were listened to, but when it came down to it, the decision and authority rested with the expert.

Now, this might not sound like a stunningly insightful revelation. Allowing people to have opinions about the work of the greater team and then restricting decision-making to those best qualified to make it sounds on the surface like a fairly reasonable if not obvious tenet, and maybe even one that because of its obviousness would seem reasonably easy if not trivial to implement. Well, if you think that, then I'm sorry, it sounds like you've never been a good manager before: it turns out to be exceedingly difficult.

At this point the narrative begins to sound rather trite: Pixar, and the companies like it that consistently achieve "good" results and are able to marshal the resources of large teams to accomplish something greater, are simply trying harder than all the other ones. And in the end, it may well be as simple as that. It's easy to have a mission statement. It's easy to have values. It's significantly harder to try as hard as you can, every single day, for thirty years, to actually live them.

In the same way that one does not simply walk into Mordor, one does not simply say that one has a set of values or culture and have it magically happen.

This is perhaps best illustrated in the blindness of the new wave of stereotypical valley startups that rail against bureaucracy and instead insist that their trademarked culture of holacracy inures them to the requirement of bureaucracy. That the way they instinctively do things is sufficient in and of itself. Well: bullshit to that. That simply doesn't scale, and the companies that think they're doing that - and I'm looking at you, Github, winner so far of the Best Example Of The Need To Grow Up award of 2014 and we've not even finished the first quarter of the year - are living in some sort of hundred-million-dollar VC-fueled fantasy land. Which, I suppose, goes without saying.

I began this part by implying something about teams, and I sort of alluded to it when mentioning the GDS maxim that the unit of delivery is the team.

I think it's becoming clear that the type of delivery that is expected in this age by its nature requires a multi-disciplinary team that works together. It's not enough, anymore, to have specialisms siloed away, and one thing that jumped out at me recently was the assertion in conversation on Twitter with a number of GDS members that there isn't anybody with the role of "user experience" at GDS. Everyone, each and every single member of the team, is responsible and accountable for the user experience of delivery, from operations to design to copy and research.

The sharpest end of this is where digital expertise had traditionally been siloed away in a sort of other. In a sort of check-boxing exercise, organisations would recruit in those with digital experience and either for reasons of expediency or for their own good, would shepherd them into a separate organisational unit. Davies' point - and one that is rapidly becoming clear - is that this just doesn't make sense anymore. I would qualify that and say that it doesn't make sense for certain organisations, but I'm not even sure if I can do that, and instead should just agree that it's a rule across the board.

Of course, the devil is always in the detail of the implementation."



"The thing about hobbies in the networked age is that it's incredibly easy for them to become performative instead of insular. That's not to say that insular hobbies are great, but the networked performance of a hobby comes with seductive interactions built not necessarily for the hobbyist's benefit but for the benefit of the network substrate or medium. As a general reckon, hobbies in their purest form are nothing but intrinsic motivation: whether they're an idiosyncratic desire to catalogue every single model of rolling stock in the UK or increasingly intricate nail art, before the hobby becomes performative it is for the self's benefit only, a sort of meditation in repetitive action and a practice.

The hobby as the networked performance, though (and I realise that at this point I may well sound like a reactionary luddite who doesn't 'get' the point of social media) perhaps too easily tips the balance in favour of extrinsic motivation. Whether that extrinsic motivation is in terms of metrics like followers, likes, retweets, subscribers or other measurable interaction with the hobbyist the point remains that it's there, and it's never necessarily for a clear benefit for the hobbyist. You could perhaps absolve blame and say that such metrics are intrinsic properties of the enactment of a social graph and that they're making explicit what would be rendered as implicit feedback cues in any event, but I don't buy that. They were put there for a reason. Friend counts and subscriber counts were put there because those of us who are product designers and of the more geeky persuasion realised that we could count something (and here, we get to point the finger at the recording pencil of the train spotter), and the step from counting something to making visible that count was a small one and then our evolutionary psychology and comparison of sexual fitness took over and before you knew it people were doing at the very least SXSW panels or if you were really lucky TED talks about gamification and leaderboards and whether you had more Fuelpoints than your friends.

So that's what happened to the hobby: it moved from the private to the public and at the same time the dominant public medium of the day, the one that all of us had access to, marched inexorably to measurement, quantification and feedback loops of attention."
danhon  leadership  administration  management  pixar  wk  gov.uk  russelldavies  benterrett  authority  empowerment  collaboration  teams  2014  hobbies  expertise  trust  tcsnmy  lcproject  openstudioproject  motivation  performance 
march 2014 by robertogreco
Episode Nine: Everything In Silos, Forever and Ever, Amen
"Just Good Enough is bullshit. Just Good Enough means that a company doesn't have to produce a useable site that provides easily findable manuals or reference for its product, because Google will index that content eventually. Just Good Enough means I can just about use your site on my phone. Just Good Enough means that the timekeeping software that everyone in the building uses (and has to use - otherwise the entire business screeches to a halt) is only Just Not Irritating Enough to have to deal with. Just Good Enough means that you can whip up a Word document that you can save and then email to me for comment and I can open it up where it's saved in my Temporary Outlook Files and then save it as Your Document - Dan Comments.doc in my Temporary Outlook Files and then email it back to you, where you'll revise it and then send it to a project manager who will then rename it Your Document - Dan Comments - Final Feb 4 2014.doc and then email it back to me for more comments. Or where Google Docs is Just Good Enough to use single sign on so that in theory we can all use it together, but that its text formatting doesn't quite work and not everyone uses it.

It's bullshit. Just Good Enough should be offensive. Just Good Enough is the digital/software equivalent of a bridge that doesn't quite kill anyone most of the time, instead of one that actually does the fucking job. "

[…]

"This is what the threat of the consumerisation of IT is, then, to entrenched divisions and groups. It means that five years ago, the apocryphal story of someone at the BBC being quoted however many tens of thousands of pounds for a Rails server from outsourced IT deciding to, bluntly, fuck it and just stick an AWS instance on their card and expense it was the inevitable sharp end of the wedge: digital, devolved from some sort of priesthood that existed to serve itself, and instead unlocking its potential to the people who have problems that need solving now, and don't particularly care whether something is a solution or not, or has properly gone through procurement (and yes, I realize that this opens you to the possibility of a raft of 'Just Good Enoughs').

But you can't have one without the other. When teams reach for their cards and organically use AWS or Basecamp or whatever because it's Just Good Enough and flies under the procurement radar, or reach out to Get Stuff Done with small external groups rather than using internal resource, the leadership that's going to win is the leadership that reacts by asking "what's the problem here, and why are our employees choosing to react in this way rather than internally?" and then fixes that internal provision of resource. Which is, again, why GDS is building internal capability rather than external.

In a conversation with GDS's Russell Davies about this, the one comment of his that stood out was that none of this was new to those of us who've been working in digital or interactive. There was no stunning insight, no secret sauce, no magic recipe. Just that, from a leadership and organizational point of view, digital was an important concept to align around as a way of achieving their goals: and then GDS was conceived (again, my external reckoning) with teams constructed around delivery. It was just that the will was there.

So here's the thing. (And this type of wrapping up inevitably feels like a Church of England sermon or Thought for the Day).

Siloed organisations, where digital is "over there", aren't going to succeed. At the very least, they're only going to unlock a fraction of the opportunity that's available to them. At the very worst, they'll find themselves both slowly ("oh, they've only got a few tens of thousands of users") and quickly (Blackberry, Nokia) disrupted. Runkeeper will come and eat their lunch. Netflix will become the next video network. Uber, much as I hate them for being Uber, will come along and work out that hey, digital actually can make your business of cars that move things from one place to another better for the end user. 

They're just not. 

It's just a question of how fast we get there.

My brother, when asked when video games will finally be treated as mainstream culture, used to say: "When enough people die." 

GDS is showing that we don't need people to die for digital to work. We just need leadership that wants it."

[Reference post: http://russelldavies.typepad.com/planning/2013/04/the-unit-of-delivery.html ]
danhon  2014  gov.uk  russelldavies  services  digital  organizations  technology  edtech  bbc  basecamp  problemsolving  leadership  management  siloing  culture  it  justgoodenough 
february 2014 by robertogreco
Your Grim Meathook Future : Extenuating Circumstances
"Recorded for posterity.

• Larry Page wants to set aside part of the world for experimentation.
http://www.theverge.com/2013/5/15/4334356/larry-page-wants-to-set-aside-a-part-of-the-world-for-experimentation/in/4095431

• Balaji Srinivasan thinks Silicon Valley should create an “opt-out” society where existing governments cannot meddle.
http://news.cnet.com/8301-1023_3-57608320-93/a-radical-dream-for-making-techno-utopias-a-reality/ [video here: http://balajis.com/2013/11/09/silicon-valleys-ultimate-exit/ ]

• Chamath Palihapitiya says value is only created in Silicon Valley and that Valley companies transcend power and are the eminent vehicles for change and influence, over government.
http://nymag.com/daily/intelligencer/2013/10/silicon-valleys-dysfunction-fetish.html

You may also be interested in:

• Sovereign corporate franchise enclaves
http://en.wikipedia.org/wiki/Snow_Crash

• Free Trade Zones
http://en.wikipedia.org/wiki/Free_trade_zone

• The London Olympic Games and Paralympic Games Act 2006
http://en.wikipedia.org/wiki/London_Olympic_Games_and_Paralympic_Games_Act_2006

• The Special Administrative Region
http://en.wikipedia.org/wiki/Special_administrative_region "

[Each item is a link to an article]
siliconvalley  libertarianism  larrypage  balajisrinivasan  chamathpalihapitiya  government  individualism  danhon  2013  libertarians  californianideology  technosolutionism  seasteading  sealand 
october 2013 by robertogreco
The tyranny of digital advertising — I.M.H.O. — Medium
"Let's be clear: big businesses have grown up around the availability and theory of mass media and buying attention. Any big client older than15 years old will have grown up with the reassuring ability of tv and print advertising to reach mass audiences. Those were methods of advertising predicated on guaranteed access to peoples’ attention through interruptions in mass media.

And thus the marketing and business plans and briefs for those companies assume that you market your product or service by delivering a message to a stupendously large number of people in a short amount of time.

The Product is the Service is the Marketing

At roughly the same time as my two-year anniversary in advertising land, Russell Davies wrote up a storm explaining what the UK’s Government Digital Service does and what GOV.UK is for.

Simply, their job is to save money by making the digital provision of government services so good that the public prefers to use them.

One of the points that Russell makes in his post is that, in their case, the product is the service is the marketing: the product (a government service) is the service (the delivery and usage of that service) is the marketing (the clear communication to the target audience of the benefits of that service). The tying together of those three different items - product, service, marketing - and how GDS have achieved that aim has implications for why good integrated (and so digital) advertising is so difficult to achieve."



"Anything but display advertising
But then there's the whole other, other side to interactive advertising that isn't confined to formats defined by media agencies and associations. And I might be biased, but they seem way more interesting than display advertising.

Here are some examples:"



"There is a shift at the heart of this. There are new brands out there - Kickstarter, Etsy and Amazon come to mind - that got big and profitable without conventional advertising. They’re also brands built in a world reliant upon the network. They do not need advertising, at least, they don’t need advertising the way your mother’s fast moving consumer goods company needed it. Their products are services, and the way their services behave are their own marketing. Google’s own Dear Sophie and Parisian Love adverts are critically acclaimed examples of advertising letting products and services speak for themselves.

So what does an advertising agency do for them?"
danhon  advertising  digital  2013  experience  russelldavies  marketing  service  product  gov.uk  kickstarter  etsy  amazon  nike  nikefuelband  fuelband  ilovebees  jay-zdecoded  arg  oldspice  attention  chrysler  television  tv  wieden+kennedy 
july 2013 by robertogreco
Fitness by design - Design - Domus
"Can data heal? Yes, argues Dan Hon, whose type 2 diabetes spurred him to embrace "personal informatics" devices such as the Nike FuelBand and the Fitbit. Yet as such devices become a part of everyday life, a new challenge emerges: the Balkanisation of health data across multiple platforms. A design report from Portland by Dan Hon"
diabetes  2012  technology  personalinformatics  quantifiedself  health  fuelband  fitbit  danhon 
january 2013 by robertogreco
