
robertogreco : zeyneptufekci   18

No one’s coming. It’s up to us. – Dan Hon – Medium
"Getting from here to there

This is all very well and good. But what can we do? And more precisely, what “we”? There’s increasing acceptance of the reality that the world we live in is intersectional and we all play different and simultaneous roles in our lives. The society of “we” includes technologists who have a chance of affecting the products and services, it includes customers and users, it includes residents and citizens.

I’ve made this case above, but I feel it’s important enough to make again: at a high level, I believe that we need to:

1. Clearly decide what kind of society we want; and then

2. Design and deliver the technologies that forever get us closer to achieving that desired society.

This work is hard and, arguably, will never be completed. It necessarily involves compromise. Attitudes, beliefs and what’s considered just change over time.

That said, the above are two high level goals, but what can people do right now? What can we do tactically?

What we can do now

I have two questions that I think can be helpful in guiding our present actions, in whatever capacity we might find ourselves.

For all of us: What would it look like, and how might our societies be different, if technology were better aligned to society’s interests?

At the most general level, we are all members of a society, embedded in existing governing structures. It certainly feels as though, in the recent past, those governing structures have come under increasing strain, and part of the blame is being laid at the feet of technology.

One of the most important things we can do collectively is to produce clarity and prioritization where we can. Only by being clearer and more intentional about the kind of society we want and accepting what that means, can our societies and their institutions provide guidance and leadership to technology.

These are questions that cannot and should not be left to technologists alone. Advances in technology mean that encryption is a societal issue. Content moderation and censorship are a societal issue. Ultimately, it should be for governments (of the people, by the people) to set expectations and standards at the societal level, not organizations accountable only to a board of directors and shareholders.

But to do this, our governing institutions will need to evolve and improve. It is easier, and faster, for platforms to react to changing social mores than it is for governing institutions. For example, platforms responded to society’s reaction to “AI-generated fake porn” faster than governing and enforcing institutions did.

Prioritizations may necessarily involve compromise, too: the world is not so simple, and we are not so lucky, that it can be easily and always divided into A or B, or good or not-good.

Some of my perspective in this area is reflective of the schism American politics is currently experiencing. In a very real way, America, my adoptive country of residence, is having to grapple with revisiting the idea of what America is for. The same is happening in my country of birth with the decision to leave the European Union.

These are fundamental issues. Technologists, as members of society, have a point of view on them. But just as post-Enlightenment governing institutions were set up to protect against the asymmetric distribution of power, technology leaders must recognize that their platforms are now an undeniable, powerful influence on society.

As a society, we must do the work to have a point of view. What does responsible technology look like?

For technologists: How can we be humane and advance the goals of our society?

As technologists, we can be excited about re-inventing approaches from first principles. We must resist that impulse here, because there are things that we can do now, that we can learn now, from other professions, industries and areas to apply to our own. For example:

* We are better and stronger when we are together than when we are apart. If you’re a technologist, consider this question: what are the pros and cons of unionizing? As the product of a linked network, consider the question: what is gained and who gains from preventing humans from linking up in this way?

* Just as we create design patterns that are best practices, there are also those that represent undesired patterns from our society’s point of view known as dark patterns. We should familiarise ourselves with them and each work to understand why and when they’re used and why their usage is contrary to the ideals of our society.

* We can do a better job of advocating for and doing research to better understand the problems we seek to solve, the context in which those problems exist and the impact of those problems. Only through disciplines like research can we discover in the design phase — instead of in production, when our work can affect millions — negative externalities or unintended consequences that we genuinely and unintentionally may have missed.

* We must compassionately accept the reality that our work has real effects, good and bad. We can wish that bad outcomes don’t happen, but bad outcomes will always happen because life is unpredictable. The question is what we do when bad things happen, and whether and how we take responsibility for those results. For example, Twitter’s leadership must make clear what behaviour it considers acceptable, and do the work to be clear and consistent without dodging the issue.

* In America especially, technologists must face the issue of free speech head-on without avoiding its necessary implications. I suggest that one of the problems culturally American technology companies (i.e., companies that seek to emulate American culture) face can be explained in software terms. To use agile user story terminology, the problem may be due to focusing on a specific requirement (“free speech”) rather than the full user story (“As a user, I need freedom of speech, so that I can pursue life, liberty and happiness”). Free speech is a means to an end, not an end, and accepting that free speech is a means involves the hard work of considering and taking a clear, understandable position as to what ends.

* We have been warned. Academics — in particular, sociologists, philosophers, historians, psychologists and anthropologists — have been warning of issues such as large-scale societal effects for years. Those warnings have, bluntly, been ignored. In the worst cases, those same academics have been accused of not helping to solve the problem. Moving on from the past, is there not something that we technologists can learn? My intuition is that post the 2016 American election, middle-class technologists are now afraid. We’re all in this together. Academics are reaching out, have been reaching out. We have nothing to lose but our own shame.

* Repeat to ourselves: some problems don’t have fully technological solutions. Some problems can’t just be solved by changing infrastructure. Who else might help with a problem? What other approaches might be needed as well?

There’s no one coming. It’s up to us.

My final point is this: no one will tell us or give us permission to do these things. There is no higher organizing power working to put systemic changes in place. There is no top-down way of nudging the arc of technology toward one better aligned with humanity.

It starts with all of us.

Afterword

I’ve been working on the bigger themes behind this talk since …, and an invitation to 2017’s Foo Camp was a good opportunity to try to clarify and improve my thinking so that it could fit into a five minute lightning talk. It also helped that Foo Camp has the kind of (small, hand-picked — again, for good and ill) influential audience who would be a good litmus test for the quality of my argument, and would be instrumental in taking on and spreading the ideas.

In the end, though, I nearly didn’t do this talk at all.

Around 6:15pm on Saturday night, just over an hour before the lightning talks were due to start, after the unconference’s sessions had finished and just before dinner, I burst into tears talking to a friend.

While I won’t break the societal convention of confidentiality that helps an event like Foo Camp be productive, I’ll share this: the world felt too broken.

Specifically, the world felt broken like this: I had the benefit of growing up as a middle-class educated individual (albeit, not white) who believed he could trust that institutions were a) capable and b) would do the right thing. I now live in a country where a) the capability of those institutions has consistently eroded over time, and b) those institutions are now being systematically dismantled, to add insult to injury.

In other words, I was left with the feeling that there’s nothing left but ourselves.

Do you want the poisonous lead removed from your water supply? Your best bet is to try to do it yourself.

Do you want a better school for your children? Your best bet is to start it.

Do you want a policing policy that genuinely rehabilitates rather than punishes? Your best bet is to…

And it’s just. Too. Much.

Over the course of the next few days, I managed to turn my outlook around.

The answer, of course, is that it is too much for one person.

But it isn’t too much for all of us."
danhon  technology  2018  2017  johnperrybarlow  ethics  society  calltoaction  politics  policy  purpose  economics  inequality  internet  web  online  computers  computing  future  design  debchachra  ingridburrington  fredscharmen  maciejceglowski  timcarmody  rachelcoldicutt  stacy-marieishmael  sarahjeong  alexismadrigal  ericmeyer  timmaughan  mimionuoha  jayowens  jayspringett  stacktivism  georginavoss  damienwilliams  rickwebb  sarawachter-boettcher  jamebridle  adamgreenfield  foocamp  timoreilly  kaitlyntiffany  fredturner  tomcarden  blainecook  warrenellis  danhill  cydharrell  jenpahljka  robinray  noraryan  mattwebb  mattjones  danachisnell  heathercamp  farrahbostic  negativeexternalities  collectivism  zeyneptufekci  maciejcegłowski 
february 2018 by robertogreco
Zeynep Tufekci: We're building a dystopia just to make people click on ads | TED Talk | TED.com
"We're building an artificial intelligence-powered dystopia, one click at a time, says techno-sociologist Zeynep Tufekci. In an eye-opening talk, she details how the same algorithms companies like Facebook, Google and Amazon use to get you to click on ads are also used to organize your access to political and social information. And the machines aren't even the real threat. What we need to understand is how the powerful might use AI to control us -- and what we can do in response."

[See also: "Machine intelligence makes human morals more important"
https://www.ted.com/talks/zeynep_tufekci_machine_intelligence_makes_human_morals_more_important

"Machine intelligence is here, and we're already using it to make subjective decisions. But the complex way AI grows and improves makes it hard to understand and even harder to control. In this cautionary talk, techno-sociologist Zeynep Tufekci explains how intelligent machines can fail in ways that don't fit human error patterns -- and in ways we won't expect or be prepared for. "We cannot outsource our responsibilities to machines," she says. "We must hold on ever tighter to human values and human ethics.""]
zeyneptufekci  machinelearning  ai  artificialintelligence  youtube  facebook  google  amazon  ethics  computing  advertising  politics  behavior  technology  web  online  internet  susceptibility  dystopia  sociology  donaldtrump 
october 2017 by robertogreco
The Plight of Refugees, the Shame of the World - The New York Times
"Accepting, feeding, immunizing, resettling and helping this many people can be done only at an institutional level, with worldwide organizations. At the moment, most of this burden is on a few neighboring countries — Turkey, Lebanon and now Greece — that get little to no outside help. Unsurprisingly, many refugees are risking their lives to reach Europe.

A crisis of this scale cannot be met with individual heroism, however admirable. Huge numbers of people cannot be sheltered through ad hoc charity, however well intended.

In mid-July, a Palestinian teenager whose family faces deportation from Germany asked Chancellor Angela Merkel, in perfect German, why her family couldn’t stay, and why she couldn’t just stay in school and study like everyone else. Ms. Merkel had said, in a dry speech: “Politics is sometimes hard. ... But you also know in the Palestinian refugee camps in Lebanon are thousands and thousands, and if we were to say you can all come ... we just can’t manage it.” At that, the girl burst into tears and Ms. Merkel was taken aback. Her halting efforts to comfort the girl were recorded worldwide.

“Politics is hard” is just not enough.

It’s clear that our leaders aren’t stepping up to the gravity of the moment. We can, and we must, push them to do the right thing. If distributed properly, the cost is not that high. Today’s world is much richer than during World War II, and it’s not tangled in global war. In 2014, the entire World Food Program budget was a paltry $5.4 billion. The United Nations refugee agency’s budget is a mere $7 billion. To put these numbers in context, Amazon’s market capitalization climbed recently by $40 billion in after-hours trading after it announced that its web-hosting services were slightly more profitable than expected. Saving millions of refugee children fleeing war apparently isn’t worth a fraction of an evening’s speculation on a single stock.

Last month, the world lost a quiet hero, Nicholas Winton, who saved almost 700 mostly Jewish children from Czechoslovakia, by placing them with British families right before Hitler invaded. What was overlooked in the celebration of his remarkable life — he never sought credit for his good deeds — was his deep regret about the thousands of children he couldn’t save. The world’s governments turned their backs on these children. Have they learned nothing since?"
2015  zeyneptufekci  refugees  syria  nicholaswinton  politics  policy  inequality 
august 2015 by robertogreco
Why the Great Glitch of July 8th Should Scare You — The Message — Medium
"Think of it as needing more space in your house, so you decide you want to build a second story. But the house was never built right to begin with, with no proper architectural planning, and you don’t really know which are the weight-bearing walls. You make your best guess, go up a floor and… cross your fingers. And then you do it again. That is how a lot of our older software systems that control crucial parts of infrastructure are run. This works for a while, but every new layer adds more vulnerability. We are building skyscraper favelas in code — in earthquake zones."



"Essentially, there is a lot of equivalent of “duct-tape” in the code, holding things together. If done right, that code will eventually be fixed, commented (explanations written up so the next programmer knows what the heck is up) and ported to systems built for the right scale — before there is a crisis. How often does that get done? I wager that many wait to see if the system comes crashing down, necessitating the fix. By then, you are probably too big to go down for too long, so there’s the temptation for more duct tape. And so on."



"This is a bit like knowing you have a chronic condition, but pretending that the costs you will face are limited to those you will face this month. It’s a lie, everyone knows it’s a lie, but it makes those numbers look good now, as long as we are all suspending disbelief. (Also, this is why a lot of educational technology efforts fail: nobody budgets for maintenance, some parts of the system go down, and teachers and kids rightfully abandon it. I heard a lot about this “no maintenance money” problem from researchers looking into the one laptop per child project)."



"There is a lot of interest, and boondoggle money, in exaggerating the “cyber-terrorism” threat (which is not unreal but making software better would help that a lot more than anything devoted solely to “cyber-terrorism” — but, hey, you know which buzzword gets the funding), and not much interest in spending real money in fixing the boring but important problems with the software infrastructure. This is partly lack of attention to preventive spending which plagues so many issues (Hello, Amtrak’s ailing rails!) but it’s also because lousy software allows … easier spying. And everyone is busy spying on everyone else, and the US government, perhaps best placed to take a path towards making software more secure, appears to have chosen that path as well. I believe this is a major mistake in the long run, but here we are."



"I’m actually more scared at this state of events than I would’ve been at a one-off hacking event that took down the NYSE. Software is eating the world, and the spread of networked devices through the “internet of things” is only going to accelerate this. Our dominant operating systems, our way of working, and our common approach to developing, auditing and debugging software, and spending (or not) money on its maintenance, has not yet reached the requirements of the 21st century. So, yes, NYSE going down is not a big deal, and United Airlines will probably have more ground halts if they don’t figure out how to change their infrastructure (not a cheap or easy undertaking). But it’s not just them. From our infrastructure to our privacy, our software suffers from “software sucks” syndrome which doesn’t sound as important as a Big Mean Attack of Cyberterrorists. But it is probably worse in the danger it poses.

And nobody is likely going to get appointed the Czar of How to Make Software Suck Less.

So, yes. Be scared. Be very worried. Software is eating the world, and it sucks."
code  software  technology  2015  complexity  speed  zeyneptufekci 
july 2015 by robertogreco
Re:publica Keynote: The System is Broken – That’s the Good News | ... My heart’s in Accra
"…Ivan Krastev, chairman of the Center for Liberal Strategies, in Sofia, Bulgaria. He worries that even if protests like the Indignados or Occupy succeed in ousting a government, much of what protesters are asking for is not possible. “Voters can change governments, yet it is nearly impossible for them to change economic policies.” When Indignados grows into Podemos, Krastev predicts that it’s going to be very hard for them to truly reverse policies on austerity – global financial markets are unlikely to let them do so, punishing them by making it impossibly expensive to borrow.

Krastev offers the example of how Italy finally got rid of Silvio Berlusconi – wasn’t through popular protest, but through the bond market – the bond market priced italian debt at 6.5%, and Berlusconi resigned, leaving Mario Monti to put austerity measures in place. You may have been glad to see Berlusconi go, but don’t mistake this as a popular revolt that kicked him out – it was a revolt by global lenders, and basically set the tone for what the market would allow an Italian leader to do. As Krastev puts it, “Politics has been reduced to the art of adjusting to the imperatives of the market” – we’ve got an interesting test of whether this theory is right with Syriza, a left-wing party rooted in anti-austerity protests now in power, and facing possible default and exit from the Eurozone this month. What Krastev is saying is really chilling – we can oust bad people through protest and elect the right people and put them in power, we can protest to pressure our leaders to do the right things, and they may not be powerful enough to give us the changes we really want."



"These three approaches – building new institutions, becoming engaged critics of the institutions we’ve got, and looking for ways to build a post-institutional world – all have their flaws. We need the new decentralized systems we build to work as well as the institutions we are replacing, and when Mt. Gox disappears with our money, we’re reminded what a hard task this is. Monitorial citizenship can lead to more responsible institutions, but not to structural change. When we build new companies, codebases and movements, we’ve got to be sure these new institutions we’re creating stay closer to our values than those we mistrust now, and that they’re worthy of the trust of generations to come.

What these approaches have in common is this: instead of letting mistrust of the institutions we have leave us sidelined and ineffective, these approaches make us powerful. Because this is the middle path between the ballot box and the brick – it’s taking the dangerous and corrosive mistrust we now face and using it to build the institutions we deserve. This is the challenge of our generation, to build a better world than the one we inherited, one that’s fairer, more just, one that’s worthy of our trust."
ethanzuckerman  ivankrastev  quinnnorton  zeyneptufekci  democracy  politics  institutions  euope  us  protest  occupywallstreet  ows  voting  decentralization  internet  citizenship  civics  monotorialcitizenship  globalization  finance  capitalism  austerity  markets  indignados  government  power  control 
may 2015 by robertogreco
Zeynep Tufecki, David Graeber and the End of Work – Flavorwire
"We can do better than this, though. We have to. The great myth of the “free market” capitalism in which we exist today is that it’s the best of all possible worlds, that it might not be perfect but it’s the best we can do. This is a line that’s often wheeled out by right-wing demagogues with interests to promote — none other than George W. Bush pronounced “democratic capitalism” to be “the best system ever devised” in an address to the nation in September 2008. This is demonstrably untrue — for a start, pretty much every other First World country, and plenty of non-First World ones, manage to give their citizens universal healthcare — but even if it was true, “best to date” doesn’t mean “best possible.”

The paucity of depictions of a post-work society in pop culture seems to reflect a defeatist mentality, the idea that such a world can never and will never exist. The prevalence of dystopian fiction clearly reflects a growing pessimism about the future, a sort of impotent cynicism that holds that this is the way things will always be, and there’s nothing we can do about it. And, of course, the fact that none of us have ever lived in anything but this capitalist system makes it feel somehow wrong to imagine a world where work isn’t venerated as a virtuous thing in and of itself. People tend to recoil instinctively from ideas like guaranteed minimum income, despite the distinct possibility that it’s exactly what is needed to rectify some of the structural problems of the economy as it exists today — even the idea’s proponents scurry to make it clear that it’s not a disincentive to work.

But why shouldn’t it be? Or, more importantly, why don’t we address the most significant disincentive to work: that the majority of it is a giant waste of everyone’s time on this planet? Work has value if its product is valuable. Someone digging a hole to install, say, a storm drain, is valuable labor; someone digging a hole so someone else can fill it in is a waste of both people’s time, even if it provides them both with a job.

If we’re to devote our imagination to anything, it shouldn’t be ways to outpace the machines, or what might happen when they take over completely. It should be how our economy might work in a world where consumption isn’t a sort of economic ouroboros, where all the hole digging that is required gets done and everyone else finds something better to do with their time. If economists think about such a world today, it’s with trepidation. For the rest of us, though, imagining a world where bullshit jobs are a thing of the past… that’s a challenge we should relish."
tomhaking  2015  zeyneptufekci  labor  work  davidgraeber  universalbasicincome  post-worksociety  society  economics  estherkaplan  ubi 
april 2015 by robertogreco
The Machines Are Coming - NYTimes.com
"But computers do not just replace humans in the workplace. They shift the balance of power even more in favor of employers. Our normal response to technological innovation that threatens jobs is to encourage workers to acquire more skills, or to trust that the nuances of the human mind or human attention will always be superior in crucial ways. But when machines of this capacity enter the equation, employers have even more leverage, and our standard response is not sufficient for the looming crisis.

Machines aren’t used because they perform some tasks that much better than humans, but because, in many cases, they do a “good enough” job while also being cheaper, more predictable and easier to control than quirky, pesky humans. Technology in the workplace is as much about power and control as it is about productivity and efficiency.

This used to be spoken about more openly. An ad in 1967 for an automated accounting system urged companies to replace humans with automated systems that “can’t quit, forget or get pregnant.” Featuring a visibly pregnant, smiling woman leaving the office with baby shower gifts, the ads, which were published in leading business magazines, warned of employees who “know too much for your own good” — “your good” meaning that of the employer. Why be dependent on humans? “When Alice leaves, will she take your billing system with her?” the ad pointedly asked, emphasizing that this couldn’t be fixed by simply replacing “Alice” with another person.

The solution? Replace humans with machines. To pregnancy as a “danger” to the workplace, the company could have added “get sick, ask for higher wages, have a bad day, have an aging parent, a sick child or a cold.” In other words, be human."



"This is the way technology is being used in many workplaces: to reduce the power of humans, and employers’ dependency on them, whether by replacing, displacing or surveilling them. Many technological developments contribute to this shift in power: advanced diagnostic systems that can do medical or legal analysis; the ability to outsource labor to the lowest-paid workers, measure employee tasks to the minute and “optimize” worker schedules in a way that devastates ordinary lives. Indeed, regardless of whether unemployment has gone up or down, real wages have been stagnant or declining in the United States for decades. Most people no longer have the leverage to bargain.

In the 1980s, the Harvard social scientist Shoshana Zuboff examined how some workplaces used technology to “automate” — take power away from the employee — while others used technology differently, to “informate” — to empower people.

For academics, software developers and corporate and policy leaders who are lucky enough to live in this “informate” model, technology has been good. So far. To those for whom it’s been less of a blessing, we keep doling out the advice to upgrade skills. Unfortunately, for most workers, technology is used to “automate” the job and to take power away.

And workers already feel like they are powerless as it is. Last week, low-wage workers around the country demonstrated for a $15-an-hour wage, calling it economic justice. Those with college degrees may not think that they share a problem with these workers, who are fighting to reclaim some power with employers, but they do. The fight is poised to move up the skilled-labor chain.

Optimists insist that we’ve been here before, during the Industrial Revolution, when machinery replaced manual labor, and all we need is a little more education and better skills. But that is not a sufficient answer. One historical example is no guarantee of future events, and we won’t be able to compete by trying to stay one step ahead in a losing battle.

This cannot just be about machines’ capabilities or human skills, since the true solution lies in neither. Confronting the threat posed by machines, and the way in which the great data harvest has made them ever more able to compete with human workers, must be about our priorities.

It’s easy to imagine an alternate future where advanced machine capabilities are used to empower more of us, rather than control most of us. There will potentially be more time, resources and freedom to share, but only if we change how we do things. We don’t need to reject or blame technology. This problem is not us versus the machines, but between us, as humans, and how we value one another."
zeyneptufekci  future  automation  robots  labor  work  machiens  humans  2015  empowerment  control  surveillance  economics  history  technology  wages  shoshanazuboff 
april 2015 by robertogreco
Why Twitter Should Not Algorithmically Curate the Timeline — The Message — Medium
"Twitter brims with human judgment, and the problem with algorithmic filtering is not losing the chronology, which I admit can be clumsy at times, but it’s losing the human judgment that makes the network rewarding and sometimes unpredictable. I also recently wrote about how #Ferguson surfaced on Twitter while it remained buried, at least for me, in curated Facebook—as many others noted, Facebook was awash with the Ice Bucket Challenge instead, which invites likes and provides videos and tagging of others; just the things an algorithm would value. This isn’t a judgement of the value of the ALS challenge but a clear example of how algorithms work—and don’t work.

Algorithms are meant to be gamed—my Facebook friends have now taken to posting faux “congratulations” to messages they want to push to the top of everyone’s feeds, because Facebook’s algorithm pushes such posts with the phrase “congratulations” in the comments to top of your feed. Recently, a clever friend of mine asked to be faux congratulated on her sale of used camera equipment. Sure enough! Her network reported that it stayed on top of everyone’s feed for days. (And that’s why you have so many baby/marriage/engagement announcements in your Facebook feed—and commercial marketers are also already looking to exploit this).
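The keyword-triggered boost described above can be sketched as a toy ranker. This is purely illustrative: the function, the weights, and the idea that a single comment keyword adds a fixed score bump are all assumptions for the sake of the example, not Facebook's actual algorithm.

```python
def feed_rank(posts):
    """Toy engagement ranker, illustrating how a keyword heuristic
    invites gaming. Hypothetical; not Facebook's actual algorithm."""
    def score(post):
        s = post["likes"] + 2 * len(post["comments"])
        # Hypothetical life-event boost: any comment containing
        # "congratulations" pushes the post far up the feed.
        if any("congratulations" in c.lower() for c in post["comments"]):
            s += 100
        return s
    return sorted(posts, key=score, reverse=True)

# A faux-congratulated camera sale outranks a far more liked news post.
posts = [
    {"id": "news", "likes": 80, "comments": ["important"]},
    {"id": "camera-sale", "likes": 3, "comments": ["Congratulations!"]},
]
ranked = feed_rank(posts)  # "camera-sale" lands on top
```

Once users notice the boost, posting “congratulations” becomes a reliable exploit, which is exactly the gaming described above.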

For another thing, algorithmic curation will make writing-to-be-retweeted, which already plagues Twitter, much worse. I’m not putting down the retweetable quote; just the behavior that optimizes for that above everything else — and I know you’ve seen that kind of user. Some are quite smart. Many are very good writers. Actually, many are unfortunately very good writers. They are also usually insufferable. I can see them taking over an algorithmic Twitter.

Bleargh, I say.

But the bigger loss will be the networked intelligence that prizes emergence over engagement and interaction above the retweetable — which gets very boring very quickly. I know Twitter thinks it may increase engagement, but it will decrease engagement among some of its most creative segments.

What else will a curated feed optimize for? It will almost certainly look more like television since there is a reason television looks like television: that’s what advertisers like. There will be more celebrities. There will be more pithy quotes. There will be even more outrage, and even more lovable, fluffy things (both are engaging, and remember, algorithms will optimize for engagement). There will be more sports and television events. There will be less random, weird and otherwise obscure content being surfaced by the collective, networked judgement of the users I choose to follow.

Does Twitter have a signal-to-noise problem? Sure, sometimes. But remember, one person’s noise is another’s signal. Is the learning curve too steep? Yes, it is. Is there a harassment issue, especially for the users with amplified footprints? Absolutely."



"Never forget: the algorithm giveth but it also taketh away. Don’t let it take away the network because it’s the flock, not the bird, that provides the value."
algorithms  twitter  zeyneptufekci  2014  filters  filtering  human  judgement  unpredictability  emergence  voice  facebook  socialmedia 
september 2014 by robertogreco
What Happens to #Ferguson Affects Ferguson: — The Message — Medium
"This isn’t about Facebook per se—maybe it will do a good job, maybe not—but the fact that algorithmic filtering, as a layer, controls what you see on the Internet. Net neutrality (or lack thereof) will be yet another layer determining this. This will come on top of existing inequalities in attention, coverage and control.

Twitter was also affected by algorithmic filtering. “Ferguson” did not trend in the US on Twitter but it did trend locally. [I’ve since learned from @gilgul that it *briefly* trended but mostly trended at localities.] So, there were fewer chances for people not already following the news to see it on their “trending” bar. Why? Almost certainly because there was already national, simmering discussion for many days and Twitter’s trending algorithm (said to be based on a method called “term frequency inverse document frequency”) rewards spikes… So, as people in localities who had not been talking a lot about Ferguson started to mention it, it trended there, though the national build-up over the last five days penalized Ferguson.

Algorithms have consequences.

Mass media, typically, does not do very well covering chronic problems of unprivileged populations; poor urban blacks bear the brunt of this, but they are not alone. Rural, mostly white America, too, is almost always ignored except for the occasional “meth labs everywhere” story. But yesterday, many outlets were trying, except the police didn’t let them. Chris Hayes says that police ordered satellite trucks off the area so that they could not go live. The Washington Post was only one outlet whose journalists were arrested — citizen journalists were targeted as well.

On the scrappy live feed kept up by frequently tear-gassed, coughing citizen journalists, I heard the announcements calling on them to “turn off their cameras.”

But maybe in the future, they won’t have to bother to arrest journalists and force cameras off. In California, legislation is being considered for “kill switches” in phones — a feature for which I honestly cannot imagine a good use in the United States.

The citizen journalists held on, even as they choked from the gas; some traditional media started going live from the region, and today, it’s on the front page of many newspapers.

Maybe, just maybe, there can be a national conversation on these topics long-ignored outside these communities. That’s not everything: it may be a first step, or it may get drowned out.

But at least, we are here.

But I’m not quite sure that without the neutral side of the Internet—the livestreams whose “packets” traveled as fast as the commercial, corporate and moneyed speech on our networks, the Twitter feeds determined not by opaque corporate algorithms but by my own choices—we’d be having this conversation.

So, I hope that in the coming days, there will be a lot written about race in America, about the militarization of police departments, and about the lack of living-wage jobs in large geographic swaths of the country.

But keep in mind, Ferguson is also a net neutrality issue. It’s also an algorithmic filtering issue. How the internet is run, governed and filtered is a human rights issue.

And despite a lot of dismal developments, this fight is far from over, and its enemy is cynicism and dismissal of this reality.

Don’t let anyone tell you otherwise.

What happens to #Ferguson affects what happens to Ferguson."
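[A note on the trending mechanism quoted above: a minimal sketch of “term frequency inverse document frequency,” with invented toy documents, showing why a term that spikes in one time window scores high while the same term, discussed steadily in every window, scores zero. The documents and numbers here are illustrative, not Twitter’s actual implementation:

```python
import math

def tf_idf(term, doc, corpus):
    # term frequency: share of this document made up of the term
    tf = doc.count(term) / len(doc)
    # inverse document frequency: terms present in fewer documents score higher
    df = sum(1 for d in corpus if term in d)
    idf = math.log(len(corpus) / df)
    return tf * idf

# Treat each "document" as one time window of chatter.
# A term that spikes in a single window scores high...
spiky = [["ferguson"] * 5 + ["other"] * 5, ["other"] * 10, ["other"] * 10]
# ...while the same term, present in every window, gets zero IDF.
steady = [["ferguson"] * 2 + ["other"] * 8] * 3

print(tf_idf("ferguson", spiky[0], spiky))   # high score
print(tf_idf("ferguson", steady[0], steady)) # 0.0
```

This is the sense in which a sustained, simmering national conversation can be penalized by a spike-rewarding algorithm.]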
zeyneptufekci  censorship  internet  netneutrality  twitter  facebook  news  media  2014  ferguson  algorithms  class  race  economics  television  tv  citizenjournalism 
august 2014 by robertogreco
Failing the Third Machine Age: When Robots Come for Grandma — The Message — Medium
"In fact, automation usually follows this path: first, the job is broken down into pieces, and “lower-end” pieces are first outsourced to cheaper labor (China in the 20th century, or rural laborers who fled to cities in the 19th century), then automated and replaced with machines, then integrated into even more powerful machines.

And this automation always moves up the value chain. First, the machine does the arithmetic, but the human is still solving the integrals. Then Matlab comes for the integrals. Next, machines are doing mathematical proofs, and so it goes up the value chain, often until it hits a regulatory block, hence Silicon Valley’s constant desire to undermine regulation and licensing. Doctors are somewhat safe, for example, because of licensing requirements, but technology can find a way around that, too: witness the boom in cheaper radiologists located in India, reading US-based patients’ x-rays and MRIs; and “homework tutors” who tutor US-based kids remotely from China.

For example, it was nurses who used to take blood pressure. Then it became nurses’ assistants or physicians’ assistants—much lower-paid jobs that require less training. Then came machines that do a reasonable job of taking your blood pressure, and the job became even less skilled. More and more, you only see your doctor for a few minutes, so that her highly-paid time is dedicated only to that which she can do—is licensed to do—and everything else is either automated or done by someone paid much less.

This arrangement has advantages but it is not without trade-offs. Your doctor will miss anything that requires a broader eye and reflection, because she’s spending very little time with you, and the information she has about you in front of her is low bandwidth—whatever the physician’s assistant checked on a chart. She may or may not notice your slightly pale skin if it’s not noted on the chart. Most of the time, that’s okay. Sometimes, though, patients spend months and years in this “low-bandwidth” medical care environment while nobody puts two-and-two-and-three-and-that-pale-skin and wait-didn’t-you-have-a-family-history-of-kidney-disease together.

Occasionally, loss of holistic awareness due to division of labor between humans and machines ends up in disasters."



"It’s those face-to-face professions, ones in which being in contact with another human being are important, that are growing in numbers—almost every other profession is shrinking, numerically.

No, there won’t be a shortage of engineers and programmers either—engineers and programmers, better than anyone, should know that machine intelligence is coming for them fairly soon, and will move up the value chain pretty quickly. Also, much of this “shortage,” too, is about controlling workers and not paying them—note how Silicon Valley colluded to not pay its engineers too much, even as the companies in question had hoarded billions in cash. In a true shortage under market conditions, companies would pay more for that which was scarce. Instead, wages are stagnant in almost all professions, including technical ones.

Many of the jobs the BLS says will grow, however, are only there by the grace of a generation that still wants to see a cashier while checking out—and besides, they are low-paid jobs. Automation plus natural language processing by machines is going to obliterate those jobs in the next decade or two. (Is anyone ready for the even worse labor crisis that will ensue?) Machines will take your order at the fast-food joint, they will check out your groceries without having to scan them, and it will become even harder to get a human on the customer service line.

What’s left as jobs are those transactions in which the presence of a human is something more than a smiling face that takes your order and enters it into another machine—the cashier and the travel agent that have now been replaced by us, in the “self-serve” economy.

What’s left is deep emotional labor: taking care of each other.

And emotional labor is already greatly devalued: notice how little most of it is paid: health aides and pre-school teachers are among the lowest-paid jobs even though the work is difficult and requires significant skill and emotional labor. It’s also crucial work: economists estimate a good kindergarten teacher is worth about $320,000 a year, when measured by the adult outcomes of the children she teaches well. (And yes, devalued emotional labor is mostly a female job around the world—and the gendered nature of this reality is a whole other post.)

And the argument now is that we should turn care over to machines as well, because there is a “shortage of humans.”

What are seven billion people supposed to do? Scour Task Rabbit hoping that the few percent who will have money to purchase services have some desires that still require a human?

Turning emotional labor over to machines isn't just economically destructive; it’s the very description of inhuman.

In my view, warehousing the elderly and children—especially children with disabilities—in rooms with machines that keep them busy, while large numbers of human beings around the world are desperate for jobs that pay a living wage, is worse than the Dickensian nightmares of mechanical industrialization; it’s worse than the cold, alienated workplaces depicted by Kafka."



"So where to go? Here’s where not to go. Expecting all care work to be unpaid and done voluntarily (almost solely by women) is not the path forward.

I don’t mourn if Deep Blue beats Kasparov. Chess is a fine game, but it’s a pretty rigid game, invented by us as a game exactly because it doesn't play to our strengths—that’s why it’s a challenge and a game worth playing. If we were naturally good at it, there’d be no point to it as a game. I don’t mourn not having to dig ditches—though abandoning our flesh as if it were irrelevant is turning out not to be a good idea. Many of us hop on exercise machines that go nowhere to counter our coerced sedentary lifestyle, a development surely bemusing to our ditch-digging ancestors.

But surely we should mourn if we put our elderly and our children in the “care” of metal objects animated by software because we, the richest society the world has ever seen, with so much abundance of wealth that there are persistent asset bubbles—indicating piles of wealth looking for something, anything, to invest in—as well as hundreds of millions, if not billions, of under- and unemployed people around the world looking for a way to make a living in a meaningful way, cannot bring together the political will to remain human by taking care of each other, and making a decent living doing so."
automation  capitalism  economics  jobs  work  labor  2014  relationships  zeyneptufekci  edtech  care  caring  purpose  dehumanization  humanism  humans  society  childcare  aging  elderly  industrialization  emotionallabor  shrequest1  softskills 
july 2014 by robertogreco
Learning From @NateSilver538's OMG-Wrong #Bra vs #Ger Prediction — The Message — Medium
"Problem One: Ignoring Measurement Error in Your Data



All measurements are partial, incorrect reflections. We are always in Plato’s Cave. Everything is a partial shadow. There is no perfect data, big or otherwise. All researchers should repeat this to themselves, and even more importantly, to the general public to avoid giving the impression that some kinds of data have special magic sauce that make them error-free. Nope."



"In other words, this time, the hedgehogs knew something the fox didn’t. But this fox is often too committed to methodological singularity and to fighting pundits, sometimes for the sake of fighting them, so it often doesn’t like to listen to non-statistical data. In reality, methodological triangulation is almost always stronger, though harder to pull off.

The Fix? Find Experts You Trust and/or do Qualitative Pull-Outs.
Instead of the aggressive pundit-versus-data stance taken by some big data proponents, it’s important to recognize that substantive area experts are often pretty good at recognizing measurement errors."



"If the substantive experts are deemed unreliable, another option is “qualitative pull-outs” of your data to check for measurement error."



"Problem Two: Ignoring Field (or Broad) Effects

Many data analytic efforts look only at the network (or the team at hand) without considering how events affect the whole field."



"Since humans have this thing called psychology, it’s not easy to run data analysis by replacing one human with another, as if they were Lego pieces, and looking at the resulting structure. It rarely works that way. In fact, the original FiveThirtyEight analysis mentions this factor, but their predictive score seems completely unaffected by this reality."



"The Fix? Recognize Frailties in Studying Human Endeavors
This one is hard because this is a structural feature of most human endeavors—and one that disciplined efforts like militaries and organized sports try to minimize through extensive training and drilling so people do act like cogs in a machine, even under pressure. Still, though, it’s hard to fully account for."



"Problem Three: Humans Are Not Gases in a Chamber, but reflexive beings who react to events.

A problem with much statistical analysis is ignoring the fact that humans, umm, react to things around them. (Social science jargon for this is reflexivity). I know this seems so simple, but it’s amazing how much predictive analytics don’t factor this in.



The Fix? Recognize that Stability and Instability are not that far apart for structural reasons
This, too, comes under “hard to predict exact timing but not hard to substantively discuss possibility” category. It means recognizing that models that are snapshots in time are just that—and that events themselves carry force in feedback loops."
zeyneptufekci  data  statistics  fox  hedgehog  worldcup  behavior  psychology  socialscience  measurement  2014  qualitativedata  quantification 
july 2014 by robertogreco
Expensive Bikes, Cheap Lives — The Message — Medium
"Bait Bikes and Felonies in a Divided San Francisco"



"Let me rephrase: in a city in which inequality is greatly increasing, in which those outside the tech industry are struggling to pay rent and deal with the increasing cost of living, and in which flush, moneyed tech employees are buying more and more expensive bikes (which, the article notes, can cost $10,000), the police are luring people to steal them by intentionally using bait bikes so expensive that the people tempted to steal them can be charged with felonies. If convicted, they can no longer vote in many states, and are unemployable in large sectors of the economy for all practical purposes.

What could go wrong?"



"I’m not denying anyone’s right to fume over a stolen bike, nor am I justifying street crime. What I’m arguing is that our individualized outrage over small-scale crime is hiding terrible policy effects, and that our “serves the thief right” knee-jerk response—quite understandable from an individual point of view—reflects distorted priorities that make things worse for all of us in the long run. Similar to misguided three-strikes laws that saw some small-time criminals serve life sentences for minor thefts—like shoplifting—bait bikes designed to trigger felonies can waste lives and resources. Individual, momentary outrage feels good and justified, but its impacts, distorted through priorities aimed at appeasing us rather than solving the problem, do not leave any of us better off in the long run."
crime  culture  inequality  legal  police  sanfrancisco  bikes  zeyneptufekci  economics  policy  criminalization  poverty  2014 
may 2014 by robertogreco
Can “Leaderless Revolutions” Stay Leaderless: Preferential Attachment, Iron Laws and Networks | technosociology
"Many commentators relate the diffuse, somewhat leaderless nature of the uprisings in Egypt and Tunisia (and now spreading elsewhere) with the prominent role social-media-enabled peer-to-peer networks played in these movements. While I remain agnostic but open to the possibility that these movements are more diffuse partially due to the media ecology, it is wrong to assume that open networks “naturally” facilitate “leaderless” or horizontal structures. On the contrary, an examination of dynamics in such networks, and many examples from history, show that such set-ups often quickly evolve into very hierarchical and ossified networks not in spite of, but because of, their initial open nature."



"I agree and have said before that this was the revolution of a networked public, and as such, not dominated by traditional structures such as political parties or trade-unions (although such organizations played a major role, especially towards the end). I have also written about how this lack of well-defined political structure might be both a weakness and a strength.

A fact little-understood but pertinent to this discussion, however, is that relatively flat networks can quickly generate hierarchical structures even without any attempt at a power grab by emergent leaders or by any organizational, coordinated action. In fact, this often occurs through a perfectly natural process, known as preferential attachment, which is very common to social and other kinds of networks."



"Disposition is not destiny. In one of my favorite books as a teenager, The Dispossessed, Ursula K. Le Guin imagines a utopian colony under harsh conditions and describes its attempts to guard against the rise of such an ossified leadership through multiple mechanisms: rotation of jobs, refusal of titles, attempts to use a language based on sharing and utility rather than possession, and others. The novel does not resolve whether it is all futile, but it certainly conveys the yearning for a truly egalitarian society.

If the nascent revolutionaries in Egypt are successful in finding ways in which a movement can leverage social media to remain broad-based, diffused and participatory, they will truly help launch a new era beyond their already remarkable achievements. Such a possibility, however, requires a clear understanding of how networks operate and an explicit aversion to naïve or hopeful assumptions about how structures which allow for horizontal congregation will necessarily facilitate a future that is non-hierarchical, horizontal and participatory. Just like the Egyptian revolution was facilitated by digital media but succeeded through the bravery, sacrifice, intelligence and persistence of its people, ensuring a participatory future can only come through hard work as well as the diligent application of thoughtful principles to these new tools and beyond."
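[A note on the preferential attachment dynamic described above: a minimal simulation, with invented parameters, in which each newcomer links to an existing node with probability proportional to that node's current degree. Even with no coordinated power grab, a small set of early nodes ends up holding a disproportionate share of the connections:

```python
import random

def preferential_attachment(n_nodes, seed=0):
    """Grow a network: each new node attaches to one existing node,
    chosen with probability proportional to its current degree."""
    rng = random.Random(seed)
    degrees = [1, 1]      # start with two connected nodes
    targets = [0, 1]      # node i appears here once per link it holds
    for new in range(2, n_nodes):
        hub = rng.choice(targets)   # degree-weighted choice
        degrees[hub] += 1
        degrees.append(1)
        targets.extend([hub, new])
    return degrees

deg = preferential_attachment(10_000)
top_share = sum(sorted(deg, reverse=True)[:100]) / sum(deg)
print(f"top 1% of nodes hold {top_share:.0%} of all links")
```

Under strict equality the top 1% of nodes would hold 1% of the links; the simulation reliably concentrates several times that share in the earliest arrivals, which is the "rich get richer" hierarchy the post describes emerging from an initially open network.]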
egypt  anarchism  horizontality  hierarchy  hierarchies  socialnetworks  2011  groupdynamics  sociology  zeyneptufekci  organizations  tunisia  arabspring 
may 2014 by robertogreco
Is the Internet good or bad? Yes.  — Matter — Medium
"Resistance and surveillance: The design of today’s digital tools makes the two inseparable. And how to think about this is a real challenge. It’s said that generals always fight the last war. If so, we’re like those generals. Our understanding of the dangers of surveillance is filtered by our thinking about previous threats to our freedoms. But today’s war is different. We’re in a new kind of environment, one that requires a new kind of understanding."

[Update 3 March 2014: Kars Alfrink responds: http://tumblr.leapfrog.nl/post/78430061210/resistance-and-surveillance-the-design-of-todays

"OK, this is one of the best things I’ve read on Occupy Gezi Park in ages or probably ever. Last year I wrote up my thoughts on gameful design and the built environment in a chapter for The Gameful World and what emerged was mainly about legibility and resistance. This article describes in great detail both the workings and value of street protests and the mechanisms by which contemporary Western regimes (attempt to) control people. In my chapter I suggested that fuzzing, making oneself illegible, is the most effective strategy for resistance in this day and age. I think that is supported by what Zeynep Tufekci argues here." ]
surveillance  resistance  protest  nsa  2014  zeyneptufekci 
february 2014 by robertogreco
Computers are for people
"Markets are gonna market, and specs are gonna spec, but it often feels like companies are forgetting that computers are for people, first. And people have bodies, those bodies have limitations, and all of us have limitations in specific situations.

We're all disabled sometimes. If I turn off the lights in your room, you can't see. If I fill the room with enough noise, you can't hear. If your hands are full, you can't use them to do anything else.

But as Sara Hendren writes, "all technology is assistive technology." When it's working right, technology helps people of every ability overcome these limitations. It doesn't throw us back into the world of assumptions that expects us all to be fully capable all of the time.

That's not what good technology does. That's not what good design does. That's what assholes do.

I think often about Jason's post on one-handed computing because I'm in the story. He wrote it for his wife, and he wrote it for me. I'd badly broken my right arm in an accident, snapping my radius in half and shooting it out of my body."



"The thing that tech companies forget -- that journalists forget, that Wall Street never knew, that commenters who root for tech companies like sports fans for their teams could never formulate -- that technology is for people -- is obvious to Jason. Technology is for us. All of us. People who carry things.

People. Us. These stupid, stubborn, spectacular machines made of meat and electricity, friends and laughter, genes and dreams."

[Update: see also (via @ablerism):
"It’s a Man’s Phone: My female hands meant I couldn’t use my Google Nexus to document tear gas misuse"
https://medium.com/technology-and-society/its-a-mans-phone-a26c6bee1b69 ]
technology  timcarmody  2013  assistivetechnology  sarahendren  humans  vulnerability  ability  disability  iphone  limitations  computing  computers  accessibility  computersareforpeople  disabilities  zeyneptufekci 
october 2013 by robertogreco
Facebook and the Epiphanator: An End to Endings? -- Daily Intel [Don't rely on the quotes here. Read the whole thing.]
"…should be a word for that feeling you get when an older person…shames himself by telling young people how to live…

Obviously, the Epiphanator will need to slim down in order to thrive, but a careful study of history shows how impossible it is to determine whether it can return to both power & glory, or whether its demise is imminent…

This moment of anxiety and fear will pass; future generations (there's now one every 3-4 years) will have no idea what they missed, & yet they will go on, marry, divorce, & own pets.

They may even work in journalism, not in the old dusty career paths…

We'll still need professionals to organize the events of the world into narratives, & our story-craving brains will still need the narrative hooks, the cold opens, the dramatic climaxes, & that all-important "■" to help us make sense of the great glut of recent history that is dumped over us every morning. No matter what comes along—streams, feeds, & walls—we will still have need of an ending.
technology  media  socialmedia  facebook  privacy  paulford  narrative  jonathanfranzen  zadiesmith  billkeller  zeyneptufekci  life  wisdom  journalism  storytelling  endings  epiphanator  love  living  stevejobs  commencementspeeches  wholeearthcatalog  stewartbrand  aaronsorkin  2011  nuance  feral  unfinished  culture  internet 
july 2011 by robertogreco
