
robertogreco : katecrawford (4)

Do Not Track: revolutionary mashup documentary about Web privacy - Boing Boing
"Brett "Remix Manifesto" Gaylor tells the story of his new project: a revolutionary "mashup documentary" about privacy and the Web."

[This article refers to:
https://donottrack-doc.com/en/episode/1
https://donottrack-doc.com/en/episode/2
https://donottrack-doc.com/en/episode/3
https://donottrack-doc.com/en/episode/4 ]

"I make documentaries about the Internet. My last one, Rip! A Remix Manifesto, was made during the copyright wars of the early 2000s. We followed Girl Talk, Larry Lessig, Gilberto Gil, Cory and others as the Free Culture movement was born. I believed then that copyright was the Internet's defining issue. I was wrong.

In the time since I made Rip, we’ve seen surveillance from both corporate and state actors reach deeper into our lives. Advertising, and the tracking that goes with it, have become the dominant business model of the web. With the Snowden revelations, we've seen that this business model has given the NSA and other state agencies access to the intimate details of our online lives, our location, our reading lists, and our friends.

So with my colleagues at Upian in Paris, the National Film Board of Canada, AJ+, Radio-Canada, RTS, Arte and Bayerischer Rundfunk, I decided to make a documentary series about this. The trouble is, privacy is a difficult issue for most people. They either quickly pull out the "nothing to hide" argument, or they give the shruggie ¯\_(ツ)_/¯. We wanted to find a way to make this personal for people, so we decided to use the viewer's own data to create each episode.

When you open Episode One, the narrator you hear will depend on your location. You'll likely see me if you link from Boing Boing -- I'm the English narrator on desktop. But if you connect on mobile, you'll meet Francesca Fiorentini from AJ+. In Quebec, you'll meet Sandra Rodriguez. In France, it'll be journalist Vincent Glad. The tone is conversational. You'll meet someone who speaks your own language discussing their online sharing addiction.

Once you've met us, we'll say different things to you. If it's raining where you are, we'll know it, because we've plugged into a weather API. This API will communicate with Giphy's API and present different GIFs. It's all edited together like a movie, but a movie that is created on the spot, just for you.
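
[A minimal sketch of the API chaining described above, in TypeScript; the weather endpoint and its "raining" field are invented placeholders, and only the Giphy search URL reflects Giphy's actual public API:]

```typescript
// Sketch only: "api.example-weather.com" and its response shape are invented
// placeholders; the Giphy search endpoint is Giphy's real public API.
const GIPHY_KEY = "YOUR_GIPHY_API_KEY"; // placeholder credential

async function pickGifForViewer(lat: number, lon: number): Promise<string> {
  // Ask a weather service whether it is raining at the viewer's location.
  const weather = await fetch(
    `https://api.example-weather.com/current?lat=${lat}&lon=${lon}`
  ).then((res) => res.json());
  const query = weather.raining ? "rainy day" : "sunny day";

  // Feed that condition into Giphy's search API and take the top hit.
  const giphy = await fetch(
    `https://api.giphy.com/v1/gifs/search?api_key=${GIPHY_KEY}` +
      `&q=${encodeURIComponent(query)}&limit=1`
  ).then((res) => res.json());

  // The resulting GIF URL is what gets cut into the on-the-fly "movie."
  return giphy.data[0].images.original.url;
}
```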

To go further, we ask you to tell us a bit more about you. If you tell us where you go for your news, we've partnered with the service disconnect.me to show you the third-party trackers that advertisers and analytics folks place on your computer to follow you around the Web.

In Episode Two, we then take this data to create personalized ads within the program - while we talk to Ethan Zuckerman and Julia Angwin about how advertising came to dominate the Web. We'll ask you how much you would be willing to pay for a version of Facebook or Google that didn't have ads, and compare that with how much they make from you.

In Episode Three, we created a corporation called Illuminus that practices "future present risk detection". If you log in with your Facebook profile, the corporation uses an API developed at the University of Cambridge, "Apply Magic Sauce," to determine which one of the "Big Five Personality Traits" applies to you. We discover how lenders are dipping their toes into making risk assessments based on your social media activity.
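
[For illustration, a hypothetical sketch of how a client might call a personality-scoring service of this kind; the endpoint, payload, and response fields below are assumptions, not Apply Magic Sauce's documented API:]

```typescript
// Hypothetical sketch: this endpoint, payload, and response shape are invented
// for illustration and are NOT Apply Magic Sauce's documented API.
interface BigFiveScores {
  openness: number;
  conscientiousness: number;
  extraversion: number;
  agreeableness: number;
  neuroticism: number; // scores assumed normalized to [0, 1]
}

// Send the IDs of pages a viewer has Liked; get back trait predictions.
async function predictTraits(facebookLikeIds: string[]): Promise<BigFiveScores> {
  const res = await fetch("https://api.example-psychometrics.org/predict", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ likes: facebookLikeIds }),
  });
  return res.json();
}
```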

We varied our style in Episode Four and made a privacy cartoon. Journalist Zineb Dryef spent months researching what information she discloses on her mobile phone, and then Darren Pasemko animated what she learned. We meet Kate Crawford, Julia Angwin, as well as Harlo Holmes and Nathan Freitas from the Guardian Project. It’s an episode told in four parts, and you can watch the first part in the video below.

If you watch the rest of this episode on donottrack-doc.com, it will be geo-located and interactive.

Our next episode, available May 26th, is produced by the National Film Board of Canada's digital studio, who have a well-deserved reputation for creating beautiful interfaces for new types of documentaries. In this episode, we'll explore big data - by making correlations as you watch, you'll determine the outcome, while you meet danah boyd, Cory Doctorow, Alicia Garza and Kate Crawford.

We’re still catching our breath while we produce the final two episodes. One thing we know - we want these to be personal. As we learned in our first episodes, people understand the issues around privacy and surveillance when we let them explore their own data. Depending on how you behaved during the series, we want these final episodes to adapt. We’ll be exploring how the filter bubble shapes your view of the world in our 6th episode, and how our actions can shape the future in our 7th. What these episodes look like is up to you."
brettgaylor  film  interactive  interactivefilm  mashups  documentary  towatch  privacy  web  online  internet  2015  nfbc  nfb  katecrawford  corydoctoow  aliciagarza  danahboyd  location  zinebdryef  darrenpasemko  harloholmes  nathanfreitas  juliaangwin  ethanzuckerman  advertising  tracking  francescafiorentini  sandrarodriguez  giphy  api  trackers  cookies 
may 2015 by robertogreco
What is a Flag for? Social Media Reporting Tools and the Vocabulary of Complaint by Kate Crawford, Tarleton L. Gillespie :: SSRN
"The flag is now a common mechanism for reporting offensive content to an online platform, and is used widely across most popular social media sites. It serves both as a solution to the problem of curating massive collections of user-generated content and as a rhetorical justification for platform owners when they decide to remove content. Flags are becoming a ubiquitous mechanism of governance -- yet their meaning is anything but straightforward. In practice, the interactions between users, flags, algorithms, content moderators, and platforms are complex and highly strategic. Significantly, flags are asked to bear a great deal of weight, arbitrating both the relationship between users and platforms, and the negotiation around contentious public issues. In this essay, we unpack the working of the flag, consider alternatives that give greater emphasis to public deliberation, and consider the implications for online public discourse of this now commonplace yet rarely studied sociotechnical mechanism."
katecrawford  tarletongillespie  2014  moderation  flagging  online  internet  web  socialmedia  governance 
december 2014 by robertogreco
The Test We Can—and Should—Run on Facebook - Kate Crawford - The Atlantic
"We have now had a glimpse within the black box of Facebook’s experiments, and we’ve seen how highly centralized power can be exercised. It is clear that no one in the emotional contagion study knew they were participants, and even now, the full technical means and mechanisms of the study are only legible to the researchers. Nor can we know if anyone was harmed by the negatively skewed feeds. What we do know is that Facebook, like many social media platforms, is an experiment engine: a machine for making A/B tests and algorithmic adjustments, fueled by our every keystroke. This has been used as a justification for this study, and all studies like it: Why object to this when you are always being messed with? If there is no ‘natural’ News Feed, or search result or trending topic, what difference does it make if you experience A or B?

The difference, for Shils and others, comes down to power, deception and autonomy. Academics and medical researchers have spent decades addressing these issues through ethical codes of conduct and review boards, which were created to respond to damaging and inhumane experiments, from the Tuskegee syphilis experiment to Milgram’s electric shocks. These review boards act as checks on the validity and possible harms of a study, with varying degrees of effectiveness, and they seek to establish traditions of ethical research. But what about when platforms are conducting experiments outside of an academic context, in the course of everyday business? How do you develop ethical practices for perpetual experiment engines?

There is no easy answer to this, but we could do worse than begin by asking the questions that Shils struggled with: What kinds of power are at work? What are the dynamics of trust, consent and deception? Who or what is at risk? While academic research is framed in the context of having a wider social responsibility, we can consider the ways the technology sector also has a social responsibility. To date, Silicon Valley has not done well in thinking about its own power and privilege, or what it owes to others. But this is an essential step if platforms are to understand their obligation to the communities of people who provide them with content, value and meaning.

Perhaps we could nudge that process with Silicon Valley’s preferred tool: an experiment. But this time, we request an experiment to run on Facebook and similar platforms. Rather than assuming Terms of Service are equivalent to informed consent, platforms should offer opt-in settings where users can choose to join experimental panels. If they don’t opt in, they aren’t forced to participate. This could be similar to the array of privacy settings that already exist on these platforms. Platforms could even offer more granular options, to specify what kinds of research a user is prepared to participate in, from design and usability studies through to psychological and behavioral experiments.
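
[One illustrative way the granular opt-in settings proposed here could be modeled; the category names and defaults are invented for this sketch:]

```typescript
// Illustrative data shape for the opt-in research settings proposed above;
// the category names and defaults are invented for this sketch.
interface ResearchConsentSettings {
  joinExperimentalPanel: boolean; // master switch: false means never a subject
  categories: {
    designAndUsability: boolean;
    algorithmicRanking: boolean; // e.g. News Feed / search-result variations
    psychologicalBehavioral: boolean; // emotional-contagion-style studies
  };
}

// Opt-in rather than opt-out: agreeing to the Terms of Service
// is not treated as informed consent to any experiment.
const defaultSettings: ResearchConsentSettings = {
  joinExperimentalPanel: false,
  categories: {
    designAndUsability: false,
    algorithmicRanking: false,
    psychologicalBehavioral: false,
  },
};
```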

Of course, there is no easy technological solution to complex ethical issues, but this would be a significant gesture on the part of platforms towards less deception, more ethical research and more agency for users.

Some companies might protest that this will reduce the quality of their experimental studies because fewer people will choose to opt in. There is a tendency in big data studies to accord merit to massive sample sizes, regardless of the importance of the question or the significance of the findings. But if there’s something we’ve learned from the emotional contagion study, it’s that a large number of participants and data points does not necessarily produce good research.

It is a failure of imagination and methodology to claim that it is necessary to experiment on millions of people without their consent in order to produce good data science. Shifting to opt-in panels of subjects might produce better research, and more trusted platforms. It would be a worthy experiment."
culture  ethics  facebook  privacy  research  2014  katecrawford  emotions 
july 2014 by robertogreco
Save The Data Drama
"Place

A location rife with beautiful signifiers of gentle exclusion seemed an ideal place for a conference like this. Within a few minutes of Friday's talks beginning, it became clear that almost everyone at this event knew each other, and if they didn't know the speakers they were probably grad students. I came to this thing from a periphery: close enough to some of the speakers to not be totally isolated, making work that's relevant to the topic, but not transactionally useful to the majority of the people there.

I admittedly have a knack for showing up in places where I am out of place. I tend to show up to such places bearing a massive, posture-ruining, class warfare chip on my shoulder. Some of this comes from the fact that being alive feels like being out of place, because honestly I wake up every day amazed that I'm still alive after an unrelated incident in 2009 that I don't really want to talk about.

One side effect of my terrible posture is that I'm a terrible liar. When faced with the elaborate theater of someone trying to convince me that this is the hippest data center ever or that he is the Most Important Man In The Room, I kind of just start laughing. And when I start to get worked up about how out of place I am in a given situation, I get defensive and snarky.

This isn't necessarily an apology (I don't think I have to apologize for thinking that careerism is silly or for having reactions to however unintentionally hostile spaces); just context on the off-chance any of the Serious Important Men of Data Drama (who perhaps hereafter should be called the Data Drama Queens) who were probably annoyed with me read this. Don't worry guys, I'm just a silly woman living paycheck to paycheck, don't mind me.

Privilege

Honestly, I don't really like being the person at a gendered, privileged event talking about gender or privilege, because I know that I have so much privilege, and I don't want to claim to speak for the marginalized who are not in the room. Hell, we didn't even really get into how deeply white the conference was (in both speaker and audience makeup). There was a uniquely awkward moment when, during a Q&A session, filmmaker Ben Lewis complained about the difficulty he encountered finding people to interview who were concerned about or negatively affected by surveillance--this after James Bridle had given a talk about British citizens stripped of their citizenship essentially so they could be droned. The anger at Lewis' apparent ignorance was palpable, but not necessarily productive--while yes, someone probably should have said "Ben, perhaps you should consider speaking to people who don't look like you", there weren't that many examples of such people to point to in that conference room at Princeton.

When Data Drama Queens talk about the risks being faced in our new data age, the future adaptations of cyborg humans, the potential of World 2.0, who is actually being spoken about or spoken for? To what extent are these speculations of the future posed more or less for people like them?

Magic

The aesthetics of the slide talks and much of the work presented varied--from Metahaven's seapunk-Geocities collages to Adam Harvey's apparently oblivious fashion magazine-glossy male gaze--but there was a frequently ambivalent return to a rhetoric and aesthetics of awe. Despite ourselves, we are kind of in love with the technology, even if it is in the hands of the oppressor, and that's hard to reconcile.

Early on during Saturday's talks, a Q&A got into a discussion of magic, and that's the thing I keep coming back to. I'm not sure what that's going to look like, but I think it's got a lot of potential. I am for a dialogue on technology and society that allows for weirdness, allows for vulnerability, allows for humanity, requires a certain amount of faith, and isn't about pure mastery. I think there's more space for that in the language of magic, I don't know. Mostly I wanted to know how many of the people at that conference listen to Welcome to Nightvale."
ingridburrington  datadrama  2014  data  privilege  place  conferences  magic  nightvale  mastery  usmanhaque  katecrawford  cv  honesty  lying  class  classwarfare  liamyoung  jamesbridle  benlewis 
april 2014 by robertogreco
