
katecrawford

Facial recognition: No credit with this nose
23.09.2019 Michael Moorstedt
Automated facial recognition is slowly but surely becoming ubiquitous.
A website now lets you try out how the algorithms working in the background size you up.
The results show that the algorithms take on the prejudices of the clickworkers who trained them.
KI  Algorithmen  KateCrawford  AINowInstitute 
15 days ago by amprekord
The 10 Top Recommendations for the AI Field in 2017
Today we released our second annual research report on the state of artificial intelligence. Since last year’s report, we’ve seen early stage AI technologies continue to filter into many everyday…
AI  machinelearning  maschinellesLernen  KünstlicheIntelligenz  Algorithmus  KateCrawford  AINowInstitute 
november 2017 by amprekord
Artificial Intelligence—With Very Real Biases - WSJ
There are no established methods to test AI for safety, fairness or effectiveness
ai  katecrawford  from twitter_favs
october 2017 by silvertje
Kate Crawford on Twitter: "Meanwhile, a new attack uses ultrasonic commands to make Siri & Alexa do anything, but no human can hear it. https://t.co/vBuqwDNXIm https://t.co/tch4K8gZMz"
FavoriteTweet  katecrawford 
september 2017 by mjtsai
Why Machines Discriminate—and How to Fix Them - Science Friday
"Kate, let me ask you, let’s say a company found that its algorithms were indeed discriminating against job applicants but it saves a ton of money to use machines and wants to stay with the machines. Is there any incentive for the company to change this? And way for it to do that?"
katecrawford  talks  sparkfile  data  algorithms 
december 2015 by sha
Do Not Track: revolutionary mashup documentary about Web privacy - Boing Boing
"Brett "Remix Manifesto" Gaylor tells the story of his new project: a revolutionary "mashup documentary" about privacy and the Web."

[This article refers to:
https://donottrack-doc.com/en/episode/1
https://donottrack-doc.com/en/episode/2
https://donottrack-doc.com/en/episode/3
https://donottrack-doc.com/en/episode/4 ]

"I make documentaries about the Internet. My last one, Rip! A Remix Manifesto, was made during the copyright wars of the early 2000s. We followed Girl Talk, Larry Lessig, Gilberto Gil, Cory and others as the Free Culture movement was born. I believed then that copyright was the Internet's defining issue. I was wrong.

In the time since I made Rip, we’ve seen surveillance from both corporate and state actors reach deeper into our lives. Advertising, and the tracking that goes with it, have become the dominant business model of the web. With the Snowden revelations, we've seen that this business model has given the NSA and other state agencies access to the intimate details of our online lives, our location, our reading lists, and our friends.

So with my colleagues at Upian in Paris, the National Film Board of Canada, AJ+, Radio-Canada, RTS, Arte and Bayerischer Rundfunk, I decided to make a documentary series about this. The trouble is, privacy is a difficult issue for most people. They either quickly pull out the "nothing to hide" argument, or they give the shruggie ¯\_(ツ)_/¯. We wanted to find a way to make this personal for people, so we decided to use the viewer's own data to create each episode.

When you open Episode One, the narrator you hear will depend on your location. You'll likely see me if you link from Boing Boing -- I'm the English narrator on desktop. But if you connect on mobile, you'll meet Francesca Fiorentini from AJ+. In Quebec, you'll meet Sandra Rodriguez. In France, it'll be journalist Vincent Glad. The tone is conversational. You'll meet someone who speaks your own language discussing their online sharing addiction.

Once you've met us, we'll say different things to you. If it's raining where you are, we'll know it, because we've plugged into a weather API. This API will communicate with Giphy's API and present different GIFs. It's all edited together like a movie, but a movie that is created on the spot, just for you.
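The weather-to-Giphy chaining described above can be sketched in a few lines. The TypeScript below is an illustration only: Do Not Track doesn't say which weather service it uses, so OpenWeatherMap stands in here, and the API keys, city lookup and response handling are assumptions rather than the project's actual code.

// Sketch only, not the real Do Not Track implementation.
const WEATHER_KEY = "YOUR_OPENWEATHERMAP_KEY"; // hypothetical credential
const GIPHY_KEY = "YOUR_GIPHY_KEY";            // hypothetical credential

async function gifForLocalWeather(city: string): Promise<string | undefined> {
  // Ask the weather API what it's like outside right now.
  const weatherRes = await fetch(
    `https://api.openweathermap.org/data/2.5/weather?q=${encodeURIComponent(city)}&appid=${WEATHER_KEY}`
  );
  const weather = await weatherRes.json();
  const condition: string = weather.weather?.[0]?.main ?? "clear"; // e.g. "Rain"

  // Hand that condition to Giphy's public search endpoint and take the first hit.
  const giphyRes = await fetch(
    `https://api.giphy.com/v1/gifs/search?api_key=${GIPHY_KEY}&q=${encodeURIComponent(condition)}&limit=1`
  );
  const giphy = await giphyRes.json();
  return giphy.data?.[0]?.images?.original?.url; // the GIF the on-the-spot "movie" splices in
}

// Example: gifForLocalWeather("Paris").then(url => console.log(url));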

To go further, we ask you to tell us a bit more about you. If you tell us where you go for your news, we've partnered with the service disconnect.me to show you the third party trackers that advertisers and analytics folks place on your computer to follow you around the Web.

In Episode Two, we then take this data to create personalized ads within the program - while we talk to Ethan Zuckerman and Julia Angwin about how advertising came to dominate the Web. We'll ask you how much you would be willing to pay for a version of Facebook or Google that didn't have ads, and compare that with how much they make from you.

In Episode Three, we created a corporation called Illuminus that practices "future present risk detection". If you log in with your Facebook profile, the corporation uses an API developed at the University of Cambridge, "Apply Magic Sauce," to determine which one of the "Big Five Personality Traits" applies to you. We discover how lenders are dipping their toes into making risk assessments based on your social media activity.

We varied our style in Episode Four and made a privacy cartoon. Journalist Zineb Dryef spent months researching what information she discloses on her mobile phone, and then Darren Pasemko animated what she learned. We meet Kate Crawford and Julia Angwin, as well as Harlo Holmes and Nathan Freitas from the Guardian Project. It’s an episode told in four parts, and you can watch the first part in the video below.

If you watch the rest of this episode on donottrack-doc.com, it will be geo-located and interactive.

Our next episode, available May 26th, is produced by the National Film Board of Canada's digital studio, who have a well-deserved reputation for creating beautiful interfaces for new types of documentaries. In this episode, we'll explore big data - by making correlations as you watch, you'll determine the outcome, while you meet danah boyd, Cory Doctorow, Alicia Garza and Kate Crawford.

We’re still catching our breath while we produce the final two episodes. One thing we know - we want these to be personal. As we learned in our first episodes, people understand the issues around privacy and surveillance when we let them explore their own data. Depending on how you behaved during the series, we want these final episodes to adapt. We’ll be exploring how the filter bubble shapes your view of the world in our 6th episode, and how our actions can shape the future in our 7th. What these episodes look like is up to you."
brettgaylor  film  interactive  interactivefilm  mashups  documentary  towatch  privacy  web  online  internet  2015  nfbc  nfb  katecrawford  corydoctoow  aliciagarza  danahboyd  location  zinebdryef  darrenpasemko  harloholmes  nathanfreitas  juliaangwin  ethanzuckerman  advertising  tracking  francescafiorentini  sandrarodriguez  giphy  api  trackers  cookies 
may 2015 by robertogreco
What is a Flag for? Social Media Reporting Tools and the Vocabulary of Complaint by Kate Crawford, Tarleton L. Gillespie :: SSRN
"The flag is now a common mechanism for reporting offensive content to an online platform, and is used widely across most popular social media sites. It serves both as a solution to the problem of curating massive collections of user-generated content and as a rhetorical justification for platform owners when they decide to remove content. Flags are becoming a ubiquitous mechanism of governance -- yet their meaning is anything but straightforward. In practice, the interactions between users, flags, algorithms, content moderators, and platforms are complex and highly strategic. Significantly, flags are asked to bear a great deal of weight, arbitrating both the relationship between users and platforms, and the negotiation around contentious public issues. In this essay, we unpack the working of the flag, consider alternatives that give greater emphasis to public deliberation, and consider the implications for online public discourse of this now commonplace yet rarely studied sociotechnical mechanism."
katecrawford  tarletongillespie  2014  moderation  flagging  online  internet  web  socialmedia  governance 
december 2014 by robertogreco
The Test We Can—and Should—Run on Facebook - Kate Crawford - The Atlantic
"We have now had a glimpse within the black box of Facebook’s experiments, and we’ve seen how highly centralized power can be exercised. It is clear that no one in the emotional contagion study knew they were participants, and even now, the full technical means and mechanisms of the study are only legible to the researchers. Nor can we know if anyone was harmed by the negatively skewed feeds. What we do know is that Facebook, like many social media platforms, is an experiment engine: a machine for making A/B tests and algorithmic adjustments, fueled by our every keystroke. This has been used as a justification for this study, and all studies like it: Why object to this when you are always being messed with? If there is no ‘natural’ News Feed, or search result or trending topic, what difference does it make if you experience A or B?

The difference, for Shils and others, comes down to power, deception and autonomy. Academics and medical researchers have spent decades addressing these issues through ethical codes of conduct and review boards, which were created to respond to damaging and inhumane experiments, from the Tuskegee syphilis experiment to Milgram’s electric shocks. These review boards act as checks on the validity and possible harms of a study, with varying degrees of effectiveness, and they seek to establish traditions of ethical research. But what about when platforms are conducting experiments outside of an academic context, in the course of everyday business? How do you develop ethical practices for perpetual experiment engines?

There is no easy answer to this, but we could do worse than begin by asking the questions that Shils struggled with: What kinds of power are at work? What are the dynamics of trust, consent and deception? Who or what is at risk? While academic research is framed in the context of having a wider social responsibility, we can consider the ways the technology sector also has a social responsibility. To date, Silicon Valley has not done well in thinking about its own power and privilege, or what it owes to others. But this is an essential step if platforms are to understand their obligation to the communities of people who provide them with content, value and meaning.

Perhaps we could nudge that process with Silicon Valley’s preferred tool: an experiment. But this time, we request an experiment to run on Facebook and similar platforms. Rather than assuming Terms of Service are equivalent to informed consent, platforms should offer opt-in settings where users can choose to join experimental panels. If they don’t opt in, they aren’t forced to participate. This could be similar to the array of privacy settings that already exist on these platforms. Platforms could even offer more granular options, to specify what kinds of research a user is prepared to participate in, from design and usability studies through to psychological and behavioral experiments.
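As a concrete shape for that proposal, here is a minimal TypeScript sketch of what granular, opt-in research settings could look like; the category names, defaults and enrolment check are assumptions for illustration, not any platform's real consent API.

// Illustration of the opt-in panels proposed above; nothing here is a real platform API.
type ResearchCategory = "design_usability" | "algorithmic_ranking" | "psychological_behavioral";

interface ResearchConsent {
  userId: string;
  optedIn: boolean;                              // master switch: off means no experiments at all
  categories: Record<ResearchCategory, boolean>; // granular choices, like existing privacy settings
}

const defaultConsent = (userId: string): ResearchConsent => ({
  userId,
  optedIn: false, // nobody participates unless they explicitly join a panel
  categories: {
    design_usability: false,
    algorithmic_ranking: false,
    psychological_behavioral: false,
  },
});

// Gate every experiment assignment on the stored consent record,
// rather than treating Terms of Service acceptance as informed consent.
function mayEnroll(consent: ResearchConsent, category: ResearchCategory): boolean {
  return consent.optedIn && consent.categories[category];
}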

Of course, there is no easy technological solution to complex ethical issues, but this would be a significant gesture on the part of platforms towards less deception, more ethical research and more agency for users.

Some companies might protest that this will reduce the quality of their experimental studies because fewer people will choose to opt in. There is a tendency in big data studies to accord merit to massive sample sizes, regardless of the importance of the question or the significance of the findings. But if there’s something we’ve learned from the emotional contagion study, it’s that a large number of participants and data points does not necessarily produce good research.

It is a failure of imagination and methodology to claim that it is necessary to experiment on millions of people without their consent in order to produce good data science. Shifting to opt-in panels of subjects might produce better research, and more trusted platforms. It would be a worthy experiment."
culture  ethics  facebook  privacy  research  2014  katecrawford  emotions 
july 2014 by robertogreco
