Opinion | Chinese Hacking Is Alarming. So Are Data Brokers. - The New York Times
Using the personal data of millions of Americans against their will is certainly alarming. But what’s the difference between the Chinese government stealing all that information and a data broker amassing it legally without user consent and selling it on the open market?

Both are predatory practices that invade privacy for insights and strategic leverage. Yes, one is corporate and legal and the other geopolitical and decidedly not legal. But the hack wasn’t a malfunction of the system; it was a direct result of how the system was designed.



The takeaway: While almost anything digital is at some risk of being hacked, the Equifax attack was largely preventable.



Equifax amasses personal data on millions of Americans whether we want it to or not, creating valuable profiles that can be used to approve or deny loans or insurance claims. That data, which can help dictate the outcome of major events in our lives (where we live, our finances, even potentially our health), then becomes a target.

From this vantage, it’s unclear why data brokers should continue to collect such sensitive information at scale. Setting aside Equifax’s long, sordid history of privacy concerns and its refusal to let Americans opt out of collection, the very existence of such information, stored by private companies with little oversight, is a systemic risk.
china  equifax  databroker  privacy  security  hacking  lens 
5 days ago
Mental health websites don't have to sell your data. Most still do. | PI
In other words, whenever you visit a number of websites dedicated to mental health to read about depression or take a test, dozens of third-parties may receive this information and **bid money to show you a targeted ad**. Interestingly, some of these websites seem to include marketing trackers without displaying any ads, meaning they simply allow data collection on their site, which in turn may be used for advanced profiling of their users.



It is highly disturbing that we still have to say this, but websites dealing with such sensitive topics should not track their users for marketing purposes. Your mental health is not and should never be for sale.
privacy  tracking  mentalhealth  lens  targetedads  targeting 
6 days ago
GDPR compliance is abysmal, and dark patterns may be why
>“[N]ew research suggests that only 11% of major sites are designing these so-called consent notices to meet the minimum requirements set by law.”
>
>…
>
>“So are design patterns that prevent the user from making an easy and clear privacy decision examples of simply poor design, or are these design patterns intentionally nudging users to share data?” “It has to be intentional because anyone who’s actually read the GDPR in an honest way would know that it’s not right,” says [David] Carroll. “Both the design and the functionality of them are very manipulative in favor of the first- and third-party collectors where possible.”
>
>…
>
>“It’s a design problem,” Carroll says, “but it’s a business model problem first and foremost.”
darkpatterns  gdpr  consent  lens  businessmodels 
7 days ago
Algorithmic bias hurts people with disabilities, too.
In hiring, for example, new algorithm-driven tools will identify characteristics shared by a company’s “successful” existing employees, then look for those traits when they evaluate new hires. But because the model treats underrepresented traits as undesirable and weights them less, people with disabilities—like other marginalized groups—risk being excluded as a matter of course.



While some have called to fix this data problem by collecting more detailed information about job candidates’ disabilities, further collection raises its own distinct and very real concerns about privacy and discrimination.

These problems exist for others, too: people who have marginalized sexual orientations or nonbinary gender identities, those who fall outside U.S. definitions of race and ethnicity, and people who are members of multiple, intersecting marginalized communities.
disability  discrimination  algorithms  algorithmicbias  bias  lens 
10 days ago
Teens have figured out how to mess with Instagram's tracking algorithm - CNET
>“These teenagers are relying on a sophisticated network of trusted Instagram users to post content from multiple different devices, from multiple different locations.
>
>…
>
>Teens shouldn't have to go to those lengths to socialize privately on Instagram, said Liz O'Sullivan, technology director at the Surveillance Technology Oversight Project.

>‘I love that the younger generation is thinking along these lines, but it bothers me when we have to come up with these strategies to avoid being tracked,’ O'Sullivan said. ‘She shouldn't have to have these psyop [psychological operations] networks with multiple people working to hide her identity from Instagram.’”
instagram  tracking  privacy  psyops  lens  surveillance  surveillancecapitalism 
11 days ago
Researchers Find 'Anonymized' Data Is Even Less Anonymous Than We Thought - VICE
They told Motherboard their tool analyzed thousands of datasets from data scandals ranging from the 2015 hack of Experian, to the hacks and breaches that have plagued services from MyHeritage to porn websites. Despite many of these datasets containing “anonymized” data, the students say that identifying actual users wasn’t all that difficult.



For example, while one company might only store usernames, passwords, email addresses, and other basic account information, another company may have stored information on your browsing or location data. Independently they may not identify you, but collectively they reveal numerous intimate details even your closest friends and family may not know.
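To make that point concrete, here is a minimal, hypothetical sketch (Python with pandas; the datasets, column names, and values are all invented for illustration) of how two datasets that each look harmless on their own can be joined on shared quasi-identifiers to tie sensitive behaviour back to a specific account:

```python
# Hypothetical illustration: neither dataset below contains a name,
# yet joining them on quasi-identifiers singles out individuals.
import pandas as pd

# Dataset A: a leaked account dump with only coarse attributes.
accounts = pd.DataFrame({
    "zip": ["02139", "02139", "10001"],
    "birth_year": [1986, 1990, 1986],
    "gender": ["F", "F", "M"],
    "username": ["user_a", "user_b", "user_c"],
})

# Dataset B: "anonymized" browsing records from a different source.
browsing = pd.DataFrame({
    "zip": ["02139", "10001"],
    "birth_year": [1986, 1986],
    "gender": ["F", "M"],
    "visited": ["mental-health-forum.example", "news.example"],
})

# Joining on the shared quasi-identifiers links sensitive browsing
# behaviour to specific accounts, even though neither dataset alone
# identifies anyone directly.
linked = accounts.merge(browsing, on=["zip", "birth_year", "gender"])
print(linked[["username", "visited"]])
```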



Previous studies have shown that even within independent individual anonymized datasets, identifying users isn’t all that difficult.

In one 2019 UK study, researchers were able to develop a machine learning model capable of correctly identifying 99.98 percent of Americans in any anonymized dataset using just 15 characteristics. A different MIT study of anonymized credit card data found that users could be identified 90 percent of the time using just four relatively vague points of information.

Another German study looking at anonymized user vehicle data found that 15 minutes’ worth of data from brake pedal use could let them identify the right driver, out of 15 options, roughly 90 percent of the time. Another 2017 Stanford and Princeton study showed that deanonymizing user social networking data was also relatively simple.



The problem is compounded by the fact that the United States still doesn’t have even a basic privacy law for the internet era, thanks in part to relentless lobbying from a cross-industry coalition of corporations eager to keep this profitable status quo intact. As a result, penalties for data breaches and lax security are often too pathetic to drive meaningful change.
anonymisation  anonymization  privacy  security  lens 
13 days ago
I'm a trans woman. Google Photos doesn't know how to categorize me
“The same data set that could be used to build a system to prevent showing trans folks photos from before they started transition could be trivially used and weaponized by an authoritarian state to identify trans people from street cameras,” [Penelope] Phippen says.

With this dystopian future in mind, coupled with the fact that federal agencies like ICE already use facial recognition technology for immigration enforcement, do we even want machine learning to piece together a coherent identity from both pre- and post-transition images?



With trans people facing daily harassment simply for existing as ourselves, the stakes seem too high to risk teaching these systems how to recognize us.
facialrecognition  google  photos  facebook  trans  discrimination  systemicdiscrimination  system  lens 
14 days ago
Tinder's Panic Button Partner, Noonlight, Shares Data With Third Parties
From Gizmodo’s own analysis of Noonlight, we counted no fewer than five partners gleaning some sort of information from the app, including Facebook and YouTube. Two others, Branch and Appboy (since renamed Braze), specialize in connecting a given user’s behavior across all of their devices for retargeting purposes. Kochava is a major hub for all sorts of audience data gleaned from an untold number of apps.



What is clear, in this particular case, is that even if the data isn’t “sold,” it is changing hands with the third parties involved. Branch, for example, received some basic specs on the phone’s operating system and display, along with the fact that a user downloaded the app to begin with. The company also provided the phone with a unique “fingerprint” that could be used to link the user across each of their devices.



It should be pointed out that Tinder, even without Noonlight integration, has historically shared data with Facebook and otherwise collects troves of data about you.



“Looking at it like ‘the more partners you share with, the worse’ isn’t really correct,” he explained. “Once it gets outside the app and into the hands of one marketer who wants to monetize from it—it could be anywhere, and it might as well be everywhere.”



“The kinds of people that are gonna be coerced into downloading it are exactly the kind of people that are put most at risk by the data that they’re sharing,”
privacy  adtech  tinder  noonlight  safety  panicbutton  lens 
18 days ago
Opinion | You Are Now Remotely Controlled - The New York Times
In Wonderland, we celebrated the new digital services as free, but now we see that the surveillance capitalists behind those services regard us as the free commodity. We thought that we search Google, but now we understand that Google searches us. We assumed that we use social media to connect, but we learned that connection is how social media uses us. We barely questioned why our new TV or mattress had a privacy policy, but we’ve begun to understand that “privacy” policies are actually surveillance policies.







All of these delusions rest on the most treacherous hallucination of them all: the belief that privacy is private. We have imagined that we can choose our degree of privacy with an individual calculation in which a bit of personal information is traded for valued services — a reasonable quid pro quo.







Our digital century was to have been democracy’s Golden Age. Instead, we enter its third decade marked by a stark new form of social inequality best understood as “epistemic inequality.” It recalls a pre-Gutenberg era of extreme asymmetries of knowledge and the power that accrues to such knowledge, as the tech giants seize control of information and learning itself. The delusion of “privacy as private” was crafted to breed and feed this unanticipated social divide. Surveillance capitalists exploit the widening inequity of knowledge for the sake of profits. They manipulate the economy, our society and even our lives with impunity, endangering not just individual privacy but democracy itself. Distracted by our delusions, we failed to notice this bloodless coup from above.







The lesson is that privacy is public — it is a collective good that is logically and morally inseparable from the values of human autonomy and self-determination upon which privacy depends and without which a democratic society is unimaginable.







Between 2000, when the new economic logic was just emerging, and 2004, when the company [Google] went public, revenues increased by 3,590 percent. This startling number represents the “surveillance dividend.” It quickly reset the bar for investors, eventually driving start-ups, apps developers and established companies to shift their business models toward surveillance capitalism.







Unequal knowledge about us produces unequal power over us.
surveillancecapitalism  shoshanazuboff  privacy  inequality  lens 
20 days ago
The Secretive Company That Might End Privacy as We Know It - The New York Times
His tiny company, Clearview AI, devised a groundbreaking facial recognition app. You take a picture of a person, upload it and get to see public photos of that person, along with links to where those photos appeared. The system — whose backbone is a database of more than three billion images that Clearview claims to have scraped from Facebook, YouTube, Venmo and millions of other websites — goes far beyond anything ever constructed by the United States government or Silicon Valley giants.



The tool could identify activists at a protest or an attractive stranger on the subway, revealing not just their names but where they lived, what they did and whom they knew.



Clearview’s app carries extra risks because law enforcement agencies are uploading sensitive photos to the servers of a company whose ability to protect its data is untested.



“I’ve come to the conclusion that because information constantly increases, there’s never going to be privacy,” Mr. Scalzo said. “Laws have to determine what’s legal, but you can’t ban technology. Sure, that might lead to a dystopian future or something, but you can’t ban it.”



“We have no data to suggest this tool is accurate,” said Clare Garvie, a researcher at Georgetown University’s Center on Privacy and Technology, who has studied the government’s use of facial recognition. “The larger the database, the larger the risk of misidentification because of the doppelgänger effect. They’re talking about a massive database of random people they’ve found on the internet.”



If you change a privacy setting in Facebook so that search engines can’t link to your profile, your Facebook photos won’t be included in the database, he said.

But if your profile has already been scraped, it is too late. The company keeps all the images it has scraped even if they are later deleted or taken down, though Mr. Ton-That said the company was working on a tool that would let people request that images be removed if they had been taken down from the website of origin.





“We’ve relied on industry efforts to self-police and not embrace such a risky technology, but now those dams are breaking because there is so much money on the table,” Mr. Hartzog said. “I don’t see a future where we harness the benefits of face recognition technology without the crippling abuse of the surveillance that comes with it. The only way to stop it is to ban it.”
facialrecognition  privacy  clearviewai  lens 
28 days ago
Mass surveillance for national security does conflict with EU privacy rights, court advisor suggests | TechCrunch
“If the Court agrees with the [Advocate general]’s opinion, then unlawful bulk surveillance schemes, including one operated by the UK, will be reined in.”
masssurveillance  governmentsurveillance  eu  privacy  lens  rights 
4 weeks ago
Systemic Algorithmic Harms - Data & Society: Points
Because both “stereotype” and “bias” are theories of individual perception, our discussions do not adequately prioritize naming and locating the systemic harms of the technologies we build. When we stop overusing the word “bias,” we can begin to use language that has been designed to theorize at the level of structural oppression, both in terms of identifying the scope of the harm and who experiences it.



[W]hen we say “an algorithm is biased,” we, in some ways, are treating an algorithm as if it were a flawed individual, rather than an institutional force.

By using the language of bias, we may end up overly focusing on the individual intents of technologists involved, rather than the structural power of the institutions they belong to.


Bias as a term obscures more than it explains because we are not equally concerned about all biases for the same reasons. We specifically care about dismantling algorithmic biases that enable continued harm to those belonging to one or more historically oppressed social identity groups.
systemic  algorithmic  harms  bias  discrimination  oppression  lens 
4 weeks ago
Grindr Shares Location, Sexual Orientation Data, Study Shows - Bloomberg
“Grindr is sharing detailed personal data with thousands of advertising partners, allowing them to receive information about users’ location, age, gender and sexual orientation…”

“‘Every time you open an app like Grindr, advertisement networks get your GPS location, device identifiers and even the fact that you use a gay dating app,’ said Austrian privacy activist Max Schrems.”
grindr  ads  analytics  location  dating  lens  privacy  surveillancecapitalism 
4 weeks ago
Technology Can't Fix Algorithmic Injustice | Boston Review
Some contend that strong AI may be only decades away, but this focus obscures the reality that “weak” (or “narrow”) AI is already reshaping existing social and political institutions. Algorithmic decision making and decision support systems are currently being deployed in many high-stakes domains, from criminal justice, law enforcement, and employment decisions to credit scoring, school assignment mechanisms, health care, and public benefits eligibility assessments. Never mind the far-off specter of doomsday; AI is already here, working behind the scenes of many of our social systems.

What responsibilities and obligations do we bear for AI’s social consequences in the present—not just in the distant future? To answer this question, we must resist the learned helplessness that has come to see AI development as inevitable. Instead, we should recognize that developing and deploying weak AI involves making consequential choices—choices that demand greater democratic oversight not just from AI developers and designers, but from all members of society.



There may be some machine learning systems that should not be deployed in the first place, no matter how much we can optimize them.



Structural injustice thus yields biased data through a variety of mechanisms—prominently including under- and overrepresentation—and worrisome feedback loops result. Even if the quality control problems associated with an algorithm’s decision rules were resolved, we would be left with a more fundamental problem: these systems would still be learning from and relying on data born out of conditions of pervasive and long-standing injustice.



Algorithmic design cannot be fixed in isolation. Developers cannot just ask, “What do I need to do to fix my algorithm?” They must rather ask: “How does my algorithm interact with society at large, and as it currently is, including its structural inequalities?” We must carefully examine the relationship and contribution of AI systems to existing configurations of political and social injustice, lest these systems continue to perpetuate those very conditions under the guise of neutrality. As many critical race theorists and feminist philosophers have argued, neutral solutions might well secure just outcomes in a just society, but only serve to preserve the status quo in an unjust one.
algorithms  injustice  discrimination  artificialintelligence  lens 
5 weeks ago
Opinion | Why Are You Publicly Sharing Your Child’s DNA Information? - The New York Times
The problem with these tests is twofold. First, parents are testing their children in ways that could have serious implications as they grow older — and they are not old enough to consent. Second, by sharing their children’s genetic information on public websites, parents are forever exposing their personal health data.



Dr. Louanne Hudgins, a geneticist at Stanford, cautions parents to consider the long-term privacy of their child’s health information collected through home genetic kits. Their children’s DNA and other health data, she has warned, could be sold to other companies — marketing firms, data brokers, insurance companies — in the same way that social media sites and search engines collect and share data about their users.



The sharing of DNA results on open-source genealogy databases to find long-lost relatives poses another privacy risk: When parents share their children’s DNA on these sites, they are effectively sharing it with the world, including with the government and law enforcement investigators.



Genetic privacy is just one part of a larger conversation about children’s privacy. While we have laws to protect children from third parties sharing children’s personal information online, these laws don’t apply when a parent does the sharing, or consents to allowing someone else to do it. This is because we have a legal tradition of allowing parents to determine what is in the best interests of their children.



But we find ourselves in a new era in which technology is outpacing most parents’ digital literacy. When parents upload their children’s test results to third-party sites, they likely do not consider the many possible consequences — some of which are listed only in the fine print of privacy policies most people never read.
dna  privacy  consent  children  23andme  lens 
6 weeks ago
Big Data, Underground Railroad: History says unfettered collection of data is a bad idea.
There is a booming debate around what big data means for vulnerable communities. Industry groups argue, in good faith, that it will be a tool for empowering the disadvantaged. Others are skeptical. Algorithms have learned that workers with longer commutes quit their jobs sooner. Is it fair to turn away job applicants with long commutes if that disproportionately hurts blacks and Latinos? Is it legal for a company to assign you a credit score based on the creditworthiness of your neighbors? Are big data algorithms as neutral and accurate as they seem—and if they’re not, are our discrimination laws up to the challenge?


Privacy protections? They would come after the fact, through “use restrictions” that would prohibit certain uses of data that society deemed harmful. We used to try to protect people at each stage of data processing—collection, analysis, sharing. Now, it’s collect first and ask questions later.

Davos and the president’s council are basically saying that it’s OK to vacuum up data, so long as you prohibit certain harmful uses of it. The problem is that harmful uses of data are often recognized as such only long after the fact. Our society has been especially slow to condemn uses of data that hurt racial and ethnic minorities, the LGBT community, and other “undesirables.”


There is a moral lag in the way we treat data. Far too often, today’s discrimination was yesterday’s national security or public health necessity. An approach that advocates ubiquitous data collection and protects privacy solely through post-collection use restrictions doesn’t account for that.


Privacy is a shield for the weak. The ubiquitous collection of our data—coupled with after-the-fact use restrictions—would take that shield away and replace it with promises.
lens  bigdata  datacollection  collection  privacy  industry  surveillance  slavery  discrimination 
6 weeks ago
Big Mood Machine | Liz Pelly
music streaming platforms are in a unique position within the greater platform economy: they have troves of data related to our emotional states, moods, and feelings. It’s a matter of unprecedented access to our interior lives, which is buffered by the flimsy illusion of privacy.

Spotify’s enormous access to mood-based data is a pillar of its value to brands and advertisers, allowing them to target ads on Spotify by moods and emotions. Further, since 2016, Spotify has shared this mood data directly with the world’s biggest marketing and advertising firms.


“At Spotify we have a personal relationship with over 191 million people who show us their true colors with zero filter,” reads a current advertising deck. “That’s a lot of authentic engagement with our audience: billions of data points every day across devices! This data fuels Spotify’s streaming intelligence—our secret weapon that gives brands the edge to be relevant in real-time moments.”

In Spotify’s world, listening data has become the oil that fuels a monetizable metrics machine, pumping the numbers that lure advertisers to the platform. In a data-driven listening environment, the commodity is no longer music. The commodity is listening. The commodity is users and their moods. The commodity is listening habits as behavioral data. Indeed, what Spotify calls “streaming intelligence” should be understood as surveillance of its users to fuel its own growth and ability to sell mood-and-moment data to brands.

What’s in question here isn’t just how Spotify monitors and mines data on our listening in order to use their “audience segments” as a form of currency—but also how it then creates environments more suitable for advertisers through what it recommends, manipulating future listening on the platform.
spotify  music  advertising  surveillance  mood  emotion  sentiment  lens 
10 weeks ago
AI thinks like a corporation—and that’s worrying - Open Voices
“A central promise of [artificial intelligence] is that it enables large-scale automated categorisation… This ‘promise’ becomes a menace when directed at the complexities of everyday life.”
ai  algorithms  discrimination  categorisation  artificialintelligence  lens 
11 weeks ago
The Risks of Using AI to Interpret Human Emotions
Because of the subjective nature of emotions, emotional AI is especially prone to bias. For example, one study found that emotional analysis technology assigns more negative emotions to people of certain ethnicities than to others. Consider the ramifications in the workplace, where an algorithm consistently identifying an individual as exhibiting negative emotions might affect career progression.

In short, if left unaddressed, conscious or unconscious emotional bias can perpetuate stereotypes and assumptions at an unprecedented scale.
emotiondetection  emotion  bias  discrimination  ai  artificialintelligence  lens 
12 weeks ago
These Black Women Are Fighting For Justice In A World Of Biased Algorithms - Essence
“By rooting out bias in technology, these Black women engineers, professors and government experts are on the front lines of the civil rights movement of our time.”
algorithms  discrimination  facialrecognition 
november 2019
I worked on political ads at Facebook. They profit by manipulating us. - The Washington Post
We need lawmakers and regulators to help protect our children, our cognitive capabilities, our public square and our democracy by creating guardrails and rules to deal directly with the incentives and business models of these platforms and the societal harms they are causing.
facebook  politics  politicalads  ads  businessmodels  lens 
november 2019
Personalisation vs privacy: a ‘stark and explicit’ trade-off | Financial Times

“Companies built business models . . . based on three practices: the unhindered collection of personal information, the creation of opaque algorithms that create content, and the development of tremendously compelling services that come at the expense of privacy.”
privacy  personalisation  businessmodels 
november 2019
Under digital surveillance: how American schools spy on millions of kids | World news | The Guardian
Unlike gun control, Marlow said, “Surveillance is politically palatable, and so they’re pursuing surveillance as a way you can demonstrate action, even though there’s no evidence that it will positively impact the problem.”

“Some people think that technology is magic, that artificial intelligence will save us,” Vance said. “A lot of the questions and a lot of the privacy concerns haven’t [been] thought of, let alone addressed.”

As technology has advanced and schools have integrated laptops and digital technology into every part of the school day, school districts have largely defined for themselves how to responsibly monitor students on school-provided devices – and how aggressive they think that monitoring should be.

For black students, and students with disabilities, who already face a disproportionate amount of harsh disciplinary measures, the introduction of new kinds of surveillance may be especially harmful, privacy experts said.

Both machine-learning algorithms and human analysts are at risk of misunderstanding what students write – particularly if the human analysts are older, or from different cultural backgrounds than the students they are monitoring, experts said. If digital surveillance companies scanning students’ emails and chats misinterpret their jokes or sarcasm as real threats, that “could expose students to law enforcement in a way they have not been in the past”, said Elizabeth Laird, the senior fellow for student privacy at the Center for Democracy and Technology.

It’s not clear what kind of “chilling effect” the monitoring might have on students’ self-exploration, their group conversations and their academic freedom, Marlow, the ACLU privacy expert, said. If students know their schools are monitoring their computer usage, will LGBTQ students in conservative school districts feel comfortable researching their sexuality?
privacy  schools  email  us  surveillance  guncontrol  lens 
october 2019
Tech platforms are where public life is increasingly constructed, and their motivations are far from neutral » Nieman Journalism Lab
Note that I haven’t asked: “What’s the impact of technology on society?” That’s the wrong question. Platforms are societies of intertwined people and machines. There is no such thing as “online life” versus “real life.” We give massive ground if we pretend that these companies are simply having an “effect” or “impact” on some separate society.
platforms  democracy  community  advertising  attention  regulation  society  lens 
october 2019
Europe’s top court says active consent is needed for tracking cookies | TechCrunch
“Sites that have relied upon opting EU users into ad-tracking cookies in the hopes they’ll just click okay to make the cookie banner go away are in for a rude awakening.”
cookies  consent  gdpr  eprivacy  tracking  privacy  lens 
october 2019
To decarbonize we must decomputerize: why we need a Luddite revolution | Technology | The Guardian
“We are often sold a similar bill of goods: big tech companies talk incessantly about how “AI” and digitization will bring a better future. In the present tense, however, putting computers everywhere is bad for most people. It enables advertisers, employers and cops to exercise more control over us – in addition to helping heat the planet.”
luddite  luddism  environment  climatechange  lens  data  ai  machinelearning 
september 2019
Flock and the rise of networked vigilante surveillance.
“Neighborhoods armed with Ring videos, Flock readers, and NextDoor posts have the power to create networked engines of suspicion, sometimes ill-founded or erroneous, that may embolden residents to take actions they should not.”
safety  security  privacy  surveillance  community  neighbourhood  licenseplaterecognition  lens 
september 2019
Making face recognition less biased doesn’t make it less scary - MIT Technology Review
“Without algorithmic justice, algorithmic accuracy/technical fairness can create AI tools that are weaponized,” says Buolamwini.
facialrecognition  algorithms  algorithmicjustice  ai 
september 2019