
charlesarthur : abuse   18

The creator of one of YouTube’s top tween channels was arrested for molesting a minor. YouTube is keeping the channel up • BuzzFeed News
Charlie Warzel:
<p>According to an arrest warrant obtained by BuzzFeed News, detectives were called to Rylett’s Orange County hotel room on the morning of Aug. 16, after Rylett allegedly verbally abused the girl, demanding she undress in front of him against her will and “practice wrapping her breasts down, to make them appear smaller for the video shoot.” According to the report, the girl, who is under 16, claims Rylett touched her breasts and fondled her while repeatedly making her undress, eventually attempting to forcefully remove her underwear. The arrest report also alleges that Rylett “threatened to use the contract to fine her if she did not comply with his demand.” Rylett pleaded not guilty to the charges at an arraignment last month. He has surrendered his passport and will stand trial later this year. Rylett’s lawyer did not respond to requests for comment.

Rylett’s channel remains live on YouTube; the streaming video company learned of his arrest in mid-August.

Rylett, a 55-year-old who resides in the United Kingdom, is one of the founders of the SevenAwesomeKids brand. Established in 2008, the franchise boasts a collective 17 million subscribers over seven channels, including SevenPerfectAngels, SevenFabulousTeens, and SevenTwinklingTweens. The largest channel — SevenSuperGirls — currently has roughly 9 million subscribers and 5 billion views. Each features daily videos from a rotating stable of more than 20 young girls, ranging from 8 to 18 years old. Rylett pays them a monthly salary in exchange for filming videos he directs.

Rylett’s arrest is the latest in a series of unsettling revelations involving YouTube content aimed at teens and young children. In 2017, after public outcry, YouTube began cracking down on the child exploitation videos it was hosting, many depicting young kids in disturbing and abusive situations, all with millions of views…

…A number of young women who previously starred in Rylett’s videos told BuzzFeed News they were frustrated by the platform’s lack of oversight. “I was telling my mom two years ago that, if this was a real entertainment business — you know, with rules — I’d report him in an instant,” one said. “But I can’t because there’s nobody here to help me.”</p>
youtube  abuse 
7 weeks ago by charlesarthur
Child abuse algorithms: from science fiction to cost-cutting reality • The Guardian
David Pegg and Niamh McIntyre:
<p>Machine learning systems built to mine massive amounts of personal data have long been used to predict customer behaviour in the private sector.

Computer programs assess how likely we are to default on a loan, or how much risk we pose to an insurance provider.

Designers of a predictive model have to identify an “outcome variable”, which indicates the presence of the factor they are trying to predict.

For child safeguarding, that might be a child entering the care system.

They then attempt to identify characteristics commonly found in children who enter the care system. Once these have been identified, the model can be run against large datasets to find other individuals who share the same characteristics.

The Guardian obtained details of all predictive indicators considered for inclusion in Thurrock council’s child safeguarding system. They include history of domestic abuse, youth offending and truancy.

More surprising indicators such as rent arrears and health data were initially considered but excluded from the final model. In the case of both Thurrock, a council in Essex, and the London borough of Hackney, families can be flagged to social workers as potential candidates for the Troubled Families programme. Through this scheme councils receive grants from central government for helping households with long-term difficulties such as unemployment.

Such systems inevitably raise privacy concerns. Wajid Shafiq, the chief executive of Xantura, the company providing predictive analytics work to both Thurrock and Hackney, insists that there is a balance to be struck between privacy rights and the use of technology to deliver a public good.

“The thing for me is: can we get to a point where we’ve got a system that gets that balance right between protecting the vulnerable and protecting the rights of the many?” said Shafiq. “It must be possible to do that, because if we can’t we’re letting down people who are vulnerable.”</p>
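

The workflow described - pick an outcome variable, learn which recorded indicators co-occur with it, then score new cases against the model - is essentially a standard supervised classifier. A minimal sketch of that shape, with hypothetical column names and synthetic rows standing in for a council's case records (nothing here is Xantura's actual system):

# Minimal sketch of the "outcome variable + indicators" workflow described
# above. Column names and data are hypothetical; this is not Xantura's model.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for historical case records (one row per child/family).
records = pd.DataFrame({
    "domestic_abuse_history": [1, 0, 1, 0, 1, 0, 0, 1],
    "youth_offending":        [0, 0, 1, 0, 1, 0, 1, 0],
    "truancy":                [1, 0, 1, 0, 0, 1, 0, 1],
    "entered_care":           [1, 0, 1, 0, 1, 0, 0, 1],  # the outcome variable
})

indicators = ["domestic_abuse_history", "youth_offending", "truancy"]
model = LogisticRegression(max_iter=1000).fit(
    records[indicators], records["entered_care"])

# Score a new, unlabelled caseload and flag families above a risk threshold.
new_cases = pd.DataFrame({
    "domestic_abuse_history": [1, 0],
    "youth_offending":        [1, 0],
    "truancy":                [0, 0],
})
risk = model.predict_proba(new_cases[indicators])[:, 1]
print(new_cases[risk > 0.5])

Whether a probability from something like this is a defensible basis for a social-work referral is exactly the privacy-versus-public-good balance Shafiq is talking about.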
ai  machinelearning  children  abuse 
8 weeks ago by charlesarthur
The conventional wisdom about not feeding trolls makes online abuse worse • The Verge
"Film Crit Hulk":
<p>Whether we’re talking about AOL, AIM, early 4chan, or the early days of Twitter, there has always been a myth about the time and place where things were more innocent, when trolling was all in good fun. But what everyone really remembers about these proverbial times isn’t their purity. It’s how they didn’t see the big deal back then. They remember how they felt a sense of permission, a belief that it was all okay. But that was only true for those who were like them, who thought exactly like they did. All the while, someone else was getting stepped on and bullied while others laughed. The story of the internet has always been the same story: disaffected young men thinking their boorish and cruel behavior was justified or permissible.

And it was always wrong.

The second great lie is that trolling is harmless…

…The third great lie is about what fixes it…

The premise of “don’t feed the trolls” implies that if you ignore a troll, they will inevitably get bored or say, “Oh, you didn’t nibble at my bait? Good play, sir!” and tip their cap and go on their way. Ask anyone who has dealt with persistent harassment online, especially women: this is not usually what happens. Instead, the harasser keeps pushing and pushing to get the reaction they want with even more tenacity and intensity. It’s the same pattern on display in the litany of abusers and stalkers, both online and off, who escalate to more dangerous and threatening behavior when they feel like they are being ignored. In many cases, ignoring a troll can carry just as dear a price as provocation.</p>


Terrific article. I feel as though the hardware side of technology is generally in stasis; now we're trying to work out the social and software side.
socialwarming  internet  abuse  trolls 
july 2018 by charlesarthur
New York passes bill to restrict guns for domestic abusers • The Hill
John Bowden:
<p>New York Gov. Andrew Cuomo (Democrat) on Saturday announced the passage of legislation that would strip all firearms from New Yorkers convicted of domestic violence, updating a previous law that prohibited abusers from owning handguns.

In a press release on the governor's website, Cuomo said the law, which passed the state Assembly by 85-32 and Senate by 41-19 this week, will make the state "safer and stronger."

"New York is once again leading the way to prevent gun violence, and with this common sense reform, break the inextricable link between gun violence and domestic violence," Cuomo said.

The law forces convicted domestic abusers to turn in rifles, shotguns, and any other firearms they were not previously prohibited from owning under a law passed after the 2012 Sandy Hook Elementary School shooting in Newtown, Connecticut, that barred abusers from owning pistols or revolvers.

In his press release, Cuomo faulted the federal government for not doing more to protect citizens from gun violence.</p>

One to watch for the effects on deaths by gun in the state. Domestic abuse is a key indicator for whether someone will kill with a gun.
Guns  laws  domestic  abuse  newyork 
april 2018 by charlesarthur
Twitter just suspended a ton of accounts known for stealing tweets • BuzzFeed
Julia Reinstein:
<p>Many of these accounts were hugely popular, with hundreds of thousands or even millions of followers.

In addition to stealing people's tweets without credit, some of these accounts are known as "tweetdeckers" due to their practice of teaming up in exclusive Tweetdeck groups and mass-retweeting one another's — and paying customers' — tweets into forced virality.

A Twitter spokesperson declined to comment on individual accounts, but BuzzFeed News understands the accounts were suspended for violating Twitter's spam policy.

Tweetdecking, as it's called, is an explicit violation of Twitter's spam policy, which does not allow users to "sell, purchase, or attempt to artificially inflate account interactions."

Still, Twitter has previously struggled to crack down on these accounts.

After a BuzzFeed News story uncovered the practice of tweetdecking in January, Twitter announced new spam-fighting changes to Tweetdeck, including removing the ability to simultaneously retweet a tweet across multiple accounts.

"Tweetdecking is over. Our follower gains are gonna diminish," Andrew Guerrero, a 23-year-old tweetdecker in New Mexico, told BuzzFeed News after Twitter announced the changes in February. (Guerrero asked that his account name not be disclosed since it could get him suspended.)</p>


Interesting how Twitter is working inward, from the comparatively easy targets, implicitly towards the tougher ones.
twitter  tweetdeck  abuse 
march 2018 by charlesarthur
Something is wrong on the internet • Medium
James Bridle on the weird subculture within YouTube's "Kids" space of knockoff and randomly-generated videos aimed at children:
<p>A step beyond the simply pirated Peppa Pig videos mentioned previously are the knock-offs. These too seem to teem with violence. In the official Peppa Pig videos, Peppa does indeed go to the dentist, and the episode in which she does so seems to be popular — although, confusingly, what appears to be the real episode is only available on an unofficial channel. In the official timeline, Peppa is appropriately reassured by a kindly dentist. In the version above, she is basically tortured, before turning into a series of Iron Man robots and performing the Learn Colours dance. A search for “peppa pig dentist” returns the above video on the front page, and it only gets worse from here.

Disturbing Peppa Pig videos, which tend towards extreme violence and fear, with Peppa eating her father or drinking bleach, are, it turns out, very widespread. They make up an entire YouTube subculture. Many are obviously parodies, or even satires of themselves, in the pretty common style of the internet’s outrageous, deliberately offensive kind…

…Here are a few things which are disturbing me:

The first is the level of horror and violence on display. Some of the times it’s troll-y gross-out stuff; most of the time it seems deeper, and more unconscious than that. The internet has a way of amplifying and enabling many of our latent desires; in fact, it’s what it seems to do best. I spend a lot of time arguing for this tendency, with regards to human sexual freedom, individual identity, and other issues. Here, and overwhelmingly it sometimes feels, that tendency is itself a violent and destructive one.

The second is the levels of exploitation, not of children because they are children but of children because they are powerless. Automated reward systems like YouTube algorithms necessitate exploitation in the same way that capitalism necessitates exploitation, and if you’re someone who bristles at the second half of that equation then maybe this should be what convinces you of its truth. Exploitation is encoded into the systems we are building, making it harder to see, harder to think and explain, harder to counter and defend against. Not in a future of AI overlords and robots in the factories, but right here, now, on your screen, in your living room and in your pocket.

Many of these latest examples confound any attempt to argue that nobody is actually watching these videos, that these are all bots. There are humans in the loop here, even if only on the production side, and I’m pretty worried about them too.</p>


Something is definitely wrong, and YouTube's utter laissez-faire attitude is a giant part of the problem. By treating anyone under the age of 18 as essentially the same - the sort of decision that would only be made by someone without children or without morals - it is seeding a deeply weird future. And by chaining videos together - so convenient! Just flag the unsuitable ones, kids, while we show you ads! - it deepens the rabbit hole.
internet  youtube  abuse  children 
november 2017 by charlesarthur
Study finds Reddit’s controversial ban of its most toxic subreddits actually worked • TechCrunch
Devin Coldewey:
<p>It’s an example of one of the objections made to the idea of banning troublesome users or communities: they’ll just go elsewhere, so why bother?

Researchers at the Georgia Institute of Technology took this question seriously, as until someone actually investigates whether such bans are helpful, harmful or some mix thereof, it’s all speculation. So they took a major corpus of Reddit data (compiled by PushShift.io) and <a href="http://comp.social.gatech.edu/papers/cscw18-chand-hate.pdf">examined exactly what happened to the hate speech and purveyors thereof</a>, with the two aforementioned subreddits as case studies.

Essentially they looked at the thousands of users that made up CT and FPH (as they call them) and quantified their hate speech usage. They then compared this pre-ban data to the same users post-ban: how much hate speech they produced, where they “migrated” to (i.e. duplicate subreddits, related ones, etc.) and whether “invaded” subreddits experienced spikes in hate speech as a result. Control groups were created by observing the activity of similar subreddits that weren’t banned.

What they found was encouraging for this strategy of reducing unwanted activity on a site like Reddit:

• Post-ban, hate speech by the same users was reduced by as much as 80-90 percent.
• Members of banned communities left Reddit at significantly higher rates than control groups.
• Migration was common, both to similar subreddits (i.e. overtly racist ones) and tangentially related ones (r/The_Donald).
• However, within those communities, hate speech did not reliably increase, although there were slight bumps as the invaders encountered and tested new rules and moderators.

All in all, the researchers conclude, the ban was quite effective at what it set out to do…</p>


Encouraging.
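
The core of the method - the same users' rate of hate speech before and after the ban, set against control groups - is easy to express. A toy sketch of that pre/post comparison; the column names and numbers are illustrative, not the PushShift data or the Georgia Tech schema:

# Toy version of the pre/post-ban comparison described above.
import pandas as pd

comments = pd.DataFrame({
    "user":      ["a", "a", "b", "b", "c", "c", "d", "d"],
    "group":     ["banned_sub"] * 4 + ["control_sub"] * 4,
    "period":    ["pre", "post"] * 4,
    "hate_rate": [0.30, 0.04, 0.25, 0.03, 0.10, 0.09, 0.12, 0.11],
})

# Mean hate-speech rate per group and period, then the pre-to-post change.
summary = comments.groupby(["group", "period"])["hate_rate"].mean().unstack()
summary["change"] = summary["post"] - summary["pre"]
print(summary)
# A large drop for banned_sub users, with no matching rise among the controls
# or in "invaded" subreddits, is the pattern the researchers report.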
abuse  reddit  toxic 
september 2017 by charlesarthur
Amazon was tricked by fake law firm into removing hot product, costing seller $200K • CNBC
Eugene Kim:
<p>Shortly before Amazon Prime Day in July, the owner of the Brushes4Less store on Amazon's marketplace received a suspension notice for his best-selling product, a toothbrush head replacement.

The email that landed in his inbox said the product was being delisted from the site because of an intellectual property violation. In order to resolve the matter and get the product reinstated, the owner would have to contact the law firm that filed the complaint.

But there was one problem: the firm didn't exist.

Brushes4Less was given the contact information for an entity named Wesley & McCain in Pittsburgh. The website wesleymccain.com has profiles for five lawyers. A Google image search shows that all five actually work for the law firm Brydon, Swearengen & England in Jefferson City, Missouri…

…The owner of Brushes4Less agreed to tell his story to CNBC but asked that we not use his name out of concern for his privacy. As far as he can tell, and based on what CNBC could confirm, Amazon was duped into shutting down the seller's key product days before the site's busiest shopping event ever.

"Just five minutes of detective work would have found this website is a fraud, but Amazon doesn't seem to want to do any of that," the owner said. "This is like the Wild Wild West of intellectual property complaints."</p>


I'm hearing more and more complaints about how Amazon behaves, both here and through its promotions. Once more, the problem is: what alternative do you have?
amazon  monopoly  abuse 
september 2017 by charlesarthur
Twitter touts progress combating abuse, but repeat victims remain frustrated • BuzzFeed
Charlie Warzel:
<p>It’s been just over seven months since Twitter pledged to move faster to combat the systemic abuse problem that has plagued it for a decade, and the company claims to have made dramatic improvements in that time.

In a Thursday <a href="https://blog.twitter.com/official/en_us/topics/product/2017/Our-Safety-Work-Results-Update.html">blog post by Ed Ho</a>, Twitter’s general manager of the consumer product and engineering groups, the company said that users are “experiencing significantly less abuse on Twitter today than they were six months ago.” The company also touted, for the first time, statistics about its progress on combating abuse. According to Ho, Twitter is “taking action on 10x the number of abusive accounts every day compared to the same time last year” and has limited account functionality and suspended “thousands more abusive accounts each day” compared to the same time last year.

Twitter claims this uptick in account suspensions and limitations is changing the behavior of its most contentious users. According to Ho, 65% of limited accounts are only suspended once for rules violations; after Twitter limits or suspends accounts for a brief time (and explains why), these users “generate 25% fewer abuse reports.”

Lastly, the company said that it has seen evidence that its biggest anti-abuse feature — customized muting and algorithmic filtering tools — is “having a positive impact.” According to Ho, “blocks after @mentions from people you don’t follow are down 40%.”</p>


Also not having an amazingly divisive election going on helps.
twitter  abuse 
july 2017 by charlesarthur
Staring down internet trolls: my disturbing cat and mouse game • Sydney Morning Herald
Ginger Gorman has been hassled by trolls in the past, but persists in wanting to track down and talk to them:
<p>something bloody-minded within me can't let it go. My job is to report and trolls like Mark are a risk to public safety. Maybe if I ask enough questions – or ask the right questions – I'll understand this. Maybe if I reveal his game plan, we'll all be safer.

The truth, though, is far less convenient than this and I've paid the price for my idealism. He's hurting other people, and I can't stop him. The more I know about him, the less I understand.

"Because it's funny," he says by way of explanation for the trolling, and it provides him "entertainment".

He says: "I don't really have emotions that much. I have emotions but nothing to do with regretting stuff and that field of emotions [including] sadness."

This unsatisfactory answer leaves the notion of "morals" hanging limply between us.

"I don't think it's morally OK," he says.

"Morals don't come into it. I know everything I do is wrong."

With some hesitation, I contact him to speak on camera. He agrees and meets me on time.

Perhaps because there's a camera present, he's less effusive than normal. He leans back in the chair in an apparent attempt to look relaxed. His answers are short and there's a scratchiness about him.

Before the tape starts rolling and, out of earshot of the cameraman, he snaps: "If I'm going to be anonymous, I don't see why you even need to interview me on camera."

When we first spoke, Mark spent up to 14 hours a week trolling people. These days, he tells me, it's more like 30 hours a week. His psychopathic tendencies are getting worse as he gets older.

"Have you ever read some of my stuff on the internet?" he boasts during yet another interview that we conduct by phone. "I'm one of the biggest narcissists on the planet."</p>


Narcissistic, lacking empathy; an ASD (autistic spectrum disorder) sociopath. Before the internet, he was just one of those people who'd injure the neighbour's cat under cover of night.
abuse  harassment  ethics  online 
july 2017 by charlesarthur
Mark Zuckerberg - Over the last few weeks, we've seen... • Facebook
<p>Over the last few weeks, we've seen people hurting themselves and others on Facebook -- either live or in video posted later. It's heartbreaking, and I've been reflecting on how we can do better for our community.

If we're going to build a safe community, we need to respond quickly. We're working to make these videos easier to report so we can take the right action sooner -- whether that's responding quickly when someone needs help or taking a post down.

Over the next year, we'll be adding 3,000 people to our community operations team around the world -- on top of the 4,500 we have today -- to review the millions of reports we get every week, and improve the process for doing it quickly.

These reviewers will also help us get better at removing things we don't allow on Facebook like hate speech and child exploitation. And we'll keep working with local community groups and law enforcement who are in the best position to help someone if they need it -- either because they're about to harm themselves, or because they're in danger from someone else.

In addition to investing in more people, we're also building better tools to keep our community safe.</p>


Maths: they're going from 4,500 people to 7,500 people. That cuts the workload for each person by roughly 40% (to three-fifths of what it was). Millions of reports per week means a minimum (1m/wk) of 143,000 per day; if 10m, then 1.43m per day; if 100m, then 14.3m per day.

1m per week (143,000 per day) among 4,500 is an average of 32 per person per day, or 4 per hour over an eight-hour shift. 10m per week is 40 per hour (two every three minutes); 100m is 400 per hour. Though the number of reports might be "bursty" - quieter when Asia is awake, busier when the US is awake. Maybe it peaks at twice or three times the mean.

Upping the workforce cuts that by roughly 40%, in theory. (Though there might be more complaints.) And the workforce is being upped because, clearly, they aren't getting to reports fast enough. Which suggests that the reports are more towards the upper end than the lower.
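
Those back-of-envelope sums are easy to re-run under different assumptions about the (undisclosed) real volume; a quick sketch, assuming an eight-hour reviewing day:

# Re-run of the back-of-envelope sums above. The weekly report volumes are
# assumptions, since Facebook only says "millions" per week.
HOURS_PER_DAY = 8

def per_reviewer(reports_per_week, reviewers):
    per_day = reports_per_week / 7 / reviewers
    return per_day, per_day / HOURS_PER_DAY

for weekly in (1e6, 10e6, 100e6):
    for staff in (4_500, 7_500):
        per_day, per_hour = per_reviewer(weekly, staff)
        print(f"{weekly:>11,.0f}/week, {staff:,} reviewers: "
              f"{per_day:6.0f}/day, {per_hour:5.1f}/hour each")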
facebook  abuse 
may 2017 by charlesarthur
Twitter ramps up abuse controls as it lets users silence anonymous 'eggs' • Daily Telegraph
Sam Dean:
<p>Twitter users will now be able to automatically bar anonymous trolls from their timelines as the social media giant steps up its fight on abuse.

Twitter has introduced new filtering options that allow users to mute accounts without profile pictures, unverified email addresses and phone numbers.

Accounts that do not have profile pictures - also known as ‘Twitter eggs’ - have long been associated with abusive behaviour on the site, which has been criticised for not doing more to clamp down on the problem.

The platform also said that it is working on identifying abusive accounts even in cases where they have not been reported. It can then limit the accounts for a certain amount of time so that only their followers can see their tweets.</p>


Improvement, and only a couple of years overdue.
twitter  abuse 
march 2017 by charlesarthur
If only AI could save us from ourselves • MIT Technology Review
David Auerbach goes into more detail about Google's Perspective project:
<p>The linguistic problem in abuse detection is context. Conversation AI’s comment analysis doesn’t model the entire flow of a discussion; it matches individual comments against learned models of what constitute good or bad comments. For example, comments on the New York Times site might be deemed acceptable if they tend to include common words, phrases, and other features. But Greene says Google’s system frequently flagged comments on articles about Donald Trump as abusive because they quoted him using words that would get a comment rejected if they came from a reader. For these sorts of articles, the Times will simply turn off automatic moderation.

It’s impossible, then, to see Conversation AI faring well on a wide-open site like Twitter. How would it detect the Holocaust allusions in abusive tweets sent to the Jewish journalist Marc Daalder: “This is you if Trump wins,” with a picture of a lamp shade, and “You belong here,” with a picture of a toaster oven? Detecting the abusiveness relies on historical knowledge and cultural context that a machine-learning algorithm could detect only if it had been trained on very similar examples. Even then, how would it be able to differentiate between abuse and the same picture with “This is what I’m buying if Trump wins”? The level of semantic and practical knowledge required is beyond what machine learning currently even aims at.</p>
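

The failure mode Auerbach describes is easy to reproduce, because a per-comment model sees a quotation and an original insult as much the same bag of words. A sketch of scoring both through Perspective's comment-analyzer endpoint - the endpoint, request shape and example comments below are my reading of the public documentation, not anything from the article, and you would need your own API key:

# Scores two comments with Google's Perspective API. The same offensive
# wording appears in both, so a per-comment model cannot tell quotation
# from original abuse - the context problem described above.
# Endpoint and request shape are assumptions from the public docs; verify.
import requests

API_KEY = "YOUR_KEY"  # placeholder
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def toxicity(text):
    body = {"comment": {"text": text},
            "requestedAttributes": {"TOXICITY": {}}}
    resp = requests.post(URL, json=body).json()
    return resp["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

quoting = 'He said "you people are animals" - that quote should disqualify him.'
abusive = "You people are animals."
print(toxicity(quoting), toxicity(abusive))  # likely to score similarly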
ai  abuse  moderation  machinelearning  google  perspective 
february 2017 by charlesarthur
The immortal myths about online abuse • Medium
Anil Dash points out a number of (proven) falsehoods about how to fix abuse within networks, and concludes:
<p>The bottom line, as I wrote half a decade ago, is that if your website is full of assholes, it’s your fault. Same goes for your apps. We are accountable for the communities we create, and if we want to take credit for the magical moments that happen when people connect with each other online, then we have to take responsibility for the negative experiences that we enable.

Our communities are defined by the worst things that we permit to happen. What we allow tells the world who we are.</p>


That "worst things that we permit to happen" is where Twitter is struggling to bring things back to where it wants to be.
abuse 
august 2016 by charlesarthur
Yahoo has a tool that can catch online abuse surprisingly well • MIT Technology Review
Will Knight:
<p>Researchers are, in fact, making some progress toward technology that can help stop the abuse. A team at Yahoo <a href="http://www2016.net/proceedings/proceedings/p145.pdf">recently developed an algorithm capable of catching abusive messages better than any other automated system to date</a>. The researchers created a data set of abuse by collecting messages on Yahoo articles that were flagged as offensive by the company’s own comment editors.

The Yahoo team used a number of conventional techniques, including looking for abusive keywords, punctuation that often seemed to accompany abusive messages, and syntactic clues as to the meaning of a sentence.

But the researchers also applied a more advanced approach to automated language understanding, using a way of representing the meaning of words as vectors with many dimensions. This approach, known as “word embedding,” allows semantics to be processed in a sophisticated way. For instance, even if a comment contains a string of words that have not been identified as abusive, the representations of that string in vector space may be enough to identify it as such.

When everything was combined, the team was able to identify abusive messages (from its own data set) with roughly 90% accuracy.

Catching the remaining 10% may prove tricky. </p>
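

The combination Knight describes - hand-picked keyword and punctuation features alongside averaged word embeddings, fed into one classifier - looks roughly like this. A toy sketch; the feature choices and data are mine, not Yahoo's:

# Sketch of the combined approach: surface features (abusive keywords,
# punctuation) concatenated with averaged word embeddings, then a classifier.
import numpy as np
from gensim.models import Word2Vec
from sklearn.linear_model import LogisticRegression

comments = [
    ("you are a worthless idiot !!!", 1),
    ("idiots like you should shut up", 1),
    ("thanks for the thoughtful article", 0),
    ("interesting point , I had not considered that", 0),
]
tokens = [c.split() for c, _ in comments]
labels = [y for _, y in comments]

# Embeddings learned from the (tiny) corpus; a real system would train on
# far more text or reuse pretrained vectors.
w2v = Word2Vec(tokens, vector_size=25, min_count=1, seed=0)

KEYWORDS = {"idiot", "idiots", "worthless"}

def features(words):
    emb = np.mean([w2v.wv[w] for w in words], axis=0)   # averaged embedding
    keyword_hits = sum(w in KEYWORDS for w in words)    # keyword feature
    exclaims = sum(w.count("!") for w in words)         # punctuation feature
    return np.concatenate([emb, [keyword_hits, exclaims]])

X = np.stack([features(t) for t in tokens])
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict(X))  # should recover the toy labels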
yahoo  abuse  machinelearning 
august 2016 by charlesarthur
South Korea covered up mass abuse, killings of 'vagrants' » Associated Press
Kim Tong-Hyung and Foster Klug:
<p>Choi [Seung-woo] was one of thousands — the homeless, the drunk, but mostly children and the disabled — rounded up off the streets ahead of the 1988 Seoul Olympics, which the ruling dictators saw as international validation of South Korea's arrival as a modern country. An Associated Press investigation shows that the abuse of these so-called vagrants at Brothers, the largest of dozens of such facilities, was much more vicious and widespread than previously known, based on hundreds of exclusive documents and dozens of interviews with officials and former inmates.

 
Yet nobody has been held accountable to date for the rapes and killings at the Brothers compound because of a cover-up orchestrated at the highest levels of government, the AP found. Two early attempts to investigate were suppressed by senior officials who went on to thrive in high-profile jobs; one remains a senior adviser to the current ruling party. Products made using slave labor at Brothers were sent to Europe, Japan and possibly beyond, and the family that owned the institution continued to run welfare facilities and schools until just two years ago.

Even as South Korea prepares for its second Olympics, in 2018, thousands of traumatized former inmates have still received no compensation, let alone public recognition or an apology. The few who now speak out want a new investigation.</p>


The government opposes it on the grounds that the evidence is "too old"; an official said "there have been so many incidents since the Korean War." Astonishing investigation, aided by still-extant government documents and living people.
korea  abuse 
april 2016 by charlesarthur
Schaumburg man acquitted after child porn got mixed up in WWII downloads » DailyHerald.com
Barbara Vitello:
<p>Testifying in his own defense, [Wocjciech] Florczykowski, a 40-year-old electrical engineer, described himself as a history buff with an interest in World War II, specifically battlefield memorabilia. In pursuit of that hobby, Florczykowski said he occasionally travels to battlefields in Poland where he and other military history buffs use metal detectors to unearth everything from medals and canteens to shells, grenades and unexploded land mines.

He testified he was using a program called uTorrent (which enables users to share large files) to research explosives on a laptop supplied to him by his former employer DLS Electronic Systems in Wheeling and inadvertently downloaded pornography.

"What I discovered was completely disgusting. I was not looking for this stuff," he said, adding that he moved the offensive images and other unwanted material to a folder he intended to delete but was fired from his job before he could do so.

Discovering information on explosives on the laptop, his supervisors alerted federal authorities.</p>


And then things got really bad. (Note: on the site itself, you need to answer a survey question to view the content. Can't decide if that's great, terrible, or "never going to scale".) Also: it's child abuse, not child "porn".
bittorrent  abuse 
april 2015 by charlesarthur
One week of harassment on Twitter >> Feminist Frequency
Anita Sarkeesian:
<p>Ever since I began my Tropes vs Women in Video Games project, two and a half years ago, I’ve been harassed on a daily basis by irate gamers angry at my critiques of sexism in video games. It can sometimes be difficult to effectively communicate just how bad this sustained intimidation campaign really is. So I’ve taken the liberty of collecting a week’s worth of hateful messages sent to me on Twitter. The following tweets were directed at my @femfreq account between 1/20/15 and 1/26/15.</p>


I'd really like to see an analysis that looks at when the abusive accounts were created, and what sort of use they are put to if they aren't being abusive. There are two competing hypotheses: one, that it's the work of a small and super-determined coterie who create abusive accounts; two, that it's a large group of real people who are all just jerks. Hard to figure out which would be worse.
twitter  abuse  gamergate 
january 2015 by charlesarthur
