
charlesarthur : algorithms   34

An algorithm wipes clean the criminal pasts of thousands • BBC News
Dave Lee:
<p>This month, a judge in California cleared thousands of criminal records with one stroke of his pen. He did it thanks to a ground-breaking new algorithm that reduces a process that took months to mere minutes. The programmers behind it say: we’re just getting started solving America’s urgent problems…

…It’s estimated there are a million people in California with a cannabis-related charge in their past, an invisible shackle that blocks opportunities to get housing, jobs and thousands of other things most of us would regard as necessities.

Yet fewer than 3% of people thought to qualify have sought to have their records cleared since the passing of the new law. It’s thought many are overwhelmed or intimidated by the complex expungement process. The clinic may only come to town once every few months, if at all. Others simply don’t know expungement is possible.

But now, work to automate this entire ordeal has begun - with remarkable results.

“I formed the opinion that this is really our responsibility,” said George Gascon, San Francisco’s district attorney. Though almost 10,000 people in the city were predicted to be eligible for expungement, just 23 had come forward.

So in January 2018, Mr Gascon pledged to proactively review past marijuana cases - but there was a snag.

“When we started to do this by hand, we recognised very rapidly that this was going to take a long time.”
He enlisted Code For America, a non-profit organisation that works on creating Silicon Valley-esque solutions to problems within the many antiquated systems powering the US government.</p>


Tech for good! It can happen.
algorithms  automation  marijuana 
april 2019 by charlesarthur
Warner enters into distribution partnership with a mood music algorithm • Pitchfork
Matthew Strauss:
<p>Endel is an app that creates personalized music for you based on a mood that you can request. For example, if you would like to enter “Relax Mode,” the algorithm will create music that “calms your mind to create feelings of comfort and safety,” according to the app’s description. This week (March 21), Warner Music Group announced that it has partnered with Endel to distribute 20 albums this year through WMG’s Arts Division.

Endel has already released five albums this year, all part of its Sleep series: Clear Night, Rainy Night, Cloudy Afternoon, Cloudy Night, and Foggy Morning. The next 15 albums will correspond with the app’s other modes: Relax, Focus, and On-the-Go.</p>


And here's an extract from a review on iTunes - note that the app requires a monthly or annual subscription:
<p>Ok, I've had the free trial for a week now, and I feel I can safely say that this app isn't some algorithmic genius, it's simply a pleasing ambient album. For example, there are two distinct tracks on the sleep channel, and that's it, no matter if sometimes a somewhat ancillary ticking clock is playing instead of a white noise filter sweep mimicking the ocean.

There's no shame at all in making a good ambient album. They've done that. But the description of the app is truly misleading and tries to represent this app as something more. And on top of that, it charges an ongoing subscription fee that is not equivalent to the market price of an album, which, again, is what this is. Sorry, but I'm not gonna subscribe and have to renew $25 every year for the latest Carly Rae Jepsen album either.</p>
algorithms  music 
march 2019 by charlesarthur
One year in, Facebook’s big algorithm change has spurred an angry, Fox News-dominated – and very engaged! – News Feed • Nieman Journalism Lab
Laura Hazard Owen:
<p>A new report from social media tracking company NewsWhip shows that the turn toward “meaningful interactions” has:

• pushed up articles on divisive topics like abortion, religion, and guns;<br />• politics rules; and<br />• the “angry” reaction (😡) dominates many pages, with “Fox News driving the most angry reactions of anyone, with nearly double that of anyone else.”

Of course, all that isn’t only Facebook’s fault. The content that dominates the platform now might have risen even without an algorithmic boost. But what’s clear is that Mark Zuckerberg’s January 2018 exhortation that the time spent on Facebook be “time well spent” has not come to pass: Instead, it’s often an angry, reactive place where people go to get worked up and to get scared. Here are the two most-shared Facebook stories of 2019 so far:

<img src="http://www.niemanlab.org/images/Screen-Shot-2019-03-15-at-8.01.15-AM.png" width="100%" />

Engagement — likes, comments, shares, reactions — has risen. For the first few months of this year, it was 50% higher than it was in 2018, and about 10% higher than it was in 2017 (which, remember, included Trump’s inauguration, large-scale protests, and the chaotic early days of his presidency).

<img src="http://www.niemanlab.org/images/Screen-Shot-2019-03-15-at-8.07.43-AM.png" width="100%" />

“There is a possibility that Facebook’s friends and family focus, getting people to read what their networks are sharing rather than what pages are promoting, may have contributed to this increase as people shared articles they enjoyed on the network,” NewsWhip says.</p>


So you're saying it's still a cesspit, and that your "tuning" has made it worse, Mr Zuckerberg?
facebook  algorithms 
march 2019 by charlesarthur
A neural network can learn to organize the world it sees into concepts—just like we do • MIT Technology Review
Karen Hao:
<p>GANs, or generative adversarial networks, are the social-media starlet of AI algorithms. They are responsible for creating the first AI painting ever sold at an art auction and for superimposing celebrity faces on the bodies of porn stars. They work by pitting two neural networks against each other to create realistic outputs based on what they are fed. Feed one lots of dog photos, and it can create completely new dogs; feed it lots of faces, and it can create new faces. 

As good as they are at causing mischief, researchers from the MIT-IBM Watson AI Lab realized GANs are also a powerful tool: because they paint what they’re “thinking,” they could give humans insight into how neural networks learn and reason. This has been something the broader research community has sought for a long time—and it’s become more important with our increasing reliance on algorithms.

“There’s a chance for us to learn what a network knows from trying to re-create the visual world,” says David Bau, an MIT PhD student who worked on the project.

So the researchers began probing a GAN’s learning mechanics by feeding it various photos of scenery—trees, grass, buildings, and sky. They wanted to see whether it would learn to organize the pixels into sensible groups without being explicitly told how.

Stunningly, over time, it did. By turning “on” and “off” various “neurons” and asking the GAN to paint what it thought, the researchers found distinct neuron clusters that had learned to represent a tree, for example. Other clusters represented grass, while still others represented walls or doors. In other words, it had managed to group tree pixels with tree pixels and door pixels with door pixels regardless of how these objects changed color from photo to photo in the training set. “These GANs are learning concepts very closely reminiscent of concepts that humans have given words to,” says Bau.</p>


OK, so it can group them as concepts. Is that the same as having a concept of them, though?
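For a concrete sense of the "pitting two neural networks against each other" part, here's a minimal sketch of an adversarial training loop - my toy code in PyTorch, standing in for the MIT-IBM Watson Lab's scene-generating GAN; the "real data" here is just a 2-D Gaussian rather than scenery photos:
<pre><code>
import torch
import torch.nn as nn

# Toy adversarial setup: a generator learns to produce samples the
# discriminator cannot tell apart from "real" data (here, a 2-D Gaussian).
generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
discriminator = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 2) * 0.5 + torch.tensor([2.0, -1.0])  # stand-in "real" samples
    fake = generator(torch.randn(64, 8))                         # generated samples

    # Discriminator: label real samples 1, generated samples 0
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1)) +
              loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator: try to get its samples labelled 1, i.e. fool the discriminator
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
</code></pre>
The dissection work described above then goes a step further, switching individual units in a trained generator on and off and watching what disappears from the painted scene.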
ai  algorithms  artificialintelligence 
january 2019 by charlesarthur
The "Yellow Vest" riots in France are what happens when Facebook gets involved with local news • Buzzfeed News
Ryan Broderick and Jules Darmanin:
<p>what’s happening right now in France isn’t happening in a vacuum. The Yellow Vests movement — named for the protesters’ brightly colored safety vests — is a beast born almost entirely from Facebook. And it’s only getting more popular. Recent polls indicate the majority of France now supports the protesters. The Yellow Vests communicate almost entirely on small, decentralized Facebook pages. They coordinate via memes and viral videos. Whatever gets shared the most becomes part of their platform.

Due to the way algorithm changes made earlier this year interacted with the fierce devotion in France to local and regional identity, the country is now facing some of the worst riots in many years — and in Paris, the worst in half a century.

This isn’t the first time real-life violence has followed a viral Facebook storm and it certainly won’t be the last. Much has already been written about the anti-Muslim Facebook riots in Myanmar and Sri Lanka and the WhatsApp lynchings in Brazil and India. Well, the same process is happening in Europe now, on a massive scale. Here’s how Facebook tore France apart…

…These pages [fuelling the protests] weren’t exploding in popularity by coincidence. The same month that [a Portuguese bricklayer called Leandro Antonio] Nogueira set up his first [Facebook protest] group [in January], Mark Zuckerberg announced two algorithm changes to Facebook’s News Feed that would “prioritize news that is trustworthy, informative, and local.” The updates were meant to combat sensationalism, misinformation, and political polarization by emphasizing local networks over publisher pages. One change upranks news from local publishers only. Another change made the same month prioritizes posts from friends and family, hoping to inspire back-and-forth discussion in the comments of posts.</p>


Facebook is now so powerful that little tweaks to its Newsfeed can destabilise countries by conjoining all the most crazy conspiracy theorists. Happy holidays, everyone!
facebook  socialmedia  algorithms  france 
december 2018 by charlesarthur
The statistical rule of three • John Cook Consulting
<p>suppose you are testing children for perfect pitch. You’ve tested 100 children so far and haven’t found any with perfect pitch. Do you conclude that children don’t have perfect pitch? You know that some do because you’ve heard of instances before. Your data suggest perfect pitch in children is at least rare. But how rare?

The rule of three gives a quick and dirty way to estimate these kinds of probabilities. It says that if you’ve tested N cases and haven’t found what you’re looking for, a reasonable estimate is that the probability is less than 3/N. So in our proofreading example, if you haven’t found any typos in 20 pages, you could estimate that the probability of a page having a typo is less than 15%. In the perfect pitch example, you could conclude that fewer than 3% of children have perfect pitch.

Note that the rule of three says that your probability estimate goes down in proportion to the number of cases you’ve studied. If you’d read 200 pages without finding a typo, your estimate would drop from 15% to 1.5%. But it doesn’t suddenly drop to zero. I imagine most people would harbor a suspicion that there may be typos even though they haven’t seen any in the first few pages. But at some point they might say “I’ve read so many pages without finding any errors, there must not be any.” The situation is a little different with the perfect pitch example, however, because you may know before you start that the probability cannot be zero.

If the sight of math makes you squeamish, you might want to stop reading now. Just remember that if you haven’t seen something happen in N observations, a good estimate is that the chances of it happening are less than 3/N.</p>


I had never heard of this.
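The approximation is easy to sanity-check: if you see zero events in N trials, the exact 95% upper bound on the per-trial probability comes from solving (1 − p)^N = 0.05, and 3/N tracks it closely. A minimal sketch (the function names are mine):
<pre><code>
def exact_upper_bound(n, confidence=0.95):
    # Largest p still consistent (at the given confidence level) with
    # observing zero events in n independent trials: (1 - p)**n = 1 - confidence
    return 1 - (1 - confidence) ** (1 / n)

def rule_of_three(n):
    # The quick-and-dirty estimate from the article
    return 3 / n

for n in (20, 100, 200, 1000):
    print(f"n={n:5d}  exact={exact_upper_bound(n):.4f}  3/n={rule_of_three(n):.4f}")
</code></pre>
For the article's examples: 20 typo-free pages gives an exact bound of about 13.9% against the rule's 15%, and 100 children with no perfect pitch gives just under 3% against the rule's 3%.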
algorithms  statistics 
december 2018 by charlesarthur
Airlines face crackdown on use of ‘exploitative’ algorithm that splits up families on flights • The Independent
Helen Coffey:
<p>“They’ve had the temerity to split the passengers up, and when the family want to travel together they are charged more.”

It’s an issue that will be looked at by the Centre for Data Ethics and Innovation, launched by the government this week to identify and address areas where clearer guidelines and regulation are needed in how data is used.

Passengers first started noticing they were being split up from their party if they didn’t pay more for allocated seating in June 2017, with Ryanair most commonly associated with the practice.

However, Europe’s biggest airline never admitted to changing the way seating was allocated, insisting there was no change and saying that those who don’t pay to choose a seat are “randomly” assigned one.

The <a href="https://www.independent.co.uk/travel/news-and-advice/airline-seats-caa-ryanair-easyjet-british-airways-seating-allocations-splitting-middle-seats-a8193431.html">Civil Aviation Authority (CAA) has been investigating the issue of paid-for seat allocation</a> for more than a year.

Its latest research, released in October 2018, stated that the likelihood of passengers being split up if they didn’t pay to sit together varied wildly between airlines.

In a survey of 4,296 people who had flown as part of a group, the CAA found that travellers were most likely to be split from their party when flying with Ryanair – 35% of those surveyed were separated having opted not to pay more for allocated seating. </p>



Flybe and TUI Airways were the least likely to break up groups, with just 12 per cent of people separated.
airlines  algorithms  seating 
november 2018 by charlesarthur
Chess, AI and Asia's future • Nikkei Asian Review
James Crabtree:
<p>"When I tell someone in finance that I'm a chess expert their eyes light up," I was told by grandmaster Daniel King, a well-known commentator on YouTube. "They are just so fascinated by AI. All they see is dollar signs."

AI engines play in adventurous new styles: neither like normal computers nor humans, but instead in what DeepMind founder Demis Hassabis calls "a third, almost alien, way." By this he means the machines often play improbable moves that look peculiar to human eyes but turn out to be brilliant.

Oddly, this kind of machine skill has only increased interest in human competition. Chess computers help humans improve. They make the game more entertaining for analysts and spectators too. Former champion Garry Kasparov has even pioneered "cyborg chess," a variant where a human and a computer work in tandem, playing against other man-and-machine teams. Typically, the result is better than either might manage on their own.

It is just this marriage of computers and people that holds wider economic lessons, given future productivity will grow most quickly where humans and machines collaborate. This could be physical "co-bots" supporting workers in factories or retail outlets. But it could also involve advanced algorithms providing unbiased data to improve human decision-making, or machines which take over routine tasks to let humans focus on those involving advanced judgment.

Skills of this kind should bring advantages to Asia, with its youthful population and tech-savvy employees.</p>


(A reminder of the <a href="https://www.chess.com/news/view/google-s-alphazero-destroys-stockfish-in-100-game-match">AlphaZero v Stockfish match</a>: 28-72-0 win/draw/loss.) I'd wonder about that "unbiased data to improve human decision-making" bit though.
algorithms  chess 
november 2018 by charlesarthur
Instagram has a drug problem. Its algorithms make it worse • The Washington Post
Elizabeth Dwoskin:
<p>Recent searches on Instagram, which is owned by Facebook, for hashtags of the names of drugs — such as #oxy, #percocet, #painkillers, #painpills, #oxycontin, #adderall and #painrelief — revealed thousands of posts by a mash-up of people grappling with addiction, those bragging about their party­going lifestyle and enticements from drug dealers.

Following the dealer accounts, or even liking one of the dealer posts, prompted Instagram’s algorithms to work as designed — in this case, by filling up a person’s feed with posts for drugs, suggesting other sellers to follow and introducing new hashtags, such as #xansforsale. Ads from some of the country’s largest brands, including Target, Chase and Procter & Gamble, as well as Facebook’s own video streaming service, appeared next to posts illegally selling pills.

Even as top executives from Facebook and Twitter, which has also long struggled with posts offering drugs illegally, promised earlier this month in a congressional hearing that they were cracking down on sales of opioids and other drugs, their services appeared to be open marketplaces for advertising such content. Facebook’s chief operating officer, Sheryl Sandberg, said her company was “firmly against” such activity. Twitter chief executive Jack Dorsey said he was “looking deeply” at how drug-selling spreads on the site.

But activists and other groups have warned tech companies about illegal drug sales on their platforms for years. In recent months, lawmakers, the Food and Drug Administration and some advertisers have stepped into the fray. In April, FDA Commissioner Scott Gottlieb charged Internet companies with not “taking practical steps to find and remove opioid listings.” Sen. Joe Manchin III (D-W.Va.) called social media companies “reckless,” saying, “It is past time they put human life above profit and finally institute measures that crack down on these harmful practices, preventing the sale of illegal narcotics on or through their platforms.”</p>
instagram  algorithms  drugs 
september 2018 by charlesarthur
Evolutionary algorithm outperforms deep-learning machines at video games • MIT Technology Review
<p>Many genomes [of evolving code, where "good" code is reused] ended up playing entirely new gaming strategies, often complex ones. But they sometimes found simple ones that humans had overlooked.

For example, when playing Kung Fu Master, the evolutionary algorithm discovered that the most valuable attack was a crouch-punch. Crouching is safer because it dodges half the bullets aimed at the player and also attacks anything nearby. The algorithm’s strategy was to repeatedly use this maneuver with no other actions. In hindsight, using the crouch-punch exclusively makes sense.

That surprised the human players involved in the study. “Employing this strategy by hand achieved a better score than playing the game normally, and the author now uses crouching punches exclusively when attacking in this game,” say Wilson and co.

Overall, the evolved code played many of the games well, even outperforming humans in games such as Kung Fu Master. Just as significantly, the evolved code is just as good as many deep-learning approaches and outperforms them in games like Asteroids, Defender, and Kung Fu Master.

It also produces a result more quickly. “While the programs are relatively small, many controllers are competitive with state-of-the-art methods for the Atari benchmark set and require less training time,” say Wilson and co.

The evolved code has another advantage. Because it is small, it is easy to see how it works. By contrast, a well-known problem with deep-learning techniques is that it is sometimes impossible to know why they have made particular decisions, and this can have practical and legal ramifications.</p>
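

As a rough illustration of the evolutionary loop being described, here is a bare-bones keep-the-best-and-mutate sketch. The actual work evolves small programs rather than a flat vector of numbers, and `play_game` here is a hypothetical fitness function (an Atari episode score, say) that you would have to supply:
<pre><code>
import random

def mutate(genome, rate=0.05):
    # Copy the genome, re-randomising a small fraction of its genes
    return [g if random.random() > rate else random.random() for g in genome]

def evolve(play_game, genome_length=40, population=32, elites=4, generations=200):
    pop = [[random.random() for _ in range(genome_length)] for _ in range(population)]
    for _ in range(generations):
        ranked = sorted(pop, key=play_game, reverse=True)
        parents = ranked[:elites]                        # keep the best performers
        pop = parents + [mutate(random.choice(parents))  # refill by mutating elites
                         for _ in range(population - elites)]
    return max(pop, key=play_game)
</code></pre>
Degenerate-looking but effective strategies, such as the crouch-punch loop, fall out naturally from this process: anything that raises the score gets kept and copied.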
ai  algorithm  algorithms  deeplearning 
july 2018 by charlesarthur
YouTube struggles with plan to clean up mess that made it rich • Bloomberg
Lucas Shaw and Mark Bergen:
<p>Much like Facebook and Twitter, however, YouTube has long prioritized growth over safety. Hany Farid, senior adviser to the Counter Extremism Project, which works with internet companies to stamp out child pornography and terrorist messaging, says that of the companies he works with, “Google is the least receptive.” With each safety mishap, he says, YouTube acts freshly shocked. “It’s like a Las Vegas casino saying, ‘Wow, we can’t believe people are spending 36 hours in a casino.’ It’s designed like that.”

That’s not how Google or YouTube see things. Over the past year, YouTube has made the most sweeping changes since its early days, removing videos it deemed inappropriate and stripping away the advertising from others. But to date, both the video-sharing service and its corporate parent have struggled to articulate how their plan will make things better. Only recently, as Washington has edged closer to training its regulatory eye on Silicon Valley, did YouTube executives agree to walk Bloomberg Businessweek through its proposed fixes and explain how the site got to this point. Conversations with more than a dozen people at YouTube, some of whom asked not to be identified while discussing sensitive internal matters, reveal a company still grappling to reach a balance between contributors’ freedom of expression and society’s need to protect itself.

“The whole world has become a lot less stable and more polarized,” says Robert Kyncl, YouTube’s chief business officer. “Because of that, our responsibility is that much greater.”

In interviews at the San Bruno complex, YouTube executives often resorted to a civic metaphor: YouTube is like a small town that’s grown so large, so fast, that its municipal systems—its zoning laws, courts, and sanitation crews, if you will—have failed to keep pace. “We’ve gone from being a small village to being a city that requires proper infrastructure,” Kyncl says. “That’s what we’ve been building.”

But minimal infrastructure was a conscious choice, according to Hunter Walk, who ran YouTube’s product team from 2007 to 2011. When the markets tanked in 2008, Google tightened YouTube’s budgets and took staffers off community safety efforts—such as patrolling YouTube’s notorious comments section—in favor of projects with better revenue potential. “For me, that’s YouTube’s original sin,” Walk says.

“Trust and safety has always been a top priority. This was true 10 years ago and it remains true today,” YouTube said in an emailed statement.</p>
youtube  algorithms  content 
april 2018 by charlesarthur
Letting neural networks be weird: when algorithms surprise us • Ai Weirdness
Janelle Shane:
<p>machine learning algorithms are widely used for facial recognition, language translation, financial modeling, image recognition, and ad delivery. If you’ve been online today, you’ve probably interacted with a machine learning algorithm.

But it doesn’t always work well. Sometimes the programmer will think the algorithm is doing really well, only to look closer and discover it’s solved an entirely different problem from the one the programmer intended. For example, I looked earlier at an image recognition algorithm that was supposed to recognize sheep but learned to recognize grass instead, and <a href="http://aiweirdness.com/post/171451900302/do-neural-nets-dream-of-electric-sheep">kept labeling empty green fields as containing sheep</a>.

<img src="https://78.media.tumblr.com/f8f13fd86e3453be3ee8744f94c0995f/tumblr_inline_p720zk5db01rl9zu7_500.jpg" width="100%" />

When machine learning algorithms solve problems in unexpected ways, programmers find them, okay yes, annoying sometimes, but often purely delightful.

So delightful, in fact, that in 2018 a group of researchers wrote a fascinating paper that collected dozens of anecdotes that “elicited surprise and wonder from the researchers studying them”. The <a href="https://arxiv.org/pdf/1803.03453.pdf">paper is well worth reading</a>, as are the original references, but here are several of my favorite examples.</p>


There are so many, but I think my favourite is:
<p>In one of the more chilling examples, there was an algorithm that was supposed to figure out how to apply a minimum force to a plane landing on an aircraft carrier. Instead, it discovered that if it applied a *huge* force, it would overflow the program’s memory and would register instead as a very *small* force. The pilot would die but, hey, perfect score.</p>


"OK, engaging machine learning autopilot for landing..."
artificialintelligence  algorithms  learning  ai 
april 2018 by charlesarthur
Artwork personalization at Netflix • Medium
Ashok Chandrashekar, Fernando Amat, Justin Basilico and Tony Jebara, on the Netflix Techblog:
<p>For many years, the main goal of the Netflix personalized recommendation system has been to get the right titles in front of each of our members at the right time. With a catalog spanning thousands of titles and a diverse member base spanning over a hundred million accounts, recommending the titles that are just right for each member is crucial. But the job of recommendation does not end there. Why should you care about any particular title we recommend? What can we say about a new and unfamiliar title that will pique your interest? How do we convince you that a title is worth watching? Answering these questions is critical in helping our members discover great content, especially for unfamiliar titles. One avenue to address this challenge is to consider the artwork or imagery we use to portray the titles. If the artwork representing a title captures something compelling to you, then it acts as a gateway into that title and gives you some visual “evidence” for why the title might be good for you.


<img src="https://cdn-images-1.medium.com/max/1600/0*038O1qN_N7lC3CGD." width="100%" />
A Netflix homepage without artwork. This is how historically our recommendation algorithms viewed a page.

<img src="https://cdn-images-1.medium.com/max/1600/1*xwD8rVHPapbfmrl6AIbQbA.png" width="100%" />
Artwork for Stranger Things that each receive over 5% of impressions from our personalization algorithm. Different images cover a breadth of themes in the show to go beyond what any single image portrays.</p>


Breathtaking.
ai  netflix  algorithms  marketing 
april 2018 by charlesarthur
Facebook may stop the data leaks, but it’s too late: Cambridge Analytica’s models live on • MIT Technology Review
Jacob Metcalf:
<p>There has been plenty of skeptical analysis of just how useful SCL’s psychographic tools were. In contrast to Nix’s flamboyant salesmanship of the method, critics have routinely responded by calling it snake oil. Where Cambridge Analytica was hired to run digital campaigns, it bungled some basic operations (especially for Ted Cruz, whose website it failed to launch on time). And SCL staff often rubbed others working on Trump’s digital campaign the wrong way.

However, none of Cambridge Analytica’s many Republican critics has yet said its models were not useful. Moreover, <a href="http://adage.com/article/campaign-trail/trump-camp-s-inexperience-set-stage-rnc-data-win/307105/">some reporting indicates</a> that the models were used primarily to target voters in swing states and to hone Trump’s stump speeches in those states. That shows that the campaign understood that these models are most useful when applied in a focused manner. They may have helped in constructing Trump’s lose-the-electorate, win-the-electoral-college strategy.

And while they have their limitations, behavioral profiles are very good at estimating demographics, including political leanings, gender, location, and ethnicity. A behavioral profile of seemingly innocuous “likes” paired with other data sets is both a good-enough map to far more information about a potential voter, and a way to predict what types of content they might find engaging.

Ultimately, then, if we strip out the context of the 2016 election and the odd correlations that these algorithms find in Facebook behavioral data, the role that psychometrics plays is actually fairly straightforward: it is another criterion among many by which to create tranches of voters and learn from iterative feedback about how those tranches respond to ads.</p>
facebook  data  bigdata  algorithms 
april 2018 by charlesarthur
Using self-organizing maps to solve the Traveling Salesman Problem •
Diego Vicente:
<p><img src="https://diego.codes/img/som-tsp-uruguay.gif" width="100%" />
To evaluate the implementation, we will use some instances provided by the aforementioned National Traveling Salesman Problem library. These instances are inspired by real countries and also include the optimal route for most of them, which is a key part of our evaluation. The evaluation strategy consists of running several instances of the problem and studying some metrics:

• Execution time invested by the technique to find a solution.<br />• Quality of the solution, measured as a function of the optimal route: a route that we say is "10% longer than the optimal route" is exactly 1.1 times the length of the optimal one.

The parameters used in the evaluation were found by parametrization of the technique, using those provided in previous works as a starting point. These parameters are:

• A population size of 8 times the cities in the problem.<br />• An initial learning rate of 0.8, with a discount rate of 0.99997.<br />• An initial neighbourhood of the number of cities, decayed by 0.9997.

These parameters were applied to the following instances:

• Qatar, containing 194 cities with an optimal tour of 9352.<br />• Uruguay, containing 734 cities with an optimal tour of 79114.<br />• Finland, containing 10639 cities with an optimal tour of 520527.<br />• Italy, containing 16862 cities with an optimal tour of 557315.</p>


It gets pretty close to the ideal - within 10% on a couple. (Worse on others.) The GIF above is for Uruguay, where it came within 7.5% of the ideal.
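For a feel of how the technique works, here's a minimal sketch in the spirit of the post (my code, not Vicente's): a ring of neurons is repeatedly pulled towards randomly chosen cities, the winning neuron and its neighbours moving the most, with the learning rate and neighbourhood decayed using the figures quoted above. It assumes `cities` is an (n, 2) NumPy array with coordinates normalised to the unit square:
<pre><code>
import numpy as np

def som_tsp(cities, iterations=25000, seed=0):
    """Tiny self-organizing-map sketch for TSP, using the parameters quoted in the post."""
    rng = np.random.default_rng(seed)
    n_cities = len(cities)
    n_neurons = 8 * n_cities                      # population size: 8x the number of cities
    neurons = rng.random((n_neurons, 2))          # ring of neurons scattered in the unit square
    learning_rate = 0.8                           # initial learning rate
    radius = n_cities                             # initial neighbourhood size

    for _ in range(iterations):
        city = cities[rng.integers(n_cities)]
        winner = np.argmin(np.linalg.norm(neurons - city, axis=1))
        # Gaussian neighbourhood around the winner, wrapping around the ring
        deltas = np.abs(np.arange(n_neurons) - winner)
        distances = np.minimum(deltas, n_neurons - deltas)
        gaussian = np.exp(-(distances ** 2) / (2 * (max(radius, 1) ** 2)))
        neurons += learning_rate * gaussian[:, None] * (city - neurons)
        learning_rate *= 0.99997                  # learning-rate decay from the post
        radius *= 0.9997                          # neighbourhood decay from the post

    # Read the tour off the ring: visit cities in the order of their nearest neuron
    order = np.argsort([np.argmin(np.linalg.norm(neurons - c, axis=1)) for c in cities])
    return order
</code></pre>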
maps  algorithms  travellingsalesman 
january 2018 by charlesarthur
Chasm of comprehension • Remains of the Day
Eugene Wei:
<p> In the future, diagnosing why Autopilot or other self-driving algorithms made certain choices will likely only become more and more challenging as the algorithms rise in complexity.

At times, when I have my Tesla in Autopilot mode, the car will do something bizarre and I'll take over. For example, if I drive to work out of San Francisco, I have to exit left and merge onto the 101 using a ramp that arcs to the left almost 90 degrees. There are two lanes on that ramp, but even if I start in the far left lane and am following a car in front of me my car always seems to try to slide over to the right lane.

Why does it do that? My only mental model is the one I know, which is my own method for driving. I look at the road, look for lane markings and other cars, and turn a steering wheel to stay in a safe zone in my lane. But thinking that my car drives using that exact process says more about my limited imagination than anything else because Autopilot doesn't drive the way humans do. This becomes evident when you look at videos showing how a self-driving car "sees" the road.

When I worked at Flipboard, we moved to a home feed that tried to select articles for users based on machine learning. That algorithm continued to be tweaked and evolved over time, trying to optimize for engagement. Some of that tweaking was done by humans, but a lot of it was done by ML.

At times, people would ask why a certain article had been selected for them. Was it because they had once read a piece on astronomy? Dwelled for a few seconds on a headline about NASA? By that point, the algorithm was so complex that it was impossible to offer an explanation that made intuitive sense to a human; there were so many features and interactions in play.

As more of the world comes to rely on artificial intelligence, and as AI makes great advances, we will walk to the edge of a chasm of comprehension.</p>
programming  ai  algorithms 
october 2017 by charlesarthur
Russians took a page from corporate America by using Facebook tool to ID and influence voters • The Washington Post
Elizabeth Dwoskin, Craig Timberg and Adam Entous:
<p>Russian operatives set up an array of misleading Web sites and social media pages to identify American voters susceptible to propaganda, then used a powerful Facebook tool to repeatedly send them messages designed to influence their political behavior, say people familiar with the investigation into foreign meddling in the U.S. election.

The tactic resembles what American businesses and political campaigns have been doing in recent years to deliver messages to potentially interested people online. The Russians exploited this system by creating English-language sites and Facebook pages that closely mimicked those created by U.S. political activists.

The Web sites and Facebook pages displayed ads or other messages focused on such hot-button issues as illegal immigration, African American political activism and the rising prominence of Muslims in the United States. The Russian operatives then used a Facebook “retargeting” tool, called Custom Audiences, to send specific ads and messages to voters who had visited those sites, say people familiar with the investigation who spoke on the condition of anonymity to share details from an ongoing investigation.</p>


Facebook is in so much trouble.
bias  bots  algorithms  facebook  election 
october 2017 by charlesarthur
Anatomy of a moral panic • Idle Words
Maciej Ceglowski on the "Amazon helps you build bombs" story:
<p>just how many people does Channel 4 imagine are buying bombs online? For a recommendations algorithm to be suggesting shrapnel to sulfur shoppers implies that thousands or tens of thousands of people are putting these items together in their shopping cart. So where are all these black powder bombers? And why on earth would an aspiring bomber use an online shopping cart tied to their real identity?

A more responsible report would have clarified that black powder, a low-velocity explosive, is not a favored material for bomb making. Other combinations are just as easy to make, and pack a bigger punch.

The bomb that blew up the Federal building in Oklahoma City, for example, was a mixture of agricultural fertilizer and racing fuel. Terrorists behind the recent London bombings have favored a homemade explosive called TATP that can be easily synthesized from acetone, a ubiquitous industrial solvent.

Those bombers who do use black powder find it easier to just scrape it out of commercially available fireworks, which is how the Boston Marathon bomber obtained the explosives for his device. The only people carefully milling the stuff from scratch, after buying it online in an easily traceable way, are harmless musket owners and rocket nerds who will now face an additional level of hassle.

The shoddiness of this story has not prevented it from spreading like a weed to other media outlets, accumulating errors as it goes.

The New York Times omits the bogus shrapnel claim, but falsely describes thermite as “two powders that explode when mixed together in the right proportions and then ignited.” (Thermite does not detonate.)</p>


And more where those came from. I have one issue: he thinks bad reporting comes from the desire to get clicks. It's been around a lot, lot longer than the internet. But like all of his articles, this one has killer blows. (Thanks John Naughton for the link.)
amazon  internet  journalism  algorithms 
september 2017 by charlesarthur
Driverless ed-tech: the history of the future of automation in education • Hack Education
Audrey Watters:
<p>We can see the “driverless university” already under development perhaps at the Math Emporium at Virginia Tech, which The Washington Post once described as “the Wal-Mart of higher education, a triumph in economy of scale and a glimpse at a possible future of computer-led learning.”

Eight thousand students a year take introductory math in a space that once housed a discount department store. Four math instructors, none of them professors, lead seven courses with enrollments of 200 to 2,000. Students walk to class through a shopping mall, past a health club and a tanning salon, as ambient Muzak plays.

The pass rates are up. That’s good traffic data, I suppose, if you’re obsessed with moving bodies more efficiently along the university’s pre-determined “map.” Get the students through pre-calc and other math requirements without having to pay for tenured faculty or, hell, even adjunct faculty. “In the Emporium, the computer is teacher,” The Washington Post tells us.
“Students click their way through courses that unfold in a series of modules.” Of course, students who “click their way through courses” seem unlikely to develop a love for math or a deep understanding of math. They’re unlikely to become math majors. They’re unlikely to become math graduate students. They’re unlikely to become math professors. (And perhaps you think this is a good thing if you believe there are too many mathematicians or if you believe that the study of mathematics has nothing to offer a society that seems increasingly obsessed with using statistics to solve every single problem that it faces or if you think mathematical reasoning is inconsequential to twenty-first century life.)

Students hate the Math Emporium, by the way.</p>


The whole talk deals with the way that libertarianism is woven into so much of Silicon Valley's thinking.
ai  education  algorithms 
april 2017 by charlesarthur
How Google's search algorithm spreads false information with a rightwing bias • The Guardian
Olivia Solon and Sam Levin:
<p>Google’s search algorithm appears to be systematically promoting information that is either false or slanted with an extreme rightwing bias on subjects as varied as climate change and homosexuality.

Following a recent investigation by the Observer, which found that Google’s search engine prominently suggests neo-Nazi websites and antisemitic writing, the Guardian has uncovered a dozen additional examples of biased search results.

Google’s search algorithm and its autocomplete function prioritize websites that, for example, declare that climate change is a hoax, being gay is a sin, and the Sandy Hook mass shooting never happened…

…there’s the secret recipe of factors that feed into the algorithm Google uses to determine a web page’s importance – embedded with the biases of the humans who programmed it. These factors include how many and which other websites link to a page, how much traffic it receives, and how often a page is updated. People who are very active politically are typically the most partisan, which means that extremist views peddled actively on blogs and fringe media sites get elevated in the search ranking.</p>


It's good that someone is still holding Google's feet to, well, the blow heater (if not the fire) over this. My one quibble would be that the headline oversells it; we don't really know. Google needs to explain itself, rather better than the boilerplate response it gives at the end of the story. (I'll bet that there were lots of anxious requests for "background chats" and "our view" from Google to Solon and Levin.)

There need to be more stories like this from more publications: it's important people understand that Google is not a neutral platform, and isn't promoting truth, just rankings.
google  search  algorithms 
december 2016 by charlesarthur
Google is not ‘just’ a platform. It frames, shapes and distorts how we see the world • The Guardian
Carole Cadwalladr, who last week pointed to research showing how Google's search results are being poisoned by right-wing sites:
<p>One week on, Google is still quietly pretending there’s nothing wrong, while surreptitiously going in and fixing the most egregious examples we published last week. It refused to comment on the search results I found – such as the autocomplete suggestion that “jews are evil”, with eight of its 10 top results confirming they are – and, instead, hand-tweaked a handful of the results. Or, as we call it in the media, it “edited” them. It did this without acknowledging there was any problem or explaining the basis on which it is altering its results, or why, or what its future editorial policy will be. Its search box is no longer suggesting that Jews are still evil but it’s still suggesting “Islam should be destroyed”. And, it is spreading and broadcasting the information as fact.

This is hate speech. It’s lies. It’s racist propaganda. And Google is disseminating it. It is what the data scientist Cathy O’Neil calls a “co-conspirator”. And so are we. Because what happens next is entirely down to us. This is our internet. And we need to make a decision: do we believe it’s acceptable to spread hate speech, to promulgate lies as the world becomes a darker, murkier place?

Because Google is only beyond the reach of the law if we allow it to be.</p>


What happens if Jewish groups begin protesting to Google? If minority groups begin protesting, if enough people - and newspapers - make noise about it? Google can't keep pretending there's nothing going on if it keeps changing results that people have specifically complained about. But it needs its feet held to the fire - and the full article points out how
<p>This is how power works too: the last time I wrote a story that Google didn’t like, I got a call from Peter Barron, Google’s UK head of press, who was at pains to point out the positive and beneficial relationship that Google has with the Guardian Media Group, our owners.</p>


Uh-huh.
google  data  algorithms  media 
december 2016 by charlesarthur
Google, democracy and the truth about internet search • The Guardian
Carole Cadwalladr:
<p>Do you want to know about Hitler? Let’s Google it. “Was Hitler bad?” I type. And here’s Google’s top result: “10 Reasons Why Hitler Was One Of The Good Guys” I click on the link: “He never wanted to kill any Jews”; “he cared about conditions for Jews in the work camps”; “he implemented social and cultural reform.” Eight out of the other 10 search results agree: Hitler really wasn’t that bad.

A few days later, I talk to Danny Sullivan, the founding editor of SearchEngineLand.com. He’s been recommended to me by several academics as one of the most knowledgeable experts on search. Am I just being naive, I ask him? Should I have known this was out there? “No, you’re not being naive,” he says. “This is awful. It’s horrible. It’s the equivalent of going into a library and asking a librarian about Judaism and being handed 10 books of hate. Google is doing a horrible, horrible job of delivering answers here. It can and should do better.”

He’s surprised too. “I thought they stopped offering autocomplete suggestions for religions in 2011.” And then he types “are women” into his own computer. “Good lord! That answer at the top. It’s a featured result. It’s called a “direct answer”. This is supposed to be indisputable. It’s Google’s highest endorsement.” That every woman has some degree of prostitute in her? “Yes. This is Google’s algorithm going terribly wrong.”

I contacted Google about its seemingly malfunctioning autocomplete suggestions and received the following response: “Our search results are a reflection of the content across the web. This means that sometimes unpleasant portrayals of sensitive subject matter online can affect what search results appear for a given query. These results don’t reflect Google’s own opinions or beliefs – as a company, we strongly value a diversity of perspectives, ideas and cultures.”</p>


A stunning article, which also highlights <a href="https://medium.com/@d1gi/the-election2016-micro-propaganda-machine-383449cc1fba#.14h9hafbd">research by Jonathan Albright</a> which found a constellation of fake news sites all trying to harness their very finest Googlejuice.
google  facebook  algorithms  politics 
december 2016 by charlesarthur
Automated inference on criminality using face images • Arxiv
Xiaolin Wu, Xi Zhang:
<p>We study, for the first time, automated inference on criminality based solely on still face images. Via supervised machine learning, we build four classifiers (logistic regression, KNN, SVM, CNN) using facial images of 1856 real persons controlled for race, gender, age and facial expressions, nearly half of whom were convicted criminals, for discriminating between criminals and non-criminals. All four classifiers perform consistently well and produce evidence for the validity of automated face-induced inference on criminality, despite the historical controversy surrounding the topic. Also, we find some discriminating structural features for predicting criminality, such as lip curvature, eye inner corner distance, and the so-called nose-mouth angle.</p>


What?! As Maciej Ceglowski <a href="https://twitter.com/Pinboard/status/799675717860499456">pointed out</a>, this is like Phrenology 2.0. Or perhaps Phrenology AI. It's nuts.
ai  bias  algorithms  crime 
november 2016 by charlesarthur
There is a blind spot in AI research • Nature News & Comment
Kate Crawford and Ryan Calo:
<p>“People worry that computers will get too smart and take over the world, but the real problem is that they’re too stupid and they’ve already taken over the world.” This is how computer scientist Pedro Domingos sums up the issue in his 2015 book The Master Algorithm. Even the many researchers who reject the prospect of a ‘technological singularity’ — saying the field is too young — support the introduction of relatively untested AI systems into social institutions…

…As a first step, researchers — across a range of disciplines, government departments and industry — need to start investigating how differences in communities’ access to information, wealth and basic services shape the data that AI systems train on.

Take, for example, the algorithm-generated ‘heat maps’ used in Chicago, Illinois, to identify people who are most likely to be involved in a shooting. A study published last month indicates that such maps are ineffective: they increase the likelihood that certain people will be targeted by the police, but do not reduce crime.

A social-systems approach would consider the social and political history of the data on which the heat maps are based. This might require consulting members of the community and weighing police data against this feedback, both positive and negative, about the neighbourhood policing. It could also mean factoring in findings by oversight committees and legal institutions. A social-systems analysis would also ask whether the risks and rewards of the system are being applied evenly — so in this case, whether the police are using similar techniques to identify which officers are likely to engage in misconduct, say, or violence.</p>
ai  algorithms  ethics 
october 2016 by charlesarthur
Facebook has repeatedly trended fake news since firing its human editors • The Washington Post
Caitlin Dewey:
<p>in the six weeks since Facebook revamped its Trending system — and a hoax about the Fox News Channel star subsequently trended — the site has repeatedly promoted “news” stories that are actually works of fiction.

As part of a <a href="http://www.washingtonpost.com/news/the-intersect/wp/2016/08/31/facebook-is-tracking-trends-so-were-tracking-facebook/">larger audit of Facebook’s Trending topics</a>, the Intersect logged every news story that trended across four accounts during the workdays from Aug. 31 to Sept. 22. During that time, we uncovered five trending stories that were indisputably fake and three that were profoundly inaccurate. On top of that, we found that news releases, blog posts from sites such as Medium and links to online stores such as iTunes regularly trended. Facebook declined to comment about Trending on the record.

“I’m not at all surprised how many fake stories have trended,” one former member of the team that used to oversee Trending told the Post. “It was beyond predictable by anyone who spent time with the actual functionality of the product, not just the code.”</p>
facebook  bias  algorithms 
october 2016 by charlesarthur
Crash: how computers are setting us up for disaster • The Guardian
Tim Harford:
<p>It is possible to resist the siren call of the algorithms. Rebecca Pliske, a psychologist, found that veteran meteorologists would make weather forecasts first by looking at the data and forming an expert judgment; only then would they look at the computerised forecast to see if the computer had spotted anything that they had missed. (Typically, the answer was no.) By making their manual forecast first, these veterans kept their skills sharp, unlike the pilots on the Airbus 330. However, the younger generation of meteorologists are happier to trust the computers. Once the veterans retire, the human expertise to intuit when the computer has screwed up could be lost.

Many of us have experienced problems with GPS systems, and we have seen the trouble with autopilot. Put the two ideas together and you get the self-driving car. Chris Urmson, who runs Google’s self-driving car programme, hopes that the cars will soon be so widely available that his sons will never need to have a driving licence. There is a revealing implication in the target: that unlike a plane’s autopilot, a self-driving car will never need to cede control to a human being.

Raj Rajkumar, an autonomous driving expert at Carnegie Mellon University, thinks completely autonomous vehicles are 10 to 20 years away. Until then, we can look forward to a more gradual process of letting the car drive itself in easier conditions, while humans take over at more challenging moments.</p>


But as Harford has illustrated with an earlier example from an Air France crash, only giving humans the challenging moments carries dangerous presumptions in itself.
ai  automation  algorithms  computer  safety 
october 2016 by charlesarthur
Video compression seeing slower improvement • EE Times
Rick Merritt:
<p>Video codecs will not deliver historic gains in the foreseeable future unless engineers come up with radical new techniques, according to experts from Google and Microsoft. The good news is pioneering work in areas such as augmented reality is opening new doors and one effort may produce a royalty-free codec in less than a year.

Improvements in video codecs reduce the amount of bandwidth needed to serve video over the Internet. The gains determine the quality of the experience for constrained devices such as smartphones on cellular networks and are key to supporting the business models of cloud-based video services such as Hulu, Netflix and YouTube.

Over the last 20 years, video codecs doubled gains in compression about every decade in a trade-off for ten-fold increases in encoder complexity. Looking forward, gains appear to peak at about 30% with practical results below 25%, said experts at an event here sponsored by the Society of Motion Picture and Television Engineers (SMPTE).</p>

First Moore's Law, now this. And it's not as if batteries are pulling their weight either.
video  algorithms  compression 
june 2016 by charlesarthur
Machine bias: there’s software used across the country to predict future criminals. And it’s biased against blacks • ProPublica
Julia Angwin, Jeff Larson, Surya Mattu and Lauren Kirchner:
<p>Borden and her friend [both aged 18, who had stolen two bicycles] were arrested and charged with burglary and petty theft for the items, which were valued at a total of $80.

Prater was the more seasoned criminal. He had already been convicted of armed robbery and attempted armed robbery, for which he served five years in prison, in addition to another armed robbery charge. Borden had a record, too, but it was for misdemeanors committed when she was a juvenile.

Yet something odd happened when Borden and Prater were booked into jail: A computer program spat out a score predicting the likelihood of each committing a future crime. Borden — who is black — was rated a high risk. Prater — who is white — was rated a low risk.

Two years later, we know the computer algorithm got it exactly backward. Borden has not been charged with any new crimes. Prater is serving an eight-year prison term for subsequently breaking into a warehouse and stealing thousands of dollars’ worth of electronics.</p>


You can read <a href="https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm/">how they analysed the data</a>, and <a href="https://github.com/propublica/compas-analysis">download the dataset</a>.
algorithms  crime  racism 
may 2016 by charlesarthur
I worked on Facebook's Trending team – the most toxic work experience of my life • The Guardian
"Anonymous" (you'll realise why):
<p>Working at Facebook, even as a contractor, was supposed to be the opportunity of a lifetime. It was, instead, the most toxic work experience of my life.

As a curator, my job was to choose what links would appear on the Facebook “trending” box on the right side of a user’s newsfeed. Every day, I sifted through hundreds of topics (or “keywords”) that Facebook told me were trending on the platform. Then I’d choose a story about the keyword, and come up with a headline and a short summary that would appear on the trending box.

Most, if not all, of what you’ve read about Facebook’s Trending team in Gizmodo over the past few weeks has been mischaracterized or taken out of context. There is no political bias that I know of and we were never told to suppress conservative news. There is an extraordinary amount of talent on the team, but poor management, coupled with intimidation, favoritism and sexism, has resulted in a deeply uncomfortable work environment. Employees I worked with were angry, depressed and left voiceless – especially women.</p>


Hell of an article. But it's not just formal work environments that can be toxic in the new era...
algorithms  blog  facebook 
may 2016 by charlesarthur
Investigating the algorithms that govern our lives » Columbia Journalism Review
Chava Gourarie:
<p>[Algorithms are] also anything but objective. “How can they be?” asks Mark Hansen, a statistician and the director of the Brown Institute at Columbia University. “They’re the products of human imagination.” (As an experiment, think about all of the ways you could answer the question: “How many Latinos live in New York?” That’ll give you an idea of how much human judgement goes into turning the real world into math.)

Algorithms are built to approximate the world in a way that accommodates the purposes of their architect, and “embed a series of assumptions about how the world works and how the world should work,” says Hansen.

It’s up to journalists to investigate those assumptions, and their consequences, especially where they intersect with policy. The first step is extending classic journalism skills into a nascent domain: questioning systems of power, and employing experts to unpack what we don’t know. But when it comes to algorithms that can compute what the human mind can’t, that won’t be enough. Journalists who want to report on algorithms must expand their literacy into the areas of computing and data, in order to be equipped to deal with the ever-more-complex algorithms governing our lives.</p>


As Gourarie points out, there aren't yet any journalists with the title of "Algorithm correspondent", but maybe there should be; algorithms are going to be as powerful as politicians, but less easy to interview.
algorithms  journalism  research 
april 2016 by charlesarthur
Guaranteeing the integrity of a register » Government Digital Service
Philip Potter:
<p>There are a number of ways of achieving this but one we have been exploring is based around <a href="http://www.certificate-transparency.org/">Google’s Certificate Transparency</a> project. At its heart, Certificate Transparency depends on the creation of a digitally signed append-only log. The entries in the log are hashed together in a <a href="https://en.wikipedia.org/wiki/Merkle_tree">Merkle tree</a> and the tree is signed. The registrar can append to the log by issuing a new signature. Consumers can request proof that a single entry appears in a particular log. Consumers can also request proof that the registrar has not rewritten history which the registrar can easily provide.</p>


At this point knowledgeable readers will be saying "BLOCKCHAIN! IT'S A BLOCKCHAIN!" And indeed it is. The British government is looking at the feasibility of using blockchain technology for things like registries for everything from restaurant inspections upwards and outwards.
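The core construction is small enough to sketch: hash the entries into a Merkle tree, sign and publish the root, and give any consumer the sibling hashes needed to prove that a single entry is included. A toy Python version (mine; Certificate Transparency's actual RFC 6962 tree differs in detail, for instance by hashing leaves and interior nodes with distinct prefixes):
<pre><code>
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Root hash of a list of leaf hashes (toy construction, not RFC 6962)."""
    level = leaves[:]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate the last node on odd-sized levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def inclusion_proof(leaves, index):
    """Sibling hashes needed to recompute the root starting from leaves[index]."""
    proof, level = [], leaves[:]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        proof.append(level[index ^ 1])     # the other node in this pair
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf, index, proof, root):
    node = leaf
    for sibling in proof:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root

entries = [h(e.encode()) for e in ["inspection-1", "inspection-2", "inspection-3"]]
root = merkle_root(entries)
assert verify(entries[1], 1, inclusion_proof(entries, 1), root)
</code></pre>
Appending a new entry changes the root, so the registrar signs each new root; a consumer who keeps old roots can also demand a consistency proof that history hasn't been rewritten.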
algorithms  blockchain  data  government 
october 2015 by charlesarthur
Facebook's problem: Its algorithms aren't smart enough » Fortune
Mathew Ingram:
<p>Zuckerberg said: “Under the current system, our community reports content that they don’t like, and then we look at it to see if it violates our policies, and if it does we take it down. But part of the problem with that is by the time we take it down, someone has already seen it and they’ve had a bad experience.”

The promise of artificial intelligence, said the Facebook founder, is that some day computers might be able to filter such content more accurately, and allow people to personalize their news-feed. “But right now, we don’t have computers that can look at a photo and understand it in the way that a person can, and tell kind of basic things about it… is this nudity, is this graphic, what is it,” he said.

Zuckerberg said that in the case of the Syrian child lying dead on the beach, he thought that image was very powerful, because it symbolized a huge problem and crystallized a complex social issue. “I happen to think that was a very important photo in the world, because it raised awareness for this issue,” he said. “It’s easy to describe the stats about refugees, but there’s a way that capturing such a poignant photo has of getting people’s attention.”</p>


Any AI that could make the right call about that photograph, though, would be as wise as the super-experienced editors around the world. It would have passed the Turing test and then some.
facebook  ai  algorithms 
september 2015 by charlesarthur
Machine vision algorithm chooses the most creative paintings in history » MIT Technology Review
<p>The job of distinguishing the most creative from the others falls to art historians. And it is no easy task. It requires, at the very least, an encyclopedic knowledge of the history of art. The historian must then spot novel features and be able to recognize similar features in future paintings to determine their influence.

Those are tricky tasks for a human and until recently, it would have been unimaginable that a computer could take them on. But today that changes thanks to the work of Ahmed Elgammal and Babak Saleh at Rutgers University in New Jersey, who say they have a machine that can do just this.

<img src="https://www.technologyreview.com/sites/default/files/images/Art%20creativity.png" width="100%" alt="machine vision view of art" />

They’ve put it to work on a database of some 62,000 pictures of fine art paintings to determine those that are the most creative in history. The results provide a new way to explore the history of art and the role that creativity has played in it.</p>


Can't be long before someone puts a human art historian up against the machine to see who spots the fake. (By the way, there was no byline I could find on the story. Maybe a robot wrote it.)
algorithms  art  machinelearning 
june 2015 by charlesarthur
Racist Camera! No, I did not blink... I'm just Asian! » Flickr
<a href="http://twitter.com/jearle">Jared Earle</a> offered a followup to the <a href="http://priceonomics.com/how-photography-was-optimized-for-white-skin/">story on Kodachrome</a> from Monday, pointing to this photo and commentary from 2009:
<p>We got our Mom a new Nikon S630 digital camera for Mother's Day and I was playing with it during the Angels game we were at on Sunday.

As I was taking pictures of my family, it kept asking "Did someone blink?" even though our eyes were always open.</p>


Surprising, to say the least, that Nikon would have this problem.
<a href="http://content.time.com/time/business/article/0,8599,1954643,00.html">Time picked up the story about a year later</a>, and pointed out more strange examples where systems seemed to have built-in prejudices.

Of course, you can blame "the algorithms". But they don't write themselves.
photo  racism  algorithms 
april 2015 by charlesarthur
