charlesarthur : ai (329)

'A white-collar sweatshop': Google Assistant contractors allege wage theft • The Guardian
Julia Carrie Wong:
<p>to some of the Google employees responsible for making the Assistant work, the tagline of the conference – “Keep making magic” – obscured a more mundane reality: the technical wizardry relies on massive data sets built by subcontracted human workers earning low wages.

“It’s smoke and mirrors if anything,” said a current Google employee who, as with the others quoted in this story, spoke on condition of anonymity because they were not authorized to speak to the press. “Artificial intelligence is not that artificial; it’s human beings that are doing the work.”

The Google employee works on Pygmalion, the team responsible for producing linguistic data sets that make the Assistant work. And although he is employed directly by Google, most of his Pygmalion co-workers are subcontracted temps who have for years been routinely pressured to work unpaid overtime, according to seven current and former members of the team.

These employees, some of whom spoke to the Guardian because they said efforts to raise concerns internally were ignored, alleged that the unpaid work was a symptom of the workplace culture put in place by the executive who founded Pygmalion. That executive, Linne Ha, was fired by Google in March following an internal investigation, Google said. Ha could not be reached for comment before publication. She contacted the Guardian after publication and said her departure had not been related to unpaid overtime.</p>


The depressing reality is how Wizard-of-Oz these assistants seem to be: ignore the temp worker behind the curtain.
google  ai  assistant  bots  machinelearning 
17 hours ago by charlesarthur
Fraudsters used AI to mimic CEO’s voice in unusual cybercrime case • WSJ
Catherine Stupp:
<p>Criminals used artificial intelligence-based software to impersonate a chief executive’s voice and demand a fraudulent transfer of €220,000 ($243,000) in March in what cybercrime experts described as an unusual case of artificial intelligence being used in hacking.

The CEO of a UK-based energy firm thought he was speaking on the phone with his boss, the chief executive of the firm’s German parent company, who asked him to send the funds to a Hungarian supplier. The caller said the request was urgent, directing the executive to pay within an hour, according to the company’s insurance firm, Euler Hermes Group SA.

Euler Hermes declined to name the victim companies.

Law enforcement authorities and AI experts have predicted that criminals would use AI to automate cyberattacks. Whoever was behind this incident appears to have used AI-based software to successfully mimic the German executive’s voice by phone. The UK CEO recognized his boss’ slight German accent and the melody of his voice on the phone, said Rüdiger Kirsch, a fraud expert at Euler Hermes, a subsidiary of Munich-based financial services company Allianz SE.</p>


New technology uses: first for porn, next for crime. It's as predictable as sunrise.
fraud  theft  ai  cybersecurity 
13 days ago by charlesarthur
Talk to Transformer • OpenAI code
:
<p>See how a modern neural network completes your text. Type a custom snippet or try one of the examples. Built by Adam King (@AdamDanielKing) as an easier way to play with OpenAI's new machine learning model. In February, OpenAI unveiled a language model called GPT-2 that generates coherent paragraphs of text one word at a time.

For now OpenAI has decided only to release three smaller versions of it which aren't as coherent but still produce interesting results. This site runs the largest released model, 774M, which is half the size of the full model.</p>


I tried "It was a dark and stormy night." and got back a Hemingway-esque murder mystery. Trying the first two lines of Jabberwocky - "Twas brilling, and the slithey toves/ Did gyre and gimbal in the wabe" produced what looked like Olde English. Have fun!
ai  artificialintelligence  machinelearning  openai 
15 days ago by charlesarthur
Google’s algorithm for detecting hate speech is racially biased • MIT Technology Review
Charlotte Jee:
<p> Researchers <a href="https://homes.cs.washington.edu/~msap/pdfs/sap2019risk.pdf">built two AI systems</a> and tested them on a pair of data sets of more than 100,000 tweets that had been annotated by humans with labels like “offensive,” “none,” or “hate speech.” One of the algorithms incorrectly flagged 46% of inoffensive tweets by African-American authors as offensive. Tests on bigger data sets, including one composed of 5.4 million tweets, found that posts by African-American authors were 1.5 times more likely to be labeled as offensive. When the researchers then tested Google’s Perspective, an AI tool that the company lets anyone use to moderate online discussions, they found similar racial biases.

A hard balance to strike: Mass shootings perpetrated by white supremacists in the US and New Zealand have led to growing calls from politicians for social-media platforms to do more to weed out hate speech. These studies underline just how complicated a task that is. Whether language is offensive can depend on who’s saying it, and who’s hearing it. For example, a black person using the “N word” is very different from a white person using it. But AI systems do not, and currently cannot, understand that nuance.</p>


That's weird. Like, really weird. Unless the corpus had a ton of seriously offensive tweets.
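The bias being measured here is essentially a gap in false-positive rates between author groups. A hedged sketch of that measurement on a hypothetical labelled dataset (the file and column names are mine, not the paper's):

```python
# Compare how often a toxicity classifier flags tweets that human annotators
# rated inoffensive, split by author group. File and column names are hypothetical.
import pandas as pd

df = pd.read_csv("tweets_scored.csv")  # columns: text, author_group, human_label, model_label

inoffensive = df[df["human_label"] == "none"]
false_positive_rate = (
    inoffensive.assign(flagged=inoffensive["model_label"] != "none")
    .groupby("author_group")["flagged"]
    .mean()
)
print(false_positive_rate)  # the paper reports 46% of inoffensive tweets by
                            # African-American authors being flagged as offensive
```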
ai  artificialintelligence  machinelearning  tweets 
4 weeks ago by charlesarthur
Facebook paid contractors to transcribe user audio files • Bloomberg
Sarah Frier:
<p>Facebook has been paying hundreds of outside contractors to transcribe clips of audio from users of its services, according to people with knowledge of the work.

The work has rattled the contract employees, who are not told where the audio was recorded or how it was obtained - only to transcribe it, said the people, who requested anonymity for fear of losing their jobs. They’re hearing Facebook users’ conversations, sometimes with vulgar content, but do not know why Facebook needs them transcribed, the people said.

Facebook confirmed that it had been transcribing users’ audio and said it will no longer do so, following scrutiny into other companies. “Much like Apple and Google, we paused human review of audio more than a week ago,” the company said Tuesday. The company said the users who were affected chose the option in Facebook’s Messenger app to have their voice chats transcribed. The contractors were checking whether Facebook’s artificial intelligence correctly interpreted the messages, which were anonymized.</p>


But of COURSE Facebook was doing this, same as everyone else. Clearly this was an open secret within the voice assistant industry.
facebook  ai  privacy  voice 
4 weeks ago by charlesarthur
DeepMind’s latest AI health breakthrough has some problems • OneZero
Julia Powles:
<p>In one paper, <a href="https://nature.com/articles/s41586-019-1390-1">published in the journal Nature</a>, with co-authors from Veterans Affairs and University College London, DeepMind claimed its biggest healthcare breakthrough to date: that artificial intelligence (AI) can predict acute kidney injury (AKI) up to two days before it happens.

AKI — which occurs when the kidneys suddenly stop functioning, leading to a dangerous buildup of toxins in the bloodstream — is alarmingly common among hospital patients in serious care, and contributes to hundreds of thousands of deaths in the United States each year. DeepMind’s bet is that if it can successfully predict which patients are likely to develop AKI well in advance, then doctors could stop or reverse its progression much more easily, saving lives along the way.

Beyond the headlines and the hope in the DeepMind papers, however, are three sobering facts.

First, nothing has actually been predicted–and certainly not before it happens. Rather, what has happened is that DeepMind has taken a windfall dataset of historic incidents of kidney injury in American veterans, plus around 9,000 data-points for each person in the set, and has used a neural network to figure out a pattern between the two.

Second, that predictive pattern only works some of the time. The accuracy rate is 55.8% overall, with a much lower rate the earlier the prediction is made, and the system generates two false positives for every accurate prediction.

Third, and most strikingly of all: the study was conducted almost exclusively on men–or rather, a dataset of veterans that is 93.6% male. </p>


Turns out there are plenty of other anomalies about the data: armed forces veterans are far less likely to have AKI than the general population. But Powles (who has critiqued other DeepMind work) is only just getting started. The rest of the article is a very thorough look at what the papers aren't telling you.
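Powles's second point is worth unpacking: "two false positives for every accurate prediction" means the positive predictive value of an AKI alert is roughly one in three. The arithmetic, for what it's worth:

```python
# Back-of-envelope from the article's figures: two false alarms for every
# correct AKI prediction means only about a third of alerts are real.
true_positives = 1
false_positives = 2
precision = true_positives / (true_positives + false_positives)
print(f"precision of an alert ≈ {precision:.0%}")  # ≈ 33%
```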
health  google  ai  healthcare  machinelearning  deepmind 
5 weeks ago by charlesarthur
The hidden costs of automated thinking • The New Yorker
Jonathan Zittrain:
<p>As knowledge generated by machine-learning systems is put to use, these kinds of gaps [between what is understood, and what is possible - such as drugs whose mechanism isn't understood] may prove consequential. Health-care A.I.s have been successfully trained to classify skin lesions as benign or malignant. And yet—as a team of researchers from Harvard Medical School and M.I.T. showed, in a paper published this year—they can also be tricked into making inaccurate judgments using the same techniques that turn cats into guacamole. (Among other things, attackers might use these vulnerabilities to commit insurance fraud.) Seduced by the predictive power of such systems, we may stand down the human judges whom they promise to replace. But they will remain susceptible to hijacking—and we will have no easy process for validating the answers they continue to produce.

Could we create a balance sheet for intellectual debt—a system for tracking where and how theoryless knowledge is used? Our accounting could reflect the fact that not all intellectual debt is equally problematic. If an A.I. produces new pizza recipes, it may make sense to shut up and enjoy the pizza; by contrast, when we begin using A.I. to make health predictions and recommendations, we’ll want to be fully informed.</p>


That's the tone of the article, but the fine detail is much more nuanced.
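The "cats into guacamole" trick Zittrain is referring to is the adversarial-example attack: nudge every pixel a tiny amount in the direction that increases the classifier's loss. A minimal sketch of the simplest version (the fast gradient sign method), run against a stock image classifier rather than any medical system:

```python
# Fast gradient sign method (FGSM): a one-step adversarial perturbation.
# Illustrative only - a stock ImageNet model stands in for the medical
# classifiers discussed in the article.
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

def fgsm(image, label, epsilon=0.01):
    """Return a perturbed copy of `image` (shape 1x3xHxW, values in [0,1])."""
    image = image.clone().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel slightly in the direction that increases the loss.
    return (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

x = torch.rand(1, 3, 224, 224)       # stand-in for a real image
y = model(x).argmax(dim=1)           # treat the model's own prediction as the label
x_adv = fgsm(x, y)
print(y.item(), model(x_adv).argmax(dim=1).item())  # the two predictions often differ
```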
ai  research  machinelearning 
7 weeks ago by charlesarthur
We tested Europe’s new digital lie detector. It failed • The Intercept
Ryan Gallagher and Ludovica Jona:
<p>Prior to your arrival at the airport, using your own computer, you log on to a website, upload an image of your passport, and are greeted by an avatar of a brown-haired man wearing a navy blue uniform.

“What is your surname?” he asks. “What is your citizenship and the purpose of your trip?” You provide your answers verbally to those and other questions, and the virtual policeman uses your webcam to scan your face and eye movements for signs of lying.

At the end of the interview, the system provides you with a QR code that you have to show to a guard when you arrive at the border. The guard scans the code using a handheld tablet device, takes your fingerprints, and reviews the facial image captured by the avatar to check if it corresponds with your passport. The guard’s tablet displays a score out of 100, telling him whether the machine has judged you to be truthful or not.

A person judged to have tried to deceive the system is categorized as “high risk” or “medium risk,” dependent on the number of questions they are found to have falsely answered. Our reporter — the first journalist to test the system before crossing the Serbian-Hungarian border earlier this year — provided honest responses to all questions but was deemed to be a liar by the machine, with four false answers out of 16 and a score of 48. The Hungarian policeman who assessed our reporter’s lie detector results said the system suggested that she should be subject to further checks, though these were not carried out…

…The results of the test are not usually disclosed to the traveler; The Intercept obtained a copy of our reporter’s test only after filing a data access request under European privacy laws.</p>


Developed in the UK, and claims to pick up on "micro gestures" in facial expressions, etc. As if a virtual border agent viewing you through a webcam (which you probably won't look at) weren't weird enough already.
data  border  ai 
7 weeks ago by charlesarthur
AI is supercharging the creation of maps around the world • Facebook
Xiaoming Gao, Christopher Klaiber, Drishtie Patel and Jeff Underwood:
<p>For more than 10 years, volunteers with the OpenStreetMap (OSM) project have worked to address that gap by meticulously adding data on the ground and reviewing public satellite images by hand and annotating features like roads, highways, and bridges. It’s a painstaking manual task. But, thanks to AI, there is now an easier way to cover more areas in less time.

With assistance from Map With AI (a new service that Facebook AI researchers and engineers created) a team of Facebook mappers has recently cataloged all the missing roads in Thailand and more than 90 percent of missing roads in Indonesia. Map With AI enabled them to map more than 300,000 miles of roads in Thailand in only 18 months, going from a road network that covered 280,000 miles before they began to 600,000 miles after. Doing it the traditional way — without AI — would have taken another three to five years, estimates Xiaoming Gao, a Facebook research scientist who helped lead the project.

“We were really excited about this achievement because it has proven Map With AI works at a large scale,” Gao says.

Starting today, anyone will be able to use the Map With AI service, which includes access to AI-generated road mappings in Afghanistan, Bangladesh, Indonesia, Mexico, Nigeria, Tanzania, and Uganda, with more countries rolling out over time. As part of Map With AI, Facebook is releasing our AI-powered mapping tool, called RapiD, to the OSM community. </p>


This, at least, is good. Though it's a repetition of what undoubtedly already exists at Google and other mapping companies. The benefit is that this is open data.
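Facebook hasn't published the model here, but the shape of the task - turn a satellite tile into a candidate road mask that a human mapper then accepts or corrects in OSM - is a standard semantic-segmentation problem. A generic, hedged sketch using a stock torchvision model as a stand-in (not Facebook's network, and the filename is invented):

```python
# Generic "roads from satellite imagery" sketch: run a semantic-segmentation
# model over a tile, take the per-pixel class proposal, queue it for human review.
# A stock torchvision model stands in for Facebook's actual network.
import torch
from torchvision.models.segmentation import fcn_resnet50, FCN_ResNet50_Weights
from PIL import Image

weights = FCN_ResNet50_Weights.DEFAULT
model = fcn_resnet50(weights=weights).eval()
preprocess = weights.transforms()

tile = Image.open("satellite_tile.png").convert("RGB")   # hypothetical input tile
batch = preprocess(tile).unsqueeze(0)

with torch.no_grad():
    logits = model(batch)["out"]          # (1, num_classes, H, W)
mask = logits.argmax(dim=1)               # per-pixel class proposal
# In a pipeline like Map With AI, a mask like this would be vectorised into road
# geometries and handed to a human mapper to accept or reject in the OSM editor.
```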
facebook  ai  maps 
7 weeks ago by charlesarthur
The Metamorphosis • The Atlantic
Henry A. Kissinger, Eric Schmidt, Daniel Huttenlocher:
<p>In the nuclear age, strategy evolved around the concept of deterrence. Deterrence is predicated on the rationality of parties, and the premise that stability can be ensured by nuclear and other military deployments that can be neutralized only by deliberate acts leading to self-destruction; the likelihood of retaliation deters attack. Arms-control agreements with monitoring systems were developed in large part to avoid challenges from rogue states or false signals that might trigger a catastrophic response.

Hardly any of these strategic verities can be applied to a world in which AI plays a significant role in national security. If AI develops new weapons, strategies, and tactics by simulation and other clandestine methods, control becomes elusive, if not impossible. The premises of arms control based on disclosure will alter: Adversaries’ ignorance of AI-developed configurations will become a strategic advantage—an advantage that would be sacrificed at a negotiating table where transparency as to capabilities is a prerequisite. The opacity (and also the speed) of the cyberworld may overwhelm current planning models.

The evolution of the arms-control regime taught us that grand strategy requires an understanding of the capabilities and military deployments of potential adversaries. But if more and more intelligence becomes opaque, how will policy makers understand the views and abilities of their adversaries and perhaps even allies?</p>


Yes, it really is that unindicted war criminal Henry Kissinger (age 96), ex-Google CEO Eric Schmidt (64), and the American academic Daniel Huttenlocher (59). The article's full of vagueisms - unsurprisingly - but the idea of nation states using AI for their defence/attack strategies is quite worrying.
ai  schmidt  google 
9 weeks ago by charlesarthur
No limit: AI poker bot is first to beat professionals at multiplayer game • Nature
Douglas Heaven:
<p>Machines have raised the stakes once again. A superhuman poker-playing bot called Pluribus has beaten top human professionals at six-player no-limit Texas hold’em poker, the most popular variant of the game. It is the first time that an artificial-intelligence (AI) program has beaten elite human players at a game with more than two players.

“While going from two to six players might seem incremental, it’s actually a big deal,” says Julian Togelius at New York University, who studies games and AI. “The multiplayer aspect is something that is not present at all in other games that are currently studied.”

The team behind Pluribus had already built an AI, called Libratus, that had beaten professionals at two-player poker. It built Pluribus by updating Libratus and created a bot that needs much less computing power to play matches. In a 12-day session with more than 10,000 hands, it beat 15 top human players. “A lot of AI researchers didn’t think it was possible to do this” with our techniques, says Noam Brown at Carnegie Mellon University in Pittsburgh, Pennsylvania, and Facebook AI Research in New York, who developed Pluribus with his Carnegie Mellon colleague Tuomas Sandholm.

Other AIs that have mastered human games — such as Libratus and DeepMind’s Go-playing bots — have shown that they are unbeatable in two-player zero-sum matches. In these scenarios, there is always one winner and one loser, and game theory offers a well-defined best strategy.

But game theory is less helpful for scenarios involving multiple parties with competing interests and no clear win–lose conditions — which reflect most real-life challenges.</p>


Will they get kicked out of casinos for card-counting?
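The Nature piece doesn't get into the algorithm, but bots in this lineage are typically built on self-play with regret minimisation. Purely as an illustration of that idea (this is textbook regret matching on rock-paper-scissors, not Pluribus):

```python
# Toy regret matching on rock-paper-scissors - the textbook version of the
# self-play, regret-minimising idea that poker bots build on. Not Pluribus.
import random

ACTIONS = [0, 1, 2]                               # rock, paper, scissors
PAYOFF = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]     # PAYOFF[mine][theirs]

def strategy(regrets):
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    return [p / total for p in positive] if total > 0 else [1 / 3] * 3

regrets = [0.0, 0.0, 0.0]
opponent = [0.4, 0.4, 0.2]                        # a fixed, slightly exploitable opponent

for _ in range(100_000):
    mine = random.choices(ACTIONS, weights=strategy(regrets))[0]
    theirs = random.choices(ACTIONS, weights=opponent)[0]
    # Regret: how much better each alternative action would have done this round.
    for a in ACTIONS:
        regrets[a] += PAYOFF[a][theirs] - PAYOFF[mine][theirs]

print([round(p, 2) for p in strategy(regrets)])   # drifts towards the best response (paper)
```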
ai  machinelearning  poker 
9 weeks ago by charlesarthur
Yep, human workers are listening to recordings from Google Assistant, too • The Verge
:
<p>In the story by VRT NWS, which focuses on Dutch and Flemish speaking Google Assistant users, the broadcaster reviewed a thousand or so recordings, 153 of which had been captured accidentally. A contractor told the publication that he transcribes around 1,000 audio clips from Google Assistant every week. In one of the clips he reviewed he heard a female voice in distress and said he felt that “physical violence” had been involved. “And then it becomes real people you’re listening to, not just voices,” said the contractor.

Tech companies say that sending audio clips to humans to be transcribed is an essential process for improving their speech recognition technology. They also stress that only a small percentage of recordings are shared in this way. A spokesperson for Google told Wired that just 0.2 percent of all recordings are transcribed by humans, and that these audio clips are never presented with identifying information about the user.

However, that doesn’t stop individuals revealing sensitive information in the recording themselves. And companies are certainly not upfront about this transcription process. The privacy policy page for Google Home, for example, does not mention the company’s use of human contractors, or the possibility that Home might mistakenly record users.

These obfuscations could cause legal trouble for the company, says Michael Veale, a technology privacy researcher at the Alan Turing Institute in London. He told Wired that this level of disclosure might not meet the standards set by the EU’s GDPR regulations. “You have to be very specific on what you’re implementing and how,” said Veale. “I think Google hasn’t done that because it would look creepy.”</p>

Guess it's time for Apple to say yes or no to this question, just for completeness. But this certainly backs up why I don't activate any Google Assistant or Alexa devices. Google <a href="https://www.blog.google/products/assistant/more-information-about-our-processes-safeguard-speech-data/">has a blogpost about this</a>, complaining about the worker "leaking confidential Dutch audio data". Sure, but if the data hadn't been there in the first place...
google  ai  privacy  speech 
9 weeks ago by charlesarthur
Samsung shuts down its AI-powered Mall shopping app in India • TechCrunch
Manish Singh:
<p>Samsung has quietly discontinued an app that it built specifically for India, one of its largest markets and where it houses a humongous research and development team. The AI-powered Android app, called Samsung Mall, was positioned to help users identify objects around them and locate them on shopping sites to make a purchase.

The company has shut down the app a year and a half after its launch. Samsung Mall was exclusively available for select company handsets and was launched alongside the Galaxy On7 Prime smartphone. News blog TizenHelp was first to report the development.

At the time of launch, Samsung said the Mall app would complement features of Bixby, the company’s virtual assistant. Bixby already offers a functionality that allows users to identify objects through photos — but does not let them make the purchase.</p>


Amazon had something similar on the Fire Phone. Strange, because it seems like a useful app, yet keeps dying a death.
samsung  ai  bixby  shopping 
9 weeks ago by charlesarthur
Endless AI-generated spam risks clogging up Google’s search results • The Verge
James Vincent:
<p>Just take a look at <a href="http://thismarketingblogdoesnotexist.com/?p=81">this blog post</a> answering the question: “What Photo Filters are Best for Instagram Marketing?” At first glance it seems legitimate, with a bland introduction followed by quotes from various marketing types. But read a little more closely and you realize it references magazines, people, and — crucially — Instagram filters that don’t exist:
<p>You might not think that a mumford brush would be a good filter for an Insta story. Not so, said Amy Freeborn, the director of communications at National Recording Technician magazine. Freeborn’s picks include Finder (a blue stripe that makes her account look like an older block of pixels), Plus and Cartwheel (which she says makes your picture look like a topographical map of a town.</p>


The rest of the site is full of similar posts, covering topics like “How to Write Clickbait Headlines” and “Why is Content Strategy Important?” But every post is AI-generated, right down to the authors’ profile pictures. It’s all the creation of content marketing agency Fractl, who says it’s a demonstration of the “massive implications” AI text generation has for the business of search engine optimization, or SEO.

“Because [AI systems] enable content creation at essentially unlimited scale, and content that humans and search engines alike will have difficulty discerning [...] we feel it is an incredibly important topic with far too little discussion currently,” Fractl partner Kristin Tynski tells The Verge.

To write the blog posts, Fractl used an open source tool named Grover, made by the Allen Institute for Artificial Intelligence. Tynski says the company is not using AI to generate posts for clients, but that this doesn’t mean others won’t.</p>


I'm only slightly surprised nobody has realised this earlier. (Of course the AI-generated blogpost has an AI-generated author pic.) Google must be having meetings about how to tackle it, because it's surely only a few months away. Philip K Dick's world of computer-written newspapers feels very close.
ai  machinelearning  spam  seo 
11 weeks ago by charlesarthur
EU should ban AI-powered citizen scoring and mass surveillance, say experts • The Verge
James Vincent:
<p>The recommendations are part of the EU’s ongoing efforts to establish itself as a leader in so-called “ethical AI.” Earlier this year, it released its first guidelines on the topic, stating that AI in the EU should be deployed in a trustworthy and “human-centric” manner.

The <a href="https://ec.europa.eu/digital-single-market/en/news/policy-and-investment-recommendations-trustworthy-artificial-intelligence">new report </a>offers more specific recommendations. These include identifying areas of AI research that require funding; encouraging the EU to incorporate AI training into schools and universities; and suggesting new methods to monitor the impact of AI. However, the paper is only a set of recommendations at this point, and not a blueprint for legislation.

Notably, the suggestions that the EU should ban AI-enabled mass scoring and limit mass surveillance are some of the report’s relatively few concrete recommendations. (Often, the report’s authors simply suggest that further investigation is needed in this or that area.)

The fear of AI-enabled mass-scoring has developed largely from reports about China’s nascent social credit system.</p>
ai  eu 
11 weeks ago by charlesarthur
This $3.2bn industry could turn millions of surveillance cameras into an army of robot security guards • American Civil Liberties Union
Jay Stanley is the ACLU's senior policy analyst:
<p>Today we’re publishing a report on a $3.2 billion industry building a technology known as “video analytics,” which is starting to augment surveillance cameras around the world and has the potential to turn them into just that kind of nightmarish army of unblinking watchers.

Using cutting-edge, deep learning-based AI, the science is moving so fast that early versions of this technology are already starting to enter our lives. Some of our cars now come equipped with dashboard cameras that can sound alarms when a driver starts to look drowsy. Doorbell cameras today can alert us when a person appears on our doorstep. Cashier-less stores use AI-enabled cameras that monitor customers and automatically charge them when they pick items off the shelf.

In the report, we looked at where this technology has been deployed, and what capabilities companies are claiming they can offer. We also reviewed scores of papers by computer vision scientists and other researchers to see what kinds of capabilities are being envisioned and developed. What we found is that the capabilities that computer scientists are pursuing, if applied to surveillance and marketing, would create a world of frighteningly perceptive and insightful computer watchers monitoring our lives.

Cameras that collect and store video just in case it is needed are being transformed into devices that can actively watch us, often in real time.</p>
camera  video  monitoring  ai  ml 
june 2019 by charlesarthur
Experts: spy used AI-generated face to connect with targets • Associated Press
Raphael Satter:
<p>Katie Jones sure seemed plugged into Washington’s political scene. The 30-something redhead boasted a job at a top think tank and a who’s-who network of pundits and experts, from the centrist Brookings Institution to the right-wing Heritage Foundation. She was connected to a deputy assistant secretary of state, a senior aide to a senator and the economist Paul Winfree, who is being considered for a seat on the Federal Reserve.

But Katie Jones doesn’t exist, The Associated Press has determined. Instead, the persona was part of a vast army of phantom profiles lurking on the professional networking site LinkedIn. And several experts contacted by the AP said Jones’ profile picture appeared to have been created by a computer program.

“I’m convinced that it’s a fake face,” said Mario Klingemann, a German artist who has been experimenting for years with artificially generated portraits and says he has reviewed tens of thousands of such images. “It has all the hallmarks.”

Experts who reviewed the Jones profile’s LinkedIn activity say it’s typical of espionage efforts on the professional networking site, whose role as a global Rolodex has made it a powerful magnet for spies.

“It smells a lot like some sort of state-run operation,” said Jonas Parello-Plesner, who serves as program director at the Denmark-based think tank Alliance of Democracies Foundation and was the target several years ago of an espionage operation that began over LinkedIn.

William Evanina, director of the US National Counterintelligence and Security Center, said foreign spies routinely use fake social media profiles to home in on American targets — and accused China in particular of waging “mass scale” spying on LinkedIn.

“Instead of dispatching spies to some parking garage in the US to recruit a target, it’s more efficient to sit behind a computer in Shanghai and send out friend requests to 30,000 targets,” he said in a written statement.</p>


Amazing story. The face would be generated by a generative adversarial network (GAN). One clue it's a fake: her eyes are different colours. There are others which suggest she's got lizard DNA.
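For anyone who hasn't met one: a GAN's generator is just a network that maps a random noise vector to an image, trained against a discriminator that tries to tell its output from real photos. A toy sketch of the generator half (DCGAN-style and tiny; real photoreal face generators are far larger and more elaborate):

```python
# Toy DCGAN-style generator: random noise vector in, small RGB image out.
# Real photoreal face generators are far bigger; this only shows the basic
# noise-to-image mapping a GAN's generator learns.
import torch
from torch import nn

generator = nn.Sequential(
    nn.ConvTranspose2d(100, 256, kernel_size=4, stride=1, padding=0),  # 1x1 -> 4x4
    nn.BatchNorm2d(256), nn.ReLU(),
    nn.ConvTranspose2d(256, 128, kernel_size=4, stride=2, padding=1),  # 4x4 -> 8x8
    nn.BatchNorm2d(128), nn.ReLU(),
    nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),   # 8x8 -> 16x16
    nn.BatchNorm2d(64), nn.ReLU(),
    nn.ConvTranspose2d(64, 3, kernel_size=4, stride=2, padding=1),     # 16x16 -> 32x32
    nn.Tanh(),
)

noise = torch.randn(1, 100, 1, 1)   # a different noise vector gives a different "person"
fake = generator(noise)             # (1, 3, 32, 32); untrained, so just coloured noise here
print(fake.shape)
```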
ai  photo  espionage 
june 2019 by charlesarthur
AI deepfakes are now as simple as typing whatever you want your subject to say • The Verge
James Vincent:
<p>In the latest example of deepfake technology, researchers have shown off new software that uses machine learning to let users edit the text transcript of a video to add, delete, or change the words coming right out of somebody’s mouth.

<a href="https://www.ohadf.com/projects/text-based-editing/">The work</a> was done by scientists from Stanford University, the Max Planck Institute for Informatics, Princeton University, and Adobe Research, and shows that our ability to edit what people say in videos and create realistic fakes is becoming easier every day.

You can see a number of examples of the system’s output, including an edited version of a famous quotation from Apocalypse Now, with the line “I love the smell of napalm in the morning” changed to “I love the smell of french toast in the morning.”

This work is just at the research stage right now and isn’t available as consumer software, but it probably won’t be long until similar services go public. Adobe, for example, has already shared details on prototype software named VoCo, which lets users edit recordings of speech as easily as a picture, and which was used in this research.</p>


What do we think, a year? Less?
ai  video  programming  deception  deepfake 
june 2019 by charlesarthur
The guy who made a tool to track women in porn videos is sorry • MIT Technology Review
Angela Chen:
<p>There is still no proof that the global system—which allegedly matched women’s social-media photos with images from sites like Pornhub—actually worked, or even existed. Still, the technology is possible and would have had awful consequences. “It’s going to kill people,” says Carrie A. Goldberg, an attorney who specializes in sexual privacy violations and author of the forthcoming book Nobody’s Victim: Fighting Psychos, Stalkers, Pervs, and Trolls. “Some of my most viciously harassed clients have been people who did porn, oftentimes one time in their life and sometimes nonconsensually [because] they were duped into it. Their lives have been ruined because there’s this whole culture of incels that for a hobby expose women who’ve done porn and post about them online and dox them.” (Incels, or “involuntary celibates,” are a misogynistic online subculture of men who claim they are denied sex by women.)

The European Union’s GDPR privacy law prevents this kind of situation. Though the programmer—who posted about the project on the Chinese social network Weibo—originally insisted everything was fine because he didn’t make the information public, just collecting the data is illegal if the women didn’t consent, according to Börge Seeger, a data protection expert and partner at German law firm Neuwerk. These laws apply to any information from EU residents, so they would have held even if the programmer weren’t living in the EU.</p>


GDPR! *empties shot glass*
ai  biometrics  facialrecognition 
june 2019 by charlesarthur
London ethics panel outlines five steps for facial recognition in policing • UKAuthority
Mark Say:
<p>The [independent London Policing Ethics] panel said that facial recognition software should only be deployed by police if the five conditions can be met:

• The overall benefits to public safety must be great enough to outweigh any potential public distrust in the technology.

• It can be evidenced that using the technology will not generate gender or racial bias in policing operations.

• Each deployment must be assessed and authorised to ensure that it is both necessary and proportionate for a specific policing purpose.

• Operators are trained to understand the risks associated with use of the software and understand they are accountable.

• Both the Met and the Mayor’s Office for Policing and Crime develop strict guidelines to ensure that deployments balance the benefits of this technology with the potential intrusion on the public.

London’s police force has carried out 10 trials on the use of the technology, but some have prompted criticisms of misuse. Civil liberties group Liberty focused on trials at the Notting Hill Carnival and questioned whether the algorithm had been tested for bias.</p>


The panel "also highlighted the results of a survey of more than 1,000 Londoners on their attitudes to facial recognition that showed more 57% felt its use by the police is acceptable, and the figure increased to 87% when it is used in searching for serious offenders." Everyone likes intrusive stuff if it's used to catch the bad guys (and gals).
ethics  ai  facialrecognition 
june 2019 by charlesarthur
An AI taught itself to play a video game; for the first time, it's beating humans • The Conversation
Amit Joshi on <a href="https://science.sciencemag.org/cgi/doi/10.1126/science.aau6249">DeepMind's latest</a>:
<p>The Capture the Flag bot from the recent study also began learning from scratch. But instead of playing against its identical clone, a cohort of 30 bots was created and trained in parallel with their own internal reward signal. Each bot within this population would then play together and learn from each other. As David Silver – one of the research scientists involved – notes, AI is beginning to “remove the constraints of human knowledge… and create knowledge itself”.

The learning speed for humans is still much faster than the most advanced deep reinforcement learning algorithms. Both OpenAI’s bots and DeepMind’s AlphaStar (the bot playing StarCraft II) devoured thousands of years’ worth of gameplay before being able to reach a human level of performance. Such training is estimated to cost several millions of dollars. Nevertheless, a self-taught AI capable of beating humans at their own game is an exciting breakthrough that could change how we see machines.

AI is often portrayed replacing or complementing human capabilities, but rarely as a fully-fledged team member, performing the same task as human beings. As these video game experiments involve machine-human collaboration, they offer a glimpse of the future.

Human players of Capture the Flag rated the bots as more collaborative than other humans, but players of DOTA 2 had a mixed reaction to their AI teammates. Some were quite enthusiastic, saying they felt supported and that they learned from playing alongside them.</p>


How long before there's a system which can learn to play any game, and trounce humans at it? In which case, isn't that something like the scary AI, except just limited to video games?
games  ai  artificialintelligence  machinelearning 
may 2019 by charlesarthur
Mona Lisa frown: machine learning brings old paintings and photos to life • TechCrunch
Devin Coldewey:
<p>We can already make a face in one video reflect the face in another in terms of what the person is saying or where they’re looking. But most of these models require a considerable amount of data, for instance a minute or two of video to analyze.

The <a href="https://arxiv.org/abs/1905.08233">new paper by Samsung’s Moscow-based researchers</a>, however, shows that using only a single image of a person’s face, a video can be generated of that face turning, speaking and making ordinary expressions — with convincing, though far from flawless, fidelity.

It does this by frontloading the facial landmark identification process with a huge amount of data, making the model highly efficient at finding the parts of the target face that correspond to the source. The more data it has, the better, but it can do it with one image — called single-shot learning — and get away with it. That’s what makes it possible to take a picture of Einstein or Marilyn Monroe, or even the Mona Lisa, and make it move and speak like a real person.

<img src="https://techcrunch.com/wp-content/uploads/2019/05/monalisa.gif" width="100%" /></p>


Film makers of all stripes will love this. But it's also going to make the fake news of 2016 look like kiddies' play.
ai  fakes  animation  machinelearning 
may 2019 by charlesarthur
Google’s Duplex uses AI to mimic humans (sometimes) • The New York Times
Brian X. Chen and Cade Metz:
<p>“It sounded very real,” Mr. Tran said in an interview after hanging up the call with Google. “It was perfectly human.”

Google later confirmed, to our disappointment, that the caller had been telling the truth: He was a person working in a call center. The company said that about 25% of calls placed through Duplex started with a human, and that about 15% of those that began with an automated system had a human intervene at some point.

We tested Duplex for several days, calling more than a dozen restaurants, and our tests showed a heavy reliance on humans. Among our four successful bookings with Duplex, three were done by people. But when calls were actually placed by Google’s artificially intelligent assistant, the bot sounded very much like a real person and was even able to respond to nuanced questions.

In other words, Duplex, which Google first showed off last year as a technological marvel using AI, is still largely operated by humans. While AI services like Google’s are meant to help us, their part-machine, part-human approach could contribute to a mounting problem: the struggle to decipher the real from the fake, from bogus reviews and online disinformation to bots posing as people.</p>


Forgivable; these are still very early days for this technology. Did you expect you'd be able to say "a machine will be able to make a booking with a restaurant, and it will seem like a human" a couple of years ago?
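Worth combining the two figures in that excerpt: 25% of calls start with a human, and 15% of the automated ones have a human step in at some point, so (assuming the two numbers can simply be combined - Google didn't publish a joint breakdown) a human touches somewhere around a third of Duplex calls:

```python
# Rough combination of the two figures Google gave the NYT. Assumes they can be
# combined this way; Google didn't publish the joint breakdown.
starts_with_human = 0.25
human_intervenes_later = 0.15 * (1 - starts_with_human)
print(f"human involved in ~{starts_with_human + human_intervenes_later:.0%} of calls")  # ~36%
```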
duplex  google  ai  machinelearning  artificialintelligence 
may 2019 by charlesarthur
The Tinder hacker • The Cut
Francesca Mari:
<p>It all started when Sean recruited his close friend and roommate Haley to create a Tinder profile. Haley, in the words of a Tinder user who would soon encounter her, was a “tall, dark, younger, better-looking version of Kim Kardashian.” Together Sean and Haley selected her profile photos — Haley lounging in a tube with a serving of side boob, Haley in shorts leaning on a baseball bat. Sean wanted her to appear seductive but approachable. Once finished, Sean ran two rather mischievous programs.

The first program had her dummy account indiscriminately swipe right on some 800 men. The second program was one that Sean had spent months coding. It paired men who matched with Haley with one another, in the order that they contacted her. A man would send a message thinking he was talking to Haley — he saw her pictures and profile — and instead another dude would receive the message, which, again, would appear to be coming from Haley. When the first dude addressed Haley by name, Sean’s code subbed in the name of the man receiving the message.

As soon as they ran this code, it was off to the races. Conversations streamed in, around 400 of them unfurling between the most unlikely people, the effect something like same-sex Tinder chat roulette.

“There was a certain breed of guy that this really worked on,” Sean told me. “It wasn’t the kind of guy looking for a girlfriend or looking to talk or be casual. It was the guy looking for a hookup.” And those guys cut to the chase, thrilled at how down “Haley” was to sext, thrusting their way through any miscommunication. (Remember, both dudes think the other is Haley.)</p>


I feel that I've seen this before, but it's so splendid that it's worth bringing back.
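Stripped of the Tinder specifics, the second program is just a relay: pair up the men in the order they message the dummy account, forward each message to the partner, and substitute the recipient's name wherever "Haley" is addressed. A hedged sketch of that pairing logic (pure illustration, no Tinder API involved):

```python
# Sketch of the relay described in the piece: men who message the dummy profile
# are paired in arrival order, and their messages are forwarded to each other
# with the profile's name swapped for the recipient's. Illustration only.
waiting = None        # a user who has messaged but has no partner yet
partner_of = {}       # user -> the user their messages get relayed to

def on_first_message(user):
    global waiting
    if waiting is None:
        waiting = user                    # hold until the next man messages in
    else:
        partner_of[user] = waiting
        partner_of[waiting] = user
        waiting = None

def relay(sender, text, profile_name="Haley"):
    recipient = partner_of[sender]
    # When the sender addresses "Haley" by name, sub in the recipient's name.
    return recipient, text.replace(profile_name, recipient)

on_first_message("Tom")
on_first_message("Dave")
print(relay("Tom", "Hey Haley, how's your night going?"))
# -> ('Dave', "Hey Dave, how's your night going?")
```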
ai  hacking  tinder  dating 
may 2019 by charlesarthur
Who to sue when a robot loses your fortune • Bloomberg
Thomas Beardsworth and Nishant Kumar:
<p>It all started over lunch at a Dubai restaurant on March 19, 2017. It was the first time 45-year-old Li met Costa, the 49-year-old Italian who’s often known by peers in the industry as “Captain Magic.” During their meal, Costa described a robot hedge fund his company, London-based Tyndaris Investments, would soon offer to manage money entirely using AI, or artificial intelligence.

Developed by Austria-based AI company 42.cx, the supercomputer named K1 would comb through online sources like real-time news and social media to gauge investor sentiment and make predictions on US stock futures. It would then send instructions to a broker to execute trades, adjusting its strategy over time based on what it had learned.

The idea of a fully automated money manager inspired Li instantly. He met Costa for dinner three days later, saying in an email beforehand that the AI fund “is exactly my kind of thing.”

Over the following months, Costa shared simulations with Li showing K1 making double-digit returns, although the two now dispute the thoroughness of the back-testing. Li eventually let K1 manage $2.5bn — $250m of his own cash and the rest leverage from Citigroup. The plan was to double that over time.

But Li’s affection for K1 waned almost as soon as the computer started trading in late 2017. By February 2018, it was regularly losing money, including over $20m in a single day — Feb. 14 — due to a stop-loss order Li’s lawyers argue wouldn’t have been triggered if K1 was as sophisticated as Costa led him to believe.</p>


Ooh, this will be such fun if it ever reaches court - though as the court date is set for April 2020, I suspect it will get settled before it does.
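The stop-loss at the centre of the dispute is a simple mechanism - sell automatically once the price hits a preset level - which is partly why the case is interesting: the argument is about where a genuinely sophisticated system should have set it. A toy illustration of the mechanism itself (not Tyndaris's system; all numbers invented):

```python
# A stop-loss in its simplest form: sell automatically once the price falls to
# the stop level, capping (and realising) the loss. Numbers are invented.
def check_stop_loss(units, entry_price, stop_price, current_price):
    if current_price <= stop_price:
        loss = (entry_price - current_price) * units
        return f"SELL {units} units, realised loss ≈ ${loss:,.0f}"
    return "HOLD"

print(check_stop_loss(units=1_000_000, entry_price=25.0, stop_price=24.0, current_price=23.5))
# -> SELL 1000000 units, realised loss ≈ $1,500,000
```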
law  ai  finance 
may 2019 by charlesarthur
Why Richland Source built a system for automating high school sports articles (and stopped selling apparel) • Nieman Lab
Christine Schmidt:
<p>after completing a beta phase with seven other news organizations (which Richland Source declined to name) and over 20,000 articles published with zero inaccuracies, the team is trying to get other newsrooms onboard.

What do these articles actually look like? Often, just a headline, “Sports Desk” byline, a sentence, and a bunch of ads. (There’s no mention of the software or robo-writing on the articles themselves, but Allred and Phillips pointed to a featured article Richland Source published last week explaining Lede Ai.) Here are some examples, with screenshots of the shorter ones:

Here’s one highlighted in Lede Ai’s whitepaper; it was the longest one I saw:
Massillon Washington could use an Emily Post tutorial in manners, but its competitive spirit was fine-tuned while punishing Wadsworth 41-19 in Division II Ohio high school football action on Friday night.

The Tigers opened with a 7-0 advantage over the Grizzlies through the first quarter. Massillon Washington’s offense darted to a 24-10 lead over Wadsworth at halftime. The Tigers carried a 27-12 lead into the fourth quarter.

This marked the Grizzlies first loss of the season, as they completed a 12-1 campaign. Massillon sports a 13-0 mark heading to the state semifinals.

The OHSAA releases the state semifinal pairings and locations on Sunday.
</p>

Stories like that are just the most awful wallpaper. The other thing that you come to notice is that sports writeups are almost always about men, for men. They're a sort of literary shed.
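The underlying technique is older than the current AI wave: slot box-score numbers into pre-written sentence templates, with a phrase bank to vary the tone ("punishing", "darted to a lead", and so on). A hedged sketch of that kind of generator - my templates, not Lede Ai's:

```python
# Template-based game recap - the general technique behind automated sports
# stories: choose phrasing from the score line, fill in names and numbers.
# These templates are invented for illustration; they are not Lede Ai's.
import random

def recap(winner, loser, w_score, l_score, division, night):
    margin = w_score - l_score
    verb = random.choice(
        ["punishing", "cruising past", "overpowering"] if margin > 14
        else ["edging", "holding off", "outlasting"]
    )
    return (
        f"{winner} kept its season rolling, {verb} {loser} {w_score}-{l_score} "
        f"in {division} football action on {night}."
    )

print(recap("Massillon Washington", "Wadsworth", 41, 19,
            "Division II Ohio high school", "Friday night"))
```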
sports  robot  ai  journalism 
may 2019 by charlesarthur
Amazing AI generates entire bodies of people who don’t exist • Futurism
Dan Robitzski:
<p>A new deep learning algorithm can generate high-resolution, photorealistic images of people — faces, hair, outfits, and all — from scratch.

The AI-generated models are the most realistic we’ve encountered, and the tech will soon be licensed out to clothing companies and advertising agencies interested in whipping up photogenic models without paying for lights or a catering budget. At the same time, similar algorithms could be misused to undermine public trust in digital media.

The algorithm was developed by DataGrid, a tech company housed on the campus of Japan’s Kyoto University, according to a press release.

In a video showing off the tech, the AI morphs and poses model after model as their outfits transform, bomber jackets turning into winter coats and dresses melting into graphic tees.

<iframe width="560" height="315" src="https://www.youtube.com/embed/8siezzLXbNo" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe> </p>

So that's another group of jobs gone. (Thanks Charles Knight for the link.)
ai  model 
may 2019 by charlesarthur
The promises and perils of the AI-powered airport of the future • Fast Company
Devin Liddell:
<p>Imagine even an early version—informed by cameras, sensors, and an airport network in which every passenger and every bag is a node—that simply develops a basic understanding of a few interrelated data sets. A computer vision system with a dynamic comprehension of who’s at the gate and who’s not, the bags they have and the other people they’re traveling with, and even how these people physically move, can then bring those disparate data sets together to answer the question that matters most: How can we board everyone in the fastest way that never creates a line? The system would also coordinate communications with you and your fellow passengers in ways that are far more personalized than the class- and zone-based boarding routines used today. This future could liberate flyers—and the gate itself—in ways that are difficult to predict. At the very least, airport gates would feature fewer crowded waiting rooms, and passengers would spend more leisure time at airport restaurants and stores—or, even better, less time in the airport overall.

There is a more pessimistic side to this narrative, though. If AI can be used to optimize airport and airline processes, it can be used to re-architect those processes in ways that don’t necessarily benefit passengers, and instead benefit commercial interests. Put simply, AI’s strength at seeing what’s happening could be used to manipulate passengers. That fatigued family with three bored and hungry kids? AI could help ensure they’re funneled through a security checkpoint that’s adjacent to a toy shop or fast-food restaurant where they are more likely to make impulsive purchases. </p>


Plenty more ideas too. Though it doesn't have to be AI, does it? And the facial recognition element worries people.
ai  airport 
april 2019 by charlesarthur
Google quietly disbanded another AI review board following disagreements • WSJ
Parmy Olson:
<p>in late 2018, Google began to wind down another independent panel set up to do the same thing [as the recently short-lived one in the US]—this time for a division in the UK doing AI work related to health care. At the time, Google said it was rethinking that board because of a reorganization in its health-care-focused businesses.

But the move also came amid disagreements between panel members and DeepMind, Google’s UK-based AI research unit, according to people familiar with the matter. Those differences centered on the review panel’s ability to access information about research and products, the binding power of their recommendations and the amount of independence that DeepMind could maintain from Google, according to these people.

A spokeswoman for DeepMind’s health-care unit in the U.K. declined to comment specifically about the board’s deliberations. After the reorganization, the company found that the board, called the Independent Review Panel, was “unlikely to be the right structure in the future.”

… internally, some board members chafed at their inability to review the full extent of the group’s AI research and the unit’s strategic plans, according to the people familiar with the matter. Members of the board weren’t asked to sign nondisclosure agreements about the information they received. Some directors felt that limited the amount of information the company shared with them and thus the board’s effectiveness, according to one person.</p>


Smart story from Olson, who is - you noticed? - now at the WSJ. The next question is, why did Google think that the format which had failed in the UK would work in the US?
google  deepmind  ai  review 
april 2019 by charlesarthur
An algorithm is attempting to block drug deals at UK Wi-Fi kiosks • Engadget
Christine Fisher:
<p>The InLink kiosks installed throughout the UK were meant to replace payphones and provide free calls, ultra-fast WiFi and phone charging. But it wasn't long before they became a hotbed for drug dealing. Rather than do away with the free phone service, British telecom company BT and InLinkUK developed an algorithm to automatically block and disable "antisocial" calls.

The algorithm uses the frequency of attempted and connected calls, their length and distribution and insights provided by police to identify suspicious patterns and phone numbers. It can then automatically block those numbers. It's already been deployed across all of the InLinkUK kiosks.

Before the system was in place, drug dealers reportedly arranged 20,000 sales from just five kiosks in a 15-week period. A separate kiosk was used to facilitate £1.28m in drug sales (about $1.68m). But BT and InLinkUK say less than half a percent of the total calls across the InLink network are associated with antisocial behavior. And the company believes its new algorithm has already solved the problem.</p>


It was so obvious that free phone services would be abused. And now the solution is technology? It won't take long before this is figured out; apart from anything else, there's money to be made, so people will find out how to defeat it. There, at least, is one thing humans do have over machines: the profit motive.
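BT hasn't said how the scoring works, only which signals feed it: the frequency of attempted and connected calls, their length and distribution, plus numbers flagged by police. A hedged sketch of what a crude rule-based version might look like (thresholds and field names are entirely mine):

```python
# Hypothetical rule-of-thumb scorer over a kiosk's call records, using the kinds
# of signals BT describes (call frequency, duration, police intelligence).
# Thresholds and field names are invented for illustration.
from collections import Counter

def looks_suspicious(calls, police_flagged_numbers):
    # calls: list of dicts like {"number": "07700900123", "duration_s": 12}
    per_number = Counter(c["number"] for c in calls)
    short_calls = sum(1 for c in calls if c["duration_s"] < 20)
    flagged = any(c["number"] in police_flagged_numbers for c in calls)

    score = 0
    score += 2 if per_number and max(per_number.values()) > 30 else 0   # same number, over and over
    score += 1 if calls and short_calls / len(calls) > 0.8 else 0       # mostly very short calls
    score += 3 if flagged else 0                                        # known dealing number
    return score >= 3   # above threshold: disable the call / block the number

print(looks_suspicious([{"number": "07700900123", "duration_s": 9}] * 40, set()))  # True
```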
ai  inlink  algorithm  drugs 
april 2019 by charlesarthur
One month, 500,000 face scans: how China is using AI to profile a minority • The New York Times
Paul Mozur:
<p>The facial recognition technology, which is integrated into China’s rapidly expanding networks of surveillance cameras, looks exclusively for Uighurs based on their appearance and keeps records of their comings and goings for search and review. The practice makes China a pioneer in applying next-generation technology to watch its people, potentially ushering in a new era of automated racism.

The technology and its use to keep tabs on China’s 11 million Uighurs were described by five people with direct knowledge of the systems, who requested anonymity because they feared retribution. The New York Times also reviewed databases used by the police, government procurement documents and advertising materials distributed by the AI companies that make the systems.

Chinese authorities already maintain a vast surveillance net, including tracking people’s DNA, in the western region of Xinjiang, which many Uighurs call home. But the scope of the new systems, previously unreported, extends that monitoring into many other corners of the country.
[Photo caption: Shoppers lined up for identification checks outside the Kashgar Bazaar last fall. Members of the largely Muslim Uighur minority have been under Chinese surveillance and persecution for years.]

The police are now using facial recognition technology to target Uighurs in wealthy eastern cities like Hangzhou and Wenzhou and across the coastal province of Fujian, said two of the people. Law enforcement in the central Chinese city of Sanmenxia, along the Yellow River, ran a system that over the course of a month this year screened whether residents were Uighurs 500,000 times.</p>


China is becoming the totalitarian nightmare: using technology to oppress and suppress minorities. It's quite like what the Nazis did to identify Jews in Holland and elsewhere.
china  ai  facialrecognition 
april 2019 by charlesarthur
Exclusive: Google cancels AI ethics board in response to outcry • Vox
Kelsey Piper:
<p>Thursday afternoon, a Google spokesperson told Vox that the company has decided to dissolve the panel, called the Advanced Technology External Advisory Council (ATEAC), entirely. Here is the company’s statement in full:
<p>It’s become clear that in the current environment, ATEAC can’t function as we wanted. So we’re ending the council and going back to the drawing board. We’ll continue to be responsible in our work on the important issues that AI raises, and will find different ways of getting outside opinions on these topics.</p>


The panel was supposed to add outside perspectives to ongoing AI ethics work by Google engineers, all of which will continue. Hopefully, the cancellation of the board doesn’t represent a retreat from Google’s AI ethics work, but a chance to consider how to more constructively engage outside stakeholders.</p>


It was a total AI-wash (we need a better word), and good riddance. The board wouldn't have agreed on anything, and there's no indication Google would have taken any notice of what they said, or if they could have said it publicly. The puzzle is who at Google thought it was a good idea, and picked those people. Many more questions around this.
google  ai  ethics 
april 2019 by charlesarthur
The newest AI-enabled weapon: ‘deep-faking’ photos of the Earth • Nextgov
Patrick Tucker:
<p>Worries about deep fakes—machine-manipulated videos of celebrities and world leaders purportedly saying or doing things that they really didn’t—are quaint compared to a new threat: doctored images of the Earth itself.

China is the acknowledged leader in using an emerging technique called generative adversarial networks to trick computers into seeing objects in landscapes or in satellite images that aren’t there, says Todd Myers, automation lead and Chief Information Officer in the Office of the Director of Technology at the National Geospatial-Intelligence Agency.

“The Chinese are well ahead of us. This is not classified info,” Myers said Thursday at the second annual Genius Machines summit, hosted by Defense One and Nextgov. “The Chinese have already designed; they’re already doing it right now, using GANs—which are generative adversarial networks—to manipulate scenes and pixels to create things for nefarious reasons.”

For example, Myers said, an adversary might fool your computer-assisted imagery analysts into reporting that a bridge crosses an important river at a given point.  

“So from a tactical perspective or mission planning, you train your forces to go a certain route, toward a bridge, but it’s not there. Then there’s a big surprise waiting for you,” he said.</p>


The concern seems a little overblown, but you have to worry about malicious actors, especially with open source.
maps  ai  hacking 
april 2019 by charlesarthur
Google's new external AI ethics council apparently already falling apart • Bloomberg
Mark Bergen, Jeremy Khan and Gerrit de Vynck:
<p>In less than a week, the council is already falling apart, a development that may jeopardize Google’s chance of winning more military cloud-computing contracts.

On Saturday, Alessandro Acquisti, a behavioral economist and privacy researcher, said he won’t be serving on the council. “While I’m devoted to research grappling with key ethical issues of fairness, rights and inclusion in AI, I don’t believe this is the right forum for me to engage in this important work,” Acquisti said on Twitter. He didn’t respond to a request for comment.

On Monday, a group of employees started a petition asking the company to remove another member: Kay Cole James, president of a conservative think tank who has fought against equal-rights laws for gay and transgender people. In less than two hours after it went live, more than 300 staff signed the petition anonymously…

…Some AI experts and activists have also called on Google to remove from the board Dyan Gibbens, the CEO of Trumbull Unmanned, a drone technology company. Gibbens and her co-founders at Trumbull previously worked on U.S. military drones. Using AI for military uses is a major point of contention for some Google employees.

Joanna Bryson, a professor of computer science at the University of Bath, in England, who was appointed to the Google ethics council, said she also had reservations about some of her fellow council members. “Believe it or not, I know worse about one of the other people,” she said on Twitter in response to a post questioning James’ appointment. “I know I have pushed (Google) before on some of their associations and they say they need diversity in order to be convincing to society broadly, e.g. the GOP.”</p>


Couldn't they have had "board splinters" in the headline?
google  ai  board 
april 2019 by charlesarthur
Inside the Google employee backlash against the Heritage Foundation • The Verge
Colin Lecher:
<p>“This group [of outside people chosen for Google's external advisory board on AI] will consider some of Google’s most complex challenges that arise under our AI Principles, like facial recognition and fairness in machine learning, providing diverse perspectives to inform our work,” the company said in an announcement. The board, called the Advanced Technology External Advisory Council (ATEAC), included recognized experts in AI research who had worked in the field for years.

But some members of the new board drew immediate scrutiny, especially Kay Coles James, president of the conservative Heritage Foundation. On social media, some characterized the decision as an attempt to cater to conservatives at the expense of true expertise in the field. By Saturday, one AI expert who was invited to the board had dropped out, vaguely noting that it may not be “the right forum” for the work.

Privately, several Google employees were also livid about the decision to include James, according to sources familiar with the discussions. On internal message boards, employees described James as “intolerant” and the Heritage Foundation as “amazingly wrong” in their policies on topics like climate change, immigration, and, particularly, on issues of LGBTQ equality. A person with James’ views, the employees said, “doesn’t deserve a Google-legitimized platform, and certainly doesn’t belong in any conversation about how Google tech should be applied to the world.”</p>


There's also a <a href="https://medium.com/@against.transphobia/googlers-against-transphobia-and-hate-b1b0a5dbf76">Medium petition by Google employees</a>. The Heritage Foundation is the sort of bonkers institution that could only grow up in the US. Why not ask a group that represents minorities or women, since they'll be at far more risk from any inequity introduced by AI?
google  politics  ai 
april 2019 by charlesarthur
Can we stop AI outsmarting humanity? • The Guardian
Mara Hvistendahl:
<p>[Skype co-founder Jaan] Tallinn warns that any approach to AI safety will be hard to get right. If an AI is sufficiently smart, it might have a better understanding of the constraints than its creators do. Imagine, he said, “waking up in a prison built by a bunch of blind five-year-olds.” That is what it might be like for a super-intelligent AI that is confined by humans.

The theorist Yudkowsky found evidence this might be true when, starting in 2002, he conducted chat sessions in which he played the role of an AI enclosed in a box, while a rotation of other people played the gatekeeper tasked with keeping the AI in. Three out of five times, Yudkowsky – a mere mortal – says he convinced the gatekeeper to release him. His experiments have not discouraged researchers from trying to design a better box, however.

The researchers that Tallinn funds are pursuing a broad variety of strategies, from the practical to the seemingly far-fetched. Some theorise about boxing AI, either physically, by building an actual structure to contain it, or by programming in limits to what it can do. Others are trying to teach AI to adhere to human values. A few are working on a last-ditch off-switch. One researcher who is delving into all three is mathematician and philosopher Stuart Armstrong at Oxford University’s Future of Humanity Institute, which Tallinn calls “the most interesting place in the universe.” (Tallinn has given FHI more than $310,000.)

Armstrong is one of the few researchers in the world who focuses full-time on AI safety. When I met him for coffee in Oxford, he wore an unbuttoned rugby shirt and had the look of someone who spends his life behind a screen, with a pale face framed by a mess of sandy hair. He peppered his explanations with a disorienting mixture of popular-culture references and math. When I asked him what it might look like to succeed at AI safety, he said: “Have you seen the Lego movie? Everything is awesome.”</p>
ai  machinelearning 
april 2019 by charlesarthur
A data scientist designed a social media influencer account that's 100% automated • Buzzfeed News
Katie Notopoulos:
<p>Buetti, a data scientist by trade, decided to use his actual skills and automate the hard work of influencing by writing a program that recruited an audience of 25,000 (by autofollowing their accounts in hopes of getting a follow back), and reposted photographers’ eye-catching photos of New York City for his growing entourage to engage with (“😍🤗🤗🤗great shot💕,” one person commented). Poof: @beautiful.newyorkcity was born — an active, popular, and 100% artificial Instagram account. For Buetti, it’s the perfect solution if you don’t want to actually dedicate time to curating an online following, but still want to score free spaghetti from restaurants seeking publicity. His program even finds restaurant accounts in New York, and sends them direct messages offering to promote them to followers in exchange for a comped meal — and no, it does not disclose that @beautiful.newyorkcity is run by a robot.

Behold the latest chapter in the dark art of social media influencing, which despite being widely plagued with bots and fake engagement, continues to attract real interest from marketers and businesses. Buetti’s account has (at least some) real followers, but the influencing itself is being handled by some code rather than an eager personality. It’s a lifestyle brand generated by something that’s not alive.</p>


It's essentially the logical end state of influencer accounts.
ai  instagram  celebrity 
march 2019 by charlesarthur
Nvidia’s latest AI software turns rough doodles into realistic landscapes • The Verge
James Vincent:
<p>The software generates AI landscapes instantly, and it’s surprisingly intuitive. For example, when a user draws a tree and then a pool of water underneath it, the model adds the tree’s reflection to the pool.

<img src="https://cdn.vox-cdn.com/thumbor/WxTyBlP498x5G-Tf2tJoFNxlCDg=/1600x0/filters:no_upscale()/cdn.vox-cdn.com/uploads/chorus_asset/file/15972196/nvidia_gaugan_gif.gif" width="100%" />

Demos like this are very entertaining, but they don’t do a good job of highlighting the limitations of these systems. The underlying technology can’t just paint in any texture you can think of, and Nvidia has chosen to show off imagery it handles particularly well.

For example, generating fake grass and water is relatively easy for GANs [generative adversarial networks, a form of neural network] because the visual patterns involved are unstructured. Generating pictures of buildings and furniture, by comparison, is much trickier, and the results are much less realistic. That’s because these objects have a logic and structure to them that humans are sensitive to. GANs can overcome this sort of challenge, as we’ve seen with AI-generated faces, but it takes a lot of extra effort.

Nvidia didn’t say if it has any plans to turn the software into an actual product, but it suggests that tools like this could help “everyone from architects and urban planners to landscape designers and game developers” in the future.</p>
ai  art  images  gan  neuralnet 
march 2019 by charlesarthur
Are robots competing for your job? • The New Yorker
Jill Lepore is in caustic form, reviewing a number of books:
<p>The old robots were blue-collar workers, burly and clunky, the machines that rusted the Rust Belt. But, according to the economist Richard Baldwin, in “The Globotics Upheaval: Globalization, Robotics, and the Future of Work” (Oxford), the new ones are “white-collar robots,” knowledge workers and quinoa-and-oat-milk globalists, the machines that will bankrupt Brooklyn. Mainly, they’re algorithms. Except when they’re immigrants. Baldwin calls that kind “remote intelligence,” or R.I.: they’re not exactly robots but, somehow, they fall into the same category. They’re people from other countries who can steal your job without ever really crossing the border: they just hop over, by way of the Internet and apps like Upwork, undocumented, invisible, ethereal.

Between artificial intelligence and remote intelligence, Baldwin warns, “this international talent tidal wave is coming straight for the good, stable jobs that have been the foundation of middle-class prosperity in the US and Europe, and other high-wage economies.” Change your Wi-Fi password. Clear your browser history. Ask H.R. about early retirement. The globots are coming.

How can you know if you’re about to get replaced by an invading algorithm or an augmented immigrant? “If your job can be easily explained, it can be automated,” Anders Sandberg, of Oxford’s Future of Humanity Institute, tells Oppenheimer. “If it can’t, it won’t.” (Rotten luck for people whose job description is “Predict the future.”) Baldwin offers three-part advice: (1) avoid competing with A.I. and R.I.; (2) build skills in things that only humans can do, in person; and (3) “realize that humanity is an edge not a handicap.” What all this means is hard to say, especially if you’ve never before considered being human to be a handicap. </p>


It's not a short piece, but it is very fine.
robots  ai  jobs 
march 2019 by charlesarthur
How artificial intelligence is changing science • Quanta Magazine
Rachel Suggs:
<p>In a <a href="https://arxiv.org/pdf/1812.01114.pdf">paper</a> published in December in Astronomy & Astrophysics, Schawinski and his ETH Zurich colleagues Dennis Turp and Ce Zhang used generative modeling to investigate the physical changes that galaxies undergo as they evolve. (The software they used treats the latent space somewhat differently from the way a generative adversarial network [GAN] treats it, so it is not technically a GAN, though similar.) Their model created artificial data sets as a way of testing hypotheses about physical processes. They asked, for instance, how the “quenching” of star formation — a sharp reduction in formation rates — is related to the increasing density of a galaxy’s environment.

For [Galaxy Zoo creator Kevin] Schawinski, the key question is how much information about stellar and galactic processes could be teased out of the data alone. “Let’s erase everything we know about astrophysics,” he said. “To what degree could we rediscover that knowledge, just using the data itself?”

First, the galaxy images were reduced to their latent space; then, Schawinski could tweak one element of that space in a way that corresponded to a particular change in the galaxy’s environment — the density of its surroundings, for example. Then he could re-generate the galaxy and see what differences turned up. “So now I have a hypothesis-generation machine,” he explained. “I can take a whole bunch of galaxies that are originally in a low-density environment and make them look like they’re in a high-density environment, by this process.”  Schawinski, Turp and Zhang saw that, as galaxies go from low- to high-density environments, they become redder in color, and their stars become more centrally concentrated. This matches existing observations about galaxies, Schawinski said. The question is why this is so.

The next step, Schawinski says, has not yet been automated: “I have to come in as a human, and say, ‘OK, what kind of physics could explain this effect?’”</p>
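
The "hypothesis-generation machine" loop is simple enough to sketch: encode a galaxy image into the model's latent space, nudge it along the direction that tracks environment density, decode it again, and compare. A minimal illustration in Python - my own sketch, not the ETH Zurich code; the encoder, decoder and density axis are all assumed to come from a trained model:
<pre>
import numpy as np

def shift_environment(image, encoder, decoder, density_axis, delta=1.0):
    """Encode a galaxy image, nudge it along the latent direction that tracks
    environment density, then regenerate it. encoder/decoder are the two halves
    of a trained generative model; density_axis is a unit vector in latent
    space. All three are assumptions for illustration."""
    z = encoder(image)                      # image -> latent vector
    z_shifted = z + delta * density_axis    # move towards "high density"
    return decoder(z_shifted)               # latent vector -> regenerated image

def colour_and_concentration(img):
    """Toy summary statistics for an RGB galaxy image of shape (H, W, 3)."""
    redness = img[..., 0].mean() - img[..., 2].mean()            # red minus blue
    h, w = img.shape[:2]
    centre = img[h // 4: 3 * h // 4, w // 4: 3 * w // 4].mean()  # central region
    return redness, centre / img.mean()

# Usage, given a trained model: regenerate the same galaxy as if it sat in a
# denser environment, then check whether it comes out redder and more compact.
# before = colour_and_concentration(image)
# after  = colour_and_concentration(shift_environment(image, enc, dec, axis, 2.0))
</pre>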


If you'd forgotten Galaxy Zoo, it was a <a href="https://www.theguardian.com/technology/2009/jan/15/internet-astronomy">crowdsourcing method of cataloguing galaxies</a>, launched 12 years ago. Now, the article says, you'd get it done by an AI system in an afternoon.
ai  data  science 
march 2019 by charlesarthur
Facial recognition's 'dirty little secret': millions of online photos scraped without consent • NBC News
Olivia Solon:
<p>“This is the dirty little secret of AI training sets. Researchers often just grab whatever images are available in the wild,” said NYU School of Law professor Jason Schultz.

The latest company to enter this territory was IBM, which in January released a collection of nearly a million photos that were taken from the photo hosting site Flickr and coded to describe the subjects’ appearance. IBM promoted the collection to researchers as a progressive step toward reducing bias in facial recognition.

But some of the photographers whose images were included in IBM’s dataset were surprised and disconcerted when NBC News told them that their photographs had been annotated with details including facial geometry and skin tone and may be used to develop facial recognition algorithms. (NBC News obtained IBM’s dataset from a source after the company declined to share it, saying it could be used only by academic or corporate research groups.)

“None of the people I photographed had any idea their images were being used in this way,” said Greg Peverill-Conti, a Boston-based public relations executive who has more than 700 photos in IBM’s collection, known as a “training dataset.”

“It seems a little sketchy that IBM can use these pictures without saying anything to anybody,” he said.

John Smith, who oversees AI research at IBM, said that the company was committed to “protecting the privacy of individuals” and “will work with anyone who requests a URL to be removed from the dataset.”

Despite IBM’s assurances that Flickr users can opt out of the database, NBC News discovered that it’s almost impossible to get photos removed. IBM requires photographers to email links to photos they want removed, but the company has not publicly shared the list of Flickr users and photos included in the dataset, so there is no easy way of finding out whose photos are included. IBM did not respond to questions about this process.</p>


Solon is one of the best technology journalists out there, with a consistent run of great stories.
ai  dataset  flickr  copyright 
march 2019 by charlesarthur
Nearly half of all ‘AI startups’ are cashing in on hype • Forbes
Parmy Olson:
<p>a new report makes the surprising claim that 40% of European firms that are classified as an “AI startup” don’t exploit the field of study in any material way for their business.

Out of 2,830 startups in Europe that were classified as being AI companies, only 1,580 accurately fit that description, according to the eye-opening stat on page 99 of <a href="https://www.mmcventures.com/wp-content/uploads/2019/02/The-State-of-AI-2019-Divergence.pdf">a new report from MMC</a>, a London-based venture capital firm. In many cases the label, which refers to computer systems that can perform tasks normally requiring human intelligence, was simply wrong.

“We looked at every company, their materials, their product, the website and product documents,” says David Kelnar, head of research for MMC, which has £300m ($400m) under management and a portfolio of 34 companies. “In 40% of cases we could find no mention of evidence of AI.” In such cases, he added, “companies that people assume and think are AI companies are probably not.”</p>

It's a constant cycle: beginning in the 1990s, companies would say (it helps if you imagine it in the voice of Ralph, the useless kid from The Simpsons) "we're an internet company!", then "we're a mobile company!", and now "we're an AI company!" Doesn't make it so.
startup  ai 
march 2019 by charlesarthur
Here’s how we’ll know an AI is conscious • Nautilus
Joel Frohlich:
<p>It’s worth wondering, though, how a person or machine devoid of experience could reflect on experience it doesn’t have. In <a href="https://youtu.be/zgbfoLdrjVI?t=2702">an episode of the “Making Sense”</a> (formerly known as “Waking Up”) podcast with neuroscientist and author Sam Harris, Chalmers addressed this puzzle. “I don’t think it’s particularly hard to at least conceive of a system doing this,” Chalmers told Harris. “I mean, I’m talking to you now, and you’re making a lot of comments about consciousness that seem to strongly suggest that you have it. Still, I can at least entertain the idea that you’re not conscious and that you’re a zombie who’s in fact just making all these noises without having any consciousness on the inside.”

This is not a strictly academic matter—if Google’s DeepMind develops an AI that starts asking, say, why the color red feels like red and not something else, there are only a few possible explanations. Perhaps it heard the question from someone else. It’s possible, for example, that an AI might learn to ask questions about consciousness simply by reading papers about consciousness. It also could have been programmed to ask that question, like a character in a video game, or it could have burped the question out of random noise. Clearly, asking questions about consciousness does not prove anything per se. But could an AI zombie formulate such questions by itself, without hearing them from another source or belching them out from random outputs? To me, the answer is clearly no. If I’m right, then we should seriously consider that an AI might be conscious if it asks questions about subjective experience unprompted. Because we won’t know if it’s ethical to unplug such an AI without knowing if it’s conscious, we better start listening for such questions now.</p>


The article goes into more depth, but this is quite the question for the modern age.
artificialintelligence  ai  machinelearning  consciousness 
march 2019 by charlesarthur
Microsoft Excel will now let you snap a picture of a spreadsheet and import it • The Verge
Tom Warren:
<p>Microsoft is adding a very useful feature to its Excel mobile apps for iOS and Android. It allows Excel users to take a photo of a printed data table and convert it into a fully editable table in the app. This feature is rolling out initially in the Android Excel app, before making its way to iOS soon. Microsoft is using artificial intelligence to implement this feature, with image recognition so that Excel users don’t have to manually input hardcopy data. The feature will be available to Microsoft 365 users.

<img src="https://cdn.vox-cdn.com/thumbor/my3R0rfIZ9_lO_a05Te28l-yTG4=/1400x0/filters:no_upscale()/cdn.vox-cdn.com/uploads/chorus_asset/file/14949387/M365_Feb_update_5b.gif" width="50%" /></p>


Very fun - though I'd have thought its biggest use will be converting PDFs, or grabbing information out of books to make it more useful.
microsoft  ai  excel 
march 2019 by charlesarthur
A philosopher argues: AI can’t be an artist • MIT Technology Review
Sean Dorrance Kelly, who is a philosophy professor at Harvard:
<p>Human creative achievement, because of the way it is socially embedded, will not succumb to advances in artificial intelligence. To say otherwise is to misunderstand both what human beings are and what our creativity amounts to.

This claim is not absolute: it depends on the norms that we allow to govern our culture and our expectations of technology. Human beings have, in the past, attributed great power and genius even to lifeless totems. It is entirely possible that we will come to treat artificially intelligent machines as so vastly superior to us that we will naturally attribute creativity to them. Should that happen, it will not be because machines have outstripped us. It will be because we will have denigrated ourselves…

…my argument is not that the creator’s responsiveness to social necessity must be conscious for the work to meet the standards of genius. I am arguing instead that we must be able to interpret the work as responding that way. It would be a mistake to interpret a machine’s composition as part of such a vision of the world. The argument for this is simple.

Claims like Kurzweil’s that machines can reach human-level intelligence assume that to have a human mind is just to have a human brain that follows some set of computational algorithms—a view called computationalism. But though algorithms can have moral implications, they are not themselves moral agents. We can’t count the monkey at a typewriter who accidentally types out Othello as a great creative playwright. If there is greatness in the product, it is only an accident.</p>


Your long read for today. May require registration.
ai  philosophy  machinelearning 
february 2019 by charlesarthur
Music created by artificial intelligence is better than you think • Medium
Stuart Dredge:
<p>Can an A.I. create original music? Absolutely. Can it create original music better than a human can? Well, it depends which human you’re comparing the A.I.’s music to, for a start.

Human-created music already spans everything from the sublime to the unlistenable. While an A.I. may not be able to out-Adele Adele (or Aretha Franklin, or Joni Mitchell) with a timeless song and performance, it can compose a compelling melody for a YouTube video, mobile game, or elevator journey faster, cheaper, and almost as well as a human equivalent. In these scenarios, it’s often the “faster” and “cheaper” parts that matter most to whoever’s paying.

The quality of A.I. music is improving in leaps and bounds as the technology becomes more sophisticated. In January 2017, Australian A.I.-music startup Popgun could listen to a human playing piano and respond with a melody that could come next; by July 2018, it could compose and play piano, bass, and drums together as a backing track for a human’s vocals.

Popgun is just one of a number of technology startups exploring the potential of A.I. and what it could mean for humans — both professional musicians and those of us who can barely bang a tambourine in time alike. Startups include Jukedeck, Amper Music, Aiva, WaveAI, Melodrive, Amadeus Code, Humtap, HumOn, AI Music, Mubert, Endel, and Boomy, while teams from Google, Sony, IBM, and Facebook are also looking at what A.I. music can do now and what it could do in the future.</p>


As he points out, it's a really quick way to get corporate music or YouTube vlog backing tracks. As much as anything, you could use it to seed something which you then improve.
neuralnetwork  machinelearning  ai  artificialintelligence  music 
february 2019 by charlesarthur
New AI fake text generator may be too dangerous to release, say creators • The Guardian
Alex Hern:
<p>Feed it the first few paragraphs of a Guardian story about Brexit, and its output is plausible newspaper prose, replete with “quotes” from Jeremy Corbyn, mentions of the Irish border, and answers from the prime minister’s spokesman.

One such, completely artificial, paragraph reads: “Asked to clarify the reports, a spokesman for May said: ‘The PM has made it absolutely clear her intention is to leave the EU as quickly as is possible and that will be under her negotiating mandate as confirmed in the Queen’s speech last week.’”

From a research standpoint, GPT2 is groundbreaking in two ways. One is its size, says Dario Amodei, OpenAI’s research director. The models “were 12 times bigger, and the dataset was 15 times bigger and much broader” than the previous state-of-the-art AI model. It was trained on a dataset containing about 10m articles, selected by trawling the social news site Reddit for links with more than three votes. The vast collection of text weighed in at 40 GB, enough to store about 35,000 copies of Moby Dick.

The amount of data GPT2 was trained on directly affected its quality, giving it more knowledge of how to understand written text. It also led to the second breakthrough. GPT2 is far more general purpose than previous text models.</p>


They're not releasing it because they haven't yet figured out all the ways it might be used maliciously. Echoes the <a href="https://en.wikipedia.org/wiki/Asilomar_Conference_on_Recombinant_DNA">moratorium on genetic engineering of bacteria in the 1970s</a>. But the first time I've seen it for a neural network.

I mean, imagine it allied to the next system…
ai  writing 
february 2019 by charlesarthur
Thispersondoesnotexist.com is face-generating AI at its creepiest • The Next Web
Tristan Greene:
<p>Nvidia is a company most lauded for its impressive graphics cards. But in the world of machine learning, it’s one of the most ingenious companies using deep learning today. A couple of years back TNW reported on a new generative adversarial network (GAN) the company developed. At the time, it was an amazing example of how powerful deep learning had become.

This was cutting edge technology barely a year ago. Today, you can use it on your phone. Just point your web browser to “thispersondoesnotexist.com” and voila: the next time your grandmother asks when you’re going to settle down with someone nice, you can conjure up a picture to show them.</p>


It really is pretty amazing. The <a href="https://arxiv.org/pdf/1812.04948.pdf">paper explaining it</a> is remarkable. Try it: <a href="http://Thispersondoesnotexist.com">Thispersondoesnotexist.com</a>.
ai  deception  gan 
february 2019 by charlesarthur
Tracking sanctions-busting ships on the high seas • BBC News
Chris Baraniuk:
<p>For a long time, being out at sea meant being out of sight and out of reach. And all kinds of shenanigans went on as a result - countries secretly selling oil and other goods to countries they're not supposed to under international sanctions rules, for example, not to mention piracy and kidnapping.

The problem is that captains can easily switch off the current way of tracking ships, called the Automatic Identification System (AIS), hiding their location.

But now thousands of surveillance satellites have been launched into space, and artificial intelligence (AI) is being applied to the images they take. There's no longer anywhere for such ships to hide.

Samir Madani, co-founder of TankerTrackers.com, says his firm's satellite imagery analysis has identified Iranian tankers moving in and out of port, despite US sanctions restricting much of the country's oil exports. He's watched North Korea - which is limited by international rules to 500,000 barrels of refined oil every year - taking delivery of fuel via ship-to-ship transfers on the open ocean.

Turning off the AIS transponders that broadcast a ship's position, course and speed, is no longer a guarantee of anonymity.

His firm can even ascertain what cargo a ship is carrying - and how much - just by looking at its shadow on the water, says Mr Madani.</p>


<a href="https://tankertrackers.com">Tankertrackers</a> is pretty cheap if you were into analysis of oil supply lines - $299 per year.
ai  data  oil 
february 2019 by charlesarthur
IBM AI fails to beat human debating champion • Engadget
Saqib Shah:
<p>Champion debater Harish Natarajan triumphed in a live showdown against IBM's Miss Debater AI at the company's Think Conference in San Francisco on Monday. The 2012 European Debate winner and IBM's black monolith exchanged quick retorts on pre-school subsidies for 25 minutes before the crowd hailed Natarajan the victor.

Each side was given 15 minutes to prep for the clash, after which they presented a four-minute opening statement, a four-minute rebuttal, and a two-minute summary. The 700-strong audience, meanwhile, was comprised of top debaters from Bay Area schools and more than a hundred journalists.

Miss Debater (formerly known as Project Debater) pulled arguments from its database of 10 billion sentences taken from newspapers and academic journals. A female voice emanating from the human-sized black box spouted its answers, while three blue balls floated around its display.

The face-off was the latest event in IBM's "grand challenge" series pitting humans against its intelligent machines. In 1996, its computer system beat chess grandmaster Garry Kasparov, though the Russian later accused the IBM team of cheating, something that the company denies to this day - he later retracted some of his allegations. Then, in 2011, its Watson supercomputer trounced two record-winning Jeopardy! contestants.

In the lead-up to Monday's bout, Natarajan suggested that debating may prove a harder battleground for AI than Go and video games. "Debating is...more complicated for a machine than any of those…" he wrote in a LinkedIn post.</p>


Well I'd hope that debating was tougher than just stringing sentences together. Though I doubt Natarajan has any real idea of how hard Go or Dota 2 are for a machine or a human at the very top level.
ibm  ai  machinelearning  debate 
february 2019 by charlesarthur
Uber releases Ludwig, an open source AI 'toolbox' built on top of TensorFlow • VentureBeat
Kyle Wiggers:
<p>Want to dive earnestly into artificial intelligence (AI) development, but find the programming piece of it intimidating? Not to worry — Uber has your back. The ride-hailing giant has <a href="https://eng.uber.com/introducing-ludwig/">debuted Ludwig</a>, an open source “toolbox” built on top of Google’s TensorFlow framework that allows users to train and test AI models without having to write code.

Uber says Ludwig is the culmination of two years’ worth of work to streamline the deployment of AI systems in applied projects and says it has been tapping the tool suite internally for tasks like extracting information from driver licenses, identifying points of interest during conversations between driver-partners and riders, predicting food delivery times, and more.

“Ludwig is unique in its ability to help make deep learning easier to understand for non-experts and enable faster model improvement iteration cycles for experienced machine learning developers and researchers alike,” Uber wrote in a blog post. “By using Ludwig, experts and researchers can simplify the prototyping process and streamline data processing so that they can focus on developing deep learning architectures rather than data wrangling.”</p>


You have to want to dive earnestly, because it's not just "here's some pictures, this is what they are, off you go".
ai  uber 
february 2019 by charlesarthur
YouTube attempts to tame self-created conspiracy monster • NY Mag
Madison Malone Kircher:
<p>David Hogg… survived the Parkland school shooting that left 17 of his classmates and teachers dead, only to have to endure viral videos peddling a conspiracy that he was not a high schooler, but rather a paid crisis actor. Following the shooting, the video of Hogg spiked to the No. 1 trending spot on YouTube before the platform finally took it down. A different video from the same user purporting to show Hogg “forgetting his lines” was left up even after the other video was removed by YouTube. (It has since also been deleted.) This week, Valentine’s Day will mark one year since the Parkland shooting.

[Former YouTube engineer Guillaume] Chaslot wrote [on Twitter] that YouTube has two options when it comes to curbing conspiracy-theory videos: that “people spend more time on round earth videos” or that the company “change the AI.” “YouTube’s economic incentive is for solution 1,” he continued. “After 13 years, YouTube made the historic choice to go towards 2.”

It feels like we’re giving YouTube way too much credit here [for downplaying conspiracy videos]. YouTube didn’t have to give Alex Jones, a man who claims the shooting at Sandy Hook didn’t happen, a platform for as long as it did. (YouTube finally banned Jones in August 2018.) Just like (before January) it didn’t have to let people continue posting scientifically debunked schlock about how vaccines cause autism just because those videos technically weren’t violating the rules. The company isn’t going with option two at great cost to its bottom line. The company is going with option two because the cost of people calling it out for going with option one for so long is becoming untenable.</p>
youtube  ai  machinelearning  conspiracy 
february 2019 by charlesarthur
Advancing research on fake audio detection • Google Blog
Daisy Stanton is a software engineer at Google AI:
<p>we’re keenly aware of the risks this [speech generation] technology can pose if used with the intent to cause harm. Malicious actors may synthesize speech to try to fool voice authentication systems, or they may create forged audio recordings to defame public figures. Perhaps equally concerning, public awareness of "deep fakes" (audio or video clips generated by deep learning models) can be exploited to manipulate trust in media: as it becomes harder to distinguish real from tampered content, bad actors can more credibly claim that authentic data is fake.

We're taking action. When we launched the Google News Initiative last March, we committed to releasing datasets that would help advance state-of-the-art research on fake audio detection.  Today, we're delivering on that promise: Google AI and Google News Initiative have partnered to create a body of synthetic speech containing thousands of phrases spoken by our deep learning TTS models. These phrases are drawn from English newspaper articles, and are spoken by 68 synthetic "voices" covering a variety of regional accents.  

We're making this dataset available to all participants in the independent, externally-run 2019 ASVspoof challenge. This open challenge invites researchers all over the globe to submit countermeasures against fake (or "spoofed") speech, with the goal of making automatic speaker verification (ASV) systems more secure. By training models on both real and computer-generated speech, ASVspoof participants can develop systems that learn to distinguish between the two. The results will be announced in September at the 2019 Interspeech conference in Graz, Austria.</p>


Another arms race, or maybe speech (voice?) race.
voice  ai  Deepfake 
february 2019 by charlesarthur
Why CAPTCHAs have gotten so difficult • The Verge
Josh Dzieza:
<p>Recently there have been efforts to develop game-like CAPTCHAs, tests that require users to rotate objects to certain angles or move puzzle pieces into position, with instructions given not in text but in symbols or implied by the context of the game board. The hope is that humans would understand the puzzle’s logic but computers, lacking clear instructions, would be stumped. Other researchers have tried to exploit the fact that humans have bodies, using device cameras or augmented reality for interactive proof of humanity.

The problem with many of these tests isn’t necessarily that bots are too clever — it’s that humans suck at them. And it’s not that humans are dumb; it’s that humans are wildly diverse in language, culture, and experience. Once you get rid of all that stuff to make a test that any human can pass, without prior training or much thought, you’re left with brute tasks like image processing, exactly the thing a tailor-made AI is going to be good at.

“The tests are limited by human capabilities,” Polakis says. “It’s not only our physical capabilities, you need something that [can] cross cultural, cross language. You need some type of challenge that works with someone from Greece, someone from Chicago, someone from South Africa, Iran, and Australia at the same time. And it has to be independent from cultural intricacies and differences. You need something that’s easy for an average human, it shouldn’t be bound to a specific subgroup of people, and it should be hard for computers at the same time. That’s very limiting in what you can actually do. And it has to be something that a human can do fast, and isn’t too annoying.”

Figuring out how to fix those blurry image quizzes quickly takes you into philosophical territory: what is the universal human quality that can be demonstrated to a machine, but that no machine can mimic? What is it to be human?</p>


Really it comes down to our tendency to dither when we don't know. Or else be too certain when we don't know. Unfortunately, machines can copy that too.
ai  captcha  machinelearning 
february 2019 by charlesarthur
Robots have already mastered games like chess and Go. Now they’re coming for Jenga • The Washington Post
Peter Holley:
<p>AI long ago mastered chess, the Chinese board game Go and even the Rubik’s cube, which it managed to solve in just 0.38 seconds.

Now machines have a new game that will allow them to humiliate humans: Jenga, the popular game — and source of melodramatic 1980s commercials — in which players strategically remove pieces from an increasingly unstable tower of 54 blocks, placing each one on top until the entire structure collapses.

A newly released video from MIT shows a robot developed by the school’s engineers playing the game with surprising precision. The machine is equipped with a soft-pronged gripper, a force-sensing wrist cuff and an external camera, allowing the robot to perceive the tower’s vulnerabilities the way a human might, according to Alberto Rodriguez, the Walter Henry Gale career development assistant professor in the Department of Mechanical Engineering at MIT.

“Unlike in more purely cognitive tasks or games such as chess or Go, playing the game of Jenga also requires mastery of physical skills such as probing, pushing, pulling, placing, and aligning pieces,” Rodriguez said <a href="http://news.mit.edu/2019/robot-jenga-0130">in a statement released by the school</a>. “It requires interactive perception and manipulation, where you have to go and touch the tower to learn how and when to move blocks.”

“This is very difficult to simulate, so the robot has to learn in the real world, by interacting with the real Jenga tower,” he added.</p>


These things are really ruining party games.
jenga  ai  machinelearning 
february 2019 by charlesarthur
An AI crushed two human pros at StarCraft—but it wasn’t a fair fight • Ars Technica
Timothy Lee:
<p><img src="https://cdn.arstechnica.net/wp-content/uploads/2019/01/SCII-BlogPost-Fig09.width-1500.png" width="100%" />

As this chart demonstrates, top StarCraft players can issue instructions to their units very quickly. Grzegorz "MaNa" Komincz averaged 390 actions per minute (more than six actions per second!) over the course of his games against AlphaStar.  But of course, a computer program can easily issue thousands of actions per minute, allowing it to exert a level of control over its units that no human player could match.

To avoid that, DeepMind says it put a hard cap on the number of actions per minute AlphaStar could make. "We set a maximum of 600 APMs over 5-second periods, 400 over 15-second periods, 320 over 30-second periods, and 300 over 60-second period," wrote DeepMind researcher Oriol Vinyals in a reddit AMA following the demonstration.

But as other redditors quickly pointed out, five seconds is a long time in a StarCraft game. These limits seem to imply that AlphaStar could take 50 actions in a single second or 15 actions per second for three seconds.

More importantly, AlphaStar has the ability to make its clicks with surgical precision using an API, whereas human players are constrained by the mechanical limits of computer mice. And if you watch a pro like Komincz play, you'll see that the number of raw actions often far exceeds the number of meaningful actions.

For example, if a human player is guiding a single unit on an important mission, he will often issue a series of "move" commands along the unit's current trajectory. Each command barely changes the unit's path, but, if the human player has already selected the unit, it takes hardly any time to click more than once. But most of these commands aren't strictly necessary; an AI like AlphaStar could easily figure out the unit's optimal route and then issue a much smaller number of move commands to achieve the same result.

So limiting the raw number of actions an AI can take to that of a typical human does not necessarily mean that the number of meaningful actions will be remotely comparable.</p>
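
For what it's worth, the cap DeepMind describes is just a set of overlapping sliding-window rate limits, and it's easy to see how burst-friendly they are. A toy sketch (my own Python, not DeepMind's code; the numbers are the ones Vinyals quoted):
<pre>
from collections import deque

# DeepMind's stated caps: (window in seconds, max APM averaged over that window).
# 600 APM over 5 seconds works out to 50 actions allowed in any 5-second window.
APM_CAPS = [(5, 600), (15, 400), (30, 320), (60, 300)]

class ApmLimiter:
    """Sliding-window version of the limits described in the reddit AMA."""

    def __init__(self, caps=APM_CAPS):
        # Convert each cap to a maximum number of actions per window.
        self.caps = [(window, apm * window / 60.0) for window, apm in caps]
        self.history = deque()  # timestamps of actions already issued

    def allow(self, t):
        """Return True if an action at time t (seconds) stays within every cap."""
        for window, max_actions in self.caps:
            recent = sum(1 for ts in self.history if ts > t - window)
            if recent + 1 > max_actions:
                return False
        self.history.append(t)
        return True

# A 50-click burst crammed into a single second passes every window:
limiter = ApmLimiter()
print(all(limiter.allow(i / 50.0) for i in range(50)))  # True - far beyond human clicking
</pre>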


Notice the way this assertion slides past the realities here. Computers are going to be better at doing lots of things really fast; the human advantage is meant to be the capability to think strategically about what things to do. That strategic advantage has been ceded to AlphaStar, and so people complain about its speed.

How long before these systems are running defence computers, determining and carrying out attack plans?
ai  alphastar  deepmind 
january 2019 by charlesarthur
AlphaStar: mastering the real-time strategy game StarCraft II • DeepMind
<p>Games have been used for decades as an important way to test and evaluate the performance of artificial intelligence systems. As capabilities have increased, the research community has sought games with increasing complexity that capture different elements of intelligence required to solve scientific and real-world problems. In recent years, StarCraft, considered to be one of the most challenging Real-Time Strategy (RTS) games and one of the longest-played esports of all time, has emerged by consensus as a “grand challenge” for AI research.

Now, we introduce our StarCraft II program AlphaStar, the first Artificial Intelligence to defeat a top professional player. In a series of test matches held on 19 December, AlphaStar decisively beat Team Liquid’s Grzegorz "MaNa" Komincz, one of the world’s strongest professional StarCraft players, 5-0, following a successful benchmark match against his team-mate Dario “TLO” Wünsch. The matches took place under professional match conditions on a competitive ladder map and without any game restrictions…

…StarCraft II, created by Blizzard Entertainment, is set in a fictional sci-fi universe and features rich, multi-layered gameplay designed to challenge human intellect. Along with the original title, it is among the biggest and most successful games of all time, with players competing in esports tournaments for more than 20 years.</p>


Tons of links and replays to watch here. I watched the latest Star Trek: Discovery series on Netflix and kept thinking, as people shouted orders during (stupid) space battles, "you'd have long since handed this stuff over to computers." Well, here we go.
deepmind  ai  strategy  realtime 
january 2019 by charlesarthur
AI thinks Rachel Maddow is a man (and this is a problem for all of us) • Medium
Edwin Ong:
<p>As more machine learning systems get used in production, it is increasingly important to adopt better testing beyond the test dataset. Unlike traditional software quality assurance, in which systems are tested to ensure that features operate as expected, machine learning testing requires the curation and generation of new datasets and a framework capable of dealing with confidence levels rather than the traditional 404 and 500 error codes from web servers.

My partner Alex and I have been working on tools to support machine learning in production. As she wrote in The Rise of the Model Servers, as machine learning moves from the lab into production, additional security and testing services are required to fully complete the stack. One of our tools, ML Safety Server, allows the rapid generation and management of additional test datasets and the tracking of how these datasets perform over time. It is from using the Safety Server that we discovered that AI thinks Rachel Maddow is a man.

We’ve been using public cloud APIs to prototype the Safety Server. We discovered the Rachel Maddow issue when testing image recognition services. AWS, Azure, Clarifai, and Watson have all misgendered Rachel Maddow when given recent images of her.</p>


So basically he's saying that with great computing power comes great responsibility to make sure that the training and test sets are really, really good.
ai  machinelearning  rachelmaddow 
january 2019 by charlesarthur
A neural network can learn to organize the world it sees into concepts—just like we do • MIT Technology Review
Karen Hao:
<p>GANs, or generative adversarial networks, are the social-media starlet of AI algorithms. They are responsible for creating the first AI painting ever sold at an art auction and for superimposing celebrity faces on the bodies of porn stars. They work by pitting two neural networks against each other to create realistic outputs based on what they are fed. Feed one lots of dog photos, and it can create completely new dogs; feed it lots of faces, and it can create new faces. 

As good as they are at causing mischief, researchers from the MIT-IBM Watson AI Lab realized GANs are also a powerful tool: because they paint what they’re “thinking,” they could give humans insight into how neural networks learn and reason. This has been something the broader research community has sought for a long time—and it’s become more important with our increasing reliance on algorithms.

“There’s a chance for us to learn what a network knows from trying to re-create the visual world,” says David Bau, an MIT PhD student who worked on the project.

So the researchers began probing a GAN’s learning mechanics by feeding it various photos of scenery—trees, grass, buildings, and sky. They wanted to see whether it would learn to organize the pixels into sensible groups without being explicitly told how.

Stunningly, over time, it did. By turning “on” and “off” various “neurons” and asking the GAN to paint what it thought, the researchers found distinct neuron clusters that had learned to represent a tree, for example. Other clusters represented grass, while still others represented walls or doors. In other words, it had managed to group tree pixels with tree pixels and door pixels with door pixels regardless of how these objects changed color from photo to photo in the training set. “These GANs are learning concepts very closely reminiscent of concepts that humans have given words to,” says Bau.</p>
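
The probing step - switching units off and re-rendering - is the part worth seeing in code. Roughly, and only roughly (this is my own PyTorch sketch of the idea, not the MIT-IBM Watson AI Lab code; the generator, layer and unit numbers are all placeholders):
<pre>
import torch

def ablate_units(generator, layer, unit_indices, z):
    """Zero out chosen channels ('neurons') in one intermediate layer of a
    trained GAN generator, then render the scene so you can see what vanishes
    - trees, doors, grass - compared with the unablated output."""
    def zero_channels(module, inputs, output):
        output[:, unit_indices] = 0   # switch the chosen units "off"
        return output

    handle = layer.register_forward_hook(zero_channels)
    try:
        with torch.no_grad():
            ablated = generator(z)    # same latent input, doctored activations
    finally:
        handle.remove()               # put the generator back as it was
    return ablated

# Usage sketch: original = generator(z); without_trees = ablate_units(generator,
# generator.layer4, [12, 77], z). If the trees disappear, units 12 and 77 had
# learned something very like the concept "tree".
</pre>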


OK, so it can group them as concepts. Is that the same as having a concept of them, though?
ai  algorithms  artificialintelligence 
january 2019 by charlesarthur
Ray Kurzweil: 'AI is still on course to outpace human intelligence' • Gray Scott
BJ Murphy:
<p>Using examples of modern-day AI like AlphaGo, there are clear signs that they’re already starting to outpace human intelligence involving specific tasks. This has been a common factor of AI for the last couple decades, starting with the simple goal of defeating the world’s best (human) chess players. Today, they’re outpacing us in chess, Go, various strategic computer games, and even medical diagnostics. The question remains, however, as to whether AI will ever reach the point of superintelligence—also commonly known as the Technological Singularity.

What I found most interesting from Kurzweil’s response wasn’t so much his consistency in the belief that AI will indeed outpace human intelligence as a whole; rather our fears of a dystopian future where AI has gone astray are becoming increasingly unlikely. He makes this argument with the understanding that there is no singular AI being controlled by singular powerful companies or people. In today’s reality, there are millions of different AIs being controlled by anyone who owns a smartphone.

One could argue that the level of power still isn’t well-balanced between centralized companies and a decentralized populace, especially as companies like Facebook, Amazon, and Google (Kurzweil’s current employer) continue making headlines as a result of their egregious negligence. However, with companies like SingularityNET working to democratize the technology, AI isn’t just moving at a pace beyond human intelligence; they’re moving at a pace that’ll empower the human species as a whole, whether that comes in the form of maintaining their longevity, increasing their cognitive capacities, or giving them access to the stars themselves. </p>


The difference between being superhumanly good at Go and being superintelligent is like the difference between climbing a flight of stairs and climbing K2. They're the same class of problem, separated by colossal levels of difficulty.
intelligence  ai  superintelligence 
january 2019 by charlesarthur
Alex Rosenblat’s Uberland: review • NY Mag
Adrian Chen:
<p>One thing you get from reading Alex Rosenblat’s Uberland: How Algorithms Are Rewriting the Rules of Work, is that there is nothing inevitable about management trending in a positive direction. Drawing on four years of ethnographic research among Uber drivers, Rosenblat has produced a thoroughly dystopian report that details how millions of drivers are now managed by a computerized system that combines the hard authoritarianism of Frederick Winslow Taylor with the cynical cheerleading of Michael Scott.

But wait: Isn’t the whole point of Uber that you can be your own boss? After all, Uber talks of its drivers not as employees but “partners.” In its propaganda, Uber portrays itself not as a taxi company at all but a technology platform that connects drivers directly to riders. “FREEDOM PAYS WEEKLY,” reads one recruitment ad reproduced in Uberland.

Next to it, there’s a picture of a breezy millennial with shaggy hair and a five-o’clock shadow, a scarf draped rakishly around his neck. He looks so noncorporate that he might not be wearing any pants.

In order to put that idea to rest, Rosenblat must first untangle the myths that made it seem possible in the first place. If you think about it, it’s bizarre that taxi drivers became a symbol of cutting-edge technological disruption. Cab drivers have typically occupied a benighted role in the public imagination: hustlers, criminals, or, at best, misanthropic folk philosophers. Rosenblat offers a valuable history of the ideological work that went into the “gentrification” of the profession.</p>
ai  uber  algorithm 
january 2019 by charlesarthur
Artificial intelligence can detect Alzheimer’s disease in brain scans six years before a diagnosis • UC San Francisco
Dana Smith:
<p>glucose PET scans are much more common and cheaper, especially in smaller health care facilities and developing countries, because they’re also used for cancer staging.

Radiologists have used these scans to try to detect Alzheimer’s by looking for reduced glucose levels across the brain, especially in the frontal and parietal lobes of the brain. However, because the disease is a slow progressive disorder, the changes in glucose are very subtle and so difficult to spot with the naked eye.

To solve this problem, Sohn applied a machine learning algorithm to PET scans to help diagnose early-stage Alzheimer’s disease more reliably.

“This is an ideal application of deep learning because it is particularly strong at finding very subtle but diffuse processes. Human radiologists are really strong at identifying tiny focal finding like a brain tumor, but we struggle at detecting more slow, global changes,” says Sohn. “Given the strength of deep learning in this type of application, especially compared to humans, it seemed like a natural application.”

To train the algorithm, Sohn fed it images from the Alzheimer’s Disease Neuroimaging Initiative (ADNI), a massive public dataset of PET scans from patients who were eventually diagnosed with either Alzheimer’s disease, mild cognitive impairment or no disorder. Eventually, the algorithm began to learn on its own which features are important for predicting the diagnosis of Alzheimer’s disease and which are not.

…The algorithm performed with flying colors. It correctly identified 92% of patients who developed Alzheimer’s disease in the first test set and 98% in the second test set. What’s more, it made these correct predictions on average 75.8 months – a little more than six years – before the patient received their final diagnosis.</p>


Slightly scary. What do you do with a diagnosis like that?
ai  alzheimers 
january 2019 by charlesarthur
John Giannandrea named to Apple’s executive team • Apple
<p>John Giannandrea has been named to the company’s executive team as senior vice president of Machine Learning and Artificial Intelligence Strategy. He joined Apple in April 2018.

Giannandrea oversees the strategy for AI and Machine Learning across all Apple products and services, as well as the development of Core ML and Siri technologies. His team's focus on advancing and tightly integrating machine learning into Apple products is delivering more personal, intelligent and natural interactions for customers while protecting user privacy. 

“John hit the ground running at Apple and we are thrilled to have him as part of our executive team,” said Tim Cook, Apple’s CEO. “Machine learning and AI are important to Apple’s future as they are fundamentally changing the way people interact with technology, and already helping our customers live better lives. We’re fortunate to have John, a leader in the AI industry, driving our efforts in this critical area.” </p>


Only taken seven years, but Siri now has his/her own veep. And note the points about ML/AI being "important". Not "essential"?
apple  ai  siri 
december 2018 by charlesarthur
This health startup won big government deals—but inside, doctors flagged problems • Forbes
Parmy Olson:
<p>the spectacle of brash tech entrepreneurs making outsized claims for their products is hardly a new phenomenon. Neither would matter very much except for the fact that Babylon has two contracts with Britain’s National Health Service, which runs one of the world’s largest nationalized healthcare systems. Babylon’s GP At Hand app offers 35,000 NHS patients video calls and access to its triage chatbot for advice on whether to see a doctor. The NHS is also encouraging 2 million citizens in North London to use NHS 111: Online, an app from Babylon that primarily features a triage chatbot as an alternative to the NHS advice line. Neither uses Babylon’s diagnostic advice chatbot, but the company has talked about bringing this feature to its NHS apps, staff say.

The NHS’s motivations are clear and noble: It wants to save money and produce better health outcomes for patients. Britain will spend nearly $200bn on its national healthcare system in 2020, a sum equivalent to about 7% of GDP. That slice of GDP has doubled since 1950, and the country desperately needs to find a way to rein in costs while still providing a benefit that is seen as central to the UK’s social contract. 

Reducing emergency room visits is a logical step, since they cost the NHS $200 on average per visit, a total of $4bn in the past year, while waiting times are increasing and at least 1.5 million Brits go to the emergency room when they don’t need to. Babylon’s cost-saving chatbot could be a huge help. If it worked better. 

There are some doubts, for instance, about whether the software can fulfill one of its main aims: keeping the “worried well” from heading to the hospital. Early and current iterations of the chatbot advise users to go for a costly emergency room visit in around 30% of cases, according to a Babylon staffer, compared with roughly 20% of people who dial the national health advice line, 111. It’s not clear how many patients take that advice, and Babylon says it doesn’t track that data. </p>


Another amazing exposé; one of Babylon's biggest boosters is the current health secretary Matt Hancock. Perhaps he'll read this and think again.
health  babylon  ai  machinelearning 
december 2018 by charlesarthur
Learning How AI Makes Decisions • PCMag UK
<p>After her neural networks failed to reveal the reasons they were mislabelling videos and pictures, [Kate] Saenko [an associate professor at the Department of Computer Science at Boston University] and a team of researchers at Boston University engaged in a project to find the parameters that influenced those decisions.

What came out of the effort was <a href="https://bdtechtalks.com/2018/10/15/kate-saenko-explainable-ai-deep-learning-rise/">RISE</a>, a method that tries to explain to interpret decisions made by AI algorithms. Short for "randomized input sampling for explanation of black-box models," RISE is a local explanation model.

When you provide an image-classification network with an image input, what it returns is a set of classes, each associated with a probability. Normally, you'd have no insight into how the AI reached that decision. But RISE provides you with a heatmap that describes which parts of the image are contributing to each of those output classes.

<img src="https://c-3sux78kvnkay76x24gyykzyx2evisgmx2eius.g00.pcmag.com/g00/3_c-3ccc.visgm.ius_/c-3SUXKVNKAY76x24nzzvyx3ax2fx2fgyykzy.visgm.iusx2fskjogx2fosgmkyx2f185910-x78oyk-kdvrgotghrk-go-kdgsvrk-ygroktie-sgv.vtmx3fo76i.sgx78qx3dosgmk_$/$/$/$" width="100%" />

For instance, in the above image, it's clear that the network in question is mistaking brown sheep for cows, which might mean that it hasn't been trained on enough examples of brown sheep. This type of problem happens often. Using the RISE method, Saenko was able to discover that her neural networks were specifying the gender of the people in the cooking videos based on pots and pans and other objects that appeared in the background instead of examining their facial and physical features.

The idea behind RISE is to randomly obscure parts of the input image and run it through the neural network to observe how the changes affect the output weights. By repeating the masking process multiple times, RISE is able to discern which parts of the image are more important to each output class.</p>


Clever - and very usable.
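
The masking procedure is simple enough to sketch. Below is a minimal, illustrative Python version of the idea, assuming a black-box `model` callable that maps a batch of images to per-class probabilities; the function name, parameters and the nearest-neighbour upsampling are my simplifications, not the RISE authors' released code.

```python
import numpy as np

def rise_saliency(model, image, target_class, n_masks=2000, grid=7, p_keep=0.5):
    """Estimate a saliency heatmap for one class by randomly masking the input.

    image: HxWxC array in [0, 1]; model: callable taking a batch of images
    and returning per-class probabilities. Returns an HxW importance map.
    """
    h, w = image.shape[:2]
    saliency = np.zeros((h, w))

    for _ in range(n_masks):
        # Coarse random binary grid, upsampled to image size (the paper uses
        # smooth bilinear upsampling with random shifts; nearest is enough here).
        coarse = (np.random.rand(grid, grid) < p_keep).astype(float)
        cell_h, cell_w = h // grid + 1, w // grid + 1
        mask = np.kron(coarse, np.ones((cell_h, cell_w)))[:h, :w]

        masked = image * mask[..., None]                    # obscure part of the image
        score = model(masked[None, ...])[0, target_class]   # probability for the class

        saliency += score * mask                            # credit the pixels that were kept

    return saliency / (n_masks * p_keep)                    # normalise by expected coverage
```

Pixels that consistently appear in high-scoring masks end up with large values, which is what the heatmap above visualises: the brighter the region, the more the classifier leaned on it.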
ai  machinelearning  explanation 
december 2018 by charlesarthur
AI mistakes bus-side ad for famous CEO, charges her with jaywalking • Caixin Global
<p>Cities across China have debuted crime-fighting facial recognition technology to much fanfare over the past year. But some of these jaywalker-busting devices aren’t as impressive as they seem.

A facial recognition system in the city of Ningbo caught Dong Mingzhu, the chair of appliance-making giant Gree Electric, running a red light. Only it turned out not to be Dong, but rather an advertisement featuring her face on the side of a bus, local police said on Weibo Wednesday.

The police said they have upgraded their tech to avoid issues like this in the future. The real Dong, meanwhile, is embroiled in drama with an electric vehicle company. </p>
ai  jaywalking  error 
november 2018 by charlesarthur
Wanted: the ‘perfect babysitter.’ Must pass AI scan for respect and attitude • The Washington Post
Drew Harwell:
<p>When Jessie Battaglia started looking for a new babysitter for her 1-year-old son, she wanted more information than she could get from a criminal-background check, parent comments and a face-to-face interview.

So she turned to Predictim, an online service that uses “advanced artificial intelligence” to assess a babysitter’s personality, and aimed its scanners at one candidate’s thousands of Facebook, Twitter and Instagram posts.

The system offered an automated “risk rating” of the 24-year-old woman, saying she was at a “very low risk” of being a drug abuser. But it gave a slightly higher risk assessment — a 2 out of 5 — for bullying, harassment, being “disrespectful” and having a “bad attitude.”

The system didn’t explain why it had made that decision. But Battaglia, who had believed the sitter was trustworthy, suddenly felt pangs of doubt.

“Social media shows a person’s character,” said Battaglia, 29, who lives outside Los Angeles. “So why did she come in at a 2 and not a 1?”

Predictim is offering parents the same playbook that dozens of other tech firms are selling to employers around the world: artificial-intelligence systems that analyze a person’s speech, facial expressions and online history with promises of revealing the hidden aspects of their private lives…

…The systems depend on black-box algorithms that give little detail about how they reduced the complexities of a person’s inner life into a calculation of virtue or harm. And even as Predictim’s technology influences parents’ thinking, it remains entirely unproven, largely unexplained and vulnerable to quiet biases over how an appropriate babysitter should share, look and speak.</p>


Evaluating these systems is becoming more important than ever; and more difficult than ever. And you just know this is going to turn out to be subtly racist.
babysitter  ai  predictim 
november 2018 by charlesarthur
Ranking Gmail’s AI-fuelled Smart Replies • NY Mag
Christopher Bonanos:
<p>The most recognizable feature of Gmail’s newly rolled-out redesign is the so-called smart reply, wherein bots offer three one-click responses to each mail message. Say your email contains the words “you free for lunch?” The autoreplies Gmail presents will be something like “Sure!” and “Yes!” and “Looking forward to it!” The idea, especially on a small, one-hand phone screen, is that you can tap and send using one thumb, without typing. It’s not clear just how many of these prewritten options there are, or how sophisticated the machine learning behind them is. The AI is not yet sharp enough to offer genuinely useful responses like “Please, for the love of Christ, stop sending me these offers to buy those sandals whose ad I clicked on last month” or emotionally honest ones like “Hey, it would be wonderful if someone in our group cancels our drinks tonight because I would rather stay home and order dan dan noodles while watching Succession.” Until then, we’re stuck with the few dozen simple responses that appear regularly. Some are better than others. Shall we rank? </p>


≥ Ok, sounds good! ≤
≥ We should rethink this ≤
≥ I can see everything you type you know ≤

But the idea that the answers might change over time is rather interesting.
gmail  ai  replies 
november 2018 by charlesarthur
Tempted to expense that strip club as a business dinner? AI is watching • Bloomberg
Olivia Carville:
<p>One employee traveling for work checked his dog into a kennel and billed it to his boss as a hotel expense. Another charged yoga classes to the corporate credit card as client entertainment. A third, after racking up a small fortune at a strip club, submitted the expense as a steakhouse business dinner. 

These bogus expenses, which occurred recently at major U.S. companies, have one thing in common: All were exposed by artificial intelligence algorithms that can in a matter of seconds sniff out fraudulent claims and forged receipts that are often undetectable to human auditors—certainly not without hours of tedious labor.

AppZen, an 18-month-old AI accounting startup, has already signed up several big companies, including Amazon.com Inc., International Business Machines Corp., Salesforce.com Inc. and Comcast Corp. and claims to have saved its clients $40 million in fraudulent expenses. AppZen and traditional firms like Oversight Systems say their technology isn’t erasing jobs—so far—but rather freeing up auditors to dig deeper into dubious claims and educate employees about travel and expense policies.

“People don’t have time to look at every expense item,” says AppZen Chief Executive Officer Anant Kale. “We wanted to get AI to do it for them and to find things the human eye might miss.”</p>
ai  expenses 
november 2018 by charlesarthur
AI is not “magic dust” for your company, says Google’s cloud AI boss • Technology Review
Will Knight interviews Andrew Moore, ex-Carnegie Mellon University:
<p><strong>Q: Like you, lots of AI researchers are being sucked into big companies. Isn’t that bad for AI?</strong>

AM: It’s healthy for the world to have people who are thinking about 25 years into the future—and people who are saying “What can we do right now?”

There’s one project at Carnegie Mellon that involves a 70-foot-tall robot designed to pick up huge slabs of concrete and rapidly create levees against major flooding. It’s really important for the world that there are places that are doing that—but it’s kind of pointless if that’s all that’s going on in artificial intelligence.

While I’ve been at Carnegie Mellon, I’ve had hundreds of meetings with principals in large organizations and companies who are saying, “I am worried my business will be completely replaced by some Silicon Valley startup. How can I build something to counter that?”

I can’t think of anything more exciting than being at a place that is not just doing AI for its own sake anymore, but is determined to bring it out to all these other stakeholders who need it.

<strong>Q: How big of a technology shift is this for businesses?</strong>

AM: It’s like electrification. And it took about two or three decades for electrification to pretty much change the way the world was. Sometimes I meet very senior people with big responsibilities who have been led to believe that artificial intelligence is some kind of “magic dust” that you sprinkle on an organization and it just gets smarter. In fact, implementing artificial intelligence successfully is a slog.

When people come in and say “How do I actually implement this artificial-intelligence project?” we immediately start breaking the problems down in our brains into the traditional components of AI—perception, decision making, action (and this decision-making component is a critical part of it now; you can use machine learning to make decisions much more effectively)—and we map those onto different parts of the business. One of the things Google Cloud has in place is these building blocks that you can slot together.

Solving artificial-intelligence problems involves a lot of tough engineering and math and linear algebra and all that stuff. It very much isn’t the magic-dust type of solution.</p>

But tell me more about the 70-foot robot that moves paving slabs.
ai  robotics  business 
november 2018 by charlesarthur
In the age of A.I., is seeing still believing? • The New Yorker
Joshua Rothman on the rise of "deep fakes":
<p>As alarming as synthetic media may be, it may be more alarming that we arrived at our current crises of misinformation—Russian election hacking; genocidal propaganda in Myanmar; instant-message-driven mob violence in India—without it. Social media was enough to do the job, by turning ordinary people into media manipulators who will say (or share) anything to win an argument. The main effect of synthetic media may be to close off an escape route from the social-media bubble. In 2014, video of the deaths of Michael Brown and Eric Garner helped start the Black Lives Matter movement; footage of the football player Ray Rice assaulting his fiancée catalyzed a reckoning with domestic violence in the National Football League. It seemed as though video evidence, by turning us all into eyewitnesses, might provide a path out of polarization and toward reality. With the advent of synthetic media, all that changes. Body cameras may still capture what really happened, but the aesthetic of the body camera—its claim to authenticity—is also a vector for misinformation. “Eyewitness video” becomes an oxymoron. The path toward reality begins to wash away.

In the early days of photography, its practitioners had to argue for its objectivity. In courtrooms, experts debated whether photos were reflections of reality or artistic products; legal scholars wondered whether photographs needed to be corroborated by witnesses. It took decades for a consensus to emerge about what made a photograph trustworthy. Some technologists wonder if that consensus could be reëstablished on different terms. Perhaps, using modern tools, photography might be rebooted…

…Citron and Chesney indulge in a bit of sci-fi speculation. They imagine the “worst-case scenario,” in which deepfakes prove ineradicable and are used for electioneering, blackmail, and other nefarious purposes. In such a world, we might record ourselves constantly, so as to debunk synthetic media when it emerges. “The vendor supplying such a service and maintaining the resulting data would be in an extraordinary position of power,” they write; its database would be a tempting resource for law-enforcement agencies. Still, if it’s a choice between surveillance and synthesis, many people may prefer to be surveilled. Truepic, McGregor told me, had already had discussions with a few political campaigns. “They say, ‘We would use this to just document everything for ourselves, as an insurance policy.’ ”</p>
ai  images  deception 
november 2018 by charlesarthur
Why Big Tech pays poor Kenyans to teach self-driving cars • BBC News
Dave Lee went to the slum of Kibera, on the east side of Nairobi, Kenya:
<p>Brenda does this work for Samasource, a San Francisco-based company that counts Google, Microsoft, Salesforce and Yahoo among its clients. Most of these firms don't like to discuss the exact nature of their work with Samasource - as it is often for future projects - but it can be said that the information prepared here forms a crucial part of some of Silicon Valley's biggest and most famous efforts in AI.

It's the kind of technological progress that will likely never be felt in a place like Kibera. As Africa's largest slum, it has more pressing problems to solve, such as a lack of reliable clean water, and a well-known sanitation crisis.

But that's not to say artificial intelligence can't have a positive impact here. We drove to one of Kibera's few permanent buildings, found near a railway line that, on this rainy day, looked thoroughly decommissioned by mud, but has apparently been in regular use since its colonial inception.

Almost exactly a year ago, this building was the dividing line between stone-throwing rioters and the military. Today, it's a thriving hub of activity: a media school and studio, something of a cafeteria, and on the first floor, a room full of PCs. Here, Gideon Ngeno teaches around 25 students the basics of using a personal computer.

What's curious about this process is that digital literacy is high, even in Kibera, where smartphones are common and every other shop is selling chargers and accessories, which people buy using the mobile money system MPesa.</p>


Terrific story, pointing out the contradictions - "magic" tech enabled by humans in distant countries who are paid little because higher pay would distort the local market, but who are even so given the money and knowledge to break out of poverty. You could call it "good capitalism".
ai  recognition  kenya 
november 2018 by charlesarthur
Chelsea is using our AI research for smarter football coaching • The Conversation
Varuna de Silva is a lecturer at the Institute for Digital Technologies at Loughborough University:
<p>The best footballers aren’t necessarily the ones with the best physical skills. The difference between success and failure in football often lies in the ability to make the right split-second decisions on the field about where to run and when to tackle, pass or shoot. So how can clubs help players train their brains as well as their bodies?

My colleagues and I are working with Chelsea FC academy to develop a system to measure these decision-making skills using artificial intelligence (AI) – a kind of robot coach or scout, if you will. We’re doing this by analysing several seasons of data that tracks players and the ball throughout each game, and developing a computer model of different playing positions. The computer model provides a benchmark to compare the performance of different players. This way we can measure the performance of individual players independent of the actions of other players.

We can then visualise what might have happened if the players had made a different decision in any case. TV commentators are always criticising player actions, saying they should have done something else without any real way of testing the theory. But our computer model can show just how realistic these suggestions might be.</p>


Tricky to do, because every situation is unique - and when something similar arises, how do you know if it's sufficiently similar or different to do something else? Possibly pointing this out is something good managers have done instinctively for years. Now it's the AIs' turn.
ai  football  coaching 
november 2018 by charlesarthur
An AI lie detector is going to start questioning travellers in the EU • Gizmodo
Melanie Ehrenkranz:
<p>The virtual border control agent [in Hungary, Latvia and Greece] will ask travellers questions after they’ve passed through the checkpoint. Questions include, “What’s in your suitcase?” and “If you open the suitcase and show me what is inside, will it confirm that your answers were true?” according to New Scientist. The system reportedly records travelers’ faces using AI to analyze 38 micro-gestures, scoring each response. The virtual agent is reportedly customized according to the traveler’s gender, ethnicity, and language.

Travelers who pass the test will receive a QR code that lets them through the border. If they don’t, the virtual agent will reportedly get more serious, and the traveler will be handed off to a human agent who will assess their report. But, according to the New Scientist, this pilot program won’t, in its current state, prevent anyone’s ability to cross the border.

This is because the program is very much in the experimental phases. In fact, the automated lie-detection system was modeled after another system created by some individuals from iBorderCtrl’s team, but it was only tested on 30 people.</p>


Hmm. 30 people? Feels like this is going to have some teething problems.
ai  immigration  customs  borders 
november 2018 by charlesarthur
AIs trained to help with sepsis treatment, fracture diagnosis • Ars Technica
John Timmer:
<p>The new research isn't intended to create an AI that replaces these doctors; rather, it's intended to help them out.

The team recruited 18 orthopedic surgeons to diagnose over 135,000 images of potential wrist fractures, and then it used that data to train their algorithm, a deep-learning convolutional neural network. The algorithm was used to highlight areas of interest to doctors who don't specialize in orthopedics. In essence, it was helping them focus on areas that are most likely to contain a break.

In the past, trials like this have resulted in over-diagnosis, where doctors would recommend further tests for something that's harmless. But in this case, the accuracy went up as false positives went down. The sensitivity (or ability to identify fractures) went from 81% up to 92%, while the specificity (or ability to make the right diagnosis) rose from 88% to 94%. Combined, these results mean that ER docs would have seen their misdiagnosis rate drop by nearly half.

Neither of these involved using the software in a context that fully reflects medically relevant circumstances. Both ER doctors and those treating sepsis (who may be one and the same) will normally have a lot of additional concerns and distractions, so it may be a challenge to integrate AI use into their process. </p>


That is the point, isn't it: it's great when you're not trying to figure out which of 15 different possible wrong things is wrong with the patient.
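
The "nearly half" figure checks out with quick arithmetic. A minimal sketch, assuming fractured and non-fractured cases are weighted equally (the study's actual case mix isn't given in the excerpt):

```python
# Back-of-envelope check of the "misdiagnosis rate drops by nearly half" claim.
def misdiagnosis_rate(sensitivity, specificity, prevalence=0.5):
    false_negatives = (1 - sensitivity) * prevalence        # fractures that get missed
    false_positives = (1 - specificity) * (1 - prevalence)  # healthy wrists flagged as broken
    return false_negatives + false_positives

before = misdiagnosis_rate(0.81, 0.88)   # 0.155 -> about 15.5% of cases misread
after = misdiagnosis_rate(0.92, 0.94)    # 0.070 -> about 7.0% of cases misread
print(f"{before:.3f} -> {after:.3f} ({1 - after / before:.0%} fewer misdiagnoses)")
```

Under that equal-weighting assumption the error rate falls from roughly 15.5% to 7%, a drop of just over half; a different fracture prevalence would shift the exact figure but not the rough conclusion.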
ai  sepsis  fracture  doctor 
october 2018 by charlesarthur
Apple: the second-best tech company in the world • The Outline
Joshua Topolsky:
<p>Apple’s lack of data (and its inability or unwillingness to blend large swaths of data) actually seems to be one of the issues driving its slippage in software innovation. While Google is using its deep pool of user data to do astounding things like screen calls or make reservations for users with AI, map the world in more detail, identify objects and describe them in real-time, and yes — make its cameras smarter, faster, and better looking — Apple devices seem increasingly disconnected from the world they exist in (and sometimes even their own platforms).

As both Amazon and Google have proven in the digital assistant and voice computing space, the more things you know about your users, the better you can actually serve them. Apple, on the other hand, wants to keep you inside its tools, safe from the potential dangers of data misuse or abuse certainly, but also marooned on a narrow island, sanitized and distanced from the riches that data can provide when used appropriately.</p>


I'm willing to be corrected, but I don't think it's deep pools of user data that Google's using for Call Screening or Duplex. It's AI systems which have been taught on quite different sets of data from email. (I don't know what they have been taught on.) Certainly, user data makes maps better, and the data from Google Photos does - that's probably a key input to the photo system on the Pixel 3.

But that data does exist, and whether Apple starts to use it more broadly is a key question for the future. It's the collision of questions: can you improve the camera (and other systems) without embedded AI? At present the answer seems to be no. (Though that might just be because, when everything's getting AI, getting AI seems like the only answer.)
apple  innovation  ai 
october 2018 by charlesarthur
AI Art at Christie’s sells for $432,500 - The New York Times
Gabe Cohn:
<p>Last Friday, a portrait produced by artificial intelligence was hanging at Christie’s New York opposite an Andy Warhol print and beside a bronze work by Roy Lichtenstein. On Thursday, it sold for well over double the price realized by both those pieces combined.

“Edmond de Belamy, from La Famille de Belamy” sold for $432,500 including fees, over 40 times Christie’s initial estimate of $7,000-$10,000. The buyer was an anonymous phone bidder.

The portrait, by the French art collective Obvious, was marketed by Christie’s as the first portrait generated by an algorithm to come up for auction. It was inspired by a sale earlier this year, in which the French collector Nicolas Laugero Lasserre bought a portrait directly from the collective for about 10,000 euros, or about $11,400.</p>


GPU rig got surpassed by ASICs? Get it painting instead. (Though the picture that was auctioned <a href="https://media.npr.org/assets/img/2012/09/20/513259474_13195159_wide-360d295b5726058b589b84b5d341f077b1cde4a7.jpg?s=1400">did look a bit like this human-generated one</a> to me.)
ai  painting 
october 2018 by charlesarthur
No, AI won’t solve the fake news problem • The New York Times
Gary Marcus (a professor of psychology) and Ernest Davis (a professor of computer science):
<p>To get a handle on what automated fake-news detection would require, consider an article posted in May on the far-right website WorldNetDaily, or WND. The article reported that a decision to admit girls, gays and lesbians to the Boy Scouts had led to a requirement that condoms be available at its “global gathering.” A key passage consists of the following four sentences:
<p>The Boy Scouts have decided to accept people who identify as gay and lesbian among their ranks. And girls are welcome now, too, into the iconic organization, which has renamed itself Scouts BSA. So what’s next? A mandate that condoms be made available to ‘all participants’ of its global gathering.</p>


Was this account true or false? Investigators at the fact-checking site Snopes determined that the report was “mostly false.” But determining how it went afoul is a subtle business beyond the dreams of even the best current A.I.

First of all, there is no telltale set of phrases. “Boy Scouts” and “gay and lesbian,” for example, have appeared together in many true reports before. Then there is the source: WND, though notorious for promoting conspiracy theories, publishes and aggregates legitimate news as well. Finally, sentence by sentence, there are a lot of true facts in the passage: Condoms have indeed been available at the global gathering that scouts attend, and the Boy Scouts organization has indeed come to accept girls as well as gays and lesbians into its ranks.

What makes the article “mostly false” is that it implies a causal connection that doesn’t exist. It strongly suggests that the inclusion of gays and lesbians and girls led to the condom policy (“So what’s next?”). But in truth, the condom policy originated in 1992 (or even earlier) and so had nothing to do with the inclusion of gays, lesbians or girls, which happened over just the past few years.</p>
facebook  ai  news 
october 2018 by charlesarthur
Five ways Google Pixel 3 camera pushes the boundaries of computational photography • Digital Photography Review
Rishi Sanyal:
<p>With the launch of the Google Pixel 3, smartphone cameras have taken yet another leap in capability. I had the opportunity to sit down with Isaac Reynolds, Product Manager for Camera on Pixel, and Marc Levoy, Distinguished Engineer and Computational Photography Lead at Google, to learn more about the technology behind the new camera in the Pixel 3.

One of the first things you might notice about the Pixel 3 is the single rear camera. At a time when we're seeing companies add dual, triple, even quad-camera setups, one main camera seems at first an odd choice.

But after speaking to Marc and Isaac I think that the Pixel camera team is taking the correct approach – at least for now. Any technology that makes a single camera better will make multiple cameras in future models that much better, and we've seen in the past that a single camera approach can outperform a dual camera approach in Portrait Mode, particularly when the telephoto camera module has a smaller sensor and slower lens, or lacks reliable autofocus [like the Galaxy S9].</p>

This isn't actually a test of the Pixel 3. Plenty of interesting things here; will they come to the wider range of Android, though? The Pixel is a fraction of a fraction of Android sales.

We're also approaching the point where it's only the low-light pictures that show substantial differences between generations. (Thanks stormyparis for the link.)
computation  photograph  ai  ml 
october 2018 by charlesarthur
Amazon scraps secret AI recruiting tool that showed bias against women • Reuters
Jeffrey Dastin:
<p>The team had been building computer programs since 2014 to review job applicants’ resumes with the aim of mechanizing the search for top talent, five people familiar with the effort told Reuters.

Automation has been key to Amazon’s e-commerce dominance, be it inside warehouses or driving pricing decisions. The company’s experimental hiring tool used artificial intelligence to give job candidates scores ranging from one to five stars - much like shoppers rate products on Amazon, some of the people said.

“Everyone wanted this holy grail,” one of the people said. “They literally wanted it to be an engine where I’m going to give you 100 resumes, it will spit out the top five, and we’ll hire those.”

But by 2015, the company realized its new system was not rating candidates for software developer jobs and other technical posts in a gender-neutral way.

That is because Amazon’s computer models were trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period. Most came from men, a reflection of male dominance across the tech industry.</p>


So it would be more accurate to say that the AI tool <em>revealed</em> bias against women. But then it kept on doing the same: it would penalise CVs that included the word "women's". Eventually they realised they couldn't get it right.
amazon  ai  bias  gender 
october 2018 by charlesarthur