AI Safety Needs Social Scientists
Similarly, measuring how good people are as debate judges will not be easy. We would like to apply debate to problems where there is no other source of truth: if we had that source of truth, we would train ML models on it directly. But if there is no source of truth, there is no way to measure whether debate produced the correct answer. This problem can be avoided by starting with simple, verifiable domains, where the experimenters know the answer but the judge would not. “Success” then means that the winning debate argument is telling the externally known truth.
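The evaluation protocol described above can be put in toy form. A minimal sketch (the function name and data layout are my own illustration, not from the paper): in a verifiable domain the experimenters hold the ground truth, the judge does not, and "success" is the fraction of debates where the judge-picked winner argued the externally known true answer.

```python
def debate_success_rate(debates):
    """Fraction of debates whose judged winner argued the true answer.

    Each debate is a pair: (ground_truth_answer, answer_argued_by_judged_winner).
    The judge never sees ground truth; only the experimenters use it, here,
    to score the judge after the fact.
    """
    wins = sum(1 for truth, winner_claim in debates if winner_claim == truth)
    return wins / len(debates)

# Three hypothetical debates: in two, the winning argument told the truth.
debates = [("cat", "cat"), ("dog", "dog"), ("cat", "dog")]
print(debate_success_rate(debates))
```

The point of the sketch is the asymmetry: ground truth appears only in the scoring function, never in what the judge is shown.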
25 days ago by elrob
Thinking About Risks From AI: Accidents, Misuse and Structure - Lawfare
While discussions of misuse and accident risks have been useful in spurring discussion and efforts to counter potential downsides from AI, this basic framework also misses a great deal. The misuse and accident perspectives tend to focus only on the last step in a causal chain leading up to a harm: that is, the person who misused the technology, or the system that behaved in unintended ways. This, in turn, places the policy spotlight on measures that focus on this last causal step: for example, ethical guidelines for users and engineers, restrictions on obviously dangerous technology, and punishing culpable individuals to deter future misuse. Often, though, the relevant causal chain is much longer—and the opportunities for policy intervention much greater—than these perspectives suggest.

A recent fatal accident involving an AI system—Uber’s fatal self-driving car crash in early 2018—offers a good illustration of why the structural perspective’s emphasis on context and incentives is necessary. When the crash happened, commentators initially pointed to self-driving vehicles’ “incredibly brittle” vision systems as the culprit, as would make sense from a technical accident perspective. But later investigations showed that the vehicle in fact detected the victim early enough for the emergency braking system to prevent a crash. What, then, had gone wrong? The problem was that the emergency brake had purposely been turned off by engineers who were afraid that an overly sensitive braking system would make their vehicle look bad relative to competitors. They opted for this and other “dangerous trade-offs with safety” because they felt pressured to impress the new CEO with the progress of the self-driving unit, which the CEO was reportedly considering cutting due to poor market prospects.
25 days ago by elrob
The AI Cold War With China That Threatens Us All | WIRED
In May, facial-recognition cameras at Jiaxing Sports Center Stadium in Zhejiang led to the arrest of a fugitive who was attending a concert. He had been wanted since 2015 for allegedly stealing more than $17,000 worth of potatoes.

Comment: interesting piece, but pieces like this lose points when they don't consider scenarios in which the sub-structures of existing society lose their popular support, e.g. the impact of AI/autonomous weapons weakening the grip of nation-states themselves.

Under Xi, Communist Party committees within companies have expanded. Last November, China tapped Baidu, Alibaba, Tencent, and iFlytek, a Chinese voice-recognition software company, as the inaugural members of its “AI National Team.”

Shortly before Trump’s inauguration, Jack Ma, the chair of Alibaba, pledged to create a million jobs in the United States. By September 2018, he was forced to admit that the offer was off the table, another casualty in the growing list of companies and projects that are now unthinkable.
ai-policy  chinai 
october 2018 by elrob
AI Nationalism — Ian Hogarth
The Chinese state appears to have recognised the importance of data to its AI nationalism efforts. China’s latest cybersecurity law mandates that data being exported out of China have to be reviewed.

China’s annual imports of semiconductor-related products are now $260 billion and have recently risen above spending on oil.

[2017] AlphaGo defeated world No. 1 Ke Jie 3-0 in Wuzhen, China. Live video coverage of AlphaGo vs. Ke Jie was blocked in China.

This kind of dependency would be tantamount to a new kind of colonialism.

We can see small examples of new geopolitical relationships emerging. In March, Zimbabwe’s government signed a strategic cooperation framework agreement with CloudWalk Technology, a Guangzhou-based startup, for a large-scale facial recognition program: Zimbabwe will export a database of its citizens’ faces to China, allowing CloudWalk to improve its underlying algorithms with more data and Zimbabwe to get access to CloudWalk’s computer vision technology. This is part of the much broader Belt and Road initiative of the Chinese government.
ai-policy  industrial-policy  best-of-2018 
august 2018 by elrob
Guide to working in AI policy and strategy - 80,000 Hours
A rough rule of thumb is to aim to read three or so AI papers a week to get a sense of what’s happening in the field, the terminology people use, and to be able to discriminate between real and fake AI news. Regarding AI jargon, you should aim for at least interactional expertise – essentially, the ability to pass the AI-researcher Turing Test in casual conversations at conferences, even if you couldn’t write a novel research paper yourself.
ai-policy  career 
august 2018 by elrob
Positively shaping the development of artificial intelligence - 80,000 Hours
A growing number of experts believe a revolution will occur during the 21st century through the invention of machines whose intelligence far surpasses ours.
august 2018 by elrob
AI for good: Is it for real? | Nesta
A quick summary would be that the various ethics committees - notably Facebook’s – have achieved very little, while activism, and investigative journalism, have achieved quite a lot. Probably the only useful thing the members of Facebook’s committee could have done would have been a mass resignation. If nothing else, difficult questions are now being asked of the ‘data ethics’ experts who spent a lot of time discussing theoretical questions (like the ‘trolley problem’) and little on the very rea...
august 2018 by elrob
Chinese Interests Take a Big Seat at the AI Governance Table
First, the government hopes that its role in standardization will generate more value out of AI technologies by facilitating data pooling and improving the interoperability of systems. The importance of standards in spurring economic development, particularly for ICTs, is pervasive in Chinese policy and industry circles. According to a popular saying, “First-tier companies make standards, second-tier companies make technology, and third-tier companies make products.”
china  ai-policy 
june 2018 by elrob
Google Employees Resign in Protest Against Pentagon Contract
Google has emphasized that its AI is not being used to kill, but the use of artificial intelligence in the Pentagon’s drone program still raises complex ethical and moral issues for tech workers and for academics who study the field of machine learning.

In addition to the petition circulating inside Google, the Tech Workers Coalition launched a petition in April demanding that Google abandon its work on Maven and that other major tech companies, including IBM and Amazon, refuse to work with the...
google  ethics  ai-policy 
may 2018 by elrob
Import AI: #91: European countries unite for AI grand plan; why the future of AI sensing is spatial; and testing language AI with GLUE. | Import AI
“The principles needed to build planetary-scale inference-and-decision-making systems of this kind, blending computer science with statistics, and taking into account human utilities, were nowhere to be found in my education.”

Things that make you go ‘hmmm’: Mr Jordan thanks Jeff Bezos for reading an earlier draft of the post. If there’s any company well-placed to build a global ‘intelligent infrastructure’ that dovetails into the physical world, it’s Amazon.
ai-policy  market-design 
may 2018 by elrob
The Trouble with Bias - NIPS 2017 Keynote - Kate Crawford #NIPS2017 - YouTube
- Classification is always arbitrary and ambiguous, hard to find human classifications that aren't "of their time"
- So bias will always exist?
- Allocation bias and representation bias
- Can't think of it as only a technical problem, though it is technically a hard problem to "solve"
- Call for interdisciplinary approaches to solving problem
- FATE group at Microsoft
ai-policy  best-of-2018 
may 2018 by elrob
Remarks at the SASE Panel On The Moral Economy of Tech
First, programmers are trained to seek maximal and global solutions. Why solve a specific problem in one place when you can fix the general problem for everybody, and for all time? We don't think of this as hubris, but as a laudable economy of effort. And the startup funding culture of big risk, big reward encourages this grandiose mode of thinking. There is powerful social pressure to avoid incremental change, particularly any change that would require working with people outside tech and treating them as intellectual equals.

Machine learning is like money laundering for bias. It's a clean, mathematical apparatus that gives the status quo the aura of logical inevitability. The numbers don't lie.

The reality is, opting out of surveillance capitalism means opting out of much of modern life.

We tend to imagine dystopian scenarios as one where a repressive government uses technology against its people. But what scares me in these scenarios is that each one would have broad social support, possibly majority support. Democratic societies sometimes adopt terrible policies.

We should not listen to people who promise to make Mars safe for human habitation, until we have seen them make Oakland safe for human habitation.

Techies will complain that trivial problems of life in the Bay Area are hard because they involve politics. But they should involve politics. Politics is the thing we do to keep ourselves from murdering each other.
ai-policy  scary 
april 2018 by elrob
Deep learning: Why it’s time for AI to get philosophical
The other serious risk is something I call nerd-sightedness: the inability to see value beyond one’s own inner circle. There’s a tendency in the computer-science world to build first, fix later, while avoiding outside guidance during the design and production of new technology. Both the people working in AI and the people holding the purse strings need to start taking the social and ethical implications of their work much more seriously.

Another kind of effort at fixing AI’s ethics problem is the proliferation of crowdsourced ethics projects, which have the commendable goal of a more democratic approach to science. One example is DJ Patil’s Code of Ethics for Data Science, which invites the data-science community to contribute ideas but doesn’t build up from the decades of work already done by philosophers, historians and sociologists of science. Then there’s MIT’s Moral Machine project, which asks the public to vote on questions such as whether a self-driving car with brake failure ought to run over five homeless people rather than one female doctor. Philosophers call these “trolley problems” and have published thousands of books and papers on the topic over the past half-century. Comparing the views of professional philosophers with those of the general public can be eye-opening, as experimental philosophy has repeatedly shown, but simply ignoring the experts and taking a vote instead is irresponsible.
april 2018 by elrob
The Pursuit of AI Is More Than an Arms Race - Defense One
This open, collaborative character of AI development – for which the private sector has acted as the primary engine of innovation – also renders infeasible most attempts to ban or constrain its diffusion. For that reason, traditional paradigms of arms control are unlikely to be effective if applied to this so-called arms race.
april 2018 by elrob
The Lebowski Theorem of machine superintelligence
In other words, Bach imagines that Bostrom’s hypothetical paperclip-making AI would foresee the fantastically difficult and time-consuming task of turning everything in the universe into paperclips and opt to self-medicate into no longer wanting or caring about making paperclips, instead doing whatever the AI equivalent is of sitting around on the beach all day sipping piña coladas, à la The Big Lebowski’s The Dude.
ai-policy  funny 
april 2018 by elrob
Palantir Knows Everything About You
Palantir’s software engineers showed up at the bank on skateboards. Neckties and haircuts were too much to ask, but JPMorgan drew the line at T-shirts. The programmers had to agree to wear shirts with collars, tucked in when possible.

After their departures, JPMorgan drastically curtailed its Palantir use, in part because “it never lived up to its promised potential,” says one JPMorgan executive who insisted on anonymity to discuss the decision.

Thiel told Bloomberg in 2011 that civil libertarians ought to embrace Palantir, because data mining is less repressive than the “crazy abuses and draconian policies” proposed after Sept. 11. The best way to prevent another catastrophic attack without becoming a police state, he argued, was to give the government the best surveillance tools possible, while building in safeguards against their abuse.

The company’s early data mining dazzled venture investors, who valued it at $20 billion in 2015. But Palantir has never reported a profit. It operates less like a conventional software company than like a consultancy, deploying roughly half its 2,000 engineers to client sites.

Palantir says its Privacy and Civil Liberties Team watches out for inappropriate data demands, but it consists of just 10 people in a company of 2,000 engineers.

Similarly, the court’s 2014 decision in Riley v. California found that cellphones contain so much personal information that they provide a virtual window into the owner’s mind, and thus necessitate a warrant for the government to search. Chief Justice John Roberts, in his majority opinion, wrote of cellphones that “with all they contain and all they may reveal, they hold for many Americans ‘the privacies of life.’” Justice Louis Brandeis, 86 years earlier, wrote a searing dissent in a wiretap case that seems to perfectly foresee the advent of Palantir.

When whole communities are algorithmically scraped for pre-crime suspects, data is destiny
palantir  privacy  ai-policy 
april 2018 by elrob
Prediction Machines
And they offer a motivating example that would require pretty advanced tech: At some point, as it turns the knob, the AI’s prediction accuracy crosses a threshold, changing Amazon’s business model. The prediction becomes sufficiently accurate that it becomes more profitable for Amazon to ship you the goods that it predicts you will want rather than wait for you to order them.

For years, economists have faced criticism that the agents on which we base our theories are hyper-rational and unrealistic models of human behavior. True enough, but when it comes to superintelligence, that means we have been on the right track. … Thus economics provides a powerful way to understand how a society of superintelligent AIs will evolve.

AI can lead to a strategic [business] change if three factors are present: (1) there is a core trade-off in the business model (e.g., shop-then-ship versus ship-then-shop); (2) the trade-off is influenced by uncertainty; and (3) an AI tool that reduces uncertainty tips the scales of the trade-off so that the optimal strategy changes from one side of the trade to the other.

If the prediction machine is an input that you can take off the shelf, then you can treat it like most companies treat energy and purchase it from the market, as long as AI is not core to your strategy. In contrast, if prediction machines are to be the center of your company’s strategy, then you need to control the data to improve the machine, so both the data and the prediction machine must be in-house.
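The threshold idea in the Amazon example can be sketched numerically. This is a toy model with entirely hypothetical numbers (the margin, return cost, and sales-uplift figures are my assumptions, not the book's): shipping predictively earns the margin on correct predictions but pays a return cost on wrong ones, so above some accuracy it beats waiting for orders.

```python
def shop_then_ship_profit(margin):
    # Baseline model: the customer orders first, so every shipment is wanted.
    return margin

def ship_then_shop_profit(accuracy, margin, return_cost):
    # Predictive model: correct predictions earn the margin,
    # wrong ones incur the cost of handling a return.
    return accuracy * margin - (1 - accuracy) * return_cost

margin, return_cost = 10.0, 4.0
# Hypothetical uplift: predictive shipping also reaches customers
# who would never have placed an order themselves.
uplift = 1.5

for accuracy in [0.6, 0.7, 0.8, 0.9]:
    predictive = uplift * ship_then_shop_profit(accuracy, margin, return_cost)
    baseline = shop_then_ship_profit(margin)
    print(accuracy, round(predictive, 1), predictive > baseline)
```

Under these made-up numbers the strategy flips somewhere between 70% and 80% accuracy, which is the "knob-turning" threshold the authors describe.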
ai-policy  books 
april 2018 by elrob