nhaliday : ai   297

Michael Akilian: Worker-in-the-loop Retrospective
Over the last ten years, many companies have created human-in-the-loop services that combine a mix of humans and algorithms. Now that some time has passed, we can tease out some patterns from their collective successes and failures. As someone who started a company in this space, my hope is that this retrospective can help prospective founders, investors, or companies navigating this space save time and fund more impactful projects.

A service is considered human-in-the-loop if it organizes its workflows with the intent to introduce models or heuristics that learn from the work of the humans executing the workflows. In this post, I will make reference to two common forms of human-in-the-loop:

User-in-the-loop (UITL): The end-user is interacting with suggestions from a software heuristic/ML system.
Worker-in-the-loop (WITL): A worker is paid to monitor suggestions from a software heuristic/ML system developed by the same company that pays the worker, but for the ultimate benefit of an end-user.
techtariat  reflection  business  tech  postmortem  automation  startups  hard-tech  ai  machine-learning  human-ml  cost-benefit  analysis  thinking  business-models  things  dimensionality  exploratory  markets  labor  economics  tech-infrastructure  gig-econ 
6 weeks ago by nhaliday
Sci-Hub | The Moral Machine experiment. Nature | 10.1038/s41586-018-0637-6
[The nine preferences measured (AMCEs): preference for inaction, sparing pedestrians, sparing the lawful, sparing females, sparing the fit, sparing higher status, sparing more characters, sparing the young, sparing humans]

We selected the 130 countries with at least 100 respondents (n range 101–448,125), standardized the nine target AMCEs of each country, and conducted a hierarchical clustering on these nine scores, using Euclidean distance and Ward’s minimum variance method[20]. This analysis identified three distinct ‘moral clusters’ of countries. These are shown in Fig. 3a, and are broadly consistent with both geographical and cultural proximity according to the Inglehart–Welzel Cultural Map 2010–2014[21].
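
[ed.: a minimal sketch of the clustering step described above, using random stand-in data in place of the real country-level AMCE scores:]

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import zscore

rng = np.random.default_rng(0)
amce = rng.normal(size=(130, 9))  # stand-in: 130 countries x 9 AMCEs

z = zscore(amce, axis=0)                           # standardize each AMCE across countries
Z = linkage(z, method="ward", metric="euclidean")  # Ward's method + Euclidean distance
clusters = fcluster(Z, t=3, criterion="maxclust")  # cut the tree into 3 "moral clusters"
```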

The first cluster (which we label the Western cluster) contains North America as well as many European countries of Protestant, Catholic, and Orthodox Christian cultural groups. The internal structure within this cluster also exhibits notable face validity, with a sub-cluster containing Scandinavian countries, and a sub-cluster containing Commonwealth countries.

The second cluster (which we call the Eastern cluster) contains many far eastern countries such as Japan and Taiwan that belong to the Confucianist cultural group, and Islamic countries such as Indonesia, Pakistan and Saudi Arabia.

The third cluster (a broadly Southern cluster) consists of the Latin American countries of Central and South America, in addition to some countries that are characterized in part by French influence (for example, metropolitan France, French overseas territories, and territories that were at some point under French leadership). Latin American countries are cleanly separated in their own sub-cluster within the Southern cluster.

...

Fig. 3 | Country-level clusters.

[ed.: I actually rather like how the values the West has compare w/ the global mean in this plot.]

...
Participants from individualistic cultures, which emphasize the distinctive value of each individual[23], show a stronger preference for sparing the greater number of characters (Fig. 4a). Furthermore, participants from collectivistic cultures, which emphasize the respect that is due to older members of the community[23], show a weaker preference for sparing younger characters (Fig. 4a, inset).
pdf  study  org:nat  psychology  social-psych  poll  values  data  experiment  empirical  morality  ethics  pop-diff  cultural-dynamics  tradeoffs  death  safety  ai  automation  things  world  gender  biases  status  class  egalitarianism-hierarchy  order-disorder  anarcho-tyranny  crime  age-generation  quantitative-qualitative  number  nature  piracy  exploratory  phalanges  n-factor  europe  the-great-west-whale  nordic  usa  anglo  anglosphere  sinosphere  asia  japan  china  islam  MENA  latin-america  gallic  wonkish  correlation  measure  similarity  dignity  universalism-particularism  law  leviathan  wealth  econ-metrics  institutions  demographics  religion  group-level  within-group  expression-survival  comparison  technocracy  visualization  trees  developing-world  regional-scatter-plots 
october 2019 by nhaliday
Information Processing: Beijing 2019 Notes
Trump, the trade war, and US-China relations came up frequently in discussion. Chinese opinion tends to focus on the long term. Our driver for a day trip to the Great Wall was an older man from the countryside, who has lived only 3 years in Beijing. I was surprised to hear him expressing a very balanced opinion about the situation. He understood Trump's position remarkably well -- China has done very well trading with the US, and owes much of its technological and scientific development to the West. A recalibration is in order, and it is natural for Trump to negotiate in the interest of US workers.

China's economy is less and less export-dependent, and domestic drivers of growth seem easy to identify. For example, there is still a lot of low-hanging fruit in the form of "catch up growth" -- but now this means not just catching up with the outside developed world, but Tier 2 and Tier 3 cities catching up with Tier 1 cities like Beijing, Shanghai, Shenzhen, etc.

China watchers have noted the rapidly increasing government and private sector debt necessary to drive growth here. Perhaps this portends a future crisis. However, I didn't get any sense of impending doom for the Chinese economy. To be fair, there was very little inkling of what would happen to the US economy in 2007-8. Some of the people I met with are highly placed with special knowledge -- they are among the most likely to be aware of problems. Overall I had the impression of normalcy and quiet confidence, but perhaps this would have been different in an export/manufacturing hub like Shenzhen. [ Update: Today after posting this I did hear something about economic concerns... So the situation is unclear. ]

Innovation is everywhere here. Perhaps the most obvious is the high level of convenience from the use of e-payment and delivery services. You can pay for everything using your mobile (increasingly, using just your face!), and you can have food and other items (think Amazon on steroids) delivered quickly to your apartment. Even museum admissions can be handled via QR code.

A highly placed technologist told me that in fields like AI or computer science, Chinese researchers and engineers have access to in-depth local discussions of important arXiv papers -- think StackOverflow in Mandarin. Since most researchers here can read English, they have access both to Western advances, and a Chinese language reservoir of knowledge and analysis. He anticipates that eventually the pace and depth of engineering implementation here will be unequaled.

IVF and genetic testing are huge businesses in China. Perhaps I'll comment more on this in the future. New technologies, in genomics as in other areas, tend to be received more positively here than in the US and Europe.

...

Note Added: In the comments AG points to a Quora post by a user called Janus Dongye Qimeng, an AI researcher in Cambridge UK, who seems to be a real China expert. I found these posts to be very interesting.

Infrastructure development in poor regions of China

Size of Chinese internet social network platforms

Can the US derail China 2025? (Core technology stacks in and outside China)

Huawei smartphone technology stack and impact of US entity list interdiction (software and hardware!)

Agriculture at Massive Scale

US-China AI competition

More recommendations: Bruno Maçães is one of my favorite modern geopolitical thinkers. A Straussian of sorts (PhD under Harvey Mansfield at Harvard), he was Secretary of State for European Affairs in Portugal, and has thought deeply about the future of Eurasia and of US-China relations. He spent the last year in Beijing and I was eager to meet with him while here. His recent essay Equilibrium Americanum appeared in the Berlin Policy Journal. Podcast interview -- we hope to have him on Manifold soon :-)
hsu  scitariat  china  asia  thucydides  tech  technology  ai  automation  machine-learning  trends  the-bones  links  reflection  qra  q-n-a  foreign-policy  world  usa  trade  nationalism-globalism  great-powers  economics  research  journos-pundits  straussian 
july 2019 by nhaliday
Basic Error Rates
This page describes human error rates in a variety of contexts.

Most of the error rates are for mechanical errors. A good general figure for mechanical error rates appears to be about 0.5%.

Of course, the denominator differs across studies. However, only fairly simple actions are used in the denominator.

The Klemmer and Snyder study shows that much lower error rates are possible--in this case for people whose job consisted almost entirely of data entry.

The error rate for more complex logic errors is about 5%, based primarily on data on other pages, especially the program development page.
org:junk  list  links  objektbuch  data  database  error  accuracy  human-ml  machine-learning  ai  pro-rata  metrics  automation  benchmarks  marginal  nlp  language  density  writing  dataviz  meta:reading  speedometer 
may 2019 by nhaliday
Information Processing: Moore's Law and AI
Hint to technocratic planners: invest more in physicists, chemists, and materials scientists. The recent explosion in value from technology has been driven by physical science -- software gets way too much credit. From the former we got a factor of a million or more in compute power, data storage, and bandwidth. From the latter, we gained (perhaps) an order of magnitude or two in effectiveness: how much better are current OSes and programming languages than Unix and C, both of which are ~50 years old now?

...

Of relevance to this discussion: a big chunk of AlphaGo's performance improvement over other Go programs is due to raw compute power (link via Jess Riedel). The vertical axis is Elo score. You can see that without multi-GPU compute, AlphaGo has relatively pedestrian strength.
hsu  scitariat  comparison  software  hardware  performance  sv  tech  trends  ai  machine-learning  deep-learning  deepgoog  google  roots  impact  hard-tech  multiplicative  the-world-is-just-atoms  technology  trivia  cocktail  big-picture  hi-order-bits 
may 2019 by nhaliday
Stack Overflow Developer Survey 2018
Rust, Python, Go in top most loved
F#/OCaml most high paying globally, Erlang/Scala/OCaml in the US (F# still in top 10)
ML specialists high-paid
editor usage: VSCode > VS > Sublime > Vim > Intellij >> Emacs
ranking  list  top-n  time-series  data  database  programming  engineering  pls  trends  stackex  poll  career  exploratory  network-structure  ubiquity  ocaml-sml  rust  golang  python  dotnet  money  jobs  compensation  erlang  scala  jvm  ai  ai-control  risk  futurism  ethical-algorithms  data-science  machine-learning  editors  devtools  tools  pro-rata  org:com  software  analysis  article  human-capital  let-me-see  expert-experience  complement-substitute 
december 2018 by nhaliday
Lateralization of brain function - Wikipedia
Language
Language functions such as grammar, vocabulary and literal meaning are typically lateralized to the left hemisphere, especially in right-handed individuals.[3] While language production is left-lateralized in up to 90% of right-handers, it is more bilateral, or even right-lateralized, in approximately 50% of left-handers.[4]

Broca's area and Wernicke's area, two areas associated with the production of speech, are located in the left cerebral hemisphere for about 95% of right-handers, but about 70% of left-handers.[5]:69

Auditory and visual processing
The processing of visual and auditory stimuli, spatial manipulation, facial perception, and artistic ability are represented bilaterally.[4] Numerical estimation, comparison and online calculation depend on bilateral parietal regions[6][7] while exact calculation and fact retrieval are associated with left parietal regions, perhaps due to their ties to linguistic processing.[6][7]

...

Depression is linked with a hyperactive right hemisphere, with evidence of selective involvement in "processing negative emotions, pessimistic thoughts and unconstructive thinking styles", as well as vigilance, arousal and self-reflection, and a relatively hypoactive left hemisphere, "specifically involved in processing pleasurable experiences" and "relatively more involved in decision-making processes".

Chaos and Order; the right and left hemispheres: https://orthosphere.wordpress.com/2018/05/23/chaos-and-order-the-right-and-left-hemispheres/
In The Master and His Emissary, Iain McGilchrist writes that a creature like a bird needs two types of consciousness simultaneously. It needs to be able to focus on something specific, such as pecking at food, while it also needs to keep an eye out for predators which requires a more general awareness of environment.

These are quite different activities. The Left Hemisphere (LH) is adapted for a narrow focus. The Right Hemisphere (RH) for the broad. The brains of human beings have the same division of function.

The LH governs the right side of the body, the RH, the left side. With birds, the left eye (RH) looks for predators, the right eye (LH) focuses on food and specifics. Since danger can take many forms and is unpredictable, the RH has to be very open-minded.

The LH is for narrow focus, the explicit, the familiar, the literal, tools, mechanism/machines and the man-made. The broad focus of the RH is necessarily more vague and intuitive and handles the anomalous, novel, metaphorical, the living and organic. The LH is high resolution but narrow, the RH low resolution but broad.

The LH exhibits unrealistic optimism and self-belief. The RH has a tendency towards depression and is much more realistic about a person’s own abilities. LH has trouble following narratives because it has a poor sense of “wholes.” In art it favors flatness, abstract and conceptual art, black and white rather than color, simple geometric shapes and multiple perspectives all shoved together, e.g., cubism. Particularly RH paintings emphasize vistas with great depth of field and thus space and time,[1] emotion, figurative painting and scenes related to the life world. In music, LH likes simple, repetitive rhythms. The RH favors melody, harmony and complex rhythms.

...

Schizophrenia is a disease of extreme LH emphasis. Since empathy is RH and the ability to notice emotional nuance facially, vocally and bodily expressed, schizophrenics tend to be paranoid and are often convinced that the real people they know have been replaced by robotic imposters. This is at least partly because they lose the ability to intuit what other people are thinking and feeling – hence they seem robotic and suspicious.

Oswald Spengler’s The Decline of the West as well as McGilchrist characterize the West as awash in phenomena associated with an extreme LH emphasis. Spengler argues that Western civilization was originally much more RH (to use McGilchrist’s categories) and that all its most significant artistic (in the broadest sense) achievements were triumphs of RH accentuation.

The RH is where novel experiences and the anomalous are processed and where mathematical, and other, problems are solved. The RH is involved with the natural, the unfamiliar, the unique, emotions, the embodied, music, humor, understanding intonation and emotional nuance of speech, the metaphorical, nuance, and social relations. It has very little speech, but the RH is necessary for processing all the nonlinguistic aspects of speaking, including body language. Understanding what someone means by vocal inflection and facial expressions is an intuitive RH process rather than explicit.

...

RH is very much the center of lived experience; of the life world with all its depth and richness. The RH is “the master” from the title of McGilchrist’s book. The LH ought to be no more than the emissary; the valued servant of the RH. However, in the last few centuries, the LH, which has tyrannical tendencies, has tried to become the master. The LH is where the ego is predominantly located. In split brain patients where the LH and the RH are surgically divided (this is done sometimes in the case of epileptic patients) one hand will sometimes fight with the other. In one man’s case, one hand would reach out to hug his wife while the other pushed her away. One hand reached for one shirt, the other another shirt. Or a patient will be driving a car and one hand will try to turn the steering wheel in the opposite direction. In these cases, the “naughty” hand is usually the left hand (RH), while the patient tends to identify herself with the right hand governed by the LH. The two hemispheres have quite different personalities.

The connection between LH and ego can also be seen in the fact that the LH is competitive, contentious, and agonistic. It wants to win. It is the part of you that hates to lose arguments.

Using the metaphor of Chaos and Order, the RH deals with Chaos – the unknown, the unfamiliar, the implicit, the emotional, the dark, danger, mystery. The LH is connected with Order – the known, the familiar, the rule-driven, the explicit, and light of day. Learning something means to take something unfamiliar and making it familiar. Since the RH deals with the novel, it is the problem-solving part. Once understood, the results are dealt with by the LH. When learning a new piece on the piano, the RH is involved. Once mastered, the result becomes a LH affair. The muscle memory developed by repetition is processed by the LH. If errors are made, the activity returns to the RH to figure out what went wrong; the activity is repeated until the correct muscle memory is developed in which case it becomes part of the familiar LH.

Science is an attempt to find Order. It would not be necessary if people lived in an entirely orderly, explicit, known world. The lived context of science implies Chaos. Theories are reductive and simplifying and help to pick out salient features of a phenomenon. They are always partial truths, though some are more partial than others. The alternative to a certain level of reductionism or partialness would be to simply reproduce the world which of course would be both impossible and unproductive. The test for whether a theory is sufficiently non-partial is whether it is fit for purpose and whether it contributes to human flourishing.

...

Analytic philosophers pride themselves on trying to do away with vagueness. To do so, they tend to jettison context which cannot be brought into fine focus. However, in order to understand things and discern their meaning, it is necessary to have the big picture, the overview, as well as the details. There is no point in having details if the subject does not know what they are details of. Such philosophers also tend to leave themselves out of the picture even when what they are thinking about has reflexive implications. John Locke, for instance, tried to banish the RH from reality. All phenomena having to do with subjective experience he deemed unreal and once remarked about metaphors, a RH phenomenon, that they are “perfect cheats.” Analytic philosophers tend to check the logic of the words on the page and not to think about what those words might say about them. The trick is for them to recognize that they and their theories, which exist in minds, are part of reality too.

The RH test for whether someone actually believes something can be found by examining his actions. If he finds that he must regard his own actions as free, and, in order to get along with other people, must also attribute free will to them and treat them as free agents, then he effectively believes in free will – no matter his LH theoretical commitments.

...

We do not know the origin of life. We do not know how or even if consciousness can emerge from matter. We do not know the nature of 96% of the matter of the universe. Clearly all these things exist. They can provide the subject matter of theories but they continue to exist as theorizing ceases or theories change. Not knowing how something is possible is irrelevant to its actual existence. An inability to explain something is ultimately neither here nor there.

If thought begins and ends with the LH, then thinking has no content – content being provided by experience (RH), and skepticism and nihilism ensue. The LH spins its wheels self-referentially, never referring back to experience. Theory assumes such primacy that it will simply outlaw experiences and data inconsistent with it; a profoundly wrong-headed approach.

...

Gödel’s Theorem proves that not everything true can be proven to be true. This means there is an ineradicable role for faith, hope and intuition in every moderately complex human intellectual endeavor. There is no one set of consistent axioms from which all other truths can be derived.

Alan Turing’s proof of the halting problem proves that there is no effective procedure for finding effective procedures. Without a mechanical decision procedure, (LH), when it comes to … [more]
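
[ed.: a minimal Python sketch of the standard diagonal argument behind the halting-problem result mentioned above; `halts` is the hypothetical oracle, which is exactly what the proof shows cannot exist.]

```python
def halts(program, argument):
    """Hypothetical oracle: True iff program(argument) eventually halts.
    Turing's argument shows no such total, correct function can exist."""
    raise NotImplementedError

def diagonal(program):
    # Do the opposite of whatever the oracle predicts about
    # running `program` on its own source.
    if halts(program, program):
        while True:   # predicted to halt -> loop forever
            pass
    # predicted to loop forever -> halt immediately

# Feeding `diagonal` to itself yields the contradiction:
# halts(diagonal, diagonal) can be neither True nor False,
# so `halts` cannot exist.
```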
gnon  reflection  books  summary  review  neuro  neuro-nitgrit  things  thinking  metabuch  order-disorder  apollonian-dionysian  bio  examples  near-far  symmetry  homo-hetero  logic  inference  intuition  problem-solving  analytical-holistic  n-factor  europe  the-great-west-whale  occident  alien-character  detail-architecture  art  theory-practice  philosophy  being-becoming  essence-existence  language  psychology  cog-psych  egalitarianism-hierarchy  direction  reason  learning  novelty  science  anglo  anglosphere  coarse-fine  neurons  truth  contradiction  matching  empirical  volo-avolo  curiosity  uncertainty  theos  axioms  intricacy  computation  analogy  essay  rhetoric  deep-materialism  new-religion  knowledge  expert-experience  confidence  biases  optimism  pessimism  realness  whole-partial-many  theory-of-mind  values  competition  reduction  subjective-objective  communication  telos-atelos  ends-means  turing  fiction  increase-decrease  innovation  creative  thick-thin  spengler  multi  ratty  hanson  complex-systems  structure  concrete  abstraction  network-s 
september 2018 by nhaliday
Moravec's paradox - Wikipedia
Moravec's paradox is the discovery by artificial intelligence and robotics researchers that, contrary to traditional assumptions, high-level reasoning requires very little computation, but low-level sensorimotor skills require enormous computational resources. The principle was articulated by Hans Moravec, Rodney Brooks, Marvin Minsky and others in the 1980s. As Moravec writes, "it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility".[1]

Similarly, Minsky emphasized that the most difficult human skills to reverse engineer are those that are unconscious. "In general, we're least aware of what our minds do best", he wrote, and added "we're more aware of simple processes that don't work well than of complex ones that work flawlessly".[2]

...

One possible explanation of the paradox, offered by Moravec, is based on evolution. All human skills are implemented biologically, using machinery designed by the process of natural selection. In the course of their evolution, natural selection has tended to preserve design improvements and optimizations. The older a skill is, the more time natural selection has had to improve the design. Abstract thought developed only very recently, and consequently, we should not expect its implementation to be particularly efficient.

As Moravec writes:

Encoded in the large, highly evolved sensory and motor portions of the human brain is a billion years of experience about the nature of the world and how to survive in it. The deliberate process we call reasoning is, I believe, the thinnest veneer of human thought, effective only because it is supported by this much older and much more powerful, though usually unconscious, sensorimotor knowledge. We are all prodigious olympians in perceptual and motor areas, so good that we make the difficult look easy. Abstract thought, though, is a new trick, perhaps less than 100 thousand years old. We have not yet mastered it. It is not all that intrinsically difficult; it just seems so when we do it.[3]

A compact way to express this argument would be:

- We should expect the difficulty of reverse-engineering any human skill to be roughly proportional to the amount of time that skill has been evolving in animals.
- The oldest human skills are largely unconscious and so appear to us to be effortless.
- Therefore, we should expect skills that appear effortless to be difficult to reverse-engineer, but skills that require effort may not necessarily be difficult to engineer at all.
concept  wiki  reference  paradox  ai  intelligence  reason  instinct  neuro  psychology  cog-psych  hardness  logic  deep-learning  time  evopsych  evolution  sapiens  the-self  EEA  embodied  embodied-cognition  abstraction  universalism-particularism  gnosis-logos  robotics 
june 2018 by nhaliday
[1804.04268] Incomplete Contracting and AI Alignment
We suggest that the analysis of incomplete contracting developed by law and economics researchers can provide a useful framework for understanding the AI alignment problem and help to generate a systematic approach to finding solutions. We first provide an overview of the incomplete contracting literature and explore parallels between this work and the problem of AI alignment. As we emphasize, misalignment between principal and agent is a core focus of economic analysis. We highlight some technical results from the economics literature on incomplete contracts that may provide insights for AI alignment researchers. Our core contribution, however, is to bring to bear an insight that economists have been urged to absorb from legal scholars and other behavioral scientists: the fact that human contracting is supported by substantial amounts of external structure, such as generally available institutions (culture, law) that can supply implied terms to fill the gaps in incomplete contracts. We propose a research agenda for AI alignment work that focuses on the problem of how to build AI that can replicate the human cognitive processes that connect individual incomplete contracts with this supporting external structure.
nibble  preprint  org:mat  papers  ai  ai-control  alignment  coordination  contracts  law  economics  interests  culture  institutions  number  context  behavioral-econ  composition-decomposition  rent-seeking  whole-partial-many 
april 2018 by nhaliday
Surveil things, not people – The sideways view
Technology may reach a point where free use of one person’s share of humanity’s resources is enough to easily destroy the world. I think society needs to make significant changes to cope with that scenario.

Mass surveillance is a natural response, and sometimes people think of it as the only response. I find mass surveillance pretty unappealing, but I think we can capture almost all of the value by surveilling things rather than surveilling people. This approach avoids some of the worst problems of mass surveillance; while it still has unattractive features it’s my favorite option so far.

...

The idea
We’ll choose a set of artifacts to surveil and restrict. I’ll call these heavy technology and everything else light technology. Our goal is to restrict as few things as possible, but we want to make sure that someone can’t cause unacceptable destruction with only light technology. By default something is light technology if it can be easily acquired by an individual or small group in 2017, and heavy technology otherwise (though we may need to make some exceptions, e.g. certain biological materials or equipment).

Heavy technology is subject to two rules:

1. You can’t use heavy technology in a way that is unacceptably destructive.
2. You can’t use heavy technology to undermine the machinery that enforces these two rules.

To enforce these rules, all heavy technology is under surveillance, and is situated such that it cannot be unilaterally used by any individual or small group. That is, individuals can own heavy technology, but they cannot have unmonitored physical access to that technology.

...

This proposal does give states a de facto monopoly on heavy technology, and would eventually make armed resistance totally impossible. But it’s already the case that states have a massive advantage in armed conflict, and it seems almost inevitable that progress in AI will make this advantage larger (and enable states to do much more with it). Realistically I’m not convinced this proposal makes things much worse than the default.

This proposal definitely expands regulators’ nominal authority and seems prone to abuses. But amongst candidates for handling a future with cheap and destructive dual-use technology, I feel this is the best of many bad options with respect to the potential for abuse.
ratty  acmtariat  clever-rats  risk  existence  futurism  technology  policy  alt-inst  proposal  government  intel  authoritarianism  orwellian  tricks  leviathan  security  civilization  ai  ai-control  arms  defense  cybernetics  institutions  law  unintended-consequences  civil-liberty  volo-avolo  power  constraint-satisfaction  alignment 
april 2018 by nhaliday
Complexity no Bar to AI - Gwern.net
Critics of AI risk suggest that diminishing returns to computing (formalized asymptotically) mean AI will be weak; this argument relies on a large number of questionable premises and ignores additional resources, constant factors, and nonlinear returns to small intelligence advantages, and is highly unlikely to be correct. (computer science, transhumanism, AI, R)
created: 1 June 2014; modified: 01 Feb 2018; status: finished; confidence: likely; importance: 10
ratty  gwern  analysis  faq  ai  risk  speedometer  intelligence  futurism  cs  computation  complexity  tcs  linear-algebra  nonlinearity  convexity-curvature  average-case  adversarial  article  time-complexity  singularity  iteration-recursion  magnitude  multiplicative  lower-bounds  no-go  performance  hardware  humanity  psychology  cog-psych  psychometrics  iq  distribution  moments  complement-substitute  hanson  ems  enhancement  parable  detail-architecture  universalism-particularism  neuro  ai-control  environment  climate-change  threat-modeling  security  theory-practice  hacker  academia  realness  crypto  rigorous-crypto  usa  government 
april 2018 by nhaliday
The first ethical revolution – Gene Expression
Fifty years ago Julian Jaynes published The Origin of Consciousness in the Breakdown of the Bicameral Mind. Seventy years ago Karl Jaspers introduced the concept of the Axial Age. Both point to the same dynamic historically.

Something happened in the centuries around 500 BCE all around the world. Great religions and philosophies arose. The Indian religious traditions, the Chinese philosophical-political ones, and the roots of what we can recognize as Judaism. In Greece, the precursors of many modern philosophical streams emerged formally, along with a variety of political systems.

The next few centuries saw some more innovation. Rabbinical Judaism transformed a ritualistic tribal religion into an ethical one, and Christianity universalized Jewish religious thought, as well as infusing it with Greek systematic concepts. Meanwhile, Indian and Chinese thought continued to evolve, often due to interactions with each other (it is hard to imagine certain later developments in Confucianism without the Buddhist stimulus). Finally, in the 7th century, Islam emerges as the last great world religion.

...

Living in large complex societies with social stratification posed challenges. A religion such as Christianity was not a coincidence, something of its broad outlines may have been inevitable. Universal, portable, ethical, and infused with transcendence and coherency. Similarly, god-kings seem to have universally transformed themselves into the human who binds heaven to earth in some fashion.

The second wave of social-ethical transformation occurred in the early modern period, starting in Europe. My own opinion is that economic growth triggered by innovation and gains in productivity unleashed constraints which had dampened further transformations in the domain of ethics. But the new developments ultimately were simply extensions and modifications on the earlier “source code” (e.g., whereas for nearly two thousand years Christianity had had to make peace with the existence of slavery, in the 19th century anti-slavery activists began marshaling Christian language against the institution).
gnxp  scitariat  discussion  reflection  religion  christianity  theos  judaism  china  asia  sinosphere  orient  india  the-great-west-whale  occident  history  antiquity  iron-age  mediterranean  the-classics  canon  philosophy  morality  ethics  universalism-particularism  systematic-ad-hoc  analytical-holistic  confucian  big-peeps  innovation  stagnation  technology  economics  biotech  enhancement  genetics  bio  flux-stasis  automation  ai  low-hanging  speedometer  time  distribution  smoothness  shift  dennett  simler  volo-avolo  👽  mystic  marginal  farmers-and-foragers  wealth  egalitarianism-hierarchy  values  formal-values  ideology  good-evil 
april 2018 by nhaliday
Society of Mind - Wikipedia
A core tenet of Minsky's philosophy is that "minds are what brains do". The society of mind theory views the human mind and any other naturally evolved cognitive systems as a vast society of individually simple processes known as agents. These processes are the fundamental thinking entities from which minds are built, and together produce the many abilities we attribute to minds. The great power in viewing a mind as a society of agents, as opposed to the consequence of some basic principle or some simple formal system, is that different agents can be based on different types of processes with different purposes, ways of representing knowledge, and methods for producing results.

This idea is perhaps best summarized by the following quote:

What magical trick makes us intelligent? The trick is that there is no trick. The power of intelligence stems from our vast diversity, not from any single, perfect principle. —Marvin Minsky, The Society of Mind, p. 308

https://en.wikipedia.org/wiki/Modularity_of_mind

The modular organization of human anatomical brain networks: Accounting for the cost of wiring: https://www.mitpressjournals.org/doi/pdfplus/10.1162/NETN_a_00002
Brain networks are expected to be modular. However, existing techniques for estimating a network’s modules make it difficult to assess the influence of organizational principles such as wiring cost reduction on the detected modules. Here we present a modification of an existing module detection algorithm that allowed us to focus on connections that are unexpected under a cost-reduction wiring rule and to identify modules from among these connections. We applied this technique to anatomical brain networks and showed that the modules we detected differ from those detected using the standard technique. We demonstrated that these novel modules are spatially distributed, exhibit unique functional fingerprints, and overlap considerably with rich clubs, giving rise to an alternative and complementary interpretation of the functional roles of specific brain regions. Finally, we demonstrated that, using the modified module detection approach, we can detect modules in a developmental dataset that track normative patterns of maturation. Collectively, these findings support the hypothesis that brain networks are composed of modules and provide additional insight into the function of those modules.
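
[ed.: a toy sketch of the general idea, not the authors' algorithm: build a distance-dependent toy connectome, keep only the connections that are unexpectedly long under a wiring-cost rule, then detect modules among those. All parameters here are made up for illustration.]

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

rng = np.random.default_rng(0)
n = 60
pos = rng.uniform(size=(n, 3))  # toy "brain region" coordinates
dist = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)

# Toy connectome: connection probability decays with wiring distance.
adj = np.triu(rng.uniform(size=(n, n)) < np.exp(-3 * dist), k=1)

# Cost-reduction null: keep only edges longer than typical, i.e.,
# connections that are "unexpected" if wiring cost were the whole story.
G = nx.Graph()
G.add_nodes_from(range(n))
for i, j in zip(*np.nonzero(adj)):
    if dist[i, j] > np.median(dist):
        G.add_edge(int(i), int(j))

modules = greedy_modularity_communities(G)
print([sorted(m) for m in modules])
```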
books  ideas  speculation  structure  composition-decomposition  complex-systems  neuro  ai  psychology  cog-psych  intelligence  reduction  wiki  giants  philosophy  number  cohesion  diversity  systematic-ad-hoc  detail-architecture  pdf  study  neuro-nitgrit  brain-scan  nitty-gritty  network-structure  graphs  graph-theory  models  whole-partial-many  evopsych  eden  reference  psych-architecture  article  coupling-cohesion  multi 
april 2018 by nhaliday
The Hanson-Yudkowsky AI-Foom Debate - Machine Intelligence Research Institute
How Deviant Recent AI Progress Lumpiness?: http://www.overcomingbias.com/2018/03/how-deviant-recent-ai-progress-lumpiness.html
I seem to disagree with most people working on artificial intelligence (AI) risk. While with them I expect rapid change once AI is powerful enough to replace most all human workers, I expect this change to be spread across the world, not concentrated in one main localized AI system. The efforts of AI risk folks to design AI systems whose values won’t drift might stop global AI value drift if there is just one main AI system. But doing so in a world of many AI systems at similar ability levels requires strong global governance of AI systems, which is a tall order anytime soon. Their continued focus on preventing single system drift suggests that they expect a single main AI system.

The main reason that I understand to expect relatively local AI progress is if AI progress is unusually lumpy, i.e., arriving in unusually fewer larger packages rather than in the usual many smaller packages. If one AI team finds a big lump, it might jump way ahead of the other teams.

However, we have a vast literature on the lumpiness of research and innovation more generally, which clearly says that usually most of the value in innovation is found in many small innovations. We have also so far seen this in computer science (CS) and AI, even if there have been historical examples where much value was found in particular big innovations, such as nuclear weapons or the origin of humans.

Apparently many people associated with AI risk, including the star machine learning (ML) researchers that they often idolize, find it intuitively plausible that AI and ML progress is exceptionally lumpy. Such researchers often say, “My project is ‘huge’, and will soon do it all!” A decade ago my ex-co-blogger Eliezer Yudkowsky and I argued here on this blog about our differing estimates of AI progress lumpiness. He recently offered Alpha Go Zero as evidence of AI lumpiness:

...

In this post, let me give another example (beyond two big lumps in a row) of what could change my mind. I offer a clear observable indicator, for which data should be available now: deviant citation lumpiness in recent ML research. One standard measure of research impact is citations; bigger, lumpier developments gain more citations than smaller ones. And it turns out that the lumpiness of citations is remarkably constant across research fields! See this March 3 paper in Science:

I Still Don’t Get Foom: http://www.overcomingbias.com/2014/07/30855.html
All of which makes it look like I’m the one with the problem; everyone else gets it. Even so, I’m gonna try to explain my problem again, in the hope that someone can explain where I’m going wrong. Here goes.

“Intelligence” just means an ability to do mental/calculation tasks, averaged over many tasks. I’ve always found it plausible that machines will continue to do more kinds of mental tasks better, and eventually be better at pretty much all of them. But what I’ve found it hard to accept is a “local explosion.” This is where a single machine, built by a single project using only a tiny fraction of world resources, goes in a short time (e.g., weeks) from being so weak that it is usually beat by a single human with the usual tools, to so powerful that it easily takes over the entire world. Yes, smarter machines may greatly increase overall economic growth rates, and yes such growth may be uneven. But this degree of unevenness seems implausibly extreme. Let me explain.

If we count by economic value, humans now do most of the mental tasks worth doing. Evolution has given us a brain chock-full of useful well-honed modules. And the fact that most mental tasks require the use of many modules is enough to explain why some of us are smarter than others. (There’d be a common “g” factor in task performance even with independent module variation.) Our modules aren’t that different from those of other primates, but because ours are different enough to allow lots of cultural transmission of innovation, we’ve out-competed other primates handily.
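
[ed.: the parenthetical claim is easy to check numerically -- a sketch, assuming tasks that each average over a random subset of independently varying modules:]

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_modules, n_tasks, k = 1000, 50, 20, 10

# Module skills vary independently: no general factor is built in.
skills = rng.normal(size=(n_people, n_modules))

# Each task draws on a random subset of k modules; performance is their mean.
perf = np.stack(
    [skills[:, rng.choice(n_modules, k, replace=False)].mean(axis=1)
     for _ in range(n_tasks)],
    axis=1)

corr = np.corrcoef(perf, rowvar=False)
off_diag = (corr.sum() - n_tasks) / (n_tasks * (n_tasks - 1))
top_share = np.linalg.eigvalsh(corr)[-1] / n_tasks
print(f"mean inter-task correlation: {off_diag:.2f}")   # positive manifold
print(f"variance share of first PC:  {top_share:.2f}")  # a g-like factor
```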

We’ve had computers for over seventy years, and have slowly built up libraries of software modules for them. Like brains, computers do mental tasks by combining modules. An important mental task is software innovation: improving these modules, adding new ones, and finding new ways to combine them. Ideas for new modules are sometimes inspired by the modules we see in our brains. When an innovation team finds an improvement, they usually sell access to it, which gives them resources for new projects, and lets others take advantage of their innovation.

...

In Bostrom’s graph above the line for an initially small project and system has a much higher slope, which means that it becomes in a short time vastly better at software innovation. Better than the entire rest of the world put together. And my key question is: how could it plausibly do that? Since the rest of the world is already trying the best it can to usefully innovate, and to abstract to promote such innovation, what exactly gives one small project such a huge advantage to let it innovate so much faster?

...

In fact, most software innovation seems to be driven by hardware advances, instead of innovator creativity. Apparently, good ideas are available but must usually wait until hardware is cheap enough to support them.

Yes, sometimes architectural choices have wider impacts. But I was an artificial intelligence researcher for nine years, ending twenty years ago, and I never saw an architecture choice make a huge difference, relative to other reasonable architecture choices. For most big systems, overall architecture matters a lot less than getting lots of detail right. Researchers have long wandered the space of architectures, mostly rediscovering variations on what others found before.

Some hope that a small project could be much better at innovation because it specializes in that topic, and much better understands new theoretical insights into the basic nature of innovation or intelligence. But I don’t think those are actually topics where one can usefully specialize much, or where we’ll find much useful new theory. To be much better at learning, the project would instead have to be much better at hundreds of specific kinds of learning. Which is very hard to do in a small project.

What does Bostrom say? Alas, not much. He distinguishes several advantages of digital over human minds, but all software shares those advantages. Bostrom also distinguishes five paths: better software, brain emulation (i.e., ems), biological enhancement of humans, brain-computer interfaces, and better human organizations. He doesn’t think interfaces would work, and sees organizations and better biology as only playing supporting roles.

...

Similarly, while you might imagine someday standing in awe in front of a super intelligence that embodies all the power of a new age, superintelligence just isn’t the sort of thing that one project could invent. As “intelligence” is just the name we give to being better at many mental tasks by using many good mental modules, there’s no one place to improve it. So I can’t see a plausible way one project could increase its intelligence vastly faster than could the rest of the world.

Takeoff speeds: https://sideways-view.com/2018/02/24/takeoff-speeds/
Futurists have argued for years about whether the development of AGI will look more like a breakthrough within a small group (“fast takeoff”), or a continuous acceleration distributed across the broader economy or a large firm (“slow takeoff”).

I currently think a slow takeoff is significantly more likely. This post explains some of my reasoning and why I think it matters. Mostly the post lists arguments I often hear for a fast takeoff and explains why I don’t find them compelling.

(Note: this is not a post about whether an intelligence explosion will occur. That seems very likely to me. Quantitatively I expect it to go along these lines. So e.g. while I disagree with many of the claims and assumptions in Intelligence Explosion Microeconomics, I don’t disagree with the central thesis or with most of the arguments.)
ratty  lesswrong  subculture  miri-cfar  ai  risk  ai-control  futurism  books  debate  hanson  big-yud  prediction  contrarianism  singularity  local-global  speed  speedometer  time  frontier  distribution  smoothness  shift  pdf  economics  track-record  abstraction  analogy  links  wiki  list  evolution  mutation  selection  optimization  search  iteration-recursion  intelligence  metameta  chart  analysis  number  ems  coordination  cooperate-defect  death  values  formal-values  flux-stasis  philosophy  farmers-and-foragers  malthus  scale  studying  innovation  insight  conceptual-vocab  growth-econ  egalitarianism-hierarchy  inequality  authoritarianism  wealth  near-far  rationality  epistemic  biases  cycles  competition  arms  zero-positive-sum  deterrence  war  peace-violence  winner-take-all  technology  moloch  multi  plots  research  science  publishing  humanity  labor  marginal  urban-rural  structure  composition-decomposition  complex-systems  gregory-clark  decentralized  heavy-industry  magnitude  multiplicative  endogenous-exogenous  models  uncertainty  decision-theory  time-prefer 
april 2018 by nhaliday
Eternity in six hours: intergalactic spreading of intelligent life and sharpening the Fermi paradox
We do this by demonstrating that traveling between galaxies – indeed even launching a colonisation project for the entire reachable universe – is a relatively simple task for a star-spanning civilization, requiring modest amounts of energy and resources. We start by demonstrating that humanity itself could likely accomplish such a colonisation project in the foreseeable future, should we want to, and then demonstrate that there are millions of galaxies that could have reached us by now, using similar methods. This results in a considerable sharpening of the Fermi paradox.
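
[ed.: a back-of-envelope check of the scale involved: kinetic energy of a 1-kg probe at half lightspeed, against a rough ~6e20 J figure for annual world energy use. Both the probe mass/speed and the world-energy figure are illustrative assumptions, not the paper's numbers.]

```python
c = 3.0e8                                   # speed of light, m/s
m, v = 1.0, 0.5 * c                         # assumed: 1-kg probe at 0.5c
gamma = 1.0 / (1.0 - (v / c) ** 2) ** 0.5
ke = (gamma - 1.0) * m * c ** 2             # relativistic kinetic energy
world = 6e20                                # rough annual world energy use, J
print(f"KE ~ {ke:.1e} J = {ke / world:.4%} of annual world energy use")
```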
pdf  study  article  essay  anthropic  fermi  space  expansionism  bostrom  ratty  philosophy  xenobio  ideas  threat-modeling  intricacy  time  civilization  🔬  futurism  questions  paradox  risk  physics  engineering  interdisciplinary  frontier  technology  volo-avolo  dirty-hands  ai  automation  robotics  duplication  iteration-recursion  von-neumann  data  scale  magnitude  skunkworks  the-world-is-just-atoms  hard-tech  ems  bio  bits  speedometer  nature  model-organism  mechanics  phys-energy  relativity  electromag  analysis  spock  nitty-gritty  spreading  hanson  street-fighting  speed  gedanken  nibble 
march 2018 by nhaliday
The Coming Technological Singularity
Within thirty years, we will have the technological
means to create superhuman intelligence. Shortly after,
the human era will be ended.

Is such progress avoidable? If not to be avoided, can
events be guided so that we may survive? These questions
are investigated. Some possible answers (and some further
dangers) are presented.

_What is The Singularity?_

The acceleration of technological progress has been the central
feature of this century. I argue in this paper that we are on the edge
of change comparable to the rise of human life on Earth. The precise
cause of this change is the imminent creation by technology of
entities with greater than human intelligence. There are several means
by which science may achieve this breakthrough (and this is another
reason for having confidence that the event will occur):
o The development of computers that are "awake" and
superhumanly intelligent. (To date, most controversy in the
area of AI relates to whether we can create human equivalence
in a machine. But if the answer is "yes, we can", then there
is little doubt that beings more intelligent can be constructed
shortly thereafter.)
o Large computer networks (and their associated users) may "wake
up" as a superhumanly intelligent entity.
o Computer/human interfaces may become so intimate that users
may reasonably be considered superhumanly intelligent.
o Biological science may find ways to improve upon the natural
human intellect.

The first three possibilities depend in large part on
improvements in computer hardware. Progress in computer hardware has
followed an amazingly steady curve in the last few decades [16]. Based
largely on this trend, I believe that the creation of greater than
human intelligence will occur during the next thirty years. (Charles
Platt [19] has pointed out that AI enthusiasts have been making
claims like this for the last thirty years. Just so I'm not guilty
of a relative-time ambiguity, let me be more specific: I'll be
surprised if this event occurs before 2005 or after 2030.)

What are the consequences of this event? When greater-than-human
intelligence drives progress, that progress will be much more rapid.
In fact, there seems no reason why progress itself would not involve
the creation of still more intelligent entities -- on a still-shorter
time scale. The best analogy that I see is with the evolutionary past:
Animals can adapt to problems and make inventions, but often no faster
than natural selection can do its work -- the world acts as its own
simulator in the case of natural selection. We humans have the ability
to internalize the world and conduct "what if's" in our heads; we can
solve many problems thousands of times faster than natural selection.
Now, by creating the means to execute those simulations at much higher
speeds, we are entering a regime as radically different from our human
past as we humans are from the lower animals.
org:junk  humanity  accelerationism  futurism  prediction  classic  technology  frontier  speedometer  ai  risk  internet  time  essay  rhetoric  network-structure  ai-control  morality  ethics  volo-avolo  egalitarianism-hierarchy  intelligence  scale  giants  scifi-fantasy  speculation  quotes  religion  theos  singularity  flux-stasis  phase-transition  cybernetics  coordination  cooperate-defect  moloch  communication  bits  speed  efficiency  eden-heaven  ecology  benevolence  end-times  good-evil  identity  the-self  whole-partial-many  density 
march 2018 by nhaliday
AI-complete - Wikipedia
In the field of artificial intelligence, the most difficult problems are informally known as AI-complete or AI-hard, implying that the difficulty of these computational problems is equivalent to that of solving the central artificial intelligence problem—making computers as intelligent as people, or strong AI.[1] To call a problem AI-complete reflects an attitude that it would not be solved by a simple specific algorithm.

AI-complete problems are hypothesised to include computer vision, natural language understanding, and dealing with unexpected circumstances while solving any real world problem.[2]

Currently, AI-complete problems cannot be solved with modern computer technology alone, but would also require human computation. This property can be useful, for instance, to test for the presence of humans, as with CAPTCHAs, and for computer security to circumvent brute-force attacks.[3][4]

...

AI-complete problems are hypothesised to include:

Bongard problems
Computer vision (and subproblems such as object recognition)
Natural language understanding (and subproblems such as text mining, machine translation, and word sense disambiguation[8])
Dealing with unexpected circumstances while solving any real world problem, whether it's navigation or planning or even the kind of reasoning done by expert systems.

...

Current AI systems can solve very simple and/or restricted versions of AI-complete problems, but never in their full generality. When AI researchers attempt to "scale up" their systems to handle more complicated, real world situations, the programs tend to become excessively brittle without commonsense knowledge or a rudimentary understanding of the situation: they fail as unexpected circumstances outside of their original problem context begin to appear. When human beings are dealing with new situations in the world, they are helped immensely by the fact that they know what to expect: they know what all things around them are, why they are there, what they are likely to do and so on. They can recognize unusual situations and adjust accordingly. A machine without strong AI has no other skills to fall back on.[9]
concept  reduction  cs  computation  complexity  wiki  reference  properties  computer-vision  ai  risk  ai-control  machine-learning  deep-learning  language  nlp  order-disorder  tactics  strategy  intelligence  humanity  speculation  crux 
march 2018 by nhaliday
[1410.0369] The Universe of Minds
kinda dumb, don't think this guy is anywhere close to legit (e.g., he claims the set of mind designs is countable, but gives no actual reason to believe that)
papers  preprint  org:mat  ratty  miri-cfar  ai  intelligence  philosophy  logic  software  cs  computation  the-self 
march 2018 by nhaliday
Existential Risks: Analyzing Human Extinction Scenarios
https://twitter.com/robinhanson/status/981291048965087232
https://archive.is/dUTD5
Would you endorse choosing policy to max the expected duration of civilization, at least as a good first approximation?
Can anyone suggest a different first approximation that would get more votes?

https://twitter.com/robinhanson/status/981335898502545408
https://archive.is/RpygO
How useful would it be to agree on a relatively-simple first-approximation observable-after-the-fact metric for what we want from the future universe, such as total life years experienced, or civilization duration?

We're Underestimating the Risk of Human Extinction: https://www.theatlantic.com/technology/archive/2012/03/were-underestimating-the-risk-of-human-extinction/253821/
An Oxford philosopher argues that we are not adequately accounting for technology's risks—but his solution to the problem is not for Luddites.

Anderson: You have argued that we underrate existential risks because of a particular kind of bias called observation selection effect. Can you explain a bit more about that?

Bostrom: The idea of an observation selection effect is maybe best explained by first considering the simpler concept of a selection effect. Let's say you're trying to estimate how large the largest fish in a given pond is, and you use a net to catch a hundred fish and the biggest fish you find is three inches long. You might be tempted to infer that the biggest fish in this pond is not much bigger than three inches, because you've caught a hundred of them and none of them are bigger than three inches. But if it turns out that your net could only catch fish up to a certain length, then the measuring instrument that you used would introduce a selection effect: it would only select from a subset of the domain you were trying to sample.

Now that's a kind of standard fact of statistics, and there are methods for trying to correct for it and you obviously have to take that into account when considering the fish distribution in your pond. An observation selection effect is a selection effect introduced not by limitations in our measurement instrument, but rather by the fact that all observations require the existence of an observer. This becomes important, for instance, in evolutionary biology. For instance, we know that intelligent life evolved on Earth. Naively, one might think that this piece of evidence suggests that life is likely to evolve on most Earth-like planets. But that would be to overlook an observation selection effect. For no matter how small the proportion of all Earth-like planets that evolve intelligent life, we will find ourselves on a planet that did. Our data point, that intelligent life arose on our planet, is predicted equally well by the hypothesis that intelligent life is very improbable even on Earth-like planets as by the hypothesis that intelligent life is highly probable on Earth-like planets. When it comes to human extinction and existential risk, there are certain controversial ways that observation selection effects might be relevant.
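
[ed.: the fish example is easy to simulate -- a sketch with a hypothetical length distribution standing in for the pond:]

```python
import numpy as np

rng = np.random.default_rng(0)
pond = rng.exponential(scale=3.0, size=100_000)    # hypothetical fish lengths, inches

naive = rng.choice(pond, 100).max()                # net that catches any fish
biased = rng.choice(pond[pond <= 3.0], 100).max()  # net that misses fish > 3 in

print(f"biggest fish actually in pond:   {pond.max():5.1f} in")
print(f"max of 100 with an unbiased net: {naive:5.1f} in")
print(f"max of 100 with the biased net:  {biased:5.1f} in  (capped near 3)")
```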
bostrom  ratty  miri-cfar  skunkworks  philosophy  org:junk  list  top-n  frontier  speedometer  risk  futurism  local-global  scale  death  nihil  technology  simulation  anthropic  nuclear  deterrence  environment  climate-change  arms  competition  ai  ai-control  genetics  genomics  biotech  parasites-microbiome  disease  offense-defense  physics  tails  network-structure  epidemiology  space  geoengineering  dysgenics  ems  authoritarianism  government  values  formal-values  moloch  enhancement  property-rights  coordination  cooperate-defect  flux-stasis  ideas  prediction  speculation  humanity  singularity  existence  cybernetics  study  article  letters  eden-heaven  gedanken  multi  twitter  social  discussion  backup  hanson  metrics  optimization  time  long-short-run  janus  telos-atelos  poll  forms-instances  threat-modeling  selection  interview  expert-experience  malthus  volo-avolo  intel  leviathan  drugs  pharma  data  estimate  nature  longevity  expansionism  homo-hetero  utopia-dystopia 
march 2018 by nhaliday
Unaligned optimization processes as a general problem for society
TL;DR: There are lots of systems in society which seem to fit the pattern of “the incentives for this system are a pretty good approximation of what we actually want, so the system produces good results until it gets powerful, at which point it gets terrible results.”

...

Here are some more places where this idea could come into play:

- Marketing—humans try to buy things that will make our lives better, but our process for determining this is imperfect. A more powerful optimization process produces extremely good advertising to sell us things that aren’t actually going to make our lives better.
- Politics—we get extremely effective demagogues who pit us against our essential good values.
- Lobbying—as industries get bigger, the optimization process to choose great lobbyists for industries gets larger, but the process to make regulators robust doesn’t get correspondingly stronger. So regulatory capture gets worse and worse. Rent-seeking gets more and more significant.
- Online content—in a weaker internet, sites can’t be addictive except via being good content. In the modern internet, people can feel addicted to things that they wish they weren’t addicted to. We didn’t use to have the social expertise to make clickbait nearly as well as we do today.
- News—Hyperpartisan news sources are much more worth it if distribution is cheaper and the market is bigger. News sources get an advantage from being truthful, but as society gets bigger, this advantage gets proportionally smaller.

...

For these reasons, I think it’s quite plausible that humans are fundamentally unable to have a “good” society with a population greater than some threshold, particularly if all these people have access to modern technology. Humans don’t have the rigidity to maintain social institutions in the face of that kind of optimization process. I think it is unlikely but possible (10%?) that this threshold population is smaller than the current population of the US, and that the US will crumble due to the decay of these institutions in the next fifty years if nothing totally crazy happens.
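
[ed.: the pattern here is essentially Goodhart's law. A toy sketch, every distribution invented: the proxy is the true objective plus heavy-tailed error, and "power" is how many candidates get screened on the proxy. The winner's proxy score explodes as k grows while its true value barely moves, because the extra selection pressure lands on the proxy error.]

import random

# Toy Goodhart demo: pick the proxy-best of k candidates, 20 times, and
# look at the winners' true values. Proxy = true value + heavy-tailed error;
# both distributions are invented.
random.seed(1)

def candidate():
    true_value = random.gauss(0, 1)
    proxy = true_value + random.paretovariate(2.0)  # heavy-tailed proxy error
    return true_value, proxy

for k in (10, 1_000, 100_000):
    winners = [max((candidate() for _ in range(k)), key=lambda c: c[1])
               for _ in range(20)]
    avg_true = sum(t for t, _ in winners) / len(winners)
    avg_proxy = sum(p for _, p in winners) / len(winners)
    print(f"k={k:>7}: winner's proxy {avg_proxy:7.1f}, true value {avg_true:+.2f}")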
ratty  thinking  metabuch  reflection  metameta  big-yud  clever-rats  ai-control  ai  risk  scale  quality  ability-competence  network-structure  capitalism  randy-ayndy  civil-liberty  marketing  institutions  economics  political-econ  politics  polisci  advertising  rent-seeking  government  coordination  internet  attention  polarization  media  truth  unintended-consequences  alt-inst  efficiency  altruism  society  usa  decentralized  rhetoric  prediction  population  incentives  intervention  criminal-justice  property-rights  redistribution  taxes  externalities  science  monetary-fiscal  public-goodish  zero-positive-sum  markets  cost-benefit  regulation  regularizer  order-disorder  flux-stasis  shift  smoothness  phase-transition  power  definite-planning  optimism  pessimism  homo-hetero  interests  eden-heaven  telos-atelos  threat-modeling  alignment 
february 2018 by nhaliday
Information Processing: US Needs a National AI Strategy: A Sputnik Moment?
FT podcasts on US-China competition and AI: http://infoproc.blogspot.com/2018/05/ft-podcasts-on-us-china-competition-and.html

A new recommended career path for effective altruists: China specialist: https://80000hours.org/articles/china-careers/
Our rough guess is that it would be useful for there to be at least ten people in the community with good knowledge in this area within the next few years.

By “good knowledge” we mean they’ve spent at least 3 years studying these topics and/or living in China.

We chose ten because that would be enough for several people to cover each of the major areas listed (e.g. 4 within AI, 2 within biorisk, 2 within foreign relations, 1 in another area).

AI Policy and Governance Internship: https://www.fhi.ox.ac.uk/ai-policy-governance-internship/

https://www.fhi.ox.ac.uk/deciphering-chinas-ai-dream/
https://www.fhi.ox.ac.uk/wp-content/uploads/Deciphering_Chinas_AI-Dream.pdf
Deciphering China’s AI Dream
The context, components, capabilities, and consequences of
China’s strategy to lead the world in AI

Europe’s AI delusion: https://www.politico.eu/article/opinion-europes-ai-delusion/
Brussels is failing to grasp threats and opportunities of artificial intelligence.
By BRUNO MAÇÃES

When the computer program AlphaGo beat the Chinese professional Go player Ke Jie in a three-part match, it didn’t take long for Beijing to realize the implications.

If algorithms can already surpass the abilities of a master Go player, it can’t be long before they will be similarly supreme in the activity to which the classic board game has always been compared: war.

As I’ve written before, the great conflict of our time is about who can control the next wave of technological development: the widespread application of artificial intelligence in the economic and military spheres.

...

If China’s ambitions sound plausible, that’s because the country’s achievements in deep learning are so impressive already. After Microsoft announced that its speech recognition software surpassed human-level language recognition in October 2016, Andrew Ng, then head of research at Baidu, tweeted: “We had surpassed human-level Chinese recognition in 2015; happy to see Microsoft also get there for English less than a year later.”

...

One obvious advantage China enjoys is access to almost unlimited pools of data. The machine-learning technologies boosting the current wave of AI expansion are only as good as the data they can use. That could be the number of people driving cars, photos labeled on the internet or voice samples for translation apps. With 700 or 800 million Chinese internet users and fewer data protection rules, China is as rich in data as the Gulf States are in oil.

How can Europe and the United States compete? They will have to be commensurately better in developing algorithms and computer power. Sadly, Europe is falling behind in these areas as well.

...

Chinese commentators have embraced the idea of a coming singularity: the moment when AI surpasses human ability. At that point a number of interesting things happen. First, future AI development will be conducted by AI itself, creating exponential feedback loops. Second, humans will become useless for waging war. At that point, the human mind will be unable to keep pace with robotized warfare. With advanced image recognition, data analytics, prediction systems, military brain science and unmanned systems, devastating wars might be waged and won in a matter of minutes.

...

The argument in the new strategy is fully defensive. It first considers how AI raises new threats and then goes on to discuss the opportunities. The EU and Chinese strategies follow opposite logics. Already on its second page, the text frets about the legal and ethical problems raised by AI and discusses the “legitimate concerns” the technology generates.

The EU’s strategy is organized around three concerns: the need to boost Europe’s AI capacity, ethical issues and social challenges. Unfortunately, even the first dimension quickly turns out to be about “European values” and the need to place “the human” at the center of AI — forgetting that the first word in AI is not “human” but “artificial.”

https://twitter.com/mr_scientism/status/983057591298351104
https://archive.is/m3Njh
US military: "LOL, China thinks it's going to be a major player in AI, but we've got all the top AI researchers. You guys will help us develop weapons, right?"

US AI researchers: "No."

US military: "But... maybe just a computer vision app."

US AI researchers: "NO."

https://www.theverge.com/2018/4/4/17196818/ai-boycot-killer-robots-kaist-university-hanwha
https://www.nytimes.com/2018/04/04/technology/google-letter-ceo-pentagon-project.html
https://twitter.com/mr_scientism/status/981685030417326080
https://archive.is/3wbHm
AI-risk was a mistake.
hsu  scitariat  commentary  video  presentation  comparison  usa  china  asia  sinosphere  frontier  technology  science  ai  speedometer  innovation  google  barons  deepgoog  stories  white-paper  strategy  migration  iran  human-capital  corporation  creative  alien-character  military  human-ml  nationalism-globalism  security  investing  government  games  deterrence  defense  nuclear  arms  competition  risk  ai-control  musk  optimism  multi  news  org:mag  europe  EU  80000-hours  effective-altruism  proposal  article  realness  offense-defense  war  biotech  altruism  language  foreign-lang  philosophy  the-great-west-whale  enhancement  foreign-policy  geopolitics  anglo  jobs  career  planning  hmm  travel  charity  tech  intel  media  teaching  tutoring  russia  india  miri-cfar  pdf  automation  class  labor  polisci  society  trust  n-factor  corruption  leviathan  ethics  authoritarianism  individualism-collectivism  revolution  economics  inequality  civic  law  regulation  data  scale  pro-rata  capital  zero-positive-sum  cooperate-defect  distribution  time-series  tre 
february 2018 by nhaliday
What Peter Thiel thinks about AI risk - Less Wrong
TL;DR: he thinks it's an issue but also feels AGI is very distant, and hence is less worried about it than Musk.

I recommend the rest of the lecture as well: it's a good summary of "Zero to One", and there's a good Q&A afterwards.

For context, in case anyone doesn't realize: Thiel has been MIRI's top donor throughout its history.

other stuff:
nice interview question: "thing you know is true that not everyone agrees on?"
"learning from failure overrated"
cleantech a huge market, hard to compete
software makes for easy monopolies (zero marginal costs, network effects, etc.)
for most of history inventors did not benefit much (continuous competition)
ethical behavior is a luxury of monopoly
ratty  lesswrong  commentary  ai  ai-control  risk  futurism  technology  speedometer  audio  presentation  musk  thiel  barons  frontier  miri-cfar  charity  people  track-record  venture  startups  entrepreneurialism  contrarianism  competition  market-power  business  google  truth  management  leadership  socs-and-mops  dark-arts  skunkworks  hard-tech  energy-resources  wire-guided  learning  software  sv  tech  network-structure  scale  marginal  cost-benefit  innovation  industrial-revolution  economics  growth-econ  capitalism  comparison  nationalism-globalism  china  asia  trade  stagnation  things  dimensionality  exploratory  world  developing-world  thinking  definite-planning  optimism  pessimism  intricacy  politics  war  career  planning  supply-demand  labor  science  engineering  dirty-hands  biophysical-econ  migration  human-capital  policy  canada  anglo  winner-take-all  polarization  amazon  business-models  allodium  civilization  the-classics  microsoft  analogy  gibbon  conquest-empire  realness  cynicism-idealism  org:edu  open-closed  ethics  incentives  m 
february 2018 by nhaliday
Reid Hoffman and Peter Thiel and technology and politics - Marginal REVOLUTION
econotariat  marginal-rev  links  video  interview  thiel  barons  randy-ayndy  cryptocurrency  ai  communism  individualism-collectivism  civil-liberty  sv  tech  automation  speedometer  stagnation  technology  politics  current-events  trends  democracy  usa  malthus  zero-positive-sum  china  asia  stanford  news  org:local  polarization  economics  cycles  growth-econ  zeitgeist  housing  urban-rural  california  the-west  decentralized  privacy  anonymity  inequality  multi  winner-take-all  realpolitik  machiavelli  error  order-disorder  leviathan  dirty-hands  the-world-is-just-atoms  heavy-industry  embodied  engineering  reflection  trump  2016-election  pessimism  definite-planning  optimism  left-wing  right-wing  steel-man  managerial-state  orwellian  vampire-squid  contrarianism  age-generation  econ-productivity  compensation  time-series  feudal  gnosis-logos 
february 2018 by nhaliday
Information Processing: Mathematical Theory of Deep Neural Networks (Princeton workshop)
"Recently, long-past-due theoretical results have begun to emerge. These results, and those that will follow in their wake, will begin to shed light on the properties of large, adaptive, distributed learning architectures, and stand to revolutionize how computer science and neuroscience understand these systems."
hsu  scitariat  commentary  links  research  research-program  workshop  events  princeton  sanjeev-arora  deep-learning  machine-learning  ai  generalization  explanans  off-convex  nibble  frontier  speedometer  state-of-art  big-surf  announcement 
january 2018 by nhaliday
[1709.06560] Deep Reinforcement Learning that Matters
https://twitter.com/WAWilsonIV/status/912505885565452288
I’ve been experimenting w/ various kinds of value function approaches to RL lately, and it’s striking how primitive and bad things seem to be
At first I thought it was just that my code sucks, but then I played with the OpenAI baselines and nope, it’s the children that are wrong.
And now, what comes across my desk but this fantastic paper: https://arxiv.org/abs/1709.06560. How long until the replication crisis hits AI?

https://twitter.com/WAWilsonIV/status/911318326504153088
Seriously I’m not blown away by the PhDs’ records over the last 30 years. I bet you’d get better payoff funding eccentrics and amateurs.
There are essentially zero fundamentally new ideas in AI, the papers are all grotesquely hyperparameter tuned, nobody knows why it works.

Deep Reinforcement Learning Doesn't Work Yet: https://www.alexirpan.com/2018/02/14/rl-hard.html
Once, on Facebook, I made the following claim.

Whenever someone asks me if reinforcement learning can solve their problem, I tell them it can’t. I think this is right at least 70% of the time.
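
[ed.: one concrete failure mode the paper documents, in toy form: returns vary wildly across random seeds, so reporting only the best seeds inflates the headline number. The return distribution below is invented, not taken from the paper.]

import random

# Pretend each seed is a full training run whose final return is noisy.
# Reporting only the top seeds, a practice the paper criticizes, inflates
# the headline number. The return distribution is invented.
def final_return(seed):
    rng = random.Random(seed)
    return max(0.0, rng.gauss(1000, 600))

returns = [final_return(s) for s in range(50)]
mean_all = sum(returns) / len(returns)
mean_top5 = sum(sorted(returns)[-5:]) / 5

print(f"mean over all 50 seeds: {mean_all:6.0f}")
print(f"mean of top-5 seeds:    {mean_top5:6.0f}")  # the cherry-picked number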
papers  preprint  machine-learning  acm  frontier  speedometer  deep-learning  realness  replication  state-of-art  survey  reinforcement  multi  twitter  social  discussion  techtariat  ai  nibble  org:mat  unaffiliated  ratty  acmtariat  liner-notes  critique  sample-complexity  cost-benefit  todo 
september 2017 by nhaliday
New Theory Cracks Open the Black Box of Deep Learning | Quanta Magazine
A new idea called the “information bottleneck” is helping to explain the puzzling success of today’s artificial-intelligence algorithms — and might also explain how human brains learn.

sounds like he's just talking about autoencoders?
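
[ed.: for reference, the information-bottleneck objective (Tishby and colleagues), which also makes the contrast with an autoencoder explicit. Over stochastic encoders p(t|x), one minimizes

    I(X;T) - β I(T;Y)

i.e., compress the representation T (low mutual information with the input X) while keeping it predictive (high mutual information with the label Y, weighted by β). An autoencoder compresses X in order to reconstruct X itself; the bottleneck compresses X in order to preserve information about Y, so the two aren't quite the same thing.]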
news  org:mag  org:sci  popsci  announcement  research  deep-learning  machine-learning  acm  information-theory  bits  neuro  model-class  big-surf  frontier  nibble  hmm  signal-noise  deepgoog  expert  ideas  wild-ideas  summary  talks  video  israel  roots  physics  interdisciplinary  ai  intelligence  shannon  giants  arrows  preimage  lifts-projections  composition-decomposition  characterization  markov  gradient-descent  papers  liner-notes  experiment  hi-order-bits  generalization  expert-experience  explanans  org:inst  speedometer 
september 2017 by nhaliday
THE GROWING IMPORTANCE OF SOCIAL SKILLS IN THE LABOR MARKET*
key fact: cognitive ability is not growing in importance, but non-cognitive ability is

The labor market increasingly rewards social skills. Between 1980 and 2012, jobs requiring high levels of social interaction grew by nearly 12 percentage points as a share of the U.S. labor force. Math-intensive but less social jobs—including many STEM occupations—shrank by 3.3 percentage points over the same period. Employment and wage growth was particularly strong for jobs requiring high levels of both math skill and social skill. To understand these patterns, I develop a model of team production where workers “trade tasks” to exploit their comparative advantage. In the model, social skills reduce coordination costs, allowing workers to specialize and work together more efficiently. The model generates predictions about sorting and the relative returns to skill across occupations, which I investigate using data from the NLSY79 and the NLSY97. Using a comparable set of skill measures and covariates across survey waves, I find that the labor market return to social skills was much greater in the 2000s than in the mid 1980s and 1990s. JEL Codes: I20, I24, J01, J23, J24, J31
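
[ed.: a toy two-worker version of the model's mechanism, all productivities invented: trading tasks beats working solo exactly when the coordination cost that social skill is supposed to lower is small enough.]

# Toy two-worker task-trading model; productivities (output per unit time)
# are invented. Alice's comparative advantage is math, Bob's is the social task.
productivity = {
    "alice": {"math": 10, "social": 4},
    "bob":   {"math": 6,  "social": 5},
}

def total_output(coordination_cost):
    # solo: each worker splits their unit of time evenly across both tasks
    solo = sum(0.5 * p["math"] + 0.5 * p["social"] for p in productivity.values())
    # trade: each specializes in their comparative-advantage task, but the
    # pair pays a coordination cost that social skill is assumed to lower
    traded = (productivity["alice"]["math"] + productivity["bob"]["social"]
              - coordination_cost)
    return solo, traded

for cost in (0.0, 2.0, 4.0):
    solo, traded = total_output(cost)
    choice = "specialize & trade" if traded > solo else "work solo"
    print(f"coordination cost {cost}: solo={solo}, traded={traded} -> {choice}")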

The Increasing Complementarity between Cognitive and Social Skills: http://econ.ucsb.edu/~weinberg/MathSocialWeinberger.pdf

The Changing Roles of Education and Ability in Wage Determination: http://business.uow.edu.au/content/groups/public/@web/@commerce/@research/documents/doc/uow130116.pdf

Intelligence and socioeconomic success: A meta-analytic review of longitudinal research: http://www.emilkirkegaard.dk/en/wp-content/uploads/Intelligence-and-socioeconomic-success-A-meta-analytic-review-of-longitudinal-research.pdf
Moderator analyses showed that the relationship between intelligence and success is dependent on the age of the sample but there is little evidence of any historical trend in the relationship.

https://twitter.com/khazar_milkers/status/898996206973603840
https://archive.is/7gLXv
that feelio when america has crossed an inflection point and EQ is obviously more important for success in today's society than IQ
I think this is how to understand a lot of "corporate commitment to diversity" stuff. Not the only reason ofc, but the reason it's so impregnable
compare: https://pinboard.in/u:nhaliday/b:e9ac3d38e7a1
and: https://pinboard.in/u:nhaliday/b:a38f5756170d

g-reliant skills seem most susceptible to automation: https://fredrikdeboer.com/2017/06/14/g-reliant-skills-seem-most-susceptible-to-automation/

THE ERROR TERM: https://spottedtoad.wordpress.com/2018/02/19/the-error-term/
Imagine an objective function (something you want to maximize or minimize) with both a deterministic and a random component.

...

Part of y is rules-based and rational, part is random and outside rational control. Obviously, the ascent of civilization has, to the extent it has taken place, been based on focusing energies on those parts of the world that are responsive to rational interpretation and control.

But an interesting thing happens once automated processes are able to take over the mapping of patterns onto rules. The portion of the world that is responsive to algorithmic interpretation is also the rational, rules-based portion, almost tautologically. But in terms of our actual objective functions- the real portions of the world that we are trying to affect or influence- subtracting out the portion susceptible to algorithms does not eliminate the variation or make it unimportant. It simply makes it much more purely random rather than only partially so.

The interesting thing, to me, is that economic returns accumulate to the random portion of variation just as to the deterministic portion. In fact, if everybody has access to the same algorithms, the returns may well be largely to the random portion. The efficient market hypothesis in action, more or less.

...

But more generally, as more and more of the society comes under algorithmic control, as various forms of automated intelligence become ubiquitous, the remaining portion, and the portion for which individual workers are rewarded, might well become more irrational, more random, less satisfying, less intelligent.

Golden age for team players: https://news.harvard.edu/gazette/story/2017/10/social-skills-increasingly-valuable-to-employers-harvard-economist-finds/
Strong social skills increasingly valuable to employers, study finds

Number of available jobs by skill set (over time)

Changes in hourly wages by skill set (over time)

https://twitter.com/GarettJones/status/947904725294260224
https://archive.is/EEQA9
A resolution for the new year: Remember that intelligence is a predictor of social intelligence!
pdf  study  economics  econometrics  trends  labor  intelligence  iq  personality  psych-architecture  compensation  human-capital  🎩  data  regularizer  hmm  career  planning  long-term  stylized-facts  management  polarization  stagnation  inequality  leadership  longitudinal  chart  zeitgeist  s-factor  history  mostly-modern  usa  correlation  gnon  🐸  twitter  social  memes(ew)  pic  discussion  diversity  managerial-state  unaffiliated  left-wing  automation  gender  backup  westminster  multi  working-stiff  news  org:edu  time-series  :/  coordination  collaboration  money  medicine  law  teaching  education  tech  dirty-hands  engineering  supply-demand  ratty  large-factor  signal-noise  order-disorder  random  technocracy  branches  unintended-consequences  ai  prediction  speculation  theory-of-mind 
august 2017 by nhaliday
Superintelligence Risk Project Update II
https://www.jefftk.com/p/superintelligence-risk-project-update

https://www.jefftk.com/p/conversation-with-michael-littman
For example, I asked him what he thought of the idea that we could get AGI with current techniques, primarily deep neural nets and reinforcement learning, without learning anything new about how intelligence works or how to implement it ("Prosaic AGI" [1]). He didn't think this was possible, and believes there are deep conceptual issues we still need to get a handle on. He's also less impressed with deep learning than he was before he started working in it: in his experience it's a much more brittle technology than he had been expecting. Specifically, when trying to replicate results, he's often found that they depend on a bunch of parameters being in just the right range, and without that the systems don't perform nearly as well.

The bottom line, to him, was that since we are still many breakthroughs away from getting to AGI, we can't productively work on reducing superintelligence risk now.

He told me that he worries that the AI risk community is not solving real problems: they're making deductions and inferences that are self-consistent but not being tested or verified in the world. Since we can't tell if that's progress, it probably isn't. I asked if he was referring to MIRI's work here, and he said their work was an example of the kind of approach he's skeptical about, though he wasn't trying to single them out. [2]

https://www.jefftk.com/p/conversation-with-an-ai-researcher
Earlier this week I had a conversation with an AI researcher [1] at one of the main industry labs as part of my project of assessing superintelligence risk. Here's what I got from them:

They see progress in ML as almost entirely constrained by hardware and data, to the point that if today's hardware and data had existed in the mid 1950s researchers would have gotten to approximately our current state within ten to twenty years. They gave the example of backprop: we saw how to train multi-layer neural nets decades before we had the computing power to actually train these nets to do useful things.

Similarly, people talk about AlphaGo as a big jump, where Go went from being "ten years away" to "done" within a couple years, but they said it wasn't like that. If Go work had stayed in academia, with academia-level budgets and resources, it probably would have taken nearly that long. What changed was a company seeing promising results, realizing what could be done, and putting way more engineers and hardware on the project than anyone had previously done. AlphaGo couldn't have happened earlier because the hardware wasn't there yet, and was only able to be brought forward by massive application of resources.

https://www.jefftk.com/p/superintelligence-risk-project-conclusion
Summary: I'm not convinced that AI risk should be highly prioritized, but I'm also not convinced that it shouldn't. Highly qualified researchers in a position to have a good sense of the field have massively different views on core questions like how capable ML systems are now, how capable they will be soon, and how we can influence their development. I do think these questions are possible to get a better handle on, but I think this would require much deeper ML knowledge than I have.
ratty  core-rats  ai  risk  ai-control  prediction  expert  machine-learning  deep-learning  speedometer  links  research  research-program  frontier  multi  interview  deepgoog  games  hardware  performance  roots  impetus  chart  big-picture  state-of-art  reinforcement  futurism  🤖  🖥  expert-experience  singularity  miri-cfar  empirical  evidence-based  speculation  volo-avolo  clever-rats  acmtariat  robust  ideas  crux  atoms  detail-architecture  software  gradient-descent 
july 2017 by nhaliday
Overcoming Bias : A Tangled Task Future
So we may often retain systems that inherit the structure of the human brain, and the structures of the social teams and organizations by which humans have worked together. All of which is another way to say: descendants of humans may have a long future as workers. We may have another future besides being retirees or iron-fisted peons ruling over gods. Even in a competitive future with no friendly singleton to ensure preferential treatment, something recognizably like us may continue. And even win.
ratty  hanson  speculation  automation  labor  economics  ems  futurism  prediction  complex-systems  network-structure  intricacy  thinking  engineering  management  law  compensation  psychology  cog-psych  ideas  structure  gray-econ  competition  moloch  coordination  cooperate-defect  risk  ai  ai-control  singularity  number  humanity  complement-substitute  cybernetics  detail-architecture  legacy  threat-modeling  degrees-of-freedom  composition-decomposition  order-disorder  analogy  parsimony  institutions  software  coupling-cohesion 
june 2017 by nhaliday
Yale Law Journal - Amazon's Antitrust Paradox
This Note argues that the current framework in antitrust—specifically its pegging competition to “consumer welfare,” defined as short-term price effects—is unequipped to capture the architecture of market power in the modern economy. We cannot cognize the potential harms to competition posed by Amazon’s dominance if we measure competition primarily through price and output. Specifically, current doctrine underappreciates the risk of predatory pricing and how integration across distinct business lines may prove anticompetitive. These concerns are heightened in the context of online platforms for two reasons. First, the economics of platform markets create incentives for a company to pursue growth over profits, a strategy that investors have rewarded. Under these conditions, predatory pricing becomes highly rational—even as existing doctrine treats it as irrational and therefore implausible. Second, because online platforms serve as critical intermediaries, integrating across business lines positions these platforms to control the essential infrastructure on which their rivals depend. This dual role also enables a platform to exploit information collected on companies using its services to undermine them as competitors.

https://www.theatlantic.com/business/archive/2017/06/why-amazon-bought-whole-foods/530652/
https://gnxp.nofe.me/2017/06/17/the-distribution-channel-comes-to-you/
https://www.forbes.com/sites/panosmourdoukoutas/2017/06/18/amazon-whole-foods-deal-is-bad-news-for-store-cashiers-and-the-fight-for-15-minimum-wage/
Amazon Must Be Stopped: https://newrepublic.com/article/119769/amazons-monopoly-must-be-broken-radical-plan-tech-giant

Amazon Will Go To Denver: https://gnxp.nofe.me/2017/09/10/amazon-will-go-to-denver/
https://www.nytimes.com/interactive/2017/09/09/upshot/where-should-amazon-new-headquarters-be.html
http://www.paddypower.com/bet?action=go_event&category=SPECIALS&ev_class_id=45&ev_type_id=22711&ev_id=13023353&force_racing_css=&ev_desc=Where%20will%20Amazon%20build%20their%20Second%20Headquarters?
https://www.seattletimes.com/business/amazon/this-city-hall-brought-to-you-by-amazon/
Real things cities are offering to get Amazon HQ2
*Chicago: Let Amazon keep employees' income tax
*SoCal: Give away $100M in land
*Boston: City employees working just for Amazon
*Fresno: Let Amazon decide how to spend tax dollars

https://www.wsj.com/articles/rules-of-engagement-how-cities-are-courting-amazons-new-headquarters-1522661401
Washington, D.C., might have a leg up, having already hosted Amazon Chief Executive Jeff Bezos for visits when he considered acquiring the Washington Post, which he now owns. Mr. Bezos also purchased the former Textile Museum in Washington’s Kalorama neighborhood for $23 million in 2016 and is currently turning it into a private residence.

28-year-old makes millions buying from Walmart, selling on Amazon: https://www.msn.com/en-us/money/companies/28-year-old-makes-millions-buying-from-walmart-selling-on-amazon/ar-AAupB8i

https://twitter.com/DKThomp/status/954028684788273153
https://twitter.com/hyperplanes/status/954020562262781952
https://archive.is/uNk1p
https://archive.is/phiTA
Thread: Why Amazon’s HQ2 is going to Fairfax County

https://www.bloomberg.com/view/articles/2018-01-19/amazon-is-sure-acting-like-it-s-going-to-pick-the-d-c-area

https://twitter.com/NeonPeonage/status/955436146183561216
https://archive.is/lJeaz
walmart is the only entity that has even a slim chance at preventing jeff bezos from intermediating every commodity exchange in the world, u must respect

https://twitter.com/holerepairer/status/955469951833436160
https://archive.is/ig58T
"I tried to save you, but you didn't listen. Now you'll have to face Him alone..."

What Amazon does to wages: https://www.economist.com/news/united-states/21735020-worlds-largest-retailer-underpaying-its-employees-what-amazon-does-wages
Is the world’s largest retailer underpaying its employees?

Flat or falling industry wages are common in the cities and towns where Amazon opens distribution centres, according to an analysis by The Economist. Government figures show that after Amazon opens a storage depot, local wages for warehouse workers fall by an average of 3%. In places where Amazon operates, such workers earn about 10% less than similar workers employed elsewhere.

What Amazon Does to Poor Cities: https://www.theatlantic.com/business/archive/2018/02/amazon-warehouses-poor-cities/552020/
The debate over Amazon’s HQ2 obscures the company’s rapid expansion of warehouses in low-income areas.

The Facts Behind Trump’s Tweets on Amazon, Taxes and the Postal Service: https://www.nytimes.com/2018/03/29/us/politics/trump-amazon-post-office-fact-check.html

If Workers Slack Off, the Wristband Will Know. (And Amazon Has a Patent for It.): https://www.nytimes.com/2018/02/01/technology/amazon-wristband-tracking-privacy.html
https://www.npr.org/sections/thetwo-way/2018/02/01/582370715/wrist-watching-amazon-patents-system-to-track-guide-employees-hands
https://boingboing.net/2018/02/02/amazon-patent-could-lead-to-do.html
https://www.jwz.org/blog/2018/02/amazon-patents-wristbands-shock-collars-designed-to-steer-employees-movements/

auto-management -> automation dystopia: http://marshallbrain.com/manna.htm

Amazon’s vision for the future: delivery drone beehives in every city: https://www.theverge.com/2017/6/23/15860668/amazon-drone-delivery-patent-city-centers
article  essay  study  law  business  industrial-org  market-power  rent-seeking  amazon  sv  tech  markets  capitalism  winner-take-all  multi  news  org:mag  org:biz  gnxp  scitariat  commentary  current-events  org:lite  right-wing  corporation  trends  zeitgeist  rhetoric  barons  management  org:rec  usa  the-west  video  profile  arbitrage  entrepreneurialism  microbiz  feudal  prediction  prediction-markets  power  org:local  twitter  social  discussion  speculation  backup  the-south  virginia-DC  planning  career  org:bv  econotariat  marginal-rev  inequality  gnon  🐸  memes(ew)  data  econ-metrics  marginal  malaise  supply-demand  labor  coming-apart  compensation  intervention  branches  org:anglo  automation  techtariat  ai  scifi-fantasy  pessimism  optimism  orwellian  vampire-squid  left-wing  trump  media  propaganda  taxes  government  capital 
june 2017 by nhaliday