Is there a common method for detecting the convergence of the Gibbs sampler and the expectation-maximization algorithm? - Quora
In practice and theory it is much easier to diagnose convergence in EM (vanilla or variational) than in any MCMC algorithm (including Gibbs sampling).

https://www.quora.com/How-can-you-determine-if-your-Gibbs-sampler-has-converged
There is a special case when you can actually obtain the stationary distribution, and be sure that you did! If your Markov chain has a discrete state space, take the first time that a state repeats in your chain: if you randomly sample an element between the repeating states (including only one of the endpoints), you will have a sample from your true distribution.

One can achieve this 'exact MCMC sampling' more generally by using the coupling from the past algorithm (Coupling from the past).
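[ed.: a minimal sketch of coupling from the past for a finite chain. The transition matrix, the inverse-CDF coupling, and the function names here are illustrative, not from the answer. Coupled copies of the chain are started from every state at times further and further in the past, reusing the same randomness; if they have all coalesced by time 0, the common state is an exact stationary sample:]

```python
import random

def step(P, s, u):
    # inverse-CDF update; feeding the same u to every chain is the coupling
    acc = 0.0
    for j, p in enumerate(P[s]):
        acc += p
        if u < acc:
            return j
    return len(P[s]) - 1

def cftp(P, seed=0):
    """Exact sample from the stationary distribution of transition matrix P
    (a list of rows) via coupling from the past."""
    rng = random.Random(seed)
    us = []   # uniforms for times -1, -2, ...; reused as T doubles
    T = 1
    while True:
        while len(us) < T:
            us.append(rng.random())
        states = list(range(len(P)))      # one chain per starting state
        for t in range(T, 0, -1):         # run from time -T up to time 0
            states = [step(P, s, us[t - 1]) for s in states]
        if len(set(states)) == 1:         # coalesced: time-0 state is exact
            return states[0]
        T *= 2                            # otherwise restart further back
```

Crucially, the uniforms for recent time steps are reused when T doubles; redrawing them would bias the sample.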

Otherwise, there is no rigorous statistical test for convergence. It may be possible to obtain a theoretical bound for the convergence rates: but these are quite difficult to obtain, and quite often too large to be of practical use. For example, even for the simple case of using the Metropolis algorithm for sampling from a two-dimensional uniform distribution, the best convergence rate upper bound achieved, by Persi Diaconis, was something with an astronomical constant factor like 10^300.

In fact, it is fair to say that for most high-dimensional problems we really have no idea whether Gibbs sampling ever comes close to converging; the best we can do is use some simple diagnostics to detect the most obvious failures.
nibble  q-n-a  qra  acm  stats  probability  limits  convergence  distribution  sampling  markov  monte-carlo  ML-MAP-E  checking  equilibrium  stylized-facts  gelman  levers  mixing  empirical  plots  manifolds  multi  fixed-point  iteration-recursion  heuristic  expert-experience  theory-practice  project
october 2019 by nhaliday
Teach debugging
A friend of mine and I couldn't understand why some people were having so much trouble; the material seemed like common sense. The Feynman Method was the only tool we needed.

1. Write down the problem
2. Think real hard
3. Write down the solution

The Feynman Method failed us on the last project: the design of a divider, a real-world-scale project an order of magnitude more complex than anything we'd been asked to tackle before. On the day he assigned the project, the professor exhorted us to begin early. Over the next few weeks, we heard rumors that some of our classmates worked day and night without making progress.

...

And then, just after midnight, a number of our newfound buddies from dinner reported successes. Half of those who started from scratch had working designs. Others were despondent, because their design was still broken in some subtle, non-obvious way. As I talked with one of those students, I began poring over his design. And after a few minutes, I realized that the Feynman method wasn't the only way forward: it should be possible to systematically apply a mechanical technique repeatedly to find the source of our problems. Beneath all the abstractions, our projects consisted purely of NAND gates (woe to those who dug around our toolbox enough to uncover dynamic logic), each of which outputs a 0 only when both inputs are 1. If the correct output is 0, both inputs should be 1. The input that isn't is in error, an error that is, itself, the output of a NAND gate where at least one input is 0 when it should be 1. We applied this method recursively, finding the source of all the problems in both our designs in under half an hour.
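[ed.: the recursive check can be mechanized. A sketch, assuming every wire is observable and the circuit can be resimulated; the circuit encoding and function names are invented. The faulty gate is the first one whose inputs match a correct reference simulation but whose output does not:]

```python
def nand(a, b):
    return int(not (a and b))  # outputs 0 only when both inputs are 1

def find_faulty_gate(gates, inputs, observed):
    """gates: wire -> (in1, in2) for each NAND gate (acyclic circuit);
    inputs: primary input wire -> value;
    observed: every wire -> measured value.
    Returns the wire driven by the faulty gate, or None."""
    expected = dict(inputs)
    pending = dict(gates)
    while pending:  # evaluate gates as their inputs become known
        for w, (a, b) in list(pending.items()):
            if a in expected and b in expected:
                expected[w] = nand(expected[a], expected[b])
                del pending[w]
    for w, (a, b) in gates.items():
        inputs_ok = observed[a] == expected[a] and observed[b] == expected[b]
        if inputs_ok and observed[w] != expected[w]:
            return w  # inputs right, output wrong: this gate is broken
    return None
```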

How To Debug Any Program: https://www.blinddata.com/blog/how-to-debug-any-program-9
May 8th 2019 by Saketh Are

Start by Questioning Everything

...

When a program is behaving unexpectedly, our attention tends to be drawn first to the most complex portions of the code. However, mistakes can come in all forms. I've personally been guilty of rushing to debug sophisticated portions of my code when the real bug was that I forgot to read in the input file. In the following section, we'll discuss how to reliably focus our attention on the portions of the program that need correction.

Then Question as Little as Possible

Suppose that we have a program and some input on which its behavior doesn’t match our expectations. The goal of debugging is to narrow our focus to as small a section of the program as possible. Once our area of interest is small enough, the value of the incorrect output that is being produced will typically tell us exactly what the bug is.

In order to catch the point at which our program diverges from expected behavior, we must inspect the intermediate state of the program. Suppose that we select some point during execution of the program and print out all values in memory. We can inspect the results manually and decide whether they match our expectations. If they don't, we know for a fact that we can focus on the first half of the program. It either contains a bug, or our expectations of what it should produce were misguided. If the intermediate state does match our expectations, we can focus on the second half of the program. It either contains a bug, or our understanding of what input it expects was incorrect.
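[ed.: this halving argument is just binary search over execution. A minimal sketch, assuming a deterministic run and a hypothetical predicate `state_ok(i)` that reruns to checkpoint i and reports whether memory there matches expectations:]

```python
def first_bad_checkpoint(n, state_ok):
    """Locate the first of n checkpoints where program state diverges
    from expectations. Requires state_ok(0) True and state_ok(n-1) False."""
    lo, hi = 0, n - 1          # invariant: lo looks right, hi looks wrong
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if state_ok(mid):
            lo = mid           # bug (or wrong expectation) lies after mid
        else:
            hi = mid           # divergence is at or before mid
    return hi                  # first checkpoint that fails inspection
```

Each probe halves the suspect region, so about log2(n) inspections suffice.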

Question Things Efficiently

For practical purposes, inspecting intermediate state usually doesn't involve a complete memory dump. We'll typically print a small number of variables and check whether they have the properties we expect of them. Verifying the behavior of a section of code involves:

1. Before it runs, inspecting all values in memory that may influence its behavior.
2. Reasoning about the expected behavior of the code.
3. After it runs, inspecting all values in memory that may be modified by the code.

Reasoning about expected behavior is typically the easiest step to perform even in the case of highly complex programs. Practically speaking, it's time-consuming and mentally strenuous to write debug output into your program and to read and decipher the resulting values. It is therefore advantageous to structure your code into functions and sections that pass a relatively small amount of information between themselves, minimizing the number of values you need to inspect.
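[ed.: the three-step check can be automated at function boundaries. A sketch; `traced` and `normalize` are invented examples of the structuring advice above, where little state crosses each boundary:]

```python
import functools

def traced(fn):
    """Print the values flowing into and out of fn on every call:
    step 1 (inputs) and step 3 (outputs) of the check above."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        print(f"-> {fn.__name__} args={args} kwargs={kwargs}")
        result = fn(*args, **kwargs)   # step 2: reason about what this should do
        print(f"<- {fn.__name__} returned {result!r}")
        return result
    return wrapper

@traced
def normalize(xs):
    total = sum(xs)
    return [x / total for x in xs]
```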

...

Finding the Right Question to Ask

We’ve assumed so far that we have available a test case on which our program behaves unexpectedly. Sometimes, getting to that point can be half the battle. There are a few different approaches to finding a test case on which our program fails. It is reasonable to attempt them in the following order:

1. Verify correctness on the sample inputs.
2. Test additional small cases generated by hand.
3. Adversarially construct corner cases by hand.
4. Re-read the problem to verify understanding of input constraints.
5. Design large cases by hand and write a program to construct them.
6. Write a generator to construct large random cases and a brute force oracle to verify outputs.
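[ed.: step 6 in miniature. The problem here (maximum pairwise sum) and both solvers are invented for illustration; the pattern is a random generator plus a slow, obviously-correct oracle checking a fast candidate:]

```python
import random

def brute_force(xs):   # O(n^2) oracle: try every pair
    n = len(xs)
    return max(xs[i] + xs[j] for i in range(n) for j in range(i + 1, n))

def fast(xs):          # candidate under test: sum of the two largest
    a, b = sorted(xs)[-2:]
    return a + b

def stress_test(trials=1000, seed=0):
    """Return a small failing case if the candidate ever disagrees
    with the oracle, else None."""
    rng = random.Random(seed)
    for _ in range(trials):
        case = [rng.randint(-100, 100) for _ in range(rng.randint(2, 8))]
        if fast(case) != brute_force(case):
            return case
    return None
```

Keeping generated cases small makes any failure easy to debug by hand.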
techtariat  dan-luu  engineering  programming  debugging  IEEE  reflection  stories  education  higher-ed  checklists  iteration-recursion  divide-and-conquer  thinking  ground-up  nitty-gritty  giants  feynman  error  input-output  structure  composition-decomposition  abstraction  systematic-ad-hoc  reduction  teaching  state  correctness  multi  oly  oly-programming  metabuch  neurons  problem-solving  wire-guided  marginal  strategy  tactics  methodology  simplification-normalization
may 2019 by nhaliday
Lateralization of brain function - Wikipedia
Language
Language functions such as grammar, vocabulary and literal meaning are typically lateralized to the left hemisphere, especially in right handed individuals.[3] While language production is left-lateralized in up to 90% of right-handers, it is more bilateral, or even right-lateralized, in approximately 50% of left-handers.[4]

Broca's area and Wernicke's area, two areas associated with speech, are located in the left cerebral hemisphere for about 95% of right-handers, but about 70% of left-handers.[5]:69

Auditory and visual processing
The processing of visual and auditory stimuli, spatial manipulation, facial perception, and artistic ability are represented bilaterally.[4] Numerical estimation, comparison and online calculation depend on bilateral parietal regions[6][7] while exact calculation and fact retrieval are associated with left parietal regions, perhaps due to their ties to linguistic processing.[6][7]

...

Depression is linked with a hyperactive right hemisphere, with evidence of selective involvement in "processing negative emotions, pessimistic thoughts and unconstructive thinking styles", as well as vigilance, arousal and self-reflection, and a relatively hypoactive left hemisphere, "specifically involved in processing pleasurable experiences" and "relatively more involved in decision-making processes".

Chaos and Order; the right and left hemispheres: https://orthosphere.wordpress.com/2018/05/23/chaos-and-order-the-right-and-left-hemispheres/
In The Master and His Emissary, Iain McGilchrist writes that a creature like a bird needs two types of consciousness simultaneously. It needs to be able to focus on something specific, such as pecking at food, while it also needs to keep an eye out for predators which requires a more general awareness of environment.

These are quite different activities. The Left Hemisphere (LH) is adapted for a narrow focus. The Right Hemisphere (RH) for the broad. The brains of human beings have the same division of function.

The LH governs the right side of the body, the RH, the left side. With birds, the left eye (RH) looks for predators, the right eye (LH) focuses on food and specifics. Since danger can take many forms and is unpredictable, the RH has to be very open-minded.

The LH is for narrow focus, the explicit, the familiar, the literal, tools, mechanism/machines and the man-made. The broad focus of the RH is necessarily more vague and intuitive and handles the anomalous, novel, metaphorical, the living and organic. The LH is high resolution but narrow, the RH low resolution but broad.

The LH exhibits unrealistic optimism and self-belief. The RH has a tendency towards depression and is much more realistic about a person’s own abilities. The LH has trouble following narratives because it has a poor sense of “wholes.” In art it favors flatness, abstract and conceptual art, black and white rather than color, simple geometric shapes, and multiple perspectives all shoved together, e.g., cubism. RH paintings, in particular, emphasize vistas with great depth of field and thus space and time,[1] emotion, figurative painting and scenes related to the life world. In music, the LH likes simple, repetitive rhythms. The RH favors melody, harmony and complex rhythms.

...

Schizophrenia is a disease of extreme LH emphasis. Since empathy, and the ability to notice emotional nuance expressed facially, vocally and bodily, are RH functions, schizophrenics tend to be paranoid and are often convinced that the real people they know have been replaced by robotic imposters. This is at least partly because they lose the ability to intuit what other people are thinking and feeling – hence other people seem robotic and suspicious to them.

Oswald Spengler’s The Decline of the West as well as McGilchrist characterize the West as awash in phenomena associated with an extreme LH emphasis. Spengler argues that Western civilization was originally much more RH (to use McGilchrist’s categories) and that all its most significant artistic (in the broadest sense) achievements were triumphs of RH accentuation.

The RH is where novel experiences and the anomalous are processed and where mathematical, and other, problems are solved. The RH is involved with the natural, the unfamiliar, the unique, emotions, the embodied, music, humor, understanding intonation and emotional nuance of speech, the metaphorical, nuance, and social relations. It has very little speech, but the RH is necessary for processing all the nonlinguistic aspects of speaking, including body language. Understanding what someone means by vocal inflection and facial expressions is an intuitive RH process rather than explicit.

...

RH is very much the center of lived experience; of the life world with all its depth and richness. The RH is “the master” from the title of McGilchrist’s book. The LH ought to be no more than the emissary; the valued servant of the RH. However, in the last few centuries, the LH, which has tyrannical tendencies, has tried to become the master. The LH is where the ego is predominantly located. In split brain patients where the LH and the RH are surgically divided (this is done sometimes in the case of epileptic patients) one hand will sometimes fight with the other. In one man’s case, one hand would reach out to hug his wife while the other pushed her away. One hand reached for one shirt, the other another shirt. Or a patient will be driving a car and one hand will try to turn the steering wheel in the opposite direction. In these cases, the “naughty” hand is usually the left hand (RH), while the patient tends to identify herself with the right hand governed by the LH. The two hemispheres have quite different personalities.

The connection between LH and ego can also be seen in the fact that the LH is competitive, contentious, and agonistic. It wants to win. It is the part of you that hates to lose arguments.

Using the metaphor of Chaos and Order, the RH deals with Chaos – the unknown, the unfamiliar, the implicit, the emotional, the dark, danger, mystery. The LH is connected with Order – the known, the familiar, the rule-driven, the explicit, and light of day. Learning something means to take something unfamiliar and making it familiar. Since the RH deals with the novel, it is the problem-solving part. Once understood, the results are dealt with by the LH. When learning a new piece on the piano, the RH is involved. Once mastered, the result becomes a LH affair. The muscle memory developed by repetition is processed by the LH. If errors are made, the activity returns to the RH to figure out what went wrong; the activity is repeated until the correct muscle memory is developed in which case it becomes part of the familiar LH.

Science is an attempt to find Order. It would not be necessary if people lived in an entirely orderly, explicit, known world. The lived context of science implies Chaos. Theories are reductive and simplifying and help to pick out salient features of a phenomenon. They are always partial truths, though some are more partial than others. The alternative to a certain level of reductionism or partialness would be to simply reproduce the world which of course would be both impossible and unproductive. The test for whether a theory is sufficiently non-partial is whether it is fit for purpose and whether it contributes to human flourishing.

...

Analytic philosophers pride themselves on trying to do away with vagueness. To do so, they tend to jettison context which cannot be brought into fine focus. However, in order to understand things and discern their meaning, it is necessary to have the big picture, the overview, as well as the details. There is no point in having details if the subject does not know what they are details of. Such philosophers also tend to leave themselves out of the picture even when what they are thinking about has reflexive implications. John Locke, for instance, tried to banish the RH from reality. All phenomena having to do with subjective experience he deemed unreal and once remarked about metaphors, a RH phenomenon, that they are “perfect cheats.” Analytic philosophers tend to check the logic of the words on the page and not to think about what those words might say about them. The trick is for them to recognize that they and their theories, which exist in minds, are part of reality too.

The RH test for whether someone actually believes something can be found by examining his actions. If he finds that he must regard his own actions as free, and, in order to get along with other people, must also attribute free will to them and treat them as free agents, then he effectively believes in free will – no matter his LH theoretical commitments.

...

We do not know the origin of life. We do not know how or even if consciousness can emerge from matter. We do not know the nature of 96% of the matter of the universe. Clearly all these things exist. They can provide the subject matter of theories but they continue to exist as theorizing ceases or theories change. Not knowing how something is possible is irrelevant to its actual existence. An inability to explain something is ultimately neither here nor there.

If thought begins and ends with the LH, then thinking has no content – content being provided by experience (RH), and skepticism and nihilism ensue. The LH spins its wheels self-referentially, never referring back to experience. Theory assumes such primacy that it will simply outlaw experiences and data inconsistent with it; a profoundly wrong-headed approach.

...

Gödel’s Theorem proves that not everything true can be proven to be true. This means there is an ineradicable role for faith, hope and intuition in every moderately complex human intellectual endeavor. There is no one set of consistent axioms from which all other truths can be derived.

Alan Turing’s proof of the halting problem proves that there is no effective procedure for finding effective procedures. Without a mechanical decision procedure, (LH), when it comes to … [more]
gnon  reflection  books  summary  review  neuro  neuro-nitgrit  things  thinking  metabuch  order-disorder  apollonian-dionysian  bio  examples  near-far  symmetry  homo-hetero  logic  inference  intuition  problem-solving  analytical-holistic  n-factor  europe  the-great-west-whale  occident  alien-character  detail-architecture  art  theory-practice  philosophy  being-becoming  essence-existence  language  psychology  cog-psych  egalitarianism-hierarchy  direction  reason  learning  novelty  science  anglo  anglosphere  coarse-fine  neurons  truth  contradiction  matching  empirical  volo-avolo  curiosity  uncertainty  theos  axioms  intricacy  computation  analogy  essay  rhetoric  deep-materialism  new-religion  knowledge  expert-experience  confidence  biases  optimism  pessimism  realness  whole-partial-many  theory-of-mind  values  competition  reduction  subjective-objective  communication  telos-atelos  ends-means  turing  fiction  increase-decrease  innovation  creative  thick-thin  spengler  multi  ratty  hanson  complex-systems  structure  concrete  abstraction  network-s
september 2018 by nhaliday
Does left-handedness occur more in certain ethnic groups than others?
Yes. There are some aboriginal tribes in Australia in which about 70% of the population is left-handed. It’s also more than 50% for some South American tribes.

The reason is the same in both cases: a recent past of extreme aggression with other tribes. Left-handedness is caused by recessive genes, but being left-handed is a boost in hand-to-hand combat against a right-handed opponent, who has usually trained extensively against other right-handers (right-handedness being genetically dominant, right-handers are the majority in most human populations) and so lacks experience against left-handers. Should a particular tribe enter extended periods of war, its proportion of left-handers will naturally rise. As the enemy tribe's proportion of left-handed people rises as well, there comes a point at which the natural fighting advantage dissipates, and it can only climb higher if the tribe continuously finds new groups to fight, who are also majority right-handed.

...

So the natural question is: given their advantage in 1-on-1 combat, why doesn’t the percentage grow all the way up to 50% or slightly higher? Because there are COSTS associated with being left-handed: our neural network is apparently pre-wired towards right-handedness, which shows up as a reduced life expectancy for lefties. So a mathematical model was proposed to explain their distribution among different societies.
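[ed.: a toy version of that frequency-dependent model. All parameters are invented, tuned so the equilibrium lands near the observed ~10%; the combat advantage shrinks as left-handers become common, while a fixed fitness cost pushes back:]

```python
def lefty_equilibrium(combat_gain=0.10, fitness_cost=0.08, p0=0.5, steps=2000):
    """Relative fitness of left-handers at frequency p:
        w(p) = 1 + combat_gain * (1 - 2*p) - fitness_cost
    The advantage term vanishes at p = 0.5; the cost holds the
    equilibrium below it, at p* = (1 - fitness_cost/combat_gain) / 2."""
    p = p0
    for _ in range(steps):
        w = 1 + combat_gain * (1 - 2 * p) - fitness_cost
        p = p * w / (p * w + (1 - p))   # discrete replicator update
    return p
```

With these made-up numbers the population settles at p* = (1 - 0.08/0.10) / 2 = 0.10.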

THE FIGHTING HYPOTHESIS: STABILITY OF POLYMORPHISM IN HUMAN HANDEDNESS

http://gepv.univ-lille1.fr/downl...

Further, it appears the average rate of left-handedness in humans (~10%) hasn’t changed in thousands of years (judging by cave paintings of hands)

Frequency-dependent maintenance of left handedness in humans.

Handedness frequency over more than 10,000 years

[ed.: Compare with Julius Evola's "left-hand path".]
q-n-a  qra  trivia  cocktail  farmers-and-foragers  history  antiquity  race  demographics  bio  EEA  evolution  context  peace-violence  war  ecology  EGT  unintended-consequences  game-theory  equilibrium  anthropology  cultural-dynamics  sapiens  data  database  trends  cost-benefit  strategy  time-series  art  archaeology  measurement  oscillation  pro-rata  iteration-recursion  gender  male-variability  cliometrics  roots  explanation  explanans  correlation  causation  branches
july 2018 by nhaliday
Theory of Self-Reproducing Automata - John von Neumann
Fourth Lecture: THE ROLE OF HIGH AND OF EXTREMELY HIGH COMPLICATION

Comparisons between computing machines and the nervous systems. Estimates of size for computing machines, present and near future.

Estimates for size for the human central nervous system. Excursus about the “mixed” character of living organisms. Analog and digital elements. Observations about the “mixed” character of all componentry, artificial as well as natural. Interpretation of the position to be taken with respect to these.

Evaluation of the discrepancy in size between artificial and natural automata. Interpretation of this discrepancy in terms of physical factors. Nature of the materials used.

The probability of the presence of other intellectual factors. The role of complication and the theoretical penetration that it requires.

Questions of reliability and errors reconsidered. Probability of individual errors and length of procedure. Typical lengths of procedure for computing machines and for living organisms--that is, for artificial and for natural automata. Upper limits on acceptable probability of error in individual operations. Compensation by checking and self-correcting features.

Differences of principle in the way in which errors are dealt with in artificial and in natural automata. The “single error” principle in artificial automata. Crudeness of our approach in this case, due to the lack of adequate theory. More sophisticated treatment of this problem in natural automata: The role of the autonomy of parts. Connections between this autonomy and evolution.

- 10^10 neurons in the brain vs. 10^4 vacuum tubes in the largest computer at the time
- machines are faster: ~5 ms from neuron potential to neuron potential vs. 10^-3 ms for vacuum tubes

https://en.wikipedia.org/wiki/John_von_Neumann#Computing
pdf  article  papers  essay  nibble  math  cs  computation  bio  neuro  neuro-nitgrit  scale  magnitude  comparison  acm  von-neumann  giants  thermo  phys-energy  speed  performance  time  density  frequency  hardware  ems  efficiency  dirty-hands  street-fighting  fermi  estimate  retention  physics  interdisciplinary  multi  wiki  links  people  🔬  atoms  duplication  iteration-recursion  turing  complexity  measure  nature  technology  complex-systems  bits  information-theory  circuits  robust  structure  composition-decomposition  evolution  mutation  axioms  analogy  thinking  input-output  hi-order-bits  coding-theory  flexibility  rigidity  automata-languages
april 2018 by nhaliday
Complexity no Bar to AI - Gwern.net
Critics of AI risk suggest that diminishing returns to computing (formalized asymptotically) mean AI will be weak; this argument relies on a large number of questionable premises, ignores additional resources, constant factors, and nonlinear returns to small intelligence advantages, and is highly unlikely to be correct. (computer science, transhumanism, AI, R)
created: 1 June 2014; modified: 01 Feb 2018; status: finished; confidence: likely; importance: 10
ratty  gwern  analysis  faq  ai  risk  speedometer  intelligence  futurism  cs  computation  complexity  tcs  linear-algebra  nonlinearity  convexity-curvature  average-case  adversarial  article  time-complexity  singularity  iteration-recursion  magnitude  multiplicative  lower-bounds  no-go  performance  hardware  humanity  psychology  cog-psych  psychometrics  iq  distribution  moments  complement-substitute  hanson  ems  enhancement  parable  detail-architecture  universalism-particularism  neuro  ai-control  environment  climate-change  threat-modeling  security  theory-practice  hacker  academia  realness  crypto  rigorous-crypto  usa  government
april 2018 by nhaliday
The Hanson-Yudkowsky AI-Foom Debate - Machine Intelligence Research Institute
How Deviant Recent AI Progress Lumpiness?: http://www.overcomingbias.com/2018/03/how-deviant-recent-ai-progress-lumpiness.html
I seem to disagree with most people working on artificial intelligence (AI) risk. While with them I expect rapid change once AI is powerful enough to replace most all human workers, I expect this change to be spread across the world, not concentrated in one main localized AI system. The efforts of AI risk folks to design AI systems whose values won’t drift might stop global AI value drift if there is just one main AI system. But doing so in a world of many AI systems at similar abilities levels requires strong global governance of AI systems, which is a tall order anytime soon. Their continued focus on preventing single system drift suggests that they expect a single main AI system.

The main reason that I understand to expect relatively local AI progress is if AI progress is unusually lumpy, i.e., arriving in unusually fewer larger packages rather than in the usual many smaller packages. If one AI team finds a big lump, it might jump way ahead of the other teams.

However, we have a vast literature on the lumpiness of research and innovation more generally, which clearly says that usually most of the value in innovation is found in many small innovations. We have also seen this so far in computer science (CS) and AI, even if there have been historical examples where much value was found in particular big innovations, such as nuclear weapons or the origin of humans.

Apparently many people associated with AI risk, including the star machine learning (ML) researchers that they often idolize, find it intuitively plausible that AI and ML progress is exceptionally lumpy. Such researchers often say, “My project is ‘huge’, and will soon do it all!” A decade ago my ex-co-blogger Eliezer Yudkowsky and I argued here on this blog about our differing estimates of AI progress lumpiness. He recently offered Alpha Go Zero as evidence of AI lumpiness:

...

In this post, let me give another example (beyond two big lumps in a row) of what could change my mind. I offer a clear observable indicator, for which data should be available now: deviant citation lumpiness in recent ML research. One standard measure of research impact is citations; bigger, lumpier developments gain more citations than smaller ones. And it turns out that the lumpiness of citations is remarkably constant across research fields! See this March 3 paper in Science:

I Still Don’t Get Foom: http://www.overcomingbias.com/2014/07/30855.html
All of which makes it look like I’m the one with the problem; everyone else gets it. Even so, I’m gonna try to explain my problem again, in the hope that someone can explain where I’m going wrong. Here goes.

“Intelligence” just means an ability to do mental/calculation tasks, averaged over many tasks. I’ve always found it plausible that machines will continue to do more kinds of mental tasks better, and eventually be better at pretty much all of them. But what I’ve found it hard to accept is a “local explosion.” This is where a single machine, built by a single project using only a tiny fraction of world resources, goes in a short time (e.g., weeks) from being so weak that it is usually beat by a single human with the usual tools, to so powerful that it easily takes over the entire world. Yes, smarter machines may greatly increase overall economic growth rates, and yes such growth may be uneven. But this degree of unevenness seems implausibly extreme. Let me explain.

If we count by economic value, humans now do most of the mental tasks worth doing. Evolution has given us a brain chock-full of useful well-honed modules. And the fact that most mental tasks require the use of many modules is enough to explain why some of us are smarter than others. (There’d be a common “g” factor in task performance even with independent module variation.) Our modules aren’t that different from those of other primates, but because ours are different enough to allow lots of cultural transmission of innovation, we’ve out-competed other primates handily.

We’ve had computers for over seventy years, and have slowly built up libraries of software modules for them. Like brains, computers do mental tasks by combining modules. An important mental task is software innovation: improving these modules, adding new ones, and finding new ways to combine them. Ideas for new modules are sometimes inspired by the modules we see in our brains. When an innovation team finds an improvement, they usually sell access to it, which gives them resources for new projects, and lets others take advantage of their innovation.

...

In Bostrom’s graph above the line for an initially small project and system has a much higher slope, which means that it becomes in a short time vastly better at software innovation. Better than the entire rest of the world put together. And my key question is: how could it plausibly do that? Since the rest of the world is already trying the best it can to usefully innovate, and to abstract to promote such innovation, what exactly gives one small project such a huge advantage to let it innovate so much faster?

...

In fact, most software innovation seems to be driven by hardware advances, instead of innovator creativity. Apparently, good ideas are available but must usually wait until hardware is cheap enough to support them.

Yes, sometimes architectural choices have wider impacts. But I was an artificial intelligence researcher for nine years, ending twenty years ago, and I never saw an architecture choice make a huge difference, relative to other reasonable architecture choices. For most big systems, overall architecture matters a lot less than getting lots of detail right. Researchers have long wandered the space of architectures, mostly rediscovering variations on what others found before.

Some hope that a small project could be much better at innovation because it specializes in that topic, and much better understands new theoretical insights into the basic nature of innovation or intelligence. But I don’t think those are actually topics where one can usefully specialize much, or where we’ll find much useful new theory. To be much better at learning, the project would instead have to be much better at hundreds of specific kinds of learning. Which is very hard to do in a small project.

What does Bostrom say? Alas, not much. He distinguishes several advantages of digital over human minds, but all software shares those advantages. Bostrom also distinguishes five paths: better software, brain emulation (i.e., ems), biological enhancement of humans, brain-computer interfaces, and better human organizations. He doesn’t think interfaces would work, and sees organizations and better biology as only playing supporting roles.

...

Similarly, while you might imagine someday standing in awe in front of a super intelligence that embodies all the power of a new age, superintelligence just isn’t the sort of thing that one project could invent. As “intelligence” is just the name we give to being better at many mental tasks by using many good mental modules, there’s no one place to improve it. So I can’t see a plausible way one project could increase its intelligence vastly faster than could the rest of the world.

Takeoff speeds: https://sideways-view.com/2018/02/24/takeoff-speeds/
Futurists have argued for years about whether the development of AGI will look more like a breakthrough within a small group (“fast takeoff”), or a continuous acceleration distributed across the broader economy or a large firm (“slow takeoff”).

I currently think a slow takeoff is significantly more likely. This post explains some of my reasoning and why I think it matters. Mostly the post lists arguments I often hear for a fast takeoff and explains why I don’t find them compelling.

(Note: this is not a post about whether an intelligence explosion will occur. That seems very likely to me. Quantitatively I expect it to go along these lines. So e.g. while I disagree with many of the claims and assumptions in Intelligence Explosion Microeconomics, I don’t disagree with the central thesis or with most of the arguments.)
ratty  lesswrong  subculture  miri-cfar  ai  risk  ai-control  futurism  books  debate  hanson  big-yud  prediction  contrarianism  singularity  local-global  speed  speedometer  time  frontier  distribution  smoothness  shift  pdf  economics  track-record  abstraction  analogy  links  wiki  list  evolution  mutation  selection  optimization  search  iteration-recursion  intelligence  metameta  chart  analysis  number  ems  coordination  cooperate-defect  death  values  formal-values  flux-stasis  philosophy  farmers-and-foragers  malthus  scale  studying  innovation  insight  conceptual-vocab  growth-econ  egalitarianism-hierarchy  inequality  authoritarianism  wealth  near-far  rationality  epistemic  biases  cycles  competition  arms  zero-positive-sum  deterrence  war  peace-violence  winner-take-all  technology  moloch  multi  plots  research  science  publishing  humanity  labor  marginal  urban-rural  structure  composition-decomposition  complex-systems  gregory-clark  decentralized  heavy-industry  magnitude  multiplicative  endogenous-exogenous  models  uncertainty  decision-theory  time-prefer
april 2018 by nhaliday
Eternity in six hours: intergalactic spreading of intelligent life and sharpening the Fermi paradox
We do this by demonstrating that traveling between galaxies – indeed even launching a colonisation project for the entire reachable universe – is a relatively simple task for a star-spanning civilization, requiring modest amounts of energy and resources. We start by demonstrating that humanity itself could likely accomplish such a colonisation project in the foreseeable future, should we want to, and then demonstrate that there are millions of galaxies that could have reached us by now, using similar methods. This results in a considerable sharpening of the Fermi paradox.
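The paper’s central claim is quantitative, and its flavor can be checked with a back-of-envelope calculation. The probe mass and cruise speed below are illustrative assumptions, not the paper’s figures:

```python
# Relativistic kinetic energy needed to launch one small colonization probe.
c = 299_792_458.0            # speed of light, m/s
probe_mass_kg = 30.0         # assumed probe mass (illustrative)
v = 0.5 * c                  # assumed cruise speed (illustrative)

gamma = 1.0 / (1.0 - (v / c) ** 2) ** 0.5
kinetic_j = (gamma - 1.0) * probe_mass_kg * c ** 2

solar_luminosity_w = 3.8e26  # total power output of the Sun, W
print(f"launch energy ~ {kinetic_j:.2e} J "
      f"= {kinetic_j / solar_luminosity_w:.1e} s of total solar output")
```

Even at half the speed of light, a 30 kg probe needs only about a nanosecond-equivalent of the Sun’s total output, which is the sense in which a universe-wide colonization project is energetically “modest” for a star-spanning civilization.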
pdf  study  article  essay  anthropic  fermi  space  expansionism  bostrom  ratty  philosophy  xenobio  ideas  threat-modeling  intricacy  time  civilization  🔬  futurism  questions  paradox  risk  physics  engineering  interdisciplinary  frontier  technology  volo-avolo  dirty-hands  ai  automation  robotics  duplication  iteration-recursion  von-neumann  data  scale  magnitude  skunkworks  the-world-is-just-atoms  hard-tech  ems  bio  bits  speedometer  nature  model-organism  mechanics  phys-energy  relativity  electromag  analysis  spock  nitty-gritty  spreading  hanson  street-fighting  speed  gedanken  nibble  miri-cfar  org:ngo
march 2018 by nhaliday
Who We Are | West Hunter
I’m going to review David Reich’s new book, Who We Are and How We Got Here. Extensively: in a sense I’ve already been doing this for a long time. Probably there will be a podcast. The GoFundMe link is here. You can also send money via Paypal (Use the donate button), or bitcoins to 1Jv4cu1wETM5Xs9unjKbDbCrRF2mrjWXr5. In-kind donations, such as orichalcum or mithril, are always appreciated.

This is the book about the application of ancient DNA to prehistory and history.

height difference between northern and southern europeans: https://westhunt.wordpress.com/2018/03/29/who-we-are-1/
mixing, genocide of males, etc.: https://westhunt.wordpress.com/2018/03/29/who-we-are-2-purity-of-essence/
rapid change in polygenic traits (appearance by Kevin Mitchell and funny jab at Brad Delong ("regmonkey")): https://westhunt.wordpress.com/2018/03/30/rapid-change-in-polygenic-traits/
schiz, bipolar, and IQ: https://westhunt.wordpress.com/2018/03/30/rapid-change-in-polygenic-traits/#comment-105605
Dan Graur being dumb: https://westhunt.wordpress.com/2018/04/02/the-usual-suspects/
prediction of neanderthal mixture and why: https://westhunt.wordpress.com/2018/04/03/who-we-are-3-neanderthals/
New Guineans tried to use Denisovan admixture to avoid UN sanctions (by "not being human"): https://westhunt.wordpress.com/2018/04/04/who-we-are-4-denisovans/
also some commentary on decline of Out-of-Africa, including:
"Homo naledi, a small-brained hominin identified from recently discovered fossils in South Africa, appears to have hung around way later than you’d expect (up to 200,000 years ago, maybe later) than would be the case if modern humans had occupied that area back then. To be blunt, we would have eaten them."

Live Not By Lies: https://westhunt.wordpress.com/2018/04/08/live-not-by-lies/
Next he slams people that suspect that upcoming genetic analysis will, in most cases, confirm traditional stereotypes about race – the way the world actually looks.

The people Reich dumps on are saying perfectly reasonable things. He criticizes Henry Harpending for saying that he’d never seen an African with a hobby. Of course, Henry had actually spent time in Africa, and that’s what he’d seen. The implication is that people in Malthusian farming societies – which Africa was not – were selected to want to work, even where there was no immediate necessity to do so. Thus hobbies, something like a gerbil running in an exercise wheel.

He criticized Nicholas Wade, for saying that different races have different dispositions. Wade’s book wasn’t very good, but of course personality varies by race: Darwin certainly thought so. You can see differences at birth. Cover a baby’s nose with a cloth: Chinese and Navajo babies quietly breathe through their mouth, European and African babies fuss and fight.

Then he attacks Watson, for asking when Reich was going to look at Jewish genetics – the kind that has led to greater-than-average intelligence. Watson was undoubtedly trying to get a rise out of Reich, but it’s a perfectly reasonable question. Ashkenazi Jews are smarter than the average bear and everybody knows it. Selection is the only possible explanation, and the conditions in the Middle Ages – white-collar job specialization and a high degree of endogamy – were just what the doctor ordered.

Watson’s a prick, but he’s a great prick, and what he said was correct. Henry was a prince among men, and Nick Wade is a decent guy as well. Reich is totally out of line here: he’s being a dick.

Now Reich may be trying to burnish his anti-racist credentials, which surely need some renewal after having pointed out that race as colloquially used is pretty reasonable, there’s no reason pops can’t be different, people that said otherwise (like Lewontin, Gould, Montagu, etc.) were lying, Aryans conquered Europe and India, while we’re tied to the train tracks with scary genetic results coming straight at us. I don’t care: he’s being a weasel, slandering the dead and abusing the obnoxious old genius who laid the foundations of his field. Reich will also get old someday: perhaps he too will someday lose track of all the nonsense he’s supposed to say, or just stop caring. Maybe he already has… I’m pretty sure that Reich does not like lying – which is why he wrote this section of the book (not at all logically necessary for his exposition of the ancient DNA work) – but the complex juggling of lies and truth required to get past the demented gatekeepers of our society may not be his forte. It has been said that if it was discovered that someone in the business was secretly an android, David Reich would be the prime suspect. No Talleyrand he.

https://westhunt.wordpress.com/2018/04/12/who-we-are-6-the-americas/
The population that accounts for the vast majority of Native American ancestry, which we will call Amerinds, came into existence somewhere in northern Asia. It was formed from a mix of Ancient North Eurasians and a population related to the Han Chinese – about 40% ANE and 60% proto-Chinese. It looks as if most of the paternal ancestry was from the ANE, while almost all of the maternal ancestry was from the proto-Han. [Aryan-Transpacific ?!?] This formation story – ANE boys, East-end girls – is similar to the formation story for the Indo-Europeans.

https://westhunt.wordpress.com/2018/04/18/who-we-are-7-africa/
In some ways, on some questions, learning more from genetics has left us less certain. At this point we really don’t know where anatomically modern humans originated. Greater genetic variety in sub-Saharan Africa has been traditionally considered a sign that AMH originated there, but it is possible that we originated elsewhere, perhaps in North Africa or the Middle East, and gained extra genetic variation when we moved into sub-Saharan Africa and mixed with various archaic groups that already existed. One consideration is that finding recent archaic admixture in a population may well be a sign that modern humans didn’t arise in that region (like language substrates) – which makes South Africa and West Africa look less likely. The long-continued existence of Homo naledi in South Africa suggests that modern humans may not have been there for all that long – if we had co-existed with Homo naledi, they probably wouldn’t have lasted long. The oldest known skull that is (probably) AMH was recently found in Morocco, while modern human remains, already known from about 100,000 years ago in Israel, have recently been found in northern Saudi Arabia.

Meanwhile, work by Nick Patterson suggests that modern humans were formed by a fusion between two long-isolated populations a bit less than half a million years ago.

So: genomics has made the recent history of Africa pretty clear. Bantu agriculturalists expanded and replaced hunter-gatherers, farmers and herders from the Middle East settled North Africa, Egypt and northeast Africa, while Nilotic herdsmen expanded south from the Sudan. There are traces of earlier patterns and peoples, but today, only traces. As for questions further back in time, such as the origins of modern humans – we thought we knew, and now we know we don’t. But that’s progress.

https://westhunt.wordpress.com/2018/04/18/reichs-journey/
David Reich’s professional path must have shaped his perspective on the social sciences. Look at the record. He starts his professional career examining the role of genetics in the elevated prostate cancer risk seen in African-American men. Various social-science fruitcakes opposed his even looking at the question of ancestry (African vs European). But they were wrong: certain African-origin alleles explain the increased risk. Anthropologists (and human geneticists) were sure (based on nothing) that modern humans hadn’t interbred with Neanderthals – but of course that happened. Anthropologists and archaeologists knew that Gustaf Kossinna couldn’t have been right when he said that widespread material culture corresponded to widespread ethnic groups, and that migration was the primary explanation for changes in the archaeological record – but he was right. They knew that the Indo-European languages just couldn’t have been imposed by fire and sword – but Reich’s work proved them wrong. Lots of people – the usual suspects plus Hindu nationalists – were sure that the AIT (Aryan Invasion Theory) was wrong, but it looks pretty good today.

Some sociologists believed that caste in India was somehow imposed or significantly intensified by the British – but it turns out that most jatis have been almost perfectly endogamous for two thousand years or more…

It may be that Reich doesn’t take these guys too seriously anymore. Why should he?

varnas, jatis, aryan invasion theory: https://westhunt.wordpress.com/2018/04/22/who-we-are-8-india/

europe and EEF+WHG+ANE: https://westhunt.wordpress.com/2018/05/01/who-we-are-9-europe/

https://www.nationalreview.com/2018/03/book-review-david-reich-human-genes-reveal-history/
The massive mixture events that occurred in the recent past to give rise to Europeans and South Asians, to name just two groups, were likely “male mediated.” That’s another way of saying that men on the move took local women as brides or concubines. In the New World there are many examples of this, whether it be among African Americans, where most European ancestry seems to come through men, or in Latin America, where conquistadores famously took local women as paramours. Both of these examples are disquieting, and hint at the deep structural roots of patriarchal inequality and social subjugation that form the backdrop for the emergence of many modern peoples.
west-hunter  scitariat  books  review  sapiens  anthropology  genetics  genomics  history  antiquity  iron-age  world  europe  gavisti  aDNA  multi  politics  culture-war  kumbaya-kult  social-science  academia  truth  westminster  environmental-effects  embodied  pop-diff  nordic  mediterranean  the-great-west-whale  germanic  the-classics  shift  gene-flow  homo-hetero  conquest-empire  morality  diversity  aphorism  migration  migrant-crisis  EU  africa  MENA  gender  selection  speed  time  population-genetics  error  concrete  econotariat  economics  regression  troll  lol  twitter  social  media  street-fighting  methodology  robust  disease  psychiatry  iq  correlation  usa  obesity  dysgenics  education  track-record  people  counterexample  reason  thinking  fisher  giants  old-anglo  scifi-fantasy  higher-ed  being-right  stories  reflection  critique  multiplicative  iteration-recursion  archaics  asia  developing-world  civil-liberty  anglo  oceans  food  death  horror  archaeology  gnxp  news  org:mag  right-wing  age-of-discovery  latin-america  ea
march 2018 by nhaliday
Baldwin effect - Wikipedia
If animals entered a new environment—or their old environment rapidly changed—those that could flexibly respond by learning new behaviors or by ontogenetically adapting would be naturally preserved. This saved remnant would, over several generations, have the opportunity to exhibit spontaneously congenital variations similar to their acquired traits and have these variations naturally selected. It would look as though the acquired traits had sunk into the hereditary substance in a Lamarckian fashion, but the process would really be neo-Darwinian.

Selected offspring would tend to have an increased capacity for learning new skills rather than being confined to genetically coded, relatively fixed abilities. In effect, it places emphasis on the fact that the sustained behavior of a species or group can shape the evolution of that species. The "Baldwin effect" is better understood in evolutionary developmental biology literature as a scenario in which a character or trait change occurring in an organism as a result of its interaction with its environment becomes gradually assimilated into its developmental genetic or epigenetic repertoire (Simpson, 1953; Newman, 2002). In the words of Daniel Dennett,[2]

Thanks to the Baldwin effect, species can be said to pretest the efficacy of particular different designs by phenotypic (individual) exploration of the space of nearby possibilities. If a particularly winning setting is thereby discovered, this discovery will create a new selection pressure: organisms that are closer in the adaptive landscape to that discovery will have a clear advantage over those more distant.

An update to the Baldwin Effect was developed by Jean Piaget, Paul Weiss, and Conrad Waddington in the 1960s–1970s. This new version included an explicit role for the social in shaping subsequent natural change in humans (both evolutionary and developmental), with reference to alterations of selection pressures.[3]

...

Suppose a species is threatened by a new predator and there is a behavior that makes it more difficult for the predator to kill individuals of the species. Individuals who learn the behavior more quickly will obviously be at an advantage. As time goes on, the ability to learn the behavior will improve (by genetic selection), and at some point it will seem to be an instinct.
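Hinton and Nowlan demonstrated exactly this predator-style scenario in a minimal genetic-algorithm simulation (1987). The sketch below follows their setup loosely; the population size, allele frequencies, and fitness scaling are illustrative choices, not theirs:

```python
import math
import random

random.seed(1)
L, POP, TRIALS, GENS = 20, 200, 1000, 30

def fitness(genome):
    """All-ones target. A fixed wrong allele (0) means no credit; plastic
    alleles (None) can be set by trial-and-error learning, and faster
    learners score higher: learning smooths the fitness landscape."""
    if 0 in genome:
        return 1.0
    p = 0.5 ** genome.count(None)      # chance of guessing all plastic loci
    u = 1.0 - random.random()          # uniform in (0, 1]
    t = 0.0 if p == 1.0 else math.log(u) / math.log(1.0 - p)
    if t >= TRIALS:
        return 1.0                     # never learned it in time
    return 1.0 + 19.0 * (TRIALS - t) / TRIALS

def new_genome():
    # illustrative allele mix: mostly correct or plastic, a few fixed-wrong
    return random.choices([1, 0, None], weights=[45, 10, 45], k=L)

pop = [new_genome() for _ in range(POP)]
for _ in range(GENS):
    weights = [fitness(ind) for ind in pop]
    def child():
        a, b = random.choices(pop, weights, k=2)  # fitness-proportional mating
        k = random.randrange(L)
        return a[:k] + b[k:]                      # single-point crossover
    pop = [child() for _ in range(POP)]

ones = sum(ind.count(1) for ind in pop) / (POP * L)
zeros = sum(ind.count(0) for ind in pop) / (POP * L)
print(f"hard-wired correct: {ones:.2f}, hard-wired wrong: {zeros:.2f}")
```

With these numbers, wrong alleles are purged within a few generations and plastic alleles are gradually replaced by hard-wired correct ones: the acquired behavior “sinks into the hereditary substance” with no Lamarckian inheritance anywhere in the model.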
concept  wiki  reference  bio  evolution  learning  instinct  culture  cycles  intricacy  dennett  big-peeps  cultural-dynamics  anthropology  sapiens  flexibility  deep-materialism  new-religion  darwinian  evopsych  iteration-recursion
march 2018 by nhaliday
Prisoner's dilemma - Wikipedia
caveat to result below:
An extension of the IPD is an evolutionary stochastic IPD, in which the relative abundance of particular strategies is allowed to change, with more successful strategies relatively increasing. This process may be accomplished by having less successful players imitate the more successful strategies, or by eliminating less successful players from the game, while multiplying the more successful ones. It has been shown that unfair ZD strategies are not evolutionarily stable. The key intuition is that an evolutionarily stable strategy must not only be able to invade another population (which extortionary ZD strategies can do) but must also perform well against other players of the same type (which extortionary ZD players do poorly, because they reduce each other's surplus).[14]

Theory and simulations confirm that beyond a critical population size, ZD extortion loses out in evolutionary competition against more cooperative strategies, and as a result, the average payoff in the population increases when the population is bigger. In addition, there are some cases in which extortioners may even catalyze cooperation by helping to break out of a face-off between uniform defectors and win–stay, lose–switch agents.[8]
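Press and Dyson’s result is easy to reproduce empirically. The sketch below pits their published “extortion-3” memory-one strategy (cooperation probabilities 11/13, 1/2, 7/26, 0 under the standard payoffs T=5, R=3, P=1, S=0) against an unconditional cooperator; the match length and seed are arbitrary choices:

```python
import random

# Row player's payoff for (my move, their move): T=5, R=3, P=1, S=0
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def memory_one(p_cc, p_cd, p_dc, p_dd):
    """Memory-one strategy: probability of cooperating given last round's
    (my move, their move); cooperates on the first round."""
    table = {('C', 'C'): p_cc, ('C', 'D'): p_cd, ('D', 'C'): p_dc, ('D', 'D'): p_dd}
    return lambda last: 'C' if last is None or random.random() < table[last] else 'D'

extortioner = memory_one(11/13, 1/2, 7/26, 0)   # Press-Dyson, extortion factor 3
cooperator  = memory_one(1, 1, 1, 1)            # always cooperates

def match(s1, s2, rounds=200_000):
    last1 = last2 = None
    total1 = total2 = 0
    for _ in range(rounds):
        m1, m2 = s1(last1), s2(last2)
        total1 += PAYOFF[(m1, m2)]
        total2 += PAYOFF[(m2, m1)]
        last1, last2 = (m1, m2), (m2, m1)
    return total1 / rounds, total2 / rounds

random.seed(0)
x, y = match(extortioner, cooperator)
print(f"extortioner {x:.2f}, cooperator {y:.2f}")
```

The extortioner’s surplus over the punishment payoff stays pegged at three times the cooperator’s, which is the sense in which IPD is an ultimatum game; but two extortioners facing each other collapse toward mutual defection, which is why ZD extortion is not evolutionarily stable.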

https://alfanl.com/2018/04/12/defection/
Nature boils down to a few simple concepts.

Haters will point out that I oversimplify. The haters are wrong. I am good at saying a lot with few words. Nature indeed boils down to a few simple concepts.

In life, you can either cooperate or defect.

Used to be that defection was the dominant strategy, say in the time when the Roman empire started to crumble. Everybody complained about everybody and in the end nothing got done. Then came Jesus, who told people to be loving and cooperative, and boom: 1800 years later we get the industrial revolution.

Because of Jesus we now find ourselves in a situation where cooperation is the dominant strategy. A normie engages in a ton of cooperation: with the tax collector who wants more and more of his money, with schools who want more and more of his kid’s time, with media who wants him to repeat more and more party lines, with the Zeitgeist of the Collective Spirit of the People’s Progress Towards a New Utopia. Essentially, our normie is cooperating himself into a crumbling Western empire.

Turns out that if everyone blindly cooperates, parasites sprout up like weeds until defection once again becomes the standard.

The point of a post-Christian religion is to once again create conditions for the kind of cooperation that led to the industrial revolution. This necessitates throwing out undead Christianity: you do not blindly cooperate. You cooperate with people that cooperate with you, you defect on people that defect on you. Christianity mixed with Darwinism. God and Gnon meet.

This also means we re-establish spiritual hierarchy, which, like regular hierarchy, is a prerequisite for cooperation. It is this hierarchical cooperation that turns a household into a force to be reckoned with, that allows a group of men to unite as a front against their enemies, that allows a tribe to conquer the world. Remember: Scientology bullied the Cathedral’s tax department into submission.

With a functioning hierarchy, men still gossip, lie and scheme, but they will do so in whispers behind closed doors. In your face they cooperate and contribute to the group’s wellbeing because incentives are thus that contributing to group wellbeing heightens status.

Without a functioning hierarchy, men gossip, lie and scheme, but they do so in your face, and they tell you that you are positively deluded for accusing them of gossiping, lying and scheming. Seeds will not sprout in such ground.

Spiritual dominance is established in the same way any sort of dominance is established: fought for, taken. But the fight is ritualistic. You can’t force spiritual dominance if no one listens, or if you are silenced the ritual is not allowed to happen.

If one of our priests is forbidden from establishing spiritual dominance, that is a sure sign an enemy priest is in better control and has vested interest in preventing you from establishing spiritual dominance.

They defect on you, you defect on them. Let them suffer the consequences of enemy priesthood, among others characterized by the annoying tendency that very little is said with very many words.

https://contingentnotarbitrary.com/2018/04/14/rederiving-christianity/
To recap, we started with a secular definition of Logos and noted that its telos is existence. Given human nature, game theory and the power of cooperation, the highest expression of that telos is freely chosen universal love, tempered by constant vigilance against defection while maintaining compassion for the defectors and forgiving those who repent. In addition, we must know the telos in order to fulfill it.

In Christian terms, looks like we got over half of the Ten Commandments (know Logos for the First, don’t defect or tempt yourself to defect for the rest), the importance of free will, the indestructibility of evil (group cooperation vs individual defection), loving the sinner and hating the sin (with defection as the sin), forgiveness (with conditions), and love and compassion toward all, assuming only secular knowledge and that it’s good to exist.

Iterated Prisoner's Dilemma is an Ultimatum Game: http://infoproc.blogspot.com/2012/07/iterated-prisoners-dilemma-is-ultimatum.html
The history of IPD shows that bounded cognition prevented the dominant strategies from being discovered for over 60 years, despite significant attention from game theorists, computer scientists, economists, evolutionary biologists, etc. Press and Dyson have shown that IPD is effectively an ultimatum game, which is very different from the Tit for Tat stories told by generations of people who worked on IPD (Axelrod, Dawkins, etc., etc.).

...

For evolutionary biologists: Dyson clearly thinks this result has implications for multilevel (group vs individual selection):
... Cooperation loses and defection wins. The ZD strategies confirm this conclusion and make it sharper. ... The system evolved to give cooperative tribes an advantage over non-cooperative tribes, using punishment to give cooperation an evolutionary advantage within the tribe. This double selection of tribes and individuals goes way beyond the Prisoners' Dilemma model.

implications for fractionalized Europe vis-a-vis unified China?

and more broadly does this just imply we're doomed in the long run RE: cooperation, morality, the "good society", so on...? war and group-selection is the only way to get a non-crab bucket civilization?

Iterated Prisoner’s Dilemma contains strategies that dominate any evolutionary opponent:
http://www.pnas.org/content/109/26/10409.full
http://www.pnas.org/content/109/26/10409.full.pdf
https://www.edge.org/conversation/william_h_press-freeman_dyson-on-iterated-prisoners-dilemma-contains-strategies-that

https://en.wikipedia.org/wiki/Ultimatum_game

analogy for ultimatum game: the state gives the demos a bargain take-it-or-leave-it, and...if the demos refuses...violence?

The nature of human altruism: http://sci-hub.tw/https://www.nature.com/articles/nature02043
- Ernst Fehr & Urs Fischbacher

Some of the most fundamental questions concerning our evolutionary origins, our social relations, and the organization of society are centred around issues of altruism and selfishness. Experimental evidence indicates that human altruism is a powerful force and is unique in the animal world. However, there is much individual heterogeneity and the interaction between altruists and selfish individuals is vital to human cooperation. Depending on the environment, a minority of altruists can force a majority of selfish individuals to cooperate or, conversely, a few egoists can induce a large number of altruists to defect. Current gene-based evolutionary theories cannot explain important patterns of human altruism, pointing towards the importance of both theories of cultural evolution as well as gene–culture co-evolution.

...

Why are humans so unusual among animals in this respect? We propose that quantitatively, and probably even qualitatively, unique patterns of human altruism provide the answer to this question. Human altruism goes far beyond that which has been observed in the animal world. Among animals, fitness-reducing acts that confer fitness benefits on other individuals are largely restricted to kin groups; despite several decades of research, evidence for reciprocal altruism in pair-wise repeated encounters [4,5] remains scarce [6–8]. Likewise, there is little evidence so far that individual reputation building affects cooperation in animals, which contrasts strongly with what we find in humans. If we randomly pick two human strangers from a modern society and give them the chance to engage in repeated anonymous exchanges in a laboratory experiment, there is a high probability that reciprocally altruistic behaviour will emerge spontaneously [9,10].

However, human altruism extends far beyond reciprocal altruism and reputation-based cooperation, taking the form of strong reciprocity [11,12]. Strong reciprocity is a combination of altruistic rewarding, which is a predisposition to reward others for cooperative, norm-abiding behaviours, and altruistic punishment, which is a propensity to impose sanctions on others for norm violations. Strong reciprocators bear the cost of rewarding or punishing even if they gain no individual economic benefit whatsoever from their acts. In contrast, reciprocal altruists, as they have been defined in the biological literature [4,5], reward and punish only if this is in their long-term self-interest. Strong reciprocity thus constitutes a powerful incentive for cooperation even in non-repeated interactions and when reputation gains are absent, because strong reciprocators will reward those who cooperate and punish those who defect.
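The effect Fehr and Fischbacher describe, where a minority of altruists flips the incentives of a selfish majority, can be sketched with a stylized repeated public-goods game. The payoff numbers, group composition, and the naive "contribute if expected fines exceed the gain" rule are all illustrative assumptions:

```python
def public_goods(punishment, rounds=10, endowment=10, fine=3):
    """Repeated public-goods game: 4 selfish players and 4 strong reciprocators.
    Reciprocators always contribute and, if punishment is on, fine each
    free-rider at a cost to themselves; selfish players free-ride unless the
    expected fines exceed the endowment they would keep by defecting."""
    types = ['selfish'] * 4 + ['reciprocator'] * 4
    expected_fine = 0
    rate = 0.0
    for _ in range(rounds):
        contrib = [endowment if t == 'reciprocator' or expected_fine > endowment
                   else 0
                   for t in types]
        if punishment:
            # fines observed this round deter free-riding in the next
            expected_fine = fine * types.count('reciprocator')
        rate = sum(contrib) / (len(types) * endowment)
    return rate  # final-round contribution rate

print('without punishment:', public_goods(False))
print('with punishment:   ', public_goods(True))
```

Without altruistic punishment only the reciprocators contribute (rate 0.5); with it, the selfish half is deterred into full cooperation from the second round on, mirroring the paper's claim that a few strong reciprocators can force a selfish majority to cooperate.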

...

We will show that the interaction between selfish and strongly reciprocal … [more]
concept  conceptual-vocab  wiki  reference  article  models  GT-101  game-theory  anthropology  cultural-dynamics  trust  cooperate-defect  coordination  iteration-recursion  sequential  axelrod  discrete  smoothness  evolution  evopsych  EGT  economics  behavioral-econ  sociology  new-religion  deep-materialism  volo-avolo  characterization  hsu  scitariat  altruism  justice  group-selection  decision-making  tribalism  organizing  hari-seldon  theory-practice  applicability-prereqs  bio  finiteness  multi  history  science  social-science  decision-theory  commentary  study  summary  giants  the-trenches  zero-positive-sum  🔬  bounded-cognition  info-dynamics  org:edge  explanation  exposition  org:nat  eden  retention  long-short-run  darwinian  markov  equilibrium  linear-algebra  nitty-gritty  competition  war  explanans  n-factor  europe  the-great-west-whale  occident  china  asia  sinosphere  orient  decentralized  markets  market-failure  cohesion  metabuch  stylized-facts  interdisciplinary  physics  pdf  pessimism  time  insight  the-basilisk  noblesse-oblige  the-watchers  ideas  l
march 2018 by nhaliday
Sequence Modeling with CTC
A visual guide to Connectionist Temporal Classiﬁcation, an algorithm used to train deep neural networks in speech recognition, handwriting recognition and other sequence problems.
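The core of CTC is a many-to-one collapsing map from per-frame labelings to output sequences: merge repeated labels, then delete blanks. A minimal greedy decoder built on that map (the blank symbol `_` here is an arbitrary choice):

```python
def ctc_collapse(frame_labels, blank='_'):
    """Map a per-frame best-path labeling to its output string:
    merge consecutive repeats, then drop blank tokens."""
    out, prev = [], None
    for lab in frame_labels:
        if lab != prev and lab != blank:
            out.append(lab)
        prev = lab
    return ''.join(out)

# The blank token is what lets CTC emit genuinely doubled letters:
print(ctc_collapse('hh_e_ll_ll_oo'))  # the 'll_ll' keeps "ll" distinct
print(ctc_collapse('hheelloo'))       # without blanks, repeats merge away
```

Greedy decoding takes the argmax label per frame and applies this collapse; full CTC training instead sums, via dynamic programming, over every frame labeling that collapses to the target.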
acmtariat  techtariat  org:bleg  nibble  better-explained  machine-learning  deep-learning  visual-understanding  visualization  analysis  let-me-see  research  sequential  audio  classification  model-class  exposition  language  acm  approximation  comparison  markov  iteration-recursion  concept  atoms  distribution  orders  DP  heuristic  optimization  trees  greedy  matching  gradient-descent  org:popup
december 2017 by nhaliday
Fitting a Structural Equation Model
seems rather unrigorous: nonlinear optimization, possibility of nonconvergence, doesn't even mention local vs. global optimality...
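The local-vs-global worry in this note is easy to demonstrate and partially guard against. The toy objective and plain gradient descent below stand in for an SEM discrepancy function and whatever iterative optimizer a given package uses; multi-start is the generic mitigation:

```python
import math
import random

def objective(theta):
    # toy nonconvex objective standing in for an SEM fit function
    return math.sin(3.0 * theta) + 0.1 * theta ** 2

def minimize_local(f, x0, lr=0.01, steps=2000, h=1e-6):
    """Gradient descent with numeric gradients: converges (at best) to a
    *local* minimum near x0, which is all single-start fitting gives you."""
    x = x0
    for _ in range(steps):
        grad = (f(x + h) - f(x - h)) / (2.0 * h)
        x -= lr * grad
    return x

random.seed(0)
starts = [random.uniform(-5.0, 5.0) for _ in range(20)]
solutions = [minimize_local(objective, s) for s in starts]
best = min(solutions, key=objective)
distinct = sorted({round(s, 2) for s in solutions})
print(f"{len(distinct)} distinct local minima found; best f = {objective(best):.3f}")
```

Different starting points land in different basins, so agreement across many random starts is a cheap sanity check that a single "converged" run cannot provide.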
pdf  slides  lectures  acm  stats  hypothesis-testing  graphs  graphical-models  latent-variables  model-class  optimization  nonlinearity  gotchas  nibble  ML-MAP-E  iteration-recursion  convergence
november 2017 by nhaliday
Autocratic Rule and Social Capital: Evidence from Imperial China by Melanie Meng Xue, Mark Koyama :: SSRN
This paper studies how autocratic rule affects social capital. Between 1660-1788, individuals in imperial China were persecuted if they were suspected of holding subversive attitudes towards the state. A difference-in-differences approach suggests that these persecutions led to a decline of 38% in social capital, as measured by the number of charitable organizations, in each subsequent decade. Investigating the long-run effect of autocratic rule, we show that persecutions are associated with lower levels of trust, political engagement, and the under provision of local public goods. These results indicate a possible vicious cycle in which autocratic rule becomes self-reinforcing through a permanent decline in social capital.
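The difference-in-differences design in the abstract reduces, in its simplest two-period form, to one subtraction of subtractions. All numbers below are made up for illustration; they are not the paper's data:

```python
def mean(xs):
    return sum(xs) / len(xs)

# charitable organizations per decade in hypothetical prefectures
persecuted_before = [10, 12, 11, 13]
persecuted_after  = [7, 8, 6, 7]
untouched_before  = [9, 11, 10, 12]
untouched_after   = [10, 12, 11, 12]

# change in treated minus change in controls: the common-trends counterfactual
did = ((mean(persecuted_after) - mean(persecuted_before))
       - (mean(untouched_after) - mean(untouched_before)))
print(f"DiD estimate of the persecution effect: {did:+.2f} charities per decade")
```

The paper runs this as a panel regression with fixed effects rather than a 2x2 table, but the identifying comparison is exactly this double difference.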
study  economics  broad-econ  econotariat  history  early-modern  growth-econ  authoritarianism  antidemos  china  asia  sinosphere  orient  n-factor  social-capital  individualism-collectivism  charity  cliometrics  trust  cohesion  political-econ  polisci  public-goodish  correlation  intervention  unintended-consequences  iteration-recursion  cycles  effect-size  path-dependence  🎩  leviathan  endogenous-exogenous  control  branches  pseudoE  slippery-slope  counter-revolution  nascent-state  microfoundations  explanans  the-great-west-whale  occident  madisonian  hari-seldon  law  egalitarianism-hierarchy  local-global  decentralized  the-watchers  noblesse-oblige  benevolence
september 2017 by nhaliday
Rank aggregation basics: Local Kemeny optimisation | David R. MacIver
This turns our problem from a global search to a local one: Basically we can start from any point in the search space and search locally by swapping adjacent pairs until we hit a minimum. This turns out to be quite easy to do. _We basically run insertion sort_: At step n we have the first n items in a locally Kemeny optimal order. Swap the n+1th item backwards until the majority think its predecessor is < it. This ensures all adjacent pairs are in the majority order, so swapping them would result in a greater than or equal K. This is of course an O(n^2) algorithm. In fact, the problem of merely finding a locally Kemeny optimal solution can be done in O(n log(n)) (for much the same reason as you can sort better than insertion sort). You just take the directed graph of majority votes and find a Hamiltonian Path. The nice thing about the above version of the algorithm is that it gives you a lot of control over where you start your search.
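MacIver's insertion-sort construction is short enough to write out directly. The ballot set and the `majority_prefers` helper below are invented for illustration:

```python
def local_kemeny(items, majority_prefers):
    """Insertion-sort variant from the post: swap each new item backwards
    while the majority prefers it to its predecessor. On exit every adjacent
    pair is in majority order, i.e. the ranking is locally Kemeny optimal.
    O(n^2) comparisons."""
    order = []
    for x in items:
        order.append(x)
        i = len(order) - 1
        while i > 0 and majority_prefers(order[i], order[i - 1]):
            order[i], order[i - 1] = order[i - 1], order[i]
            i -= 1
    return order

# three voters, each ranking four options from best to worst
ballots = [['a', 'b', 'c', 'd'], ['b', 'c', 'a', 'd'], ['a', 'c', 'd', 'b']]

def majority_prefers(x, y):
    """True iff a strict majority of ballots rank x above y."""
    return sum(b.index(x) < b.index(y) for b in ballots) > len(ballots) / 2

print(local_kemeny(['d', 'c', 'b', 'a'], majority_prefers))
```

Because the majority relation here happens to be transitive, the local optimum is also the global Kemeny ranking; with a Condorcet cycle the loop still terminates, but different starting orders can reach different local optima, which is exactly the control over the search's starting point that the post's last sentence highlights.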
techtariat  liner-notes  papers  tcs  algorithms  machine-learning  acm  optimization  approximation  local-global  orders  graphs  graph-theory  explanation  iteration-recursion  time-complexity  nibble
september 2017 by nhaliday
Here Be Sermons | Melting Asphalt
The Costly Coordination Mechanism of Common Knowledge: https://www.lesserwrong.com/posts/9QxnfMYccz9QRgZ5z/the-costly-coordination-mechanism-of-common-knowledge
- Dictatorships all through history have attempted to suppress freedom of the press and freedom of speech. Why is this? Are they just very sensitive? On the other side, the leaders of the Enlightenment fought for freedom of speech, and would not budge an inch against this principle.
- When two people are on a date and want to sleep with each other, the conversation will often move towards but never explicitly discuss having sex. The two may discuss going back to the place of one of theirs, with a different explicit reason discussed (e.g. "to have a drink"), even if both want to have sex.
- Throughout history, communities have had religious rituals that look very similar. Everyone in the village has to join in. There are repetitive songs, repetitive lectures on the same holy books, chanting together. Why, of all the possible community events (e.g. dinner, parties, etc) is this the most common type?
What these three things have in common, is common knowledge - or at least, the attempt to create it.

...

Common knowledge is often much easier to build in small groups - in the example about getting off the bus, the two need only to look at each other, share a nod, and common knowledge is achieved. Building common knowledge between hundreds or thousands of people is significantly harder, and the fact that religion has such a significant ability to do so is why it has historically had so much connection to politics.
postrat  simler  essay  insight  community  religion  theos  speaking  impro  morality  info-dynamics  commentary  ratty  yvain  ssc  obama  race  hanson  tribalism  network-structure  peace-violence  cohesion  gnosis-logos  multi  todo  enlightenment-renaissance-restoration-reformation  sex  sexuality  coordination  cooperate-defect  lesswrong  ritual  free-riding  GT-101  equilibrium  civil-liberty  exit-voice  game-theory  nuclear  deterrence  arms  military  defense  money  monetary-fiscal  government  drugs  crime  sports  public-goodish  leviathan  explanation  incentives  interests  gray-econ  media  trust  revolution  signaling  tradition  power  internet  social  facebook  academia  publishing  communication  business  startups  cost-benefit  iteration-recursion  social-norms  reinforcement  alignment
september 2017 by nhaliday
Population Growth and Technological Change: One Million B.C. to 1990
The nonrivalry of technology, as modeled in the endogenous growth literature, implies that high population spurs technological change. This paper constructs and empirically tests a model of long-run world population growth combining this implication with the Malthusian assumption that technology limits population. The model predicts that over most of history, the growth rate of population will be proportional to its level. Empirical tests support this prediction and show that historically, among societies with no possibility for technological contact, those with larger initial populations have had faster technological change and population growth.

Table I gives the gist (population growth rate scales w/ tech innovation). Note how the Mongol invasions + reverberations stand out.

https://jasoncollins.org/2011/08/15/more-people-more-ideas-in-the-long-run/
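The model's key implication, growth rate proportional to level, means hyperbolic rather than exponential growth. A quick sketch of the closed form (illustrative constants only, not the paper's estimates):

```python
def population(t: float, p0: float, k: float) -> float:
    """Closed-form path when the growth rate is proportional to the
    level: dP/dt = k * P**2, i.e. (dP/dt)/P = k * P.
    Solution: P(t) = P0 / (1 - k * P0 * t), which is hyperbolic and
    hits a finite-time singularity at t* = 1 / (k * P0)."""
    assert k * p0 * t < 1, "past the finite-time singularity"
    return p0 / (1 - k * p0 * t)

# Doubling takes less and less time as the level rises: with P0 = 1
# and k = 0.01, P reaches 2 at t = 50 but 4 already at t = 75.
```

This is why, in the data, larger initial populations go with faster subsequent growth, rather than the constant proportional rate of exponential growth.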
pdf  study  economics  growth-econ  broad-econ  cliometrics  anthropology  cjones-like  population  demographics  scale  innovation  technology  ideas  deep-materialism  stylized-facts  correlation  speed  flux-stasis  history  antiquity  iron-age  medieval  early-modern  mostly-modern  piracy  garett-jones  spearhead  big-picture  density  iteration-recursion  magnitude  econotariat  multi  commentary  summary  🎩  path-dependence  pop-diff  malthus  time-series  data  world  microfoundations  hari-seldon  conquest-empire  disease  parasites-microbiome  spreading  gavisti  asia  war  death  nihil  trends
august 2017 by nhaliday
The Determinants of Trust
Both individual experiences and community characteristics influence how much people trust each other. Using data drawn from US localities we find that the strongest factors that reduce trust are: i) a recent history of traumatic experiences, even though the passage of time reduces this effect fairly rapidly; ii) belonging to a group that historically felt discriminated against, such as minorities (black in particular) and, to a lesser extent, women; iii) being economically unsuccessful in terms of income and education; iv) living in a racially mixed community and/or in one with a high degree of income disparity. Religious beliefs and ethnic origins do not significantly affect trust. The latter result may be an indication that the American melting pot at least up to a point works, in terms of homogenizing attitudes of different cultures, even though racial cleavages leading to low trust are still quite high.

Understanding Trust: http://www.nber.org/papers/w13387
In this paper we resolve this puzzle by recognizing that trust has two components: a belief-based one and a preference-based one. While the sender's behavior reflects both, we show that WVS-like measures capture mostly the belief-based component, while questions on past trusting behavior are better at capturing the preference component of trust.

MEASURING TRUST: http://scholar.harvard.edu/files/laibson/files/measuring_trust.pdf
We combine two experiments and a survey to measure trust and trustworthiness—two key components of social capital. Standard attitudinal survey questions about trust predict trustworthy behavior in our experiments much better than they predict trusting behavior. Trusting behavior in the experiments is predicted by past trusting behavior outside of the experiments. When individuals are closer socially, both trust and trustworthiness rise. Trustworthiness declines when partners are of different races or nationalities. High status individuals are able to elicit more trustworthiness in others.

What is Social Capital? The Determinants of Trust and Trustworthiness: http://www.nber.org/papers/w7216
Using a sample of Harvard undergraduates, we analyze trust and social capital in two experiments. Trusting behavior and trustworthiness rise with social connection; differences in race and nationality reduce the level of trustworthiness. Certain individuals appear to be persistently more trusting, but these people do not say they are more trusting in surveys. Survey questions about trust predict trustworthiness not trust. Only children are less trustworthy. People behave in a more trustworthy manner towards higher status individuals, and therefore status increases earnings in the experiment. As such, high status persons can be said to have more social capital.

Trust and Cheating: http://www.nber.org/papers/w18509
We find that: i) both parties to a trust exchange have implicit notions of what constitutes cheating even in a context without promises or messages; ii) these notions are not unique - the vast majority of senders would feel cheated by a negative return on their trust/investment, whereas a sizable minority defines cheating according to an equal split rule; iii) these implicit notions affect the behavior of both sides to the exchange in terms of whether to trust or cheat and to what extent. Finally, we show that individual's notions of what constitutes cheating can be traced back to two classes of values instilled by parents: cooperative and competitive. The first class of values tends to soften the notion while the other tightens it.

Nationalism and Ethnic-Based Trust: Evidence from an African Border Region: https://u.osu.edu/robinson.1012/files/2015/12/Robinson_NationalismTrust-1q3q9u1.pdf
These results offer microlevel evidence that a strong and salient national identity can diminish ethnic barriers to trust in diverse societies.

One Team, One Nation: Football, Ethnic Identity, and Conflict in Africa: http://conference.nber.org/confer//2017/SI2017/DEV/Durante_Depetris-Chauvin.pdf
Do collective experiences that prime sentiments of national unity reduce interethnic tensions and conflict? We examine this question by looking at the impact of national football teams’ victories in sub-Saharan Africa. Combining individual survey data with information on over 70 official matches played between 2000 and 2015, we find that individuals interviewed in the days after a victory of their country’s national team are less likely to report a strong sense of ethnic identity and more likely to trust people of other ethnicities than those interviewed just before. The effect is sizable and robust and is not explained by generic euphoria or optimism. Crucially, national victories do not only affect attitudes but also reduce violence. Indeed, using plausibly exogenous variation from close qualifications to the Africa Cup of Nations, we find that countries that (barely) qualified experience significantly less conflict in the following six months than countries that (barely) did not. Our findings indicate that, even where ethnic tensions have deep historical roots, patriotic shocks can reduce inter-ethnic tensions and have a tangible impact on conflict.

Why Does Ethnic Diversity Undermine Public Goods Provision?: http://www.columbia.edu/~mh2245/papers1/HHPW.pdf
We identify three families of mechanisms that link diversity to public goods provision—what we term “preferences,” “technology,” and “strategy selection” mechanisms—and run a series of experimental games that permit us to compare the explanatory power of distinct mechanisms within each of these three families. Results from games conducted with a random sample of 300 subjects from a slum neighborhood of Kampala, Uganda, suggest that successful public goods provision in homogenous ethnic communities can be attributed to a strategy selection mechanism: in similar settings, co-ethnics play cooperative equilibria, whereas non-co-ethnics do not. In addition, we find evidence for a technology mechanism: co-ethnics are more closely linked on social networks and thus plausibly better able to support cooperation through the threat of social sanction. We find no evidence for prominent preference mechanisms that emphasize the commonality of tastes within ethnic groups or a greater degree of altruism toward co-ethnics, and only weak evidence for technology mechanisms that focus on the impact of shared ethnicity on the productivity of teams.

does it generalize to first world?

Higher Intelligence Groups Have Higher Cooperation Rates in the Repeated Prisoner's Dilemma: https://ideas.repec.org/p/iza/izadps/dp8499.html
The initial cooperation rates are similar; cooperation increases in the groups with higher intelligence to reach almost full cooperation, while declining in the groups with lower intelligence. The difference is produced by the cumulation of small but persistent differences in the response to past cooperation of the partner. In higher intelligence subjects, cooperation after the initial stages is immediate and becomes the default mode; defection instead requires more time. For lower intelligence groups this difference is absent. Cooperation of higher intelligence subjects is payoff sensitive, thus not automatic: in a treatment with lower continuation probability there is no difference between intelligence groups.

Why societies cooperate: https://voxeu.org/article/why-societies-cooperate
Three attributes are often suggested to generate cooperative behaviour – a good heart, good norms, and intelligence. This column reports the results of a laboratory experiment in which groups of players benefited from learning to cooperate. It finds overwhelming support for the idea that intelligence is the primary condition for a socially cohesive, cooperative society. Warm feelings towards others and good norms have only a small and transitory effect.

individual payoff, etc.:

Trust, Values and False Consensus: http://www.nber.org/papers/w18460
Trust beliefs are heterogeneous across individuals and, at the same time, persistent across generations. We investigate one mechanism yielding these dual patterns: false consensus. In the context of a trust game experiment, we show that individuals extrapolate from their own type when forming trust beliefs about the same pool of potential partners - i.e., more (less) trustworthy individuals form more optimistic (pessimistic) trust beliefs - and that this tendency continues to color trust beliefs after several rounds of game-play. Moreover, we show that one's own type/trustworthiness can be traced back to the values parents transmit to their children during their upbringing. In a second closely-related experiment, we show the economic impact of mis-calibrated trust beliefs stemming from false consensus. Miscalibrated beliefs lower participants' experimental trust game earnings by about 20 percent on average.

The Right Amount of Trust: http://www.nber.org/papers/w15344
We investigate the relationship between individual trust and individual economic performance. We find that individual income is hump-shaped in a measure of intensity of trust beliefs. Our interpretation is that highly trusting individuals tend to assume too much social risk and to be cheated more often, ultimately performing less well than those with a belief close to the mean trustworthiness of the population. On the other hand, individuals with overly pessimistic beliefs avoid being cheated, but give up profitable opportunities, therefore underperforming. The cost of either too much or too little trust is comparable to the income lost by forgoing college.

...

This framework allows us to show that income-maximizing trust typically exceeds the trust level of the average person as well as to estimate the distribution of income lost to trust mistakes. We find that although a majority of individuals has well calibrated beliefs, a non-trivial proportion of the population (10%) has trust beliefs sufficiently poorly calibrated to lower income by more than 13%.

Do Trust and … [more]
study  economics  alesina  growth-econ  broad-econ  trust  cohesion  social-capital  religion  demographics  race  diversity  putnam-like  compensation  class  education  roots  phalanges  general-survey  multi  usa  GT-101  conceptual-vocab  concept  behavioral-econ  intricacy  composition-decomposition  values  descriptive  correlation  harvard  field-study  migration  poll  status  🎩  🌞  chart  anthropology  cultural-dynamics  psychology  social-psych  sociology  cooperate-defect  justice  egalitarianism-hierarchy  inequality  envy  n-factor  axelrod  pdf  microfoundations  nationalism-globalism  africa  intervention  counter-revolution  tribalism  culture  society  ethnocentrism  coordination  world  developing-world  innovation  econ-productivity  government  stylized-facts  madisonian  wealth-of-nations  identity-politics  public-goodish  s:*  legacy  things  optimization  curvature  s-factor  success  homo-hetero  higher-ed  models  empirical  contracts  human-capital  natural-experiment  endo-exo  data  scale  trade  markets  time  supply-demand  summary
august 2017 by nhaliday
How to Escape Saddle Points Efficiently – Off the convex path
A core, emerging problem in nonconvex optimization involves the escape of saddle points. While recent research has shown that gradient descent (GD) generically escapes saddle points asymptotically (see Rong Ge’s and Ben Recht’s blog posts), the critical open problem is one of efficiency — is GD able to move past saddle points quickly, or can it be slowed down significantly? How does the rate of escape scale with the ambient dimensionality? In this post, we describe our recent work with Rong Ge, Praneeth Netrapalli and Sham Kakade, that provides the first provable positive answer to the efficiency question, showing that, rather surprisingly, GD augmented with suitable perturbations escapes saddle points efficiently; indeed, in terms of rate and dimension dependence it is almost as if the saddle points aren’t there!
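The perturbation idea is simple enough to sketch. A minimal, hedged sketch only: the published algorithm adds termination conditions and carefully tuned constants, and the function and constants below are illustrative choices, not the authors'.

```python
import math
import random

def perturbed_gd(grad, x0, eta=0.1, g_thresh=1e-4, r=1e-2, steps=1000, seed=0):
    """Sketch of perturbed gradient descent: run plain GD, but when the
    gradient is tiny (we may be near a critical point, possibly a
    saddle), kick the iterate with a small random perturbation so it
    can slide off along a descent direction."""
    rng = random.Random(seed)
    x = list(x0)
    for _ in range(steps):
        g = grad(x)
        gnorm = math.sqrt(sum(gi * gi for gi in g))
        if gnorm < g_thresh:
            # random perturbation of norm <= r (uniform in a ball)
            d = [rng.gauss(0, 1) for _ in x]
            dn = math.sqrt(sum(di * di for di in d))
            scale = r * rng.random() ** (1 / len(x)) / dn
            x = [xi + scale * di for xi, di in zip(x, d)]
        else:
            x = [xi - eta * gi for xi, gi in zip(x, g)]
    return x

# f(x, y) = x**4/4 - x**2/2 + y**2/2 has a saddle at the origin and
# minima at (+1, 0) and (-1, 0); plain GD started at the origin never moves.
saddle_grad = lambda p: [p[0] ** 3 - p[0], p[1]]
x = perturbed_gd(saddle_grad, [0.0, 0.0])
```

Started exactly at the saddle, the perturbed iterate ends up near one of the two minima, while unperturbed GD would stay put forever.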
acmtariat  org:bleg  nibble  liner-notes  machine-learning  acm  optimization  gradient-descent  local-global  off-convex  time-complexity  random  perturbation  michael-jordan  iterative-methods  research  learning-theory  math.DS  iteration-recursion  org:popup
july 2017 by nhaliday
spaceships - Can there be a space age without petroleum (crude oil)? - Worldbuilding Stack Exchange
Yes...probably

What was really important to our development of technology was not oil, but coal. Access to large deposits of high-quality coal largely fueled the industrial revolution, and it was the industrial revolution that really got us on the first rungs of the technological ladder.

Oil is a fantastic fuel for an advanced civilisation, but it's not essential. Indeed, I would argue that our ability to dig oil out of the ground is a crutch, one that we should have discarded long ago. The reason oil is so essential to us today is that all our infrastructure is based on it, but if we'd never had oil we could still have built a similar infrastructure. Solar power was first displayed to the public in 1878. Wind power has been used for centuries. Hydroelectric power is just a modification of the same technology as wind power.

Without oil, a civilisation in the industrial age would certainly be able to progress and advance to the space age. Perhaps not as quickly as we did, but probably more sustainably.

Without coal, though...that's another matter

What would the industrial age be like without oil and coal?: https://worldbuilding.stackexchange.com/questions/45919/what-would-the-industrial-age-be-like-without-oil-and-coal

Out of the ashes: https://aeon.co/essays/could-we-reboot-a-modern-civilisation-without-fossil-fuels
It took a lot of fossil fuels to forge our industrial world. Now they're almost gone. Could we do it again without them?

But charcoal-based industry didn’t die out altogether. In fact, it survived to flourish in Brazil. Because it has substantial iron deposits but few coalmines, Brazil is the largest charcoal producer in the world and the ninth biggest steel producer. We aren’t talking about a cottage industry here, and this makes Brazil a very encouraging example for our thought experiment.

The trees used in Brazil’s charcoal industry are mainly fast-growing eucalyptus, cultivated specifically for the purpose. The traditional method for creating charcoal is to pile chopped staves of air-dried timber into a great dome-shaped mound and then cover it with turf or soil to restrict airflow as the wood smoulders. The Brazilian enterprise has scaled up this traditional craft to an industrial operation. Dried timber is stacked into squat, cylindrical kilns, built of brick or masonry and arranged in long lines so that they can be easily filled and unloaded in sequence. The largest sites can sport hundreds of such kilns. Once filled, their entrances are sealed and a fire is lit from the top.
q-n-a  stackex  curiosity  gedanken  biophysical-econ  energy-resources  long-short-run  technology  civilization  industrial-revolution  heavy-industry  multi  modernity  frontier  allodium  the-world-is-just-atoms  big-picture  ideas  risk  volo-avolo  news  org:mag  org:popup  direct-indirect  retrofit  dirty-hands  threat-modeling  duplication  iteration-recursion  latin-america  track-record  trivia  cocktail  data
june 2017 by nhaliday
One more time | West Hunter
One of our local error sources suggested that it would be impossible to rebuild technical civilization, once fallen. Now if every human were dead I’d agree, but in most other scenarios it wouldn’t be particularly difficult, assuming that the survivors were no more silly and fractious than people are today.  So assume a mild disaster, something like the effect of myxomatosis on the rabbits of Australia, or perhaps toe-to-toe nuclear combat with the Russkis – ~90% casualties worldwide.

https://westhunt.wordpress.com/2015/05/17/one-more-time/#comment-69221
Books are everywhere. In the type of scenario I sketched out, almost no knowledge would be lost – so Neolithic tech is irrelevant. Look, if a single copy of the 1911 Britannica survived, all would be well.

You could of course harvest metals from the old cities. But even if if you didn’t, the idea that there is no more copper or zinc or tin in the ground is just silly. “recoverable ore” is mostly an economic concept.

Moreover, if we’re talking wiring and electrical uses, one can use aluminum, which makes up 8% of the Earth’s crust.

https://westhunt.wordpress.com/2015/05/17/one-more-time/#comment-69368
Some of those book tell you how to win.

Look, assume that some communities strive to relearn how to make automatic weapons and some don’t. How does that story end? Do I have to explain everything?

I guess so!

https://westhunt.wordpress.com/2015/05/17/one-more-time/#comment-69334
Well, perhaps having a zillion times more books around would make a difference. That and all the “X for Dummies” books, which I think the Romans didn’t have.

A lot of Classical civ wasn’t very useful: on the whole they didn’t invent much. On the whole, technology advanced quite a bit more rapidly in Medieval times.

https://westhunt.wordpress.com/2015/05/17/one-more-time/#comment-69225
How much coal and oil are in the ground that can still be extracted with 19th century tech? Honest question; I don’t know.
--
Lots of coal left. Not so much oil (using simple methods), but one could make it from low-grade coal, with the Fischer-Tropsch process. Sasol does this.

Then again, a recovering society wouldn’t need much at first.

https://westhunt.wordpress.com/2015/05/17/one-more-time/#comment-69223
That’s more like it.

#1. Consider Grand Coulee Dam. Gigawatts. Feeling of power!
#2. Of course.
#3. Might be easier to make superconducting logic circuits with MgB2, starting over.

https://westhunt.wordpress.com/2015/05/17/one-more-time/#comment-69325
Your typical biker guy is more mechanically minded than the average Joe. Welding, electrical stuff, this and that.

https://westhunt.wordpress.com/2015/05/17/one-more-time/#comment-69260
If fossil fuels were unavailable -or just uneconomical at first- we’d be back to charcoal for our Stanley Steamers and railroads. We’d still have both.

The French, and others, used wood-gasifier trucks during WWII.

https://westhunt.wordpress.com/2015/05/17/one-more-time/#comment-69407
Teslas are of course a joke.
west-hunter  scitariat  civilization  risk  nihil  gedanken  frontier  allodium  technology  energy-resources  knowledge  the-world-is-just-atoms  discussion  speculation  analysis  biophysical-econ  big-picture  🔬  ideas  multi  history  iron-age  the-classics  medieval  europe  poast  the-great-west-whale  the-trenches  optimism  volo-avolo  mostly-modern  world-war  gallic  track-record  musk  barons  transportation  driving  contrarianism  agriculture  retrofit  industrial-revolution  dirty-hands  books  competition  war  group-selection  comparison  mediterranean  conquest-empire  gibbon  speedometer  class  threat-modeling  duplication  iteration-recursion  trivia  cocktail  encyclopedic  definite-planning  embodied  gnosis-logos  kumbaya-kult
may 2017 by nhaliday
[1502.05274] How predictable is technological progress?
Recently it has become clear that many technologies follow a generalized version of Moore's law, i.e. costs tend to drop exponentially, at different rates that depend on the technology. Here we formulate Moore's law as a correlated geometric random walk with drift, and apply it to historical data on 53 technologies. We derive a closed form expression approximating the distribution of forecast errors as a function of time. Based on hind-casting experiments we show that this works well, making it possible to collapse the forecast errors for many different technologies at different time horizons onto the same universal distribution. This is valuable because it allows us to make forecasts for any given technology with a clear understanding of the quality of the forecasts. As a practical demonstration we make distributional forecasts at different time horizons for solar photovoltaic modules, and show how our method can be used to estimate the probability that a given technology will outperform another technology at a given point in the future.

model:
- p_t = unit price of tech
- log(p_t) = y_0 - μt + ∑_{i <= t} n_i
- n_t iid noise process
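A minimal simulation of the model above, simplified to iid Gaussian noise (the paper allows correlated noise and estimates the drift and volatility per technology; the constants here are illustrative):

```python
import math
import random

def simulate_log_price(y0, mu, sigma, horizon, seed=0):
    """Sample one path of the model: log p_t follows a random walk
    with (negative) drift mu and iid Gaussian noise n_t, so costs
    drop exponentially on average, i.e. a generalized Moore's law."""
    rng = random.Random(seed)
    y = y0
    path = [y]
    for _ in range(horizon):
        y = y - mu + rng.gauss(0, sigma)
        path.append(y)
    return path

def forecast_interval(y_now, mu, sigma, k, z=1.96):
    """Under iid noise the k-step-ahead forecast error of log p has
    standard deviation sigma * sqrt(k): uncertainty bands widen with
    the square root of the horizon, which is what lets the paper
    attach calibrated error bars to long-range cost forecasts."""
    center = y_now - mu * k
    half = z * sigma * math.sqrt(k)
    return center - half, center + half
```

Comparing two technologies then reduces to the probability that one drifting random walk ends up below another at horizon k.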
preprint  study  economics  growth-econ  innovation  discovery  technology  frontier  tetlock  meta:prediction  models  time  definite-planning  stylized-facts  regression  econometrics  magnitude  energy-resources  phys-energy  money  cost-benefit  stats  data-science  🔬  ideas  speedometer  multiplicative  methodology  stochastic-processes  time-series  stock-flow  iteration-recursion  org:mat  street-fighting  the-bones  whiggish-hegelian  pessimism  eden-heaven
april 2017 by nhaliday
Discovering Limits to Growth | Do the Math
https://en.wikipedia.org/wiki/The_Limits_to_Growth
https://foundational-research.org/the-future-of-growth-near-zero-growth-rates/
One may of course be skeptical that this general trend will also apply to the growth of our technology and economy at large, as innovation seems to continually postpone our clash with the ceiling, yet it seems inescapable that it must. For in light of what we know about physics, we can conclude that exponential growth of the kinds we see today, in technology in particular and in our economy more generally, must come to an end, and do so relatively soon.
scitariat  prediction  hmm  economics  growth-econ  biophysical-econ  world  energy-resources  the-world-is-just-atoms  books  summary  quotes  commentary  malthus  models  dynamical  🐸  mena4  demographics  environment  org:bleg  nibble  regularizer  science-anxiety  deep-materialism  nihil  the-bones  whiggish-hegelian  multi  tetlock  trends  wiki  macro  pessimism  eh  primitivism  new-religion  cynicism-idealism  gnon  review  recommendations  long-short-run  futurism  ratty  miri-cfar  effective-altruism  hanson  econ-metrics  ems  magnitude  street-fighting  nitty-gritty  physics  data  phys-energy  🔬  multiplicative  iteration-recursion
march 2017 by nhaliday
Performance Trends in AI | Otium
Deep learning has revolutionized the world of artificial intelligence. But how much does it improve performance? How have computers gotten better at different tasks over time, since the rise of deep learning?

In games, what the data seems to show is that exponential growth in data and computation power yields exponential improvements in raw performance. In other words, you get out what you put in. Deep learning matters, but only because it provides a way to turn Moore’s Law into corresponding performance improvements, for a wide class of problems. It’s not even clear it’s a discontinuous advance in performance over non-deep-learning systems.

In image recognition, deep learning clearly is a discontinuous advance over other algorithms. But the returns to scale and the improvements over time seem to be flattening out as we approach or surpass human accuracy.

In speech recognition, deep learning is again a discontinuous advance. We are still far away from human accuracy, and in this regime, accuracy seems to be improving linearly over time.

In machine translation, neural nets seem to have made progress over conventional techniques, but it’s not yet clear if that’s a real phenomenon, or what the trends are.

In natural language processing, trends are positive, but deep learning doesn’t generally seem to do better than trendline.

...

The learned agent performs much better than the hard-coded agent, but moves more jerkily and “randomly” and doesn’t know the law of reflection. Similarly, the reports of AlphaGo producing “unusual” Go moves are consistent with an agent that can do pattern-recognition over a broader space than humans can, but which doesn’t find the “laws” or “regularities” that humans do.

Perhaps, contrary to the stereotype that contrasts “mechanical” with “outside-the-box” thinking, reinforcement learners can “think outside the box” but can’t find the box?

http://slatestarcodex.com/2017/08/02/where-the-falling-einstein-meets-the-rising-mouse/
ratty  core-rats  summary  prediction  trends  analysis  spock  ai  deep-learning  state-of-art  🤖  deepgoog  games  nlp  computer-vision  nibble  reinforcement  model-class  faq  org:bleg  shift  chart  technology  language  audio  accuracy  speaking  foreign-lang  definite-planning  china  asia  microsoft  google  ideas  article  speedometer  whiggish-hegelian  yvain  ssc  smoothness  data  hsu  scitariat  genetics  iq  enhancement  genetic-load  neuro  neuro-nitgrit  brain-scan  time-series  multiplicative  iteration-recursion  additive  multi  arrows
january 2017 by nhaliday
Bestiary of Behavioral Economics/Trust Game - Wikibooks, open books for an open world
In the trust game, like the ultimatum game and the dictator game, there are two participants that are anonymously paired. Both of these individuals are given some quantity of money. The first individual, or player, is told that he must send some amount of his money to an anonymous second player, though the amount sent may be zero. The first player is also informed that whatever he sends will be tripled by the experimenter. So, when the first player chooses a value, the experimenter will take it, triple it, and give that money to the second player. The second player is then told to make a similar choice – give some amount of the now-tripled money back to the first player, even if that amount is zero.

Even with perfect information about the mechanics of the game, the first player's option to send nothing (and thus the second player's option to send nothing back) is the Nash equilibrium for the game.

In the original Berg et al. experiment, thirty out of thirty-two game trials resulted in a violation of the results predicted by standard economic theory. In these thirty cases, first players sent money that averaged slightly over fifty percent of their original endowment.
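The payoff structure and the backward-induction argument can be sketched directly (endowment of 10 is an arbitrary choice for illustration):

```python
def trust_game_payoffs(endowment, sent, returned, multiplier=3):
    """Payoffs in the trust game: player 1 sends `sent`, it is
    multiplied (tripled, in Berg et al.) in transit, and player 2
    returns `returned` of the multiplied amount."""
    assert 0 <= sent <= endowment
    assert 0 <= returned <= multiplier * sent
    p1 = endowment - sent + returned
    p2 = endowment + multiplier * sent - returned
    return p1, p2

# Backward induction with purely self-interested players: whatever is
# sent, player 2's payoff is maximized by returning 0; anticipating
# that, player 1 maximizes 10 - sent by sending 0.  That is the Nash
# equilibrium the excerpt describes -- and the one most subjects violate.
best_send = max(range(0, 11), key=lambda s: trust_game_payoffs(10, s, 0)[0])
```

Since `max` returns the first maximizer and player 1's payoff against a zero-return opponent is strictly decreasing in the amount sent, `best_send` is 0.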

Heritability of cooperative behavior in the trust game: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2268795/
- trust defined by the standard A->B->A trust game
- smallish h^2, small but nonzero shared environment, primarily non-shared environment (~70%)

The results of our mixed-effects Bayesian ACE analysis suggest that variation in how subjects play the trust game is partially accounted for by genetic differences (Tables 2 and ​and33 and Fig. 2). In the ACE model of trust, the heritability estimate is 20% (C.I. 3–38%) in the Swedish experiment and 10% (C.I. 4–21%) in the U.S. experiment. The ACE model of trust also demonstrates that environmental variation plays a role. In particular, unshared environmental variation is a much more significant source of phenotypic variation than genetic variation (e2 = 68% vs. c2 = 12% in Sweden and e2 = 82% vs. c2 = 8% in the U.S.; P < 0.0001 in both samples). In the ACE model of trustworthiness, heritability (h2) generates 18% (C.I. 8–30%) of the variance in the Swedish experiment and 17% (C.I. 5–32%) in the U.S. experiment. Once again, environmental differences play a role (e2 = 66% vs. c2 = 17% in Sweden and e2 = 71% vs. c2 = 12% in the U.S.; P < 0.0001 in both samples).
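For intuition, the classic Falconer back-of-envelope version of the ACE decomposition (the paper fits a Bayesian mixed-effects ACE model, which is more careful, but the identifying logic is the same). The twin correlations below are hypothetical, chosen only to reproduce the quoted Swedish shares:

```python
def falconer_ace(r_mz, r_dz):
    """Back-of-the-envelope ACE decomposition from twin correlations
    (Falconer's formulas).  MZ twins share all genes, DZ twins roughly
    half, so the MZ/DZ correlation gap identifies the genetic share."""
    h2 = 2 * (r_mz - r_dz)  # additive genetic variance share (A)
    c2 = 2 * r_dz - r_mz    # shared-environment share (C)
    e2 = 1 - r_mz           # non-shared environment share (E)
    return h2, c2, e2

# Hypothetical correlations matching the Swedish trust estimates above:
shares = falconer_ace(0.32, 0.22)  # h2, c2, e2 ~ 0.20, 0.12, 0.68
```

Note how e2 = 1 - r_mz makes the dominance of non-shared environment immediate: MZ twins simply don't correlate very highly in trust-game play.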

Trust and Gender: An Examination of Behavior and Beliefs in the Investment Game: https://www.researchgate.net/publication/222329553_Trust_and_Gender_An_Examination_of_Behavior_and_Beliefs_in_the_Investment_Game
How does gender influence trust, the likelihood of being trusted and the level of trustworthiness? We compare choices by men and women in the Investment Game and use questionnaire data to try to understand the motivations for the behavioral differences. We find that men trust more than women, and women are more trustworthy than men. The relationship between expected return and trusting behavior is stronger among men than women, suggesting that men view the interaction more strategically than women. Women felt more obligated both to trust and reciprocate, but the impact of obligation on behavior varies.

Genetic Influences Are Virtually Absent for Trust: http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0093880
trust defined by poll

Over the past decades, numerous twin studies have revealed moderate to high heritability estimates for individual differences in a wide range of human traits, including cognitive ability, psychiatric disorders, and personality traits. Even factors that are generally believed to be environmental in nature have been shown to be under genetic control, albeit modest. Is such heritability also present in _social traits that are conceptualized as causes and consequences of social interactions_ or in other ways strongly shaped by behavior of other people? Here we examine a population-based sample of 1,012 twins and relatives. We show that the genetic influence on generalized trust in other people (trust-in-others: h2 = 5%, ns), and beliefs regarding other people’s trust in the self (trust-in-self: h2 = 13%, ns), is virtually absent. As test-retest reliability for both scales was found to be moderate or high (r = .76 and r = .53, respectively) in an independent sample, we conclude that all variance in trust is likely to be accounted for by non-shared environmental influences.

Dutch sample

Generalized Trust: Four Lessons From Genetics and Culture: http://journals.sagepub.com/doi/abs/10.1177/0963721414552473
We share four basic lessons on trust: (a) Generalized trust is more a matter of culture than genetics; (b) trust is deeply rooted in social interaction experiences (that go beyond childhood), networks, and media; (c) people have too little trust in other people in general; and (d) it is adaptive to regulate a “healthy dose” of generalized trust.

Trust is heritable, whereas distrust is not: http://www.pnas.org/content/early/2017/06/13/1617132114
Notably, although both trust and distrust are strongly influenced by the individual’s unique environment, interestingly, trust shows significant genetic influences, whereas distrust does not. Rather, distrust appears to be primarily socialized, including influences within the family.

[ed.: All this is consistent with my intuition that moral behavior is more subject to cultural/"free will"-type influences.]
models  economics  behavioral-econ  decision-theory  wiki  reference  classic  minimum-viable  game-theory  decision-making  trust  GT-101  putnam-like  justice  social-capital  cooperate-defect  microfoundations  multi  study  psychology  social-psych  regularizer  environmental-effects  coordination  variance-components  europe  nordic  usa  🌞  🎩  anglo  biodet  objective-measure  sociology  behavioral-gen  poll  self-report  null-result  comparison  org:nat  chart  iteration-recursion  homo-hetero  intricacy
december 2016 by nhaliday
Edge.org: 2016 : WHAT DO YOU CONSIDER THE MOST INTERESTING RECENT [SCIENTIFIC] NEWS? WHAT MAKES IT IMPORTANT?
highlights:
- quantum supremacy [Scott Aaronson]
- gene drive
- gene editing/CRISPR
- carcinogen may be entropy
- differentiable programming
- quantitative biology
soft:
- antisocial punishment of pro-social cooperators
- "strongest prejudice" (politics) [Haidt]
- Europeans' origins [Cochran]
- "Anthropic Capitalism And The New Gimmick Economy" [Eric Weinstein]

https://archive.is/gNGDJ
There's an underdiscussed contradiction between the idea that our society would make almost all knowledge available freely and instantaneously to almost everyone and that almost everyone would find gainful employment as knowledge workers. Value is in scarcity not abundance.
--
You’d need to turn reputational-based systems into an income stream
technology  discussion  trends  gavisti  west-hunter  aaronson  haidt  list  expert  science  biotech  geoengineering  top-n  org:edge  frontier  multi  CRISPR  2016  big-picture  links  the-world-is-just-atoms  quantum  quantum-info  computation  metameta  🔬  scitariat  q-n-a  zeitgeist  speedometer  cancer  random  epidemiology  mutation  GT-101  cooperate-defect  cultural-dynamics  anthropology  expert-experience  tcs  volo-avolo  questions  thiel  capitalism  labor  supply-demand  internet  tech  economics  broad-econ  prediction  automation  realness  gnosis-logos  iteration-recursion  similarity  uniqueness  homo-hetero  education  duplication  creative  software  programming  degrees-of-freedom  futurism  order-disorder  flux-stasis  public-goodish  markets  market-failure  piracy  property-rights  free-riding  twitter  social  backup  ratty  unaffiliated  gnon  contradiction  career  planning  hmm  idk  knowledge  higher-ed  pro-rata  sociality  reinforcement  tribalism  us-them  politics  coalitions  prejudice  altruism  human-capital  engineering  unintended-consequences
november 2016 by nhaliday
Standards Drift | West Hunter
We now know that the fraction of Neanderthal ancestry in coding regions has been gradually decreasing with time since the original admixture, and is now something like half as large as it was originally. There were some useful Neanderthal alleles that were favored by selection, and others that were deleterious enough to have disappeared completely, but we’re talking about the general trend.

...

I’m thinking of it as standards drift. In a population, alleles are always being selected for compatibility, for working correctly, conferring high fitness, on a particular average genetic background. Each allele has a spec it needs to meet. That spec doesn’t necessarily stay the same over time: obviously changes in environment will make a difference. Drift should matter too: if a given allele becomes more common, even by chance, the specs will change for other alleles that interact with it. But there’s always a spec.

When two populations split, their specs start to drift apart. There’s no genetic equivalent of that iridium meter bar. Function at the organismal level doesn’t change so much, but there are many slightly different ways of achieving that function.

...

While we’re at it, if there are Pygmies whose genomes are majority ancient Pygmy, their Bantu component is probably slightly incompatible: if left to themselves for a hundred thousand years, they’d probably lose a fair amount of it. Of course they will all be eaten long before that happens.

https://westhunt.wordpress.com/2016/04/08/the-1/
We don’t see people today with Neanderthal Y chromosomes or mtDNA. I keep hearing people argue that this means that mating between Neanderthal males and AMH females must have produced sterile males, or that matings between AMH men and Neanderthal women were all sterile, or whatever.

That is not necessarily the case. A slight disadvantage is all that would be required to totally eliminate Neanderthal Y-chromosomes or mtDNA.

Imagine that a Neanderthal Y-chromosome reduces the bearer’s fitness by 1%, and that the original frequency of Neanderthal Y chromosomes (after admixture) was 2%.

It’s been something like 1500 generations. The expected frequency is 5.67 x 10^-9. In real life it would probably have fluctuated to zero, and of course stayed there.

Understand and remember.
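The decay arithmetic in the quote checks out; ignoring drift, a weakly deleterious haploid lineage declines roughly geometrically:

```python
# Reproducing the arithmetic in the quote: a Neanderthal Y chromosome at
# initial frequency 2% with a 1% fitness disadvantage, after ~1500
# generations of selection. Expected frequency decays roughly as
# p0 * (1 - s)^t (ignoring drift, which would finish the job).

p0, s, t = 0.02, 0.01, 1500
expected = p0 * (1 - s) ** t
print(f"{expected:.3g}")  # ~5.67e-09, matching the figure in the post
```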

https://westhunt.wordpress.com/2017/08/17/mtdna-capers/
The first problem is that there may not have been enough Neanderthals. Selection is not very effective in removing deleterious alleles when their selective disadvantage is < 1/N. For Neanderthals, some analyses indicate the effective population size was around 1000 (others think it was a large but deeply subdivided population), but the effective pop for mtDNA (haploid and only transmitted by females) was 1/4th that – so, N ~250. Not very big.

The other, general, problem with mtDNA is lack of recombination. In an asexual lineage, mutations accumulate. Muller's ratchet. The only fix is back-mutation, which is very rare, unless the species population size is huge. Sex, on the other hand, reshuffles: a kid can have fewer deleterious mutations than either parent.

So you don’t expect hominid mtDNA to be in great shape, nearly perfectly optimized. That’s closer to true for nuclear genes. Since hominid mtDNA is not too close to optimal, it’s not a huge surprise if population A has noticeably more effective mitochondria than population B.

https://westhunt.wordpress.com/2016/02/18/croatoan/
west-hunter  genetics  evolution  archaics  sapiens  speculation  context  gene-flow  scitariat  gene-drift  multi  aDNA  genomics  archaeology  history  anthropology  critique  explanation  hmm  antiquity  population-genetics  nibble  stylized-facts  methodology  language  selection  ideas  aphorism  rant  africa  lol  population  pop-structure  china  asia  multiplicative  iteration-recursion  magnitude  quantitative-qualitative
november 2016 by nhaliday
What’s the catch? | West Hunter
Neanderthals and the Wrath of Khan

if someone were to try to create a Neanderthal a few years from now, starting with ancient DNA, they’d have to worry a lot about data errors, because such errors would translate into mutations, which might be harmful or even lethal. Assume that we have figured out how to get the gene expression right, have all the proper methylation etc: we have modern humans as a template and you know there isn’t that much difference.

They might try consensus averaging – take three high-quality Neanderthal genomes and make your synthetic genome by majority rule: we ignore a nucleotide change in one genome if it’s not there in the other two. ‘tell me three times’, a simple form of error-correcting code.

But doing this would cause a problem. Can you see what the problem is?
west-hunter  sapiens  speculation  enhancement  archaics  discussion  genetics  genetic-load  🌞  gedanken  unintended-consequences  cocktail  error  aDNA  signal-noise  coding-theory  scitariat  wild-ideas  ideas  archaeology  perturbation  iteration-recursion  duplication  forms-instances  traces
november 2016 by nhaliday
On the morning after a Doge’s death the members of the “Maggior Consiglio,” the council representing the freemen of the city, convened to first select by lot 30 of its members older than 30 years, who were designated as “electors.” But if—perhaps by analogy to the U.S. Electoral College—you think that the 30 then simply elected the Doge, you’re mistaken.

Those 30 were reduced to 9 by lot. The 9 then designated a group of 40, each of whom needed 7 approval votes out of the 9 members of the committee. Back in the hall, with the entire Maggior Consiglio present, these 40 would be reduced by lot to 12. As before, the 12 would nominate 25 names, subject to approval by 9 members of the committee. In the hall, these 25 would be reduced to 9. In turn, the 9 nominated 5 names each, commanding the support of at least 7 members of the group. Those 45 would be reduced by lot to 11, and then nominate the 41 true electors of the Doge.

Only then would the real election begin. The 41 were kept isolated in the ducal palace until the Doge was elected. Each member of the “Quarantuno” could nominate a candidate. In the early years, a name would be picked at random, and a yes/no vote would be held. This would be repeated until a candidate was found who had the support of 25 members. Later on, this sequential procedure was changed to simultaneous approval vote, where each member of the Quarantuno votes yes or no on each of the nominated candidates. The candidate with the highest number of votes would be elected as the new Doge, provided that he had attained at least 25 yes votes.

The complex voting and randomization procedure was connected to a broader set of rules curbing electoral patronage, corruption, and factions.
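The ten-stage alternation of lot and nomination can be sketched as a simulation. This is a loose sketch: the nomination rounds below just draw fresh candidates from the full council at random, rather than modeling the 7-of-9 approval votes, and the council size is made up.

```python
import random

def elect_doge_electors(council, rng=random):
    """Sketch of the Venetian sortition chain. 'lot' stages sample from
    the current pool; 'nominate' stages are approximated as a fresh
    random draw from the whole council (the real procedure required
    supermajority approval, e.g. 7 of 9, for each nominee)."""
    stages = [("lot", 30), ("lot", 9), ("nominate", 40), ("lot", 12),
              ("nominate", 25), ("lot", 9), ("nominate", 45), ("lot", 11),
              ("nominate", 41)]
    council = list(council)
    pool = council
    for kind, size in stages:
        source = pool if kind == "lot" else council
        pool = rng.sample(source, size)
    return pool  # the 41 true electors of the Doge

electors = elect_doge_electors(range(1000))
print(len(electors))  # 41
```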
polisci  government  politics  history  mediterranean  europe  trivia  social-choice  org:med  iteration-recursion  org:popup  medieval  early-modern  coalitions  machiavelli  trust  sequential  enlightenment-renaissance-restoration-reformation  mechanism-design
november 2016 by nhaliday
Megafaunal Extinctions | West Hunter
When competent human hunters encountered naive fauna, the biggest animals, things like mammoths and toxodons and diprotodons, all went extinct. It is not hard to see why this occurred. Large animals are more worth hunting than rabbits, and easier to catch, while having a far lower reproductive rate. Moreover, humans are not naturally narrow specialists on any one species, so are not limited by the abundance of that species in the way that the lynx population depends on the hare population. Being omnivores, they could manage even when the megafauna as a whole were becoming rare.

There were subtle factors at work as well: the first human colonists in a new land probably didn’t develop ethnic/language splits for some time, which meant that the no-mans-land zones between tribes that can act as natural game preserves didn’t exist in that crucial early period. Such game preserves might have allowed the megafauna to evolve better defenses against humans – but they never got the chance.

It happened in the Americas, in Australia, in New Zealand, in Madagascar, and in sundry islands. There is no reason to think that climate had much to do with it, except in the sense that climatic change may sometimes have helped open up a path to those virgin lands in which the hand of man had never set foot, via melting glaciers or low sea level.

I don’t know the numbers, but certainly a large fraction of archeologists and paleontologists, perhaps a majority, don’t believe that human hunters were responsible, or believe that hunting was only one of several factors. Donald Grayson and David Meltzer, for example. Why do they think this? In part I think it is an aversion to simple explanations, a reversal of Ockham’s razor, which is common in these fields. Of course then I have to explain why they would do such a silly thing, and I can’t. Probably some with these opinions are specialists in a particular geographic area, and do not appreciate the power of looking at multiple extinction events: it’s pretty hard to argue that the climate just happened to change whenever people showed up, when it happens five or six times.

It might be that belief in specialization is even more of a problem than specialization itself. Lots of times you have to gather insights and information from several fields to make progress on a puzzle. It seems to me that many researchers aren’t willing to learn much outside their field, even when it’s the only route to the answer. But then, maybe they can’t. I remember an anthropologist who could believe in humans rapidly filling up New Zealand, which is about the size of Colorado, but just couldn’t see how they could have managed to fill up a whole continent in a couple of thousand years. Evidently she didn’t understand geometric growth. She is not alone. I have seen anthropologists argue [The revolution that wasn’t] that increased human density in ancient Africa was driven by the continent ‘finally getting full’, rather than increased intellectual abilities and resulting greater technological sophistication. That’s truly silly. Look, back in those days, technology changed slowly: you would hardly notice significant change over 50k years. Human populations grow far faster than that, given the chance. Imagine a population with three surviving children per couple, which is nothing special: it would grow by a factor of ten million in a thousand years. The average long-term growth rate was very low, but that is because the rate of increase in human capabilities, which determine the carrying capacity, was very slow – not because rapid population growth is difficult or impossible.

I could explain this to my 11-year old twins in five minutes, but I don’t know that I could ever explain it to Brooks and McBrearty.
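The geometric-growth claim is easy to verify, assuming a generation time of roughly 25 years (40 generations per millennium):

```python
# Checking the quote's arithmetic: three surviving children per couple
# means the population multiplies by 1.5 each generation. With a
# (hypothetical) generation time of ~25 years, a thousand years is 40
# generations.

growth_per_generation = 3 / 2
generations = 1000 // 25
factor = growth_per_generation ** generations
print(f"{factor:.2e}")  # ~1.1e7, roughly the "factor of ten million"
```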

https://westhunt.wordpress.com/2012/05/20/megafaunal-extinctions/#comment-3039
Why do people act as if a slightly more habitable Greenland a millennium ago somehow disproves the statement that the world as a whole was cooler then than now? Motivated reasoning: they want a certain conclusion real bad. At this point it’s become an identifying tribal marker, like left-wingers believing in the innocence of Alger Hiss. And of course they’re mostly just repeating nonsense that some flack dreamed up. Many of the same people will mouth drivel about how a Finn and a Zulu could easily be genetically closer to each other than to other co-ethnics, which is never, ever, true.

When you think about it, falsehoods, stupid crap, make the best group identifiers, because anyone might agree with you when you’re obviously right. Signing up to clear nonsense is a better test of group loyalty. A true friend is with you when you’re wrong. Ideally, not just wrong, but barking mad, rolling around in your own vomit wrong. Movement conservatives have learned this lesson well.

https://westhunt.wordpress.com/2013/09/12/younger-dryas-meteorite/
It has been suggested that a large meteorite was responsible for an odd climatic twitch from about 12,800 to 11,500 years ago (the Younger Dryas, a temporary return to glacial conditions in the Northern Hemisphere) and for the extinction of the large mammals of North America. They hypothesize air bursts or impact of a swarm of meteors, centered around the Great Lakes. Probably this is all nonsense.

The topic of the Holocene extinction of megafauna seems to bring out the crazy in people. In my opinion, the people supporting this Younger Dryas impact hypothesis are nuts, and half of their opponents are nuts as well.

...

The problem for that meteorite explanation of North American megafaunal extinction is that South America had an even more varied set of megafauna (gomphotheriums, toxodonts, macrauchenia, glyptodonts, giant sloths, etc) and they went extinct around the same time (probably a few hundred years later). There’s no way for a hit around the Great Lakes to wipe out stuff in Patagonia, barring a huge, dinosaur-killer type hit that throws a tremendous amount of debris into suborbital trajectories. But that would have hit the entire world… Didn’t happen.

https://westhunt.wordpress.com/2012/05/26/redlining/
If you take too many chances in the process of making a living, you’ll get yourself killed before you manage to raise a family. Therefore there is a maximum sustainable risk per calorie acquired from hunting *. If the average member of the species incurs too much risk, more than that sustainable maximum, the species goes extinct. The Neanderthals must have come closer to that red line than anatomically modern humans in Africa, judging from their beat-up skeletons, which resemble those of rodeo riders. They were almost entirely carnivorous, judging from isotopic studies, and that helps us understand all those fractures: they apparently had limited access to edible plants, which entail far lower risks. Tubers and berries seldom break your ribs.

...

Risk per calorie was particularly high among the Neanderthals because they seem to have had no way of storing meat – they had no drying racks or storage pits in frozen ground like those used by their successors. Think of it this way: storage allows more complete usage of a large carcass such as a bison, that might weigh over a thousand pounds – it wouldn’t be easy to eat all of that before it went bad. Higher utilization – using all of the buffalo – drops the risk per calorie.

You might think that they could have chased rabbits or whatever, but that is relatively unrewarding. It works a lot better if you can use nets or snares, but no evidence of such devices has been found among the Neanderthals.

It looks as if the Neanderthals had health insurance: surely someone else fed them while they were recovering from being hurt. You see the same pattern, to a degree, in lions, and it probably existed in sabertooths as well, since they often exhibit significant healed injuries.

...

So we can often understand the pattern, but why were mammoths rapidly wiped out in the Americas while elephants survived in Africa and south Asia? I offer several possible explanations. First, North American mammoths had no evolved behavioral defenses against man – while Old World elephants had had time to acquire such adaptations. That may have made hunting old world elephants far more dangerous, and therefore less attractive. Second, there are areas in Africa that are almost uninhabitable, due to the tsetse fly. They may have acted as natural game preserves, and there are no equivalents in the Americas. Third, the Babel effect: in the early days, paleoIndians likely had not yet split into different ethnic groups with different languages: with less fighting among the early Indians, animals would not have had relatively empty border regions acting as refugia. Also, with fewer human-caused casualties, paleoindians could have taken more risks in hunting.

https://westhunt.wordpress.com/2013/09/18/hunter-gatherer-fish-and-game-laws/
I don’t think that there are any. But then how did they manage to be one-with-the-land custodians of wildlife? Uh….

Conservation is hard. Even if the population as a whole would be better off if a given prey species persisted in fair numbers, any single individual would benefit from cheating – even from eating the very last mammoth.

More complicated societies, with private property and draconian laws against poaching, do better, but even they don’t show much success in preserving a tasty prey species over the long haul. Consider the aurochs, the wild ancestor of the cow. The Indian version seems to have been wiped out 4-5,000 years ago. The Eurasian version was still common in Roman times, but was rare by the 13th century, surviving only in Poland. Theoretically, only members of the Piast dynasty could hunt aurochsen – but they still went extinct in 1627.

How then did edible species survive in pre-state societies? I can think of several ways in which some species managed to survive … [more]
west-hunter  sapiens  antiquity  rant  nature  occam  thick-thin  migration  scitariat  info-dynamics  multi  archaics  nihil  archaeology  kumbaya-kult  the-trenches  discussion  speculation  ideas  environment  food  energy-resources  farmers-and-foragers  history  bio  malthus  cooperate-defect  property-rights  free-riding  public-goodish  alt-inst  population  density  multiplicative  technology  iteration-recursion  magnitude  quantitative-qualitative  study  contradiction  no-go  spreading  death  interests  climate-change  epistemic  truth  coalitions  left-wing  right-wing  science  poast  europe  nordic  agriculture  efficiency  tribalism  signaling  us-them  leviathan  duty  cohesion  organizing  axelrod  westminster  preference-falsification  illusion  inference  apollonian-dionysian
november 2016 by nhaliday
Hidden Games | West Hunter
Since we are arguably a lot smarter than ants or bees, you might think that most adaptive personality variation in humans would be learned (a response to exterior cues) rather than heritable. Maybe some is, but much variation looks heritable. People don’t seem to learn to be aggressive or meek – they just are, and in those tendencies resemble their biological parents. I wish I (or anyone else) understood better why this is so, but there are some notions floating around that may explain it. One is that jacks of all trades are masters of none: if you play the same role all the time, you’ll be better at it than someone who keeps switching personalities. It could be the case that such switching is physiologically difficult and/or expensive. And in at least some cases, being predictable has social value. Someone who is known to be implacably aggressive will win at ‘chicken’. Being known as the sort of guy who would rush into a burning building to save ugly strangers may pay off, even though actually running into that blaze does not.

...

This kind of game-theoretic genetic variation, driving distinct behavioral strategies, can have some really odd properties. For one thing, there can be more than one possible stable mix of behavioral types even in identical ecological situations. It’s a bit like dropping a marble onto a hilly landscape with many unconnected valleys – it will roll to the bottom of some valley, but initial conditions determine which valley. Small perturbations will not knock the marble out of the valley it lands in. In the same way, two human populations could fall into different states, different stable mixes of behavioral traits, for no reason at all other than chance and then stay there indefinitely. Populations are even more likely to fall into qualitatively different stable states when the ecological situations are significantly different.

...

What this means, I think, is that it is entirely possible that human societies fall into fundamentally different patterns because of genetic influences on behavior that are best understood via evolutionary game theory. Sometimes one population might have a psychological type that doesn’t exist at all in another society, or the distribution could be substantially different. Sometimes these different social patterns will be predictable results of different ecological situations, sometimes the purest kind of chance. Sometimes the internal dynamics of these genetic systems will produce oscillatory (or chaotic!) changes in gene frequencies over time, which means changes in behavior and personality over time. In some cases, these internal genetic dynamics may be the fundamental reason for the rise and fall of empires. Societies in one stable distribution, in a particular psychological/behavioral/life history ESS, may simply be unable to replicate some of the institutions found in peoples in a different ESS.

Evolutionary forces themselves vary according to what ESS you’re in. Which ESS you’re in may be the most fundamental ethnic fact, and explain the most profound ethnic behavioral differences.

Look, everyone is always looking for the secret principles that underlie human society and history, some algebra that takes mounds of historical and archaeological data – the stuff that happens – and explains it in some compact way, lets us understand it, just as continental drift made a comprehensible story out of geology. On second thought, ‘everyone’ means that smallish fraction of researchers that are slaves of curiosity…

This approach isn’t going to explain everything – nothing will. But it might explain a lot, which would make it a hell of a lot more valuable than modern sociology or cultural anthropology. I would hope that an analysis of this sort might help explain fundamental long-term flavor differences between different human societies, differences in life-history strategies especially (dads versus cads, etc). If we get particularly lucky, maybe we’ll have some notions of why the Mayans got bored with civilization, why Chinese kids are passive at birth while European and African kids are feisty. We’ll see.

Of course we could be wrong. It’s going to have be tested and checked: it’s not magic. It is based on the realization that the sort of morphs and game-theoretic balances we see in some nonhuman species are if anything more likely to occur in humans, because our societies are so complex, because the effectiveness of a course of action so often depends on the psychologies of other individuals – that and the obvious fact that people are not the same everywhere.
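A minimal concrete instance of the game-theoretic balance described here is the classic hawk-dove model, where replicator dynamics settle at a stable mix of aggressive and meek types regardless of the starting frequencies (payoffs V and C below are illustrative, not from the post):

```python
# Hawk-Dove: hawks fight (win V or pay cost C), doves share. When C > V,
# neither pure strategy is stable; the population converges to a mixed
# ESS with hawk frequency V/C, a stable balance of aggressive and meek
# types like the heritable personality mix discussed above.

V, C = 2.0, 8.0  # illustrative payoff values

def hawk_payoff(p):   # p = current hawk frequency
    return p * (V - C) / 2 + (1 - p) * V

def dove_payoff(p):
    return (1 - p) * V / 2

p = 0.9  # start far from equilibrium
for _ in range(2000):
    avg = p * hawk_payoff(p) + (1 - p) * dove_payoff(p)
    p += 0.01 * p * (hawk_payoff(p) - avg)  # discrete replicator step

print(round(p, 3))  # converges to the mixed ESS V/C = 0.25
```

Starting instead from p = 0.1 converges to the same 0.25 mix: the valley floor, in the marble metaphor.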
west-hunter  sapiens  game-theory  evolution  personality  thinking  essay  adversarial  GT-101  EGT  scitariat  tradeoffs  equilibrium  strategy  distribution  sociality  variance-components  flexibility  rigidity  diversity  biodet  behavioral-gen  nature  within-without  roots  explanans  psychology  social-psych  evopsych  intricacy  oscillation  pro-rata  iteration-recursion  insight  curiosity  letters  models  theory-practice  civilization  latin-america  farmers-and-foragers  age-of-discovery  china  asia  sinosphere  europe  the-great-west-whale  africa  developmental  empirical  humanity  courage  virtu  theory-of-mind  reputation  cybernetics  random  degrees-of-freedom  manifolds  occam  parsimony  turchin  broad-econ  deep-materialism  cultural-dynamics  anthropology  cliometrics  hari-seldon  learning  ecology  context  leadership  cost-benefit  apollonian-dionysian  detail-architecture  history  antiquity  pop-diff  comparison  plots  being-becoming  number  uniqueness
november 2016 by nhaliday
Overcoming Bias : Trump, Political Innovator
Many have expressed great anxiety about Trump’s win, saying that he is bad overall because he induces greater global and domestic uncertainty. In their mind, this includes higher chances of wars, coups, riots, collapse of democracy, and so on. But overall these seem to be generic consequences of political innovation. Innovation in general is disruptive and costly in the short run, but can aid adaptation in the long run.

So you can dislike Trump for two very different reasons. First, you can dislike innovation on the other side of the political spectrum, as you see that coming at the expense of your side. Or you can dislike political innovation in general. But if innovation is the process of adapting to changing conditions, it must be mostly a question of when, not if. And less frequent innovations are probably bigger changes, which is probably more disruptive overall.

So what you should really be asking is: what were the obstacles to smaller past innovations in Trump’s new direction? And how can we reduce such obstacles?

https://www.overcomingbias.com/2016/11/dial-it-back.html
In a repeated game, where the same people play the same game over and over, cooperation can more easily arise than in a one-shot version of the game, where such people play only once and then never interact again. This sort of cooperation gets easier the more that players care about the many future iterations of the game, compared to the current iteration.

When a group repeats the same game, but some iterations count much more than others, then defection from cooperation is most likely at a big “endgame” iteration.
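The standard arithmetic behind this: with prisoner's-dilemma payoffs T > R > P > S and discount factor d, grim-trigger cooperation is sustainable iff d >= (T - R)/(T - P). A sketch with textbook payoff values (not from Hanson's post):

```python
# "Cooperation gets easier the more players care about future
# iterations": cooperating forever pays R each round, worth R/(1-d);
# defecting pays T once, then P forever after, worth T + d*P/(1-d).
# Setting these equal gives the threshold d = (T - R)/(T - P).

T, R, P, S = 5, 3, 1, 0  # illustrative prisoner's-dilemma payoffs

def cooperation_sustainable(d):
    cooperate_value = R / (1 - d)
    defect_value = T + d * P / (1 - d)
    return cooperate_value >= defect_value

threshold = (T - R) / (T - P)
print(threshold)                     # 0.5
print(cooperation_sustainable(0.6))  # True: future iterations matter enough
print(cooperation_sustainable(0.4))  # False: defection pays
```

A big "endgame" iteration is one where the effective d drops, pushing players below the threshold.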

https://www.overcomingbias.com/2016/11/careful-who-you-call-racist.html
https://www.overcomingbias.com/2016/11/get-a-grip-theres-a-much-bigger-picture.html
past and future contextualize present
hanson  ratty  politics  2016-election  thinking  innovation  society  coalitions  contrarianism  polisci  insight  hmm  idk  trump  2016  social-choice  elections  current-events  multi  rhetoric  culture-war  game-theory  iteration-recursion  sequential  GT-101  race  language  identity-politics  cooperate-defect  polarization  regularizer  futurism  rationality  essay  big-picture  context
november 2016 by nhaliday
Overcoming Bias : Lognormal Jobs
could be the case that exponential tech improvement -> linear job replacement, as long as distribution of jobs across automatability is log-normal (I don't entirely follow the argument)

Paul Christiano has objection (to premise not argument) in the comments
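A rough numeric reading of the argument (parameters made up): if log automation-difficulty is normally distributed and tech capability grows exponentially, the capability threshold moves linearly in log space, so the fraction of jobs automated per year is approximately constant near the middle of the distribution:

```python
import math

# If jobs' "automation difficulty" is lognormal, log-difficulty is
# Normal(mu, sigma). Exponential tech progress moves the log-capability
# threshold linearly in time, so the fraction of jobs automated by time
# t is Phi((t - mu)/sigma); near the middle of the distribution the
# normal CDF is approximately linear, i.e. a roughly constant
# replacement rate. mu and sigma here are made-up illustrative values.

mu, sigma = 50.0, 10.0  # log-difficulty mean/sd, in "years of progress"

def fraction_automated(t):
    return 0.5 * (1 + math.erf((t - mu) / (sigma * math.sqrt(2))))

rates = [fraction_automated(t + 1) - fraction_automated(t)
         for t in range(45, 55)]
print([round(r, 4) for r in rates])  # nearly constant ~0.04/year mid-curve
```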
hanson  thinking  street-fighting  futurism  automation  labor  economics  ai  prediction  🎩  gray-econ  regularizer  contrarianism  c:*  models  distribution  marginal  2016  meta:prediction  discussion  clever-rats  ratty  speedometer  ideas  neuro  additive  multiplicative  magnitude  iteration-recursion
november 2016 by nhaliday
Local max-cut in smoothed polynomial time | I’m a bandit
local = maximal wrt flipping a single vertex (algorithm is to just flip as long as improvement possible)
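The local-search algorithm in the note, as a minimal sketch (unit edge weights; the paper analyzes the smoothed number of such flips):

```python
# Local search for max-cut: start from any 2-coloring and flip a single
# vertex whenever the flip increases the cut; stop at a local maximum.

def local_max_cut(n, edges):
    side = [i % 2 for i in range(n)]  # arbitrary starting partition
    improved = True
    while improved:
        improved = False
        for v in range(n):
            # gain from flipping v = (same-side neighbors) - (cut neighbors)
            gain = sum(1 if side[u] == side[v] else -1
                       for a, b in edges if v in (a, b)
                       for u in (a, b) if u != v)
            if gain > 0:
                side[v] = 1 - side[v]
                improved = True
    cut = sum(1 for a, b in edges if side[a] != side[b])
    return side, cut

side, cut = local_max_cut(4, [(0, 1), (1, 2), (2, 3), (3, 0)])
print(cut)  # 4: the 4-cycle is bipartite, so the local optimum cuts every edge
```

Each flip raises the cut value by at least 1, so the loop terminates; the smoothed-analysis question is how many flips that can take.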
tidbits  optimization  algorithms  papers  acm  exposition  acmtariat  sebastien-bubeck  local-global  liner-notes  org:bleg  nibble  average-case  tcs  time-complexity  iteration-recursion  elegance
october 2016 by nhaliday
Overcoming Bias : Two Kinds Of Status
prestige and dominance

More here. I was skeptical at first, but now am convinced: humans see two kinds of status, and approve of prestige-status much more than domination-status. I’ll have much more to say about this in the coming days, but it is far from clear to me that prestige-status is as much better than domination-status as people seem to think. Efforts to achieve prestige-status also have serious negative side-effects.

Two Ways to the Top: Evidence That Dominance and Prestige Are Distinct Yet Viable Avenues to Social Rank and Influence: https://henrich.fas.harvard.edu/files/henrich/files/cheng_et_al_2013.pdf
Dominance (the use of force and intimidation to induce fear) and Prestige (the sharing of expertise or know-how to gain respect)

...

According to the model, Dominance initially arose in evolutionary history as a result of agonistic contests for material resources and mates that were common among nonhuman species, but continues to exist in contemporary human societies, largely in the form of psychological intimidation, coercion, and wielded control over costs and benefits (e.g., access to resources, mates, and well-being). In both humans and nonhumans, Dominance hierarchies are thought to emerge to help maintain patterns of submission directed from subordinates to Dominants, thereby minimizing agonistic battles and incurred costs.

In contrast, Prestige is likely unique to humans, because it is thought to have emerged from selection pressures to preferentially attend to and acquire cultural knowledge from highly skilled or successful others, a capacity considered to be less developed in other animals (Boyd & Richerson, 1985; Laland & Galef, 2009). In this view, social learning (i.e., copying others) evolved in humans as a low-cost fitness-maximizing, information-gathering mechanism (Boyd & Richerson, 1985). Once it became adaptive to copy skilled others, a preference for social models with better than average information would have emerged. This would promote competition for access to the highest quality models, and deference toward these models in exchange for copying and learning opportunities. Consequently, selection likely favored Prestige differentiation, with individuals possessing high-quality information or skills elevated to the top of the hierarchy. Meanwhile, other individuals may reach the highest ranks of their group’s hierarchy by wielding threat of force, regardless of the quality of their knowledge or skills. Thus, Dominance and Prestige can be thought of as coexisting avenues to attaining rank and influence within social groups, despite being underpinned by distinct motivations and behavioral patterns, and resulting in distinct patterns of imitation and deference from subordinates.

Importantly, both Dominance and Prestige are best conceptualized as cognitive and behavioral strategies (i.e., suites of subjective feelings, cognitions, motivations, and behavioral patterns that together produce certain outcomes) deployed in certain situations, and can be used (with more or less success) by any individual within a group. They are not types of individuals, or even, necessarily, traits within individuals. Instead, we assume that all situated dyadic relationships contain differential degrees of both Dominance and Prestige, such that each person is simultaneously Dominant and Prestigious to some extent, to some other individual. Thus, it is possible that a high degree of Dominance and a high degree of Prestige may be found within the same individual, and may depend on who is doing the judging. For example, by controlling students’ access to rewards and punishments, school teachers may exert Dominance in their relationships with some students, but simultaneously enjoy Prestige with others, if they are respected and deferred to for their competence and wisdom. Indeed, previous studies have shown that based on both self- and peer ratings, Dominance and Prestige are largely independent (mean r = -.03; Cheng et al., 2010).

Status Hypocrisy: https://www.overcomingbias.com/2017/01/status-hypocrisy.html
Today we tend to say that our leaders have prestige, while their leaders have dominance. That is, their leaders hold power via personal connections and the threat and practice of violence, bribes, sex, gossip, and conformity pressures. Our leaders, instead, mainly just have whatever abilities follow from our deepest respect and admiration regarding their wisdom and efforts on serious topics that matter for us all. Their leaders more seek power, while ours more have leadership thrust upon them. Because of this us/them split, we tend to try to use persuasion on us, but force on them, when seeking to change behaviors.

...

Clearly, while there is some fact of the matter about how much a person gains their status via licit or illicit means, there is also a lot of impression management going on. We like to give others the impression that we personally mainly want prestige in ourselves and our associates, and that we only grant others status via the prestige they have earned. But let me suggest that, compared to this ideal, we actually want more dominance in ourselves and our associates than we like to admit, and we submit more often to dominance.

"The proper dichotomy is not “virile vs. wimpy” as has been supposed, but “exciting vs. drab,” with the former having the two distinct sub-groups “macho man vs. pretty boy.” Another way to see that this is the right dichotomy is to look around the world: wherever girls really dig macho men, they also dig the peacocky musician type too, finding safe guys a bit boring. And conversely, where devoted dads do the best, it’s more difficult for macho men or in-town-for-a-day rockstars to make out like bandits. …

Whatever it is about high-pathogen-load areas that selects for greater polygynous behavior … will result in an increase in both gorilla-like and peacock-like males, since they’re two viable ways to pursue a polygynous mating strategy."

This fits with there being two kinds of status: dominance and prestige. Macho men, such as CEOs and athletes, have dominance, while musicians and artists have prestige. But women seek both short and long term mates. Since both kinds of status suggest good genes, both attract women seeking short term mates. This happens more when women are younger and richer, and when there is more disease. Foragers pretend they don’t respect dominance as much as they do, so prestigious men get more overt attention, while dominant men get more covert attention.

Women seeking long term mates also consider a man’s ability to supply resources, and may settle for poorer genes to get more resources. Dominant men tend to have more resources than prestigious men, so such men are more likely to fill both roles, being long term mates for some women and short term mates for others. Men who can offer only prestige must accept worse long term mates, while men who can offer only resources must accept few short term mates. Those low in prestige, resources, or dominance must accept no mates. A man who had prestige, dominance, and resources would get the best short and long term mates – what men are these?

Stories are biased toward dramatic events, and so are biased toward events with risky men; it is harder to tell a good story about the attraction of a resource-rich man. So stories naturally encourage short term mating. Shouldn’t this make long-term mates wary of strong mate attraction to dramatic stories?

Women want three things: someone to fight for them (the Warrior), someone to provide for them (the Tycoon) and someone to excite their emotions or entertain them (the Wizard).

In this context,

Dom=Warrior

To repeat:

There is an old distinction between "proximate" and "ultimate" causes. Evolution is an ultimate cause, physiology (and psychology, here) is a proximate cause. The flower bends to follow the sun because it gathers more light that way, but the immediate mechanism of the bending involves hormones called auxins. I see a lot of speculation about, say, sexual cognitive dimorphism whose ultimate cause is evolutionary, but not so much speculation about the proximate cause - the "how" of the difference, rather than the "why". And here I think a visit to an older mode of explanation like Marsden's - one which is psychological rather than genetic - can sensitize us to the fact that the proximate causes of a behavioral tendency need not be a straightforward matter of being hardwired differently.

This leads to my second point, which is just that we should remember that human beings actually possess consciousness. This means not only that the proximate cause of a behavior may deeply involve subjectivity, self-awareness, and an existential situation. It also means that all of these propositions about what people do are susceptible to change once they have been spelled out and become part of the culture. It is rather like the stock market: once everyone knows (or believes) something, then that information provides no advantage, creating an incentive for novelty.

Finally, the consequences of new beliefs about the how and the why of human nature and human behavior. Right or wrong, theories already begin to have consequences once they are taken up and incorporated into subjectivity. We really need a new Foucault to take on this topic.

The Economics of Social Status: http://www.meltingasphalt.com/the-economics-of-social-status/
Prestige vs. dominance. Joseph Henrich (of WEIRD fame) distinguishes two types of status. Prestige is the kind of status we get from being an impressive human specimen (think Meryl Streep), and it's governed by our 'approach' instincts. Dominance, on the other hand, is … [more]
things  status  hanson  thinking  comparison  len:short  anthropology  farmers-and-foragers  phalanges  ratty  duty  power  humility  hypocrisy  hari-seldon  multi  sex  gender  signaling  🐝  tradeoffs  evopsych  insight  models  sexuality  gender-diff  chart  postrat  yvain  ssc  simler  critique  essay  debate  paying-rent  gedanken  empirical  operational  vague  info-dynamics  len:long  community  henrich  long-short-run  rhetoric  contrarianism  coordination  social-structure  hidden-motives  politics  2016-election  rationality  links  study  summary  list  hive-mind  speculation  coalitions  values  🤖  metabuch  envy  universalism-particularism  egalitarianism-hierarchy  s-factor  unintended-consequences  tribalism  group-selection  justice  inequality  competition  cultural-dynamics  peace-violence  ranking  machiavelli  authoritarianism  strategy  tactics  organizing  leadership  management  n-factor  duplication  thiel  volo-avolo  todo  technocracy  rent-seeking  incentives  econotariat  marginal-rev  civilization  rot  gibbon
september 2016 by nhaliday
Are You Living in a Computer Simulation?
Bostrom's anthropic arguments

https://www.jetpress.org/volume7/simulation.htm
In sum, if your descendants might make simulations of lives like yours, then you might be living in a simulation. And while you probably cannot learn much detail about the specific reasons for and nature of the simulation you live in, you can draw general conclusions by making analogies to the types and reasons of simulations today. If you might be living in a simulation then all else equal it seems that you should care less about others, live more for today, make your world look likely to become eventually rich, expect to and try to participate in pivotal events, be entertaining and praiseworthy, and keep the famous people around you happy and interested in you.

Theological Implications of the Simulation Argument: https://www.tandfonline.com/doi/pdf/10.1080/15665399.2010.10820012
Nick Bostrom’s Simulation Argument (SA) has many intriguing theological implications. We work out some of them here. We show how the SA can be used to develop novel versions of the Cosmological and Design Arguments. We then develop some of the affinities between Bostrom’s naturalistic theogony and more traditional theological topics. We look at the resurrection of the body and at theodicy. We conclude with some reflections on the relations between the SA and Neoplatonism (friendly) and between the SA and theism (less friendly).

https://www.gwern.net/Simulation-inferences
lesswrong  philosophy  weird  idk  thinking  insight  links  summary  rationality  ratty  bostrom  sampling-bias  anthropic  theos  simulation  hanson  decision-making  advice  mystic  time-preference  futurism  letters  entertainment  multi  morality  humility  hypocrisy  wealth  malthus  power  drama  gedanken  pdf  article  essay  religion  christianity  the-classics  big-peeps  iteration-recursion  aesthetics  nietzschean  axioms  gwern  analysis  realness  von-neumann  space  expansionism  duplication  spreading  sequential  cs  computation  outcome-risk  measurement  empirical  questions  bits  information-theory  efficiency  algorithms  physics  relativity  ems  neuro  data  scale  magnitude  complexity  risk  existence  threat-modeling  civilization  forms-instances
september 2016 by nhaliday