
nhaliday : magnitude   158

Linus's Law - Wikipedia
Linus's Law is a claim about software development, named in honor of Linus Torvalds and formulated by Eric S. Raymond in his essay and book The Cathedral and the Bazaar (1999).[1][2] The law states that "given enough eyeballs, all bugs are shallow";

--

In Facts and Fallacies about Software Engineering, Robert Glass refers to the law as a "mantra" of the open source movement, but calls it a fallacy due to the lack of supporting evidence and because research has indicated that the rate at which additional bugs are uncovered does not scale linearly with the number of reviewers; rather, there is a small maximum number of useful reviewers, between two and four, and additional reviewers above this number uncover bugs at a much lower rate.[4] While closed-source practitioners also promote stringent, independent code analysis during a software project's development, they focus on in-depth review by a few and not primarily the number of "eyeballs".[5][6]

Although detection of even deliberately inserted flaws[7][8] can be attributed to Raymond's claim, the persistence of the Heartbleed security bug in a critical piece of code for two years has been considered as a refutation of Raymond's dictum.[9][10][11][12] Larry Seltzer suspects that the availability of source code may cause some developers and researchers to perform less extensive tests than they would with closed source software, making it easier for bugs to remain.[12] In 2015, the Linux Foundation's executive director Jim Zemlin argued that the complexity of modern software has increased to such levels that specific resource allocation is desirable to improve its security. Regarding some of 2014's largest global open source software vulnerabilities, he says, "In these cases, the eyeballs weren't really looking".[11] Large scale experiments or peer-reviewed surveys to test how well the mantra holds in practice have not been performed.

Given enough eyeballs, all bugs are shallow? Revisiting Eric Raymond with bug bounty programs: https://academic.oup.com/cybersecurity/article/3/2/81/4524054

https://hbfs.wordpress.com/2009/03/31/how-many-eyeballs-to-make-a-bug-shallow/
wiki  reference  aphorism  ideas  stylized-facts  programming  engineering  linux  worse-is-better/the-right-thing  correctness  debugging  checking  best-practices  security  error  scale  ubiquity  collaboration  oss  realness  empirical  evidence-based  multi  study  info-econ  economics  intricacy  plots  manifolds  techtariat  cracker-prog  os  systems  magnitude  quantitative-qualitative  number  threat-modeling 
october 2019 by nhaliday
"Performance Matters" by Emery Berger - YouTube
Stabilizer is a tool that enables statistically sound performance evaluation, making it possible to understand the impact of optimizations and conclude things like the fact that the -O2 and -O3 optimization levels are indistinguishable from noise (sadly true).

Since compiler optimizations have run out of steam, we need better profiling support, especially for modern concurrent, multi-threaded applications. Coz is a new "causal profiler" that lets programmers optimize for throughput or latency, and which pinpoints and accurately predicts the impact of optimizations.

- randomize extraneous factors like code layout and stack size to avoid spurious speedups
- simulate speedup of component of concurrent system (to assess effect of optimization before attempting) by slowing down the complement (all but that component)
- latency vs. throughput, Little's law
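
On the Little's law bullet: a minimal numeric sketch (hypothetical numbers, nothing from the talk) of the L = λW bookkeeping that a latency-vs-throughput choice is trading off:

# Little's law: L = lambda * W
# L      = average number of requests in flight
# lambda = throughput (requests per second)
# W      = average latency (seconds)

def in_flight(throughput_rps, latency_s):
    """Average concurrency implied by Little's law."""
    return throughput_rps * latency_s

# a hypothetical service doing 2000 req/s at 50 ms average latency
# must be holding about 100 requests in flight at any instant
print(in_flight(2000, 0.050))  # 100.0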
video  presentation  programming  engineering  nitty-gritty  performance  devtools  compilers  latency-throughput  concurrency  legacy  causation  wire-guided  let-me-see  manifolds  pro-rata  tricks  endogenous-exogenous  control  random  signal-noise  comparison  marginal  llvm  systems  hashing  computer-memory  build-packaging  composition-decomposition  coupling-cohesion  local-global  dbs  direct-indirect  symmetry  research  models  metal-to-virtual  linux  measurement  simulation  magnitude  realness  hypothesis-testing  techtariat 
october 2019 by nhaliday
Anti-hash test. - Codeforces
- Thue-Morse sequence
- nice paper: http://www.mii.lt/olympiads_in_informatics/pdf/INFOL119.pdf
In general, polynomial string hashing is a useful technique in construction of efficient string algorithms. One simply needs to remember to carefully select the modulus M and the variable of the polynomial p depending on the application. A good rule of thumb is to pick both values as prime numbers with M as large as possible so that no integer overflow occurs and p being at least the size of the alphabet.
2.2. Upper Bound on M
[stuff about 32- and 64-bit integers]
2.3. Lower Bound on M
On the other side, M is bounded due to the well-known birthday paradox: if we consider a collection of m keys with m ≥ 1.2√M, then the chance of a collision occurring within this collection is at least 50% (assuming that the distribution of fingerprints is close to uniform on the set of all strings). Thus if the birthday paradox applies, one needs to choose M = ω(m^2) to have a fair chance of avoiding a collision. However, one should note that the birthday paradox does not always apply. As a benchmark, consider the following two problems.

I generally prefer to use Schwartz-Zippel to reason about collision probabilities w/ this kind of thing, eg, https://people.eecs.berkeley.edu/~sinclair/cs271/n3.pdf.

A good way to get more accurate results: just use multiple primes and the Chinese remainder theorem to get as large an M as you need w/o going beyond 64-bit arithmetic.

more on this: https://codeforces.com/blog/entry/60442
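
A minimal sketch of the multiple-primes idea above (my own illustration; the moduli and bases are arbitrary common choices, nothing prescribed in the thread). Comparing hashes under two independent primes behaves like one modulus of roughly M1*M2, pushing the birthday bound well past what a single 64-bit-safe prime allows:

# polynomial string hashing under two independent (modulus, base) pairs;
# treat two strings as equal only if both hash components match
M1, P1 = 1_000_000_007, 911_382_323   # hypothetical prime modulus / base choices
M2, P2 = 998_244_353, 972_663_749

def poly_hash(s, mod, base):
    h = 0
    for ch in s:
        h = (h * base + ord(ch)) % mod
    return h

def double_hash(s):
    return (poly_hash(s, M1, P1), poly_hash(s, M2, P2))

print(double_hash("abracadabra") == double_hash("abracadabra"))  # True
print(double_hash("abracadabra") == double_hash("abracadabrb"))  # False (w.h.p.)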
oly  oly-programming  gotchas  howto  hashing  algorithms  strings  random  best-practices  counterexample  multi  pdf  papers  nibble  examples  fields  polynomials  lecture-notes  yoga  probability  estimate  magnitude  hacker  adversarial  CAS  lattice  discrete 
august 2019 by nhaliday
The Reason Why | West Hunter
There are odd things about the orbits of trans-Neptunian objects that suggest (to some) that there might be an undiscovered super-Earth-sized planet a few hundred AU from the Sun.

We haven’t seen it, but then it would be very hard to see. The underlying reason is simple enough, but I have never seen anyone mention it: the signal from such objects drops as the fourth power of distance from the Sun.   Not the second power, as is the case with luminous objects like stars, or faraway objects that are close to a star.  We can image close-in planets of other stars that are light-years distant, but it’s very difficult to see a fair-sized planet a few hundred AU out.
--
interesting little fun fact
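
writing the fourth-power claim out: the sunlight reaching such an object falls off as 1/d^2, and the reflected light coming back to us falls off as 1/d^2 again, so the received flux goes as 1/d^4. For a hypothetical planet at 300 AU versus Neptune at ~30 AU, that is (300/30)^4 = 10^4 times fainter from distance alone, before size and albedo even enter.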
west-hunter  scitariat  nibble  tidbits  scale  magnitude  visuo  electromag  spatial  space  measurement  paradox  physics 
july 2019 by nhaliday
Why books don’t work | Andy Matuschak
https://www.spreaker.com/user/10197011/designing-and-developing-new-tools-for-t
https://twitter.com/andy_matuschak/status/1190675776036687878
https://archive.is/hNIFG
https://archive.is/f9Bwh
hmm: "zettelkasten-like note systems have you do a linear search for connections, that gets exponentially more expensive as your note body grows"
https://twitter.com/Meaningness/status/1210309788141117440
https://archive.is/P6PH2
https://archive.is/uD9ls
https://archive.is/Sb9Jq

https://twitter.com/Scholars_Stage/status/1199702832728948737
https://archive.is/cc4zf
I reviewed today my catalogue of 420~ books I have read over the last six years and I am in despair. There are probably 100~ whose contents I can tell you almost nothing about—nothing noteworthy anyway.
techtariat  worrydream  learning  education  teaching  higher-ed  neurons  thinking  rhetoric  essay  michael-nielsen  retention  better-explained  bounded-cognition  info-dynamics  info-foraging  books  communication  lectures  contrarianism  academia  scholar  design  meta:reading  studying  form-design  writing  technical-writing  skunkworks  multi  broad-econ  wonkish  unaffiliated  twitter  social  discussion  backup  reflection  metameta  podcast  audio  interview  impetus  space  open-problems  questions  tech  hard-tech  startups  commentary  postrat  europe  germanic  notetaking  graphs  network-structure  similarity  intersection-connectedness  magnitude  cost-benefit  multiplicative 
may 2019 by nhaliday
Theory of Self-Reproducing Automata - John von Neumann
Fourth Lecture: THE ROLE OF HIGH AND OF EXTREMELY HIGH COMPLICATION

Comparisons between computing machines and the nervous systems. Estimates of size for computing machines, present and near future.

Estimates for size for the human central nervous system. Excursus about the “mixed” character of living organisms. Analog and digital elements. Observations about the “mixed” character of all componentry, artificial as well as natural. Interpretation of the position to be taken with respect to these.

Evaluation of the discrepancy in size between artificial and natural automata. Interpretation of this discrepancy in terms of physical factors. Nature of the materials used.

The probability of the presence of other intellectual factors. The role of complication and the theoretical penetration that it requires.

Questions of reliability and errors reconsidered. Probability of individual errors and length of procedure. Typical lengths of procedure for computing machines and for living organisms--that is, for artificial and for natural automata. Upper limits on acceptable probability of error in individual operations. Compensation by checking and self-correcting features.

Differences of principle in the way in which errors are dealt with in artificial and in natural automata. The “single error” principle in artificial automata. Crudeness of our approach in this case, due to the lack of adequate theory. More sophisticated treatment of this problem in natural automata: The role of the autonomy of parts. Connections between this autonomy and evolution.

- 10^10 neurons in brain, 10^4 vacuum tubes in largest computer at time
- machines faster: 5 ms from neuron potential to neuron potential, 10^-3 ms for vacuum tubes
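
putting those two bullets side by side: the brain wins on component count by 10^10 / 10^4 = 10^6, while the vacuum-tube machine wins on switching time by 5 ms / 10^-3 ms = 5×10^3; that is roughly the size-versus-speed discrepancy the fourth lecture is evaluating.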

https://en.wikipedia.org/wiki/John_von_Neumann#Computing
pdf  article  papers  essay  nibble  math  cs  computation  bio  neuro  neuro-nitgrit  scale  magnitude  comparison  acm  von-neumann  giants  thermo  phys-energy  speed  performance  time  density  frequency  hardware  ems  efficiency  dirty-hands  street-fighting  fermi  estimate  retention  physics  interdisciplinary  multi  wiki  links  people  🔬  atoms  duplication  iteration-recursion  turing  complexity  measure  nature  technology  complex-systems  bits  information-theory  circuits  robust  structure  composition-decomposition  evolution  mutation  axioms  analogy  thinking  input-output  hi-order-bits  coding-theory  flexibility  rigidity  automata-languages 
april 2018 by nhaliday
Complexity no Bar to AI - Gwern.net
Critics of AI risk suggest diminishing returns to computing (formalized asymptotically) means AI will be weak; this argument relies on a large number of questionable premises and ignoring additional resources, constant factors, and nonlinear returns to small intelligence advantages, and is highly unlikely. (computer science, transhumanism, AI, R)
created: 1 June 2014; modified: 01 Feb 2018; status: finished; confidence: likely; importance: 10
ratty  gwern  analysis  faq  ai  risk  speedometer  intelligence  futurism  cs  computation  complexity  tcs  linear-algebra  nonlinearity  convexity-curvature  average-case  adversarial  article  time-complexity  singularity  iteration-recursion  magnitude  multiplicative  lower-bounds  no-go  performance  hardware  humanity  psychology  cog-psych  psychometrics  iq  distribution  moments  complement-substitute  hanson  ems  enhancement  parable  detail-architecture  universalism-particularism  neuro  ai-control  environment  climate-change  threat-modeling  security  theory-practice  hacker  academia  realness  crypto  rigorous-crypto  usa  government 
april 2018 by nhaliday
The Hanson-Yudkowsky AI-Foom Debate - Machine Intelligence Research Institute
How Deviant Recent AI Progress Lumpiness?: http://www.overcomingbias.com/2018/03/how-deviant-recent-ai-progress-lumpiness.html
I seem to disagree with most people working on artificial intelligence (AI) risk. While with them I expect rapid change once AI is powerful enough to replace most all human workers, I expect this change to be spread across the world, not concentrated in one main localized AI system. The efforts of AI risk folks to design AI systems whose values won’t drift might stop global AI value drift if there is just one main AI system. But doing so in a world of many AI systems at similar abilities levels requires strong global governance of AI systems, which is a tall order anytime soon. Their continued focus on preventing single system drift suggests that they expect a single main AI system.

The main reason that I understand to expect relatively local AI progress is if AI progress is unusually lumpy, i.e., arriving in unusually fewer larger packages rather than in the usual many smaller packages. If one AI team finds a big lump, it might jump way ahead of the other teams.

However, we have a vast literature on the lumpiness of research and innovation more generally, which clearly says that usually most of the value in innovation is found in many small innovations. We have also so far seen this in computer science (CS) and AI, even if there have been historical examples where much value was found in particular big innovations, such as nuclear weapons or the origin of humans.

Apparently many people associated with AI risk, including the star machine learning (ML) researchers that they often idolize, find it intuitively plausible that AI and ML progress is exceptionally lumpy. Such researchers often say, “My project is ‘huge’, and will soon do it all!” A decade ago my ex-co-blogger Eliezer Yudkowsky and I argued here on this blog about our differing estimates of AI progress lumpiness. He recently offered AlphaGo Zero as evidence of AI lumpiness:

...

In this post, let me give another example (beyond two big lumps in a row) of what could change my mind. I offer a clear observable indicator, for which data should be available now: deviant citation lumpiness in recent ML research. One standard measure of research impact is citations; bigger, lumpier developments gain more citations than smaller ones. And it turns out that the lumpiness of citations is remarkably constant across research fields! See this March 3 paper in Science:

I Still Don’t Get Foom: http://www.overcomingbias.com/2014/07/30855.html
All of which makes it look like I’m the one with the problem; everyone else gets it. Even so, I’m gonna try to explain my problem again, in the hope that someone can explain where I’m going wrong. Here goes.

“Intelligence” just means an ability to do mental/calculation tasks, averaged over many tasks. I’ve always found it plausible that machines will continue to do more kinds of mental tasks better, and eventually be better at pretty much all of them. But what I’ve found it hard to accept is a “local explosion.” This is where a single machine, built by a single project using only a tiny fraction of world resources, goes in a short time (e.g., weeks) from being so weak that it is usually beat by a single human with the usual tools, to so powerful that it easily takes over the entire world. Yes, smarter machines may greatly increase overall economic growth rates, and yes such growth may be uneven. But this degree of unevenness seems implausibly extreme. Let me explain.

If we count by economic value, humans now do most of the mental tasks worth doing. Evolution has given us a brain chock-full of useful well-honed modules. And the fact that most mental tasks require the use of many modules is enough to explain why some of us are smarter than others. (There’d be a common “g” factor in task performance even with independent module variation.) Our modules aren’t that different from those of other primates, but because ours are different enough to allow lots of cultural transmission of innovation, we’ve out-competed other primates handily.

We’ve had computers for over seventy years, and have slowly built up libraries of software modules for them. Like brains, computers do mental tasks by combining modules. An important mental task is software innovation: improving these modules, adding new ones, and finding new ways to combine them. Ideas for new modules are sometimes inspired by the modules we see in our brains. When an innovation team finds an improvement, they usually sell access to it, which gives them resources for new projects, and lets others take advantage of their innovation.

...

In Bostrom’s graph above the line for an initially small project and system has a much higher slope, which means that it becomes in a short time vastly better at software innovation. Better than the entire rest of the world put together. And my key question is: how could it plausibly do that? Since the rest of the world is already trying the best it can to usefully innovate, and to abstract to promote such innovation, what exactly gives one small project such a huge advantage to let it innovate so much faster?

...

In fact, most software innovation seems to be driven by hardware advances, instead of innovator creativity. Apparently, good ideas are available but must usually wait until hardware is cheap enough to support them.

Yes, sometimes architectural choices have wider impacts. But I was an artificial intelligence researcher for nine years, ending twenty years ago, and I never saw an architecture choice make a huge difference, relative to other reasonable architecture choices. For most big systems, overall architecture matters a lot less than getting lots of detail right. Researchers have long wandered the space of architectures, mostly rediscovering variations on what others found before.

Some hope that a small project could be much better at innovation because it specializes in that topic, and much better understands new theoretical insights into the basic nature of innovation or intelligence. But I don’t think those are actually topics where one can usefully specialize much, or where we’ll find much useful new theory. To be much better at learning, the project would instead have to be much better at hundreds of specific kinds of learning. Which is very hard to do in a small project.

What does Bostrom say? Alas, not much. He distinguishes several advantages of digital over human minds, but all software shares those advantages. Bostrom also distinguishes five paths: better software, brain emulation (i.e., ems), biological enhancement of humans, brain-computer interfaces, and better human organizations. He doesn’t think interfaces would work, and sees organizations and better biology as only playing supporting roles.

...

Similarly, while you might imagine someday standing in awe in front of a super intelligence that embodies all the power of a new age, superintelligence just isn’t the sort of thing that one project could invent. As “intelligence” is just the name we give to being better at many mental tasks by using many good mental modules, there’s no one place to improve it. So I can’t see a plausible way one project could increase its intelligence vastly faster than could the rest of the world.

Takeoff speeds: https://sideways-view.com/2018/02/24/takeoff-speeds/
Futurists have argued for years about whether the development of AGI will look more like a breakthrough within a small group (“fast takeoff”), or a continuous acceleration distributed across the broader economy or a large firm (“slow takeoff”).

I currently think a slow takeoff is significantly more likely. This post explains some of my reasoning and why I think it matters. Mostly the post lists arguments I often hear for a fast takeoff and explains why I don’t find them compelling.

(Note: this is not a post about whether an intelligence explosion will occur. That seems very likely to me. Quantitatively I expect it to go along these lines. So e.g. while I disagree with many of the claims and assumptions in Intelligence Explosion Microeconomics, I don’t disagree with the central thesis or with most of the arguments.)
ratty  lesswrong  subculture  miri-cfar  ai  risk  ai-control  futurism  books  debate  hanson  big-yud  prediction  contrarianism  singularity  local-global  speed  speedometer  time  frontier  distribution  smoothness  shift  pdf  economics  track-record  abstraction  analogy  links  wiki  list  evolution  mutation  selection  optimization  search  iteration-recursion  intelligence  metameta  chart  analysis  number  ems  coordination  cooperate-defect  death  values  formal-values  flux-stasis  philosophy  farmers-and-foragers  malthus  scale  studying  innovation  insight  conceptual-vocab  growth-econ  egalitarianism-hierarchy  inequality  authoritarianism  wealth  near-far  rationality  epistemic  biases  cycles  competition  arms  zero-positive-sum  deterrence  war  peace-violence  winner-take-all  technology  moloch  multi  plots  research  science  publishing  humanity  labor  marginal  urban-rural  structure  composition-decomposition  complex-systems  gregory-clark  decentralized  heavy-industry  magnitude  multiplicative  endogenous-exogenous  models  uncertainty  decision-theory  time-prefer 
april 2018 by nhaliday
Eternity in six hours: intergalactic spreading of intelligent life and sharpening the Fermi paradox
We do this by demonstrating that traveling between galaxies – indeed even launching a colonisation project for the entire reachable universe – is a relatively simple task for a star-spanning civilization, requiring modest amounts of energy and resources. We start by demonstrating that humanity itself could likely accomplish such a colonisation project in the foreseeable future, should we want to, and then demonstrate that there are millions of galaxies that could have reached us by now, using similar methods. This results in a considerable sharpening of the Fermi paradox.
pdf  study  article  essay  anthropic  fermi  space  expansionism  bostrom  ratty  philosophy  xenobio  ideas  threat-modeling  intricacy  time  civilization  🔬  futurism  questions  paradox  risk  physics  engineering  interdisciplinary  frontier  technology  volo-avolo  dirty-hands  ai  automation  robotics  duplication  iteration-recursion  von-neumann  data  scale  magnitude  skunkworks  the-world-is-just-atoms  hard-tech  ems  bio  bits  speedometer  nature  model-organism  mechanics  phys-energy  relativity  electromag  analysis  spock  nitty-gritty  spreading  hanson  street-fighting  speed  gedanken  nibble 
march 2018 by nhaliday
Chickenhawks – Gene Expression
I know I seem like a warblogger, and I promise I’ll shift to something more esoteric and non-current-eventsy very soon, but check this table out on fatalities by profession. It ranges from 50 per 100,000 for cab-drivers to 100 per 100,000 for fishermen & loggers. Granted, there have surely been work-related fatalities in the American military in the past year, but we’ve had about 30 fatalities so far, and perhaps we’ll go up to 200-300 in the current campaign if we don’t get into house-to-house fighting. How many fatalities occurred during the Afghan campaign? Look at this table of historic casualty rates. I don’t do this to say that being a soldier is something that isn’t a big deal - but for me, the “chickenhawk” insult seems less resonant taking into account the changes that have been wrought by technology in the post-Vietnam era. Casualty rates seem to be approaching the order of magnitude of some of the more dangerous civilian professions. That is most certainly a good thing.
gnxp  scitariat  commentary  war  meta:war  usa  iraq-syria  MENA  military  death  pro-rata  data  comparison  fighting  outcome-risk  uncertainty  martial  time-series  history  early-modern  mostly-modern  pre-ww2  world-war  europe  gallic  revolution  the-south  germanic  israel  scale  magnitude  cold-war 
february 2018 by nhaliday
Why do stars twinkle?
According to many astronomers and educators, twinkle (stellar scintillation) is caused by atmospheric structure that works like ordinary lenses and prisms. Pockets of variable temperature - and hence index of refraction - randomly shift and focus starlight, perceived by eye as changes in brightness. Pockets also disperse colors like prisms, explaining the flashes of color often seen in bright stars. Stars appear to twinkle more than planets because they are points of light, whereas the twinkling points on planetary disks are averaged to a uniform appearance. Below, figure 1 is a simulation in glass of the kind of turbulence structure posited in the lens-and-prism theory of stellar scintillation, shown over the Penrose tile floor to demonstrate the random lensing effects.

However appealing and ubiquitous on the internet, this popular explanation is wrong, and my aim is to debunk the myth. This research is mostly about showing that the lens-and-prism theory just doesn't work, but I also have a stellar list of references that explain the actual cause of scintillation, starting with two classic papers by C.G. Little and S. Chandrasekhar.
nibble  org:junk  space  sky  visuo  illusion  explanans  physics  electromag  trivia  cocktail  critique  contrarianism  explanation  waves  simulation  experiment  hmm  magnitude  atmosphere  roots  idk 
december 2017 by nhaliday
light - Why doesn't the moon twinkle? - Astronomy Stack Exchange
As you mention, when light enters our atmosphere, it goes through several parcels of gas with varying density, temperature, pressure, and humidity. These differences make the refractive index of the parcels different, and since they move around (the scientific term for air moving around is "wind"), the light rays take slightly different paths through the atmosphere.

Stars are point sources
…the Moon is not
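
rough magnitudes behind that distinction (my numbers, not from the answer): atmospheric seeing smears images by on the order of an arcsecond; stars subtend milliarcseconds, far below that, so the whole stellar image flickers as one; the Moon spans about half a degree (~1800 arcseconds) and the planets tens of arcseconds, so their disks cover many independent turbulence patches and the flicker averages out.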
nibble  q-n-a  overflow  space  physics  trivia  cocktail  navigation  sky  visuo  illusion  measure  random  electromag  signal-noise  flux-stasis  explanation  explanans  magnitude  atmosphere  roots 
december 2017 by nhaliday
Estimation of effect size distribution from genome-wide association studies and implications for future discoveries
We report a set of tools to estimate the number of susceptibility loci and the distribution of their effect sizes for a trait on the basis of discoveries from existing genome-wide association studies (GWASs). We propose statistical power calculations for future GWASs using estimated distributions of effect sizes. Using reported GWAS findings for height, Crohn’s disease and breast, prostate and colorectal (BPC) cancers, we determine that each of these traits is likely to harbor additional loci within the spectrum of low-penetrance common variants. These loci, which can be identified from sufficiently powerful GWASs, together could explain at least 15–20% of the known heritability of these traits. However, for BPC cancers, which have modest familial aggregation, our analysis suggests that risk models based on common variants alone will have modest discriminatory power (63.5% area under curve), even with new discoveries.

later paper:
Distribution of allele frequencies and effect sizes and their interrelationships for common genetic susceptibility variants: http://www.pnas.org/content/108/44/18026.full

Recent discoveries of hundreds of common susceptibility SNPs from genome-wide association studies provide a unique opportunity to examine population genetic models for complex traits. In this report, we investigate distributions of various population genetic parameters and their interrelationships using estimates of allele frequencies and effect-size parameters for about 400 susceptibility SNPs across a spectrum of qualitative and quantitative traits. We calibrate our analysis by statistical power for detection of SNPs to account for overrepresentation of variants with larger effect sizes in currently known SNPs that are expected due to statistical power for discovery. Across all qualitative disease traits, minor alleles conferred “risk” more often than “protection.” Across all traits, an inverse relationship existed between “regression effects” and allele frequencies. Both of these trends were remarkably strong for type I diabetes, a trait that is most likely to be influenced by selection, but were modest for other traits such as human height or late-onset diseases such as type II diabetes and cancers. Across all traits, the estimated effect-size distribution suggested the existence of increasingly large numbers of susceptibility SNPs with decreasingly small effects. For most traits, the set of SNPs with intermediate minor allele frequencies (5–20%) contained an unusually small number of susceptibility loci and explained a relatively small fraction of heritability compared with what would be expected from the distribution of SNPs in the general population. These trends could have several implications for future studies of common and uncommon variants.

...

Relationship Between Allele Frequency and Effect Size. We explored the relationship between allele frequency and effect size in different scales. An inverse relationship between the squared regression coefficient and f(1 − f) was observed consistently across different traits (Fig. 3). For a number of these traits, however, the strengths of these relationships become less pronounced after adjustment for ascertainment due to study power. The strength of the trend, as captured by the slope of the fitted line (Table 2), markedly varies between traits, with an almost 10-fold change between the two extremes of distinct types of traits. After adjustment, the most pronounced trend was seen for type I diabetes and Crohn’s disease among qualitative traits and LDL level among quantitative traits. In exploring the relationship between the frequency of the risk allele and the magnitude of the associated risk coefficient (Fig. S4), we observed a quadratic pattern that indicates increasing risk coefficients as the risk-allele frequency diverges away from 0.50 either toward 0 or toward 1. Thus, it appears that regression coefficients for common susceptibility SNPs increase in magnitude monotonically with decreasing minor-allele frequency, irrespective of whether the minor allele confers risk or protection. However, for some traits, such as type I diabetes, risk alleles were predominantly minor alleles, that is, they had frequencies of less than 0.50.
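
one way to read the squared-coefficient vs f(1−f) trend (my gloss, using standard additive-model bookkeeping rather than anything specific to the paper): the trait variance explained by a biallelic SNP with risk-allele frequency f and regression effect β is

2 f (1 − f) β²

so an inverse relation between β² and f(1−f) is what you would see if each susceptibility locus contributed a roughly comparable sliver of variance, with rarer alleles needing larger effects to be detectable at all.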
pdf  nibble  study  article  org:nat  🌞  biodet  genetics  population-genetics  GWAS  QTL  distribution  disease  cancer  stat-power  bioinformatics  magnitude  embodied  prediction  scale  scaling-up  variance-components  multi  missing-heritability  effect-size  regression  correlation  data 
november 2017 by nhaliday
Autoignition temperature - Wikipedia
The autoignition temperature or kindling point of a substance is the lowest temperature at which it spontaneously ignites in normal atmosphere without an external source of ignition, such as a flame or spark. This temperature is required to supply the activation energy needed for combustion. The temperature at which a chemical ignites decreases as the pressure or oxygen concentration increases. It is usually applied to a combustible fuel mixture.

The time t_ig it takes for a material to reach its autoignition temperature T_ig when exposed to a heat flux q'' is given by the following equation:
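
(the equation itself did not survive the copy; as I recall the form given for a thermally thick solid - treat this as an assumption, not a quote - it is

t_ig = (π/4) · kρc · ((T_ig − T_0) / q'')²

where kρc is the material's thermal inertia and T_0 its initial temperature.)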
nibble  wiki  reference  concept  metrics  identity  physics  thermo  temperature  time  stock-flow  phys-energy  chemistry  article  street-fighting  fire  magnitude  data  list 
november 2017 by nhaliday
Static electricity - Wikipedia
Electrons can be exchanged between materials on contact; materials with weakly bound electrons tend to lose them while materials with sparsely filled outer shells tend to gain them. This is known as the triboelectric effect and results in one material becoming positively charged and the other negatively charged. The polarity and strength of the charge on a material once they are separated depends on their relative positions in the triboelectric series. The triboelectric effect is the main cause of static electricity as observed in everyday life, and in common high-school science demonstrations involving rubbing different materials together (e.g., fur against an acrylic rod). Contact-induced charge separation causes your hair to stand up and causes "static cling" (for example, a balloon rubbed against the hair becomes negatively charged; when near a wall, the charged balloon is attracted to positively charged particles in the wall, and can "cling" to it, appearing to be suspended against gravity).
nibble  wiki  reference  article  physics  electromag  embodied  curiosity  IEEE  dirty-hands  phys-energy  safety  data  magnitude  scale 
november 2017 by nhaliday
Genetics: CHROMOSOMAL MAPS AND MAPPING FUNCTIONS
Any particular gene has a specific location (its "locus") on a particular chromosome. For any two genes (or loci) alpha and beta, we can ask "What is the recombination frequency between them?" If the genes are on different chromosomes, the answer is 50% (independent assortment). If the two genes are on the same chromosome, the recombination frequency will be somewhere in the range from 0 to 50%. The "map unit" (1 cM) is the genetic map distance that corresponds to a recombination frequency of 1%. In large chromosomes, the cumulative map distance may be much greater than 50cM, but the maximum recombination frequency is 50%. Why? In large chromosomes, there is enough length to allow for multiple cross-overs, so we have to ask what result we expect for random multiple cross-overs.

1. How is it that random multiple cross-overs give the same result as independent assortment?

Figure 5.12 shows how the various double cross-over possibilities add up, resulting in gamete genotype percentages that are indistinguishable from independent assortment (50% parental type, 50% non-parental type). This is a very important figure. It provides the explanation for why genes that are far apart on a very large chromosome sort out in crosses just as if they were on separate chromosomes.

2. Is there a way to measure how close together two crossovers can occur involving the same two chromatids? That is, how could we measure whether there is spatial "interference"?

Figure 5.13 shows how a measurement of the gamete frequencies resulting from a "three point cross" can answer this question. If we would get a "lower than expected" occurrence of recombinant genotypes aCb and AcB, it would suggest that there is some hindrance to the two cross-overs occurring this close together. Crosses of this type in Drosophila have shown that, in this organism, double cross-overs do not occur at distances of less than about 10 cM between the two cross-over sites. ( Textbook, page 196. )

3. How does all of this lead to the "mapping function", the mathematical (graphical) relation between the observed recombination frequency (percent non-parental gametes) and the cumulative genetic distance in map units?

Figure 5.14 shows the result for the two extremes of "complete interference" and "no interference". The situation for real chromosomes in real organisms is somewhere between these extremes, such as the curve labelled "interference decreasing with distance".
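
A minimal sketch of those two extremes (my own code; the no-interference curve is Haldane's mapping function, which I believe is the standard one meant here):

import math

def rf_complete_interference(d_cM):
    # complete interference: recombination frequency equals map distance, capped at 50%
    return min(d_cM, 50.0)

def rf_no_interference(d_cM):
    # Haldane mapping function for random, independent crossovers:
    # RF = 50% * (1 - exp(-2d)), with d in Morgans
    d = d_cM / 100.0
    return 50.0 * (1.0 - math.exp(-2.0 * d))

for d_cM in (1, 10, 50, 100, 200):
    print(d_cM, rf_complete_interference(d_cM), round(rf_no_interference(d_cM), 1))
# both approach 50% for genes far apart, i.e. independent assortment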
org:junk  org:edu  explanation  faq  nibble  genetics  genomics  bio  ground-up  magnitude  data  flux-stasis  homo-hetero  measure  orders  metric-space  limits  measurement 
october 2017 by nhaliday
[1709.01149] Biotechnology and the lifetime of technical civilizations
The number of people able to end Earth's technical civilization has heretofore been small. Emerging dual-use technologies, such as biotechnology, may give similar power to thousands or millions of individuals. To quantitatively investigate the ramifications of such a marked shift on the survival of both terrestrial and extraterrestrial technical civilizations, this paper presents a two-parameter model for civilizational lifespans, i.e. the quantity L in Drake's equation for the number of communicating extraterrestrial civilizations. One parameter characterizes the population lethality of a civilization's biotechnology and the other characterizes the civilization's psychosociology. L is demonstrated to be less than the inverse of the product of these two parameters. Using empiric data from Pubmed to inform the biotechnology parameter, the model predicts human civilization's median survival time as decades to centuries, even with optimistic psychosociological parameter values, thereby positioning biotechnology as a proximate threat to human civilization. For an ensemble of civilizations having some median calculated survival time, the model predicts that, after 80 times that duration, only one in 10^24 civilizations will survive -- a tempo and degree of winnowing compatible with Hanson's "Great Filter." Thus, assuming that civilizations universally develop advanced biotechnology, before they become vigorous interstellar colonizers, the model provides a resolution to the Fermi paradox.
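
the "80 times that duration" figure is consistent with repeated halving: if half of the surviving civilizations are lost over each median survival time, the fraction left after 80 of them is (1/2)^80 ≈ 8×10^-25, i.e. about one in 10^24.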
preprint  article  gedanken  threat-modeling  risk  biotech  anthropic  fermi  ratty  hanson  models  xenobio  space  civilization  frontier  hmm  speedometer  society  psychology  social-psych  anthropology  cultural-dynamics  disease  parasites-microbiome  maxim-gun  prepping  science-anxiety  technology  magnitude  scale  data  prediction  speculation  ideas  🌞  org:mat  study  offense-defense  arms  unintended-consequences  spreading  explanans  sociality  cybernetics 
october 2017 by nhaliday
Caught in the act | West Hunter
The fossil record is sparse. Let me try to explain that. We have at most a few hundred Neanderthal skeletons, most in pretty poor shape. How many Neanderthals ever lived? I think their population varied in size quite a bit – lowest during glacial maxima, probably highest in interglacials. Their degree of genetic diversity suggests an effective population size of ~1000, but that would be dominated by the low points (harmonic average). So let’s say 50,000 on average, over their whole range (Europe, central Asia, the Levant, perhaps more). Say they were around for 300,000 years, with a generation time of 30 years – 10,000 generations, for a total of five hundred million Neanderthals over all time. So one in a million Neanderthals ends up in a museum: one every 20 generations. Low time resolution!

So if anatomically modern humans rapidly wiped out Neanderthals, we probably couldn’t tell. In much the same way, you don’t expect to find the remains of many dinosaurs killed by the Cretaceous meteor impact (at most one millionth of one generation, right?), or of Columbian mammoths killed by a wave of Amerindian hunters. Sometimes invaders leave a bigger footprint: a bunch of cities burning down with no rebuilding tells you something. But even when you know that population A completely replaced population B, it can be hard to prove just how it happened. After all, population A could have all committed suicide just before B showed up. Stranger things have happened – but not often.
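
the arithmetic from the first paragraph as a one-screen Fermi estimate (Cochran's round numbers; the ~500 skeleton count is my reading of "at most a few hundred"):

avg_population   = 50_000     # average census size over the whole range
years_around     = 300_000
generation_years = 30
skeletons_found  = 500

generations   = years_around / generation_years      # 10,000
total_lived   = avg_population * generations          # 5e8 Neanderthals over all time
fossil_rate   = skeletons_found / total_lived          # ~1e-6: one in a million
gens_per_find = generations / skeletons_found          # ~20 generations per museum skeleton

print(int(generations), f"{total_lived:.0e}", f"{fossil_rate:.0e}", int(gens_per_find))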
west-hunter  scitariat  discussion  ideas  data  objektbuch  scale  magnitude  estimate  population  sapiens  archaics  archaeology  pro-rata  history  antiquity  methodology  volo-avolo  measurement  pop-structure  density  time  frequency  apollonian-dionysian  traces  evidence 
september 2017 by nhaliday
Atrocity statistics from the Roman Era
Christian Martyrs [make link]
Gibbon, Decline & Fall v.2 ch.XVI: < 2,000 k. under Roman persecution.
Ludwig Hertling ("Die Zahl der Märtyrer bis 313", 1944) estimated 100,000 Christians killed between 30 and 313 CE. (cited -- unfavorably -- by David Henige, Numbers From Nowhere, 1998)
Catholic Encyclopedia, "Martyr": number of Christian martyrs under the Romans unknown, unknowable. Origen says not many. Eusebius says thousands.

...

General population decline during The Fall of Rome: 7,000,000 [make link]
- Colin McEvedy, The New Penguin Atlas of Medieval History (1992)
- From 2nd Century CE to 4th Century CE: Empire's population declined from 45M to 36M [i.e. 9M]
- From 400 CE to 600 CE: Empire's population declined by 20% [i.e. 7.2M]
- Paul Bairoch, Cities and economic development: from the dawn of history to the present, p.111
- "The population of Europe except Russia, then, having apparently reached a high point of some 40-55 million people by the start of the third century [ca.200 C.E.], seems to have fallen by the year 500 to about 30-40 million, bottoming out at about 20-35 million around 600." [i.e. ca.20M]
- Francois Crouzet, A History of the European Economy, 1000-2000 (University Press of Virginia: 2001) p.1.
- "The population of Europe (west of the Urals) in c. AD 200 has been estimated at 36 million; by 600, it had fallen to 26 million; another estimate (excluding ‘Russia’) gives a more drastic fall, from 44 to 22 million." [i.e. 10M or 22M]

also:
The geometric mean of these two extremes would come to 4½ per day, which is a credible daily rate for the really bad years.

why geometric mean? can you get it as the MLE given min{X1, ..., Xn} and max{X1, ..., Xn} for {X_i} iid Poissons? some kinda limit? think it might just be a rule of thumb.

yeah, it's a rule of thumb. found it in his book (epub).
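
for reference, the geometric mean of two extremes a and b is √(ab), i.e. the midpoint on a log scale, which is why it is the usual rule of thumb when an estimate spans orders of magnitude.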
org:junk  data  let-me-see  scale  history  iron-age  mediterranean  the-classics  death  nihil  conquest-empire  war  peace-violence  gibbon  trivia  multi  todo  AMT  expectancy  heuristic  stats  ML-MAP-E  data-science  estimate  magnitude  population  demographics  database  list  religion  christianity  leviathan 
september 2017 by nhaliday
Population Growth and Technological Change: One Million B.C. to 1990
The nonrivalry of technology, as modeled in the endogenous growth literature, implies that high population spurs technological change. This paper constructs and empirically tests a model of long-run world population growth combining this implication with the Malthusian assumption that technology limits population. The model predicts that over most of history, the growth rate of population will be proportional to its level. Empirical tests support this prediction and show that historically, among societies with no possibility for technological contact, those with larger initial populations have had faster technological change and population growth.

Table I gives the gist (population growth rate scales w/ tech innovation). Note how the Mongol invasions + reverberations stand out.

https://jasoncollins.org/2011/08/15/more-people-more-ideas-in-the-long-run/
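
A minimal sketch of the model's core prediction (my own toy parameterization, not Kremer's estimation): if the growth rate of population is proportional to its level, then dP/dt = k·P², whose closed form blows up in finite time rather than settling at a fixed exponential rate:

# growth rate proportional to level: dP/dt = k * P^2
# closed form: P(t) = P0 / (1 - k * P0 * t), which diverges at t* = 1 / (k * P0)
k, P0 = 1e-6, 1000.0   # hypothetical constants, arbitrary units

def P(t):
    return P0 / (1.0 - k * P0 * t)

for t in (0, 500, 900, 990, 999):
    print(t, round(P(t), 1), "per-capita rate:", round(k * P(t), 4))
# the per-capita rate itself keeps rising, the signature of hyperbolic growth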
pdf  study  economics  growth-econ  broad-econ  cliometrics  anthropology  cjones-like  population  demographics  scale  innovation  technology  ideas  deep-materialism  stylized-facts  correlation  speed  flux-stasis  history  antiquity  iron-age  medieval  early-modern  mostly-modern  piracy  garett-jones  spearhead  big-picture  density  iteration-recursion  magnitude  econotariat  multi  commentary  summary  🎩  path-dependence  pop-diff  malthus  time-series  data  world  microfoundations  hari-seldon  conquest-empire  disease  parasites-microbiome  spreading  gavisti  asia  war  death  nihil  trends 
august 2017 by nhaliday
Introduction to Scaling Laws
https://betadecay.wordpress.com/2009/10/02/the-physics-of-scaling-laws-and-dimensional-analysis/
http://galileo.phys.virginia.edu/classes/304/scaling.pdf

Galileo’s Discovery of Scaling Laws: https://www.mtholyoke.edu/~mpeterso/classes/galileo/scaling8.pdf
Days 1 and 2 of Two New Sciences

An example of such an insight is “the surface of a small solid is comparatively greater than that of a large one” because the surface goes like the square of a linear dimension, but the volume goes like the cube. Thus as one scales down macroscopic objects, forces on their surfaces like viscous drag become relatively more important, and bulk forces like weight become relatively less important. Galileo uses this idea on the First Day in the context of resistance in free fall, as an explanation for why similar objects of different size do not fall exactly together, but the smaller one lags behind.
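
the square-cube point in one line: surface scales as L² while volume (and hence weight) scales as L³, so surface/volume scales as 1/L; shrink an object tenfold and surface forces like drag gain a factor of ten relative to bulk forces like weight, which is Galileo's explanation for the smaller body lagging.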
nibble  org:junk  exposition  lecture-notes  physics  mechanics  street-fighting  problem-solving  scale  magnitude  estimate  fermi  mental-math  calculation  nitty-gritty  multi  scitariat  org:bleg  lens  tutorial  guide  ground-up  tricki  skeleton  list  cheatsheet  identity  levers  hi-order-bits  yoga  metabuch  pdf  article  essay  history  early-modern  europe  the-great-west-whale  science  the-trenches  discovery  fluid  architecture  oceans  giants  tidbits  elegance 
august 2017 by nhaliday
Lecture 3: Global Energy Cycle
solar flux, albedo, greenhouse effect, energy balance, vertical distribution of energy, tilt and seasons
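
the energy-balance piece in one equation (the standard zero-dimensional model with the usual textbook numbers, not necessarily the values in these slides): setting absorbed solar flux equal to emitted thermal flux,

(1 − A) · S / 4 = σ · T_e⁴

with S ≈ 1361 W/m² and albedo A ≈ 0.3 gives an effective emission temperature T_e ≈ 255 K; the ~33 K gap up to the ~288 K surface average is the greenhouse effect.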
pdf  slides  nibble  physics  electromag  space  earth  sky  atmosphere  environment  temperature  stock-flow  data  magnitude  scale  phys-energy  distribution  oscillation  cycles  lectures  geography 
august 2017 by nhaliday
The Earth-Moon system
nice way of expressing Kepler's law (scaled by AU, solar mass, year, etc.) among other things
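
written out, that scaled form: with the period P in years, the semi-major axis a in AU, and the total mass M in solar masses, Kepler's third law collapses to

P² = a³ / M

so the Earth-Sun case is just 1 = 1, with G and the 4π² absorbed into the choice of units.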

1. PHYSICAL PROPERTIES OF THE MOON
2. LUNAR PHASES
3. ECLIPSES
4. TIDES
nibble  org:junk  explanation  trivia  data  objektbuch  space  mechanics  spatial  visualization  earth  visual-understanding  navigation  experiment  measure  marginal  gravity  scale  physics  nitty-gritty  tidbits  identity  cycles  time  magnitude  street-fighting  calculation  oceans  pro-rata  rhythm  flux-stasis 
august 2017 by nhaliday
Effective population size for advantageous mutations | West Hunter
So, with beneficial mutations, the effective population size is very different. Instead of being dominated by bottlenecks, it is more influenced by eras of large population size – more and more so as the selective advantage of the mutation increases. In the limit, if we imagine  mutations so advantageous that they spread  very rapidly, the effective population size approaches the population mean.
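
A minimal illustration of the contrast (my own toy numbers): for neutral drift the long-run effective size is roughly the harmonic mean of the per-generation sizes, which bottlenecks dominate, while the post's point is that for strongly beneficial mutations the relevant size moves toward the ordinary (arithmetic) mean:

from statistics import harmonic_mean, mean

# hypothetical census sizes: nine generations at 100k punctuated by one 1k bottleneck
sizes = [100_000] * 9 + [1_000]

print("arithmetic mean:", mean(sizes))                  # 90100
print("harmonic mean:  ", round(harmonic_mean(sizes)))  # ~9174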
west-hunter  scitariat  discussion  ideas  speculation  bio  evolution  sapiens  genetics  population-genetics  pop-structure  population  gene-drift  magnitude  street-fighting  methodology  stylized-facts  nibble  🌞 
august 2017 by nhaliday
Is the U.S. Aggregate Production Function Cobb-Douglas? New Estimates of the Elasticity of Substitution∗
world-wide: http://www.socsci.uci.edu/~duffy/papers/jeg2.pdf
https://www.weforum.org/agenda/2016/01/is-the-us-labour-share-as-constant-as-we-thought
https://www.economicdynamics.org/meetpapers/2015/paper_844.pdf
We find that IPP capital entirely explains the observed decline of the US labor share, which otherwise is secularly constant over the past 65 years for structures and equipment capital. The labor share decline simply reflects the fact that the US economy is undergoing a transition toward a larger IPP sector.
https://ideas.repec.org/p/red/sed015/844.html
http://www.robertdkirkby.com/blog/2015/summary-of-piketty-i/
https://www.brookings.edu/bpea-articles/deciphering-the-fall-and-rise-in-the-net-capital-share/
The Fall of the Labor Share and the Rise of Superstar Firms: http://www.nber.org/papers/w23396
The Decline of the U.S. Labor Share: https://www.brookings.edu/wp-content/uploads/2016/07/2013b_elsby_labor_share.pdf
Table 2 has industry disaggregation
Estimating the U.S. labor share: https://www.bls.gov/opub/mlr/2017/article/estimating-the-us-labor-share.htm

Why Workers Are Losing to Capitalists: https://www.bloomberg.com/view/articles/2017-09-20/why-workers-are-losing-to-capitalists
Automation and offshoring may be conspiring to reduce labor's share of income.
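
background for the title question (my gloss, the standard textbook link): with a Cobb-Douglas aggregate Y = K^α · L^(1−α) and competitive factor payments, the labor share is exactly 1 − α no matter what happens to the capital-labor ratio, so a secular decline in the labor share is evidence against Cobb-Douglas, i.e. against a unit elasticity of substitution between capital and labor.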
pdf  study  economics  growth-econ  econometrics  usa  data  empirical  analysis  labor  capital  econ-productivity  manifolds  magnitude  multi  world  🎩  piketty  econotariat  compensation  inequality  winner-take-all  org:ngo  org:davos  flexibility  distribution  stylized-facts  regularizer  hmm  history  mostly-modern  property-rights  arrows  invariance  industrial-org  trends  wonkish  roots  synthesis  market-power  efficiency  variance-components  business  database  org:gov  article  model-class  models  automation  nationalism-globalism  trade  news  org:mag  org:biz  org:bv  noahpinion  explanation  summary  methodology  density  polarization  map-territory  input-output 
july 2017 by nhaliday
Harmonic mean - Wikipedia
The harmonic mean is a Schur-concave function, and dominated by the minimum of its arguments, in the sense that for any positive set of arguments, min(x_1, ..., x_n) ≤ H(x_1, ..., x_n) ≤ n · min(x_1, ..., x_n). Thus, the harmonic mean cannot be made arbitrarily large by changing some values to bigger ones (while having at least one value unchanged).

more generally, for the weighted harmonic mean with weights Pr(x_i) = t_i, H(x_1, ..., x_n) ≤ x_i/t_i for every i
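
quick check of that claim: the weighted harmonic mean is H = 1 / Σ_j (t_j / x_j), and since the denominator is at least any single term t_i / x_i, we get H ≤ x_i / t_i for every i; the unweighted bounds quoted above are the special case t_i = 1/n.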
nibble  math  properties  estimate  concept  definition  wiki  reference  extrema  magnitude  expectancy  metrics  ground-up 
july 2017 by nhaliday
Dadly adaptations | West Hunter
If we understood how this works, we might find that individuals and populations vary in their propensity to show paternal care ( for genetic reasons). I would guess that paternal care was ancestral in modern humans, but it’s easy enough to lose something like this when selective pressures no longer favor it. Wolves have paternal care, but dogs have lost it.

This could have something to do with better health in married men. High testosterone levels aren’t cost-free.

It’s possible that various modern environmental factors interfere with the triggers for dadliness. That would hardly be surprising, since we don’t really know how they work.

All this has a number of interesting social implications. Let’s see how many of them you guys can spot.

Poles in the Tent: https://westhunt.wordpress.com/2013/07/09/poles-in-the-tent/
I’m considering a different question: what was the impact of men’s contribution on their children’s survival and fitness? That’s not quite the same as the number of calories contributed. Food is not a single undifferentiated quantity: it’s a category, including a number of different kinds that can’t be freely substituted for each other. Proteins, fats, and carbohydrates can all serve as fuel, but you need protein to build tissue. And amino acids, the building blocks of proteins, are not all fungible. Some we can’t synthesize (essential amino acids) others can only be synthesized from a limited set of precursors, etc. Edible plants often have suboptimal mixes of amino acids ( too many Qs, not enough Us) , but I’ve never heard of this being a problem with meat. Then you have to consider essential fatty acids, vitamins, and trace elements.

In principle, if high-quality protein were the long pole in the tent, male provisioning of meat, which we see in chimpanzees, might matter quite a bit more than you would think from the number of calories alone. I’m not saying that is necessarily the case, but it might be, and it’s worth checking out.

Sexual selection vs job specialization: https://westhunt.wordpress.com/2017/10/02/sexual-selection-vs-job-specialization/
Pretty much every species is subject to sexual selection: heritable characteristics that lead to more mates or better mates can be favored by natural selection. Typically, sexual selection favors different strategies in males and females. Generally, males can gain fitness with increased mating opportunities, while females gain more from high-quality mates or mates that confer resources. Since the variance in reproduction is usually greater in males than females, sexual selection is usually stronger in males, although it exists and is significant in both sexes.

Usually, though, males and females of a given species have very similar ways of making a living. A male deer and a female deer both eat grass or arugula or whatever. Sexual selection may drive them to evolve in different directions, but finding something to eat mostly drives them in the same direction.

Humans are an exception. In the long past, men hunted and women gathered. The mix varied: in Arctic regions, men produce almost all the food (while women made and repaired gear, as well as raising children). In groups like the Bushmen, women produced most of the calories, but done rightly you would count more than calories: if most of the local plants had low protein or low-quality protein (wrong amino acid mix), meat from hunting could be important out of proportion to its caloric value.

This has been going on for a long time, so there must have been selection for traits that aided provisioning ability in each sex. Those job-related selective pressures probably changed with time. For example, male strength may have become less valuable when the Bushmen developed poison arrows.

I was looking for an intelligent discussion of this question – but I ran into this and couldn’t force myself to read further: ” It should not simply be assumed that the exclusion of women from hunting rests upon “natural” physiological differences. ”

God give me strength.

https://westhunt.wordpress.com/2017/10/02/sexual-selection-vs-job-specialization/#comment-96323
What does Greg think about the “plows vs hoes” theory? (As seen here, although Sarah Constantin didn’t invent it.)

The claim is that some societies adopted farming (Europe, the Middle East, Asia) while some societies adopted horticulture (Oceania, sub-Saharan Africa, various primitive peoples) and that this had an effect on gender relations.

Basically: farming is backbreaking work, which favours males, giving them a lot of social capital. You end up with a patriarchal kind of society, where the men do stuff and the women are mostly valuable for raising offspring.

...

It’s kinda true, in places. There is a connection I haven’t seen explicated: the ‘hoe culture’ has to have some factor keeping population density low, so that labor is scarcer than land. Tropical diseases like malaria might be part of that. Then again, crops like yams don’t store well, better to keep them in the ground until eating. That means it’s hard to tax people – easy with grain bins. No taxes -> no State -> high local violence. At times, VD may also help limit density, cf Africa’s ‘sterility belt’.

I am not a Moron: https://westhunt.wordpress.com/2017/11/03/i-am-not-a-moron/
So said Augustin Fuentes on Twitter, a few days ago. He’s the same guy that said “Genes don’t do anything by themselves; epigenetics and complex metabolic and developmental systems are at play in how bodies work. The roundworm C. elegans has about 20,000 genes while humans have about 23,000 genes, yet it is pretty obvious that humans are more than 15-percent more complex than roundworms. So while genes matter, they are only a small part of the whole evolutionary picture. Focusing just on DNA won’t get you anywhere.”

Fuentes was claiming that we don’t really know that, back in prehistory, men did most of the hunting while women gathered.

...

Someone (Will@Evolving_Moloch) criticized this as a good candidate for the most misleading paragraph ever written. The folly of youth! When you’ve been around as long as I have, sonny, you will realize how hard it is to set records for stupidity.

Fuentes’ para is multidimensional crap, of course. People used to hunt animals like red deer, or bison, or eland: sometimes mammoths or rhinos. Big animals. Back in the day, our ancestors used stabbing spears, which go back at least half a million years. Stand-off weapons like atlatls, or bows, or JSOW, are relatively recent. Hunters took big risks & suffered frequent injuries. Men are almost twice as strong as women, particularly in upper-body strength, which is what matters in spear-chucking. They’re also faster, which can be very important when your ambush fails.
So men did the hunting. This isn’t complicated.

Which contemporary hunter-gatherer societies followed this pattern, had men do almost all of the big-game hunting? All of them.

...

Look, feminists aren’t happy with human nature, the one that actually exists and is the product of long-term evolutionary pressures. Too bad for them. When they say stuff like “It should not simply be assumed that the exclusion of women from hunting rests upon ‘natural’ physiological differences,” they just sound like fools. ‘Natural’ physiological differences exist. They’re as obvious as a punch in the kisser.

Suppose you wanted to construct a society with effective sexual equality – which is probably just a mistake, but suppose it. The most effective approach would surely entail knowing and taking into account how the world actually ticks. You’d be better off understanding that about 6,000 genes (out of 20,000) show significant expression differences between the sexes, than by pretending that we’re all the same. You would have to make it so: by hook or by crook, by state force and genetic engineering.

Similarly, if you want to minimize war, pretending that people aren’t warlike is a poor start – about as sensible as fighting forest fires by pretending that trees aren’t flammable.

My advice to Agustin Fuentes, about not being a moron: show, don’t tell.

https://westhunt.wordpress.com/2017/11/03/i-am-not-a-moron/#comment-97721
Since DNA is the enduring part, the part that gets transmitted from one generation to the next, the part that contains the instructions/program that determine development and specify everything – he’s wrong. Stupid, like you. Well, to be fair, ignorant as well: there are technical aspects of genetics that Agustin Fuentes is unlikely to know anything about, things that are almost never covered in the typical education of an anthropologist. I doubt if he knows what a Fisher wave is, or anything about selfish genetic elements, or coalescent theory, or for that matter the breeder’s equation.

There are a number of complex technical subjects, things that at least some people understand: those people can do stuff that the man in the street can’t. In most cases, amateurs don’t jump in and pretend to know what’s going on. For example, you don’t hear many amateur opinions about detonation physics or group theory. But they’re happy to have opinions about natural selection, even though they know fuck-all about it.

https://twitter.com/FinchesofDarwin/status/922924692389818368
https://archive.is/AcBgh
"Significantly fewer females are present at hunts than males...females tend to appear at the hunting site once the capture has been made..."

“Women in Tech”: https://bloodyshovel.wordpress.com/2017/10/26/women-in-tech/
west-hunter  scitariat  discussion  ideas  pop-diff  biodet  behavioral-gen  endocrine  parenting  life-history  speculation  time-preference  temperance  health  counter-revolution  rot  zeitgeist  environmental-effects  science-anxiety  legacy  incentives  sapiens  multi  farmers-and-foragers  food  gender  gender-diff  intricacy  dimensionality  agriculture  selection  symmetry  comparison  leviathan  cultural-dynamics  peace-violence  taxes  broad-econ  microfoundations  africa  europe  asia  MENA  world  developing-world  🌞  order-disorder  population  density  scale  stylized-facts  cocktail  anthropology  roots  parasites-microbiome  phalanges  things  analogy  direction  rant  EEA  evopsych  is-ought  attaq  data  genetics  genomics  war  people  track-record  poast  population-genetics  selfish-gene  magnitude  twitter  social  commentary  backup  quotes  gnon  right-wing  aphorism  sv  tech  identity-politics  envy  allodium  outcome-risk  hari-seldon 
june 2017 by nhaliday
Lanchester's laws - Wikipedia
Lanchester's laws are mathematical formulae for calculating the relative strengths of a predator–prey pair, originally devised to analyse relative strengths of military forces.
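The square law for aimed fire is easy to play with directly. Below is a minimal Euler-step sketch (my own illustration, not code from the article) of the coupled attrition equations dA/dt = -βB, dB/dt = -αA; the conserved quantity αA² - βB² predicts the winner, which is why numbers beat per-unit quality quadratically.

```python
# Minimal sketch of Lanchester's square law (aimed fire): dA/dt = -beta*B,
# dB/dt = -alpha*A. The quantity alpha*A^2 - beta*B^2 is conserved, so its
# sign at t=0 predicts the winner. Parameters below are illustrative.

def lanchester_square(A0, B0, alpha, beta, dt=1e-3):
    """Integrate attrition until one side is annihilated.
    alpha, beta: per-unit effectiveness of side A and side B respectively."""
    A, B = float(A0), float(B0)
    while A > 0 and B > 0:
        A, B = A - beta * B * dt, B - alpha * A * dt
    return max(A, 0.0), max(B, 0.0)

if __name__ == "__main__":
    # Twice the numbers against twice the per-unit effectiveness: numbers win.
    print(lanchester_square(2000, 1000, alpha=0.5, beta=1.0))
    # Sign of the invariant at t=0 gives the same answer without integrating.
    print(0.5 * 2000**2 - 1.0 * 1000**2 > 0)
```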
war  meta:war  models  plots  time  differential  street-fighting  methodology  strategy  tactics  wiki  reference  history  mostly-modern  pre-ww2  world-war  britain  old-anglo  giants  magnitude  arms  identity 
june 2017 by nhaliday
Comprehensive Military Power: World’s Top 10 Militaries of 2015 - The Unz Review
gnon  military  defense  scale  top-n  list  ranking  usa  china  asia  analysis  data  sinosphere  critique  russia  capital  magnitude  street-fighting  individualism-collectivism  europe  germanic  world  developing-world  latin-america  MENA  india  war  meta:war  history  mostly-modern  world-war  prediction  trends  realpolitik  strategy  thucydides  great-powers  multi  news  org:mag  org:biz  org:foreign  current-events  the-bones  org:rec  org:data  org:popup  skunkworks  database  dataset  power  energy-resources  heavy-industry  economics  growth-econ  foreign-policy  geopolitics  maps  project  expansionism  the-world-is-just-atoms  civilization  let-me-see  wiki  reference  metrics  urban  population  japan  britain  gallic  allodium  definite-planning  kumbaya-kult  peace-violence  urban-rural  wealth  wealth-of-nations  econ-metrics  dynamic  infographic 
june 2017 by nhaliday
Genomic analysis of family data reveals additional genetic effects on intelligence and personality | bioRxiv
methodology:
Using Extended Genealogy to Estimate Components of Heritability for 23 Quantitative and Dichotomous Traits: http://journals.plos.org/plosgenetics/article?id=10.1371/journal.pgen.1003520
Pedigree- and SNP-Associated Genetics and Recent Environment are the Major Contributors to Anthropometric and Cardiometabolic Trait Variation: http://journals.plos.org/plosgenetics/article?id=10.1371/journal.pgen.1005804

Missing Heritability – found?: https://westhunt.wordpress.com/2017/02/09/missing-heritability-found/
There is an interesting new paper out on genetics and IQ. The claim is that they have found the missing heritability – in rare variants, generally different in each family.

Some of the variants, the ones we find with GWAS, are fairly common and fitness-neutral: the variant that slightly increases IQ confers the same fitness (or very close to the same) as the one that slightly decreases IQ – presumably because of other effects it has. If this weren’t the case, it would be impossible for both of the variants to remain common.

The rare variants that affect IQ will generally decrease IQ – and since pleiotropy is the norm, usually they’ll be deleterious in other ways as well. Genetic load.

Happy families are all alike; every unhappy family is unhappy in its own way.: https://westhunt.wordpress.com/2017/06/06/happy-families-are-all-alike-every-unhappy-family-is-unhappy-in-its-own-way/
It now looks as if the majority of the genetic variance in IQ is the product of mutational load, and the same may be true for many psychological traits. To the extent this is the case, a lot of human psychological variation must be non-adaptive. Maybe some personality variation fulfills an evolutionary function, but a lot does not. Being a dumb asshole may be a bug, rather than a feature. More generally, this kind of analysis could show us whether particular low-fitness syndromes, like autism, were ever strategies – I suspect not.

It’s bad news for medicine and psychiatry, though. It would suggest that what we call a given type of mental illness, like schizophrenia, is really a grab-bag of many different syndromes. The ultimate causes are extremely varied: at best, there may be shared intermediate causal factors. Not good news for drug development: individualized medicine is a threat, not a promise.

see also comment at: https://pinboard.in/u:nhaliday/b:a6ab4034b0d0

https://www.reddit.com/r/slatestarcodex/comments/5sldfa/genomic_analysis_of_family_data_reveals/
So the big implication here is that it's better than I had dared hope - like Yang/Visscher/Hsu have argued, the old GCTA estimate of ~0.3 is indeed a rather loose lower bound on additive genetic variants, and the rest of the missing heritability is just the relatively uncommon additive variants (ie <1% frequency), and so, like Yang demonstrated with height, using much more comprehensive imputation of SNP scores or using whole-genomes will be able to explain almost all of the genetic contribution. In other words, with better imputation panels, we can go back and squeeze out better polygenic scores from old GWASes, new GWASes will be able to reach and break the 0.3 upper bound, and eventually we can feasibly predict 0.5-0.8. Between the expanding sample sizes from biobanks, the still-falling price of whole genomes, the gradual development of better regression methods (informative priors, biological annotation information, networks, genetic correlations), and better imputation, the future of GWAS polygenic scores is bright. Which obviously will be extremely helpful for embryo selection/genome synthesis.
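For concreteness, here is a hypothetical sketch of the polygenic-score arithmetic the comment takes for granted; the effect sizes and genotypes are simulated, and a real pipeline adds QC, LD adjustment, and the imputation step the comment is actually about.

```python
import numpy as np

# Hypothetical illustration of polygenic-score arithmetic: a score is a
# weighted sum of allele counts, with weights (betas) taken from GWAS summary
# statistics. All numbers here are simulated, not real data.

rng = np.random.default_rng(0)
n_people, n_snps = 5, 1000

genotypes = rng.integers(0, 3, size=(n_people, n_snps))  # 0/1/2 copies of the effect allele
betas = rng.normal(0.0, 0.01, size=n_snps)                # estimated per-SNP effects

scores = genotypes @ betas                                # one score per person
print(scores.round(3))
```

Better imputation panels and larger GWASes improve the betas and bring rarer variants into the sum, which is how the variance explained by such scores can climb toward the GCTA-style bounds discussed above.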

The argument that this supports mutation-selection balance is weaker but plausible. I hope that it's true, because if that's why there is so much genetic variation in intelligence, then that strongly encourages genetic engineering - there is no good reason or Chesterton fence for intelligence variants being non-fixed, it's just that evolution is too slow to purge the constantly-accumulating bad variants. And we can do better.
https://rubenarslan.github.io/generation_scotland_pedigree_gcta/

The surprising implications of familial association in disease risk: https://arxiv.org/abs/1707.00014
https://spottedtoad.wordpress.com/2017/06/09/personalized-medicine-wont-work-but-race-based-medicine-probably-will/
As Greg Cochran has pointed out, this probably isn’t going to work. There are a few genes like BRCA1 (which makes you more likely to get breast and ovarian cancer) that we can detect and might affect treatment, but an awful lot of disease turns out to be just the result of random chance and deleterious mutation. This means that you can’t easily tailor disease treatment to people’s genes, because everybody is fucked up in their own special way. If Johnny is schizophrenic because of 100 random errors in the genes that code for his neurons, and Jack is schizophrenic because of 100 other random errors, there’s very little way to test a drug to work for either of them – they’re the only one in the world, most likely, with that specific pattern of errors. This is, presumably, why the incidence of schizophrenia and autism rises in populations when dads get older – more random errors in sperm formation mean more random errors in the baby’s genes, and more things that go wrong down the line.

The looming crisis in human genetics: http://www.economist.com/node/14742737
Some awkward news ahead
- Geoffrey Miller

Human geneticists have reached a private crisis of conscience, and it will become public knowledge in 2010. The crisis has depressing health implications and alarming political ones. In a nutshell: the new genetics will reveal much less than hoped about how to cure disease, and much more than feared about human evolution and inequality, including genetic differences between classes, ethnicities and races.

2009!
study  preprint  bio  biodet  behavioral-gen  GWAS  missing-heritability  QTL  🌞  scaling-up  replication  iq  education  spearhead  sib-study  multi  west-hunter  scitariat  genetic-load  mutation  medicine  meta:medicine  stylized-facts  ratty  unaffiliated  commentary  rhetoric  wonkish  genetics  genomics  race  pop-structure  poast  population-genetics  psychiatry  aphorism  homo-hetero  generalization  scale  state-of-art  ssc  reddit  social  summary  gwern  methodology  personality  britain  anglo  enhancement  roots  s:*  2017  data  visualization  database  let-me-see  bioinformatics  news  org:rec  org:anglo  org:biz  track-record  prediction  identity-politics  pop-diff  recent-selection  westminster  inequality  egalitarianism-hierarchy  high-dimension  applications  dimensionality  ideas  no-go  volo-avolo  magnitude  variance-components  GCTA  tradeoffs  counter-revolution  org:mat  dysgenics  paternal-age  distribution  chart  abortion-contraception-embryo  universalism-particularism 
june 2017 by nhaliday
Logic | West Hunter
All the time I hear some public figure saying that if we ban or allow X, then logically we have to ban or allow Y, even though there are obvious practical reasons for X and obvious practical reasons against Y.

No, we don’t.

http://www.amnation.com/vfr/archives/005864.html
http://www.amnation.com/vfr/archives/002053.html

compare: https://pinboard.in/u:nhaliday/b:190b299cf04a

Small Change Good, Big Change Bad?: https://www.overcomingbias.com/2018/02/small-change-good-big-change-bad.html
And on reflection it occurs to me that this is actually THE standard debate about change: some see small changes and either like them or aren’t bothered enough to advocate what it would take to reverse them, while others imagine such trends continuing long enough to result in very large and disturbing changes, and then suggest stronger responses.

For example, on increased immigration some point to the many concrete benefits immigrants now provide. Others imagine that large cumulative immigration eventually results in big changes in culture and political equilibria. On fertility, some wonder if civilization can survive in the long run with declining population, while others point out that population should rise for many decades, and few endorse the policies needed to greatly increase fertility. On genetic modification of humans, some ask why not let doctors correct obvious defects, while others imagine parents eventually editing kid genes mainly to max kid career potential. On oil some say that we should start preparing for the fact that we will eventually run out, while others say that we keep finding new reserves to replace the ones we use.

...

If we consider any parameter, such as typical degree of mind wandering, we are unlikely to see the current value as exactly optimal. So if we give people the benefit of the doubt to make local changes in their interest, we may accept that this may result in a recent net total change we don’t like. We may figure this is the price we pay to get other things we value more, and we know that it can be very expensive to limit choices severely.

But even though we don’t see the current value as optimal, we also usually see the optimal value as not terribly far from the current value. So if we can imagine current changes as part of a long term trend that eventually produces very large changes, we can become more alarmed and willing to restrict current changes. The key question is: when is that a reasonable response?

First, big concerns about big long term changes only make sense if one actually cares a lot about the long run. Given the usual high rates of return on investment, it is cheap to buy influence on the long term, compared to influence on the short term. Yet few actually devote much of their income to long term investments. This raises doubts about the sincerity of expressed long term concerns.

Second, in our simplest models of the world good local choices also produce good long term choices. So if we presume good local choices, bad long term outcomes require non-simple elements, such as coordination, commitment, or myopia problems. Of course many such problems do exist. Even so, someone who claims to see a long term problem should be expected to identify specifically which such complexities they see at play. It shouldn’t be sufficient to just point to the possibility of such problems.

...

Fourth, many more processes and factors limit big changes, compared to small changes. For example, in software small changes are often trivial, while larger changes are nearly impossible, at least without starting again from scratch. Similarly, modest changes in mind wandering can be accomplished with minor attitude and habit changes, while extreme changes may require big brain restructuring, which is much harder because brains are complex and opaque. Recent changes in market structure may reduce the number of firms in each industry, but that doesn’t make it remotely plausible that one firm will eventually take over the entire economy. Projections of small changes into large changes need to consider the possibility of many such factors limiting large changes.

Fifth, while it can be reasonably safe to identify short term changes empirically, the longer term a forecast the more one needs to rely on theory, and the more different areas of expertise one must consider when constructing a relevant model of the situation. Beware a mere empirical projection into the long run, or a theory-based projection that relies on theories in only one area.

We should very much be open to the possibility of big bad long term changes, even in areas where we are okay with short term changes, or at least reluctant to sufficiently resist them. But we should also try to hold those who argue for the existence of such problems to relatively high standards. Their analysis should be about future times that we actually care about, and can at least roughly foresee. It should be based on our best theories of relevant subjects, and it should consider the possibility of factors that limit larger changes.

And instead of suggesting big ways to counter short term changes that might lead to long term problems, it is often better to identify markers to warn of larger problems. Then instead of acting in big ways now, we can make sure to track these warning markers, and ready ourselves to act more strongly if they appear.

Growth Is Change. So Is Death.: https://www.overcomingbias.com/2018/03/growth-is-change-so-is-death.html
I see the same pattern when people consider long term futures. People can be quite philosophical about the extinction of humanity, as long as this is due to natural causes. Every species dies; why should humans be different? And few get bothered by humans making modest small-scale short-term modifications to their own lives or environment. We are mostly okay with people using umbrellas when it rains, moving to new towns to take new jobs, etc., digging a flood ditch after our yard floods, and so on. And the net social effect of many small changes is technological progress, economic growth, new fashions, and new social attitudes, all of which we tend to endorse in the short run.

Even regarding big human-caused changes, most don’t worry if changes happen far enough in the future. Few actually care much about the future past the lives of people they’ll meet in their own life. But for changes that happen within someone’s time horizon of caring, the bigger that changes get, and the longer they are expected to last, the more that people worry. And when we get to huge changes, such as taking apart the sun, a population of trillions, lifetimes of millennia, massive genetic modification of humans, robots replacing people, a complete loss of privacy, or revolutions in social attitudes, few are blasé, and most are quite wary.

This differing attitude regarding small local changes versus large global changes makes sense for parameters that tend to revert back to a mean. Extreme values then do justify extra caution, while changes within the usual range don’t merit much notice, and can be safely left to local choice. But many parameters of our world do not mostly revert back to a mean. They drift long distances over long times, in hard to predict ways that can be reasonably modeled as a basic trend plus a random walk.

This different attitude can also make sense for parameters that have two or more very different causes of change, one which creates frequent small changes, and another which creates rare huge changes. (Or perhaps a continuum between such extremes.) If larger sudden changes tend to cause more problems, it can make sense to be more wary of them. However, for most parameters most change results from many small changes, and even then many are quite wary of this accumulating into big change.

For people with a sharp time horizon of caring, they should be more wary of long-drifting parameters the larger the changes that would happen within their horizon time. This perspective predicts that the people who are most wary of big future changes are those with the longest time horizons, and who more expect lumpier change processes. This prediction doesn’t seem to fit well with my experience, however.

Those who most worry about big long term changes usually seem okay with small short term changes. Even when they accept that most change is small and that it accumulates into big change. This seems incoherent to me. It seems like many other near versus far incoherences, like expecting things to be simpler when you are far away from them, and more complex when you are closer. You should either become more wary of short term changes, knowing that this is how big longer term change happens, or you should be more okay with big long term change, seeing that as the legitimate result of the small short term changes you accept.

https://www.overcomingbias.com/2018/03/growth-is-change-so-is-death.html#comment-3794966996
The point here is that gradual shifts in in-group beliefs are both natural and no big deal. Humans are built to readily do this, and forget they do this. But ultimately it is not a worry or concern.

But radical shifts that are big, whether near or far, portend strife and conflict. Either between groups or within them. If the shift is big enough, our intuition tells us our in-group will be in a fight. Alarms go off.
west-hunter  scitariat  discussion  rant  thinking  rationality  metabuch  critique  systematic-ad-hoc  analytical-holistic  metameta  ideology  philosophy  info-dynamics  aphorism  darwinian  prudence  pragmatic  insight  tradition  s:*  2016  multi  gnon  right-wing  formal-values  values  slippery-slope  axioms  alt-inst  heuristic  anglosphere  optimate  flux-stasis  flexibility  paleocon  polisci  universalism-particularism  ratty  hanson  list  examples  migration  fertility  intervention  demographics  population  biotech  enhancement  energy-resources  biophysical-econ  nature  military  inequality  age-generation  time  ideas  debate  meta:rhetoric  local-global  long-short-run  gnosis-logos  gavisti  stochastic-processes  eden-heaven  politics  equilibrium  hive-mind  genetics  defense  competition  arms  peace-violence  walter-scheidel  speed  marginal  optimization  search  time-preference  patience  futurism  meta:prediction  accuracy  institutions  tetlock  theory-practice  wire-guided  priors-posteriors  distribution  moments  biases  epistemic  nea 
may 2017 by nhaliday
POPULATION STRUCTURE AND QUANTITATIVE CHARACTERS
The variance of among-group variance is substantial and does not depend on the number of loci contributing to variance in the character. It is just as large for polygenic characters as for single loci with the same additive variance. This implies that one polygenic character contains exactly as much information about population relationships as one single-locus marker.

same is true of expectation apparently (so drift has same impact on polygenic and single-locus traits)
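A rough drift simulation (mine, not the paper's analysis) makes the claim concrete: build a neutral additive trait from 1 locus or from 100 loci carrying the same total additive variance, let allele frequencies drift in a set of populations, and compare both the mean and the spread of the among-group variance.

```python
import numpy as np

# Rough simulation of the claim above (illustrative parameters, not the paper's
# analysis): under pure drift, the among-group variance of a neutral additive
# trait should have a similar mean AND a similar spread whether its additive
# variance comes from one locus or from a hundred.

rng = np.random.default_rng(1)

def among_group_var(n_loci, n_pops=20, N=500, gens=100, total_va=1.0, p0=0.5):
    """Among-population variance of mean breeding values for an additive trait
    built from n_loci loci, each contributing total_va / n_loci of variance."""
    a = np.sqrt((total_va / n_loci) / (2 * p0 * (1 - p0)))  # per-locus allelic effect
    p = np.full((n_loci, n_pops), p0)
    for _ in range(gens):                                   # binomial drift, all loci and pops
        p = rng.binomial(2 * N, p) / (2 * N)
    means = (2 * a * p).sum(axis=0)                         # each population's mean breeding value
    return means.var(ddof=1)

reps = 100
v1 = np.array([among_group_var(1) for _ in range(reps)])
v100 = np.array([among_group_var(100) for _ in range(reps)])
print("  1 locus : mean %.3f, sd %.3f" % (v1.mean(), v1.std()))
print("100 loci  : mean %.3f, sd %.3f" % (v100.mean(), v100.std()))
```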
pdf  study  west-hunter  scitariat  bio  genetics  genomics  sapiens  QTL  correlation  null-result  magnitude  nibble  🌞  models  population-genetics  methodology  regularizer  moments  bias-variance  pop-diff  pop-structure  gene-drift 
may 2017 by nhaliday
Edge.org: 2017 : WHAT SCIENTIFIC TERM OR CONCEPT OUGHT TO BE MORE WIDELY KNOWN?
highlights:
- the genetic book of the dead [Dawkins]
- complementarity [Frank Wilczek]
- relative information
- effective theory [Lisa Randall]
- affordances [Dennett]
- spontaneous symmetry breaking
- relatedly, equipoise [Nicholas Christakis]
- case-based reasoning
- population reasoning (eg, common law)
- criticality [Cesar Hidalgo]
- Haldane's law of the right size (!SCALE!)
- polygenic scores
- non-ergodic
- ansatz
- state [Aaronson]: http://www.scottaaronson.com/blog/?p=3075
- transfer learning
- effect size
- satisficing
- scaling
- the breeder's equation [Greg Cochran] (see the worked example after this list)
- impedance matching
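Since the breeder's equation keeps coming up in these notes, a worked example with purely illustrative numbers: the equation is R = h^2 S, where S is the selection differential (mean of the selected parents minus the population mean) and R is the expected per-generation response in the offspring mean. With narrow-sense heritability h^2 = 0.4 and parents averaging one standard deviation above the mean on a trait with SD 15 (so S = 15), the expected response is R = 0.4 × 15 = 6 units in the next generation.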

soft:
- reciprocal altruism
- life history [Plomin]
- intellectual honesty [Sam Harris]
- coalitional instinct (interesting claim: building coalitions around "rationality" actually makes it more difficult to update on new evidence as it makes you look like a bad person, eg, the Cathedral)
basically same: https://twitter.com/ortoiseortoise/status/903682354367143936

more: https://www.edge.org/conversation/john_tooby-coalitional-instincts

interesting timing. how woke is this dude?
org:edge  2017  technology  discussion  trends  list  expert  science  top-n  frontier  multi  big-picture  links  the-world-is-just-atoms  metameta  🔬  scitariat  conceptual-vocab  coalitions  q-n-a  psychology  social-psych  anthropology  instinct  coordination  duty  power  status  info-dynamics  cultural-dynamics  being-right  realness  cooperate-defect  westminster  chart  zeitgeist  rot  roots  epistemic  rationality  meta:science  analogy  physics  electromag  geoengineering  environment  atmosphere  climate-change  waves  information-theory  bits  marginal  quantum  metabuch  homo-hetero  thinking  sapiens  genetics  genomics  evolution  bio  GT-101  low-hanging  minimum-viable  dennett  philosophy  cog-psych  neurons  symmetry  humility  life-history  social-structure  GWAS  behavioral-gen  biodet  missing-heritability  ergodic  machine-learning  generalization  west-hunter  population-genetics  methodology  blowhards  spearhead  group-level  scale  magnitude  business  scaling-tech  tech  business-models  optimization  effect-size  aaronson  state  bare-hands  problem-solving  politics 
may 2017 by nhaliday
[1705.03394] That is not dead which can eternal lie: the aestivation hypothesis for resolving Fermi's paradox
If a civilization wants to maximize computation it appears rational to aestivate until the far future in order to exploit the low temperature environment: this can produce a 10^30 multiplier of achievable computation. We hence suggest the "aestivation hypothesis": the reason we are not observing manifestations of alien civilizations is that they are currently (mostly) inactive, patiently waiting for future cosmic eras. This paper analyzes the assumptions going into the hypothesis and how physical law and observational evidence constrain the motivations of aliens compatible with the hypothesis.

http://aleph.se/andart2/space/the-aestivation-hypothesis-popular-outline-and-faq/

simpler explanation (just different math for Drake equation):
Dissolving the Fermi Paradox: http://www.jodrellbank.manchester.ac.uk/media/eps/jodrell-bank-centre-for-astrophysics/news-and-events/2017/uksrn-slides/Anders-Sandberg---Dissolving-Fermi-Paradox-UKSRN.pdf
http://marginalrevolution.com/marginalrevolution/2017/07/fermi-paradox-resolved.html
Overall the argument is that point estimates should not be shoved into a Drake equation and then multiplied by each other, as that requires excess certainty and masks much of the ambiguity of our knowledge about the distributions. Instead, a Bayesian approach should be used, after which the fate of humanity looks much better. Here is one part of the presentation:
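(The slide referred to above is not reproduced here.) As a toy illustration of the same point, a Monte Carlo sketch of my own, with made-up log-uniform ranges rather than Sandberg, Drexler, and Ord's actual priors: the mean of N can come out large while most of the probability mass still sits below one civilization per galaxy.

```python
import numpy as np

# Toy Drake-equation Monte Carlo (my own made-up parameter ranges, NOT the
# paper's priors). The point: pushing distributions, not point estimates,
# through the product leaves a lot of probability on N < 1.

rng = np.random.default_rng(42)
n = 100_000

def loguniform(lo, hi, size=n):
    """Sample log-uniformly between lo and hi (each order of magnitude equally likely)."""
    return 10 ** rng.uniform(np.log10(lo), np.log10(hi), size)

N = (loguniform(1, 100)        # star formation rate
     * loguniform(0.1, 1)      # fraction of stars with planets
     * loguniform(0.1, 10)     # habitable planets per such star
     * loguniform(1e-30, 1)    # fraction developing life (enormous uncertainty)
     * loguniform(1e-10, 1)    # fraction developing intelligence
     * loguniform(1e-2, 1)     # fraction that communicate
     * loguniform(1e2, 1e8))   # longevity of the communicating phase, years

print("mean N:", N.mean())          # typically large, driven by the upper tail
print("P(N < 1):", np.mean(N < 1))  # yet most of the mass says "probably alone"
```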

Life Versus Dark Energy: How An Advanced Civilization Could Resist the Accelerating Expansion of the Universe: https://arxiv.org/abs/1806.05203
The presence of dark energy in our universe is causing space to expand at an accelerating rate. As a result, over the next approximately 100 billion years, all stars residing beyond the Local Group will fall beyond the cosmic horizon and become not only unobservable, but entirely inaccessible, thus limiting how much energy could one day be extracted from them. Here, we consider the likely response of a highly advanced civilization to this situation. In particular, we argue that in order to maximize its access to useable energy, a sufficiently advanced civilization would choose to expand rapidly outward, build Dyson Spheres or similar structures around encountered stars, and use the energy that is harnessed to accelerate those stars away from the approaching horizon and toward the center of the civilization. We find that such efforts will be most effective for stars with masses in the range of M∼(0.2−1)M⊙, and could lead to the harvesting of stars within a region extending out to several tens of Mpc in radius, potentially increasing the total amount of energy that is available to a future civilization by a factor of several thousand. We also discuss the observable signatures of a civilization elsewhere in the universe that is currently in this state of stellar harvesting.
preprint  study  essay  article  bostrom  ratty  anthropic  philosophy  space  xenobio  computation  physics  interdisciplinary  ideas  hmm  cocktail  temperature  thermo  information-theory  bits  🔬  threat-modeling  time  scale  insight  multi  commentary  liner-notes  pdf  slides  error  probability  ML-MAP-E  composition-decomposition  econotariat  marginal-rev  fermi  risk  org:mat  questions  paradox  intricacy  multiplicative  calculation  street-fighting  methodology  distribution  expectancy  moments  bayesian  priors-posteriors  nibble  measurement  existence  technology  geoengineering  magnitude  spatial  density  spreading  civilization  energy-resources  phys-energy  measure  direction  speculation  structure 
may 2017 by nhaliday
Estimating the number of unseen variants in the human genome
To find all common variants (frequency at least 1%) the number of individuals that need to be sequenced is small (∼350) and does not differ much among the different populations; our data show that, subject to sequence accuracy, the 1000 Genomes Project is likely to find most of these common variants and a high proportion of the rarer ones (frequency between 0.1 and 1%). The data reveal a rule of diminishing returns: a small number of individuals (∼150) is sufficient to identify 80% of variants with a frequency of at least 0.1%, while a much larger number (> 3,000 individuals) is necessary to find all of those variants.
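The diminishing-returns flavor falls out of simple binomial sampling arithmetic. A crude sketch of mine follows; the paper's actual estimator also handles sequencing error and integrates over the full frequency spectrum, which is where the ~150-individual / 80% figure comes from.

```python
# Crude detection-probability arithmetic (my sketch, not the paper's method):
# a variant of population frequency f is seen at least once among n sequenced
# diploid individuals (2n chromosomes) with probability 1 - (1 - f)^(2n),
# assuming perfect genotype calls and simple binomial sampling.

def p_detect(f, n):
    return 1 - (1 - f) ** (2 * n)

for f, n in [(0.01, 350), (0.001, 350), (0.001, 3000)]:
    print(f"f = {f:.1%}, n = {n:5d}:  P(detected) = {p_detect(f, n):.3f}")
# Common (1%) variants are essentially guaranteed with ~350 people, while
# driving 0.1% variants toward certain detection takes thousands of genomes.
```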

A map of human genome variation from population-scale sequencing: http://www.internationalgenome.org/sites/1000genomes.org/files/docs/nature09534.pdf

Scientists using data from the 1000 Genomes Project, which sequenced one thousand individuals from 26 human populations, found that "a typical [individual] genome differs from the reference human genome at 4.1 million to 5.0 million sites … affecting 20 million bases of sequence."[11] Nearly all (>99.9%) of these sites are small differences, either single nucleotide polymorphisms or brief insertion-deletions in the genetic sequence, but structural variations account for a greater number of base-pairs than the SNPs and indels.[11]

Human genetic variation: https://en.wikipedia.org/wiki/Human_genetic_variation

Singleton Variants Dominate the Genetic Architecture of Human Gene Expression: https://www.biorxiv.org/content/early/2017/12/15/219238
study  sapiens  genetics  genomics  population-genetics  bioinformatics  data  prediction  cost-benefit  scale  scaling-up  org:nat  QTL  methodology  multi  pdf  curvature  convexity-curvature  nonlinearity  measurement  magnitude  🌞  distribution  missing-heritability  pop-structure  genetic-load  mutation  wiki  reference  article  structure  bio  preprint  biodet  variance-components  nibble  chart 
may 2017 by nhaliday
[1502.05274] How predictable is technological progress?
Recently it has become clear that many technologies follow a generalized version of Moore's law, i.e. costs tend to drop exponentially, at different rates that depend on the technology. Here we formulate Moore's law as a correlated geometric random walk with drift, and apply it to historical data on 53 technologies. We derive a closed form expression approximating the distribution of forecast errors as a function of time. Based on hind-casting experiments we show that this works well, making it possible to collapse the forecast errors for many different technologies at different time horizons onto the same universal distribution. This is valuable because it allows us to make forecasts for any given technology with a clear understanding of the quality of the forecasts. As a practical demonstration we make distributional forecasts at different time horizons for solar photovoltaic modules, and show how our method can be used to estimate the probability that a given technology will outperform another technology at a given point in the future.

model:
- p_t = unit price of tech
- log(p_t) = y_0 - μt + ∑_{i <= t} n_i
- n_t iid noise process
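A minimal simulation of that model, with illustrative parameters rather than the paper's fitted values (the paper also allows autocorrelated noise; i.i.d. noise is the simplest case):

```python
import numpy as np

# Simulate the model above: log price follows a random walk with drift,
# log(p_t) = y0 - mu*t + sum(n_i). Parameters are illustrative, not fitted.

rng = np.random.default_rng(0)
mu, sigma, y0 = 0.10, 0.15, np.log(100.0)    # ~10%/yr cost decline, 15% yearly noise
horizon, n_paths = 20, 10_000

noise = rng.normal(0.0, sigma, size=(n_paths, horizon))
log_p = y0 - mu * np.arange(1, horizon + 1) + np.cumsum(noise, axis=1)
price = np.exp(log_p)

# Forecast uncertainty widens with horizon (sd of log price grows like sqrt(t)
# for i.i.d. noise), which is the paper's "universal" error-distribution idea.
for t in (1, 5, 20):
    lo, med, hi = np.percentile(price[:, t - 1], [10, 50, 90])
    print(f"year {t:2d}: 10/50/90th pct price = {lo:6.1f} {med:6.1f} {hi:6.1f}")
```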
preprint  study  economics  growth-econ  innovation  discovery  technology  frontier  tetlock  meta:prediction  models  time  definite-planning  stylized-facts  regression  econometrics  magnitude  energy-resources  phys-energy  money  cost-benefit  stats  data-science  🔬  ideas  speedometer  multiplicative  methodology  stochastic-processes  time-series  stock-flow  iteration-recursion  org:mat  street-fighting  the-bones 
april 2017 by nhaliday
Evolution of Virulence | West Hunter
Once upon a time, I thought a lot about evolution and pathogens. I still do, on occasion.

It used to be the case [and still is] that many biologists thought that natural selection would inevitably tend towards a situation in which pathogens did infinitesimal harm to their host. This despite the epidemics all around them. I remember reading a book on parasitology in which the gormless author mentioned a certain species of parasitic copepod that routinely blinded the fish they attached to. He said that many a naive grad student would think that these parasitic copepods were bad for the fish, but sophisticated evolutionists like himself knew (and would explain to the newbies) that of course the fish didn’t suffer any reduction in fitness by going blind – theory said so! Clearly, that man had a Ph.D.

If a pathogen can gain increased reproduction by tapping host resources, or by doing any damn thing that helps itself and hurts the host, that tactic may pay, and be selected for. It depends on the balance between the advantages and costs – almost entirely those to the pathogen, since the pathogen evolves much more rapidly than the host. In some cases, as much as a million times faster – because of generations that may be 20 minutes long rather than 20 years, because pathogens often have very large populations, which favors Fisherian acceleration, and in many cases, a relatively high mutation rate. Pathogen evolution is, at least in some cases, so rapid that you see significant evolutionary change within a single host. Along the same lines, we have seen very significant evolutionary changes in antibiotic resistance among pathogenic bacteria over the past few decades, but I’m pretty sure that there hasn’t been much evolutionary change in mankind since I was a kid.
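The generation-time arithmetic alone gets most of that factor: 20 years ≈ 20 × 365 × 24 × 60 ≈ 1.05 × 10^7 minutes, so 20-minute generations cycle roughly 5 × 10^5 times faster than 20-year ones; large pathogen population sizes and higher per-generation mutation rates plausibly supply the remaining order of magnitude.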

So when analyzing virulence, people mostly consider evolutionary pressures on the pathogens, rather than the host. Something like the Born-Oppenheimer approximation.
west-hunter  bio  disease  parasites-microbiome  red-queen  thinking  incentives  evolution  🌞  deep-materialism  discussion  mutation  selection  time  immune  scitariat  maxim-gun  cooperate-defect  ideas  anthropic  is-ought  gender  gender-diff  scale  magnitude  stylized-facts  approximation  analogy  comparison  pro-rata  cost-benefit  EGT  equilibrium 
april 2017 by nhaliday
The Creativity of Civilisations | pseudoerasmus
- in most of the premodern period, the density of observed achievement (relative to population, time, space) was so small that you don’t need very many intelligent people to explain it (see the tail-arithmetic sketch after this list);
- I really don’t know what the intelligence of premodern peoples was, but we probably shouldn’t infer the population mean from premodern achievements;
- there’s no need to invoke dysgenic or eugenic reasons for the fluctuations in the fortunes of civilisations, as so many cranks are wont to do.
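A quick tail-of-the-normal sketch of the first point, using made-up thresholds, means, and population sizes rather than anything from the post:

```python
from math import erfc, sqrt

# Illustrative normal-tail arithmetic for the "density of achievement" point.
# Thresholds, means, SDs, and population sizes are assumptions for illustration.

def expected_above(pop, mean, sd, threshold):
    """Expected count above a threshold, assuming a normally distributed trait."""
    z = (threshold - mean) / sd
    return pop * 0.5 * erfc(z / sqrt(2))   # upper-tail probability times population

for pop, mean in [(5e6, 100), (5e6, 95), (50e6, 100)]:
    n = expected_above(pop, mean, sd=15, threshold=145)
    print(f"population {pop:.0e}, mean {mean}: ~{n:,.0f} people above 145")
# Counts three SD out are tiny either way, so a sparse record of premodern
# achievement tells you little about the underlying population mean.
```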

http://www.thenewatlantis.com/publications/why-the-arabic-world-turned-away-from-science
https://gnxp.nofe.me/2017/08/24/arab-islamic-science-was-not-arab-islamic/
https://gnxp.nofe.me/2003/11/24/iranians-aren-t-arabs/
econotariat  pseudoE  economics  growth-econ  human-capital  iq  history  cliometrics  medieval  islam  MENA  europe  the-great-west-whale  divergence  innovation  iron-age  nature  technology  agriculture  elite  dysgenics  speculation  critique  bounded-cognition  iran  asia  social-structure  tails  murray  civilization  magnitude  street-fighting  models  unaffiliated  broad-econ  info-dynamics  scale  biophysical-econ  behavioral-gen  chart  article  rot  wealth-of-nations  great-powers  microfoundations  frontier  multi  news  org:mag  letters  science  gnxp  scitariat  rant  stagnation  explanans  roots  occident  orient 
march 2017 by nhaliday