nhaliday : giants   188

exponential function - Feynman's Trick for Approximating \$e^x\$ - Mathematics Stack Exchange
1. e^2.3 ~ 10
2. e^.7 ~ 2
3. e^x ~ 1+x
e = 2.71828...

errors (absolute, relative):
1. +0.0258, 0.26%
2. -0.0138, -0.68%
3. 1 + x approximates e^x on [-.3, .3] with absolute error < .05, and relative error < 5.6% (3.7% for [0, .3]).
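A quick numeric check of the three rules and the error figures quoted above (a Python sketch, not part of the original):

```python
import math

# Feynman's three anchors for mental exponentials:
# e^2.3 ~ 10, e^0.7 ~ 2, and e^x ~ 1 + x for small x.
approximations = {
    2.3: 10.0,   # e^2.3 is close to 10
    0.7: 2.0,    # e^0.7 is close to 2
}

for x, guess in approximations.items():
    exact = math.exp(x)
    abs_err = guess - exact
    rel_err = abs_err / exact
    print(f"e^{x}: exact={exact:.4f} guess={guess} "
          f"abs_err={abs_err:+.4f} rel_err={rel_err:+.2%}")

# worst-case error of 1 + x on [-0.3, 0.3], scanned on a fine grid
worst_abs = max(abs((1 + x) - math.exp(x))
                for x in (i / 1000 for i in range(-300, 301)))
print(f"max |(1+x) - e^x| on [-0.3, 0.3] ~ {worst_abs:.4f}")
```

The worst case for 1 + x sits at the right endpoint x = 0.3, just under the 0.05 bound quoted above.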
nibble  q-n-a  overflow  math  feynman  giants  mental-math  calculation  multiplicative  AMT  identity  objektbuch  explanation  howto  estimate  street-fighting  stories  approximation  data  trivia  nitty-gritty
october 2019 by nhaliday
The Existential Risk of Math Errors - Gwern.net
How big is this upper bound? Mathematicians have often made errors in proofs, but it’s rarer for ideas to be accepted for a long time and then rejected. We can divide errors into 2 basic cases corresponding to type I and type II errors:

1. Mistakes where the theorem is still true, but the proof was incorrect (type I)
2. Mistakes where the theorem was false, and the proof was also necessarily incorrect (type II)

Before someone comes up with a final answer, a mathematician may have many levels of intuition in formulating & working on the problem, but we’ll consider the final end-product where the mathematician feels satisfied that he has solved it. Case 1 is perhaps the most common case, with innumerable examples; this is sometimes due to mistakes in the proof that anyone would accept as mistakes, but many of these cases are due to changing standards of proof. For example, when David Hilbert discovered errors in Euclid’s proofs which no one had noticed before, the theorems were still true, and the gaps were more due to Hilbert being a modern mathematician thinking in terms of formal systems (which of course Euclid did not think in). (David Hilbert himself turns out to be a useful example of the other kind of error: his famous list of 23 problems was accompanied by definite opinions on the outcome of each problem and sometimes timings, several of which were wrong or questionable.) Similarly, early calculus used ‘infinitesimals’ which were sometimes treated as being 0 and sometimes treated as an indefinitely small non-zero number; this was incoherent and, strictly speaking, practically all of the calculus results were wrong because they relied on an incoherent concept - but of course the results were some of the greatest mathematical work ever conducted, and when later mathematicians put calculus on a more rigorous footing, they immediately re-derived those results (sometimes with important qualifications). Doubtless as modern math evolves, other fields have sometimes needed to go back and clean up the foundations, and will in the future.

...

Isaac Newton, incidentally, gave two proofs of the same solution to a problem in probability, one via enumeration and the other more abstract; the enumeration was correct, but the other proof was totally wrong, and this was not noticed for a long time, leading Stigler to remark:

...

TYPE I > TYPE II?
“Lefschetz was a purely intuitive mathematician. It was said of him that he had never given a completely correct proof, but had never made a wrong guess either.”
- Gian-Carlo Rota

Case 2 is disturbing, since it is a case in which we wind up with false beliefs and also false beliefs about our beliefs (we no longer know that we don’t know). Case 2 could lead to extinction.

...

Except, errors do not seem to be evenly & randomly distributed between case 1 and case 2. There seem to be far more case 1s than case 2s, as already mentioned in the early calculus example: far more than 50% of the early calculus results were correct when checked more rigorously. Richard Hamming attributes to Ralph Boas a comment, from his time editing Mathematical Reviews, that “of the new results in the papers reviewed most are true but the corresponding proofs are perhaps half the time plain wrong”.

...

Gian-Carlo Rota gives us an example with Hilbert:

...

Olga labored for three years; it turned out that all mistakes could be corrected without any major changes in the statement of the theorems. There was one exception, a paper Hilbert wrote in his old age, which could not be fixed; it was a purported proof of the continuum hypothesis, you will find it in a volume of the Mathematische Annalen of the early thirties.

...

Leslie Lamport advocates for machine-checked proofs and a more rigorous style of proofs similar to natural deduction, noting a mathematician acquaintance guesses at a broad error rate of 1/3 and that he routinely found mistakes in his own proofs and, worse, believed false conjectures.

[more on these "structured proofs":
https://mathoverflow.net/questions/35727/community-experiences-writing-lamports-structured-proofs
]

We can probably add software to that list: early software engineering work found that, dismayingly, bug rates seem to be simply a function of lines of code, and one would expect diseconomies of scale. So one would expect that in going from the ~4,000 lines of code of the Microsoft DOS operating system kernel to the ~50,000,000 lines of code in Windows Server 2003 (with full systems of applications and libraries being even larger: the comprehensive Debian repository in 2007 contained ~323,551,126 lines of code) that the number of active bugs at any time would be… fairly large. Mathematical software is hopefully better, but practitioners still run into issues (eg Durán et al 2014, Fonseca et al 2017) and I don’t know of any research pinning down how buggy key mathematical systems like Mathematica are or how much published mathematics may be erroneous due to bugs. This general problem led to predictions of doom and spurred much research into automated proof-checking, static analysis, and functional languages.

[related:
https://mathoverflow.net/questions/11517/computer-algebra-errors
I don't know any interesting bugs in symbolic algebra packages but I know a true, enlightening and entertaining story about something that looked like a bug but wasn't.

Define sinc(x) = (sin x)/x.

Someone found the following result in an algebra package: ∫₀^∞ sinc(x) dx = π/2
They then found the following results:

...

So of course when they got:

∫₀^∞ sinc(x) sinc(x/3) sinc(x/5) ⋯ sinc(x/15) dx = (467807924713440738696537864469/935615849440640907310521750000) π

hmm:
Which means that nobody knows Fourier analysis nowadays. Very sad and discouraging story... – fedja Jan 29 '10 at 18:47
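For the record, the punchline fedja is pointing at is the Borwein-integral phenomenon: the integrals equal π/2 exactly as long as 1/3 + 1/5 + … + 1/(2n+1) < 1 (in Fourier space, the convolved box functions haven't yet smeared past the origin), and the sum first exceeds 1 when the sinc(x/15) factor arrives. A short exact-arithmetic check (my sketch, not from the thread):

```python
from fractions import Fraction

# Borwein integrals: ∫₀^∞ sinc(x) sinc(x/3) ... sinc(x/(2n+1)) dx
# equals π/2 exactly while 1/3 + 1/5 + ... + 1/(2n+1) < 1.
# Find the first odd denominator where the sum crosses 1.
total = Fraction(0)
d = 1
while True:
    d += 2
    total += Fraction(1, d)
    if total >= 1:
        break

print(f"pattern first breaks at the sinc(x/{d}) factor; "
      f"1/3 + ... + 1/{d} = {total} ~ {float(total):.5f} > 1")
```

With exact rationals there is no rounding to blame: the sum through 1/13 is about 0.95513, and adding 1/15 pushes it to about 1.02180.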

--

Because the most popular systems are all commercial, they tend to guard their bug database rather closely -- making them public would seriously cut their sales. For example, for the open source project Sage (which is quite young), you can get a list of all the known bugs from this page: 1,582 known issues on Feb 16th, 2010 (which includes feature requests, problems with documentation, etc).

That is an order of magnitude less than the commercial systems. And it's not because it is better; it is because it is younger and smaller. It might be better, but until Sage does a lot of analysis (about 40% of CAS bugs are there) and a fancy user interface (another 40%), it is too hard to compare.

I once ran a graduate course whose core topic was studying the fundamental disconnect between the algebraic nature of CAS and the analytic nature of what it is mostly used for. There are issues of logic -- CASes work more or less in an intensional logic, while most of analysis is stated in a purely extensional fashion. There is no well-defined 'denotational semantics' for expressions-as-functions, which strongly contributes to the deeper bugs in CASes.]

...

Should such widely-believed conjectures as P≠NP or the Riemann hypothesis turn out to be false, then because they are assumed by so many existing proofs, a far larger math holocaust would ensue - and our previous estimates of error rates will turn out to have been substantial underestimates. But it may be a cloud with a silver lining, if it doesn’t come at a time of danger.

https://mathoverflow.net/questions/338607/why-doesnt-mathematics-collapse-down-even-though-humans-quite-often-make-mista

more on formal methods in programming:
https://www.quantamagazine.org/formal-verification-creates-hacker-proof-code-20160920/
https://intelligence.org/2014/03/02/bob-constable/

Update: measured effort
In the October 2018 issue of Communications of the ACM there is an interesting article, “Formally verified software in the real world,” with some estimates of the effort.

Interestingly (based on OS development for military equipment), it seems that producing formally proved software requires 3.3 times more effort than with traditional engineering techniques. So it's really costly.

On the other hand, it requires 2.3 times less effort to get high-security software this way than with traditionally engineered software, if you add the effort to make such software certified at a high security level (EAL 7). So if you have high reliability or security requirements, there is definitely a business case for going formal.

WHY DON'T PEOPLE USE FORMAL METHODS?: https://www.hillelwayne.com/post/why-dont-people-use-formal-methods/
You can see examples of how all of these look at Let’s Prove Leftpad. HOL4 and Isabelle are good examples of “independent theorem” specs, SPARK and Dafny have “embedded assertion” specs, and Coq and Agda have “dependent type” specs.

If you squint a bit it looks like these three forms of code spec map to the three main domains of automated correctness checking: tests, contracts, and types. This is not a coincidence. Correctness is a spectrum, and formal verification is one extreme of that spectrum. As we reduce the rigour (and effort) of our verification we get simpler and narrower checks, whether that means limiting the explored state space, using weaker types, or pushing verification to the runtime. Any means of total specification then becomes a means of partial specification, and vice versa: many consider Cleanroom a formal verification technique, which primarily works by pushing code review far beyond what’s humanly possible.

...

The question, then: “is 90/95/99% correct significantly cheaper than 100% correct?” The answer is very yes. We all are comfortable saying that a codebase we’ve well-tested and well-typed is mostly correct modulo a few fixes in prod, and we’re even writing more than four lines of code a day. In fact, the vast… [more]
ratty  gwern  analysis  essay  realness  truth  correctness  reason  philosophy  math  proofs  formal-methods  cs  programming  engineering  worse-is-better/the-right-thing  intuition  giants  old-anglo  error  street-fighting  heuristic  zooming  risk  threat-modeling  software  lens  logic  inference  physics  differential  geometry  estimate  distribution  robust  speculation  nonlinearity  cost-benefit  convexity-curvature  measure  scale  trivia  cocktail  history  early-modern  europe  math.CA  rigor  news  org:mag  org:sci  miri-cfar  pdf  thesis  comparison  examples  org:junk  q-n-a  stackex  pragmatic  tradeoffs  cracker-prog  techtariat  invariance  DSL  chart  ecosystem  grokkability  heavyweights  CAS  static-dynamic  lower-bounds  complexity  tcs  open-problems  big-surf  ideas  certificates-recognition  proof-systems  PCP  mediterranean  SDP  meta:prediction  epistemic  questions  guessing  distributed  overflow  nibble  soft-question  track-record  big-list  hmm  frontier  state-of-art  move-fast-(and-break-things)  grokkability-clarity  technical-writing  trust
july 2019 by nhaliday
Interview with Donald Knuth | Interview with Donald Knuth | InformIT
Andrew Binstock and Donald Knuth converse on the success of open source, the problem with multicore architecture, the disappointing lack of interest in literate programming, the menace of reusable code, and that urban legend about winning a programming contest with a single compilation.

Reusable vs. re-editable code: https://hal.archives-ouvertes.fr/hal-01966146/document

https://www.johndcook.com/blog/2008/05/03/reusable-code-vs-re-editable-code/
I think whether code should be editable or in “an untouchable black box” depends on the number of developers involved, as well as their talent and motivation. Knuth is a highly motivated genius working in isolation. Most software is developed by large teams of programmers with varying degrees of motivation and talent. I think the further you move away from Knuth along these three axes the more important black boxes become.
nibble  interview  giants  expert-experience  programming  cs  software  contrarianism  carmack  oss  prediction  trends  linux  concurrency  desktop  comparison  checking  debugging  stories  engineering  hmm  idk  algorithms  books  debate  flux-stasis  duplication  parsimony  best-practices  writing  documentation  latex  intricacy  structure  hardware  caching  workflow  editors  composition-decomposition  coupling-cohesion  exposition  technical-writing  thinking  cracker-prog  code-organizing  grokkability  multi  techtariat  commentary  pdf  reflection  essay  examples  python  data-science  libraries  grokkability-clarity  project-management
june 2019 by nhaliday
Teach debugging
A friend of mine and I couldn't understand why some people were having so much trouble; the material seemed like common sense. The Feynman Method was the only tool we needed.

1. Write down the problem
2. Think real hard
3. Write down the solution

The Feynman Method failed us on the last project: the design of a divider, a real-world-scale project an order of magnitude more complex than anything we'd been asked to tackle before. On the day he assigned the project, the professor exhorted us to begin early. Over the next few weeks, we heard rumors that some of our classmates worked day and night without making progress.

...

And then, just after midnight, a number of our newfound buddies from dinner reported successes. Half of those who started from scratch had working designs. Others were despondent, because their design was still broken in some subtle, non-obvious way. As I talked with one of those students, I began poring over his design. And after a few minutes, I realized that the Feynman method wasn't the only way forward: it should be possible to systematically apply a mechanical technique repeatedly to find the source of our problems. Beneath all the abstractions, our projects consisted purely of NAND gates (woe to those who dug around our toolbox enough to uncover dynamic logic), which outputs a 0 only when both inputs are 1. If the correct output is 0, both inputs should be 1. The input that isn't is in error, an error that is, itself, the output of a NAND gate where at least one input is 0 when it should be 1. We applied this method recursively, finding the source of all the problems in both our designs in under half an hour.
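The recursive trace described above can be sketched in code. The tiny circuit below is hypothetical (gate names g1–g3 and the stuck-at-0 fault are made up), and it localizes the fault by comparing against a known-good reference evaluation rather than re-deriving expected inputs by hand — a slight variant of the "both inputs should be 1" rule in the story:

```python
def nand(a, b):
    # NAND outputs 0 only when both inputs are 1
    return 0 if (a and b) else 1

# A toy circuit: each gate name maps to its two input wires.
# Primary inputs (x, y) are not in the dict.
CIRCUIT = {
    "g1": ("x", "y"),
    "g2": ("x", "g1"),
    "g3": ("g1", "y"),
    "out": ("g2", "g3"),
}

def evaluate(circuit, inputs, broken=None):
    """Compute every wire's value; `broken` simulates a gate stuck at 0."""
    values = dict(inputs)
    def val(w):
        if w not in values:
            a, b = circuit[w]
            values[w] = 0 if w == broken else nand(val(a), val(b))
        return values[w]
    for g in circuit:
        val(g)
    return values

def trace_fault(wire, good, bad, circuit):
    """From a wrong output, recurse into whichever input is also wrong;
    if every input agrees with the correct design, this gate is the culprit."""
    for inp in circuit[wire]:
        if inp in circuit and good[inp] != bad[inp]:
            return trace_fault(inp, good, bad, circuit)
    return wire

inputs = {"x": 1, "y": 1}
good = evaluate(CIRCUIT, inputs)
bad = evaluate(CIRCUIT, inputs, broken="g3")
assert good["out"] != bad["out"]      # the failure is visible at the output
print("faulty gate:", trace_fault("out", good, bad, CIRCUIT))
```

Each recursive step moves one gate closer to the fault, so the search takes time proportional to the circuit's depth rather than its size — which is why it found every bug "in under half an hour."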

How To Debug Any Program: https://www.blinddata.com/blog/how-to-debug-any-program-9
May 8th 2019 by Saketh Are

Start by Questioning Everything

...

When a program is behaving unexpectedly, our attention tends to be drawn first to the most complex portions of the code. However, mistakes can come in all forms. I've personally been guilty of rushing to debug sophisticated portions of my code when the real bug was that I forgot to read in the input file. In the following section, we'll discuss how to reliably focus our attention on the portions of the program that need correction.

Then Question as Little as Possible

Suppose that we have a program and some input on which its behavior doesn’t match our expectations. The goal of debugging is to narrow our focus to as small a section of the program as possible. Once our area of interest is small enough, the value of the incorrect output that is being produced will typically tell us exactly what the bug is.

In order to catch the point at which our program diverges from expected behavior, we must inspect the intermediate state of the program. Suppose that we select some point during execution of the program and print out all values in memory. We can inspect the results manually and decide whether they match our expectations. If they don't, we know for a fact that we can focus on the first half of the program. It either contains a bug, or our expectations of what it should produce were misguided. If the intermediate state does match our expectations, we can focus on the second half of the program. It either contains a bug, or our understanding of what input it expects was incorrect.
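A minimal sketch of this halving strategy, assuming the program is a list of pipeline steps and that corruption persists once introduced (the steps and the truncation bug below are hypothetical):

```python
def run_prefix(steps, state, k):
    """Run the first k steps of a pipeline on a copy of the state."""
    state = list(state)
    for step in steps[:k]:
        state = step(state)
    return state

def first_bad_step(steps, init, looks_ok):
    """Binary search for the step that first corrupts the state.
    Assumes the state after 0 steps is ok, after all steps is not,
    and corruption persists once introduced."""
    lo, hi = 0, len(steps)   # invariant: ok after lo steps, broken after hi
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if looks_ok(run_prefix(steps, init, mid)):
            lo = mid
        else:
            hi = mid
    return hi - 1            # 0-based index of the offending step

# hypothetical pipeline: the third step drops an element by mistake
steps = [
    lambda s: [v + 1 for v in s],
    lambda s: [v * 2 for v in s],
    lambda s: s[:-1],                 # the bug: silently truncates
    lambda s: [v - 3 for v in s],
]
init = [1, 2, 3, 4]
bad = first_bad_step(steps, init, looks_ok=lambda s: len(s) == len(init))
print("bug introduced by step", bad)
```

Each inspection of intermediate state halves the suspect region, so even a long pipeline needs only logarithmically many checks.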

Question Things Efficiently

For practical purposes, inspecting intermediate state usually doesn't involve a complete memory dump. We'll typically print a small number of variables and check whether they have the properties we expect of them. Verifying the behavior of a section of code involves:

1. Before it runs, inspecting all values in memory that may influence its behavior.
2. Reasoning about the expected behavior of the code.
3. After it runs, inspecting all values in memory that may be modified by the code.

Reasoning about expected behavior is typically the easiest step to perform even in the case of highly complex programs. Practically speaking, it's time-consuming and mentally strenuous to write debug output into your program and to read and decipher the resulting values. It is therefore advantageous to structure your code into functions and sections that pass a relatively small amount of information between themselves, minimizing the number of values you need to inspect.

...

Finding the Right Question to Ask

We’ve assumed so far that we have available a test case on which our program behaves unexpectedly. Sometimes, getting to that point can be half the battle. There are a few different approaches to finding a test case on which our program fails. It is reasonable to attempt them in the following order:

1. Verify correctness on the sample inputs.
2. Test additional small cases generated by hand.
3. Adversarially construct corner cases by hand.
4. Re-read the problem to verify understanding of input constraints.
5. Design large cases by hand and write a program to construct them.
6. Write a generator to construct large random cases and a brute force oracle to verify outputs.
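Step 6 can be sketched as follows; the buggy max-pair-product function and its brute-force oracle are hypothetical examples, not from the source:

```python
import random

def max_pair_product_fast(nums):
    """Buggy 'optimized' version: forgets that two negatives
    can multiply to a large positive."""
    s = sorted(nums)
    return s[-1] * s[-2]

def max_pair_product_brute(nums):
    """Oracle: try every pair."""
    return max(nums[i] * nums[j]
               for i in range(len(nums))
               for j in range(i + 1, len(nums)))

def find_counterexample(trials=10_000, seed=0):
    """Generate small random cases until the fast version and the
    oracle disagree; return the first failing case found."""
    rng = random.Random(seed)
    for _ in range(trials):
        case = [rng.randint(-10, 10) for _ in range(rng.randint(2, 5))]
        if max_pair_product_fast(case) != max_pair_product_brute(case):
            return case
    return None

case = find_counterexample()
print("failing case:", case)
```

Small random cases are preferable to large ones here: once a mismatch is found, a 3-element list is far easier to reason about than a 10,000-element one.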
techtariat  dan-luu  engineering  programming  debugging  IEEE  reflection  stories  education  higher-ed  checklists  iteration-recursion  divide-and-conquer  thinking  ground-up  nitty-gritty  giants  feynman  error  input-output  structure  composition-decomposition  abstraction  systematic-ad-hoc  reduction  teaching  state  correctness  multi  oly  oly-programming  metabuch  neurons  problem-solving  wire-guided  marginal  strategy  tactics  methodology  simplification-normalization
may 2019 by nhaliday
Sci-Hub | The genetics of human fertility. Current Opinion in Psychology, 27, 41–45 | 10.1016/j.copsyc.2018.07.011
very short

Overall, there is a suggestion of two different reproductive strategies proving to be successful in modern Western societies: (1) a strategy associated with socially conservative values, including a high commitment to the bearing of children within marriage; and (2) a strategy associated with antisocial behavior, early sexual experimentation, a variety of sexual partners, low educational attainment, low commitment to marriage, haphazard pregnancies, and indifference to politics. This notion of distinct lifestyles characterized in common by relatively high fertility deserves further empirical and theoretical study.
pdf  piracy  study  fertility  biodet  behavioral-gen  genetics  genetic-correlation  iq  education  class  right-wing  politics  ideology  long-short-run  time-preference  strategy  planning  correlation  life-history  dysgenics  rot  personality  psychology  gender  gender-diff  fisher  giants  old-anglo  tradition  religion  psychiatry  disease  autism  👽  stress  variance-components  equilibrium  class-warfare
march 2019 by nhaliday
Science - Wikipedia
In Northern Europe, the new technology of the printing press was widely used to publish many arguments, including some that disagreed widely with contemporary ideas of nature. René Descartes and Francis Bacon published philosophical arguments in favor of a new type of non-Aristotelian science. Descartes emphasized individual thought and argued that mathematics rather than geometry should be used in order to study nature. Bacon emphasized the importance of experiment over contemplation. Bacon further questioned the Aristotelian concepts of formal cause and final cause, and promoted the idea that science should study the laws of "simple" natures, such as heat, rather than assuming that there is any specific nature, or "formal cause," of each complex type of thing. This new modern science began to see itself as describing "laws of nature". This updated approach to studies in nature was seen as mechanistic. Bacon also argued that science should aim for the first time at practical inventions for the improvement of all human life.

Age of Enlightenment

...

During this time, the declared purpose and value of science became producing wealth and inventions that would improve human lives, in the materialistic sense of having more food, clothing, and other things. In Bacon's words, "the real and legitimate goal of sciences is the endowment of human life with new inventions and riches", and he discouraged scientists from pursuing intangible philosophical or spiritual ideas, which he believed contributed little to human happiness beyond "the fume of subtle, sublime, or pleasing speculation".[72]
article  wiki  reference  science  philosophy  letters  history  iron-age  mediterranean  the-classics  medieval  europe  the-great-west-whale  early-modern  ideology  telos-atelos  ends-means  new-religion  weird  enlightenment-renaissance-restoration-reformation  culture  the-devil  anglo  big-peeps  giants  religion  theos  tip-of-tongue  hmm  truth  dirty-hands  engineering  roots  values  formal-values  quotes  causation  forms-instances  technology  logos
august 2018 by nhaliday
Why read old philosophy? | Meteuphoric
(This story would suggest that in physics students are maybe missing out on learning the styles of thought that produce progress in physics. My guess is that instead they learn them in grad school when they are doing research themselves, by emulating their supervisors, and that the helpfulness of this might partially explain why Nobel prizewinner advisors beget Nobel prizewinner students.)

The story I hear about philosophy—and I actually don’t know how much it is true—is that as bits of philosophy come to have any methodological tools other than ‘think about it’, they break off and become their own sciences. So this would explain philosophy’s lone status in studying old thinkers rather than impersonal methods—philosophy is the lone ur-discipline without impersonal methods but thinking.

This suggests a research project: try summarizing what Aristotle is doing rather than Aristotle’s views. Then write a nice short textbook about it.
ratty  learning  reading  studying  prioritizing  history  letters  philosophy  science  comparison  the-classics  canon  speculation  reflection  big-peeps  iron-age  mediterranean  roots  lens  core-rats  thinking  methodology  grad-school  academia  physics  giants  problem-solving  meta:research  scholar  the-trenches  explanans  crux  metameta  duplication  sociality  innovation  quixotic  meta:reading  classic
june 2018 by nhaliday
Theory of Self-Reproducing Automata - John von Neumann
Fourth Lecture: THE ROLE OF HIGH AND OF EXTREMELY HIGH COMPLICATION

Comparisons between computing machines and the nervous systems. Estimates of size for computing machines, present and near future.

Estimates for size for the human central nervous system. Excursus about the “mixed” character of living organisms. Analog and digital elements. Observations about the “mixed” character of all componentry, artificial as well as natural. Interpretation of the position to be taken with respect to these.

Evaluation of the discrepancy in size between artificial and natural automata. Interpretation of this discrepancy in terms of physical factors. Nature of the materials used.

The probability of the presence of other intellectual factors. The role of complication and the theoretical penetration that it requires.

Questions of reliability and errors reconsidered. Probability of individual errors and length of procedure. Typical lengths of procedure for computing machines and for living organisms--that is, for artificial and for natural automata. Upper limits on acceptable probability of error in individual operations. Compensation by checking and self-correcting features.

Differences of principle in the way in which errors are dealt with in artificial and in natural automata. The “single error” principle in artificial automata. Crudeness of our approach in this case, due to the lack of adequate theory. More sophisticated treatment of this problem in natural automata: The role of the autonomy of parts. Connections between this autonomy and evolution.

- 10^10 neurons in the brain vs ~10^4 vacuum tubes in the largest computer at the time
- machines faster: ~5 ms from neuron potential to neuron potential vs ~10^-3 ms for vacuum tubes

https://en.wikipedia.org/wiki/John_von_Neumann#Computing
pdf  article  papers  essay  nibble  math  cs  computation  bio  neuro  neuro-nitgrit  scale  magnitude  comparison  acm  von-neumann  giants  thermo  phys-energy  speed  performance  time  density  frequency  hardware  ems  efficiency  dirty-hands  street-fighting  fermi  estimate  retention  physics  interdisciplinary  multi  wiki  links  people  🔬  atoms  duplication  iteration-recursion  turing  complexity  measure  nature  technology  complex-systems  bits  information-theory  circuits  robust  structure  composition-decomposition  evolution  mutation  axioms  analogy  thinking  input-output  hi-order-bits  coding-theory  flexibility  rigidity  automata-languages
april 2018 by nhaliday
Society of Mind - Wikipedia
A core tenet of Minsky's philosophy is that "minds are what brains do". The society of mind theory views the human mind and any other naturally evolved cognitive systems as a vast society of individually simple processes known as agents. These processes are the fundamental thinking entities from which minds are built, and together produce the many abilities we attribute to minds. The great power in viewing a mind as a society of agents, as opposed to the consequence of some basic principle or some simple formal system, is that different agents can be based on different types of processes with different purposes, ways of representing knowledge, and methods for producing results.

This idea is perhaps best summarized by the following quote:

What magical trick makes us intelligent? The trick is that there is no trick. The power of intelligence stems from our vast diversity, not from any single, perfect principle. —Marvin Minsky, The Society of Mind, p. 308

https://en.wikipedia.org/wiki/Modularity_of_mind

The modular organization of human anatomical brain networks: Accounting for the cost of wiring: https://www.mitpressjournals.org/doi/pdfplus/10.1162/NETN_a_00002
Brain networks are expected to be modular. However, existing techniques for estimating a network’s modules make it difficult to assess the influence of organizational principles such as wiring cost reduction on the detected modules. Here we present a modification of an existing module detection algorithm that allowed us to focus on connections that are unexpected under a cost-reduction wiring rule and to identify modules from among these connections. We applied this technique to anatomical brain networks and showed that the modules we detected differ from those detected using the standard technique. We demonstrated that these novel modules are spatially distributed, exhibit unique functional fingerprints, and overlap considerably with rich clubs, giving rise to an alternative and complementary interpretation of the functional roles of specific brain regions. Finally, we demonstrated that, using the modified module detection approach, we can detect modules in a developmental dataset that track normative patterns of maturation. Collectively, these findings support the hypothesis that brain networks are composed of modules and provide additional insight into the function of those modules.
books  ideas  speculation  structure  composition-decomposition  complex-systems  neuro  ai  psychology  cog-psych  intelligence  reduction  wiki  giants  philosophy  number  cohesion  diversity  systematic-ad-hoc  detail-architecture  pdf  study  neuro-nitgrit  brain-scan  nitty-gritty  network-structure  graphs  graph-theory  models  whole-partial-many  evopsych  eden  reference  psych-architecture  article  coupling-cohesion  multi
april 2018 by nhaliday
Who We Are | West Hunter
I’m going to review David Reich’s new book, Who We Are and How We Got Here. Extensively: in a sense I’ve already been doing this for a long time. Probably there will be a podcast. The GoFundMe link is here. You can also send money via Paypal (Use the donate button), or bitcoins to 1Jv4cu1wETM5Xs9unjKbDbCrRF2mrjWXr5. In-kind donations, such as orichalcum or mithril, are always appreciated.

This is the book about the application of ancient DNA to prehistory and history.

height difference between northern and southern europeans: https://westhunt.wordpress.com/2018/03/29/who-we-are-1/
mixing, genocide of males, etc.: https://westhunt.wordpress.com/2018/03/29/who-we-are-2-purity-of-essence/
rapid change in polygenic traits (appearance by Kevin Mitchell and funny jab at Brad Delong ("regmonkey")): https://westhunt.wordpress.com/2018/03/30/rapid-change-in-polygenic-traits/
schiz, bipolar, and IQ: https://westhunt.wordpress.com/2018/03/30/rapid-change-in-polygenic-traits/#comment-105605
Dan Graur being dumb: https://westhunt.wordpress.com/2018/04/02/the-usual-suspects/
prediction of neanderthal mixture and why: https://westhunt.wordpress.com/2018/04/03/who-we-are-3-neanderthals/
New Guineans tried to use Denisovan admixture to avoid UN sanctions (by "not being human"): https://westhunt.wordpress.com/2018/04/04/who-we-are-4-denisovans/
also some commentary on decline of Out-of-Africa, including:
"Homo Naledi, a small-brained homonin identified from recently discovered fossils in South Africa, appears to have hung around way later that you’d expect (up to 200,000 years ago, maybe later) than would be the case if modern humans had occupied that area back then. To be blunt, we would have eaten them."

Live Not By Lies: https://westhunt.wordpress.com/2018/04/08/live-not-by-lies/
Next he slams people that suspect that upcoming genetic analysis will, in most cases, confirm traditional stereotypes about race – the way the world actually looks.

The people Reich dumps on are saying perfectly reasonable things. He criticizes Henry Harpending for saying that he’d never seen an African with a hobby. Of course, Henry had actually spent time in Africa, and that’s what he’d seen. The implication is that people in Malthusian farming societies – which Africa was not – were selected to want to work, even where there was no immediate necessity to do so. Thus hobbies, something like a gerbil running in an exercise wheel.

He criticized Nicholas Wade, for saying that different races have different dispositions. Wade’s book wasn’t very good, but of course personality varies by race: Darwin certainly thought so. You can see differences at birth. Cover a baby’s nose with a cloth: Chinese and Navajo babies quietly breathe through their mouth, European and African babies fuss and fight.

Then he attacks Watson, for asking when Reich was going to look at Jewish genetics – the kind that has led to greater-than-average intelligence. Watson was undoubtedly trying to get a rise out of Reich, but it’s a perfectly reasonable question. Ashkenazi Jews are smarter than the average bear and everybody knows it. Selection is the only possible explanation, and the conditions in the Middle ages – white-collar job specialization and a high degree of endogamy, were just what the doctor ordered.

Watson’s a prick, but he’s a great prick, and what he said was correct. Henry was a prince among men, and Nick Wade is a decent guy as well. Reich is totally out of line here: he’s being a dick.

Now Reich may be trying to burnish his anti-racist credentials, which surely need some renewal after having pointed out that race as colloquially used is pretty reasonable, there’s no reason pops can’t be different, people that said otherwise (like Lewontin, Gould, Montagu, etc.) were lying, Aryans conquered Europe and India, while we’re tied to the train tracks with scary genetic results coming straight at us. I don’t care: he’s being a weasel, slandering the dead and abusing the obnoxious old genius who laid the foundations of his field. Reich will also get old someday: perhaps he too will someday lose track of all the nonsense he’s supposed to say, or just stop caring. Maybe he already has… I’m pretty sure that Reich does not like lying – which is why he wrote this section of the book (not at all logically necessary for his exposition of the ancient DNA work) but the complex juggling of lies and truth required to get past the demented gatekeepers of our society may not be his forte. It has been said that if it was discovered that someone in the business was secretly an android, David Reich would be the prime suspect. No Talleyrand he.

https://westhunt.wordpress.com/2018/04/12/who-we-are-6-the-americas/
The population that accounts for the vast majority of Native American ancestry, which we will call Amerinds, came into existence somewhere in northern Asia. It was formed from a mix of Ancient North Eurasians and a population related to the Han Chinese – about 40% ANE and 60% proto-Chinese. It looks as if most of the paternal ancestry was from the ANE, while almost all of the maternal ancestry was from the proto-Han. [Aryan-Transpacific ?!?] This formation story – ANE boys, East-end girls – is similar to the formation story for the Indo-Europeans.

https://westhunt.wordpress.com/2018/04/18/who-we-are-7-africa/
In some ways, on some questions, learning more from genetics has left us less certain. At this point we really don’t know where anatomically modern humans originated. Greater genetic variety in sub-Saharan Africa has traditionally been considered a sign that AMH originated there, but it is possible that we originated elsewhere, perhaps in North Africa or the Middle East, and gained extra genetic variation when we moved into sub-Saharan Africa and mixed with various archaic groups that already existed. One consideration is that finding recent archaic admixture in a population may well be a sign that modern humans didn’t arise in that region (like language substrates) – which makes South Africa and West Africa look less likely. The long-continued existence of Homo naledi in South Africa suggests that modern humans may not have been there for all that long – if we had co-existed with Homo naledi, they probably wouldn’t have lasted long. The oldest known skull that is (probably) AMH was recently found in Morocco, while modern human remains, already known from about 100,000 years ago in Israel, have recently been found in northern Saudi Arabia.

Meanwhile, work by Nick Patterson suggests that modern humans were formed by a fusion between two long-isolated populations, a bit less than half a million years ago.

So: genomics has made the recent history of Africa pretty clear. Bantu agriculturalists expanded and replaced hunter-gatherers, farmers and herders from the Middle East settled North Africa, Egypt and northeast Africa, while Nilotic herdsmen expanded south from the Sudan. There are traces of earlier patterns and peoples, but today, only traces. As for questions further back in time, such as the origins of modern humans – we thought we knew, and now we know we don’t. But that’s progress.

https://westhunt.wordpress.com/2018/04/18/reichs-journey/
David Reich’s professional path must have shaped his perspective on the social sciences. Look at the record. He starts his professional career examining the role of genetics in the elevated prostate cancer risk seen in African-American men. Various social-science fruitcakes oppose him even looking at the question of ancestry ( African vs European). But they were wrong: certain African-origin alleles explain the increased risk. Anthropologists (and human geneticists) were sure (based on nothing) that modern humans hadn’t interbred with Neanderthals – but of course that happened. Anthropologists and archaeologists knew that Gustaf Kossinna couldn’t have been right when he said that widespread material culture corresponded to widespread ethnic groups, and that migration was the primary explanation for changes in the archaeological record – but he was right. They knew that the Indo-European languages just couldn’t have been imposed by fire and sword – but Reich’s work proved them wrong. Lots of people – the usual suspects plus Hindu nationalists – were sure that the AIT ( Aryan Invasion Theory) was wrong, but it looks pretty good today.

Some sociologists believed that caste in India was somehow imposed or significantly intensified by the British – but it turns out that most jatis have been almost perfectly endogamous for two thousand years or more…

It may be that Reich doesn’t take these guys too seriously anymore. Why should he?

varnas, jatis, Aryan invasion theory: https://westhunt.wordpress.com/2018/04/22/who-we-are-8-india/

europe and EEF+WHG+ANE: https://westhunt.wordpress.com/2018/05/01/who-we-are-9-europe/

https://www.nationalreview.com/2018/03/book-review-david-reich-human-genes-reveal-history/
The massive mixture events that occurred in the recent past to give rise to Europeans and South Asians, to name just two groups, were likely “male mediated.” That’s another way of saying that men on the move took local women as brides or concubines. In the New World there are many examples of this, whether it be among African Americans, where most European ancestry seems to come through men, or in Latin America, where conquistadores famously took local women as paramours. Both of these examples are disquieting, and hint at the deep structural roots of patriarchal inequality and social subjugation that form the backdrop for the emergence of many modern peoples.
west-hunter  scitariat  books  review  sapiens  anthropology  genetics  genomics  history  antiquity  iron-age  world  europe  gavisti  aDNA  multi  politics  culture-war  kumbaya-kult  social-science  academia  truth  westminster  environmental-effects  embodied  pop-diff  nordic  mediterranean  the-great-west-whale  germanic  the-classics  shift  gene-flow  homo-hetero  conquest-empire  morality  diversity  aphorism  migration  migrant-crisis  EU  africa  MENA  gender  selection  speed  time  population-genetics  error  concrete  econotariat  economics  regression  troll  lol  twitter  social  media  street-fighting  methodology  robust  disease  psychiatry  iq  correlation  usa  obesity  dysgenics  education  track-record  people  counterexample  reason  thinking  fisher  giants  old-anglo  scifi-fantasy  higher-ed  being-right  stories  reflection  critique  multiplicative  iteration-recursion  archaics  asia  developing-world  civil-liberty  anglo  oceans  food  death  horror  archaeology  gnxp  news  org:mag  right-wing  age-of-discovery  latin-america  ea
march 2018 by nhaliday
I was watching Charade the other day, not for the first time, and was noticing that the action scenes with Cary Grant (human fly, and fighting George Kennedy) really weren’t very convincing.  Age. But think what it would be like today: we’d see Audrey Hepburn kicking the living shit out of Kennedy, probably cutting his throat with his own claw – while still being utterly adorable.

https://westhunt.wordpress.com/2018/03/04/shtrafbats/
Was thinking about how there are far too many reviewers, and far too few movies worth reviewing. It might be fun to review the movies that should have been made, instead. Someone ought to make a movie about the life of Konstantin Rokossovsky – an officer arrested and tortured by Stalin (ended up with denailed fingers and steel teeth) who became one of the top Soviet generals. The story would be focused on his command of 16th Army in the final defense of Moscow – an army group composed entirely of penal battalions. The Legion of the Damned.

https://westhunt.wordpress.com/2018/03/04/shtrafbats/#comment-103767
There hasn’t been a good Gulag Archipelago movie, has there?

One historical movie that I’d really like to see would be about the defense of Malta by the Knights of St. John. That or the defense of Vienna. Either one would be very “timely”, which is a word many reviewers seem to misuse quite laughably these days.
--
My oldest son made the same suggestion – The Great Siege

Siege of Vienna – Drawing of the Dark?

https://westhunt.wordpress.com/2018/03/04/shtrafbats/#comment-103846
The Conquest of New Spain.
--
Only Cortez was fully awake. Him and von Neumann.
west-hunter  scitariat  reflection  discussion  aphorism  troll  film  culture  classic  multi  review  history  mostly-modern  world-war  russia  communism  authoritarianism  military  war  poast  medieval  conquest-empire  expansionism  age-of-discovery  europe  eastern-europe  mediterranean  usa  latin-america  big-peeps  von-neumann  giants
march 2018 by nhaliday
The Coming Technological Singularity
Within thirty years, we will have the technological
means to create superhuman intelligence. Shortly after,
the human era will be ended.

Is such progress avoidable? If not to be avoided, can
events be guided so that we may survive? These questions
are investigated. Some possible answers (and some further
dangers) are presented.

_What is The Singularity?_

The acceleration of technological progress has been the central
feature of this century. I argue in this paper that we are on the edge
of change comparable to the rise of human life on Earth. The precise
cause of this change is the imminent creation by technology of
entities with greater than human intelligence. There are several means
by which science may achieve this breakthrough (and this is another
reason for having confidence that the event will occur):
o The development of computers that are "awake" and
superhumanly intelligent. (To date, most controversy in the
area of AI relates to whether we can create human equivalence
in a machine. But if the answer is "yes, we can", then there
is little doubt that beings more intelligent can be constructed
shortly thereafter.)
o Large computer networks (and their associated users) may "wake
up" as a superhumanly intelligent entity.
o Computer/human interfaces may become so intimate that users
may reasonably be considered superhumanly intelligent.
o Biological science may find ways to improve upon the natural
human intellect.

The first three possibilities depend in large part on
improvements in computer hardware. Progress in computer hardware has
followed an amazingly steady curve in the last few decades [16]. Based
largely on this trend, I believe that the creation of greater than
human intelligence will occur during the next thirty years. (Charles
Platt [19] has pointed out that AI enthusiasts have been making claims
like this for the last thirty years. Just so I'm not guilty of a
relative-time ambiguity, let me be more specific: I'll be surprised if
this event occurs before 2005 or after 2030.)

What are the consequences of this event? When greater-than-human
intelligence drives progress, that progress will be much more rapid.
In fact, there seems no reason why progress itself would not involve
the creation of still more intelligent entities -- on a still-shorter
time scale. The best analogy that I see is with the evolutionary past:
Animals can adapt to problems and make inventions, but often no faster
than natural selection can do its work -- the world acts as its own
simulator in the case of natural selection. We humans have the ability
to internalize the world and conduct "what if's" in our heads; we can
solve many problems thousands of times faster than natural selection.
Now, by creating the means to execute those simulations at much higher
speeds, we are entering a regime as radically different from our human
past as we humans are from the lower animals.
org:junk  humanity  accelerationism  futurism  prediction  classic  technology  frontier  speedometer  ai  risk  internet  time  essay  rhetoric  network-structure  ai-control  morality  ethics  volo-avolo  egalitarianism-hierarchy  intelligence  scale  giants  scifi-fantasy  speculation  quotes  religion  theos  singularity  flux-stasis  phase-transition  cybernetics  coordination  cooperate-defect  moloch  communication  bits  speed  efficiency  eden-heaven  ecology  benevolence  end-times  good-evil  identity  the-self  whole-partial-many  density
march 2018 by nhaliday
Prisoner's dilemma - Wikipedia
caveat to result below:
An extension of the IPD is an evolutionary stochastic IPD, in which the relative abundance of particular strategies is allowed to change, with more successful strategies relatively increasing. This process may be accomplished by having less successful players imitate the more successful strategies, or by eliminating less successful players from the game, while multiplying the more successful ones. It has been shown that unfair ZD strategies are not evolutionarily stable. The key intuition is that an evolutionarily stable strategy must not only be able to invade another population (which extortionary ZD strategies can do) but must also perform well against other players of the same type (which extortionary ZD players do poorly, because they reduce each other's surplus).[14]

Theory and simulations confirm that beyond a critical population size, ZD extortion loses out in evolutionary competition against more cooperative strategies, and as a result, the average payoff in the population increases when the population is bigger. In addition, there are some cases in which extortioners may even catalyze cooperation by helping to break out of a face-off between uniform defectors and win–stay, lose–switch agents.[8]
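The key intuition above — extortionate ZD players do poorly against their own type — is easy to check numerically. A minimal sketch, assuming the conventional payoffs (T, R, P, S) = (5, 3, 1, 0) and the χ = 3 extortionate memory-one strategy (11/13, 1/2, 7/26, 0) from Press & Dyson's examples; the round count and seed are arbitrary:

```python
import random

# Memory-one strategies: probability of cooperating given last round's
# (my move, opponent's move).
EXTORT = {('C','C'): 11/13, ('C','D'): 1/2, ('D','C'): 7/26, ('D','D'): 0.0}
TFT    = {('C','C'): 1.0,   ('C','D'): 0.0, ('D','C'): 1.0,  ('D','D'): 0.0}

# (row player's payoff, column player's payoff) with T,R,P,S = 5,3,1,0
PAYOFF = {('C','C'): (3,3), ('C','D'): (0,5), ('D','C'): (5,0), ('D','D'): (1,1)}

def avg_payoffs(p, q, rounds=20000, seed=0):
    rng = random.Random(seed)
    a, b = 'C', 'C'              # treat the opening as a prior mutual cooperation
    sa = sb = 0
    for _ in range(rounds):
        na = 'C' if rng.random() < p[(a, b)] else 'D'
        nb = 'C' if rng.random() < q[(b, a)] else 'D'
        a, b = na, nb
        pa, pb = PAYOFF[(a, b)]
        sa += pa; sb += pb
    return sa / rounds, sb / rounds

tft_pair    = avg_payoffs(TFT, TFT)        # mutual cooperation: (3.0, 3.0)
extort_pair = avg_payoffs(EXTORT, EXTORT)  # collapses toward mutual defection
print(tft_pair, extort_pair)
```

Since the extortionate strategy never cooperates after mutual defection (p_DD = 0), the (D,D) state is absorbing for an extortioner pair: once both defect they defect forever, so the pair's average payoff sinks toward P = 1 while the TFT pair holds R = 3 — they "reduce each other's surplus" exactly as quoted.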

https://alfanl.com/2018/04/12/defection/
Nature boils down to a few simple concepts.

Haters will point out that I oversimplify. The haters are wrong. I am good at saying a lot with few words. Nature indeed boils down to a few simple concepts.

In life, you can either cooperate or defect.

Used to be that defection was the dominant strategy, say in the time when the Roman empire started to crumble. Everybody complained about everybody and in the end nothing got done. Then came Jesus, who told people to be loving and cooperative, and boom: 1800 years later we get the industrial revolution.

Because of Jesus we now find ourselves in a situation where cooperation is the dominant strategy. A normie engages in a ton of cooperation: with the tax collector who wants more and more of his money, with schools who want more and more of his kid’s time, with media who wants him to repeat more and more party lines, with the Zeitgeist of the Collective Spirit of the People’s Progress Towards a New Utopia. Essentially, our normie is cooperating himself into a crumbling Western empire.

Turns out that if everyone blindly cooperates, parasites sprout up like weeds until defection once again becomes the standard.

The point of a post-Christian religion is to once again create conditions for the kind of cooperation that led to the industrial revolution. This necessitates throwing out undead Christianity: you do not blindly cooperate. You cooperate with people that cooperate with you, you defect on people that defect on you. Christianity mixed with Darwinism. God and Gnon meet.

This also means we re-establish spiritual hierarchy, which, like regular hierarchy, is a prerequisite for cooperation. It is this hierarchical cooperation that turns a household into a force to be reckoned with, that allows a group of men to unite as a front against their enemies, that allows a tribe to conquer the world. Remember: Scientology bullied the Cathedral’s tax department into submission.

With a functioning hierarchy, men still gossip, lie and scheme, but they will do so in whispers behind closed doors. In your face they cooperate and contribute to the group’s wellbeing because incentives are such that contributing to group wellbeing heightens status.

Without a functioning hierarchy, men gossip, lie and scheme, but they do so in your face, and they tell you that you are positively deluded for accusing them of gossiping, lying and scheming. Seeds will not sprout in such ground.

Spiritual dominance is established in the same way any sort of dominance is established: fought for, taken. But the fight is ritualistic. You can’t force spiritual dominance if no one listens, or if you are silenced the ritual is not allowed to happen.

If one of our priests is forbidden from establishing spiritual dominance, that is a sure sign an enemy priest is in better control and has vested interest in preventing you from establishing spiritual dominance.

They defect on you, you defect on them. Let them suffer the consequences of enemy priesthood, among others characterized by the annoying tendency that very little is said with very many words.

https://contingentnotarbitrary.com/2018/04/14/rederiving-christianity/
To recap, we started with a secular definition of Logos and noted that its telos is existence. Given human nature, game theory and the power of cooperation, the highest expression of that telos is freely chosen universal love, tempered by constant vigilance against defection while maintaining compassion for the defectors and forgiving those who repent. In addition, we must know the telos in order to fulfill it.

In Christian terms, looks like we got over half of the Ten Commandments (know Logos for the First, don’t defect or tempt yourself to defect for the rest), the importance of free will, the indestructibility of evil (group cooperation vs individual defection), loving the sinner and hating the sin (with defection as the sin), forgiveness (with conditions), and love and compassion toward all, assuming only secular knowledge and that it’s good to exist.

Iterated Prisoner's Dilemma is an Ultimatum Game: http://infoproc.blogspot.com/2012/07/iterated-prisoners-dilemma-is-ultimatum.html
The history of IPD shows that bounded cognition prevented the dominant strategies from being discovered for over 60 years, despite significant attention from game theorists, computer scientists, economists, evolutionary biologists, etc. Press and Dyson have shown that IPD is effectively an ultimatum game, which is very different from the Tit for Tat stories told by generations of people who worked on IPD (Axelrod, Dawkins, etc., etc.).

...

For evolutionary biologists: Dyson clearly thinks this result has implications for multilevel (group vs individual) selection:
... Cooperation loses and defection wins. The ZD strategies confirm this conclusion and make it sharper. ... The system evolved to give cooperative tribes an advantage over non-cooperative tribes, using punishment to give cooperation an evolutionary advantage within the tribe. This double selection of tribes and individuals goes way beyond the Prisoners' Dilemma model.
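The extortion property itself can be verified exactly, without simulation, for an extortioner facing an unconditional cooperator: the extortioner's surplus over P comes out χ times the opponent's. A sketch assuming the conventional payoffs and the χ = 3 strategy (11/13, 1/2, 7/26, 0) from Press & Dyson's paper, solving the stationary distribution in exact rational arithmetic:

```python
from fractions import Fraction as F

# Extortionate ZD strategy with chi = 3; payoffs T,R,P,S = 5,3,1,0.
p_CC, p_DC = F(11, 13), F(7, 26)   # P(cooperate | CC), P(cooperate | DC)
T, R, P, S = 5, 3, 1, 0

# The opponent always cooperates, so the Markov chain is over the
# extortioner's own move. Stationary cooperation probability pi solves
#   pi = pi * p_CC + (1 - pi) * p_DC
pi = p_DC / (1 - p_CC + p_DC)      # = 7/11

s_x = pi * R + (1 - pi) * T        # extortioner's long-run payoff, 41/11
s_y = pi * R + (1 - pi) * S        # cooperator's long-run payoff, 21/11

chi = (s_x - P) / (s_y - P)
print(pi, s_x, s_y, chi)           # chi comes out exactly 3
```

Press & Dyson show this linear relation (s_X − P) = χ(s_Y − P) holds against *any* opponent strategy, which is what makes it an ultimatum: the opponent's best reply is to accede and cooperate, ceding the extortioner the larger share of the surplus.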

implications for fractionalized Europe vis-a-vis unified China?

and more broadly does this just imply we're doomed in the long run RE: cooperation, morality, the "good society", so on...? war and group-selection is the only way to get a non-crab bucket civilization?

Iterated Prisoner’s Dilemma contains strategies that dominate any evolutionary opponent:
http://www.pnas.org/content/109/26/10409.full
http://www.pnas.org/content/109/26/10409.full.pdf
https://www.edge.org/conversation/william_h_press-freeman_dyson-on-iterated-prisoners-dilemma-contains-strategies-that

https://en.wikipedia.org/wiki/Ultimatum_game

analogy for ultimatum game: the state gives the demos a bargain take-it-or-leave-it, and...if the demos refuses...violence?
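For reference, the ultimatum game's logic fits in a few lines: a proposer offers a split, the responder accepts or rejects, and rejection zeroes both payoffs. A minimal sketch (the threshold-responder model and the numbers are illustrative assumptions):

```python
def ultimatum(pie, offer, accept_threshold):
    """Proposer keeps pie - offer if the responder accepts; rejection zeroes both."""
    if offer >= accept_threshold:
        return pie - offer, offer
    return 0, 0

# A purely payoff-maximizing responder accepts any positive offer,
# so the subgame-perfect proposer offers almost nothing...
assert ultimatum(100, 1, accept_threshold=1) == (99, 1)

# ...but a responder credibly willing to punish lowball offers
# forces a better split:
assert ultimatum(100, 1, accept_threshold=30) == (0, 0)
assert ultimatum(100, 30, accept_threshold=30) == (70, 30)
```

The take-it-or-leave-it structure is the same leverage an extortionate ZD player wields inside the IPD — and, per the analogy above, the same bargain the state offers the demos.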

The nature of human altruism: http://sci-hub.tw/https://www.nature.com/articles/nature02043
- Ernst Fehr & Urs Fischbacher

Some of the most fundamental questions concerning our evolutionary origins, our social relations, and the organization of society are centred around issues of altruism and selfishness. Experimental evidence indicates that human altruism is a powerful force and is unique in the animal world. However, there is much individual heterogeneity and the interaction between altruists and selfish individuals is vital to human cooperation. Depending on the environment, a minority of altruists can force a majority of selfish individuals to cooperate or, conversely, a few egoists can induce a large number of altruists to defect. Current gene-based evolutionary theories cannot explain important patterns of human altruism, pointing towards the importance of both theories of cultural evolution as well as gene–culture co-evolution.

...

Why are humans so unusual among animals in this respect? We propose that quantitatively, and probably even qualitatively, unique patterns of human altruism provide the answer to this question. Human altruism goes far beyond that which has been observed in the animal world. Among animals, fitness-reducing acts that confer fitness benefits on other individuals are largely restricted to kin groups; despite several decades of research, evidence for reciprocal altruism in pair-wise repeated encounters4,5 remains scarce6–8. Likewise, there is little evidence so far that individual reputation building affects cooperation in animals, which contrasts strongly with what we find in humans. If we randomly pick two human strangers from a modern society and give them the chance to engage in repeated anonymous exchanges in a laboratory experiment, there is a high probability that reciprocally altruistic behaviour will emerge spontaneously9,10.

However, human altruism extends far beyond reciprocal altruism and reputation-based cooperation, taking the form of strong reciprocity11,12. Strong reciprocity is a combination of altruistic rewarding, which is a predisposition to reward others for cooperative, norm-abiding behaviours, and altruistic punishment, which is a propensity to impose sanctions on others for norm violations. Strong reciprocators bear the cost of rewarding or punishing even if they gain no individual economic benefit whatsoever from their acts. In contrast, reciprocal altruists, as they have been defined in the biological literature4,5, reward and punish only if this is in their long-term self-interest. Strong reciprocity thus constitutes a powerful incentive for cooperation even in non-repeated interactions and when reputation gains are absent, because strong reciprocators will reward those who cooperate and punish those who defect.
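How altruistic punishment changes incentives can be shown with a one-shot linear public goods game; this sketch uses made-up numbers (group size, multiplier, fine and fee are illustrative assumptions, not taken from the paper):

```python
def payoff(contribs, endow=10, r=1.6, fines=None):
    """Linear public goods game: contributions are pooled, multiplied by r,
    and shared equally; fines (player index -> amount) are then subtracted."""
    n = len(contribs)
    pot = r * sum(contribs) / n
    pay = [endow - c + pot for c in contribs]
    if fines:
        for i, f in fines.items():
            pay[i] -= f
    return pay

all_coop   = payoff([10, 10, 10, 10])   # everyone nets 16
one_defect = payoff([10, 10, 10, 0])    # the free rider nets 22
assert one_defect[3] > all_coop[3]      # without punishment, defection pays

# Three strong reciprocators each fine the defector 3 at a personal cost of 1:
punished = payoff([10, 10, 10, 0], fines={0: 1, 1: 1, 2: 1, 3: 9})
assert punished[3] < all_coop[3]        # now defection no longer pays
print(all_coop, one_defect, punished)
```

Note the punishers are worse off for punishing (they pay the enforcement fee with no individual return), which is exactly what makes the behavior "strong" reciprocity rather than long-run self-interest.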

...

We will show that the interaction between selfish and strongly reciprocal … [more]
concept  conceptual-vocab  wiki  reference  article  models  GT-101  game-theory  anthropology  cultural-dynamics  trust  cooperate-defect  coordination  iteration-recursion  sequential  axelrod  discrete  smoothness  evolution  evopsych  EGT  economics  behavioral-econ  sociology  new-religion  deep-materialism  volo-avolo  characterization  hsu  scitariat  altruism  justice  group-selection  decision-making  tribalism  organizing  hari-seldon  theory-practice  applicability-prereqs  bio  finiteness  multi  history  science  social-science  decision-theory  commentary  study  summary  giants  the-trenches  zero-positive-sum  🔬  bounded-cognition  info-dynamics  org:edge  explanation  exposition  org:nat  eden  retention  long-short-run  darwinian  markov  equilibrium  linear-algebra  nitty-gritty  competition  war  explanans  n-factor  europe  the-great-west-whale  occident  china  asia  sinosphere  orient  decentralized  markets  market-failure  cohesion  metabuch  stylized-facts  interdisciplinary  physics  pdf  pessimism  time  insight  the-basilisk  noblesse-oblige  the-watchers  ideas  l
march 2018 by nhaliday
Books 2017 | West Hunter
Arabian Sands
The Aryans
The Big Show
The Camel and the Wheel
Civil War on Western Waters
Company Commander
Double-edged Secrets
The Forgotten Soldier
Genes in Conflict
Hive Mind
The Horse, the Wheel, and Language
The Penguin Atlas of Medieval History
Habitable Planets for Man
The genetical theory of natural selection
The Rise of the Greeks
To Lose a Battle
The Jewish War
Tropical Gangsters
The Forgotten Revolution
Egil’s Saga
Shapers
Time Patrol

Russo: https://westhunt.wordpress.com/2017/12/14/books-2017/#comment-98568
west-hunter  scitariat  books  recommendations  list  top-n  confluence  2017  info-foraging  canon  🔬  ideas  s:*  history  mostly-modern  world-war  britain  old-anglo  travel  MENA  frontier  reflection  europe  gallic  war  sapiens  antiquity  archaeology  technology  divergence  the-great-west-whale  transportation  nature  long-short-run  intel  tradecraft  japan  asia  usa  spearhead  garett-jones  hive-mind  economics  broad-econ  giants  fisher  space  iron-age  medieval  the-classics  civilization  judaism  conquest-empire  africa  developing-world  institutions  science  industrial-revolution  the-trenches  wild-ideas  innovation  speedometer  nordic  mediterranean  speculation  fiction  scifi-fantasy  time  encyclopedic  multi  poast  critique  cost-benefit  tradeoffs  quixotic
december 2017 by nhaliday
Lynn Margulis | West Hunter
Margulis went on to theorize that symbiotic relationships between organisms are the dominant driving force of evolution. There certainly are important examples of this: as far as I know, every complex organism that digests cellulose manages it thru a symbiosis with various prokaryotes. Many organisms with a restricted diet have symbiotic bacteria that provide essential nutrients – aphids, for example. Tall fescue, a popular turf grass on golf courses, carries an endosymbiotic fungus. And so on, and on and on.

She went on to oppose neodarwinism, particularly rejecting inter-organismal competition (and population genetics itself). From Wiki: [ She also believed that proponents of the standard theory “wallow in their zoological, capitalistic, competitive, cost-benefit interpretation of Darwin – having mistaken him… Neo-Darwinism, which insists on [the slow accrual of mutations by gene-level natural selection], is in a complete funk.”[8] ‘

...

You might think that Lynn Margulis is an example of someone that could think outside the box because she’d never even been able to find it in the first place – but that’s more true of autistic types [like Dirac or Turing], which I doubt she was in any way. I’d say that some traditional prejudices [dislike of capitalism and individual competition], combined with the sort of general looniness that leaves one open to unconventional ideas, drove her in a direction that bore fruit, more or less by coincidence. A successful creative scientist does not have to be right about everything, or indeed about much of anything: they need to contribute at least one new, true, and interesting thing.

https://westhunt.wordpress.com/2017/11/25/lynn-margulis/#comment-98174
“A successful creative scientist does not have to be right about everything, or indeed about much of anything: they need to contribute at least one new, true, and interesting thing.” Yes – it’s like old bands. As long as they have just one song in heavy rotation on the classic rock stations, they can tour endlessly – it doesn’t matter that they have only one or even no original members performing. A scientific example of this phenomenon is Kary Mullis. He’ll always have PCR, even if a glowing raccoon did greet him with the words, “Good evening, Doctor.”

Nobel Savage: https://www.lrb.co.uk/v21/n13/steven-shapin/nobel-savage
Dancing Naked in the Mind Field by Kary Mullis

jet fuel can't melt steel beams: https://westhunt.wordpress.com/2017/11/25/lynn-margulis/#comment-98201
You have to understand a subject extremely well to make arguments why something couldn’t have happened. The easiest cases involve some purported explanation violating a conservation law of physics: that wasn’t the case here.

Do I think you’re a hotshot, deeply knowledgeable about structural engineering, properties of materials, using computer models, etc? A priori, pretty unlikely. What are the odds that you know as much simple mechanics as I do? a priori, still pretty unlikely. Most likely, you’re talking through your hat.

Next, the conspiracy itself is unlikely: quite a few people would be involved – unlikely that none of them would talk. It’s not that easy to find people that would go along with such a thing, believe it or not. The Communists were pretty good at conspiracy, but people defected, people talked: not just Whittaker Chambers, not just Igor Gouzenko.
west-hunter  scitariat  discussion  people  profile  science  the-trenches  innovation  discovery  ideas  turing  giants  autism  👽  bio  evolution  eden  roots  darwinian  capitalism  competition  cooperate-defect  being-right  info-dynamics  frontier  curiosity  creative  multi  poast  prudence  org:mag  org:anglo  letters  books  review  critique  summary  lol  genomics  social-science  sociology  psychology  psychiatry  ability-competence  rationality  epistemic  reason  events  terrorism  usa  islam  communism  coordination  organizing  russia  dirty-hands  degrees-of-freedom  alignment
november 2017 by nhaliday
[1509.02504] Electric charge in hyperbolic motion: The early history and other geometrical aspects
We revisit the early work of Minkowski and Sommerfeld concerning hyperbolic motion, and we describe some geometrical aspects of the electrodynamic interaction. We discuss the advantages of a time symmetric formulation in which the material points are replaced by infinitesimal length elements.

SPACE AND TIME: An annotated, illustrated edition of Hermann Minkowski's revolutionary essay: http://web.mit.edu/redingtn/www/netadv/SP20130311.html
nibble  preprint  papers  org:mat  physics  electromag  relativity  exposition  history  mostly-modern  pre-ww2  science  the-trenches  discovery  intricacy  classic  explanation  einstein  giants  plots  manifolds  article  multi  liner-notes  org:junk  org:edu  absolute-relative
november 2017 by nhaliday
Karl Pearson and the Chi-squared Test
Pearson's paper of 1900 introduced what subsequently became known as the chi-squared test of goodness of fit. The terminology and allusions of 80 years ago create a barrier for the modern reader, who finds that the interpretation of Pearson's test procedure and the assessment of what he achieved are less than straightforward, notwithstanding the technical advances made since then. An attempt is made here to surmount these difficulties by exploring Pearson's relevant activities during the first decade of his statistical career, and by describing the work by his contemporaries and predecessors which seems to have influenced his approach to the problem. Not all the questions are answered, and others remain for further study.

original paper: http://www.economics.soton.ac.uk/staff/aldrich/1900.pdf

How did Karl Pearson come up with the chi-squared statistic?: https://stats.stackexchange.com/questions/97604/how-did-karl-pearson-come-up-with-the-chi-squared-statistic
He proceeds by working with the multivariate normal, and the chi-square arises as a sum of squared standardized normal variates.

You can see from the discussion on p160-161 he's clearly discussing applying the test to multinomial distributed data (I don't think he uses that term anywhere). He apparently understands the approximate multivariate normality of the multinomial (certainly he knows the margins are approximately normal - that's a very old result - and knows the means, variances and covariances, since they're stated in the paper); my guess is that most of that stuff is already old hat by 1900. (Note that the chi-squared distribution itself dates back to work by Helmert in the mid-1870s.)

Then by the bottom of p163 he derives a chi-square statistic as "a measure of goodness of fit" (the statistic itself appears in the exponent of the multivariate normal approximation).

He then goes on to discuss how to evaluate the p-value*, and then he correctly gives the upper tail area of a χ² with 12 degrees of freedom beyond 43.87 as 0.000016. [You should keep in mind, however, that he didn't correctly understand how to adjust degrees of freedom for parameter estimation at that stage, so some of the examples in his papers use too high a d.f.]
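The quoted tail area is easy to check. For even degrees of freedom k, the chi-squared upper tail has the closed form P(X > x) = e^(−x/2) · Σ_{i=0}^{k/2−1} (x/2)^i / i! (a standard result, not something from Pearson's paper); a quick sketch:

```python
import math

def chi2_sf(x, k):
    """Upper tail P(X > x) for a chi-squared variable with even d.f. k."""
    assert k % 2 == 0
    half = x / 2.0
    # P(X > x) = exp(-x/2) * sum_{i=0}^{k/2-1} (x/2)^i / i!
    return math.exp(-half) * sum(half**i / math.factorial(i) for i in range(k // 2))

# Pearson's example: chi-squared with 12 d.f., tail area beyond 43.87
p = chi2_sf(43.87, 12)
print(f"{p:.6f}")  # 0.000016, matching the value he reported
```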
nibble  papers  acm  stats  hypothesis-testing  methodology  history  mostly-modern  pre-ww2  old-anglo  giants  science  the-trenches  stories  multi  q-n-a  overflow  explanation  summary  innovation  discovery  distribution  degrees-of-freedom  limits
october 2017 by nhaliday
New Theory Cracks Open the Black Box of Deep Learning | Quanta Magazine
A new idea called the “information bottleneck” is helping to explain the puzzling success of today’s artificial-intelligence algorithms — and might also explain how human brains learn.

sounds like he's just talking about autoencoders?
news  org:mag  org:sci  popsci  announcement  research  deep-learning  machine-learning  acm  information-theory  bits  neuro  model-class  big-surf  frontier  nibble  hmm  signal-noise  deepgoog  expert  ideas  wild-ideas  summary  talks  video  israel  roots  physics  interdisciplinary  ai  intelligence  shannon  giants  arrows  preimage  lifts-projections  composition-decomposition  characterization  markov  gradient-descent  papers  liner-notes  experiment  hi-order-bits  generalization  expert-experience  explanans  org:inst  speedometer
september 2017 by nhaliday
Ptolemy's Model of the Solar System
It follows, from the above discussion, that the geocentric model of Ptolemy is equivalent to a heliocentric model in which the various planetary orbits are represented as eccentric circles, and in which the radius vector connecting a given planet to its corresponding equant revolves at a uniform rate. In fact, Ptolemy's model of planetary motion can be thought of as a version of Kepler's model which is accurate to first-order in the planetary eccentricities--see Ch. 4. According to the Ptolemaic scheme, from the point of view of the earth, the orbit of the sun is described by a single circular motion, whereas that of a planet is described by a combination of two circular motions. In reality, the single circular motion of the sun represents the (approximately) circular motion of the earth around the sun, whereas the two circular motions of a typical planet represent a combination of the planet's (approximately) circular motion around the sun, and the earth's motion around the sun. Incidentally, the popular myth that Ptolemy's scheme requires an absurdly large number of circles in order to fit the observational data to any degree of accuracy has no basis in fact. Actually, Ptolemy's model of the sun and the planets, which fits the data very well, only contains 12 circles (i.e., 6 deferents and 6 epicycles).
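For idealized circular, coplanar orbits the equivalence is immediate: the planet's geocentric position is its heliocentric position minus the earth's, which is exactly a sum of two uniform circular motions, i.e. a deferent plus an epicycle. A toy sketch (the radii and periods below are rough modern values for Mars, purely illustrative, not Ptolemy's parameters):

```python
import cmath
import math

def circle(r, omega, t):
    """Uniform circular motion of radius r and angular speed omega, as a complex number."""
    return r * cmath.exp(1j * omega * t)

r_e, w_e = 1.00, 2 * math.pi          # earth: 1 AU, one revolution per year
r_m, w_m = 1.52, 2 * math.pi / 1.88   # Mars: rough modern radius and period

diffs = []
for t in [0.0, 0.3, 0.7, 1.1]:
    # heliocentric description: planet's position relative to the earth
    heliocentric_view = circle(r_m, w_m, t) - circle(r_e, w_e, t)
    # Ptolemaic form: deferent (planet's circle) plus epicycle (reflected earth circle)
    deferent_plus_epicycle = circle(r_m, w_m, t) + circle(-r_e, w_e, t)
    diffs.append(abs(heliocentric_view - deferent_plus_epicycle))
print(max(diffs))  # zero: the two descriptions coincide
```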
org:junk  org:edu  nibble  physics  space  mechanics  history  iron-age  mediterranean  the-classics  science  the-trenches  the-great-west-whale  giants  models  intricacy  parsimony
september 2017 by nhaliday
Fermat's Library | Cassini, Rømer and the velocity of light annotated/explained version.
Abstract: The discovery of the finite nature of the velocity of light is usually attributed to Rømer. However, a text at the Paris Observatory confirms the minority opinion according to which Cassini was first to propose the ‘successive motion’ of light, while giving a rather correct order of magnitude for the duration of its propagation from the Sun to the Earth. We examine this question, and discuss why, in spite of the criticisms of Halley, Cassini abandoned this hypothesis while leaving Rømer free to publish it.
liner-notes  papers  essay  history  early-modern  europe  the-great-west-whale  giants  the-trenches  mediterranean  nordic  science  innovation  discovery  physics  electromag  space  speed  nibble  org:sci  org:mat
september 2017 by nhaliday
My Old Boss | West Hunter
Back in those days, there was interest in finding better ways to communicate with a submerged submarine.  One method under consideration used an orbiting laser to send pulses of light over the ocean, using a special wavelength, for which there was a very good detector.  Since even the people running the laser might not know the boomer’s exact location, while weather and such might also interfere,  my old boss was trying to figure out methods of reliably transmitting messages when some pulses were randomly lost – which is of course a well-developed subject,  error-correcting codes. But he didn’t know that.  Hadn’t even heard of it.

Around this time, my old boss was flying from LA to Washington, and started talking with his seatmate about this  submarine communication problem.  His seatmate – Irving S. Reed – politely said that he had done a little work on some similar problems.  During this conversation, my informant, a fellow minion sitting behind my old boss, was doggedly choking back hysterical laughter, not wanting to interrupt this very special conversation.
west-hunter  scitariat  stories  reflection  working-stiff  engineering  dirty-hands  electromag  communication  coding-theory  giants  bits  management  signal-noise
september 2017 by nhaliday
newtonian gravity - Newton's original proof of gravitation for non-point-mass objects - Physics Stack Exchange
This theorem is Proposition LXXI, Theorem XXXI in the Principia. To warm up, consider the more straightforward proof of the preceding theorem, that there's no inverse-square force inside of a spherical shell:

picture

The crux of the argument is that the triangles HPI and LPK are similar. The mass enclosed in the small-but-near patch of sphere HI goes like the square of the distance HP, while the mass enclosed in the large-but-far patch of sphere KL, with the same solid angle, goes like the square of the distance KP. This mass ratio cancels out the distance-squared ratio governing the strength of the force, and so the net force from those two patches vanishes.

For a point mass outside a shell, Newton's approach is essentially the same as the modern approach:

picture

One integral is removed because we're considering a thin spherical shell rather than a solid sphere. The second integral, "as the semi-circle AKB revolves about the diameter AB," trivially turns Newton's infinitesimal arcs HI and KL into annuli.

The third integral is over all the annuli in the sphere, over 0 ≤ ϕ ≤ τ/2 or over R − r ≤ s ≤ R + r. This one is a little bit hairy, even with the advantage of modern notation.

Newton's clever trick is to consider the relationship between the force due to the smaller, nearer annulus HI and the larger, farther annulus KL defined by the same viewing angle (in modern notation, dθ). If I understand correctly he argues again, based on lots of similar triangles with infinitesimal angles, that the smaller-but-nearer annulus and the larger-but-farther annulus exert the same force at P. Furthermore, he shows that the force doesn't depend on the distance PF, and thus doesn't depend on the radius of the sphere; the only parameter left is the distance PS (squared) between the particle and the sphere's center. Since the argument doesn't depend on the angle HPS, it's true for all the annuli, and the theorem is proved.
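Newton's annulus decomposition is easy to check numerically: summing the axial force from each ring reproduces the inverse-square attraction of a point mass at the centre for an external particle, and zero net force inside the shell. A rough sketch in units where G = M = R = 1 (my own discretization, not Newton's construction):

```python
import math

def shell_force(d, R=1.0, M=1.0, G=1.0, n=100_000):
    """Axial gravitational force on a unit mass at distance d from the centre
    of a thin uniform spherical shell, summed over Newton-style annuli."""
    total = 0.0
    dtheta = math.pi / n
    for i in range(n):
        theta = (i + 0.5) * dtheta               # midpoint rule over polar angle
        dm = 0.5 * M * math.sin(theta) * dtheta  # mass of the ring at angle theta
        s2 = R * R + d * d - 2 * R * d * math.cos(theta)  # squared distance to the ring
        # by symmetry only the component along the centre axis survives
        total += G * dm * (d - R * math.cos(theta)) / (s2 * math.sqrt(s2))
    return total

print(shell_force(2.0))  # ~0.25 = G*M/d**2, as if all the mass sat at the centre
print(shell_force(0.5))  # ~0.0: no net force anywhere inside the shell
```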
nibble  q-n-a  overflow  giants  old-anglo  the-trenches  physics  mechanics  gravity  proofs  symmetry  geometry  spatial
september 2017 by nhaliday
Why was the Catholic Church so opposed to heliocentrism (for example, in the Renaissance)? Why did they not simply claim that God lived in the Sun, so we go around Him? - Quora
The main reason the Catholic Church opposed the teaching of heliocentrism as a fact was that it was contrary to the science of the time.

Amongst the modern myths about early science is the persistent idea that the opposition to heliocentrism was one of "science" versus "religion". According to this story, early modern astronomers like Copernicus and Galileo "proved" the earth went around the sun and the other scientists of the time agreed. But the Catholic Church clung to a literal interpretation of the Bible and rejected this idea purely out of a fanatical faith, insisting that the earth had to be the centre of the cosmos because man was the pinnacle of all creation. Pretty much everything in this popular story is wrong.
q-n-a  qra  history  medieval  europe  the-great-west-whale  science  the-trenches  discovery  giants  mediterranean  religion  christianity  protestant-catholic  theos  being-right  physics  mechanics  space  iron-age  the-classics  censorship
september 2017 by nhaliday
GALILEO'S STUDIES OF PROJECTILE MOTION
During the Renaissance, the focus, especially in the arts, was on representing as accurately as possible the real world whether on a 2 dimensional surface or a solid such as marble or granite. This required two things. The first was new methods for drawing or painting, e.g., perspective. The second, relevant to this topic, was careful observation.

With the spread of cannon in warfare, the study of projectile motion had taken on greater importance, and now, with more careful observation and more accurate representation, came the realization that projectiles did not move the way Aristotle and his followers had said they did: the path of a projectile did not consist of two consecutive straight line components but was instead a smooth curve. [1]

Now someone needed to come up with a method to determine if there was a special curve a projectile followed. But measuring the path of a projectile was not easy.

Using an inclined plane, Galileo had performed experiments on uniformly accelerated motion, and he now used the same apparatus to study projectile motion. He placed an inclined plane on a table and provided it with a curved piece at the bottom which deflected an inked bronze ball into a horizontal direction. The ball thus accelerated rolled over the table-top with uniform motion and then fell off the edge of the table. Where it hit the floor, it left a small mark. The mark allowed the horizontal and vertical distances traveled by the ball to be measured. [2]

By varying the ball's horizontal velocity and vertical drop, Galileo was able to determine that the path of a projectile is parabolic.
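The conclusion follows from composing the two motions: x = vt horizontally and y = gt²/2 vertically give y = (g/2v²)x², a parabola. A minimal check (g and v here are arbitrary illustrative values):

```python
g, v = 9.8, 2.0  # arbitrary values: gravitational acceleration, horizontal launch speed

ratios = []
for t in [0.1, 0.2, 0.3, 0.4]:
    x = v * t            # uniform horizontal motion
    y = 0.5 * g * t * t  # uniformly accelerated fall
    ratios.append(y / (x * x))

# y/x^2 is the same at every point: the path is the parabola y = (g/(2 v^2)) x^2
print(ratios)  # each entry equals g / (2 * v**2) = 1.225
```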

https://www.scientificamerican.com/author/stillman-drake/

Galileo's Discovery of the Parabolic Trajectory: http://www.jstor.org/stable/24949756

Galileo's Experimental Confirmation of Horizontal Inertia: Unpublished Manuscripts (Galileo Gleanings XXII): https://sci-hub.tw/https://www.jstor.org/stable/229718
- Drake Stillman

MORE THAN A DECADE HAS ELAPSED since Thomas Settle published a classic paper in which Galileo's well-known statements about his experiments on inclined planes were completely vindicated.1 Settle's paper replied to an earlier attempt by Alexandre Koyre to show that Galileo could not have obtained the results he claimed in his Two New Sciences by actual observations using the equipment there described. The practical ineffectiveness of Settle's painstaking repetition of the experiments in altering the opinion of historians of science is only too evident. Koyre's paper was reprinted years later in book form without so much as a note by the editors concerning Settle's refutation of its thesis.2 And the general literature continues to belittle the role of experiment in Galileo's physics.

More recently James MacLachlan has repeated and confirmed a different experiment reported by Galileo-one which has always seemed highly exaggerated and which was also rejected by Koyre with withering sarcasm.3 In this case, however, it was accuracy of observation rather than precision of experimental data that was in question. Until now, nothing has been produced to demonstrate Galileo's skill in the design and the accurate execution of physical experiment in the modern sense.

Part of a page of Galileo's unpublished manuscript notes, written late in 1608, corroborating his inertial assumption and leading directly to his discovery of the parabolic trajectory. (Folio 116v, Vol. 72, MSS Galileiani; courtesy of the Biblioteca Nazionale di Firenze.)

...

(The same skeptical historians, however, believe that to show that Galileo could have used the medieval mean-speed theorem suffices to prove that he did use it, though it is found nowhere in his published or unpublished writings.)

...

Now, it happens that among Galileo's manuscript notes on motion there are many pages that were not published by Favaro, since they contained only calculations or diagrams without attendant propositions or explanations. Some pages that were published had first undergone considerable editing, making it difficult if not impossible to discern their full significance from their printed form. This unpublished material includes at least one group of notes which cannot satisfactorily be accounted for except as representing a series of experiments designed to test a fundamental assumption, which led to a new, important discovery. In these documents precise empirical data are given numerically, comparisons are made with calculated values derived from theory, a source of discrepancy from still another expected result is noted, a new experiment is designed to eliminate this, and further empirical data are recorded. The last-named data, although proving to be beyond Galileo's powers of mathematical analysis at the time, when subjected to modern analysis turn out to be remarkably precise. If this does not represent the experimental process in its fully modern sense, it is hard to imagine what standards historians require to be met.

The discovery of these notes confirms the opinion of earlier historians. They read only Galileo's published works, but did so without a preconceived notion of continuity in the history of ideas. The opinion of our more sophisticated colleagues has its sole support in philosophical interpretations that fit with preconceived views of orderly long-term scientific development. To find manuscript evidence that Galileo was at home in the physics laboratory hardly surprises me. I should find it much more astonishing if, by reasoning alone, working only from fourteenth-century theories and conclusions, he had continued along lines so different from those followed by profound philosophers in earlier centuries. It is to be hoped that, warned by these examples, historians will begin to restore the old cautionary clauses in analogous instances in which scholarly opinions are revised without new evidence, simply to fit historical theories.

In what follows, the newly discovered documents are presented in the context of a hypothetical reconstruction of Galileo's thought.

...

As early as 1590, if we are correct in ascribing Galileo's juvenile De motu to that date, it was his belief that an ideal body resting on an ideal horizontal plane could be set in motion by a force smaller than any previously assigned force, however small. By "horizontal plane" he meant a surface concentric with the earth but which for reasonable distances would be indistinguishable from a level plane. Galileo noted at the time that experiment did not confirm this belief that the body could be set in motion by a vanishingly small force, and he attributed the failure to friction, pressure, the imperfection of material surfaces and spheres, and the departure of level planes from concentricity with the earth.5

It followed from this belief that under ideal conditions the motion so induced would also be perpetual and uniform. Galileo did not mention these consequences until much later, and it is impossible to say just when he perceived them. They are, however, so evident that it is safe to assume that he saw them almost from the start. They constitute a trivial case of the proposition he seems to have been teaching before 1607-that a mover is required to start motion, but that absence of resistance is then sufficient to account for its continuation.6

In mid-1604, following some investigations of motions along circular arcs and motions of pendulums, Galileo hit upon the law that in free fall the times elapsed from rest are as the smaller distance is to the mean proportional between two distances fallen.7 This gave him the times-squared law as well as the rule of odd numbers for successive distances and speeds in free fall. During the next few years he worked out a large number of theorems relating to motion along inclined planes, later published in the Two New Sciences. He also arrived at the rule that the speed terminating free fall from rest was double the speed of the fall itself. These theorems survive in manuscript notes of the period 1604-1609. (Work during these years can be identified with virtual certainty by the watermarks in the paper used, as I have explained elsewhere.8)

In the autumn of 1608, after a summer at Florence, Galileo seems to have interested himself in the question whether the actual slowing of a body moving horizontally followed any particular rule. On folio 117i of the manuscripts just mentioned, the numbers 196, 155, 121, 100 are noted along the horizontal line near the middle of the page (see Fig. 1). I believe that this was the first entry on this leaf, for reasons that will appear later, and that Galileo placed his grooved plane in the level position and recorded distances traversed in equal times along it. Using a metronome, and rolling a light wooden ball about 4 3/4 inches in diameter along a plane with a groove 1 3/4 inches wide, I obtained similar relations over a distance of 6 feet. The figures obtained vary greatly for balls of different materials and weights and for greatly different initial speeds.9 But it suffices for my present purposes that Galileo could have obtained the figures noted by observing the actual deceleration of a ball along a level plane. It should be noted that the watermark on this leaf is like that on folio 116, to which we shall come presently, and it will be seen later that the two sheets are closely connected in time in other ways as well.

The relatively rapid deceleration is obviously related to the contact of ball and groove. Were the ball to roll right off the end of the plane, all resistance to horizontal motion would be virtually removed. If, then, there were any way to have a given ball leave the plane at different speeds of which the ratios were known, Galileo's old idea that horizontal motion would continue uniformly in the absence of resistance could be put to test. His law of free fall made this possible. The ratios of speeds could be controlled by allowing the ball to fall vertically through known heights, at the ends of which it would be deflected horizontally. Falls through given heights … [more]
nibble  org:junk  org:edu  physics  mechanics  gravity  giants  the-trenches  discovery  history  early-modern  europe  mediterranean  the-great-west-whale  frontier  science  empirical  experiment  arms  technology  lived-experience  time  measurement  dirty-hands  iron-age  the-classics  medieval  sequential  wire-guided  error  wiki  reference  people  quantitative-qualitative  multi  pdf  piracy  study  essay  letters  discrete  news  org:mag  org:sci  popsci
august 2017 by nhaliday
Isaac Newton: the first physicist.
[...] More fundamentally, Newton's mathematical approach has become so basic to all of physics that he is generally regarded as _the father of the clockwork universe_: the first, and perhaps the greatest, physicist.

The Alchemist

In fact, Newton was deeply opposed to the mechanistic conception of the world. A secretive alchemist [...]. His written work on the subject ran to more than a million words, far more than he ever produced on calculus or mechanics [21]. Obsessively religious, he spent years correlating biblical prophecy with historical events [319ff]. He became deeply convinced that Christian doctrine had been deliberately corrupted by _the false notion of the trinity_, and developed a vicious contempt for conventional (trinitarian) Christianity and for Roman Catholicism in particular [324]. [...] He believed that God mediated the gravitational force [511](353), and opposed any attempt to give a mechanistic explanation of chemistry or gravity, since that would diminish the role of God [646]. He consequently conceived such _a hatred of Descartes_, on whose foundations so many of his achievements were built, that at times _he refused even to write his name_ [399,401].

The Man

Newton was rigorously puritanical: when one of his few friends told him "a loose story about a nun", he ended their friendship (267). [...] He thought of himself as the sole inventor of the calculus, and hence the greatest mathematician since the ancients, and left behind a huge corpus of unpublished work, mostly alchemy and biblical exegesis, that he believed future generations would appreciate more than his own (199,511).

[...] Even though these unattractive qualities caused him to waste huge amounts of time and energy in ruthless vendettas against colleagues who in many cases had helped him (see below), they also drove him to the extraordinary achievements for which he is still remembered. And for all his arrogance, Newton's own summary of his life (574) was beautifully humble:

"I do not know how I may appear to the world, but to myself I seem to have been only like a boy, playing on the sea-shore, and diverting myself, in now and then finding a smoother pebble or prettier shell than ordinary, whilst the great ocean of truth lay all undiscovered before me."

Before Newton

...

1. Calculus. Descartes, in 1637, pioneered the use of coordinates to turn geometric problems into algebraic ones, a method that Newton was never to accept [399]. Descartes, Fermat, and others investigated methods of calculating the tangents to arbitrary curves [28-30]. Kepler, Cavalieri, and others used infinitesimal slices to calculate volumes and areas enclosed by curves [30], but no unified treatment of these problems had yet been found.
2. Mechanics & Planetary motion. The elliptical orbits of the planets having been established by Kepler, Descartes proposed the idea of a purely mechanical heliocentric universe, following deterministic laws, and with no need of any divine agency [15], another anathema to Newton. _No one imagined, however, that a single law might explain both falling bodies and planetary motion_. Galileo invented the concept of inertia, anticipating Newton's first and second laws of motion (293), and Huygens used it to analyze collisions and circular motion [11]. Again, these pieces of progress had not been synthesized into a general method for analyzing forces and motion.
3. Light. Descartes claimed that light was a pressure wave, Gassendi that it was a stream of particles (corpuscles) [13]. As might be guessed, Newton vigorously supported the corpuscular theory. _White light was universally believed to be the pure form_, and colors were some added property bequeathed to it upon reflection from matter (150). Descartes had discovered the sine law of refraction (94), but it was not known that some colors were refracted more than others. The pattern was the familiar one: many pieces of the puzzle were in place, but the overall picture was still unclear.

The Natural Philosopher

Between 1671 and 1690, Newton was to supply definitive treatments of most of these problems. By assiduous experimentation with prisms he established that colored light was actually fundamental, and that it could be recombined to create white light. He did not publish the result for 6 years, by which time it seemed so obvious to him that he found great difficulty in responding patiently to the many misunderstandings and objections with which it met [239ff].

He invented differential and integral calculus in 1665-6, but failed to publish it. Leibniz invented it independently 10 years later, and published it first [718]. This resulted in a priority dispute which degenerated into a feud characterized by extraordinary dishonesty and venom on both sides (542).

In discovering gravitation, Newton was also _barely ahead of the rest of the pack_. Hooke was the first to realize that orbital motion was produced by a centripetal force (268), and in 1679 _he suggested an inverse square law to Newton_ [387]. Halley and Wren came to the same conclusion, and turned to Newton for a proof, which he duly supplied [402]. Newton did not stop there, however. From 1684 to 1687 he worked continuously on a grand synthesis of the whole of mechanics, the "Philosophiae Naturalis Principia Mathematica," in which he developed his three laws of motion and showed in detail that the universal force of gravitation could explain the fall of an apple as well as the precise motions of planets and comets.

The "Principia" crystallized the new conceptions of force and inertia that had gradually been emerging, and marks the beginning of theoretical physics as the mathematical field that we know today. It is not an easy read: Newton had developed the idea that geometry and equations should never be combined [399], and therefore _refused to use simple analytical techniques in his proofs_, requiring classical geometric constructions instead [428]. He even made his Principia _deliberately abstruse in order to discourage amateurs from feeling qualified to criticize it_ [459].

[...] most of the rest of his life was spent in administrative work as Master of the Mint and as President of the Royal Society, _a position he ruthlessly exploited in the pursuit of vendettas_ against Hooke (300ff,500), Leibniz (510ff), and Flamsteed (490,500), among others. He kept secret his disbelief in Christ's divinity right up until his dying moment, at which point he refused the last rites, at last openly defying the church (576). [...]
org:junk  people  old-anglo  giants  physics  mechanics  gravity  books  religion  christianity  theos  science  the-trenches  britain  history  early-modern  the-great-west-whale  stories  math  math.CA  nibble  discovery
august 2017 by nhaliday
Introduction to Scaling Laws
http://galileo.phys.virginia.edu/classes/304/scaling.pdf

Galileo’s Discovery of Scaling Laws: https://www.mtholyoke.edu/~mpeterso/classes/galileo/scaling8.pdf
Days 1 and 2 of Two New Sciences

An example of such an insight is “the surface of a small solid is comparatively greater than that of a large one” because the surface goes like the square of a linear dimension, but the volume goes like the cube.5 Thus as one scales down macroscopic objects, forces on their surfaces like viscous drag become relatively more important, and bulk forces like weight become relatively less important. Galileo uses this idea on the First Day in the context of resistance in free fall, as an explanation for why similar objects of different size do not fall exactly together, but the smaller one lags behind.
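Galileo's insight is the square-cube law: for a sphere, surface/volume = (4πr²) / ((4/3)πr³) = 3/r, so the ratio grows without bound as the object shrinks. A quick check:

```python
import math

surface_to_volume = {}
for r in [10.0, 1.0, 0.1]:
    area = 4 * math.pi * r**2              # scales as r^2
    volume = (4.0 / 3.0) * math.pi * r**3  # scales as r^3
    surface_to_volume[r] = area / volume   # = 3/r
print(surface_to_volume)  # the ratio grows tenfold each time r shrinks tenfold
```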
nibble  org:junk  exposition  lecture-notes  physics  mechanics  street-fighting  problem-solving  scale  magnitude  estimate  fermi  mental-math  calculation  nitty-gritty  multi  scitariat  org:bleg  lens  tutorial  guide  ground-up  tricki  skeleton  list  cheatsheet  identity  levers  hi-order-bits  yoga  metabuch  pdf  article  essay  history  early-modern  europe  the-great-west-whale  science  the-trenches  discovery  fluid  architecture  oceans  giants  tidbits  elegance
august 2017 by nhaliday
The Function of Reason | Edge.org
https://www.edge.org/conversation/hugo_mercier-the-argumentative-theory

How Social Is Reason?: http://www.overcomingbias.com/2017/08/how-social-is-reason.html

Reading The Enigma of Reason. Pretty good so far. Not incredibly surprising to me so far. To be clear, their argument is somewhat orthogonal to the whole ‘rationality’ debate you may be familiar with from Daniel Kahneman and Amos Tversky’s work (e.g., see Heuristics and Biases).

One of the major problems in analysis is that rationality, reflection and ratiocination, are slow and error prone. To get a sense of that, just read ancient Greek science. Eratosthenes may have calculated to within 1% of the true circumference of the world, but Aristotle’s speculations on the nature of reproduction were rather off.

You may be as clever as Eratosthenes, but most people are not. But you probably accept that the world is round and 24,901 miles around. If you are not American you probably are vague on miles anyway. But you know what the social consensus is, and you accept it because it seems reasonable.

One of the points in cultural evolution work is that a lot of the time rather than relying on your own intuition and or reason, it is far more effective and cognitively cheaper to follow social norms of your ingroup. I only bring this up because unfortunately many pathologies of our political and intellectual world today are not really pathologies. That is, they’re not bugs, but features.

Finished The Enigma of Reason. The basic thesis that reasoning is a way to convince people after you’ve already come to a conclusion, that is, rationalization, was already one I shared. That makes sense since one of the coauthors, Dan Sperber, has been influential in the “naturalistic” school of anthropology. If you’ve read books like In Gods We Trust, The Enigma of Reason goes fast. But it is important to note that the cognitive anthropology perspective is useful in things besides religion. I’m thinking in particular of politics.

https://gnxp.nofe.me/2017/07/30/the-delusion-of-reasons-empire/
My point here is that many of our beliefs are arrived at in an intuitive manner, and we find reasons to justify those beliefs. One of the core insights you’ll get from The Enigma of Reason is that rationalization isn’t that big of a misfire or abuse of our capacities. It’s probably just a natural outcome for what and how we use reason in our natural ecology.

Mercier and Sperber contrast their “interactionist” model of what reason is for with an “intellectualist” model. The intellectualist model is rather straightforward. It is one where individual reasoning capacities exist so that one may make correct inferences about the world around us, often using methods that mimic those in abstract elucidated systems such as formal logic or Bayesian reasoning. When reasoning doesn’t work right, it’s because people aren’t using it for its right reasons. It can be entirely solitary because the tools don’t rely on social input or opinion.

The interactionist model holds that reasoning exists because it is a method of persuasion within social contexts. It is important here to note that the authors do not believe that reasoning is simply a tool for winning debates. That is, increasing your status in a social game. Rather, their overall thesis seems to be in alignment with the idea that cognition of reasoning properly understood is a social process. In this vein they offer evidence of how juries may be superior to judges, and the general examples you find in the “wisdom of the crowds” literature. Overall the authors make a strong case for the importance of diversity of good-faith viewpoints, because they believe that the truth on the whole tends to win out in dialogic formats (that is, if there is a truth; they are rather unclear and muddy about normative disagreements and how those can be resolved).

The major issues tend to crop up when reasoning is used outside of its proper context. One of the literature examples, which you are surely familiar with, in The Enigma of Reason is a psychological experiment where there are two conditions, and the researchers vary the conditions and note wide differences in behavior. In particular, the experiment where psychologists put subjects into a room where someone out of view is screaming for help. When they are alone, they quite often go to see what is wrong immediately. In contrast, when there is a confederate of the psychologists in the room who ignores the screaming, people also tend to ignore the screaming.

The researchers know the cause of the change in behavior: the introduction of the confederate and that person’s behavior. But the subjects, when interviewed, give a wide range of plausible and possible answers. In other words, they are rationalizing their behavior when called to justify it in some way. This is not entirely unexpected; we all know that people are very good at coming up with answers to explain their behavior (often in the best light possible). But that doesn’t mean they truly understand their internal reasons, which seem to be more about intuition.

But much of The Enigma of Reason also recounts how bad people are at coming up with coherent and well-thought-out rationalizations. That is, their “reasons” tend to be ad hoc and weak. We’re not very good at formal logic or even simple syllogistic reasoning. The explanation for this seems to be twofold.

...

At this point we need to address the elephant in the room: some humans seem extremely good at reasoning in a classical sense. I’m talking about individuals such as Blaise Pascal, Carl Friedrich Gauss, and John von Neumann. Early on in The Enigma of Reason the authors point out the power of reason by alluding to Eratosthenes’s calculation of the circumference of the earth, which was only off by one percent. Myself, I would have mentioned Archimedes, who I suspect was a genius on the same level as the ones mentioned above.

Mercier and Sperber state near the end of the book that math in particular is special and a powerful way to reason. We all know this. In math the axioms are clear and agreed upon, and one can inspect the chain of propositions in a very transparent manner. Mathematics has guard-rails for any human who attempts to engage in reasoning. By reducing humans’ ability to commit unforced errors, math is the ideal avenue for solitary individual reasoning. But it is exceptional.

Second, though it is not discussed in The Enigma of Reason, there does seem to be variation in general and domain-specific intelligence within the human population. People who flourish in mathematics usually have high general intelligence, but they also often exhibit a capacity for high levels of visual-spatial conceptualization.

On the whole, the more intelligent you are, the better you are able to reason. But that does not mean that those with high intelligence are immune from the traps of motivated reasoning or faulty logic. Mercier and Sperber give many examples; here are two. Linus Pauling was indisputably brilliant, but by the end of his life he was consistently pushing Vitamin C quackery (in part through a very selective interpretation of the scientific literature).* They also point out that much of Isaac Newton’s prodigious intellectual output turns out to have been focused on alchemy and esoteric exegesis which is totally impenetrable. Newton undoubtedly had a first-class mind, but if the domain it was applied to was garbage, then the output was also garbage.

...

Overall, the take-homes are:

Reasoning exists to persuade in a group context through dialogue, not individual ratiocination.
Reasoning can give rise to storytelling when prompted, even if the reasons have no relationship to the underlying causality.
Motivated reasoning emerges because we are not skeptical of the reasons we proffer, but highly skeptical of reasons which refute our own.
The “wisdom of the crowds” is not just a curious phenomenon, but one of the primary reasons that humans have become more socially complex and our brains have grown larger.
Ultimately, if you want to argue someone out of their beliefs…well, good luck with that. But you should read The Enigma of Reason to understand the best strategies (many of them are common sense, and I’ve come to them independently simply through 15 years of having to engage with people of diverse viewpoints).

* R. A. Fisher, who was one of the pioneers of both evolutionary genetics and statistics, famously did not believe there was a connection between smoking and cancer. He himself smoked a pipe regularly.

** From what we know about Blaise Pascal and Isaac Newton, their personalities were such that they’d probably be killed or expelled from a hunter-gatherer band.
books  summary  psychology  social-psych  cog-psych  anthropology  rationality  biases  epistemic  thinking  neurons  realness  truth  info-dynamics  language  speaking  persuasion  dark-arts  impro  roots  ideas  speculation  hypocrisy  intelligence  eden  philosophy  multi  review  critique  ratty  hanson  org:edge  video  interview  communication  insight  impetus  hidden-motives  X-not-about-Y  signaling  🤖  metameta  metabuch  dennett  meta:rhetoric  gnxp  scitariat  open-things  giants  fisher  old-anglo  history  iron-age  mediterranean  the-classics  reason  religion  theos  noble-lie  intuition  instinct  farmers-and-foragers  egalitarianism-hierarchy  early-modern  britain  europe  gallic  hari-seldon  theory-of-mind  parallax  darwinian  evolution  telos-atelos  intricacy  evopsych  chart  traces
august 2017 by nhaliday
Man's Future Birthright: Essays on Science and Humanity by H. J. Muller. - Reviewed by Theodosius Dobzhansky
Hermann J. Muller (1890-1967) was not only a great geneticist but a visionary full of messianic zeal, profoundly concerned about directing the evolutionary course of mankind toward what he believed a better future.
pdf  essay  article  books  review  expert  genetics  dysgenics  science-anxiety  giants  mutation  genetic-load  enhancement  🌞  values  sanctity-degradation  morality  expert-experience
july 2017 by nhaliday
Lanchester's laws - Wikipedia
Lanchester's laws are mathematical formulae for calculating the relative strengths of a predator–prey pair, originally devised to analyse relative strengths of military forces.
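The aimed-fire (“square law”) case can be sketched as two coupled ODEs, dA/dt = −βB and dB/dt = −αA, which conserve αA² − βB². A minimal forward-Euler illustration (my own toy parameters, not from the article):

```python
# Hedged sketch of Lanchester's square law (aimed fire). The coupled ODEs
# dA/dt = -beta*B, dB/dt = -alpha*A conserve alpha*A^2 - beta*B^2, so
# fighting strength scales with the *square* of numbers.
def lanchester_square(a0, b0, alpha=1.0, beta=1.0, dt=1e-4, steps=200_000):
    """Forward-Euler integration until one side is annihilated."""
    a, b = float(a0), float(b0)
    for _ in range(steps):
        if a <= 0 or b <= 0:
            break
        a, b = a - beta * b * dt, b - alpha * a * dt
    return max(a, 0.0), max(b, 0.0)

# Equal per-unit effectiveness but a 2:1 numerical edge: the larger side
# wins with roughly sqrt(1000**2 - 500**2) ~ 866 survivors.
a_left, b_left = lanchester_square(1000, 500)
```

The square-of-numbers effect is why concentration of force matters: doubling your numbers quadruples your effective strength under aimed fire.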
war  meta:war  models  plots  time  differential  street-fighting  methodology  strategy  tactics  wiki  reference  history  mostly-modern  pre-ww2  world-war  britain  old-anglo  giants  magnitude  arms  identity
june 2017 by nhaliday
::.Václav Havel.:: The Power of the Powerless/Havel's greengrocer
"The Power of the Powerless" (October 1978) was originally written ("quickly," Havel said later) as a discussion piece for a projected joint Polish Czechoslovak volume of essays on the subject of freedom and power. All the participants were to receive Havel's essay, and then respond to it in writing. Twenty participants were chosen on both sides, but only the Czechoslovak side was completed. Meanwhile, in May 1979, some of the Czechoslovak contributors who were also members of VONS (the Committee to Defend the Unjustly Prosecuted), including Havel, were arrested, and it was decided to go ahead and "publish" the Czechoslovak contributions separately.

Havel's essay has had a profound impact on Eastern Europe. Here is what Zbygniew Bujak, a Solidarity activist, told me: "This essay reached us in the Ursus factory in 1979 at a point when we felt we were at the end of the road. Inspired by KOR [the Polish Workers' Defense Committee], we had been speaking on the shop floor, talking to people, participating in public meetings, trying to speak the truth about the factory, the country, and politics. There came a moment when people thought we were crazy. Why were we doing this? Why were we taking such risks? Not seeing any immediate and tangible results, we began to doubt the purposefulness of what we were doing. Shouldn’t we be coming up with other methods, other ways?

"Then came the essay by Havel. Reading it gave us the theoretical underpinnings for our activity. It maintained our spirits; we did not give up, and a year later—in August 1980—it became clear that the party apparatus and the factory management were afraid of us. We mattered. And the rank and file saw us as leaders of the movement. When I look at the victories of Solidarity, and of Charter 77, I see in them an astonishing fulfillment of the prophecies and knowledge contained in Havel's essay."

Translated by Paul Wilson, "The Power of the Powerless" has appeared several times in English, foremost in The Power of the Powerless: Citizens Against the State in Central-Eastern Europe, edited by John Keane, with an Introduction by Steven Lukes (London: Hutchinson, 1985). That volume includes a selection of nine other essays from the original Czech and Slovak collection.

...

THE MANAGER of a fruit-and-vegetable shop places in his window, among the onions and carrots, the slogan: "Workers of the world, unite!" Why does he do it? What is he trying to communicate to the world? Is he genuinely enthusiastic about the idea of unity among the workers of the world? Is his enthusiasm so great that he feels an irrepressible impulse to acquaint the public with his ideals? Has he really given more than a moment's thought to how such a unification might occur and what it would mean?

I think it can safely be assumed that the overwhelming majority of shopkeepers never think about the slogans they put in their windows, nor do they use them to express their real opinions. That poster was delivered to our greengrocer from the enterprise headquarters along with the onions and carrots. He put them all into the window simply because it has been done that way for years, because everyone does it, and because that is the way it has to be. If he were to refuse, there could be trouble. He could be reproached for not having the proper decoration in his window; someone might even accuse him of disloyalty. He does it because these things must be done if one is to get along in life. It is one of the thousands of details that guarantee him a relatively tranquil life "in harmony with society," as they say.

Obviously the greengrocer is indifferent to the semantic content of the slogan on exhibit; he does not put the slogan in his window from any personal desire to acquaint the public with the ideal it expresses. This, of course, does not mean that his action has no motive or significance at all, or that the slogan communicates nothing to anyone. The slogan is really a sign, and as such it contains a subliminal but very definite message. Verbally, it might be expressed this way: "I, the greengrocer XY, live here and I know what I must do. I behave in the manner expected of me. I can be depended upon and am beyond reproach. I am obedient and therefore I have the right to be left in peace." This message, of course, has an addressee: it is directed above, to the greengrocer's superior, and at the same time it is a shield that protects the greengrocer from potential informers. The slogan's real meaning, therefore, is rooted firmly in the greengrocer's existence. It reflects his vital interests. But what are those vital interests?

...

Individuals need not believe all these mystifications, but they must behave as though they did, or they must at least tolerate them in silence, or get along well with those who work with them. For this reason, however, they must live within a lie. They need not accept the lie. It is enough for them to have accepted their life with it and in it. For by this very fact, individuals confirm the system, fulfill the system, make the system, are the system.

Live Not By Lies: http://www.orthodoxytoday.org/articles/SolhenitsynLies.php
- Alexander Solzhenitsyn
We do not exhort ourselves. We have not sufficiently matured to march into the squares and shout the truth out loud or to express aloud what we think. It’s not necessary.

It's dangerous. But let us refuse to say that which we do not think.

This is our path, the easiest and most accessible one, which takes into account our inherent cowardice, already well rooted. And it is much easier—it’s dangerous even to say this—than the sort of civil disobedience which Gandhi advocated.

Our path is to walk away from the gangrenous boundary. If we did not paste together the dead bones and scales of ideology, if we did not sew together the rotting rags, we would be astonished how quickly the lies would be rendered helpless and subside.

That which should be naked would then really appear naked before the whole world.

So in our timidity, let each of us make a choice: Whether consciously, to remain a servant of falsehood—of course, it is not out of inclination, but to feed one's family, that one raises his children in the spirit of lies—or to shrug off the lies and become an honest man worthy of respect both by one's children and contemporaries.

The Kolmogorov option: http://www.scottaaronson.com/blog/?p=3376
As far as I can tell, the answer is simply: because Kolmogorov knew better than to pick fights he couldn’t win. He judged that he could best serve the cause of truth by building up an enclosed little bubble of truth, and protecting that bubble from interference by the Soviet system, and even making the bubble useful to the system wherever he could—rather than futilely struggling to reform the system, and simply making martyrs of himself and all his students for his trouble.

I don't really agree w/ this

http://www.orthodoxytoday.org/articles7/SolzhenitsynWarning.php

http://www.catholicworldreport.com/2015/07/08/revisiting-aleksandr-solzhenitsyns-warnings-to-the-west/
At first regarded as a hero by Americans, he eventually found his popularity waning, thanks in part to his controversial 1978 commencement address at Harvard University.

...

"Without any censorship, in the West fashionable trends of thought and ideas are carefully separated from those which are not fashionable; nothing is forbidden, but what is not fashionable will hardly ever find its way into periodicals or books or be heard in colleges. Legally your researchers are free, but they are conditioned by the fashion of the day. There is no open violence such as in the East; however, a selection dictated by fashion and the need to match mass standards frequently prevents independent-minded people from giving their contribution to public life."

“The press has become the greatest power within the Western countries,” he also insisted, “more powerful than the legislature, the executive and the judiciary. One would then like to ask: by what law has it been elected and to whom is it responsible?”

Our Culture, What’s Left Of It: http://archive.frontpagemag.com/readArticle.aspx?ARTID=7445
FP: You mention how 19th century French aristocrat, the Marquis de Custine, made several profound observations on how border guards in Russia wasted his time pushing their weight around in stupid and pointless ways, and that this is connected to the powerlessness that humans live under authoritarianism. Tell us a bit more of how this dynamic works in Russia.

Dalrymple: With regard to Russia, I am not an expert, but I have an interest in the country. I believe that it is necessary to study 19th century Russian history to understand the modern world. I suspect that the characteristic of Russian authoritarianism precedes the Soviet era (if you read Custine, you will be astonished by how much of what he observed prefigured the Soviet era, which of course multiplied the tendencies a thousand times).

...

FP: You make the shrewd observation of how political correctness engenders evil because of “the violence that it does to people’s souls by forcing them to say or imply what they do not believe, but must not question.” Can you talk about this a bit?

Dalrymple: Political correctness is communist propaganda writ small. In my study of communist societies, I came to the conclusion that the purpose of communist propaganda was not to persuade or convince, nor to inform, but to humiliate; and therefore, the less it corresponded to reality the better. When people are forced to remain silent when they are being told the most obvious lies, or even worse when they are forced to repeat the lies themselves, they lose once and for all their sense of probity. To assent to obvious lies is to co-operate with evil, and in some small way to become evil oneself. One's standing to resist anything is thus eroded, and even destroyed. A society of emasculated liars is easy to control. I think if you examine political correctness, it has the same effect and is … [more]
classic  politics  polisci  history  mostly-modern  eastern-europe  authoritarianism  communism  antidemos  revolution  essay  org:junk  government  power  reflection  clown-world  quotes  lived-experience  nascent-state  truth  info-dynamics  realness  volo-avolo  class-warfare  multi  domestication  courage  humility  virtu  individualism-collectivism  n-factor  academia  giants  cold-war  tcstariat  aaronson  org:bleg  nibble  russia  science  parable  civil-liberty  exit-voice  big-peeps  censorship  media  propaganda  gnon  isteveish  albion  identity-politics  westminster  track-record  interview  wiki  reference  jargon  aphorism  anarcho-tyranny  managerial-state  zeitgeist  rot  path-dependence  paleocon  orwellian  solzhenitsyn  fashun  status  usa  labor  left-wing  organization  intel  capitalism  competition  long-short-run  patience  food  death
june 2017 by nhaliday
Lucio Russo - Wikipedia
In The Forgotten Revolution: How Science Was Born in 300 BC and Why It Had to Be Reborn (Italian: La rivoluzione dimenticata), Russo promotes the belief that Hellenistic science in the period 320-144 BC reached heights not achieved by Classical age science, and proposes that it went further than ordinarily thought, in multiple fields not normally associated with ancient science.

La Rivoluzione Dimenticata (The Forgotten Revolution), Reviewed by Sandro Graffi: http://www.ams.org/notices/199805/review-graffi.pdf

Before turning to the question of the decline of Hellenistic science, I come back to the new light shed by the book on Euclid’s Elements and on pre-Ptolemaic astronomy. Euclid’s definitions of the elementary geometric entities—point, straight line, plane—at the beginning of the Elements have long presented a problem.7 Their nature is in sharp contrast with the approach taken in the rest of the book, and continued by mathematicians ever since, of refraining from defining the fundamental entities explicitly but limiting themselves to postulating the properties which they enjoy. Why should Euclid be so hopelessly obscure right at the beginning and so smooth just after? The answer is: the definitions are not Euclid’s. Toward the beginning of the second century A.D. Heron of Alexandria found it convenient to introduce definitions of the elementary objects (a sign of decadence!) in his commentary on Euclid’s Elements, which had been written at least 400 years before. All manuscripts of the Elements copied ever since included Heron’s definitions without mention, whence their attribution to Euclid himself. The philological evidence leading to this conclusion is quite convincing.8

...

What about the general and steady (on the average) impoverishment of Hellenistic science under the Roman empire? This is a major historical problem, strongly tied to the even bigger one of the decline and fall of the antique civilization itself. I would summarize the author’s argument by saying that it basically represents an application to science of a widely accepted general theory on decadence of antique civilization going back to Max Weber. Roman society, mainly based on slave labor, underwent an ultimately unrecoverable crisis as the traditional sources of that labor force, essentially wars, progressively dried up. To save basic farming, the remaining slaves were promoted to be serfs, and poor free peasants reduced to serfdom, but this made trade disappear. A society in which production is almost entirely based on serfdom and with no trade clearly has very little need of culture, including science and technology. As Max Weber pointed out, when trade vanished, so did the marble splendor of the ancient towns, as well as the spiritual assets that went with it: art, literature, science, and sophisticated commercial laws. The recovery of Hellenistic science then had to wait until the disappearance of serfdom at the end of the Middle Ages. To quote Max Weber: “Only then with renewed vigor did the old giant rise up again.”

...

The epilogue contains the (rather pessimistic) views of the author on the future of science, threatened by the apparent triumph of today’s vogue of irrationality even in leading institutions (e.g., an astrology professorship at the Sorbonne). He looks at today’s ever-increasing tendency to teach science more on a fideistic than on a deductive or experimental basis as the first sign of a decline which could be analogous to the post-Hellenistic one.

Praising Alexandrians to excess: https://sci-hub.tw/10.1088/2058-7058/17/4/35
The Economic Record review: https://sci-hub.tw/10.1111/j.1475-4932.2004.00203.x

listed here: https://pinboard.in/u:nhaliday/b:c5c09f2687c1

Was Roman Science in Decline? (Excerpt from My New Book): https://www.richardcarrier.info/archives/13477
people  trivia  cocktail  history  iron-age  mediterranean  the-classics  speculation  west-hunter  scitariat  knowledge  wiki  ideas  wild-ideas  technology  innovation  contrarianism  multi  pdf  org:mat  books  review  critique  regularizer  todo  piracy  physics  canon  science  the-trenches  the-great-west-whale  broad-econ  the-world-is-just-atoms  frontier  speedometer  🔬  conquest-empire  giants  economics  article  growth-econ  cjones-like  industrial-revolution  empirical  absolute-relative  truth  rot  zeitgeist  gibbon  big-peeps  civilization  malthus  roots  old-anglo  britain  early-modern  medieval  social-structure  limits  quantitative-qualitative  rigor  lens  systematic-ad-hoc  analytical-holistic  cycles  space  mechanics  math  geometry  gravity  revolution  novelty  meta:science  is-ought  flexibility  trends  reason  applicability-prereqs  theory-practice  traces  evidence  psycho-atoms
may 2017 by nhaliday
Victorian naturalists | EVOLVING ECONOMICS
Being a naturalist in the Victorian era was a different exercise to today. From Darwin’s The Descent of Man:

Many kinds of monkeys have a strong taste for tea, coffee, and spiritous liquors: they will also, as I have myself seen, smoke tobacco with pleasure. (6. The same tastes are common to some animals much lower in the scale. Mr. A. Nichols informs me that he kept in Queensland, in Australia, three individuals of the Phaseolarctus cinereus [koalas]; and that, without having been taught in any way, they acquired a strong taste for rum, and for smoking tobacco.) Brehm asserts that the natives of north-eastern Africa catch the wild baboons by exposing vessels with strong beer, by which they are made drunk. He has seen some of these animals, which he kept in confinement, in this state; and he gives a laughable account of their behaviour and strange grimaces. On the following morning they were very cross and dismal; they held their aching heads with both hands, and wore a most pitiable expression: when beer or wine was offered them, they turned away with disgust, but relished the juice of lemons. An American monkey, an Ateles, after getting drunk on brandy, would never touch it again, and thus was wiser than many men. These trifling facts prove how similar the nerves of taste must be in monkeys and man, and how similarly their whole nervous system is affected.
econotariat  broad-econ  commentary  quotes  giants  nature  analogy  comparison  sapiens  ethanol  lol  cocktail  britain  anglo  africa  history  early-modern  stories  frontier  evolution  neuro  optimate  aristos  old-anglo  darwinian  pre-ww2  eden
may 2017 by nhaliday
William Stanley Jevons - Wikipedia
William Stanley Jevons FRS (/ˈdʒɛvənz/;[2] 1 September 1835 – 13 August 1882) was an English economist and logician.

Irving Fisher described Jevons' book A General Mathematical Theory of Political Economy (1862) as the start of the mathematical method in economics.[3] It made the case that economics as a science concerned with quantities is necessarily mathematical.[4] In so doing, it expounded upon the "final" (marginal) utility theory of value. Jevons' work, along with similar discoveries made by Carl Menger in Vienna (1871) and by Léon Walras in Switzerland (1874), marked the opening of a new period in the history of economic thought. Jevons' contribution to the marginal revolution in economics in the late 19th century established his reputation as a leading political economist and logician of the time.

Jevons broke off his studies of the natural sciences in London in 1854 to work as an assayer in Sydney, where he acquired an interest in political economy. Returning to the UK in 1859, he published General Mathematical Theory of Political Economy in 1862, outlining the marginal utility theory of value, and A Serious Fall in the Value of Gold in 1863. For Jevons, the utility or value to a consumer of an additional unit of a product is inversely related to the number of units of that product he already owns, at least beyond some critical quantity.

It was for The Coal Question (1865), in which he called attention to the gradual exhaustion of the UK's coal supplies, that he received public recognition, in which he put forth what is now known as the Jevons paradox, i.e. that increases in energy production efficiency leads to more not less consumption. The most important of his works on logic and scientific methods is his Principles of Science (1874),[5] as well as The Theory of Political Economy (1871) and The State in Relation to Labour (1882). Among his inventions was the logic piano, a mechanical computer.

In economics, the Jevons paradox (/ˈdʒɛvənz/; sometimes the Jevons effect) occurs when technological progress increases the efficiency with which a resource is used (reducing the amount necessary for any one use), but the rate of consumption of that resource rises because of increasing demand.[1] The Jevons paradox is perhaps the most widely known paradox in environmental economics.[2] However, governments and environmentalists generally assume that efficiency gains will lower resource consumption, ignoring the possibility of the paradox arising.[3]
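A toy constant-elasticity demand model (my own illustration; the functional form and coefficients are assumptions, not from the article) shows when the paradox kicks in: with efficiency E, the effective price of the energy service falls to p/E, and total resource use works out to R ∝ E^(ε−1), which rises with efficiency exactly when the demand elasticity ε exceeds 1:

```python
# Hedged toy model of the Jevons paradox, assuming constant-elasticity
# demand Q(p) = k * p**(-eps) for the energy *service* (e.g. lighting).
def resource_use(efficiency, elasticity, price=1.0, k=1.0):
    # Raising efficiency lowers the effective price of the service...
    service_demand = k * (price / efficiency) ** (-elasticity)
    # ...but each unit of service needs 1/efficiency units of resource,
    # so resource use = k * price**(-eps) * efficiency**(eps - 1).
    return service_demand / efficiency

# Inelastic demand (eps < 1): efficiency gains cut resource use.
# Elastic demand (eps > 1): efficiency gains *raise* it -- Jevons's point.
saves = resource_use(2.0, elasticity=0.5) < resource_use(1.0, elasticity=0.5)
backfires = resource_use(2.0, elasticity=1.5) > resource_use(1.0, elasticity=1.5)
```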

The Coal Question: http://www.econlib.org/library/YPDBooks/Jevons/jvnCQ.html
people  big-peeps  history  early-modern  britain  economics  growth-econ  ORFE  industrial-revolution  energy-resources  giants  anglosphere  wiki  nihil  civilization  prepping  old-anglo  biophysical-econ  the-world-is-just-atoms  pre-ww2  multi  stylized-facts  efficiency  technology  org:econlib  books  modernity  volo-avolo  values  formal-values  decision-making  decision-theory
may 2017 by nhaliday
'Capital in the Twenty-First Century' by Thomas Piketty, reviewed | New Republic
by Robert Solow (positive)

The data then exhibit a clear pattern. In France and Great Britain, national capital stood fairly steadily at about seven times national income from 1700 to 1910, then fell sharply from 1910 to 1950, presumably as a result of wars and depression, reaching a low of 2.5 in Britain and a bit less than 3 in France. The capital-income ratio then began to climb in both countries, and reached slightly more than 5 in Britain and slightly less than 6 in France by 2010. The trajectory in the United States was slightly different: it started at just above 3 in 1770, climbed to 5 in 1910, fell slightly in 1920, recovered to a high between 5 and 5.5 in 1930, fell to below 4 in 1950, and was back to 4.5 in 2010.

The wealth-income ratio in the United States has always been lower than in Europe. The main reason in the early years was that land values bulked less in the wide open spaces of North America. There was of course much more land, but it was very cheap. Into the twentieth century and onward, however, the lower capital-income ratio in the United States probably reflects the higher level of productivity: a given amount of capital could support a larger production of output than in Europe. It is no surprise that the two world wars caused much less destruction and dissipation of capital in the United States than in Britain and France. The important observation for Piketty’s argument is that, in all three countries, and elsewhere as well, the wealth-income ratio has been increasing since 1950, and is almost back to nineteenth-century levels. He projects this increase to continue into the current century, with weighty consequences that will be discussed as we go on.

...

Now if you multiply the rate of return on capital by the capital-income ratio, you get the share of capital in the national income. For example, if the rate of return is 5 percent a year and the stock of capital is six years worth of national income, income from capital will be 30 percent of national income, and so income from work will be the remaining 70 percent. At last, after all this preparation, we are beginning to talk about inequality, and in two distinct senses. First, we have arrived at the functional distribution of income—the split between income from work and income from wealth. Second, it is always the case that wealth is more highly concentrated among the rich than income from labor (although recent American history looks rather odd in this respect); and this being so, the larger the share of income from wealth, the more unequal the distribution of income among persons is likely to be. It is this inequality across persons that matters most for good or ill in a society.
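The arithmetic here is Piketty’s accounting identity α = r × β (capital’s share of national income equals the rate of return times the capital/income ratio); a one-liner confirms the review’s worked example:

```python
# Piketty's accounting identity as stated in the review:
# capital share alpha = rate of return r * capital/income ratio beta.
def capital_share(r, beta):
    return r * beta

alpha = capital_share(r=0.05, beta=6)  # 5% return, six years of income
labor_share = 1 - alpha                # the remaining share goes to work
```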

...

The data are complicated and not easily comparable across time and space, but here is the flavor of Piketty’s summary picture. Capital is indeed very unequally distributed. Currently in the United States, the top 10 percent own about 70 percent of all the capital, half of that belonging to the top 1 percent; the next 40 percent—who compose the “middle class”—own about a quarter of the total (much of that in the form of housing), and the remaining half of the population owns next to nothing, about 5 percent of total wealth. Even that amount of middle-class property ownership is a new phenomenon in history. The typical European country is a little more egalitarian: the top 1 percent own 25 percent of the total capital, and the middle class 35 percent. (A century ago the European middle class owned essentially no wealth at all.) If the ownership of wealth in fact becomes even more concentrated during the rest of the twenty-first century, the outlook is pretty bleak unless you have a taste for oligarchy.

Income from wealth is probably even more concentrated than wealth itself because, as Piketty notes, large blocks of wealth tend to earn a higher return than small ones. Some of this advantage comes from economies of scale, but more may come from the fact that very big investors have access to a wider range of investment opportunities than smaller investors. Income from work is naturally less concentrated than income from wealth. In Piketty’s stylized picture of the United States today, the top 1 percent earns about 12 percent of all labor income, the next 9 percent earn 23 percent, the middle class gets about 40 percent, and the bottom half about a quarter of income from work. Europe is not very different: the top 10 percent collect somewhat less and the other two groups a little more.

You get the picture: modern capitalism is an unequal society, and the rich-get-richer dynamic strongly suggest that it will get more so. But there is one more loose end to tie up, already hinted at, and it has to do with the advent of very high wage incomes. First, here are some facts about the composition of top incomes. About 60 percent of the income of the top 1 percent in the United States today is labor income. Only when you get to the top tenth of 1 percent does income from capital start to predominate. The income of the top hundredth of 1 percent is 70 percent from capital. The story for France is not very different, though the proportion of labor income is a bit higher at every level. Evidently there are some very high wage incomes, as if you didn’t know.

This is a fairly recent development. In the 1960s, the top 1 percent of wage earners collected a little more than 5 percent of all wage incomes. This fraction has risen pretty steadily until nowadays, when the top 1 percent of wage earners receive 10–12 percent of all wages. This time the story is rather different in France. There the share of total wages going to the top percentile was steady at 6 percent until very recently, when it climbed to 7 percent. The recent surge of extreme inequality at the top of the wage distribution may be primarily an American development. Piketty, who with Emmanuel Saez has made a careful study of high-income tax returns in the United States, attributes this to the rise of what he calls “supermanagers.” The very highest income class consists to a substantial extent of top executives of large corporations, with very rich compensation packages. (A disproportionate number of these, but by no means all of them, come from the financial services industry.) With or without stock options, these large pay packages get converted to wealth and future income from wealth. But the fact remains that much of the increased income (and wealth) inequality in the United States is driven by the rise of these supermanagers.

and Deirdre McCloskey (pretty critical): https://ejpe.org/journal/article/view/170
nice discussion of empirical economics, economic history, market failures and statism, etc., with several bon mots

Piketty’s great splash will undoubtedly bring many young economically interested scholars to devote their lives to the study of the past. That is good, because economic history is one of the few scientifically quantitative branches of economics. In economic history, as in experimental economics and a few other fields, the economists confront the evidence (as they do not for example in most macroeconomics or industrial organization or international trade theory nowadays).

...

Piketty gives a fine example of how to do it. He does not get entangled as so many economists do in the sole empirical tool they are taught, namely, regression analysis on someone else’s “data” (one of the problems is the word data, meaning “things given”: scientists should deal in capta, “things seized”). Therefore he does not commit one of the two sins of modern economics, the use of meaningless “tests” of statistical significance (he occasionally refers to “statistically insignificant” relations between, say, tax rates and growth rates, but I am hoping he does not suppose that a large coefficient is “insignificant” because R. A. Fisher in 1925 said it was). Piketty constructs or uses statistics of aggregate capital and of inequality and then plots them out for inspection, which is what physicists, for example, also do in dealing with their experiments and observations. Nor does he commit the other sin, which is to waste scientific time on existence theorems. Physicists, again, don’t. If we economists are going to persist in physics envy let us at least learn what physicists actually do. Piketty stays close to the facts, and does not, for example, wander into the pointless worlds of non-cooperative game theory, long demolished by experimental economics. He also does not have recourse to non-computable general equilibrium, which never was of use for quantitative economic science, being a branch of philosophy, and a futile one at that. On both points, bravissimo.

...

Since those founding geniuses of classical economics, a market-tested betterment (a locution to be preferred to “capitalism”, with its erroneous implication that capital accumulation, not innovation, is what made us better off) has enormously enriched large parts of a humanity now seven times larger in population than in 1800, and bids fair in the next fifty years or so to enrich everyone on the planet. [Not SSA or MENA...]

...

Then economists, many on the left but some on the right, in quick succession from 1880 to the present—at the same time that market-tested betterment was driving real wages up and up and up—commenced worrying about, to name a few of the pessimisms concerning “capitalism” they discerned: greed, alienation, racial impurity, workers’ lack of bargaining strength, workers’ bad taste in consumption, immigration of lesser breeds, monopoly, unemployment, business cycles, increasing returns, externalities, under-consumption, monopolistic competition, separation of ownership from control, lack of planning, post-War stagnation, investment spillovers, unbalanced growth, dual labor markets, capital insufficiency (William Easterly calls it “capital fundamentalism”), peasant irrationality, capital-market imperfections, public … [more]
news  org:mag  big-peeps  econotariat  economics  books  review  capital  capitalism  inequality  winner-take-all  piketty  wealth  class  labor  mobility  redistribution  growth-econ  rent-seeking  history  mostly-modern  trends  compensation  article  malaise  🎩  the-bones  whiggish-hegelian  cjones-like  multi  mokyr-allen-mccloskey  expert  market-failure  government  broad-econ  cliometrics  aphorism  lens  gallic  clarity  europe  critique  rant  optimism  regularizer  pessimism  ideology  behavioral-econ  authoritarianism  intervention  polanyi-marx  politics  left-wing  absolute-relative  regression-to-mean  legacy  empirical  data-science  econometrics  methodology  hypothesis-testing  physics  iron-age  mediterranean  the-classics  quotes  krugman  world  entrepreneurialism  human-capital  education  supply-demand  plots  manifolds  intersection  markets  evolution  darwinian  giants  old-anglo  egalitarianism-hierarchy  optimate  morality  ethics  envy  stagnation  nl-and-so-can-you  expert-experience  courage  stats  randy-ayndy  reason  intersection-connectedness  detail-architect
april 2017 by nhaliday
The Ionian Mission | West Hunter
I have had famous people ask me how the Ionian Greeks became so smart (in Classical times, natch). In Classical times, the Greeks – particularly the Ionian Greeks – gave everybody this impression – in everyday experience, and certainly in terms of production of outstanding intellects. Everybody thought so. Nobody said this about the Persians – and nobody said it about the Jews, who never said it about themselves.

It’s an interesting question: perhaps there was some process analogous to that which we have proposed as an explanation for the high intelligence of the Ashkenazi Jews. Or maybe something else happened – a different selective process, or maybe it was all cultural. It’s hard to know – the Greek Dark Ages, the long period of illiteracy after the fall of Mycenaean civilization, is poorly understood, certainly by me.

Suppose that your biological IQ capacity (in favorable conditions) is set by a few hundred or thousand SNPs, and that we have identified those SNPs. With luck, we might find enough skeletons with intact DNA to see if the Ionian Greeks really were smarter than the average bear, and how that changed over time.
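The model implied here is a simple additive polygenic score: sum, over the identified SNPs, of effect-allele count times effect size. A minimal sketch (all effect sizes and genotypes below are invented for illustration, not real GWAS results):

```python
import random

random.seed(0)

# Hypothetical additive polygenic score over a panel of trait-associated
# SNPs. Every number here is made up for illustration.
n_snps = 1000
effects = [random.gauss(0, 0.1) for _ in range(n_snps)]  # per-allele effect sizes

def polygenic_score(genotype):
    """genotype[i] = number of effect alleles (0, 1, or 2) at SNP i."""
    return sum(g * b for g, b in zip(genotype, effects))

# Score a simulated ancient-DNA sample against a simulated modern one;
# comparing score distributions across samples is the idea in the passage.
ancient = [random.choice([0, 1, 2]) for _ in range(n_snps)]
modern = [random.choice([0, 1, 2]) for _ in range(n_snps)]
print(polygenic_score(ancient), polygenic_score(modern))
```

In practice one would compare the distribution of such scores in ancient samples against modern reference populations, with all the usual caveats about ancient-DNA quality and portability of effect sizes across populations.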

More generally, we could see if civilization boosted or decreased IQ, in various situations. This could be a big part of the historical process – civilizations falling because average competence has dropped, science being born because the population is now ready for it…

I think we’ll be ready to try this in a year or two. The biggest problems will be political, since this approach would also predict results in existing populations – although that would probably not be very interesting, since we already know all those results.

The Ancient Greeks Weren’t All Geniuses: http://www.unz.com/akarlin/ancient-greeks-not-geniuses/
west-hunter  scitariat  history  iron-age  mediterranean  the-classics  iq  pop-diff  aDNA  civilization  leviathan  GWAS  genetics  biodet  behavioral-gen  anthropology  sapiens  speculation  discussion  recent-selection  archaeology  virtu  cycles  oscillation  broad-econ  microfoundations  multi  gnon  commentary  quotes  galton  giants  old-anglo  psychometrics  malthus  health  embodied  kinship  parasites-microbiome  china  asia  innovation  frontier  demographics  environmental-effects  alien-character  🌞  debate
april 2017 by nhaliday
Interview Greg Cochran by Future Strategist
https://westhunt.wordpress.com/2016/08/10/interview/

- IQ enhancement (somewhat apprehensive, wonder why?)
- ~20 years to CRISPR enhancement (very ballpark)
- cloning as an alternative strategy
- environmental effects on IQ, what matters (iodine, getting hit in the head), what doesn't (schools, etc.), and toss-ups (childhood/embryonic near-starvation, disease besides direct CNS-affecting ones [!])
- malnutrition did cause more schizophrenia in Netherlands (WW2) and China (Great Leap Forward) though
- story about New Mexico schools and his children (mostly grad students in physics now)
- clever sillies, weird geniuses, and clueless elites
- life-extension and accidents, half-life ~ a few hundred years for a typical American
- Pinker on Harvard faculty adoptions (always Chinese girls)
- parabiosis, organ harvesting
- Chicago economics talk
- Catholic Church, cousin marriage, and the rise of the West
- Gregory Clark and Farewell to Alms
- retinoblastoma cancer, mutational load, and how to deal w/ it ("something will turn up")
- Tularemia and Stalingrad (ex-Soviet scientist literally mentioned his father doing it)
- germ warfare, nuclear weapons, and testing each
- poison gas, Haber, nerve gas, terrorists, Japan, Syria, and Turkey
- nukes at https://en.wikipedia.org/wiki/Incirlik_Air_Base
- IQ of ancient Greeks
- history of China and the Mongols, cloning Genghis Khan
- Alexander the Great vs. Napoleon, Russian army being late for meetup w/ Austrians
- the reason why to go into Iraq: to find and clone Genghis Khan!
- efficacy of torture
- monogamy, polygamy, and infidelity, the Aboriginal system (reverse aging wives)
- education and twin studies
- errors: passing white, female infanticide, interdisciplinary social science/economic imperialism, the slavery and salt story
- Jewish optimism about environmental interventions, Rabbi didn't want people to know, Israelis don't want people to know about group differences between Ashkenazim and other groups in Israel
- NASA spewing crap on extraterrestrial life (eg, thermodynamic gradient too weak for life in oceans of ice moons)
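The "half-life ~ a few hundred years" point above is simple exponential-decay arithmetic: if aging were eliminated, survival would be governed by the roughly constant death rate from external causes, so half-life = ln(2) / rate. A sketch, where the ~0.2%/year rate is an assumed round figure for all external causes (accidents, violence, etc.), not a sourced statistic:

```python
import math

# With aging eliminated, annual death risk is roughly constant, so
# survival decays exponentially: S(t) = exp(-r * t).
r = 0.002  # assumed annual external-cause death rate (~0.2%/year)

half_life = math.log(2) / r
print(f"half-life: {half_life:.0f} years")  # ~347 years
```

A rate of 0.2%/year gives roughly 350 years, consistent with the "few hundred years" ballpark in the interview notes.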
west-hunter  interview  audio  podcast  being-right  error  bounded-cognition  history  mostly-modern  giants  autism  physics  von-neumann  math  longevity  enhancement  safety  government  leadership  elite  scitariat  econotariat  cracker-econ  big-picture  judaism  iq  recent-selection  🌞  spearhead  gregory-clark  2016  space  xenobio  equilibrium  phys-energy  thermo  no-go  🔬  disease  gene-flow  population-genetics  gedanken  genetics  evolution  dysgenics  assortative-mating  aaronson  CRISPR  biodet  variance-components  environmental-effects  natural-experiment  stories  europe  germanic  psychology  cog-psych  psychiatry  china  asia  prediction  frontier  genetic-load  realness  time  aging  pinker  academia  medicine  economics  chicago  social-science  kinship  tribalism  religion  christianity  protestant-catholic  the-great-west-whale  divergence  roots  britain  agriculture  farmers-and-foragers  time-preference  cancer  society  civilization  russia  arms  parasites-microbiome  epidemiology  nuclear  biotech  deterrence  meta:war  terrorism  iraq-syria  MENA  foreign-poli
march 2017 by nhaliday
George Green (mathematician) - Wikipedia
It is unclear to historians exactly where Green obtained information on current developments in mathematics, as Nottingham had little in the way of intellectual resources. What is even more mysterious is that Green had used "the Mathematical Analysis," a form of calculus derived from Leibniz that was virtually unheard of, or even actively discouraged, in England at the time (due to Leibniz being a contemporary of Newton who had his own methods that were championed in England). This form of calculus, and the developments of mathematicians such as Laplace, Lacroix and Poisson were not taught even at Cambridge, let alone Nottingham, and yet Green had not only heard of these developments, but also improved upon them.

Only one person educated in mathematics is known to have lived in Nottingham at the time: John Toplis, headmaster of Nottingham High School 1806–1819, a Cambridge graduate and an enthusiast of French mathematics; it is speculated that he was Green's source.
people  history  early-modern  britain  math  science  physics  electromag  stories  frontier  giants  wiki  discovery  old-anglo  pre-ww2  the-trenches  the-great-west-whale  allodium
march 2017 by nhaliday