
nhaliday : distribution   274

Biggest cities in US are losing hundreds of workers every day
A recent Gallup poll found that while 80% of Americans live in urban areas, only 12% said they want to live there. Asked where they would live if they had their choice, the top response was a rural area.
news  org:lite  data  poll  values  distribution  geography  urban-rural  usa  pro-rata  homo-hetero  arbitrage  supply-demand  interface-compatibility  revealed-preference  incentives 
october 2019 by nhaliday
Is there a common method for detecting the convergence of the Gibbs sampler and the expectation-maximization algorithm? - Quora
In practice and theory it is much easier to diagnose convergence in EM (vanilla or variational) than in any MCMC algorithm (including Gibbs sampling).

https://www.quora.com/How-can-you-determine-if-your-Gibbs-sampler-has-converged
There is a special case when you can actually obtain the stationary distribution, and be sure that you did! If your Markov chain has a discrete state space, then take the first time that a state repeats in your chain: if you randomly sample an element between the repeating states (but only including one of the endpoints) you will have a sample from your true distribution.

One can achieve this 'exact MCMC sampling' more generally by using the coupling from the past algorithm (Coupling from the past).
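
A minimal sketch, in Python, of the special-case procedure quoted above (the repeat trick for discrete state spaces, not coupling from the past); the toy `transition` table and the function name are mine, standing in for a real kernel:

```python
import random

def exact_sample(transition, start, rng=None):
    """Run a discrete-state Markov chain until some state repeats, then
    return a uniform draw from the segment between the two visits,
    including only one endpoint -- the procedure quoted above."""
    rng = rng or random.Random()
    path, seen = [start], {start: 0}
    while True:
        nxt = rng.choice(transition[path[-1]])
        if nxt in seen:
            cycle = path[seen[nxt]:]   # path[seen[nxt]] == nxt; second visit excluded
            return rng.choice(cycle)   # uniform over the cycle, one endpoint only
        seen[nxt] = len(path)
        path.append(nxt)

# toy 3-state chain: state -> list of equally likely successor states
chain = {0: [0, 1], 1: [0, 2], 2: [1, 2]}
print(exact_sample(chain, start=0))
```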

Otherwise, there is no rigorous statistical test for convergence. It may be possible to obtain a theoretical bound for the convergence rates: but these are quite difficult to obtain, and quite often too large to be of practical use. For example, even for the simple case of using the Metropolis algorithm for sampling from a two-dimensional uniform distribution, the best convergence rate upper bound achieved, by Persi Diaconis, was something with an astronomical constant factor like 10^300.

In fact, it is fair to say that for most high dimensional problems, we have really no idea whether Gibbs sampling ever comes close to converging, but the best we can do is use some simple diagnostics to detect the most obvious failures.
nibble  q-n-a  qra  acm  stats  probability  limits  convergence  distribution  sampling  markov  monte-carlo  ML-MAP-E  checking  equilibrium  stylized-facts  gelman  levers  mixing  empirical  plots  manifolds  multi  fixed-point  iteration-recursion  heuristic  expert-experience  theory-practice  project 
october 2019 by nhaliday
Extreme inbreeding in a European ancestry sample from the contemporary UK population | Nature Communications
Visscher et al

In most human societies, there are taboos and laws banning mating between first- and second-degree relatives, but actual prevalence and effects on health and fitness are poorly quantified. Here, we leverage a large observational study of ~450,000 participants of European ancestry from the UK Biobank (UKB) to quantify extreme inbreeding (EI) and its consequences. We use genotyped SNPs to detect large runs of homozygosity (ROH) and call EI when >10% of an individual’s genome comprise ROHs. We estimate a prevalence of EI of ~0.03%, i.e., ~1/3652. EI cases have phenotypic means between 0.3 and 0.7 standard deviation below the population mean for 7 traits, including stature and cognitive ability, consistent with inbreeding depression estimated from individuals with low levels of inbreeding.
study  org:nat  bio  genetics  genomics  kinship  britain  pro-rata  distribution  embodied  iq  effect-size  tails  gwern  evidence-based  empirical 
october 2019 by nhaliday
The Effect of High-Tech Clusters on the Productivity of Top Inventors
I use longitudinal data on top inventors based on the universe of US patents 1971 - 2007 to quantify the productivity advantages of Silicon-Valley style clusters and their implications for the overall production of patents in the US. I relate the number of patents produced by an inventor in a year to the size of the local cluster, defined as a city × research field × year. I first study the experience of Rochester NY, whose high-tech cluster declined due to the demise of its main employer, Kodak. Due to the growth of digital photography, Kodak employment collapsed after 1996, resulting in a 49.2% decline in the size of the Rochester high-tech cluster. I test whether the change in cluster size affected the productivity of inventors outside Kodak and the photography sector. I find that between 1996 and 2007 the productivity of non-Kodak inventors in Rochester declined by 20.6% relative to inventors in other cities, conditional on inventor fixed effects. In the second part of the paper, I turn to estimates based on all the data in the sample. I find that when an inventor moves to a larger cluster she experiences significant increases in the number of patents produced and the number of citations received.

...

In a counterfactual scenario where the quality of U.S. inventors is held constant but their geographical location is changed so that all cities have the same number of inventors in each field, inventor productivity would increase in small clusters and decline in large clusters. On net, the overall number of patents produced in the US in a year would be 11.07% smaller.

[ed.: I wonder whether the benefits of less concentration (eg, lower cost of living propping up demographics) are actually smaller than the downsides overall.]
study  economics  growth-econ  innovation  roots  branches  sv  tech  econ-productivity  density  urban-rural  winner-take-all  polarization  top-n  pro-rata  distribution  usa  longitudinal  intellectual-property  northeast  natural-experiment  population  endogenous-exogenous  intervention  counterfactual  cost-benefit 
september 2019 by nhaliday
Measures of cultural distance - Marginal REVOLUTION
A new paper with many authors — most prominently Joseph Henrich — tries to measure the cultural gaps between different countries.  I am reproducing a few of their results (see pp.36-37 for more), noting that higher numbers represent higher gaps:

...

Overall the numbers show much greater cultural distance of other nations from China than from the United States, a significant and under-discussed problem for China. For instance, the United States is about as culturally close to Hong Kong as China is.

[ed.: Japan is closer to the US than China. Interesting. I'd like to see some data based on something other than self-reported values though.]

the study:
Beyond WEIRD Psychology: Measuring and Mapping Scales of Cultural and Psychological Distance: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3259613
We present a new tool that provides a means to measure the psychological and cultural distance between two societies and create a distance scale with any population as the point of comparison. Since psychological data is dominated by samples drawn from the United States or other WEIRD nations, this tool provides a “WEIRD scale” to assist researchers in systematically extending the existing database of psychological phenomena to more diverse and globally representative samples. As the extreme WEIRDness of the literature begins to dissolve, the tool will become more useful for designing, planning, and justifying a wide range of comparative psychological projects. We have made our code available and developed an online application for creating other scales (including the “Sino scale” also presented in this paper). We discuss regional diversity within nations showing the relative homogeneity of the United States. Finally, we use these scales to predict various psychological outcomes.
econotariat  marginal-rev  henrich  commentary  study  summary  list  data  measure  metrics  similarity  culture  cultural-dynamics  sociology  things  world  usa  anglo  anglosphere  china  asia  japan  sinosphere  russia  developing-world  canada  latin-america  MENA  europe  eastern-europe  germanic  comparison  great-powers  thucydides  foreign-policy  the-great-west-whale  generalization  anthropology  within-group  homo-hetero  moments  exploratory  phalanges  the-bones  🎩  🌞  broad-econ  cocktail  n-factor  measurement  expectancy  distribution  self-report  values  expression-survival  uniqueness 
september 2019 by nhaliday
Laurence Tratt: What Challenges and Trade-Offs do Optimising Compilers Face?
Summary
It's important to be realistic: most people don't care about program performance most of the time. Modern computers are so fast that most programs run fast enough even with very slow language implementations. In that sense, I agree with Daniel's premise: optimising compilers are often unimportant. But “often” is often unsatisfying, as it is here. Users find themselves transitioning from not caring at all about performance to suddenly really caring, often in the space of a single day.

This, to me, is where optimising compilers come into their own: they mean that even fewer people need to care about program performance. And I don't mean that they get us from, say, 98 to 99 people out of 100 not needing to care: it's probably more like going from 80 to 99 people out of 100 not needing to care. This is, I suspect, more significant than it seems: it means that many people can go through an entire career without worrying about performance. Martin Berger reminded me of A. N. Whitehead’s wonderful line that “civilization advances by extending the number of important operations which we can perform without thinking about them” and this seems a classic example of that at work. Even better, optimising compilers are widely tested and thus generally much more reliable than the equivalent optimisations performed manually.

But I think that those of us who work on optimising compilers need to be honest with ourselves, and with users, about what performance improvement one can expect to see on a typical program. We have a tendency to pick the maximum possible improvement and talk about it as if it's the mean, when there's often a huge difference between the two. There are many good reasons for that gap, and I hope in this blog post I've at least made you think about some of the challenges and trade-offs that optimising compilers are subject to.

[1]
Most readers will be familiar with Knuth’s quip that “premature optimisation is the root of all evil.” However, I doubt that any of us have any real idea what proportion of time is spent in the average part of the average program. In such cases, I tend to assume that Pareto’s principle won't be too far wrong (i.e. that 80% of execution time is spent in 20% of code). In 1971 a study by Knuth and others of Fortran programs found that 50% of execution time was spent in 4% of code. I don't know of modern equivalents of this study, and for them to be truly useful, they'd have to be rather big. If anyone knows of something along these lines, please let me know!
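
One way to check the 80/20 (or 50/4) claim on your own program: a rough stdlib-only sketch, where `workload` is a stand-in for the program you actually care about and the profile data comes from `pstats`' internal stats dict (function -> (calls, prim. calls, self-time, cum. time, callers)):

```python
import cProfile
import pstats

def workload():
    # stand-in for the real program under study
    sum(i * i for i in range(10**6))

prof = cProfile.Profile()
prof.runcall(workload)
st = pstats.Stats(prof)

# self-time is index 2 of each stats entry
times = sorted((entry[2] for entry in st.stats.values()), reverse=True)
total = sum(times) or 1.0
hottest = times[: max(1, len(times) // 5)]  # hottest 20% of functions
print(f"top 20% of functions account for {sum(hottest) / total:.0%} of self-time")
```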
techtariat  programming  compilers  performance  tradeoffs  cost-benefit  engineering  yak-shaving  pareto  plt  c(pp)  rust  golang  trivia  data  objektbuch  street-fighting  estimate  distribution  pro-rata 
july 2019 by nhaliday
The Existential Risk of Math Errors - Gwern.net
How big is this upper bound? Mathematicians have often made errors in proofs. But it’s rarer for ideas to be accepted for a long time and then rejected. But we can divide errors into 2 basic cases corresponding to type I and type II errors:

1. Mistakes where the theorem is still true, but the proof was incorrect (type I)
2. Mistakes where the theorem was false, and the proof was also necessarily incorrect (type II)

Before someone comes up with a final answer, a mathematician may have many levels of intuition in formulating & working on the problem, but we’ll consider the final end-product where the mathematician feels satisfied that he has solved it. Case 1 is perhaps the most common case, with innumerable examples; this is sometimes due to mistakes in the proof that anyone would accept is a mistake, but many of these cases are due to changing standards of proof. For example, when David Hilbert discovered errors in Euclid’s proofs which no one noticed before, the theorems were still true, and the gaps more due to Hilbert being a modern mathematician thinking in terms of formal systems (which of course Euclid did not think in). (David Hilbert himself turns out to be a useful example of the other kind of error: his famous list of 23 problems was accompanied by definite opinions on the outcome of each problem and sometimes timings, several of which were wrong or questionable.) Similarly, early calculus used ‘infinitesimals’ which were sometimes treated as being 0 and sometimes treated as an indefinitely small non-zero number; this was incoherent and strictly speaking, practically all of the calculus results were wrong because they relied on an incoherent concept - but of course the results were some of the greatest mathematical work ever conducted and when later mathematicians put calculus on a more rigorous footing, they immediately re-derived those results (sometimes with important qualifications), and doubtless as modern math evolves other fields have sometimes needed to go back and clean up the foundations and will in the future.

...

Isaac Newton, incidentally, gave two proofs of the same solution to a problem in probability, one via enumeration and the other more abstract; the enumeration was correct, but the other proof totally wrong and this was not noticed for a long time, leading Stigler to remark:

...

TYPE I > TYPE II?
“Lefschetz was a purely intuitive mathematician. It was said of him that he had never given a completely correct proof, but had never made a wrong guess either.”
- Gian-Carlo Rota

Case 2 is disturbing, since it is a case in which we wind up with false beliefs and also false beliefs about our beliefs (we no longer know that we don’t know). Case 2 could lead to extinction.

...

Except, errors do not seem to be evenly & randomly distributed between case 1 and case 2. There seem to be far more case 1s than case 2s, as already mentioned in the early calculus example: far more than 50% of the early calculus results were correct when checked more rigorously. Richard Hamming attributes to Ralph Boas a comment that while editing Mathematical Reviews that “of the new results in the papers reviewed most are true but the corresponding proofs are perhaps half the time plain wrong”.

...

Gian-Carlo Rota gives us an example with Hilbert:

...

Olga labored for three years; it turned out that all mistakes could be corrected without any major changes in the statement of the theorems. There was one exception, a paper Hilbert wrote in his old age, which could not be fixed; it was a purported proof of the continuum hypothesis, you will find it in a volume of the Mathematische Annalen of the early thirties.

...

Leslie Lamport advocates for machine-checked proofs and a more rigorous style of proofs similar to natural deduction, noting a mathematician acquaintance guesses at a broad error rate of 1/3 and that he routinely found mistakes in his own proofs and, worse, believed false conjectures.

[more on these "structured proofs":
https://academia.stackexchange.com/questions/52435/does-anyone-actually-publish-structured-proofs
https://mathoverflow.net/questions/35727/community-experiences-writing-lamports-structured-proofs
]

We can probably add software to that list: early software engineering work found that, dismayingly, bug rates seem to be simply a function of lines of code, and one would expect diseconomies of scale. So one would expect that in going from the ~4,000 lines of code of the Microsoft DOS operating system kernel to the ~50,000,000 lines of code in Windows Server 2003 (with full systems of applications and libraries being even larger: the comprehensive Debian repository in 2007 contained ~323,551,126 lines of code) that the number of active bugs at any time would be… fairly large. Mathematical software is hopefully better, but practitioners still run into issues (eg Durán et al 2014, Fonseca et al 2017) and I don’t know of any research pinning down how buggy key mathematical systems like Mathematica are or how much published mathematics may be erroneous due to bugs. This general problem led to predictions of doom and spurred much research into automated proof-checking, static analysis, and functional languages.

[related:
https://mathoverflow.net/questions/11517/computer-algebra-errors
I don't know any interesting bugs in symbolic algebra packages but I know a true, enlightening and entertaining story about something that looked like a bug but wasn't.

Define sinc(x) = (sin x)/x.

Someone found the following result in an algebra package: ∫_0^∞ sinc(x) dx = π/2
They then found the following results:

...

So of course when they got:

∫_0^∞ sinc(x) sinc(x/3) sinc(x/5) ⋯ sinc(x/15) dx = (467807924713440738696537864469/935615849440640907310521750000) π

hmm:
Which means that nobody knows Fourier analysis nowadays. Very sad and discouraging story... – fedja Jan 29 '10 at 18:47
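
[The Fourier-analysis fact fedja is alluding to: these are the Borwein integrals, which equal exactly π/2 as long as 1/3 + 1/5 + ⋯ + 1/(2k+1) stays at most 1, and the sum first crosses 1 precisely when the 1/15 factor is added. A quick check with exact rationals:

```python
from fractions import Fraction

s = Fraction(0)
for d in range(3, 17, 2):
    s += Fraction(1, d)
    status = "sum < 1: integral is exactly pi/2" if s < 1 else "sum > 1: integral falls just short"
    print(f"1/3 + ... + 1/{d} = {float(s):.5f}  ({status})")
```

The sum through 1/13 is ≈0.95514; adding 1/15 pushes it to ≈1.02181, which is why the x/15 product above is the first to deviate from π/2 — not a bug.]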

--

Because the most popular systems are all commercial, they tend to guard their bug database rather closely -- making them public would seriously cut their sales. For example, for the open source project Sage (which is quite young), you can get a list of all the known bugs from this page. 1582 known issues on Feb.16th 2010 (which includes feature requests, problems with documentation, etc).

That is an order of magnitude less than the commercial systems. And it's not because it is better, it is because it is younger and smaller. It might be better, but until SAGE does a lot of analysis (about 40% of CAS bugs are there) and a fancy user interface (another 40%), it is too hard to compare.

I once ran a graduate course whose core topic was studying the fundamental disconnect between the algebraic nature of CAS and the analytic nature of what it is mostly used for. There are issues of logic -- CASes work more or less in an intensional logic, while most of analysis is stated in a purely extensional fashion. There is no well-defined 'denotational semantics' for expressions-as-functions, which strongly contributes to the deeper bugs in CASes.]

...

Should such widely-believed conjectures as P≠NP or the Riemann hypothesis turn out to be false, then because they are assumed by so many existing proofs, a far larger math holocaust would ensue - and our previous estimates of error rates will turn out to have been substantial underestimates. But it may be a cloud with a silver lining, if it doesn’t come at a time of danger.

https://mathoverflow.net/questions/338607/why-doesnt-mathematics-collapse-down-even-though-humans-quite-often-make-mista

more on formal methods in programming:
https://www.quantamagazine.org/formal-verification-creates-hacker-proof-code-20160920/
https://intelligence.org/2014/03/02/bob-constable/

https://softwareengineering.stackexchange.com/questions/375342/what-are-the-barriers-that-prevent-widespread-adoption-of-formal-methods
Update: measured effort
In the October 2018 issue of Communications of the ACM there is an interesting article about Formally verified software in the real world with some estimates of the effort.

Interestingly (based on OS development for military equipment), it seems that producing formally proved software requires 3.3 times more effort than with traditional engineering techniques. So it's really costly.

On the other hand, it requires 2.3 times less effort to get high security software this way than with traditionally engineered software if you add the effort to make such software certified at a high security level (EAL 7). So if you have high reliability or security requirements there is definitely a business case for going formal.

WHY DON'T PEOPLE USE FORMAL METHODS?: https://www.hillelwayne.com/post/why-dont-people-use-formal-methods/
You can see examples of how all of these look at Let’s Prove Leftpad. HOL4 and Isabelle are good examples of “independent theorem” specs, SPARK and Dafny have “embedded assertion” specs, and Coq and Agda have “dependent type” specs.

If you squint a bit it looks like these three forms of code spec map to the three main domains of automated correctness checking: tests, contracts, and types. This is not a coincidence. Correctness is a spectrum, and formal verification is one extreme of that spectrum. As we reduce the rigour (and effort) of our verification we get simpler and narrower checks, whether that means limiting the explored state space, using weaker types, or pushing verification to the runtime. Any means of total specification then becomes a means of partial specification, and vice versa: many consider Cleanroom a formal verification technique, which primarily works by pushing code review far beyond what’s humanly possible.

...

The question, then: “is 90/95/99% correct significantly cheaper than 100% correct?” The answer is very yes. We all are comfortable saying that a codebase we’ve well-tested and well-typed is mostly correct modulo a few fixes in prod, and we’re even writing more than four lines of code a day. In fact, the vast… [more]
ratty  gwern  analysis  essay  realness  truth  correctness  reason  philosophy  math  proofs  formal-methods  cs  programming  engineering  worse-is-better/the-right-thing  intuition  giants  old-anglo  error  street-fighting  heuristic  zooming  risk  threat-modeling  software  lens  logic  inference  physics  differential  geometry  estimate  distribution  robust  speculation  nonlinearity  cost-benefit  convexity-curvature  measure  scale  trivia  cocktail  history  early-modern  europe  math.CA  rigor  news  org:mag  org:sci  miri-cfar  pdf  thesis  comparison  examples  org:junk  q-n-a  stackex  pragmatic  tradeoffs  cracker-prog  techtariat  invariance  DSL  chart  ecosystem  grokkability  heavyweights  CAS  static-dynamic  lower-bounds  complexity  tcs  open-problems  big-surf  ideas  certificates-recognition  proof-systems  PCP  mediterranean  SDP  meta:prediction  epistemic  questions  guessing  distributed  overflow  nibble  soft-question  track-record  big-list  hmm  frontier  state-of-art  move-fast-(and-break-things)  grokkability-clarity  technical-writing  trust 
july 2019 by nhaliday
c++ - Which is faster: Stack allocation or Heap allocation - Stack Overflow
On my machine, using g++ 3.4.4 on Windows, I get "0 clock ticks" for both stack and heap allocation for anything less than 100000 allocations, and even then I get "0 clock ticks" for stack allocation and "15 clock ticks" for heap allocation. When I measure 10,000,000 allocations, stack allocation takes 31 clock ticks and heap allocation takes 1562 clock ticks.

so maybe around 100x difference? what does that work out to in terms of total workload?

hmm:
http://vlsiarch.eecs.harvard.edu/wp-content/uploads/2017/02/asplos17mallacc.pdf
Recent work shows that dynamic memory allocation consumes nearly 7% of all cycles in Google datacenters.

That's not too bad actually. Seems like I shouldn't worry about shifting from heap to stack/globals unless profiling says it's important, particularly for non-oly stuff.

edit: Actually, factor x100 for 7% is pretty high, could increase the constant factor by almost an order of magnitude.

edit: Well actually that's not the right math. 93% + 7%*.01 is not much smaller than 100%
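
Spelling out that corrected arithmetic (Amdahl's law, using the 7% and ~100x figures quoted above):

```python
def amdahl(fraction, speedup):
    """Overall speedup when `fraction` of runtime is sped up by `speedup`."""
    return 1 / ((1 - fraction) + fraction / speedup)

# 7% of cycles in allocation, stack allocation assumed ~100x faster than heap
print(f"{amdahl(0.07, 100):.4f}x overall")  # ~1.0745x: new runtime ~93.07% of old
```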
q-n-a  stackex  programming  c(pp)  systems  memory-management  performance  intricacy  comparison  benchmarks  data  objektbuch  empirical  google  papers  nibble  time  measure  pro-rata  distribution  multi  pdf  oly-programming  computer-memory 
june 2019 by nhaliday
The End of the Editor Wars » Linux Magazine
Moreover, even if you assume a broad margin of error, the polls aren't even close. With all the various text editors available today, Vi and Vim continue to be the choice of over a third of users, while Emacs is well back in the pack, no longer a competitor for the title of most popular text editor.

https://www.quora.com/Are-there-more-Emacs-or-Vim-users
I believe Vim is actually more popular, but it's hard to find any real data on it. The best source I've seen is the annual StackOverflow developer survey where 15.2% of developers used Vim compared to a mere 3.2% for Emacs.

Oddly enough, the report noted that "Data scientists and machine learning developers are about 3 times more likely to use Emacs than any other type of developer," which is not necessarily what I would have expected.

[ed. NB: Vim still dominates overall.]

https://pinboard.in/u:nhaliday/b:6adc1b1ef4dc

Time To End The vi/Emacs Debate: https://cacm.acm.org/blogs/blog-cacm/226034-time-to-end-the-vi-emacs-debate/fulltext

Vim, Emacs and their forever war. Does it even matter any more?: https://blog.sourcerer.io/vim-emacs-and-their-forever-war-does-it-even-matter-any-more-697b1322d510
Like an episode of “Silicon Valley”, a discussion of Emacs vs. Vim used to have a polarizing effect that would guarantee a stimulating conversation, regardless of an engineer’s actual alignment. But nowadays, diehard Emacs and Vim users are getting much harder to find. Maybe I’m in the wrong orbit, but looking around today, I see that engineers are equally or even more likely to choose any one of a number of great (for any given definition of ‘great’) modern editors or IDEs such as Sublime Text, Visual Studio Code, Atom, IntelliJ (… or one of its siblings), Brackets, Visual Studio or Xcode, to name a few. It’s not surprising really — many top engineers weren’t even born when these editors were at version 1.0, and GUIs (for better or worse) hadn’t been invented.

...

… both forums have high traffic and up-to-the-minute comment and discussion threads. Some of the available statistics paint a reasonably healthy picture — Stackoverflow’s 2016 developer survey ranks Vim 4th out of 24 with 26.1% of respondents in the development environments category claiming to use it. Emacs came 15th with 5.2%. In combination, over 30% is, actually, quite impressive considering they’ve been around for several decades.

What’s odd, however, is that if you ask someone — say a random developer — to express a preference, the likelihood is that they will favor one or the other even if they have used neither in anger. Maybe the meme has spread so widely that all responses are now predominantly ritualistic, and represent something more fundamental than peoples’ mere preference for an editor? There’s a rather obvious political hypothesis waiting to be made — that Emacs is the leftist, socialist, centralized state, while Vim represents the right and the free market, specialization and capitalism red in tooth and claw.

How is Emacs/Vim used in companies like Google, Facebook, or Quora? Are there any libraries or tools they share in public?: https://www.quora.com/How-is-Emacs-Vim-used-in-companies-like-Google-Facebook-or-Quora-Are-there-any-libraries-or-tools-they-share-in-public
In Google there's a fair amount of vim and emacs. I would say at least every other engineer uses one or another.

Among Software Engineers, emacs seems to be more popular, about 2:1. Among Site Reliability Engineers, vim is more popular, about 9:1.
--
People use both at Facebook, with (in my opinion) slightly better tooling for Emacs than Vim. We share a master.emacs and master.vimrc file, which contains the bare essentials (like syntactic highlighting for the Hack language). We also share a Ctags file that's updated nightly with a cron script.

Beyond the essentials, there's a group for Emacs users at Facebook that provides tips, tricks, and major-modes created by people at Facebook. That's where Adam Hupp first developed his excellent mural-mode (ahupp/mural), which does for Ctags what ido did for file finding and buffer switching.
--
For emacs, it was very informal at Google. There wasn't a huge community of Emacs users at Google, so there wasn't much more than a wiki and a couple language styles matching Google's style guides.

https://trends.google.com/trends/explore?date=all&geo=US&q=%2Fm%2F07zh7,%2Fm%2F01yp0m

https://www.quora.com/Why-is-interest-in-Emacs-dropping
And it is still that. It’s just that emacs is no longer unique, and neither is Lisp.

Dynamically typed scripting languages with garbage collection are a dime a dozen now. Anybody in their right mind developing an extensible text editor today would just use python, ruby, lua, or JavaScript as the extension language and get all the power of Lisp combined with vibrant user communities and millions of lines of ready-made libraries that Stallman and Steele could only dream of in the 70s.

In fact, in many ways emacs and elisp have fallen behind: 40 years after Lambda, the Ultimate Imperative, elisp is still dynamically scoped, and it still doesn’t support multithreading — when I try to use dired to list the files on a slow NFS mount, the entire editor hangs just as thoroughly as it might have in the 1980s. And when I say “doesn’t support multithreading,” I don’t mean there is some other clever trick for continuing to do work while waiting on a system call, like asynchronous callbacks or something. There’s start-process which forks a whole new process, and that’s about it. It’s a concurrency model straight out of 1980s UNIX land.

But being essentially just a decent text editor has robbed emacs of much of its competitive advantage. In a world where every developer tool is scriptable with languages and libraries an order of magnitude more powerful than cranky old elisp, the reason to use emacs is not that it lets a programmer hit a button and evaluate the current expression interactively (which must have been absolutely amazing at one point in the past).

https://www.reddit.com/r/emacs/comments/bh5kk7/why_do_many_new_users_still_prefer_vim_over_emacs/

more general comparison, not just popularity:
Differences between Emacs and Vim: https://stackoverflow.com/questions/1430164/differences-between-Emacs-and-vim

https://www.reddit.com/r/emacs/comments/9hen7z/what_are_the_benefits_of_emacs_over_vim/

https://unix.stackexchange.com/questions/986/what-are-the-pros-and-cons-of-vim-and-emacs

https://www.quora.com/Why-is-Vim-the-programmers-favorite-editor
- Adrien Lucas Ecoffet,

Because it is hard to use. Really.

However, the second part of this sentence applies to just about every good editor out there: if you really learn Sublime Text, you will become super productive. If you really learn Emacs, you will become super productive. If you really learn Visual Studio… you get the idea.

Here’s the thing though, you never actually need to really learn your text editor… Unless you use vim.

...

For many people new to programming, this is the first time they have been a power user of… well, anything! And because they’ve been told how great Vim is, many of them will keep at it and actually become productive, not because Vim is particularly more productive than any other editor, but because it didn’t provide them with a way to not be productive.

They then go on to tell their friends how great Vim is, and their friends go on to become power users and tell their friends in turn, and so forth. All these people believe they became productive because they changed their text editor. Little do they realize that they became productive because their text editor changed them[1].

This is in no way a criticism of Vim. I myself was a beneficiary of such a phenomenon when I learned to type using the Dvorak layout: at that time, I believed that Dvorak would help you type faster. Now I realize the evidence is mixed and that Dvorak might not be much better than Qwerty. However, learning Dvorak forced me to develop good typing habits because I could no longer rely on looking at my keyboard (since I was still using a Qwerty physical keyboard), and this has made me a much more productive typist.

Technical Interview Performance by Editor/OS/Language: https://triplebyte.com/blog/technical-interview-performance-by-editor-os-language
[ed.: I'm guessing this is confounded to all hell.]

The #1 most common editor we see used in interviews is Sublime Text, with Vim close behind.

Emacs represents a fairly small market share today at just about a quarter the userbase of Vim in our interviews. This nicely matches the 4:1 ratio of Google Search Trends for the two editors.

...

Vim takes the prize here, but PyCharm and Emacs are close behind. We’ve found that users of these editors tend to pass our interview at an above-average rate.

On the other end of the spectrum is Eclipse: it appears that someone using either Vim or Emacs is more than twice as likely to pass our technical interview as an Eclipse user.

...

In this case, we find that the average Ruby, Swift, and C# users tend to be stronger, with Python and Javascript in the middle of the pack.

...

Here’s what happens after we select engineers to work with and send them to onsites:

[Python does best.]

There are no wild outliers here, but let’s look at the C++ segment. While C++ programmers have the most challenging time passing Triplebyte’s technical interview on average, the ones we choose to work with tend to have a relatively easier time getting offers at each onsite.

The Rise of Microsoft Visual Studio Code: https://triplebyte.com/blog/editor-report-the-rise-of-visual-studio-code
This chart shows the rates at which each editor's users pass our interview compared to the mean pass rate for all candidates. First, notice the preeminence of Emacs and Vim! Engineers who use these editors pass our interview at significantly higher rates than other engineers. And the effect size is not small. Emacs users pass our interview at a rate 50… [more]
news  linux  oss  tech  editors  devtools  tools  comparison  ranking  flux-stasis  trends  ubiquity  unix  increase-decrease  multi  q-n-a  qra  data  poll  stackex  sv  facebook  google  integration-extension  org:med  politics  stereotypes  coalitions  decentralized  left-wing  right-wing  chart  scale  time-series  distribution  top-n  list  discussion  ide  parsimony  intricacy  cost-benefit  tradeoffs  confounding  analysis  crosstab  pls  python  c(pp)  jvm  microsoft  golang  hmm  correlation  debate  critique  quora  contrarianism  ecosystem  DSL  techtariat  org:com  org:nat  cs 
june 2019 by nhaliday
Lindy effect - Wikipedia
The Lindy effect is a theory that the future life expectancy of some non-perishable things like a technology or an idea is proportional to their current age, so that every additional period of survival implies a longer remaining life expectancy.[1] Where the Lindy effect applies, mortality rate decreases with time. In contrast, living creatures and mechanical things follow a bathtub curve where, after "childhood", the mortality rate increases with time. Because life expectancy is probabilistically derived, a thing may become extinct before its "expected" survival. In other words, one needs to gauge both the age and "health" of the thing to determine continued survival.
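
[One standard way to make "remaining life expectancy proportional to age" precise (a derivation added here, not from the article): writing S(t) = P(T > t) for the survival function, the Lindy property forces a Pareto power-law tail.

```latex
E[T - t \mid T > t] = \beta t
\;\Longrightarrow\;
\int_t^\infty S(u)\,du = \beta t\, S(t)
\;\Longrightarrow\;
\frac{S'(t)}{S(t)} = -\frac{1 + 1/\beta}{t}
\;\Longrightarrow\;
S(t) = \Bigl(\frac{t_0}{t}\Bigr)^{1 + 1/\beta}, \quad t \ge t_0
```

Differentiating the middle identity gives the logarithmic derivative; a larger β (longer expected remaining life per unit of age) means a heavier tail, i.e. slower decay.]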
wiki  reference  concept  metabuch  ideas  street-fighting  planning  comparison  time  distribution  flux-stasis  history  measure  correlation  arrows  branches  pro-rata  manifolds  aging  stylized-facts  age-generation  robust  technology  thinking  cost-benefit  conceptual-vocab  methodology  threat-modeling  efficiency  neurons  tools  track-record  ubiquity 
june 2019 by nhaliday
Analysis of Current and Future Computer Science Needs via Advertised Faculty Searches for 2019 - CRN
Differences are also seen when analyzing results based on the type of institution. Positions related to Security have the highest percentages for all but top-100 institutions. The area of Artificial Intelligence/Data Mining/Machine Learning is of most interest for top-100 PhD institutions. Roughly 35% of positions for PhD institutions are in data-oriented areas. The results show a strong interest in data-oriented areas by public PhD and private PhD, MS, and BS institutions while public MS and BS institutions are most interested in Security.
org:edu  data  analysis  visualization  trends  recruiting  jobs  career  planning  academia  higher-ed  cs  tcs  machine-learning  systems  pro-rata  measure  long-term  🎓  uncertainty  progression  grad-school  phd  distribution  ranking  top-n  security  status  s-factor  comparison  homo-hetero  correlation  org:ngo  white-paper  cost-benefit 
june 2019 by nhaliday
Reconsidering epistemological scepticism – Dividuals
I blogged before about how I consider an epistemological scepticism fully compatible with being conservative/reactionary. By epistemological scepticism I mean the worldview where concepts, categories, names, classes aren’t considered real, just useful ways to categorize phenomena, but entirely mental constructs, basically just tools. I think you can call this nominalism as well. The nominalism-realism debate was certainly about this. What follows is the pro-empirical worldview where logic and reasoning are considered highly fallible: hence you don’t think and don’t argue too much, you actually look and check things instead. You rely on experience, not reasoning.

...

Anyhow, the argument is that there are classes, which are indeed artificial, and there are kinds, which are products of natural forces, products of causality.

...

And the deeper – Darwinian – argument, unspoken but obvious, is that any being with a model of reality that does not conform to such real clumps, gets eaten by a grue.

This is impressive. It seems I have to extend my one-variable epistemology to a two-variable epistemology.

My former epistemology was that we generally categorize things according to their uses or dangers for us. So “chair” is – very roughly – defined as “anything we can sit on”. Similarly, we can categorize “predator” as “something that eats us or the animals that are useful for us”.

The unspoken argument against this is that the universe or the biosphere exists neither for us nor against us. A fox can eat your rabbits and a lion can eat you, but they don’t exist just for the sake of making your life difficult.

Hence, if you interpret phenomena only from the viewpoint of their uses or dangers for humans, you get only half the picture right. The other half is what it really is and where it came from.

Copying is everything: https://dividuals.wordpress.com/2015/12/14/copying-is-everything/
Philosophy professor Ruth Millikan’s insight that everything that gets copied from an ancestor has a proper function or teleofunction: it is whatever feature or function that made it and its ancestor selected for copying, in competition with all the other similar copiable things. This would mean Aristotelean teleology is correct within the field of copyable things, replicators, i.e. within biology, although in physics still obviously incorrect.

Darwinian Reactionary drew attention to it two years ago and I still don’t understand why it didn’t generate a bigger buzz. It is an extremely important insight.

I mean, this is what we were waiting for, a proper synthesis of science and philosophy, and a proper way to rescue Aristotelean teleology, which leads to such excellent common-sense predictions that intuitively it cannot be very wrong, yet modern philosophy always denied it.

The result from that is the bridging of the fact-value gap and the burying of the naturalistic fallacy: we CAN derive values from facts: a thing is good if it is well suitable for its natural purpose, teleofunction or proper function, which is the purpose it was selected for and copied for, the purpose and the suitability for the purpose that made the ancestors of this thing selected for copying, instead of all the other potential, similar ancestors.

...

What was humankind selected for? I am afraid, the answer is kind of ugly.

Men were selected to compete between groups, to cooperate within groups largely for coordinating for the sake of this competition, and to have a low-key competition inside the groups as well for status and leadership. I am afraid, intelligence is all about organizing elaborate tribal raids: “coalitionary arms races”. The most civilized, least brutal but still expensive case is arms races in prestige status, not dominance status: when Ancient Athens built pretty buildings and modern France built the TGV and America sent a man to the Moon in order to gain “gloire” i.e. the prestige type respect and status amongst the nations, the larger groups of mankind. If you are the type who doesn’t like blood, you should probably focus on these kinds of civilized, prestige-project competitions.

Women were selected for bearing children, for having strong and intelligent sons therefore having these heritable traits themselves (HBD kind of contradicts the more radically anti-woman aspects of RedPillery: marry a weak and stupid but attractive silly-blondie type woman and your sons won’t be that great either), for pleasuring men and in some rarer but existing cases, to be true companions and helpers of their husbands.

https://en.wikipedia.org/wiki/Four_causes
- Matter: a change or movement's material cause, is the aspect of the change or movement which is determined by the material that composes the moving or changing things. For a table, that might be wood; for a statue, that might be bronze or marble.
- Form: a change or movement's formal cause, is a change or movement caused by the arrangement, shape or appearance of the thing changing or moving. Aristotle says for example that the ratio 2:1, and number in general, is the cause of the octave.
- Agent: a change or movement's efficient or moving cause, consists of things apart from the thing being changed or moved, which interact so as to be an agency of the change or movement. For example, the efficient cause of a table is a carpenter, or a person working as one, and according to Aristotle the efficient cause of a boy is a father.
- End or purpose: a change or movement's final cause, is that for the sake of which a thing is what it is. For a seed, it might be an adult plant. For a sailboat, it might be sailing. For a ball at the top of a ramp, it might be coming to rest at the bottom.

https://en.wikipedia.org/wiki/Proximate_and_ultimate_causation
A proximate cause is an event which is closest to, or immediately responsible for causing, some observed result. This exists in contrast to a higher-level ultimate cause (or distal cause) which is usually thought of as the "real" reason something occurred.

...

- Ultimate causation explains traits in terms of evolutionary forces acting on them.
- Proximate causation explains biological function in terms of immediate physiological or environmental factors.
gnon  philosophy  ideology  thinking  conceptual-vocab  forms-instances  realness  analytical-holistic  bio  evolution  telos-atelos  distribution  nature  coarse-fine  epistemic  intricacy  is-ought  values  duplication  nihil  the-classics  big-peeps  darwinian  deep-materialism  selection  equilibrium  subjective-objective  models  classification  smoothness  discrete  schelling  optimization  approximation  comparison  multi  peace-violence  war  coalitions  status  s-factor  fashun  reputation  civilization  intelligence  competition  leadership  cooperate-defect  within-without  within-group  group-level  homo-hetero  new-religion  causation  direct-indirect  ends-means  metabuch  physics  axioms  skeleton  wiki  reference  concept  being-becoming  essence-existence  logos  real-nominal 
july 2018 by nhaliday
Complexity no Bar to AI - Gwern.net
Critics of AI risk suggest diminishing returns to computing (formalized asymptotically) means AI will be weak; this argument relies on a large number of questionable premises and ignoring additional resources, constant factors, and nonlinear returns to small intelligence advantages, and is highly unlikely. (computer science, transhumanism, AI, R)
created: 1 June 2014; modified: 01 Feb 2018; status: finished; confidence: likely; importance: 10
ratty  gwern  analysis  faq  ai  risk  speedometer  intelligence  futurism  cs  computation  complexity  tcs  linear-algebra  nonlinearity  convexity-curvature  average-case  adversarial  article  time-complexity  singularity  iteration-recursion  magnitude  multiplicative  lower-bounds  no-go  performance  hardware  humanity  psychology  cog-psych  psychometrics  iq  distribution  moments  complement-substitute  hanson  ems  enhancement  parable  detail-architecture  universalism-particularism  neuro  ai-control  environment  climate-change  threat-modeling  security  theory-practice  hacker  academia  realness  crypto  rigorous-crypto  usa  government 
april 2018 by nhaliday
The first ethical revolution – Gene Expression
Fifty years ago Julian Jaynes published The Origin of Consciousness in the Breakdown of the Bicameral Mind. Seventy years ago Karl Jaspers introduced the concept of the Axial Age. Both point to the same dynamic historically.

Something happened in the centuries around 500 BCE all around the world. Great religions and philosophies arose. The Indian religious traditions, the Chinese philosophical-political ones, and the roots of what we can recognize as Judaism. In Greece, the precursors of many modern philosophical streams emerged formally, along with a variety of political systems.

The next few centuries saw some more innovation. Rabbinical Judaism transformed a ritualistic tribal religion into an ethical one, and Christianity universalized Jewish religious thought, as well as infusing it with Greek systematic concepts. Meanwhile, Indian and Chinese thought continued to evolve, often due to interactions with each other (it is hard to imagine certain later developments in Confucianism without the Buddhist stimulus). Finally, in the 7th century, Islam emerges as the last great world religion.

...

Living in large complex societies with social stratification posed challenges. A religion such as Christianity was not a coincidence, something of its broad outlines may have been inevitable. Universal, portable, ethical, and infused with transcendence and coherency. Similarly, god-kings seem to have universally transformed themselves into the human who binds heaven to earth in some fashion.

The second wave of social-ethical transformation occurred in the early modern period, starting in Europe. My own opinion is that economic growth triggered by innovation and gains in productivity unleashed constraints which had dampened further transformations in the domain of ethics. But the new developments ultimately were simply extensions and modifications on the earlier “source code” (e.g., whereas for nearly two thousand years Christianity had had to make peace with the existence of slavery, in the 19th century anti-slavery activists began marshaling Christian language against the institution).
gnxp  scitariat  discussion  reflection  religion  christianity  theos  judaism  china  asia  sinosphere  orient  india  the-great-west-whale  occident  history  antiquity  iron-age  mediterranean  the-classics  canon  philosophy  morality  ethics  universalism-particularism  systematic-ad-hoc  analytical-holistic  confucian  big-peeps  innovation  stagnation  technology  economics  biotech  enhancement  genetics  bio  flux-stasis  automation  ai  low-hanging  speedometer  time  distribution  smoothness  shift  dennett  simler  volo-avolo  👽  mystic  marginal  farmers-and-foragers  wealth  egalitarianism-hierarchy  values  formal-values  ideology  good-evil 
april 2018 by nhaliday
Ultimate fate of the universe - Wikipedia
The fate of the universe is determined by its density. The preponderance of evidence to date, based on measurements of the rate of expansion and the mass density, favors a universe that will continue to expand indefinitely, resulting in the "Big Freeze" scenario below.[8] However, observations are not conclusive, and alternative models are still possible.[9]

Big Freeze or heat death
Main articles: Future of an expanding universe and Heat death of the universe
The Big Freeze is a scenario under which continued expansion results in a universe that asymptotically approaches absolute zero temperature.[10] This scenario, in combination with the Big Rip scenario, is currently gaining ground as the most important hypothesis.[11] It could, in the absence of dark energy, occur only under a flat or hyperbolic geometry. With a positive cosmological constant, it could also occur in a closed universe. In this scenario, stars are expected to form normally for 10^12 to 10^14 (1–100 trillion) years, but eventually the supply of gas needed for star formation will be exhausted. As existing stars run out of fuel and cease to shine, the universe will slowly and inexorably grow darker. Eventually black holes will dominate the universe, which themselves will disappear over time as they emit Hawking radiation.[12] Over infinite time, there would be a spontaneous entropy decrease by the Poincaré recurrence theorem, thermal fluctuations,[13][14] and the fluctuation theorem.[15][16]

A related scenario is heat death, which states that the universe goes to a state of maximum entropy in which everything is evenly distributed and there are no gradients—which are needed to sustain information processing, one form of which is life. The heat death scenario is compatible with any of the three spatial models, but requires that the universe reach an eventual temperature minimum.[17]
physics  big-picture  world  space  long-short-run  futurism  singularity  wiki  reference  article  nibble  thermo  temperature  entropy-like  order-disorder  death  nihil  bio  complex-systems  cybernetics  increase-decrease  trends  computation  local-global  prediction  time  spatial  spreading  density  distribution  manifolds  geometry  janus 
april 2018 by nhaliday
The Hanson-Yudkowsky AI-Foom Debate - Machine Intelligence Research Institute
How Deviant Recent AI Progress Lumpiness?: http://www.overcomingbias.com/2018/03/how-deviant-recent-ai-progress-lumpiness.html
I seem to disagree with most people working on artificial intelligence (AI) risk. While with them I expect rapid change once AI is powerful enough to replace most all human workers, I expect this change to be spread across the world, not concentrated in one main localized AI system. The efforts of AI risk folks to design AI systems whose values won’t drift might stop global AI value drift if there is just one main AI system. But doing so in a world of many AI systems at similar abilities levels requires strong global governance of AI systems, which is a tall order anytime soon. Their continued focus on preventing single system drift suggests that they expect a single main AI system.

The main reason that I understand to expect relatively local AI progress is if AI progress is unusually lumpy, i.e., arriving in unusually fewer larger packages rather than in the usual many smaller packages. If one AI team finds a big lump, it might jump way ahead of the other teams.

However, we have a vast literature on the lumpiness of research and innovation more generally, which clearly says that usually most of the value in innovation is found in many small innovations. We have also so far seen this in computer science (CS) and AI, even if there have been historical examples where much value was found in particular big innovations, such as nuclear weapons or the origin of humans.

Apparently many people associated with AI risk, including the star machine learning (ML) researchers that they often idolize, find it intuitively plausible that AI and ML progress is exceptionally lumpy. Such researchers often say, “My project is ‘huge’, and will soon do it all!” A decade ago my ex-co-blogger Eliezer Yudkowsky and I argued here on this blog about our differing estimates of AI progress lumpiness. He recently offered Alpha Go Zero as evidence of AI lumpiness:

...

In this post, let me give another example (beyond two big lumps in a row) of what could change my mind. I offer a clear observable indicator, for which data should be available now: deviant citation lumpiness in recent ML research. One standard measure of research impact is citations; bigger lumpier developments gain more citations than smaller ones. And it turns out that the lumpiness of citations is remarkably constant across research fields! See this March 3 paper in Science:

I Still Don’t Get Foom: http://www.overcomingbias.com/2014/07/30855.html
All of which makes it look like I’m the one with the problem; everyone else gets it. Even so, I’m gonna try to explain my problem again, in the hope that someone can explain where I’m going wrong. Here goes.

“Intelligence” just means an ability to do mental/calculation tasks, averaged over many tasks. I’ve always found it plausible that machines will continue to do more kinds of mental tasks better, and eventually be better at pretty much all of them. But what I’ve found it hard to accept is a “local explosion.” This is where a single machine, built by a single project using only a tiny fraction of world resources, goes in a short time (e.g., weeks) from being so weak that it is usually beat by a single human with the usual tools, to so powerful that it easily takes over the entire world. Yes, smarter machines may greatly increase overall economic growth rates, and yes such growth may be uneven. But this degree of unevenness seems implausibly extreme. Let me explain.

If we count by economic value, humans now do most of the mental tasks worth doing. Evolution has given us a brain chock-full of useful well-honed modules. And the fact that most mental tasks require the use of many modules is enough to explain why some of us are smarter than others. (There’d be a common “g” factor in task performance even with independent module variation.) Our modules aren’t that different from those of other primates, but because ours are different enough to allow lots of cultural transmission of innovation, we’ve out-competed other primates handily.

We’ve had computers for over seventy years, and have slowly build up libraries of software modules for them. Like brains, computers do mental tasks by combining modules. An important mental task is software innovation: improving these modules, adding new ones, and finding new ways to combine them. Ideas for new modules are sometimes inspired by the modules we see in our brains. When an innovation team finds an improvement, they usually sell access to it, which gives them resources for new projects, and lets others take advantage of their innovation.

...

In Bostrom’s graph above the line for an initially small project and system has a much higher slope, which means that it becomes in a short time vastly better at software innovation. Better than the entire rest of the world put together. And my key question is: how could it plausibly do that? Since the rest of the world is already trying the best it can to usefully innovate, and to abstract to promote such innovation, what exactly gives one small project such a huge advantage to let it innovate so much faster?

...

In fact, most software innovation seems to be driven by hardware advances, instead of innovator creativity. Apparently, good ideas are available but must usually wait until hardware is cheap enough to support them.

Yes, sometimes architectural choices have wider impacts. But I was an artificial intelligence researcher for nine years, ending twenty years ago, and I never saw an architecture choice make a huge difference, relative to other reasonable architecture choices. For most big systems, overall architecture matters a lot less than getting lots of detail right. Researchers have long wandered the space of architectures, mostly rediscovering variations on what others found before.

Some hope that a small project could be much better at innovation because it specializes in that topic, and much better understands new theoretical insights into the basic nature of innovation or intelligence. But I don’t think those are actually topics where one can usefully specialize much, or where we’ll find much useful new theory. To be much better at learning, the project would instead have to be much better at hundreds of specific kinds of learning. Which is very hard to do in a small project.

What does Bostrom say? Alas, not much. He distinguishes several advantages of digital over human minds, but all software shares those advantages. Bostrom also distinguishes five paths: better software, brain emulation (i.e., ems), biological enhancement of humans, brain-computer interfaces, and better human organizations. He doesn’t think interfaces would work, and sees organizations and better biology as only playing supporting roles.

...

Similarly, while you might imagine someday standing in awe in front of a super intelligence that embodies all the power of a new age, superintelligence just isn’t the sort of thing that one project could invent. As “intelligence” is just the name we give to being better at many mental tasks by using many good mental modules, there’s no one place to improve it. So I can’t see a plausible way one project could increase its intelligence vastly faster than could the rest of the world.

Takeoff speeds: https://sideways-view.com/2018/02/24/takeoff-speeds/
Futurists have argued for years about whether the development of AGI will look more like a breakthrough within a small group (“fast takeoff”), or a continuous acceleration distributed across the broader economy or a large firm (“slow takeoff”).

I currently think a slow takeoff is significantly more likely. This post explains some of my reasoning and why I think it matters. Mostly the post lists arguments I often hear for a fast takeoff and explains why I don’t find them compelling.

(Note: this is not a post about whether an intelligence explosion will occur. That seems very likely to me. Quantitatively I expect it to go along these lines. So e.g. while I disagree with many of the claims and assumptions in Intelligence Explosion Microeconomics, I don’t disagree with the central thesis or with most of the arguments.)
ratty  lesswrong  subculture  miri-cfar  ai  risk  ai-control  futurism  books  debate  hanson  big-yud  prediction  contrarianism  singularity  local-global  speed  speedometer  time  frontier  distribution  smoothness  shift  pdf  economics  track-record  abstraction  analogy  links  wiki  list  evolution  mutation  selection  optimization  search  iteration-recursion  intelligence  metameta  chart  analysis  number  ems  coordination  cooperate-defect  death  values  formal-values  flux-stasis  philosophy  farmers-and-foragers  malthus  scale  studying  innovation  insight  conceptual-vocab  growth-econ  egalitarianism-hierarchy  inequality  authoritarianism  wealth  near-far  rationality  epistemic  biases  cycles  competition  arms  zero-positive-sum  deterrence  war  peace-violence  winner-take-all  technology  moloch  multi  plots  research  science  publishing  humanity  labor  marginal  urban-rural  structure  composition-decomposition  complex-systems  gregory-clark  decentralized  heavy-industry  magnitude  multiplicative  endogenous-exogenous  models  uncertainty  decision-theory  time-prefer 
april 2018 by nhaliday
Who We Are | West Hunter
I’m going to review David Reich’s new book, Who We Are and How We Got Here. Extensively: in a sense I’ve already been doing this for a long time. Probably there will be a podcast. The GoFundMe link is here. You can also send money via Paypal (Use the donate button), or bitcoins to 1Jv4cu1wETM5Xs9unjKbDbCrRF2mrjWXr5. In-kind donations, such as orichalcum or mithril, are always appreciated.

This is the book about the application of ancient DNA to prehistory and history.

height difference between northern and southern europeans: https://westhunt.wordpress.com/2018/03/29/who-we-are-1/
mixing, genocide of males, etc.: https://westhunt.wordpress.com/2018/03/29/who-we-are-2-purity-of-essence/
rapid change in polygenic traits (appearance by Kevin Mitchell and funny jab at Brad Delong ("regmonkey")): https://westhunt.wordpress.com/2018/03/30/rapid-change-in-polygenic-traits/
schiz, bipolar, and IQ: https://westhunt.wordpress.com/2018/03/30/rapid-change-in-polygenic-traits/#comment-105605
Dan Graur being dumb: https://westhunt.wordpress.com/2018/04/02/the-usual-suspects/
prediction of neanderthal mixture and why: https://westhunt.wordpress.com/2018/04/03/who-we-are-3-neanderthals/
New Guineans tried to use Denisovan admixture to avoid UN sanctions (by "not being human"): https://westhunt.wordpress.com/2018/04/04/who-we-are-4-denisovans/
also some commentary on decline of Out-of-Africa, including:
"Homo Naledi, a small-brained homonin identified from recently discovered fossils in South Africa, appears to have hung around way later that you’d expect (up to 200,000 years ago, maybe later) than would be the case if modern humans had occupied that area back then. To be blunt, we would have eaten them."

Live Not By Lies: https://westhunt.wordpress.com/2018/04/08/live-not-by-lies/
Next he slams people who suspect that upcoming genetic analysis will, in most cases, confirm traditional stereotypes about race – the way the world actually looks.

The people Reich dumps on are saying perfectly reasonable things. He criticizes Henry Harpending for saying that he’d never seen an African with a hobby. Of course, Henry had actually spent time in Africa, and that’s what he’d seen. The implication is that people in Malthusian farming societies – which Africa was not – were selected to want to work, even where there was no immediate necessity to do so. Thus hobbies, something like a gerbil running in an exercise wheel.

He criticizes Nicholas Wade for saying that different races have different dispositions. Wade’s book wasn’t very good, but of course personality varies by race: Darwin certainly thought so. You can see differences at birth. Cover a baby’s nose with a cloth: Chinese and Navajo babies quietly breathe through their mouth, European and African babies fuss and fight.

Then he attacks Watson for asking when Reich was going to look at Jewish genetics – the kind that has led to greater-than-average intelligence. Watson was undoubtedly trying to get a rise out of Reich, but it’s a perfectly reasonable question. Ashkenazi Jews are smarter than the average bear and everybody knows it. Selection is the only possible explanation, and the conditions in the Middle Ages – white-collar job specialization and a high degree of endogamy – were just what the doctor ordered.

Watson’s a prick, but he’s a great prick, and what he said was correct. Henry was a prince among men, and Nick Wade is a decent guy as well. Reich is totally out of line here: he’s being a dick.

Now Reich may be trying to burnish his anti-racist credentials, which surely need some renewal after having pointed out that race as colloquially used is pretty reasonable, there’s no reason pops can’t be different, people that said otherwise (like Lewontin, Gould, Montagu, etc.) were lying, Aryans conquered Europe and India, while we’re tied to the train tracks with scary genetic results coming straight at us. I don’t care: he’s being a weasel, slandering the dead and abusing the obnoxious old genius who laid the foundations of his field. Reich will also get old someday: perhaps he too will someday lose track of all the nonsense he’s supposed to say, or just stop caring. Maybe he already has… I’m pretty sure that Reich does not like lying – which is why he wrote this section of the book (not at all logically necessary for his exposition of the ancient DNA work) but the complex juggling of lies and truth required to get past the demented gatekeepers of our society may not be his forte. It has been said that if it was discovered that someone in the business was secretly an android, David Reich would be the prime suspect. No Talleyrand he.

https://westhunt.wordpress.com/2018/04/12/who-we-are-6-the-americas/
The population that accounts for the vast majority of Native American ancestry, which we will call Amerinds, came into existence somewhere in northern Asia. It was formed from a mix of Ancient North Eurasians and a population related to the Han Chinese – about 40% ANE and 60% proto-Chinese. It looks as if most of the paternal ancestry was from the ANE, while almost all of the maternal ancestry was from the proto-Han. [Aryan-Transpacific ?!?] This formation story – ANE boys, East-end girls – is similar to the formation story for the Indo-Europeans.

https://westhunt.wordpress.com/2018/04/18/who-we-are-7-africa/
In some ways, on some questions, learning more from genetics has left us less certain. At this point we really don’t know where anatomically modern humans originated. Greater genetic variety in sub-Saharan Africa has been traditionally considered a sign that AMH originated there, but it is possible that we originated elsewhere, perhaps in North Africa or the Middle East, and gained extra genetic variation when we moved into sub-Saharan Africa and mixed with various archaic groups that already existed. One consideration is that finding recent archaic admixture in a population may well be a sign that modern humans didn’t arise in that region (like language substrates) – which makes South Africa and West Africa look less likely. The long-continued existence of homo naledi in South Africa suggests that modern humans may not have been there for all that long – if we had co-existed with homo naledi, they probably wouldn’t have lasted long. The oldest known skull that is (probably) AMH was recently found in Morocco, while modern human remains, already known from about 100,000 years ago in Israel, have recently been found in northern Saudi Arabia.

Meanwhile, work by Nick Patterson suggests that modern humans were formed by a fusion between two long-isolated populations a bit less than half a million years ago.

So: genomics has made the recent history of Africa pretty clear. Bantu agriculturalists expanded and replaced hunter-gatherers; farmers and herders from the Middle East settled North Africa, Egypt and northeast Africa; while Nilotic herdsmen expanded south from the Sudan. There are traces of earlier patterns and peoples, but today, only traces. As for questions back further in time, such as the origins of modern humans – we thought we knew, and now we know we don’t. But that’s progress.

https://westhunt.wordpress.com/2018/04/18/reichs-journey/
David Reich’s professional path must have shaped his perspective on the social sciences. Look at the record. He starts his professional career examining the role of genetics in the elevated prostate cancer risk seen in African-American men. Various social-science fruitcakes oppose him even looking at the question of ancestry (African vs European). But they were wrong: certain African-origin alleles explain the increased risk. Anthropologists (and human geneticists) were sure (based on nothing) that modern humans hadn’t interbred with Neanderthals – but of course that happened. Anthropologists and archaeologists knew that Gustaf Kossinna couldn’t have been right when he said that widespread material culture corresponded to widespread ethnic groups, and that migration was the primary explanation for changes in the archaeological record – but he was right. They knew that the Indo-European languages just couldn’t have been imposed by fire and sword – but Reich’s work proved them wrong. Lots of people – the usual suspects plus Hindu nationalists – were sure that the AIT (Aryan Invasion Theory) was wrong, but it looks pretty good today.

Some sociologists believed that caste in India was somehow imposed or significantly intensified by the British – but it turns out that most jatis have been almost perfectly endogamous for two thousand years or more…

It may be that Reich doesn’t take these guys too seriously anymore. Why should he?

varnas, jatis, aryan invasion theory: https://westhunt.wordpress.com/2018/04/22/who-we-are-8-india/

europe and EEF+WHG+ANE: https://westhunt.wordpress.com/2018/05/01/who-we-are-9-europe/

https://www.nationalreview.com/2018/03/book-review-david-reich-human-genes-reveal-history/
The massive mixture events that occurred in the recent past to give rise to Europeans and South Asians, to name just two groups, were likely “male mediated.” That’s another way of saying that men on the move took local women as brides or concubines. In the New World there are many examples of this, whether it be among African Americans, where most European ancestry seems to come through men, or in Latin America, where conquistadores famously took local women as paramours. Both of these examples are disquieting, and hint at the deep structural roots of patriarchal inequality and social subjugation that form the backdrop for the emergence of many modern peoples.
west-hunter  scitariat  books  review  sapiens  anthropology  genetics  genomics  history  antiquity  iron-age  world  europe  gavisti  aDNA  multi  politics  culture-war  kumbaya-kult  social-science  academia  truth  westminster  environmental-effects  embodied  pop-diff  nordic  mediterranean  the-great-west-whale  germanic  the-classics  shift  gene-flow  homo-hetero  conquest-empire  morality  diversity  aphorism  migration  migrant-crisis  EU  africa  MENA  gender  selection  speed  time  population-genetics  error  concrete  econotariat  economics  regression  troll  lol  twitter  social  media  street-fighting  methodology  robust  disease  psychiatry  iq  correlation  usa  obesity  dysgenics  education  track-record  people  counterexample  reason  thinking  fisher  giants  old-anglo  scifi-fantasy  higher-ed  being-right  stories  reflection  critique  multiplicative  iteration-recursion  archaics  asia  developing-world  civil-liberty  anglo  oceans  food  death  horror  archaeology  gnxp  news  org:mag  right-wing  age-of-discovery  latin-america  ea 
march 2018 by nhaliday
Antinomia Imediata – experiments in a reaction from the left
https://antinomiaimediata.wordpress.com/lrx/
So, what is the Left Reaction? First of all, it’s reaction: opposition to the modern rationalist establishment, the Cathedral. It opposes the universalist Jacobin program of global government, favoring a fractured geopolitics organized through long-evolved complex systems. It’s profoundly anti-socialist and anti-communist, favoring market economy and individualism. It abhors tribalism and seeks a realistic plan for dismantling it (primarily informed by HBD and HBE). It looks at modernity as a degenerative ratchet, whose only way out is intensification (hence clinging to crypto-marxist market-driven acceleration).

How can any of this still be on the *Left*? It defends equality of power, i.e. freedom. This radical understanding of liberty is deeply rooted in leftist tradition and has been consistently abhorred by the Right. LRx is not democratic, is not socialist, is not progressive and is not even liberal (in its current, American use). But it defends equality of power. Its utopia is individual sovereignty. Its method is paleo-agorism. The anti-hierarchy of hunter-gatherer nomads is its understanding of the only realistic objective of equality.

...

In more cosmic terms, it seeks only to fulfill the Revolution’s side in the left-right intelligence pump: mutation or creation of paths. Proudhon’s antinomy is essentially about this: the collective force of the socius, evinced in moral standards and social organization, vs the creative force of individuals, who constantly revolutionize and disrupt the social body. The interplay of these forces creates reality (it’s a metaphysics indeed): the Absolute (socius) builds so that the (individualistic) Revolution can destroy so that the Absolute may adapt, and then repeat. The good old formula of ‘solve et coagula’.

Ultimately, if the Neoreaction promises eternal hell, the LRx sneers “but Satan is with us”.

https://antinomiaimediata.wordpress.com/2016/12/16/a-statement-of-principles/
Liberty is to be understood as the ability and right of all sentient beings to dispose of their persons and the fruits of their labor, and nothing else, as they see fit. This stems from their self-awareness and their ability to control and choose the content of their actions.

...

Equality is to be understood as the state of no imbalance of power, that is, of no subjection to another sentient being. This stems from their universal ability for empathy, and from their equal ability for reason.

...

It is important to notice that, contrary to usual statements of these two principles, my standpoint is that Liberty and Equality here are not merely compatible, meaning they could coexist in some possible universe, but rather they are two sides of the same coin, complementary and interdependent. There can be NO Liberty where there is no Equality, for the imbalance of power, the state of subjection, will render sentient beings unable to dispose of their persons and the fruits of their labor[1], and it will limit their ability to choose over their rightful jurisdiction. Likewise, there can be NO Equality without Liberty, for restraining sentient beings’ ability to choose and dispose of their persons and fruits of labor will render some more powerful than the rest, and establish a state of subjection.

https://antinomiaimediata.wordpress.com/2017/04/18/flatness/
equality is the founding principle of (and ultimately indistinguishable from) freedom. of course, it’s only in one specific sense of “equality” that this sentence is true.

to try and eliminate the bullshit, let’s turn to networks again:

any node’s degrees of freedom are the number of nodes it is connected to in a network. freedom is maximum when the network is symmetrically connected, i. e., when all nodes are connected to each other and thus there is no topological hierarchy (middlemen) – in other words, flatness.

in this understanding, the maximization of freedom is the maximization of entropy production, that is, of intelligence. As Land puts it:

https://antinomiaimediata.wordpress.com/category/philosophy/mutualism/
gnon  blog  stream  politics  polisci  ideology  philosophy  land  accelerationism  left-wing  right-wing  paradox  egalitarianism-hierarchy  civil-liberty  power  hmm  revolution  analytical-holistic  mutation  selection  individualism-collectivism  tribalism  us-them  modernity  multi  tradeoffs  network-structure  complex-systems  cybernetics  randy-ayndy  insight  contrarianism  metameta  metabuch  characterization  cooperate-defect  n-factor  altruism  list  coordination  graphs  visual-understanding  cartoons  intelligence  entropy-like  thermo  information-theory  order-disorder  decentralized  distribution  degrees-of-freedom  analogy  graph-theory  extrema  evolution  interdisciplinary  bio  differential  geometry  anglosphere  optimate  nascent-state  deep-materialism  new-religion  cool  mystic  the-classics  self-interest  interests  reason  volo-avolo  flux-stasis  invariance  government  markets  paying-rent  cost-benefit  peace-violence  frontier  exit-voice  nl-and-so-can-you  war  track-record  usa  history  mostly-modern  world-war  military  justice  protestant-cathol 
march 2018 by nhaliday
Prisoner's dilemma - Wikipedia
caveat to result below:
An extension of the IPD is an evolutionary stochastic IPD, in which the relative abundance of particular strategies is allowed to change, with more successful strategies relatively increasing. This process may be accomplished by having less successful players imitate the more successful strategies, or by eliminating less successful players from the game, while multiplying the more successful ones. It has been shown that unfair ZD strategies are not evolutionarily stable. The key intuition is that an evolutionarily stable strategy must not only be able to invade another population (which extortionary ZD strategies can do) but must also perform well against other players of the same type (which extortionary ZD players do poorly, because they reduce each other's surplus).[14]

Theory and simulations confirm that beyond a critical population size, ZD extortion loses out in evolutionary competition against more cooperative strategies, and as a result, the average payoff in the population increases when the population is bigger. In addition, there are some cases in which extortioners may even catalyze cooperation by helping to break out of a face-off between uniform defectors and win–stay, lose–switch agents.[8]
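
A concrete check (my own sketch, not from the article): for the standard payoffs T=5, R=3, P=1, S=0, the Press–Dyson extortionate strategy with extortion factor chi=3 is the memory-one cooperation vector p = (11/13, 1/2, 7/26, 0), and it unilaterally enforces s_X - P = chi*(s_Y - P) against any opponent – here, win-stay/lose-switch:

import random

T, R, P, S = 5, 3, 1, 0
CHI = 3  # extortion factor

# Probability the extortioner cooperates, given last round's
# (my_move, opponent_move); from Press & Dyson (2012) with phi = 1/26.
ZD_EXTORT = {('C', 'C'): 11/13, ('C', 'D'): 1/2, ('D', 'C'): 7/26, ('D', 'D'): 0.0}

def payoff(me, opp):
    return {('C', 'C'): R, ('C', 'D'): S, ('D', 'C'): T, ('D', 'D'): P}[(me, opp)]

def wsls(my, opp):
    # win-stay/lose-switch: cooperate iff last round's moves matched
    return 'C' if my == opp else 'D'

def play(rounds=500_000, seed=0):
    rng = random.Random(seed)
    x, y = 'C', 'C'  # assume mutual cooperation in round zero
    sx = sy = 0
    for _ in range(rounds):
        nx = 'C' if rng.random() < ZD_EXTORT[(x, y)] else 'D'
        ny = wsls(y, x)
        x, y = nx, ny
        sx += payoff(x, y)
        sy += payoff(y, x)
    return sx / rounds, sy / rounds

sx, sy = play()
print(f"extortioner {sx:.3f}, WSLS {sy:.3f}")
print(f"s_X - P = {sx - P:.3f} vs chi*(s_Y - P) = {CHI * (sy - P):.3f}  (should match)")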

https://alfanl.com/2018/04/12/defection/
Nature boils down to a few simple concepts.

Haters will point out that I oversimplify. The haters are wrong. I am good at saying a lot with few words. Nature indeed boils down to a few simple concepts.

In life, you can either cooperate or defect.

Used to be that defection was the dominant strategy, say in the time when the Roman empire started to crumble. Everybody complained about everybody and in the end nothing got done. Then came Jesus, who told people to be loving and cooperative, and boom: 1800 years later we get the industrial revolution.

Because of Jesus we now find ourselves in a situation where cooperation is the dominant strategy. A normie engages in a ton of cooperation: with the tax collector who wants more and more of his money, with schools who want more and more of his kid’s time, with media who wants him to repeat more and more party lines, with the Zeitgeist of the Collective Spirit of the People’s Progress Towards a New Utopia. Essentially, our normie is cooperating himself into a crumbling Western empire.

Turns out that if everyone blindly cooperates, parasites sprout up like weeds until defection once again becomes the standard.

The point of a post-Christian religion is to once again create conditions for the kind of cooperation that led to the industrial revolution. This necessitates throwing out undead Christianity: you do not blindly cooperate. You cooperate with people that cooperate with you, you defect on people that defect on you. Christianity mixed with Darwinism. God and Gnon meet.

This also means we re-establish spiritual hierarchy, which, like regular hierarchy, is a prerequisite for cooperation. It is this hierarchical cooperation that turns a household into a force to be reckoned with, that allows a group of men to unite as a front against their enemies, that allows a tribe to conquer the world. Remember: Scientology bullied the Cathedral’s tax department into submission.

With a functioning hierarchy, men still gossip, lie and scheme, but they will do so in whispers behind closed doors. In your face they cooperate and contribute to the group’s wellbeing, because incentives are such that contributing to group wellbeing heightens status.

Without a functioning hierarchy, men gossip, lie and scheme, but they do so in your face, and they tell you that you are positively deluded for accusing them of gossiping, lying and scheming. Seeds will not sprout in such ground.

Spiritual dominance is established in the same way any sort of dominance is established: fought for, taken. But the fight is ritualistic. You can’t force spiritual dominance if no one listens, or, if you are silenced, the ritual is not allowed to happen.

If one of our priests is forbidden from establishing spiritual dominance, that is a sure sign an enemy priest is in better control and has a vested interest in preventing you from establishing spiritual dominance.

They defect on you, you defect on them. Let them suffer the consequences of enemy priesthood, among others characterized by the annoying tendency that very little is said with very many words.

https://contingentnotarbitrary.com/2018/04/14/rederiving-christianity/
To recap, we started with a secular definition of Logos and noted that its telos is existence. Given human nature, game theory and the power of cooperation, the highest expression of that telos is freely chosen universal love, tempered by constant vigilance against defection while maintaining compassion for the defectors and forgiving those who repent. In addition, we must know the telos in order to fulfill it.

In Christian terms, looks like we got over half of the Ten Commandments (know Logos for the First, don’t defect or tempt yourself to defect for the rest), the importance of free will, the indestructibility of evil (group cooperation vs individual defection), loving the sinner and hating the sin (with defection as the sin), forgiveness (with conditions), and love and compassion toward all, assuming only secular knowledge and that it’s good to exist.

Iterated Prisoner's Dilemma is an Ultimatum Game: http://infoproc.blogspot.com/2012/07/iterated-prisoners-dilemma-is-ultimatum.html
The history of IPD shows that bounded cognition prevented the dominant strategies from being discovered for over 60 years, despite significant attention from game theorists, computer scientists, economists, evolutionary biologists, etc. Press and Dyson have shown that IPD is effectively an ultimatum game, which is very different from the Tit for Tat stories told by generations of people who worked on IPD (Axelrod, Dawkins, etc., etc.).

...

For evolutionary biologists: Dyson clearly thinks this result has implications for multilevel selection (group vs individual):
... Cooperation loses and defection wins. The ZD strategies confirm this conclusion and make it sharper. ... The system evolved to give cooperative tribes an advantage over non-cooperative tribes, using punishment to give cooperation an evolutionary advantage within the tribe. This double selection of tribes and individuals goes way beyond the Prisoners' Dilemma model.

implications for fractionalized Europe vis-a-vis unified China?

and more broadly does this just imply we're doomed in the long run RE: cooperation, morality, the "good society", so on...? war and group-selection is the only way to get a non-crab bucket civilization?

Iterated Prisoner’s Dilemma contains strategies that dominate any evolutionary opponent:
http://www.pnas.org/content/109/26/10409.full
http://www.pnas.org/content/109/26/10409.full.pdf
https://www.edge.org/conversation/william_h_press-freeman_dyson-on-iterated-prisoners-dilemma-contains-strategies-that

https://en.wikipedia.org/wiki/Ultimatum_game

analogy for ultimatum game: the state gives the demos a bargain take-it-or-leave-it, and...if the demos refuses...violence?

The nature of human altruism: http://sci-hub.tw/https://www.nature.com/articles/nature02043
- Ernst Fehr & Urs Fischbacher

Some of the most fundamental questions concerning our evolutionary origins, our social relations, and the organization of society are centred around issues of altruism and selfishness. Experimental evidence indicates that human altruism is a powerful force and is unique in the animal world. However, there is much individual heterogeneity and the interaction between altruists and selfish individuals is vital to human cooperation. Depending on the environment, a minority of altruists can force a majority of selfish individuals to cooperate or, conversely, a few egoists can induce a large number of altruists to defect. Current gene-based evolutionary theories cannot explain important patterns of human altruism, pointing towards the importance of both theories of cultural evolution as well as gene–culture co-evolution.

...

Why are humans so unusual among animals in this respect? We propose that quantitatively, and probably even qualitatively, unique patterns of human altruism provide the answer to this question. Human altruism goes far beyond that which has been observed in the animal world. Among animals, fitness-reducing acts that confer fitness benefits on other individuals are largely restricted to kin groups; despite several decades of research, evidence for reciprocal altruism in pair-wise repeated encounters [4,5] remains scarce [6–8]. Likewise, there is little evidence so far that individual reputation building affects cooperation in animals, which contrasts strongly with what we find in humans. If we randomly pick two human strangers from a modern society and give them the chance to engage in repeated anonymous exchanges in a laboratory experiment, there is a high probability that reciprocally altruistic behaviour will emerge spontaneously [9,10].

However, human altruism extends far beyond reciprocal altruism and reputation-based cooperation, taking the form of strong reciprocity [11,12]. Strong reciprocity is a combination of altruistic rewarding, which is a predisposition to reward others for cooperative, norm-abiding behaviours, and altruistic punishment, which is a propensity to impose sanctions on others for norm violations. Strong reciprocators bear the cost of rewarding or punishing even if they gain no individual economic benefit whatsoever from their acts. In contrast, reciprocal altruists, as they have been defined in the biological literature [4,5], reward and punish only if this is in their long-term self-interest. Strong reciprocity thus constitutes a powerful incentive for cooperation even in non-repeated interactions and when reputation gains are absent, because strong reciprocators will reward those who cooperate and punish those who defect.

...

We will show that the interaction between selfish and strongly reciprocal … [more]
concept  conceptual-vocab  wiki  reference  article  models  GT-101  game-theory  anthropology  cultural-dynamics  trust  cooperate-defect  coordination  iteration-recursion  sequential  axelrod  discrete  smoothness  evolution  evopsych  EGT  economics  behavioral-econ  sociology  new-religion  deep-materialism  volo-avolo  characterization  hsu  scitariat  altruism  justice  group-selection  decision-making  tribalism  organizing  hari-seldon  theory-practice  applicability-prereqs  bio  finiteness  multi  history  science  social-science  decision-theory  commentary  study  summary  giants  the-trenches  zero-positive-sum  🔬  bounded-cognition  info-dynamics  org:edge  explanation  exposition  org:nat  eden  retention  long-short-run  darwinian  markov  equilibrium  linear-algebra  nitty-gritty  competition  war  explanans  n-factor  europe  the-great-west-whale  occident  china  asia  sinosphere  orient  decentralized  markets  market-failure  cohesion  metabuch  stylized-facts  interdisciplinary  physics  pdf  pessimism  time  insight  the-basilisk  noblesse-oblige  the-watchers  ideas  l 
march 2018 by nhaliday
Religiosity and Fertility in the United States: The Role of Fertility Intentions
Using data from the 2002 National Survey of Family Growth (NSFG), we show that women who report that religion is “very important” in their everyday life have both higher fertility and higher intended fertility than those saying religion is “somewhat important” or “not important.” Factors such as unwanted fertility, age at childbearing, or degree of fertility postponement seem not to contribute to religiosity differentials in fertility. This answer prompts more fundamental questions: what is the nature of this greater “religiosity”? And why do the more religious want more children? We show that those saying religion is more important have more traditional gender and family attitudes and that these attitudinal differences account for a substantial part of the fertility differential. We speculate regarding other contributing causes.

Religion, Religiousness and Fertility in the U.S. and in Europe: https://www.demogr.mpg.de/papers/working/wp-2006-013.pdf
2006

RELIGIONS, FERTILITY, AND GROWTH IN SOUTHEAST ASIA: https://onlinelibrary.wiley.com/doi/abs/10.1111/iere.12291
Using Southeast Asian censuses, we show empirically that being Catholic, Buddhist, or Muslim significantly raises fertility, especially for couples with intermediate to high education levels. With these estimates, we identify the parameters of a structural model. Catholicism is strongly pro‐child (increasing total spending on children), followed by Buddhism, whereas Islam is more pro‐birth (redirecting spending from quality to quantity). Pro‐child religions depress growth in its early stages by lowering savings and labor supply. In the later stages of growth, pro‐birth religions impede human capital accumulation.
study  sociology  religion  theos  usa  correlation  fertility  eric-kaufmann  causation  general-survey  demographics  phalanges  intervention  gender  tradition  social-norms  parenting  values  politics  ideology  multi  europe  EU  rot  nihil  data  time-series  distribution  christianity  protestant-catholic  other-xtian  the-great-west-whale  occident  expression-survival  poll  inequality  pro-rata  mediterranean  eastern-europe  wealth  econ-metrics  farmers-and-foragers  buddhism  islam  asia  developing-world  human-capital  investing  developmental  number  quantitative-qualitative  quality  world  natural-experiment  field-study 
february 2018 by nhaliday
Stein's example - Wikipedia
Stein's example (or phenomenon or paradox), in decision theory and estimation theory, is the phenomenon that when three or more parameters are estimated simultaneously, there exist combined estimators more accurate on average (that is, having lower expected mean squared error) than any method that handles the parameters separately. It is named after Charles Stein of Stanford University, who discovered the phenomenon in 1955.[1]

An intuitive explanation is that optimizing for the mean-squared error of a combined estimator is not the same as optimizing for the errors of separate estimators of the individual parameters. In practical terms, if the combined error is in fact of interest, then a combined estimator should be used, even if the underlying parameters are independent; this occurs in channel estimation in telecommunications, for instance (different factors affect overall channel performance). On the other hand, if one is instead interested in estimating an individual parameter, then using a combined estimator does not help and is in fact worse.

...

Many simple, practical estimators achieve better performance than the ordinary estimator. The best-known example is the James–Stein estimator, which works by starting at X and moving towards a particular point (such as the origin) by an amount inversely proportional to the distance of X from that point.
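
Quick numerical check (my own sketch, standard setup: a single observation X ~ N(theta, I_d) per trial, total squared-error loss):

import numpy as np

rng = np.random.default_rng(0)
d, trials = 10, 20_000
theta = rng.normal(size=d)                      # arbitrary true parameter

X = theta + rng.normal(size=(trials, d))        # one observation per trial
shrink = 1 - (d - 2) / np.sum(X**2, axis=1, keepdims=True)
JS = shrink * X                                 # shrink toward the origin

mse_naive = np.mean(np.sum((X - theta) ** 2, axis=1))
mse_js = np.mean(np.sum((JS - theta) ** 2, axis=1))
print(f"risk of X itself:    {mse_naive:.3f}  (theory: {d})")
print(f"risk of James-Stein: {mse_js:.3f}  (strictly smaller for d >= 3)")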
nibble  concept  levers  wiki  reference  acm  stats  probability  decision-theory  estimate  distribution  atoms 
february 2018 by nhaliday
Information Processing: US Needs a National AI Strategy: A Sputnik Moment?
FT podcasts on US-China competition and AI: http://infoproc.blogspot.com/2018/05/ft-podcasts-on-us-china-competition-and.html

A new recommended career path for effective altruists: China specialist: https://80000hours.org/articles/china-careers/
Our rough guess is that it would be useful for there to be at least ten people in the community with good knowledge in this area within the next few years.

By “good knowledge” we mean they’ve spent at least 3 years studying these topics and/or living in China.

We chose ten because that would be enough for several people to cover each of the major areas listed (e.g. 4 within AI, 2 within biorisk, 2 within foreign relations, 1 in another area).

AI Policy and Governance Internship: https://www.fhi.ox.ac.uk/ai-policy-governance-internship/

https://www.fhi.ox.ac.uk/deciphering-chinas-ai-dream/
https://www.fhi.ox.ac.uk/wp-content/uploads/Deciphering_Chinas_AI-Dream.pdf
Deciphering China’s AI Dream
The context, components, capabilities, and consequences of
China’s strategy to lead the world in AI

Europe’s AI delusion: https://www.politico.eu/article/opinion-europes-ai-delusion/
Brussels is failing to grasp threats and opportunities of artificial intelligence.
By BRUNO MAÇÃES

When the computer program AlphaGo beat the Chinese professional Go player Ke Jie in a three-part match, it didn’t take long for Beijing to realize the implications.

If algorithms can already surpass the abilities of a master Go player, it can’t be long before they will be similarly supreme in the activity to which the classic board game has always been compared: war.

As I’ve written before, the great conflict of our time is about who can control the next wave of technological development: the widespread application of artificial intelligence in the economic and military spheres.

...

If China’s ambitions sound plausible, that’s because the country’s achievements in deep learning are so impressive already. After Microsoft announced that its speech recognition software surpassed human-level language recognition in October 2016, Andrew Ng, then head of research at Baidu, tweeted: “We had surpassed human-level Chinese recognition in 2015; happy to see Microsoft also get there for English less than a year later.”

...

One obvious advantage China enjoys is access to almost unlimited pools of data. The machine-learning technologies boosting the current wave of AI expansion are as good as the amount of data they can use. That could be the number of people driving cars, photos labeled on the internet or voice samples for translation apps. With 700 or 800 million Chinese internet users and fewer data protection rules, China is as rich in data as the Gulf States are in oil.

How can Europe and the United States compete? They will have to be commensurately better in developing algorithms and computer power. Sadly, Europe is falling behind in these areas as well.

...

Chinese commentators have embraced the idea of a coming singularity: the moment when AI surpasses human ability. At that point a number of interesting things happen. First, future AI development will be conducted by AI itself, creating exponential feedback loops. Second, humans will become useless for waging war. At that point, the human mind will be unable to keep pace with robotized warfare. With advanced image recognition, data analytics, prediction systems, military brain science and unmanned systems, devastating wars might be waged and won in a matter of minutes.

...

The argument in the new strategy is fully defensive. It first considers how AI raises new threats and then goes on to discuss the opportunities. The EU and Chinese strategies follow opposite logics. Already on its second page, the text frets about the legal and ethical problems raised by AI and discusses the “legitimate concerns” the technology generates.

The EU’s strategy is organized around three concerns: the need to boost Europe’s AI capacity, ethical issues and social challenges. Unfortunately, even the first dimension quickly turns out to be about “European values” and the need to place “the human” at the center of AI — forgetting that the first word in AI is not “human” but “artificial.”

https://twitter.com/mr_scientism/status/983057591298351104
https://archive.is/m3Njh
US military: "LOL, China thinks it's going to be a major player in AI, but we've got all the top AI researchers. You guys will help us develop weapons, right?"

US AI researchers: "No."

US military: "But... maybe just a computer vision app."

US AI researchers: "NO."

https://www.theverge.com/2018/4/4/17196818/ai-boycot-killer-robots-kaist-university-hanwha
https://www.nytimes.com/2018/04/04/technology/google-letter-ceo-pentagon-project.html
https://twitter.com/mr_scientism/status/981685030417326080
https://archive.is/3wbHm
AI-risk was a mistake.
hsu  scitariat  commentary  video  presentation  comparison  usa  china  asia  sinosphere  frontier  technology  science  ai  speedometer  innovation  google  barons  deepgoog  stories  white-paper  strategy  migration  iran  human-capital  corporation  creative  alien-character  military  human-ml  nationalism-globalism  security  investing  government  games  deterrence  defense  nuclear  arms  competition  risk  ai-control  musk  optimism  multi  news  org:mag  europe  EU  80000-hours  effective-altruism  proposal  article  realness  offense-defense  war  biotech  altruism  language  foreign-lang  philosophy  the-great-west-whale  enhancement  foreign-policy  geopolitics  anglo  jobs  career  planning  hmm  travel  charity  tech  intel  media  teaching  tutoring  russia  india  miri-cfar  pdf  automation  class  labor  polisci  society  trust  n-factor  corruption  leviathan  ethics  authoritarianism  individualism-collectivism  revolution  economics  inequality  civic  law  regulation  data  scale  pro-rata  capital  zero-positive-sum  cooperate-defect  distribution  time-series  tre 
february 2018 by nhaliday
Christianity in China | Council on Foreign Relations
projected to outpace CCP membership soon

This fascinating map shows the new religious breakdown in China: http://www.businessinsider.com/new-religious-breakdown-in-china-14

Map Showing the Distribution of Christians in China: http://www.epm.org/resources/2010/Oct/18/map-showing-distribution-christians-china/

Christianity in China: https://en.wikipedia.org/wiki/Christianity_in_China
Accurate data on Chinese Christians is hard to access. According to the most recent internal surveys there are approximately 31 million Christians in China today (2.3% of the total population).[5] On the other hand, some international Christian organizations estimate there are tens of millions more, which choose not to publicly identify as such.[6] The practice of religion continues to be tightly controlled by government authorities.[7] Chinese over the age of 18 are only permitted to join officially sanctioned Christian groups registered with the government-approved Protestant Three-Self Church and China Christian Council and the Chinese Patriotic Catholic Church.[8]

In Xi we trust - Is China cracking down on Christianity?: http://www.dw.com/en/in-xi-we-trust-is-china-cracking-down-on-christianity/a-42224752A

In China, Unregistered Churches Are Driving a Religious Revolution: https://www.theatlantic.com/international/archive/2017/04/china-unregistered-churches-driving-religious-revolution/521544/

Cracks in the atheist edifice: https://www.economist.com/news/briefing/21629218-rapid-spread-christianity-forcing-official-rethink-religion-cracks

Jesus won’t save you — President Xi Jinping will, Chinese Christians told: https://www.washingtonpost.com/news/worldviews/wp/2017/11/14/jesus-wont-save-you-president-xi-jinping-will-chinese-christians-told/

http://www.sixthtone.com/news/1001611/noodles-for-the-messiah-chinas-creative-christian-hymns

https://www.reuters.com/article/us-pope-china-exclusive/exclusive-china-vatican-deal-on-bishops-ready-for-signing-source-idUSKBN1FL67U
Catholics in China are split between those in “underground” communities that recognize the pope and those belonging to a state-controlled Catholic Patriotic Association where bishops are appointed by the government in collaboration with local Church communities.

http://www.bbc.com/news/world-asia-china-42914029
The underground churches recognise only the Vatican's authority, whereas the Chinese state churches refuse to accept the authority of the Pope.

There are currently about 100 Catholic bishops in China, with some approved by Beijing, some approved by the Vatican and, informally, many now approved by both.

...

Under the agreement, the Vatican would be given a say in the appointment of future bishops in China, a Vatican source told news agency Reuters.

For Beijing, an agreement with the Vatican could allow them more control over the country's underground churches.

Globally, it would also enhance China's prestige - to have the world's rising superpower engaging with one of the world's major religions.

Symbolically, it would be the first sign of rapprochement between China and the Catholic church in more than half a century.

The Vatican is the only European state that maintains formal diplomatic relations with Taiwan. It is currently unclear if an agreement between China and the Vatican would affect this in any way.

What will this mean for the country's Catholics?

There are currently around 10 million Roman Catholics in China.

https://www.washingtonpost.com/world/asia_pacific/china-vatican-deal-on-bishops-reportedly-ready-for-signing/2018/02/01/2adfc6b2-0786-11e8-b48c-b07fea957bd5_story.html

http://www.catholicherald.co.uk/news/2018/02/06/china-is-the-best-implementer-of-catholic-social-doctrine-says-vatican-bishop/
The chancellor of the Pontifical Academy of Social Sciences praised the 'extraordinary' Communist state

“Right now, those who are best implementing the social doctrine of the Church are the Chinese,” a senior Vatican official has said.

Bishop Marcelo Sánchez Sorondo, chancellor of the Pontifical Academy of Social Sciences, praised the Communist state as “extraordinary”, saying: “You do not have shantytowns, you do not have drugs, young people do not take drugs”. Instead, there is a “positive national conscience”.

The bishop told the Spanish-language edition of Vatican Insider that in China “the economy does not dominate politics, as happens in the United States, something Americans themselves would say.”

Bishop Sánchez Sorondo said that China was implementing Pope Francis’s encyclical Laudato Si’ better than many other countries and praised it for defending Paris Climate Accord. “In that, it is assuming a moral leadership that others have abandoned”, he added.

...

As part of the diplomacy efforts, Bishop Sánchez Sorondo visited the country. “What I found was an extraordinary China,” he said. “What people don’t realise is that the central value in China is work, work, work. There’s no other way, fundamentally it is like St Paul said: he who doesn’t work, doesn’t eat.”

China reveals plan to remove ‘foreign influence’ from Catholic Church: http://catholicherald.co.uk/news/2018/06/02/china-reveals-plan-to-remove-foreign-influence-from-catholic-church1/

China, A Fourth Rome?: http://thermidormag.com/china-a-fourth-rome/
As a Chinaman born in the United States, I find myself able to speak to both places and neither. By accidents of fortune, however – or of providence, rather – I have identified more with China even as I have lived my whole life in the West. English is my third language, after Cantonese and Mandarin, even if I use it to express my intellectually most complex thoughts; and though my best of the three in writing, trained by the use of Latin, it is the vehicle of a Chinese soul. So it is in English that for the past year I have memed an idea as unconventional as it is ambitious, unto the Europæans a stumbling-block, and unto the Chinese foolishness: #China4thRome.

This idea I do not attempt to defend rigorously, between various powers’ conflicting claims to carrying on the Roman heritage; neither do I intend to claim that Moscow, which has seen itself as a Third Rome after the original Rome and then Constantinople, is fallen. Instead, I think back to the division of the Roman empire, first under Diocletian’s Tetrarchy and then at the death of Theodosius I, the last ruler of the undivided Roman empire. In the second partition, at the death of Theodosius, Arcadius became emperor of the East, with his capital in Constantinople, and Honorius emperor of the West, with his capital in Milan and then Ravenna. That the Roman empire did not stay uniformly strong under a plurality of emperors is not the point. What is significant about the administrative division of the Roman empire among several emperors is that the idea of Rome can be one even while its administration is diverse.

By divine providence, the Christian religion – and through it, Rome – has spread even through the bourgeois imperialism of the 19th and 20th centuries. Across the world, the civil calendar of common use is that of Rome, reckoned from 1 January; few places has Roman law left wholly untouched. Nevertheless, never have we observed in the world of Roman culture an ethnogenetic pattern like that of the Chinese empire as described by the prologue of Luo Guanzhong’s Romance of the Three Kingdoms 三國演義: ‘The empire, long divided, must unite; long united, must divide. Thus it has ever been.’1 According to classical Chinese cosmology, the phrase rendered the empire is more literally all under heaven 天下, the Chinese œcumene being its ‘all under heaven’ much as a Persian proverb speaks of the old Persian capital of Isfahan: ‘Esfahān nesf-e jahān ast,’ Isfahan is half the world. As sociologist Fei Xiaotong describes it in his 1988 Tanner Lecture ‘Plurality and Unity in the Configuration of the Chinese People’,

...

And this Chinese œcumene has united and divided for centuries, even as those who live in it have recognized a fundamental unity. But Rome, unlike the Chinese empire, has lived on in multiple successor polities, sometimes several at once, without ever coming back together as one empire administered as one. Perhaps something of its character has instead uniquely suited it to being the spirit of a kind of broader world empire. As Dante says in De Monarchia, ‘As the human race, then, has an end, and this end is a means necessary to the universal end of nature, it follows that nature must have the means in view.’ He continues,

If these things are true, there is no doubt but that nature set apart in the world a place and a people for universal sovereignty; otherwise she would be deficient in herself, which is impossible. What was this place, and who this people, moreover, is sufficiently obvious in what has been said above, and in what shall be added further on. They were Rome and her citizens or people. On this subject our Poet [Vergil] has touched very subtly in his sixth book [of the Æneid], where he brings forward Anchises prophesying in these words to Aeneas, father of the Romans: ‘Verily, that others shall beat out the breathing bronze more finely, I grant you; they shall carve the living feature in the marble, plead causes with more eloquence, and trace the movements of the heavens with a rod, and name the rising stars: thine, O Roman, be the care to rule the peoples with authority; be thy arts these, to teach men the way of peace, to show mercy to the subject, and to overcome the proud.’ And the disposition of place he touches upon lightly in the fourth book, when he introduces Jupiter speaking of Aeneas to Mercury in this fashion: ‘Not such a one did his most beautiful mother promise to us, nor for this twice rescue him from Grecian arms; rather was he to be the man to govern Italy teeming with empire and tumultuous with war.’ Proof enough has been given that the Romans were by nature ordained for sovereignty. Therefore the Roman … [more]
org:ngo  trends  foreign-policy  china  asia  hmm  idk  religion  christianity  theos  anomie  meaningness  community  egalitarianism-hierarchy  protestant-catholic  demographics  time-series  government  leadership  nationalism-globalism  org:data  comparison  sinosphere  civic  the-bones  power  great-powers  thucydides  multi  maps  data  visualization  pro-rata  distribution  geography  within-group  wiki  reference  article  news  org:lite  org:biz  islam  buddhism  org:euro  authoritarianism  antidemos  leviathan  regulation  civil-liberty  chart  absolute-relative  org:mag  org:rec  org:anglo  org:foreign  music  culture  gnon  org:popup  🐸  memes(ew)  essay  rhetoric  conquest-empire  flux-stasis  spreading  paradox  analytical-holistic  tradeoffs  solzhenitsyn  spengler  nietzschean  europe  the-great-west-whale  occident  orient  literature  big-peeps  history  medieval  mediterranean  enlightenment-renaissance-restoration-reformation  expansionism  early-modern  society  civilization  world  MENA  capital  capitalism  innovation  race  alien-character  optimat 
january 2018 by nhaliday
Sequence Modeling with CTC
A visual guide to Connectionist Temporal Classification, an algorithm used to train deep neural networks in speech recognition, handwriting recognition and other sequence problems.
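
To make the mechanics concrete, a toy sketch (my own code, hypothetical softmax outputs): CTC's collapsing map B merges adjacent repeats and then drops blanks, and the CTC loss is the total probability of all length-T alignments that collapse to the target – brute-forced below; real implementations use the dynamic program the article walks through:

import itertools
import numpy as np

BLANK = '-'

def collapse(path):
    # B("aa-ab-") -> "aab": merge adjacent repeats, then drop blanks
    merged = [c for i, c in enumerate(path) if i == 0 or c != path[i - 1]]
    return ''.join(c for c in merged if c != BLANK)

def ctc_prob(probs, alphabet, target):
    # total probability of every length-T alignment collapsing to target
    total = 0.0
    for path in itertools.product(range(len(alphabet)), repeat=len(probs)):
        if collapse(''.join(alphabet[i] for i in path)) == target:
            total += float(np.prod([probs[t][i] for t, i in enumerate(path)]))
    return total

alphabet = ['a', 'b', BLANK]
probs = np.array([[0.6, 0.1, 0.3],   # hypothetical per-timestep softmax, T=4
                  [0.5, 0.2, 0.3],
                  [0.1, 0.2, 0.7],
                  [0.2, 0.7, 0.1]])
print(collapse('aa-ab-'))                        # -> aab
print(f"P('ab') = {ctc_prob(probs, alphabet, 'ab'):.4f}")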
acmtariat  techtariat  org:bleg  nibble  better-explained  machine-learning  deep-learning  visual-understanding  visualization  analysis  let-me-see  research  sequential  audio  classification  model-class  exposition  language  acm  approximation  comparison  markov  iteration-recursion  concept  atoms  distribution  orders  DP  heuristic  optimization  trees  greedy  matching  gradient-descent  org:popup 
december 2017 by nhaliday
Estimation of effect size distribution from genome-wide association studies and implications for future discoveries
We report a set of tools to estimate the number of susceptibility loci and the distribution of their effect sizes for a trait on the basis of discoveries from existing genome-wide association studies (GWASs). We propose statistical power calculations for future GWASs using estimated distributions of effect sizes. Using reported GWAS findings for height, Crohn’s disease and breast, prostate and colorectal (BPC) cancers, we determine that each of these traits is likely to harbor additional loci within the spectrum of low-penetrance common variants. These loci, which can be identified from sufficiently powerful GWASs, together could explain at least 15–20% of the known heritability of these traits. However, for BPC cancers, which have modest familial aggregation, our analysis suggests that risk models based on common variants alone will have modest discriminatory power (63.5% area under curve), even with new discoveries.

later paper:
Distribution of allele frequencies and effect sizes and their interrelationships for common genetic susceptibility variants: http://www.pnas.org/content/108/44/18026.full

Recent discoveries of hundreds of common susceptibility SNPs from genome-wide association studies provide a unique opportunity to examine population genetic models for complex traits. In this report, we investigate distributions of various population genetic parameters and their interrelationships using estimates of allele frequencies and effect-size parameters for about 400 susceptibility SNPs across a spectrum of qualitative and quantitative traits. We calibrate our analysis by statistical power for detection of SNPs to account for overrepresentation of variants with larger effect sizes in currently known SNPs that are expected due to statistical power for discovery. Across all qualitative disease traits, minor alleles conferred “risk” more often than “protection.” Across all traits, an inverse relationship existed between “regression effects” and allele frequencies. Both of these trends were remarkably strong for type I diabetes, a trait that is most likely to be influenced by selection, but were modest for other traits such as human height or late-onset diseases such as type II diabetes and cancers. Across all traits, the estimated effect-size distribution suggested the existence of increasingly large numbers of susceptibility SNPs with decreasingly small effects. For most traits, the set of SNPs with intermediate minor allele frequencies (5–20%) contained an unusually small number of susceptibility loci and explained a relatively small fraction of heritability compared with what would be expected from the distribution of SNPs in the general population. These trends could have several implications for future studies of common and uncommon variants.

...

Relationship Between Allele Frequency and Effect Size. We explored the relationship between allele frequency and effect size in different scales. An inverse relationship between the squared regression coefficient and f(1 − f) was observed consistently across different traits (Fig. 3). For a number of these traits, however, the strengths of these relationships become less pronounced after adjustment for ascertainment due to study power. The strength of the trend, as captured by the slope of the fitted line (Table 2), markedly varies between traits, with an almost 10-fold change between the two extremes of distinct types of traits. After adjustment, the most pronounced trend was seen for type I diabetes and Crohn’s disease among qualitative traits and LDL level among quantitative traits. In exploring the relationship between the frequency of the risk allele and the magnitude of the associated risk coefficient (Fig. S4), we observed a quadratic pattern that indicates increasing risk coefficients as the risk-allele frequency diverges away from 0.50 either toward 0 or toward 1. Thus, it appears that regression coefficients for common susceptibility SNPs increase in magnitude monotonically with decreasing minor-allele frequency, irrespective of whether the minor allele confers risk or protection. However, for some traits, such as type I diabetes, risk alleles were predominantly minor alleles, that is, they had frequencies of less than 0.50.
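
A minimal sketch of the power calculation underlying this kind of analysis (my own illustration, textbook assumptions: quantitative trait, additive per-allele effect beta in phenotypic SDs, Hardy-Weinberg genotypes, so the 1-df test's non-centrality parameter is about 2f(1-f)N*beta^2 – the same frequency-times-effect-size product the paper studies):

from scipy.stats import chi2, ncx2

def gwas_power(n, f, beta, alpha=5e-8):
    # non-centrality of the 1-df association test: NCP ~ 2 f (1-f) N beta^2
    ncp = 2 * f * (1 - f) * n * beta**2
    crit = chi2.ppf(1 - alpha, df=1)       # genome-wide significance threshold
    return ncx2.sf(crit, df=1, nc=ncp)     # P(exceed threshold | true effect)

for f, beta in [(0.30, 0.05), (0.30, 0.10), (0.05, 0.10)]:
    print(f"f={f:.2f}, beta={beta:.2f} SD: power at N=50,000 is "
          f"{gwas_power(50_000, f, beta):.3f}")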
pdf  nibble  study  article  org:nat  🌞  biodet  genetics  population-genetics  GWAS  QTL  distribution  disease  cancer  stat-power  bioinformatics  magnitude  embodied  prediction  scale  scaling-up  variance-components  multi  missing-heritability  effect-size  regression  correlation  data 
november 2017 by nhaliday
Gender differences in occupational distributions among workers
Women in the Work Force: https://www.theatlantic.com/magazine/archive/1986/09/women-in-the-work-force/304924/
Gender disparity in the workplace might have less to do with discrimination than with women making the choice to stay at home
pdf  org:gov  white-paper  data  database  economics  labor  gender  gender-diff  distribution  dysgenics  multi  news  org:mag  discrimination  demographics 
november 2017 by nhaliday
Politics with Hidden Bases: Unearthing the Deep Roots of Party Systems
The research presented here uses a novel method to show that contemporary party systems may originate much further back than is usually assumed or might be expected—in reality many centuries. Using data on Ireland, a country with a political system that poses significant challenges to the universality of many political science theories, by identifying the ancestry of current party elites we find ethnic bases for the Irish party system arising from population movements that took place from the 12th century. Extensive Irish genealogical knowledge allows us to use surnames as a proxy for ethnic origin. Recent genetic analyses of Irish surnames corroborate Irish genealogical information. The results are particularly compelling given that Ireland is an extremely homogeneous society and therefore provides a tough case for our approach.
pdf  study  broad-econ  polisci  sociology  politics  government  correlation  path-dependence  cliometrics  anglo  britain  history  mostly-modern  time-series  pro-rata  distribution  demographics  coalitions  pop-structure  branches  hari-seldon 
november 2017 by nhaliday
multivariate analysis - Is it possible to have a pair of Gaussian random variables for which the joint distribution is not Gaussian? - Cross Validated
The bivariate normal distribution is the exception, not the rule!

It is important to recognize that "almost all" joint distributions with normal marginals are not the bivariate normal distribution. That is, the common viewpoint that joint distributions with normal marginals that are not the bivariate normal are somehow "pathological", is a bit misguided.

Certainly, the multivariate normal is extremely important due to its stability under linear transformations, and so receives the bulk of attention in applications.

note: there is a multivariate central limit theorem, so such applications have no problem
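note: a concrete counterexample is easy to simulate. Take X standard normal and Y = S·X with S an independent random sign: each marginal is exactly N(0,1), but X + Y has a point mass at 0, so the pair cannot be jointly Gaussian. A minimal sketch, assuming numpy:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    x = rng.standard_normal(n)
    s = rng.choice([-1.0, 1.0], size=n)      # independent random sign
    y = s * x                                # y is N(0,1) by symmetry

    print(y.mean(), y.var())                 # ~0 and ~1: normal marginal
    print(np.mean(x + y == 0))               # ~0.5: point mass at 0, so (x, y)
                                             # cannot be bivariate normal

As a bonus, Cov(X, Y) = E[S·X²] = 0, so this is also the standard example of uncorrelated but dependent normal variables.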
nibble  q-n-a  overflow  stats  math  acm  probability  distribution  gotchas  intricacy  characterization  structure  composition-decomposition  counterexample  limits  concentration-of-measure 
october 2017 by nhaliday
Karl Pearson and the Chi-squared Test
Pearson's paper of 1900 introduced what subsequently became known as the chi-squared test of goodness of fit. The terminology and allusions of 80 years ago create a barrier for the modern reader, who finds that the interpretation of Pearson's test procedure and the assessment of what he achieved are less than straightforward, notwithstanding the technical advances made since then. An attempt is made here to surmount these difficulties by exploring Pearson's relevant activities during the first decade of his statistical career, and by describing the work by his contemporaries and predecessors which seem to have influenced his approach to the problem. Not all the questions are answered, and others remain for further study.

original paper: http://www.economics.soton.ac.uk/staff/aldrich/1900.pdf

How did Karl Pearson come up with the chi-squared statistic?: https://stats.stackexchange.com/questions/97604/how-did-karl-pearson-come-up-with-the-chi-squared-statistic
He proceeds by working with the multivariate normal, and the chi-square arises as a sum of squared standardized normal variates.

You can see from the discussion on p160-161 he's clearly discussing applying the test to multinomial distributed data (I don't think he uses that term anywhere). He apparently understands the approximate multivariate normality of the multinomial (certainly he knows the margins are approximately normal - that's a very old result - and knows the means, variances and covariances, since they're stated in the paper); my guess is that most of that stuff is already old hat by 1900. (Note that the chi-squared distribution itself dates back to work by Helmert in the mid-1870s.)

Then by the bottom of p163 he derives a chi-square statistic as "a measure of goodness of fit" (the statistic itself appears in the exponent of the multivariate normal approximation).

He then goes on to discuss how to evaluate the p-value, and then he correctly gives the upper tail area of a χ² with 12 degrees of freedom beyond 43.87 as 0.000016. [You should keep in mind, however, that he didn't correctly understand how to adjust degrees of freedom for parameter estimation at that stage, so some of the examples in his papers use too high a d.f.]
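note: Pearson's closing computation is easy to check today, assuming scipy:

    from scipy.stats import chi2
    # upper tail area of a chi-squared with 12 degrees of freedom beyond 43.87
    print(chi2.sf(43.87, df=12))             # ~1.6e-05, i.e. Pearson's 0.000016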
nibble  papers  acm  stats  hypothesis-testing  methodology  history  mostly-modern  pre-ww2  old-anglo  giants  science  the-trenches  stories  multi  q-n-a  overflow  explanation  summary  innovation  discovery  distribution  degrees-of-freedom  limits 
october 2017 by nhaliday
Section 10 Chi-squared goodness-of-fit test.
- pf that chi-squared statistic for Pearson's test (multinomial goodness-of-fit) actually has chi-squared distribution asymptotically
- the gotcha: terms Z_j in sum aren't independent
- solution:
- compute the covariance matrix of the terms: E[Z_iZ_j] = -sqrt(p_ip_j) for i ≠ j (and Var[Z_j] = 1 - p_j)
- note that an equivalent way of sampling the Z_j is to take a random standard Gaussian and project onto the plane orthogonal to (sqrt(p_1), sqrt(p_2), ..., sqrt(p_r))
- that is equivalent to just sampling a Gaussian w/ 1 less dimension (hence df=r-1)
QED
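note: the df = r − 1 claim is easy to check empirically, assuming numpy/scipy: simulate multinomial counts, form Pearson's statistic (the sum of the Z_j² above), and compare its tail against chi-squared(r − 1).

    import numpy as np
    from scipy.stats import chi2

    rng = np.random.default_rng(0)
    p = np.array([0.2, 0.3, 0.1, 0.4])       # r = 4 cells
    n, reps = 1_000, 20_000

    counts = rng.multinomial(n, p, size=reps)
    stat = ((counts - n * p) ** 2 / (n * p)).sum(axis=1)   # sum of Z_j^2

    q = chi2.ppf(0.95, df=len(p) - 1)        # chi-squared with r - 1 = 3 df
    print(np.mean(stat > q))                 # ~0.05 if the asymptotics hold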
pdf  nibble  lecture-notes  mit  stats  hypothesis-testing  acm  probability  methodology  proofs  iidness  distribution  limits  identity  direction  lifts-projections 
october 2017 by nhaliday
The Political Typology: Beyond Red vs. Blue | Pew Research Center
The new typology has eight groups: Three are strongly ideological, highly politically engaged and overwhelmingly partisan – two on the right and one on the left. Steadfast Conservatives are staunch critics of government and the social safety net and are very socially conservative. Business Conservatives share Steadfast Conservatives’ preference for limited government, but differ in their support for Wall Street and business, as well as immigration reform. And Business Conservatives are far more moderate on social issues than are Steadfast Conservatives.

At the other end of the spectrum, Solid Liberals express liberal attitudes across almost every realm – government, the economy and business and foreign policy, as well as on race, homosexuality and abortion – and are reliable and loyal Democratic voters.

Taken together, these three groups form the electoral base of the Democratic and Republican Parties, and their influence on American politics is strong. While Solid Liberals, Steadfast Conservatives and Business Conservatives collectively make up only 36% of the American public, they represent 43% of registered voters and fully 57% of the more politically engaged segment of the American public: those who regularly vote and routinely follow government and public affairs.

The other typology groups are less partisan, less predictable and have little in common with each other or the groups at either end of the political spectrum. The one thing they do share is that they are less engaged politically than the groups on the right or left.

Young Outsiders lean Republican but do not have a strong allegiance to the Republican Party; in fact they tend to dislike both political parties. On many issues, from their support for environmental regulation to their liberal views on social issues, they diverge from traditional GOP orthodoxy. Yet in their support for limited government, Young Outsiders are firmly in the Republicans’ camp.

Hard-Pressed Skeptics have been battered by the struggling economy, and their difficult financial circumstances have left them resentful of both government and business. Despite their criticism of government performance, they back more generous government support for the poor and needy. Most Hard-Pressed Skeptics say they voted for Obama in 2012, though fewer than half approve of his job performance today.

The Next Generation Left are young, relatively affluent and very liberal on social issues like same-sex marriage and abortion. But they have reservations about the cost of social programs. And while most of the Next Generation Left support affirmative action, they decisively reject the idea that racial discrimination is the main reason why many blacks are unable to get ahead.

The Faith and Family Left lean Democratic, based on their confidence in government and support for federal programs to address the nation’s problems. But this very religious, racially and ethnically diverse group is uncomfortable with the pace of societal change, including the acceptance of homosexuality and non-traditional family structures.

And finally, an eighth group, the Bystanders, representing 10% of the public, are on the sidelines of the political process. They are not registered to vote and pay very little attention to politics.

...

The Faith and Family Left is by far the most racially and ethnically diverse group in the typology: In fact, just 41% are white non-Hispanic; 30% are black, 19% are Hispanic and 8% are other or mixed race. The Faith and Family Left also is less affluent and less educated than the other Democratically-oriented groups, and is older as well.

They also have strong religious convictions, which distinguishes them from Solid Liberals and the Next Generation Left. Fully 91% say “it is necessary to believe in God in order to be moral and have good values.” No more than about one-in-ten in the other Democratically-oriented groups agree. And the Faith and Family Left have much more conservative positions on social issues. Just 37% favor same-sex marriage, less than half the share of the other two groups on the left.

The Faith and Family Left support activist government and a strong social safety net, though by less overwhelming margins than Solid Liberals. And while the Faith and Family Left support affirmative action programs, just 31% believe that “racial discrimination is the main reason many black people can’t get ahead these days.” Among the much less racially diverse Solid Liberals, 80% think racial discrimination is the main barrier to black progress.

...

First, Steadfast Conservatives take very conservative views on key social issues like homosexuality and immigration, while Business Conservatives are less conservative – if not actually progressive – on these issues. Nearly three-quarters of Steadfast Conservatives (74%) believe that homosexuality should be discouraged by society. Among Business Conservatives, just 31% think homosexuality should be discouraged; 58% believe it should be accepted.

Business Conservatives have generally positive attitudes toward immigrants and 72% favor a “path to citizenship” for those in the U.S. illegally, if they meet certain conditions. Steadfast Conservatives are more critical of immigrants; 50% support a path to citizenship, the lowest share of any typology group.

Second, just as Steadfast Conservatives are opposed to big government, they also are skeptical of big business. They believe that large corporations have too much power, and nearly half (48%) say the economic system unfairly favors powerful interests. By contrast, as their name suggests, Business Conservatives are far more positive about the free market, and overwhelmingly regard business – and Wall Street – positively.

group profiles (including demographics): http://www.people-press.org/2014/06/26/appendix-1-typology-group-profiles/

2017 redux:
Political Typology Reveals Deep Fissures on the Right and Left: http://www.people-press.org/2017/10/24/political-typology-reveals-deep-fissures-on-the-right-and-left/
Nearly a year after Donald Trump was elected president, the Republican coalition is deeply divided on such major issues as immigration, America’s role in the world and the fundamental fairness of the U.S. economic system.

The Democratic coalition is largely united in staunch opposition to President Trump. Yet, while Trump’s election has triggered a wave of political activism within the party’s sizable liberal bloc, the liberals’ sky-high political energy is not nearly as evident among other segments in the Democratic base. And Democrats also are internally divided over U.S. global involvement, as well as some religious and social issues.

...

Divisions on the right

The political typology finds two distinctly different groups on the right – Core Conservatives and Country First Conservatives, who both overwhelmingly approve of Trump, but disagree on much else – including immigration and whether it benefits the U.S. to be active internationally.

Core Conservatives, who are in many ways the most traditional group of Republicans, have an outsized influence on the GOP coalition; while they make up just 13% of the public – and about a third (31%) of all Republicans and Republican-leaning independents – they constitute a much larger share (43%) of politically engaged Republicans.

This financially comfortable, male-dominated group overwhelmingly supports smaller government, lower corporate tax rates and believes in the fairness of the nation’s economic system. And a large majority of Core Conservatives (68%) express a positive view of U.S. involvement in the global economy “because it provides the U.S. with new markets and opportunities for growth.”

Country First Conservatives, a much smaller segment of the GOP base, are older and less educated than other Republican-leaning typology groups. Unlike Core Conservatives, Country First Conservatives are unhappy with the nation’s course, highly critical of immigrants and deeply wary of U.S. global involvement.

Nearly two-thirds of Country First Conservatives (64%) – the highest share of any typology group, right or left – say that “if America is too open to people from all over the world, we risk losing our identity as a nation.”

A third Republican group, Market Skeptic Republicans, sharply diverges from the GOP’s traditional support for business and lower taxes. Only about a third of Market Skeptic Republicans (34%) say banks and other financial institutions have a positive effect on the way things are going in the country, lowest among Republican-leaning typology groups.

Alone among the groups in the GOP coalition, a majority of Market Skeptic Republicans support raising tax rates on corporations and large businesses. An overwhelming share (94%) say the economic system unfairly favors powerful interests, which places the view of Market Skeptic Republicans on this issue much closer to Solid Liberals (99% mostly unfair) than Core Conservatives (21%).

In contrast to Market Skeptic Republicans, New Era Enterprisers are fundamentally optimistic about the state of the nation and its future. They are more likely than any other typology group to say the next generation of Americans will have it better than people today. Younger and somewhat less overwhelmingly white than the other GOP-leaning groups, New Era Enterprisers are strongly pro-business and generally think that immigrants strengthen, rather than burden, the country.

Divisions on the left

...

While there have long been racial, ethnic and income differences within the Democratic coalition, these gaps are especially striking today. Reflecting the changing demographic composition of the Democratic base, for the first time there are two majority-minority Democratic-leaning typology groups, along with two more affluent, mostly white groups.

Solid Liberals are the largest group in the Democratic coalition, and they make up close to half (48%) of politically engaged Democrats and Democratic-leaning … [more]
news  org:data  data  analysis  database  white-paper  politics  polisci  sociology  ideology  coalitions  things  phalanges  exploratory  distribution  poll  values  polarization  policy  populism  vampire-squid  migration  obama  gender  sex  sexuality  corporation  finance  foreign-policy  trade  diversity  race  demographics  religion  inequality  envy  left-wing  right-wing  africa  descriptive  discrimination  identity-politics  trust  institutions  quiz  business  regulation  redistribution  welfare-state  usa  government  civil-liberty  market-power  rent-seeking  nationalism-globalism  age-generation  chart  nl-and-so-can-you  🎩  homo-hetero  trump  2016-election  postmortem  charity  money  class  class-warfare  elections  multi  let-me-see  fertility  theos  geography  urban  art  drugs  opioids  flux-stasis  entrepreneurialism  2014  2017  urban-rural  twitter  social  discussion  commentary  backup  journos-pundits  study  impetus  trends  tradition  culture  society  christianity  pdf  article  sentiment  abortion-contraception-embryo 
october 2017 by nhaliday
GOP tax plan would provide major gains for richest 1%, uneven benefits for the middle class, report says - The Washington Post
https://twitter.com/ianbremmer/status/913863513038311426
https://archive.is/PYRx9
Trump tweets: For his voters.
Tax plan: Something else entirely.
https://twitter.com/tcjfs/status/913864779256692737
https://archive.is/5bzQz
This is appallingly stupid if accurate

https://www.nytimes.com/interactive/2017/11/28/upshot/what-the-tax-bill-would-look-like-for-25000-middle-class-families.html
https://www.nytimes.com/interactive/2017/11/30/us/politics/tax-cuts-increases-for-your-income.html

Treasury Removes Paper at Odds With Mnuchin’s Take on Corporate-Tax Cut’s Winners: https://www.wsj.com/articles/treasury-removes-paper-at-odds-with-mnuchins-take-on-corporate-tax-cuts-winners-1506638463

Tax changes for graduate students under the Tax Cuts and Jobs Act: https://bcide.gitlab.io/post/gop-tax-plan/
H.R.1 – 115th Congress (Tax Cuts and Jobs Act) proposes changes to the US Tax Code that threaten to destroy the finances of STEM graduate students nationwide. The offending provision, 1204(a)(3), strikes section 117(d) of the US Tax Code. This means that under the proposal, tuition waivers are considered taxable income.

For graduate students, this means an increase of thousands of dollars in owed federal taxes. Below I show a calculation for my own situation. The short of it is this: My federal taxes increase from ~7.5% of my income to ~31%. I will owe about $6300 more in federal taxes under this legislation. Like many other STEM students, my choices would be limited to taking on significant debt or quitting my program entirely.
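note: the mechanics are easy to reproduce with a toy bracket calculator. A minimal sketch assuming approximate 2017 single-filer brackets, the 2017 standard deduction plus personal exemption, and a hypothetical $30k stipend with a $30k tuition waiver; every number here is an assumption for illustration, not a figure from the linked post:

    # hypothetical inputs (not from the linked post)
    STIPEND, WAIVER = 30_000, 30_000
    DEDUCTION = 6_350 + 4_050                # 2017 standard deduction + exemption

    # approximate 2017 single-filer brackets: (upper bound, marginal rate)
    BRACKETS = [(9_325, 0.10), (37_950, 0.15), (91_900, 0.25), (float("inf"), 0.28)]

    def tax(income):
        taxable = max(income - DEDUCTION, 0)
        owed, lower = 0.0, 0.0
        for upper, rate in BRACKETS:
            owed += rate * max(min(taxable, upper) - lower, 0)
            lower = upper
        return owed

    before = tax(STIPEND)                    # waiver excluded under section 117(d)
    after = tax(STIPEND + WAIVER)            # waiver taxed as income under the bill
    print(before, after, after - before)     # the gap is on the order of $5-6k here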

The Republican War on College: https://www.theatlantic.com/business/archive/2017/11/republican-college/546308/

Trump's plan to tax colleges will harm higher education — but it's still a good idea: http://www.businessinsider.com/trump-tax-plan-taxing-colleges-is-a-good-idea-2017-11
- James Miller

The Republican Tax Plan Is a Disaster for Families With Children: http://www.motherjones.com/kevin-drum/2017/11/the-republican-tax-plan-is-a-disaster-for-families-with-children/
- Kevin Drum

The gains from cutting corporate tax rates: http://marginalrevolution.com/marginalrevolution/2017/11/corporate-taxes-2.html
I’ve been reading in this area on and off since the 1980s, and I really don’t think these are phony results.

Entrepreneurship and State Taxation: https://www.federalreserve.gov/econres/feds/files/2018003pap.pdf
We find that new firm employment is negatively—and disproportionately—affected by corporate tax rates. We find little evidence of an effect of personal and sales taxes on entrepreneurial outcomes.

https://www.nytimes.com/2017/11/26/us/politics/johnson-amendment-churches-taxes-politics.html
nobody in the comments section seems to have even considered the comparison with universities

The GOP Tax Bills Are Infrastructure Bills Too. Here’s Why.: http://www.governing.com/topics/transportation-infrastructure/gov-republican-tax-bills-impact-infrastructure.html
news  org:rec  trump  current-events  wonkish  policy  taxes  data  analysis  visualization  money  monetary-fiscal  compensation  class  hmm  :/  coalitions  multi  twitter  social  commentary  gnon  unaffiliated  right-wing  backup  class-warfare  redistribution  elite  vampire-squid  crooked  journos-pundits  tactics  strategy  politics  increase-decrease  pro-rata  labor  capital  distribution  corporation  corruption  anomie  counter-revolution  higher-ed  academia  nascent-state  mathtariat  phd  grad-school  org:mag  left-wing  econotariat  marginal-rev  links  study  summary  economics  econometrics  endogenous-exogenous  natural-experiment  longitudinal  regularizer  religion  christianity  org:gov  infrastructure  transportation  cracker-econ  org:lite  org:biz  crosstab  dynamic  let-me-see  cost-benefit  entrepreneurialism  branches  geography  usa  within-group 
september 2017 by nhaliday
Lecture 14: When's that meteor arriving
- Meteors as a random process
- Limiting approximations
- Derivation of the Exponential distribution
- Derivation of the Poisson distribution
- A "Poisson process"
nibble  org:junk  org:edu  exposition  lecture-notes  physics  mechanics  space  earth  probability  stats  distribution  stochastic-processes  closure  additive  limits  approximation  tidbits  acm  binomial  multiplicative 
september 2017 by nhaliday
PRRI: America’s Changing Religious Identity
https://www.washingtonpost.com/blogs/right-turn/wp/2017/09/06/the-demographic-change-fueling-the-angst-of-trumps-base/
https://gnxp.nofe.me/2017/09/08/as-many-americans-think-the-bible-is-a-book-of-fables-as-that-it-is-the-word-of-god/
America, that is, the United States of America, has long been a huge exception for the secularization model. Basically as a society develops and modernizes it becomes more secular. At least that’s the model.

...

Today everyone is talking about the Pew survey which shows the marginalization of the Anglo-Protestant America which I grew up in. This marginalization is due to secularization broadly, and among non-Hispanic whites in particular. You don’t need Pew to tell you this.

...

Note: Robert Putnam’s American Grace is probably the best book which highlights the complex cultural forces which ushered in the second wave of secularization. The short answer is that the culture wars diminished Christianity in the eyes of liberals.

Explaining Why More Americans Have No Religious Preference: Political Backlash and Generational Succession, 1987-2012: https://www.sociologicalscience.com/articles-vol1-24-423/
the causal direction in the rise of the “Nones” likely runs from political identity as a liberal or conservative to religious identity

The Persistent and Exceptional Intensity of American Religion: A Response to Recent Research: https://osf.io/preprints/socarxiv/xd37b
But we show that rather than religion fading into irrelevance as the secularization thesis would suggest, intense religion—strong affiliation, very frequent practice, literalism, and evangelicalism—is persistent and, in fact, only moderate religion is on the decline in the United States.

https://twitter.com/avermeule/status/913823410609950721
https://archive.is/CiCok
As in the U.K., so now too in America: the left establishment is moving towards an open view that orthodox Christians are unfit for office.
https://twitter.com/avermeule/status/913880665011228673
https://archive.is/LZiyV

https://twitter.com/tcjfs/status/883764202539798529
https://archive.is/HvVrN
i've had the thought that it's a plausible future where traditional notions of theism become implicitly non-white

https://mereorthodoxy.com/bourgeois-christian-politics/

http://www.cnn.com/2015/05/12/living/pew-religion-study/index.html
http://coldcasechristianity.com/2017/are-young-people-really-leaving-christianity/
Some writers and Christian observers deny the flight of young people altogether, but the growing statistics should alarm us enough as Church leaders to do something about the dilemma. My hope in this post is to simply consolidate some of the research (many of the summaries are directly quoted) so you can decide for yourself. I’m going to organize the recent findings in a way that illuminates the problem:

'Christianity as default is gone': the rise of a non-Christian Europe: https://www.theguardian.com/world/2018/mar/21/christianity-non-christian-europe-young-people-survey-religion
In the UK, only 7% of young adults identify as Anglican, fewer than the 10% who categorise themselves as Catholic. Young Muslims, at 6%, are on the brink of overtaking those who consider themselves part of the country’s established church.

https://en.wikipedia.org/wiki/Postchristianity
Other scholars have disputed the global decline of Christianity, and instead hypothesized of an evolution of Christianity which allows it to not only survive, but actively expand its influence in contemporary societies.

Philip Jenkins hypothesized a "Christian Revolution" in the Southern nations, such as Africa, Asia and Latin America, where instead of facing decline, Christianity is actively expanding. The relevance of Christian teachings in the global South will allow the Christian population in these areas to continually increase, and together with the shrinking of the Western Christian population, will form a "new Christendom" in which the majority of the world's Christian population can be found in the South.[9]
news  org:ngo  data  analysis  database  white-paper  usa  religion  christianity  theos  politics  polisci  coalitions  trends  zeitgeist  demographics  race  latin-america  within-group  northeast  the-south  the-west  asia  migration  gender  sex  sexuality  distribution  visualization  age-generation  diversity  maps  judaism  time-series  protestant-catholic  other-xtian  gender-diff  education  compensation  india  islam  multi  org:rec  pro-rata  gnxp  scitariat  huntington  prediction  track-record  error  big-peeps  statesmen  general-survey  poll  putnam-like  study  sociology  roots  impetus  history  mostly-modern  books  recommendations  summary  stylized-facts  values  twitter  social  discussion  journos-pundits  backup  tradition  gnon  unaffiliated  right-wing  identity-politics  eric-kaufmann  preprint  uniqueness  comparison  similarity  org:lite  video  links  list  survey  internet  life-history  fertility  social-capital  wiki  reference  org:anglo  world  developing-world  europe  EU  britain  rot  a 
september 2017 by nhaliday
Which industries are the most liberal and most conservative?
How Democratic or Republican is your job? This tool tells you: https://www.washingtonpost.com/news/the-fix/wp/2015/06/03/how-democratic-or-republican-is-your-job-this-tool-tells-you/?utm_term=.e19707abd9f1

http://verdantlabs.com/politics_of_professions/index.html

What you do and how you vote: http://www.pleeps.org/2017/01/07/what-you-do-and-how-you-vote/

trending blue across white-collar professions:
https://www.nytimes.com/2019/09/18/opinion/trump-fundraising-donors.html
https://twitter.com/adam_bonica/status/1174536380329803776
https://archive.is/r7YB6

https://twitter.com/whyvert/status/1174735746088996864
https://archive.is/Cwrih
This is partly because the meaning of left and right changed during that period. Left used to be about protecting workers. Now it's mainly about increasing the power of the elite class over the working class - thus their increased support.
--
yes, it is a different kind of left now

academia:
https://en.wikipedia.org/wiki/Political_views_of_American_academics

The Legal Academy's Ideological Uniformity: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2953087

Homogenous: The Political Affiliations of Elite Liberal Arts College Faculty: https://sci-hub.tw/10.1007/s12129-018-9700-x
includes crosstab by discipline

https://www.conservativecriminology.com/uploads/5/6/1/7/56173731/lounsbery_9-25.pdf#page=28
Neil Gross, Solon Simmons
THE SOCIAL AND POLITICAL VIEWS OF AMERICAN PROFESSORS

another crosstab
description of data sampling on page 21, meant to be representative of all undergraduate degree-granting institutions

Computer science: 32.3 / 58.1 / 9.7 (% liberal / moderate / conservative)

It’s finally out–The big review paper on the lack of political diversity in social psychology: https://heterodoxacademy.org/2015/09/14/bbs-paper-on-lack-of-political-diversity/
https://heterodoxacademy.org/2015/09/21/political-diversity-response-to-33-critiques/
http://righteousmind.com/viewpoint-diversity/
http://www.nationalaffairs.com/publications/detail/real-academic-diversity
http://quillette.com/2017/07/06/social-sciences-undergoing-purity-spiral/
What’s interesting about Haidt’s alternative interpretation of the liberal progress narrative is that he mentions two elements central to the narrative—private property and nations. And what has happened to a large extent is that as the failures of communism have become increasingly apparent many on the left—including social scientists—have shifted their activism away from opposing private property and towards other aspects, for example globalism.

But how do we know a similarly disastrous thing is not going to happen with globalism as happened with communism? What if some form of national and ethnic affiliation is a deep-seated part of human nature, and that trying to forcefully suppress it will eventually lead to a disastrous counter-reaction? What if nations don’t create conflict, but alleviate it? What if a decentralised structure is the best way for human society to function?
news  org:lite  data  study  summary  politics  polisci  ideology  correlation  economics  finance  law  academia  media  tech  sv  heavy-industry  energy-resources  biophysical-econ  agriculture  pharma  things  visualization  crosstab  phalanges  housing  scale  money  elite  charity  class-warfare  coalitions  demographics  business  distribution  polarization  database  multi  org:rec  dynamic  tools  calculator  list  top-n  labor  management  leadership  government  hari-seldon  gnosis-logos  career  planning  jobs  dirty-hands  long-term  scitariat  haidt  org:ngo  commentary  higher-ed  psychology  social-psych  social-science  westminster  institutions  roots  chart  discrimination  debate  critique  biases  diversity  homo-hetero  replication  org:mag  letters  org:popup  ethnocentrism  error  communism  universalism-particularism  whiggish-hegelian  us-them  tribalism  wonkish  org:data  analysis  general-survey  exploratory  stylized-facts  elections  race  education  twitter  social  backup  journos-pundits  gnon  aphorism  impetus  interests  self-interest 
september 2017 by nhaliday
tcjfs on Twitter: "Yearly legal permanent residencies 1996-2015 with a bit more disaggregated and common-sensical designations than DHS https://t.co/167ms5Xr0s"
https://archive.is/70nNG
https://twitter.com/tcjfs/status/900052649147543552
https://archive.is/5U3Mi
Asian origin according to Department of Homeland Security
not sure tbh. i was just trying to disaggregate "Asian immigration" and I was like holy shit some of these places I would never include

U.S. Lawful Permanent Residents: 2014: https://www.dhs.gov/sites/default/files/publications/Lawful_Permanent_Residents_2014.pdf
Yearbook of Immigration Statistics: https://www.dhs.gov/immigration-statistics/yearbook

https://twitter.com/tcjfs/status/933066198161469440
https://archive.is/pRTqS
Foreign born population by Chinese, Indian, Mexican birth whose residence one year ago was abroad, 2000-2013
The above chart, extended to 2000-2016, with Mexico but also all of Latin/Central/South America:
our latin american immigrants are probably getting less "huwhite"
gnon  unaffiliated  right-wing  twitter  social  discussion  data  analysis  visualization  migration  usa  history  mostly-modern  time-series  scale  distribution  world  developing-world  latin-america  india  asia  china  backup  government  intricacy  gotchas  demographics  population  multi  race  stock-flow  org:gov  white-paper  pdf  the-west  california  northeast  nyc  list  top-n  database  age-generation  gender  pop-structure  genetics  genomics  maps  pro-rata 
september 2017 by nhaliday
Does Polarization Imply Poor Representation? A New Perspective on the “Disconnect” Between Politicians and Voters
Broockman-Ahler 2015

immigration positions under B.2: http://www.dougahler.com/uploads/2/4/6/9/24697799/ahler_broockman_ideological_innocence.pdf#page=42
distribution: http://www.dougahler.com/uploads/2/4/6/9/24697799/ahler_broockman_ideological_innocence.pdf#page=53

https://twitter.com/tcjfs/status/904024125030756356
https://archive.is/BrnpJ
38% support immediate mass deportation of all illegals (Broockman 2015). This view has zero representation in either house of congress.

Do you understand the GOP play here by the way? I'm genuinely puzzled. Is it a moral conviction? Because it can't be a vote play.
In my view it's a mix of mindless sentimentality, the donor class, and existing in an hermetically sealed ideological bubble.

https://twitter.com/tcjfs/status/951989543653265408
https://archive.is/ya2sz
cheap labor lobby, public choice (votes), & subversive elites gripped by multiculti zealotry

https://twitter.com/tcjfs/status/911291455762845696
https://archive.is/dw6OO
In a 2014 radio interview, Paul Ryan was asked if "immigrants from the 3rd world are more or less likely to support conservative policies":
pdf  study  sociology  polisci  politics  poll  values  coalitions  ideology  polarization  usa  2016-election  trump  government  🎩  policy  wonkish  drugs  energy-resources  environment  climate-change  redistribution  welfare-state  arms  law  regulation  healthcare  migration  taxes  gender  sex  sexuality  education  chart  multi  distribution  crosstab  twitter  social  discussion  gnon  unaffiliated  right-wing  strategy  backup  elite  vampire-squid  roots  speculation  impetus  interview  quotes  commentary  people  track-record  reason  labor  business  capital  social-choice  rent-seeking  rot  culture  unintended-consequences  signaling  religion  theos  christianity  media  propaganda  patho-altruism  self-interest  diversity  culture-war  kumbaya-kult  identity-politics  westminster 
september 2017 by nhaliday