
nhaliday : lens   166

Ask HN: What's a promising area to work on? | Hacker News
hn  discussion  q-n-a  ideas  impact  trends  the-bones  speedometer  technology  applications  tech  cs  programming  list  top-n  recommendations  lens  machine-learning  deep-learning  security  privacy  crypto  software  hardware  cloud  biotech  CRISPR  bioinformatics  biohacking  blockchain  cryptocurrency  crypto-anarchy  healthcare  graphics  SIGGRAPH  vr  automation  universalism-particularism  expert-experience  reddit  social  arbitrage  supply-demand  ubiquity  cost-benefit  compensation  chart  career  planning  strategy  long-term  advice  sub-super  commentary  rhetoric  org:com  techtariat  human-capital  prioritizing  tech-infrastructure  working-stiff  data-science 
november 2019 by nhaliday
The Future of Mathematics? [video] | Hacker News
https://news.ycombinator.com/item?id=20909404
Kevin Buzzard (the Lean guy)

- general reflection on proof assistants/theorem provers
- Thomas Hales's Formal Abstracts project, etc.
- thinks that, of available theorem provers, Lean is "[the only one currently available that may be capable of formalizing all of mathematics eventually]" (goes into more detail right at the end, e.g., quotient types)
hn  commentary  discussion  video  talks  presentation  math  formal-methods  expert-experience  msr  frontier  state-of-art  proofs  rigor  education  higher-ed  optimism  prediction  lens  search  meta:research  speculation  exocortex  skunkworks  automation  research  math.NT  big-surf  software  parsimony  cost-benefit  intricacy  correctness  programming  pls  python  functional  haskell  heavyweights  research-program  review  reflection  multi  pdf  slides  oly  experiment  span-cover  git  vcs  teaching  impetus  academia  composition-decomposition  coupling-cohesion  database  trust  types  plt  lifts-projections  induction  critique  beauty  truth  elegance  aesthetics 
october 2019 by nhaliday
Two Performance Aesthetics: Never Miss a Frame and Do Almost Nothing - Tristan Hume
I’ve noticed when I think about performance nowadays that I think in terms of two different aesthetics. One aesthetic, which I’ll call Never Miss a Frame, comes from the world of game development and is focused on writing code that has good worst case performance by making good use of the hardware. The other aesthetic, which I’ll call Do Almost Nothing, comes from a more academic world and is focused on algorithmically minimizing the work that needs to be done to the extent that there’s barely any work left, paying attention to the performance at all scales.

[ed.: Neither of these exactly matches the TCS performance PoV, but the latter is closer (the focus on diffs is kinda weird).]

...

Never Miss a Frame

In game development the most important performance criterion is that your game doesn’t miss frame deadlines. You have a target frame rate, and if you miss the deadline for the screen to draw a new frame your users will notice the jank. This leads to focusing on the worst case scenario and often having fixed maximum limits for various quantities. This property can also be important in areas other than game development, like other graphical applications, real-time audio, safety-critical systems and many embedded systems. A similar dynamic occurs in distributed systems where one server needs to query 100 others and combine the results: you’ll wait for the slowest of the 100 every time, so speeding up some of them doesn’t make the query faster, and queries occasionally taking longer (e.g. because of garbage collection) will impact almost every request!

...

In this kind of domain you’ll often run into situations where in the worst case you can’t avoid processing a huge number of things. This means you need to focus your effort on making the best use of the hardware by writing code at a low level and paying attention to properties like cache size and memory bandwidth.

Projects with inviolable deadlines need to adjust factors other than speed if the code runs too slowly. For example a game might decrease the size of a level or use a more efficient but less pretty rendering technique.

Aesthetically: Data should be tightly packed, fixed size, and linear. Transcoding data to and from different formats is wasteful. Strings and their variable lengths and inefficient operations must be avoided. Only use tools that allow you to work at a low level, even if they’re annoying, because that’s the only way you can avoid piles of fixed costs making everything slow. Understand the machine and what your code does to it.

Personally I identify this aesthetic most with Jonathan Blow. He has a very strong personality and I’ve watched enough of his videos that I find imagining “What would Jonathan Blow say?” a good way to tap into this aesthetic. My favourite articles about designs following this aesthetic are on the Our Machinery Blog.

...

Do Almost Nothing

Sometimes, it’s important to be as fast as you can in all cases and not just orient around one deadline. The most common case is when you simply have to do something that’s going to take an amount of time noticeable to a human, and if you can make that time shorter in some situations that’s great. Alternatively each operation could be fast but you may run a server that runs tons of them and you’ll save on server costs if you can decrease the load of some requests. Another important case is when you care about power use, for example your text editor not rapidly draining a laptop’s battery; in this case you want to do the least work you possibly can.

A key technique for this approach is to never recompute something from scratch when it’s possible to re-use or patch an old result. This often involves caching: keeping a store of recent results in case the same computation is requested again.
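[ed.: a minimal sketch of the caching idea in Python; the function and workload are hypothetical, not from the article.]

```python
from functools import lru_cache

@lru_cache(maxsize=1024)          # bounded store of recent results
def wrap_line(text: str, width: int) -> tuple:
    """Stand-in for an expensive computation (e.g. text layout)."""
    words, lines, current = text.split(), [], ""
    for w in words:
        candidate = w if not current else current + " " + w
        if len(candidate) > width and current:
            lines.append(current)
            current = w
        else:
            current = candidate
    lines.append(current)
    return tuple(lines)

wrap_line("the quick brown fox jumps over the lazy dog", 16)
wrap_line("the quick brown fox jumps over the lazy dog", 16)  # answered from the cache, not recomputed
```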

The ultimate realization of this aesthetic is for the entire system to deal only in differences between the new state and the previous state, updating data structures with only the newly needed data and discarding data that’s no longer needed. This way each part of the system does almost no work because ideally the difference from the previous state is very small.
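[ed.: and the "differences only" idea in the same toy style (names mine): keep a derived value up to date by applying each edit's delta instead of recomputing from scratch.]

```python
class RunningTotal:
    """Maintain sum(values) incrementally as individual entries change."""

    def __init__(self, values):
        self.values = list(values)
        self.total = sum(self.values)      # full recomputation happens exactly once

    def update(self, index, new_value):
        delta = new_value - self.values[index]
        self.values[index] = new_value
        self.total += delta                # O(1) patch instead of an O(n) re-sum

totals = RunningTotal([3, 1, 4, 1, 5])
totals.update(2, 10)                       # total goes 14 -> 20 without touching other entries
assert totals.total == sum(totals.values)
```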

Aesthetically: Data must be in whatever structure scales best for the way it is accessed, lots of trees and hash maps. Computations are graphs of inputs and results so we can use all our favourite graph algorithms to optimize them! Designing optimal systems is hard so you should use whatever tools you can to make it easier, any fixed cost they incur will be made negligible when you optimize away all the work they need to do.

Personally I identify this aesthetic most with my friend Raph Levien and his articles about the design of the Xi text editor, although Raph also appreciates the other aesthetic and taps into it himself sometimes.

...

_I’m conflating the axes of deadline-oriented vs time-oriented and low-level vs algorithmic optimization, but part of my point is that while they are different, I think these axes are highly correlated._

...

Text Editors

Sublime Text is a text editor that mostly follows the Never Miss a Frame approach. ...

The Xi Editor solves this problem by being designed from the ground up to grapple with the fact that some operations, especially those interacting with slow compilers written by other people, can’t be made instantaneous. It does this using a fancy asynchronous plugin model and lots of fancy data structures.
...

...

Compilers

Jonathan Blow’s Jai compiler is clearly designed with the Never Miss a Frame aesthetic. It’s written to be extremely fast at every level, and the language doesn’t have any features that necessarily lead to slow compiles. The LLVM backend wasn’t fast enough to hit his performance goals so he wrote an alternative backend that directly writes x86 code to a buffer without doing any optimizations. Jai compiles something like 100,000 lines of code per second. Designing both the language and compiler to not do anything slow led to clean build performance 10-100x faster than other commonly-used compilers. Jai is so fast that its clean builds are faster than most compilers’ incremental builds on common project sizes, due to limitations in how incremental the other compilers are.

However, Jai’s compiler is still O(n) in the codebase size where incremental compilers can be O(n) in the size of the change. Some compilers like the work-in-progress rust-analyzer and I think also Roslyn for C# take a different approach and focus incredibly hard on making everything fully incremental. For small changes (the common case) this can let them beat Jai and respond in milliseconds on arbitrarily large projects, even if they’re slower on clean builds.

Conclusion
I find both of these aesthetics appealing, but I also think there are real trade-offs that incentivize leaning one way or the other for a given project. I think people having different performance aesthetics, often because one aesthetic really is better suited for their domain, is the source of a lot of online arguments about making fast systems. The different aesthetics also require different bases of knowledge to pursue, like knowledge of data-oriented programming in C++ vs knowledge of abstractions for incrementality like Adapton, so different people may find that one approach seems way easier and better for them than the other.

I try to choose how to dedicate my effort to pursuing each aesthetic on a per-project basis by trying to predict how effort in each direction would help. For some projects I know that if I code them efficiently they will always hit the performance deadline; for others I know a way to drastically cut down on work by investing time in algorithmic design; some projects need a mix of both. Personally I find it helpful to think of different programmers where I have a good sense of their aesthetic and ask myself how they’d solve the problem. One reason I like Rust is that it can do both low-level optimization and also has a good ecosystem and type system for algorithmic optimization, so I can more easily mix approaches in one project. In the end the best approach to follow depends not only on the task, but also on your skills or the skills of the team working on it, as well as how much time you have to work towards an ambitious design that may take longer for a better result.
techtariat  reflection  things  comparison  lens  programming  engineering  cracker-prog  carmack  games  performance  big-picture  system-design  constraint-satisfaction  metrics  telos-atelos  distributed  incentives  concurrency  cost-benefit  tradeoffs  systems  metal-to-virtual  latency-throughput  abstraction  marginal  caching  editors  strings  ideas  ui  common-case  examples  applications  flux-stasis  nitty-gritty  ends-means  thinking  summary  correlation  degrees-of-freedom  c(pp)  rust  interface  integration-extension  aesthetics  interface-compatibility  efficiency  adversarial 
september 2019 by nhaliday
Unix philosophy - Wikipedia
1. Make each program do one thing well. To do a new job, build afresh rather than complicate old programs by adding new "features".
2. Expect the output of every program to become the input to another, as yet unknown, program. Don't clutter output with extraneous information. Avoid stringently columnar or binary input formats. Don't insist on interactive input.
3. Design and build software, even operating systems, to be tried early, ideally within weeks. Don't hesitate to throw away the clumsy parts and rebuild them.
4. Use tools in preference to unskilled help to lighten a programming task, even if you have to detour to build the tools and expect to throw some of them out after you've finished using them.
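[ed.: a tiny sketch in the spirit of rules 1 and 2, with Python standing in for a shell filter (example mine, not from the article): do one thing, read stdin, write stdout, add nothing extraneous, so it composes in a pipeline like `grep TODO notes.txt | python3 nonblank.py | sort`.]

```python
#!/usr/bin/env python3
"""nonblank.py: drop blank lines from stdin, pass everything else through unchanged."""
import sys

for line in sys.stdin:
    if line.strip():            # keep only non-blank lines
        sys.stdout.write(line)  # no banners, prompts, or extra formatting on the output
```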
wiki  concept  philosophy  lens  ideas  design  system-design  programming  engineering  systems  unix  subculture  composition-decomposition  coupling-cohesion  metabuch  skeleton  hi-order-bits  summary  list  top-n  quotes  aphorism  minimalism  minimum-viable  best-practices  intricacy  parsimony  protocol-metadata 
august 2019 by nhaliday
Organizing complexity is the most important skill in software development | Hacker News
- John D. Cook

https://news.ycombinator.com/item?id=9758063
Organization is the hardest part for me personally in getting better as a developer. How to build a structure that is easy to change and extend. Any tips where to find good books or online sources?
hn  commentary  techtariat  reflection  lens  engineering  programming  software  intricacy  parsimony  structure  coupling-cohesion  composition-decomposition  multi  poast  books  recommendations  abstraction  complex-systems  system-design  design  code-organizing  human-capital  project-management 
july 2019 by nhaliday
The Scholar's Stage: Book Notes—Strategy: A History
https://twitter.com/Scholars_Stage/status/1151681120787816448
https://archive.is/Bp5eu
Freedman's book is something of a shadow history of Western intellectual thought between 1850 and 2010. Marx, Tolstoy, Foucault, game theorists, economists, business law--it is all in there.

Thus the thoughts prompted by this book have surprisingly little to do with war.
Instead I am left with questions about the long-term trajectory of Western thought. Specifically:

*Has America really dominated Western intellectual life in the post-45 world as much as English speakers seem to think it has?
*Has the professionalization/credentialization of Western intellectual life helped or harmed our ability to understand society?
*Will we ever recover from the 1960s?
wonkish  unaffiliated  broad-econ  books  review  reflection  summary  strategy  war  higher-ed  academia  social-science  letters  organizing  nascent-state  counter-revolution  rot  westminster  culture-war  left-wing  anglosphere  usa  history  mostly-modern  coordination  lens  local-global  europe  gallic  philosophy  cultural-dynamics  anthropology  game-theory  industrial-org  schelling  flux-stasis  trends  culture  iraq-syria  MENA  military  frontier  info-dynamics  big-peeps  politics  multi  twitter  social  commentary  backup  defense 
july 2019 by nhaliday
Alon Amit's answer to Why is there no formal definition for a set in math? How can we make any statement about sets (and therefore all of math) if we don’t even know what it is? - Quora
In the realm of mathematics, an object is what it does (I keep quoting Tim Gowers with this phrase, and I will likely do so many more times). The only thing that matters about points, lines, real numbers, sets, functions, groups and tempered distributions is the properties and features and rules they obey. What they “are” is of no concern.

I've seen this idea in a lot of different places
q-n-a  qra  math  lens  abstraction  essence-existence  analytical-holistic  forms-instances  big-picture  aphorism  axioms  definition  characterization  zooming 
july 2019 by nhaliday
The Existential Risk of Math Errors - Gwern.net
How big is this upper bound? Mathematicians have often made errors in proofs. But it’s rarer for ideas to be accepted for a long time and then rejected. But we can divide errors into 2 basic cases corresponding to type I and type II errors:

1. Mistakes where the theorem is still true, but the proof was incorrect (type I)
2. Mistakes where the theorem was false, and the proof was also necessarily incorrect (type II)

Before someone comes up with a final answer, a mathematician may have many levels of intuition in formulating & working on the problem, but we’ll consider the final end-product where the mathematician feels satisfied that he has solved it. Case 1 is perhaps the most common case, with innumerable examples; this is sometimes due to mistakes in the proof that anyone would accept as a mistake, but many of these cases are due to changing standards of proof. For example, when David Hilbert discovered errors in Euclid’s proofs which no one noticed before, the theorems were still true, and the gaps were more due to Hilbert being a modern mathematician thinking in terms of formal systems (which of course Euclid did not think in). (David Hilbert himself turns out to be a useful example of the other kind of error: his famous list of 23 problems was accompanied by definite opinions on the outcome of each problem and sometimes timings, several of which were wrong or questionable.) Similarly, early calculus used ‘infinitesimals’ which were sometimes treated as being 0 and sometimes treated as an indefinitely small non-zero number; this was incoherent and, strictly speaking, practically all of the calculus results were wrong because they relied on an incoherent concept - but of course the results were some of the greatest mathematical work ever conducted, and when later mathematicians put calculus on a more rigorous footing, they immediately re-derived those results (sometimes with important qualifications), and doubtless as modern math evolves other fields have sometimes needed to go back and clean up the foundations and will in the future.

...

Isaac Newton, incidentally, gave two proofs of the same solution to a problem in probability, one via enumeration and the other more abstract; the enumeration was correct, but the other proof totally wrong and this was not noticed for a long time, leading Stigler to remark:

...

TYPE I > TYPE II?
“Lefschetz was a purely intuitive mathematician. It was said of him that he had never given a completely correct proof, but had never made a wrong guess either.”
- Gian-Carlo Rota

Case 2 is disturbing, since it is a case in which we wind up with false beliefs and also false beliefs about our beliefs (we no longer know that we don’t know). Case 2 could lead to extinction.

...

Except, errors do not seem to be evenly & randomly distributed between case 1 and case 2. There seem to be far more case 1s than case 2s, as already mentioned in the early calculus example: far more than 50% of the early calculus results were correct when checked more rigorously. Richard Hamming attributes to Ralph Boas the comment, from his time editing Mathematical Reviews, that “of the new results in the papers reviewed most are true but the corresponding proofs are perhaps half the time plain wrong”.

...

Gian-Carlo Rota gives us an example with Hilbert:

...

Olga labored for three years; it turned out that all mistakes could be corrected without any major changes in the statement of the theorems. There was one exception, a paper Hilbert wrote in his old age, which could not be fixed; it was a purported proof of the continuum hypothesis, you will find it in a volume of the Mathematische Annalen of the early thirties.

...

Leslie Lamport advocates for machine-checked proofs and a more rigorous style of proofs similar to natural deduction, noting a mathematician acquaintance guesses at a broad error rate of 1/3 and that he routinely found mistakes in his own proofs and, worse, believed false conjectures.

[more on these "structured proofs":
https://academia.stackexchange.com/questions/52435/does-anyone-actually-publish-structured-proofs
https://mathoverflow.net/questions/35727/community-experiences-writing-lamports-structured-proofs
]

We can probably add software to that list: early software engineering work found that, dismayingly, bug rates seem to be simply a function of lines of code, and one would expect diseconomies of scale. So one would expect that in going from the ~4,000 lines of code of the Microsoft DOS operating system kernel to the ~50,000,000 lines of code in Windows Server 2003 (with full systems of applications and libraries being even larger: the comprehensive Debian repository in 2007 contained ~323,551,126 lines of code) that the number of active bugs at any time would be… fairly large. Mathematical software is hopefully better, but practitioners still run into issues (eg Durán et al 2014, Fonseca et al 2017) and I don’t know of any research pinning down how buggy key mathematical systems like Mathematica are or how much published mathematics may be erroneous due to bugs. This general problem led to predictions of doom and spurred much research into automated proof-checking, static analysis, and functional languages.

[related:
https://mathoverflow.net/questions/11517/computer-algebra-errors
I don't know any interesting bugs in symbolic algebra packages but I know a true, enlightening and entertaining story about something that looked like a bug but wasn't.

Define $\operatorname{sinc} x = \frac{\sin x}{x}$.

Someone found the following result in an algebra package: $\int_0^\infty \operatorname{sinc}(x)\,dx = \pi/2$
They then found the following results:

...

So of course when they got:

$\int_0^\infty \operatorname{sinc}(x)\,\operatorname{sinc}(x/3)\,\operatorname{sinc}(x/5)\cdots\operatorname{sinc}(x/15)\,dx = \frac{467807924713440738696537864469}{935615849440640907310521750000}\,\pi$

hmm:
Which means that nobody knows Fourier analysis nowadays. Very sad and discouraging story... – fedja Jan 29 '10 at 18:47

--

Because the most popular systems are all commercial, they tend to guard their bug database rather closely -- making them public would seriously cut their sales. For example, for the open source project Sage (which is quite young), you can get a list of all the known bugs from this page. 1582 known issues on Feb.16th 2010 (which includes feature requests, problems with documentation, etc).

That is an order of magnitude less than the commercial systems. And it's not because it is better, it is because it is younger and smaller. It might be better, but until SAGE does a lot of analysis (about 40% of CAS bugs are there) and a fancy user interface (another 40%), it is too hard to compare.

I once ran a graduate course whose core topic was studying the fundamental disconnect between the algebraic nature of CAS and the analytic nature of what it is mostly used for. There are issues of logic -- CASes work more or less in an intensional logic, while most of analysis is stated in a purely extensional fashion. There is no well-defined 'denotational semantics' for expressions-as-functions, which strongly contributes to the deeper bugs in CASes.]

...

Should such widely-believed conjectures as P≠NP or the Riemann hypothesis turn out to be false, then because they are assumed by so many existing proofs, a far larger math holocaust would ensue - and our previous estimates of error rates will turn out to have been substantial underestimates. But it may be a cloud with a silver lining, if it doesn’t come at a time of danger.

https://mathoverflow.net/questions/338607/why-doesnt-mathematics-collapse-down-even-though-humans-quite-often-make-mista

more on formal methods in programming:
https://www.quantamagazine.org/formal-verification-creates-hacker-proof-code-20160920/
https://intelligence.org/2014/03/02/bob-constable/

https://softwareengineering.stackexchange.com/questions/375342/what-are-the-barriers-that-prevent-widespread-adoption-of-formal-methods
Update: measured effort
In the October 2018 issue of Communications of the ACM there is an interesting article about Formally verified software in the real world with some estimates of the effort.

Interestingly (based on OS development for military equipment), it seems that producing formally proved software requires 3.3 times more effort than with traditional engineering techniques. So it's really costly.

On the other hand, it requires 2.3 times less effort to get high security software this way than with traditionally engineered software if you add the effort to make such software certified at a high security level (EAL 7). So if you have high reliability or security requirements there is definitely a business case for going formal.

WHY DON'T PEOPLE USE FORMAL METHODS?: https://www.hillelwayne.com/post/why-dont-people-use-formal-methods/
You can see examples of how all of these look at Let’s Prove Leftpad. HOL4 and Isabelle are good examples of “independent theorem” specs, SPARK and Dafny have “embedded assertion” specs, and Coq and Agda have “dependent type” specs.

If you squint a bit it looks like these three forms of code spec map to the three main domains of automated correctness checking: tests, contracts, and types. This is not a coincidence. Correctness is a spectrum, and formal verification is one extreme of that spectrum. As we reduce the rigour (and effort) of our verification we get simpler and narrower checks, whether that means limiting the explored state space, using weaker types, or pushing verification to the runtime. Any means of total specification then becomes a means of partial specification, and vice versa: many consider Cleanroom a formal verification technique, which primarily works by pushing code review far beyond what’s humanly possible.
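[ed.: a rough Python sketch of what the “embedded assertion” style of spec looks like for leftpad; the real Let’s Prove Leftpad entries use tools like SPARK and Dafny that discharge these conditions statically, whereas plain asserts only check them at runtime.]

```python
def leftpad(c: str, n: int, s: str) -> str:
    assert len(c) == 1                            # precondition: pad with a single character
    out = c * max(n - len(s), 0) + s
    # postconditions (the usual leftpad spec):
    assert len(out) == max(n, len(s))             # output length is max(n, len(s))
    assert out.endswith(s)                        # the original string is a suffix
    assert set(out[:len(out) - len(s)]) <= {c}    # everything before it is the pad character
    return out

assert leftpad("0", 5, "42") == "00042"
assert leftpad("!", 1, "abc") == "abc"            # never truncates
```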

...

The question, then: “is 90/95/99% correct significantly cheaper than 100% correct?” The answer is very yes. We all are comfortable saying that a codebase we’ve well-tested and well-typed is mostly correct modulo a few fixes in prod, and we’re even writing more than four lines of code a day. In fact, the vast… [more]
ratty  gwern  analysis  essay  realness  truth  correctness  reason  philosophy  math  proofs  formal-methods  cs  programming  engineering  worse-is-better/the-right-thing  intuition  giants  old-anglo  error  street-fighting  heuristic  zooming  risk  threat-modeling  software  lens  logic  inference  physics  differential  geometry  estimate  distribution  robust  speculation  nonlinearity  cost-benefit  convexity-curvature  measure  scale  trivia  cocktail  history  early-modern  europe  math.CA  rigor  news  org:mag  org:sci  miri-cfar  pdf  thesis  comparison  examples  org:junk  q-n-a  stackex  pragmatic  tradeoffs  cracker-prog  techtariat  invariance  DSL  chart  ecosystem  grokkability  heavyweights  CAS  static-dynamic  lower-bounds  complexity  tcs  open-problems  big-surf  ideas  certificates-recognition  proof-systems  PCP  mediterranean  SDP  meta:prediction  epistemic  questions  guessing  distributed  overflow  nibble  soft-question  track-record  big-list  hmm  frontier  state-of-art  move-fast-(and-break-things)  grokkability-clarity  technical-writing  trust 
july 2019 by nhaliday
C++ Core Guidelines
This document is a set of guidelines for using C++ well. The aim of this document is to help people to use modern C++ effectively. By “modern C++” we mean effective use of the ISO C++ standard (currently C++17, but almost all of our recommendations also apply to C++14 and C++11). In other words, what would you like your code to look like in 5 years’ time, given that you can start now? In 10 years’ time?

https://isocpp.github.io/CppCoreGuidelines/
“Within C++ is a smaller, simpler, safer language struggling to get out.” – Bjarne Stroustrup

...

The guidelines are focused on relatively higher-level issues, such as interfaces, resource management, memory management, and concurrency. Such rules affect application architecture and library design. Following the rules will lead to code that is statically type safe, has no resource leaks, and catches many more programming logic errors than is common in code today. And it will run fast - you can afford to do things right.

We are less concerned with low-level issues, such as naming conventions and indentation style. However, no topic that can help a programmer is out of bounds.

Our initial set of rules emphasize safety (of various forms) and simplicity. They may very well be too strict. We expect to have to introduce more exceptions to better accommodate real-world needs. We also need more rules.

...

The rules are designed to be supported by an analysis tool. Violations of rules will be flagged with references (or links) to the relevant rule. We do not expect you to memorize all the rules before trying to write code.

contrary:
https://aras-p.info/blog/2018/12/28/Modern-C-Lamentations/
This will be a long wall of text, and kinda random! My main points are:
1. C++ compile times are important,
2. Non-optimized build performance is important,
3. Cognitive load is important. I don’t expand much on this here, but if a programming language or a library makes me feel stupid, then I’m less likely to use it or like it. C++ does that a lot :)
programming  engineering  pls  best-practices  systems  c(pp)  guide  metabuch  objektbuch  reference  cheatsheet  elegance  frontier  libraries  intricacy  advanced  advice  recommendations  big-picture  novelty  lens  philosophy  state  error  types  concurrency  memory-management  performance  abstraction  plt  compilers  expert-experience  multi  checking  devtools  flux-stasis  safety  system-design  techtariat  time  measure  dotnet  comparison  examples  build-packaging  thinking  worse-is-better/the-right-thing  cost-benefit  tradeoffs  essay  commentary  oop  correctness  computer-memory  error-handling  resources-effects  latency-throughput 
june 2019 by nhaliday
What's the expected level of paper for top conferences in Computer Science - Academia Stack Exchange
Top. The top level.

My experience on program committees for STOC, FOCS, ITCS, SODA, SOCG, etc., is that there are FAR more submissions of publishable quality than can be accepted into the conference. By "publishable quality" I mean a well-written presentation of a novel, interesting, and non-trivial result within the scope of the conference.

...

There are several questions that come up over and over in the FOCS/STOC review cycle:

- How surprising / novel / elegant / interesting is the result?
- How surprising / novel / elegant / interesting / general are the techniques?
- How technically difficult is the result? Ironically, FOCS and STOC committees have a reputation for ignoring the distinction between trivial (easy to derive from scratch) and nondeterministically trivial (easy to understand after the fact).
- What is the expected impact of this result? Is this paper going to change the way people do theoretical computer science over the next five years?
- Is the result of general interest to the theoretical computer science community? Or is it only of interest to a narrow subcommunity? In particular, if the topic is outside the STOC/FOCS mainstream—say, for example, computational topology—does the paper do a good job of explaining and motivating the results to a typical STOC/FOCS audience?
nibble  q-n-a  overflow  academia  tcs  cs  meta:research  publishing  scholar  lens  properties  cost-benefit  analysis  impetus  increase-decrease  soft-question  motivation  proofs  search  complexity  analogy  problem-solving  elegance  synthesis  hi-order-bits  novelty  discovery 
june 2019 by nhaliday
Jordan Peterson is Wrong About the Case for the Left
I suggest that the tension of which he speaks is fully formed and self-contained completely within conservatism. Balancing those two forces is, in fact, what conservatism is all about. Thomas Sowell, in A Conflict of Visions: Ideological Origins of Political Struggles describes the conservative outlook as (paraphrasing): “There are no solutions, only tradeoffs.”

The real tension is between balance on the right and imbalance on the left.

In Towards a Cognitive Theory of Politics in the online magazine Quillette I make the case that left and right are best understood as psychological profiles consisting of 1) cognitive style, and 2) moral matrix.

There are two predominant cognitive styles and two predominant moral matrices.

The two cognitive styles are described by Arthur Herman in his book The Cave and the Light: Plato Versus Aristotle, and the Struggle for the Soul of Western Civilization, in which Plato and Aristotle serve as metaphors for them. These two quotes from the book summarize the two styles:

Despite their differences, Plato and Aristotle agreed on many things. They both stressed the importance of reason as our guide for understanding and shaping the world. Both believed that our physical world is shaped by certain eternal forms that are more real than matter. The difference was that Plato’s forms existed outside matter, whereas Aristotle’s forms were unrealizable without it. (p. 61)

The twentieth century’s greatest ideological conflicts do mark the violent unfolding of a Platonist versus Aristotelian view of what it means to be free and how reason and knowledge ultimately fit into our lives (p.539-540)

The Platonic cognitive style amounts to pure abstract reason, “unconstrained” by reality. It has no limiting principle. It is imbalanced. Aristotelian thinking also relies on reason, but it is “constrained” by empirical reality. It has a limiting principle. It is balanced.

The two moral matrices are described by Jonathan Haidt in his book The Righteous Mind: Why Good People Are Divided by Politics and Religion. Moral matrices are collections of moral foundations, which are psychological adaptations of social cognition created in us by hundreds of millions of years of natural selection as we evolved into the social animal. There are six moral foundations. They are:

Care/Harm
Fairness/Cheating
Liberty/Oppression
Loyalty/Betrayal
Authority/Subversion
Sanctity/Degradation
The first three moral foundations are called the “individualizing” foundations because they’re focused on the autonomy and well being of the individual person. The second three foundations are called the “binding” foundations because they’re focused on helping individuals form into cooperative groups.

One of the two predominant moral matrices relies almost entirely on the individualizing foundations, and of those mostly just care. It is all individualizing all the time. No balance. The other moral matrix relies on all of the moral foundations relatively equally; individualizing and binding in tension. Balanced.

The leftist psychological profile is made from the imbalanced Platonic cognitive style in combination with the first, imbalanced, moral matrix.

The conservative psychological profile is made from the balanced Aristotelian cognitive style in combination with the balanced moral matrix.

It is not true that the tension between left and right is a balance between the defense of the dispossessed and the defense of hierarchies.

It is true that the tension between left and right is between an imbalanced worldview unconstrained by empirical reality and a balanced worldview constrained by it.

A Venn Diagram of the two psychological profiles looks like this:
commentary  albion  canada  journos-pundits  philosophy  politics  polisci  ideology  coalitions  left-wing  right-wing  things  phalanges  reason  darwinian  tradition  empirical  the-classics  big-peeps  canon  comparison  thinking  metabuch  skeleton  lens  psychology  social-psych  morality  justice  civil-liberty  authoritarianism  love-hate  duty  tribalism  us-them  sanctity-degradation  revolution  individualism-collectivism  n-factor  europe  the-great-west-whale  pragmatic  prudence  universalism-particularism  analytical-holistic  nationalism-globalism  social-capital  whole-partial-many  pic  intersection-connectedness  links  news  org:mag  letters  rhetoric  contrarianism  intricacy  haidt  scitariat  critique  debate  forms-instances  reduction  infographic  apollonian-dionysian  being-becoming  essence-existence 
july 2018 by nhaliday
Why read old philosophy? | Meteuphoric
(This story would suggest that in physics students are maybe missing out on learning the styles of thought that produce progress in physics. My guess is that instead they learn them in grad school when they are doing research themselves, by emulating their supervisors, and that the helpfulness of this might partially explain why Nobel prizewinner advisors beget Nobel prizewinner students.)

The story I hear about philosophy—and I actually don’t know how much it is true—is that as bits of philosophy come to have any methodological tools other than ‘think about it’, they break off and become their own sciences. So this would explain philosophy’s lone status in studying old thinkers rather than impersonal methods—philosophy is the lone ur-discipline with no impersonal method but thinking.

This suggests a research project: try summarizing what Aristotle is doing rather than Aristotle’s views. Then write a nice short textbook about it.
ratty  learning  reading  studying  prioritizing  history  letters  philosophy  science  comparison  the-classics  canon  speculation  reflection  big-peeps  iron-age  mediterranean  roots  lens  core-rats  thinking  methodology  grad-school  academia  physics  giants  problem-solving  meta:research  scholar  the-trenches  explanans  crux  metameta  duplication  sociality  innovation  quixotic  meta:reading  classic 
june 2018 by nhaliday
Christian ethics - Wikipedia
Christian ethics is a branch of Christian theology that defines virtuous behavior and wrong behavior from a Christian perspective. Systematic theological study of Christian ethics is called moral theology, possibly with the name of the respective theological tradition, e.g. Catholic moral theology.

Christian virtues are often divided into four cardinal virtues and three theological virtues. Christian ethics includes questions regarding how the rich should act toward the poor, how women are to be treated, and the morality of war. Christian ethicists, like other ethicists, approach ethics from different frameworks and perspectives. The approach of virtue ethics has also become popular in recent decades, largely due to the work of Alasdair MacIntyre and Stanley Hauerwas.[2]

...

The seven Christian virtues are from two sets of virtues. The four cardinal virtues are Prudence, Justice, Restraint (or Temperance), and Courage (or Fortitude). The cardinal virtues are so called because they are regarded as the basic virtues required for a virtuous life. The three theological virtues are Faith, Hope, and Love (or Charity).

- Prudence: also described as wisdom, the ability to judge between actions with regard to appropriate actions at a given time
- Justice: also considered as fairness, the most extensive and most important virtue[20]
- Temperance: also known as restraint, the practice of self-control, abstention, and moderation tempering the appetition
- Courage: also termed fortitude, forebearance, strength, endurance, and the ability to confront fear, uncertainty, and intimidation
- Faith: belief in God, and in the truth of His revelation as well as obedience to Him (cf. Rom 1:5; 16:26)[21][22]
- Hope: expectation of and desire of receiving; refraining from despair and capability of not giving up. The belief that God will be eternally present in every human's life and never giving up on His love.
- Charity: a supernatural virtue that helps us love God and our neighbors, the same way as we love ourselves.

Seven deadly sins: https://en.wikipedia.org/wiki/Seven_deadly_sins
The seven deadly sins, also known as the capital vices or cardinal sins, is a grouping and classification of vices of Christian origin.[1] Behaviours or habits are classified under this category if they directly give birth to other immoralities.[2] According to the standard list, they are pride, greed, lust, envy, gluttony, wrath, and sloth,[2] which are also contrary to the seven virtues. These sins are often thought to be abuses or excessive versions of one's natural faculties or passions (for example, gluttony abuses one's desire to eat).

originally:
1 Gula (gluttony)
2 Luxuria/Fornicatio (lust, fornication)
3 Avaritia (avarice/greed)
4 Superbia (pride, hubris)
5 Tristitia (sorrow/despair/despondency)
6 Ira (wrath)
7 Vanagloria (vainglory)
8 Acedia (sloth)

Golden Rule: https://en.wikipedia.org/wiki/Golden_Rule
The Golden Rule (which can be considered a law of reciprocity in some religions) is the principle of treating others as one would wish to be treated. It is a maxim that is found in many religions and cultures.[1][2] The maxim may appear as _either a positive or negative injunction_ governing conduct:

- One should treat others as one would like others to treat oneself (positive or directive form).[1]
- One should not treat others in ways that one would not like to be treated (negative or prohibitive form).[1]
- What you wish upon others, you wish upon yourself (empathic or responsive form).[1]
The Golden Rule _differs from the maxim of reciprocity captured in do ut des—"I give so that you will give in return"—and is rather a unilateral moral commitment to the well-being of the other without the expectation of anything in return_.[3]

The concept occurs in some form in nearly every religion[4][5] and ethical tradition[6] and is often considered _the central tenet of Christian ethics_[7] [8]. It can also be explained from the perspectives of psychology, philosophy, sociology, human evolution, and economics. Psychologically, it involves a person empathizing with others. Philosophically, it involves a person perceiving their neighbor also as "I" or "self".[9] Sociologically, "love your neighbor as yourself" is applicable between individuals, between groups, and also between individuals and groups. In evolution, "reciprocal altruism" is seen as a distinctive advance in the capacity of human groups to survive and reproduce, as their exceptional brains demanded exceptionally long childhoods and ongoing provision and protection even beyond that of the immediate family.[10] In economics, Richard Swift, referring to ideas from David Graeber, suggests that "without some kind of reciprocity society would no longer be able to exist."[11]

...

hmm, Meta-Golden Rule already stated:
Seneca the Younger (c. 4 BC–65 AD), a practitioner of Stoicism (c. 300 BC–200 AD) expressed the Golden Rule in his essay regarding the treatment of slaves: "Treat your inferior as you would wish your superior to treat you."[23]

...

The "Golden Rule" was given by Jesus of Nazareth, who used it to summarize the Torah: "Do to others what you want them to do to you." and "This is the meaning of the law of Moses and the teaching of the prophets"[33] (Matthew 7:12 NCV, see also Luke 6:31). The common English phrasing is "Do unto others as you would have them do unto you". A similar form of the phrase appeared in a Catholic catechism around 1567 (certainly in the reprint of 1583).[34] The Golden Rule is _stated positively numerous times in the Hebrew Pentateuch_ as well as the Prophets and Writings. Leviticus 19:18 ("Forget about the wrong things people do to you, and do not try to get even. Love your neighbor as you love yourself."; see also Great Commandment) and Leviticus 19:34 ("But treat them just as you treat your own citizens. Love foreigners as you love yourselves, because you were foreigners one time in Egypt. I am the Lord your God.").

The Old Testament Deuterocanonical books of Tobit and Sirach, accepted as part of the Scriptural canon by Catholic Church, Eastern Orthodoxy, and the Non-Chalcedonian Churches, express a _negative form_ of the golden rule:

"Do to no one what you yourself dislike."

— Tobit 4:15
"Recognize that your neighbor feels as you do, and keep in mind your own dislikes."

— Sirach 31:15
Two passages in the New Testament quote Jesus of Nazareth espousing the _positive form_ of the Golden rule:

Matthew 7:12
Do to others what you want them to do to you. This is the meaning of the law of Moses and the teaching of the prophets.

Luke 6:31
Do to others what you would want them to do to you.

...

The passage in the book of Luke then continues with Jesus answering the question, "Who is my neighbor?", by telling the parable of the Good Samaritan, indicating that "your neighbor" is anyone in need.[35] This extends to all, including those who are generally considered hostile.

Jesus' teaching goes beyond the negative formulation of not doing what one would not like done to themselves, to the positive formulation of actively doing good to another that, if the situations were reversed, one would desire that the other would do for them. This formulation, as indicated in the parable of the Good Samaritan, emphasizes the needs for positive action that brings benefit to another, not simply restraining oneself from negative activities that hurt another. Taken as a rule of judgment, both formulations of the golden rule, the negative and positive, are equally applicable.[36]

The Golden Rule: Not So Golden Anymore: https://philosophynow.org/issues/74/The_Golden_Rule_Not_So_Golden_Anymore
Pluralism is the most serious problem facing liberal democracies today. We can no longer ignore the fact that cultures around the world are not simply different from one another, but profoundly so; and the most urgent area in which this realization faces us is in the realm of morality. Western democratic systems depend on there being at least a minimal consensus concerning national values, especially in regard to such things as justice, equality and human rights. But global communication, economics and the migration of populations have placed new strains on Western democracies. Suddenly we find we must adjust to peoples whose suppositions about the ultimate values and goals of life are very different from ours. A clear lesson from events such as 9/11 is that disregarding these differences is not an option. Collisions between worldviews and value systems can be cataclysmic. Somehow we must learn to manage this new situation.

For a long time, liberal democratic optimism in the West has been shored up by suppositions about other cultures and their differences from us. The cornerpiece of this optimism has been the assumption that whatever differences exist they cannot be too great. A core of ‘basic humanity’ surely must tie all of the world’s moral systems together – and if only we could locate this core we might be able to forge agreements and alliances among groups that otherwise appear profoundly opposed. We could perhaps then shelve our cultural or ideological differences and get on with the more pleasant and productive business of celebrating our core agreement. One cannot fail to see how this hope is repeated in order to buoy optimism about the Middle East peace process, for example.

...

It becomes obvious immediately that no matter how widespread we want the Golden Rule to be, there are some ethical systems that we have to admit do not have it. In fact, there are a few traditions that actually disdain the Rule. In philosophy, the Nietzschean tradition holds that the virtues implicit in the Golden Rule are antithetical to the true virtues of self-assertion and the will-to-power. Among religions, there are a good many that prefer to emphasize the importance of self, cult, clan or tribe rather than of general others; and a good many other religions for whom large populations are simply excluded from goodwill, being labeled as outsiders, heretics or … [more]
article  letters  philosophy  morality  ethics  formal-values  religion  christianity  theos  n-factor  europe  the-great-west-whale  occident  justice  war  peace-violence  janus  virtu  list  sanctity-degradation  class  lens  wealth  gender  sex  sexuality  multi  concept  wiki  reference  theory-of-mind  ideology  cooperate-defect  coordination  psychology  cog-psych  social-psych  emotion  cybernetics  ecology  deep-materialism  new-religion  hsu  scitariat  aphorism  quotes  stories  fiction  gedanken  altruism  parasites-microbiome  food  diet  nutrition  individualism-collectivism  taxes  government  redistribution  analogy  lol  troll  poast  death  long-short-run  axioms  judaism  islam  tribalism  us-them  kinship  interests  self-interest  dignity  civil-liberty  values  homo-hetero  diversity  unintended-consequences  within-without  increase-decrease  signum  ascetic  axelrod  guilt-shame  patho-altruism  history  iron-age  mediterranean  the-classics  robust  egalitarianism-hierarchy  intricacy  hypocrisy  parable  roots  explanans  crux  s 
april 2018 by nhaliday
The Gelman View – spottedtoad
I have read Andrew Gelman’s blog for about five years, and gradually, I’ve decided that among his many blog posts and hundreds of academic articles, he is advancing a philosophy not just of statistics but of quantitative social science in general. I am not a statistician myself, but here is how I would articulate the Gelman View:

A. Purposes

1. The purpose of social statistics is to describe and understand variation in the world. The world is a complicated place, and we shouldn’t expect things to be simple.
2. The purpose of scientific publication is to allow for communication, dialogue, and critique, not to “certify” a specific finding as absolute truth.
3. The incentive structure of science needs to reward attempts to independently investigate, reproduce, and refute existing claims and observed patterns, not just to advance new hypotheses or support a particular research agenda.

B. Approach

1. Because the world is complicated, the most valuable statistical models for the world will generally be complicated. The result of statistical investigations will only rarely be to  give a stamp of truth on a specific effect or causal claim, but will generally show variation in effects and outcomes.
2. Whenever possible, the data, analytic approach, and methods should be made as transparent and replicable as possible, and should be fair game for anyone to examine, critique, or amend.
3. Social scientists should look to build upon a broad shared body of knowledge, not to “own” a particular intervention, theoretic framework, or technique. Such ownership creates incentive problems when the intervention, framework, or technique fail and the scientist is left trying to support a flawed structure.

Components

1. Measurement. How and what we measure is the first question, well before we decide on what the effects are or what is making that measurement change.
2. Sampling. Who we talk to or collect information from always matters, because we should always expect effects to depend on context.
3. Inference. While models should usually be complex, our inferential framework should be simple enough for anyone to follow along. And no p values.

He might disagree with all of this, or how it reflects his understanding of his own work. But I think it is a valuable guide to empirical work.
ratty  unaffiliated  summary  gelman  scitariat  philosophy  lens  stats  hypothesis-testing  science  meta:science  social-science  institutions  truth  is-ought  best-practices  data-science  info-dynamics  alt-inst  academia  empirical  evidence-based  checklists  strategy  epistemic 
november 2017 by nhaliday
What are the Laws of Biology?
The core finding of systems biology is that only a very small subset of possible network motifs is actually used and that these motifs recur in all kinds of different systems, from transcriptional to biochemical to neural networks. This is because only those arrangements of interactions effectively perform some useful operation, which underlies some necessary function at a cellular or organismal level. There are different arrangements for input summation, input comparison, integration over time, high-pass or low-pass filtering, negative auto-regulation, coincidence detection, periodic oscillation, bistability, rapid onset response, rapid offset response, turning a graded signal into a sharp pulse or boundary, and so on, and so on.

These are all familiar concepts and designs in engineering and computing, with well-known properties. In living organisms there is one other general property that the designs must satisfy: robustness. They have to work with noisy components, at a scale that’s highly susceptible to thermal noise and environmental perturbations. Of the subset of designs that perform some operation, only a much smaller subset will do it robustly enough to be useful in a living organism. That is, they can still perform their particular functions in the face of noisy or fluctuating inputs or variation in the number of components constituting the elements of the network itself.
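[ed.: a toy numerical sketch (mine; parameters arbitrary) of one motif named above, negative auto-regulation: a product that represses its own production reaches a given steady state faster than simple constant production tuned to the same steady state, which is one standard reason the motif recurs.]

```python
def simulate(production, degradation=1.0, x0=0.0, dt=0.001, steps=5000):
    """Euler-integrate dX/dt = production(X) - degradation * X and return the trajectory."""
    x, trace = x0, []
    for _ in range(steps):
        x += dt * (production(x) - degradation * x)
        trace.append(x)
    return trace

simple = simulate(lambda x: 1.0)                        # constant production, steady state ~1.0
autoreg = simulate(lambda x: 2.0 if x < 1.0 else 0.0)   # strong production, shut off above a threshold

# steps needed to first reach 90% of the shared steady state
t_simple = next(i for i, v in enumerate(simple) if v >= 0.9)
t_autoreg = next(i for i, v in enumerate(autoreg) if v >= 0.9)
assert t_autoreg < t_simple                             # the auto-regulated circuit responds faster
```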
scitariat  reflection  proposal  ideas  thinking  conceptual-vocab  lens  bio  complex-systems  selection  evolution  flux-stasis  network-structure  structure  composition-decomposition  IEEE  robust  signal-noise  perturbation  interdisciplinary  graphs  circuits  🌞  big-picture  hi-order-bits  nibble  synthesis 
november 2017 by nhaliday
Darwinian medicine - Randolph Nesse
The Dawn of Darwinian Medicine: https://sci-hub.tw/https://www.jstor.org/stable/2830330
TABLE 1 Examples of the use of the theory of natural selection to predict the existence of phenomena otherwise unsuspected
TABLE 2 A classification of phenomena associated with infectious disease
research-program  homepage  links  list  study  article  bio  medicine  disease  parasites-microbiome  epidemiology  evolution  darwinian  books  west-hunter  scitariat  🌞  red-queen  ideas  deep-materialism  biodet  EGT  heterodox  essay  equilibrium  incentives  survey  track-record  priors-posteriors  data  paying-rent  being-right  immune  multi  pdf  piracy  EEA  lens  nibble  🔬  maxim-gun 
november 2017 by nhaliday
Benedict Evans on Twitter: ""University can save you from the autodidact tendency to overrate himself. Democracy depends on people who know they don’t know everything.""
“The autodidact’s risk is that they think they know all of medieval history but have never heard of Charlemagne” - Umberto Eco

Facts are the least part of education. The structure and priorities they fit into matters far more, and learning how to learn far more again
techtariat  sv  twitter  social  discussion  rhetoric  info-foraging  learning  education  higher-ed  academia  expert  lens  aphorism  quotes  hi-order-bits  big-picture  synthesis  expert-experience 
october 2017 by nhaliday
All models are wrong - Wikipedia
Box repeated the aphorism in a paper that was published in the proceedings of a 1978 statistics workshop.[2] The paper contains a section entitled "All models are wrong but some are useful". The section is copied below.

Now it would be very remarkable if any system existing in the real world could be exactly represented by any simple model. However, cunningly chosen parsimonious models often do provide remarkably useful approximations. For example, the law PV = RT relating pressure P, volume V and temperature T of an "ideal" gas via a constant R is not exactly true for any real gas, but it frequently provides a useful approximation and furthermore its structure is informative since it springs from a physical view of the behavior of gas molecules.

For such a model there is no need to ask the question "Is the model true?". If "truth" is to be the "whole truth" the answer must be "No". The only question of interest is "Is the model illuminating and useful?".
thinking  metabuch  metameta  map-territory  models  accuracy  wire-guided  truth  philosophy  stats  data-science  methodology  lens  wiki  reference  complex-systems  occam  parsimony  science  nibble  hi-order-bits  info-dynamics  the-trenches  meta:science  physics  fluid  thermo  stat-mech  applicability-prereqs  theory-practice  elegance  simplification-normalization 
august 2017 by nhaliday
Introduction to Scaling Laws
https://betadecay.wordpress.com/2009/10/02/the-physics-of-scaling-laws-and-dimensional-analysis/
http://galileo.phys.virginia.edu/classes/304/scaling.pdf

Galileo’s Discovery of Scaling Laws: https://www.mtholyoke.edu/~mpeterso/classes/galileo/scaling8.pdf
Days 1 and 2 of Two New Sciences

An example of such an insight is “the surface of a small solid is comparatively greater than that of a large one” because the surface goes like the square of a linear dimension, but the volume goes like the cube. Thus as one scales down macroscopic objects, forces on their surfaces like viscous drag become relatively more important, and bulk forces like weight become relatively less important. Galileo uses this idea on the First Day in the context of resistance in free fall, as an explanation for why similar objects of different size do not fall exactly together, but the smaller one lags behind.
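[ed.: the square-cube relation Galileo is leaning on, stated compactly: for a linear dimension L,

$$ S \propto L^{2}, \qquad W \propto L^{3}, \qquad \frac{S}{W} \propto \frac{1}{L}, $$

so surface forces like drag grow relative to bulk forces like weight as the object shrinks.]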
nibble  org:junk  exposition  lecture-notes  physics  mechanics  street-fighting  problem-solving  scale  magnitude  estimate  fermi  mental-math  calculation  nitty-gritty  multi  scitariat  org:bleg  lens  tutorial  guide  ground-up  tricki  skeleton  list  cheatsheet  identity  levers  hi-order-bits  yoga  metabuch  pdf  article  essay  history  early-modern  europe  the-great-west-whale  science  the-trenches  discovery  fluid  architecture  oceans  giants  tidbits  elegance 
august 2017 by nhaliday
Is the economy illegible? | askblog
In the model of the economy as a GDP factory, the most fundamental equation is the production function, Y = f(K,L).

This says that total output (Y) is determined by the total amount of capital (K) and the total amount of labor (L).

Let me stipulate that the economy is legible to the extent that this model can be applied usefully to explain economic developments. I want to point out that the economy, while never as legible as economists might have thought, is rapidly becoming less legible.
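[ed.: for concreteness, the textbook instance of f (not something Kling specifies here) is Cobb-Douglas,

$$ Y = A\,K^{\alpha}L^{1-\alpha}, \qquad 0 < \alpha < 1, $$

with A total factor productivity; the legibility question is whether aggregates like K and L can still be measured in a way that makes such an equation useful.]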
econotariat  cracker-econ  economics  macro  big-picture  empirical  legibility  let-me-see  metrics  measurement  econ-metrics  volo-avolo  securities  markets  amazon  business-models  business  tech  sv  corporation  inequality  compensation  polarization  econ-productivity  stagnation  monetary-fiscal  models  complex-systems  map-territory  thinking  nationalism-globalism  time-preference  cost-disease  education  healthcare  composition-decomposition  econometrics  methodology  lens  arrows  labor  capital  trends  intricacy  🎩  moments  winner-take-all  efficiency  input-output 
august 2017 by nhaliday
Economics empiricism - Wikipedia
Economics empiricism[1] (sometimes economic imperialism) in contemporary economics refers to economic analysis of seemingly non-economic aspects of life,[2] such as crime,[3] law,[4] the family,[5] prejudice,[6] tastes,[7] irrational behavior,[8] politics,[9] sociology,[10] culture,[11] religion,[12] war,[13] science,[14] and research.[14] Related usage of the term predates recent decades.[15]

The emergence of such analysis has been attributed to a method that, like that of the physical sciences, permits refutable implications[16] testable by standard statistical techniques.[17] Central to that approach are "[t]he combined postulates of maximizing behavior, stable preferences and market equilibrium, applied relentlessly and unflinchingly."[18] It has been asserted that these and a focus on economic efficiency have been ignored in other social sciences and "allowed economics to invade intellectual territory that was previously deemed to be outside the discipline’s realm."[17][19]

The Fluidity of Race: https://westhunt.wordpress.com/2015/01/26/the-fluidity-of-race/
So: what can we conclude about this paper? It’s a classic case of economic imperialism, informed by what ‘intellectuals’ [those that have never been introduced to Punnett squares, Old Blue Light, the Dirac equation, or Melungeons] would like to hear.

It is wrong, not close to right.

Breadth-first search: https://westhunt.wordpress.com/2015/05/24/breadth-first-search/
When I complain about some egregious piece of research, particularly those that are in some sense cross-disciplinary, I often feel that that just knowing more would solve the problem. If Roland Fryer or Oded Galor understood genetics, they wouldn’t make these silly mistakes. If Qian and Nix understood genetics or American post-Civil War history, they would never have written that awful paper about massive passing. Or if paleoanthropologists and population geneticists had learned about mammalian hybrids, they would have been open to the idea of Neanderthal introgression.

But that really amounts to a demand that people learn about five times as much in college and grad school as they actually do. It’s not going to happen. Or, perhaps, find a systematic and effective way of collaborating with people outside their discipline without having their heads shaved. That doesn’t sound too likely either.

Hot enough for you?: https://westhunt.wordpress.com/2015/10/22/hot-enough-for-you/
There’s a new study out in Nature, claiming that economic productivity peaks at 13 degrees Centigrade and that global warming will therefore drastically decrease world GDP.

Singapore. Phoenix. Queensland. Air-conditioners!

Now that I’ve made my point, just how stupid are these people? Do they actually believe this shit? I keep seeing papers by economists – in prominent places – that rely heavily on not knowing jack shit about anything on Earth, papers that could only have been written by someone that didn’t know a damn thing about the subject they were addressing, from the influence of genetic diversity on civilization achievement (zilch) to the massive race-switching that happened after the Civil War (not). Let me tell you, there’s a difference between ‘economic imperialism’ and old-fashioned real imperialism: people like Clive of India or Raffles bothered to learn something about the territory they were conquering. They knew enough to run divide et impera in their sleep: while economists never say peccavi, no matter how badly they screw up.
economics  social-science  thinking  lens  things  conceptual-vocab  concept  academia  wiki  reference  sociology  multi  west-hunter  scitariat  rant  critique  race  usa  history  mostly-modern  methodology  conquest-empire  ideology  optimization  equilibrium  values  pseudoE  science  frontier  thick-thin  interdisciplinary  galor-like  broad-econ  info-dynamics  alt-inst  environment  climate-change  temperature  india  asia  britain  expansionism  diversity  knowledge  ability-competence  commentary  study  summary  org:nat 
july 2017 by nhaliday
Garett Jones on Twitter: "Morality is made up. https://t.co/EWHW4hPtyG"
https://archive.is/lH8Fw

woah: https://twitter.com/GarettJones/status/889250591876161537
https://archive.is/fsaBm
Moral equality is not a lie and not dependent on the abilities of the individual. It's very dangerous to confuse ability with dignity.
But various moralities are preferences, not facts. I know of no sound proof for objective moral human equality--and de gustibus holds true.

https://twitter.com/GarettJones/status/1150543864832200705
https://archive.is/nxOWZ
Here's a Michelson-Morley-type claim: That discovering the true morality was the "Fuel for Success" for our species.

They then wrestle with the possibility that the true morality isn't the morality we moderns would prefer to embrace: maybe true morality breaks the wrong eggs.

Evolution and Moral Realism: https://academic.oup.com/bjps/article/68/4/981/2669734

RTed by QL:
https://twitter.com/intelevildust/status/1147609867189936129
https://archive.is/dATeX
econotariat  spearhead  garett-jones  twitter  social  commentary  discussion  morality  ethics  formal-values  philosophy  values  economics  lens  ideology  thinking  multi  inequality  envy  egalitarianism-hierarchy  absolute-relative  backup  social-structure  order-disorder  dignity  nihil  realness  pic  memes(ew)  gnon  is-ought  troll 
june 2017 by nhaliday
Defection – quas lacrimas peperere minoribus nostris!
https://quaslacrimas.wordpress.com/2017/06/28/discussion-of-defection/

Kindness Against The Grain: https://srconstantin.wordpress.com/2017/06/08/kindness-against-the-grain/
I’ve heard from a number of secular-ish sources (Carse, Girard, Arendt) that the essential contribution of Christianity to human thought is the concept of forgiveness. (Ribbonfarm also has a recent post on the topic of forgiveness.)

I have never been a Christian and haven’t even read all of the New Testament, so I’ll leave it to commenters to recommend Christian sources on the topic.

What I want to explore is the notion of kindness without a smooth incentive gradient.

The Social Module: https://bloodyshovel.wordpress.com/2015/10/09/the-social-module/
Now one could propose that the basic principle of human behavior is to raise the SP number. Sure there’s survival and reproduction. Most people would forget all their socialization if left hungry and thirsty for days in the jungle. But more often than not, survival and reproduction depend on being high status; having a good name among your peers is the best way to get food, housing and hot mates.

The way to raise one’s SP number depends on thousands of different factors. We could grab most of them and call them “culture”. In China having 20 teenage mistresses as an old man raises your SP; in Western polite society it is social death. In the West making a fuss about disobeying one’s parents raises your SP, everywhere else it lowers it a great deal. People know that; which is why bureaucrats in China go to great lengths to acquire a stash of young women (who they seldom have time to actually enjoy), while teenagers in the West go to great lengths to be annoying to their parents for no good reason.

...

It thus shouldn’t surprise us that something as completely absurd as Progressivism is the law of the land in most of the world today, even though it denies obvious reality. It is not the case that most people know that progressive points are all bogus, but obey because of fear or cowardice. No, an average human brain has much more neurons being used to scan the social climate and see how SP are allotted, than neurons being used to analyze patterns in reality to ascertain the truth. Surely your brain does care a great deal about truth in some very narrow areas of concern to you. Remember Conquest’s first law: Everybody is Conservative about what he knows best. You have to know the truth about what you do, if you are to do it effectively.

But you don’t really care about truth anywhere else. And why would you? It takes time and effort you can’t really spare, and it’s not really necessary. As long as you have some area of specialization where you can make a living, all the rest you must do to achieve survival and reproduction is to raise your SP so you don’t get killed and your guts sacrificed to the mountain spirits.

SP theory (I accept suggestions for a better name) can also explains the behavior of leftists. Many conservatives of a medium level of enlightenment point out the paradox that leftists historically have held completely different ideas. Leftism used to be about the livelihood of industrial workers, now they agitate about the environment, or feminism, or foreigners. Some people would say that’s just historical change, or pull a No True Scotsman about this or that group not being really leftists. But that’s transparent bullshit; very often we see a single person shifting from agitating about Communism and worker rights, to agitate about global warming or rape culture.

...

The leftist strategy could be defined as “psychopathic SP maximization”. Leftists attempt to destroy social equilibrium so that they can raise their SP number. If humans are, in a sense, programmed to constantly raise their status, well high status people by definition can’t raise it anymore (though they can squabble against each other for marginal gains), their best strategy is to freeze society in place so that they can enjoy their superiority. High status people by definition have power, and thus social hierarchy during human history tends to be quite stable.

This goes against the interests of many. First of all the lower status people, who, well, want to raise their status, but can’t manage to do so. And it also goes against the interests of the particularly annoying members of the upper class who want to raise their status on the margin. Conservative people can be defined as those who, no matter the absolute level, are in general happy with it. This doesn’t mean they don’t want higher status (by definition all humans do), but the output of other brain modules may conclude that attempts to raise SP might threaten one’s survival and reproduction; or just that the chances of raising one’s individual SP is hopeless, so one might as well stay put.

...

You can’t blame people for being logically inconsistent; because they can’t possibly know anything about all these issues. Few have any experience or knowledge about evolution and human races, or about the history of black people to make an informed judgment on HBD. Few have time to learn about sex differences, and stuff like the climate is as close to unknowable as there is. Opinions about anything but a very narrow area of expertise are always output of your SP module, not any judgment of fact. People don’t know the facts. And even when they know; I mean most people have enough experience with sex differences and black dysfunction to be quite confident that progressive ideas are false. But you can never be sure. As Hume said, the laws of physics are a judgment of habit; who is to say that a genie isn’t going to change all you know the next morning? At any rate, you’re always better off toeing the line, following the conventional wisdom, and keeping your dear SP. Perhaps you can even raise them a bit. And that is very nice. It is niceness itself.

Leftism is just an easy excuse: https://bloodyshovel.wordpress.com/2015/03/01/leftism-is-just-an-easy-excuse/
Unless you’re not the only defector. You need a way to signal your intention to defect, so that other disloyal fucks such as yourself (and they’re bound to be others) can join up, thus reducing the likely costs of defection. The way to signal your intention to defect is to come up with a good excuse. A good excuse to be disloyal becomes a rallying point through which other defectors can coordinate and cover their asses so that the ruling coalition doesn’t punish them. What is a good excuse?

Leftism is a great excuse. Claiming that the ruling coalition isn’t leftist enough, isn’t holy enough, not inclusive enough of women, of blacks, of gays, or gorillas, of pedophiles, of murderous Salafists, is the perfect way of signalling your disloyalty towards the existing power coalition. By using the existing ideology and pushing its logic just a little bit, you ensure that the powerful can’t punish you. At least not openly. And if you’re lucky, the mass of disloyal fucks in the ruling coalition might join your banner, and use your exact leftist point to jump ship and outflank the powerful.

...

The same dynamic fuels the flattery inflation one sees in monarchical or dictatorial systems. In Mao China, if you want to defect, you claim to love Mao more than your boss. In Nazi Germany, you proclaim your love for Hitler and the great insight of his plan to take Stalingrad. In the Roman Empire, you claimed that Caesar is a God, son of Hercules, and those who deny it are treacherous bastards. In Ancient Persia you loudly proclaimed your faith in the Shah being the brother of the Sun and the Moon and King of all Kings on Earth. In Reformation Europe you proclaimed that you have discovered something new in the Bible and everybody else is damned to hell. Predestined by God!

...

And again: the precise content of the ideological point doesn’t matter. Your human brain doesn’t care about ideology. Humans didn’t evolve to care about Marxist theory of class struggle, or about LGBTQWERTY theories of social identity. You just don’t know what it means. It’s all abstract points you’ve been told in a classroom. It doesn’t actually compute. Nothing that anybody ever said in a political debate ever made any actual, concrete sense to a human being.

So why do we care so much about politics? What’s the point of ideology? Ideology is just the water you swim in. It is a structured database of excuses, to be used to signal your allegiance or defection to the existing ruling coalition. Ideology is just the feed of the rationalization Hamster that runs incessantly in that corner of your brain. But it is immaterial, and in most cases actually inaccessible to the logical modules in your brain.

Nobody ever acts on their overt ideological claims if they can get away with it. Liberals proclaim their faith in the potential of black children while clustering in all white suburbs. Communist party members loudly talk about the proletariat while being hedonistic spenders. Al Gore talks about Global Warming while living in a lavish mansion. Cognitive dissonance, you say? No; those cognitive systems are not connected in the first place.

...

And so, every little step in the way, power-seekers moved the consensus to the left. And open societies, democratic systems are by their decentralized nature, and by the size of their constituencies, much more vulnerable to this sort of signalling attacks. It is but impossible to appraise and enforce the loyalty of every single individual involved in a modern state. There’s too many of them. A Medieval King had a better chance of it; hence the slow movement of ideological innovation in those days. But the bigger the organization, the harder it is to gather accurate information of the loyalty of the whole coalition; and hence the ideological movement accelerates. And there is no stopping it.

Like the Ancients, We Have Gods. They’ll Get Greater: http://www.overcomingbias.com/2018/04/like-the-ancients-we-have-gods-they-may-get… [more]
gnon  commentary  critique  politics  polisci  strategy  tactics  thinking  GT-101  game-theory  cooperate-defect  hypocrisy  institutions  incentives  anthropology  morality  ethics  formal-values  ideology  schelling  equilibrium  multi  links  debate  ethnocentrism  cultural-dynamics  decision-making  socs-and-mops  anomie  power  info-dynamics  propaganda  signaling  axelrod  organizing  impetus  democracy  antidemos  duty  coalitions  kinship  religion  christianity  theos  n-factor  trust  altruism  noble-lie  japan  asia  cohesion  reason  scitariat  status  fashun  history  mostly-modern  world-war  west-hunter  sulla  unintended-consequences  iron-age  china  sinosphere  stories  leviathan  criminal-justice  peace-violence  nihil  wiki  authoritarianism  egalitarianism-hierarchy  cocktail  ssc  parable  open-closed  death  absolute-relative  justice  management  explanans  the-great-west-whale  occident  orient  courage  vitality  domestication  revolution  europe  pop-diff  alien-character  diversity  identity-politics  westminster  kumbaya-kult  cultu 
june 2017 by nhaliday
Why I see academic economics moving left | askblog
http://www.arnoldkling.com/blog/on-the-state-of-economics/
http://www.nationalaffairs.com/publications/detail/how-effective-is-economic-theory
I have a long essay on the scientific status of economics in National Affairs. A few excerpts from the conclusion:

In the end, can we really have effective theory in economics? If by effective theory we mean theory that is verifiable and reliable for prediction and control, the answer is likely no. Instead, economics deals in speculative interpretations and must continue to do so.

Young economists who employ pluralistic methods to study problems are admired rather than marginalized, as they were in 1980. But economists who question the wisdom of interventionist economic policies seem headed toward the fringes of the profession.

This is my essay in which I say that academic economics is on the road to sociology.

example...?:
Property Is Only Another Name for Monopoly: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2818494
Hanson's take more positive: http://www.overcomingbias.com/2017/10/for-stability-rents.html

women:
http://www.arnoldkling.com/blog/college-women-and-the-future-of-economics/
http://www.arnoldkling.com/blog/road-to-sociology-watch-2/
http://www.arnoldkling.com/blog/road-to-sociology-watch-3/
econotariat  cracker-econ  commentary  prediction  trends  economics  social-science  ideology  politics  left-wing  regulation  empirical  measurement  methodology  academia  multi  links  news  org:mag  essay  longform  randy-ayndy  sociology  technocracy  realness  hypocrisy  letters  study  property-rights  taxes  civil-liberty  efficiency  arbitrage  alt-inst  proposal  incentives  westminster  lens  truth  info-foraging  ratty  hanson  summary  review  biases  concrete  abstraction  managerial-state  gender  identity-politics  higher-ed 
may 2017 by nhaliday
Lucio Russo - Wikipedia
In The Forgotten Revolution: How Science Was Born in 300 BC and Why It Had to Be Reborn (Italian: La rivoluzione dimenticata), Russo promotes the belief that Hellenistic science in the period 320-144 BC reached heights not achieved by Classical age science, and proposes that it went further than ordinarily thought, in multiple fields not normally associated with ancient science.

La Rivoluzione Dimenticata (The Forgotten Revolution), Reviewed by Sandro Graffi: http://www.ams.org/notices/199805/review-graffi.pdf

Before turning to the question of the decline of Hellenistic science, I come back to the new light shed by the book on Euclid’s Elements and on pre-Ptolemaic astronomy. Euclid’s definitions of the elementary geometric entities—point, straight line, plane—at the beginning of the Elements have long presented a problem. Their nature is in sharp contrast with the approach taken in the rest of the book, and continued by mathematicians ever since, of refraining from defining the fundamental entities explicitly but limiting themselves to postulating the properties which they enjoy. Why should Euclid be so hopelessly obscure right at the beginning and so smooth just after? The answer is: the definitions are not Euclid’s. Toward the beginning of the second century A.D. Heron of Alexandria found it convenient to introduce definitions of the elementary objects (a sign of decadence!) in his commentary on Euclid’s Elements, which had been written at least 400 years before. All manuscripts of the Elements copied ever since included Heron’s definitions without mention, whence their attribution to Euclid himself. The philological evidence leading to this conclusion is quite convincing.

...

What about the general and steady (on the average) impoverishment of Hellenistic science under the Roman empire? This is a major historical problem, strongly tied to the even bigger one of the decline and fall of the antique civilization itself. I would summarize the author’s argument by saying that it basically represents an application to science of a widely accepted general theory on decadence of antique civilization going back to Max Weber. Roman society, mainly based on slave labor, underwent an ultimately unrecoverable crisis as the traditional sources of that labor force, essentially wars, progressively dried up. To save basic farming, the remaining slaves were promoted to be serfs, and poor free peasants reduced to serfdom, but this made trade disappear. A society in which production is almost entirely based on serfdom and with no trade clearly has very little need of culture, including science and technology. As Max Weber pointed out, when trade vanished, so did the marble splendor of the ancient towns, as well as the spiritual assets that went with it: art, literature, science, and sophisticated commercial laws. The recovery of Hellenistic science then had to wait until the disappearance of serfdom at the end of the Middle Ages. To quote Max Weber: “Only then with renewed vigor did the old giant rise up again.”

...

The epilogue contains the (rather pessimistic) views of the author on the future of science, threatened by the apparent triumph of today’s vogue of irrationality even in leading institutions (e.g., an astrology professorship at the Sorbonne). He looks at today’s ever-increasing tendency to teach science more on a fideistic than on a deductive or experimental basis as the first sign of a decline which could be analogous to the post-Hellenistic one.

Praising Alexandrians to excess: https://sci-hub.tw/10.1088/2058-7058/17/4/35
The Economic Record review: https://sci-hub.tw/10.1111/j.1475-4932.2004.00203.x

listed here: https://pinboard.in/u:nhaliday/b:c5c09f2687c1

Was Roman Science in Decline? (Excerpt from My New Book): https://www.richardcarrier.info/archives/13477
people  trivia  cocktail  history  iron-age  mediterranean  the-classics  speculation  west-hunter  scitariat  knowledge  wiki  ideas  wild-ideas  technology  innovation  contrarianism  multi  pdf  org:mat  books  review  critique  regularizer  todo  piracy  physics  canon  science  the-trenches  the-great-west-whale  broad-econ  the-world-is-just-atoms  frontier  speedometer  🔬  conquest-empire  giants  economics  article  growth-econ  cjones-like  industrial-revolution  empirical  absolute-relative  truth  rot  zeitgeist  gibbon  big-peeps  civilization  malthus  roots  old-anglo  britain  early-modern  medieval  social-structure  limits  quantitative-qualitative  rigor  lens  systematic-ad-hoc  analytical-holistic  cycles  space  mechanics  math  geometry  gravity  revolution  novelty  meta:science  is-ought  flexibility  trends  reason  applicability-prereqs  theory-practice  traces  evidence  psycho-atoms 
may 2017 by nhaliday
'Capital in the Twenty-First Century' by Thomas Piketty, reviewed | New Republic
by Robert Solow (positive)

The data then exhibit a clear pattern. In France and Great Britain, national capital stood fairly steadily at about seven times national income from 1700 to 1910, then fell sharply from 1910 to 1950, presumably as a result of wars and depression, reaching a low of 2.5 in Britain and a bit less than 3 in France. The capital-income ratio then began to climb in both countries, and reached slightly more than 5 in Britain and slightly less than 6 in France by 2010. The trajectory in the United States was slightly different: it started at just above 3 in 1770, climbed to 5 in 1910, fell slightly in 1920, recovered to a high between 5 and 5.5 in 1930, fell to below 4 in 1950, and was back to 4.5 in 2010.

The wealth-income ratio in the United States has always been lower than in Europe. The main reason in the early years was that land values bulked less in the wide open spaces of North America. There was of course much more land, but it was very cheap. Into the twentieth century and onward, however, the lower capital-income ratio in the United States probably reflects the higher level of productivity: a given amount of capital could support a larger production of output than in Europe. It is no surprise that the two world wars caused much less destruction and dissipation of capital in the United States than in Britain and France. The important observation for Piketty’s argument is that, in all three countries, and elsewhere as well, the wealth-income ratio has been increasing since 1950, and is almost back to nineteenth-century levels. He projects this increase to continue into the current century, with weighty consequences that will be discussed as we go on.

...

Now if you multiply the rate of return on capital by the capital-income ratio, you get the share of capital in the national income. For example, if the rate of return is 5 percent a year and the stock of capital is six years worth of national income, income from capital will be 30 percent of national income, and so income from work will be the remaining 70 percent. At last, after all this preparation, we are beginning to talk about inequality, and in two distinct senses. First, we have arrived at the functional distribution of income—the split between income from work and income from wealth. Second, it is always the case that wealth is more highly concentrated among the rich than income from labor (although recent American history looks rather odd in this respect); and this being so, the larger the share of income from wealth, the more unequal the distribution of income among persons is likely to be. It is this inequality across persons that matters most for good or ill in a society.
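Solow's arithmetic in one line (the 5 percent and six-years figures are his own illustrative numbers; the alpha = r * beta identity is Piketty's "first law"):

r = 0.05                  # rate of return on capital, per year
beta = 6.0                # capital-income ratio: capital stock = 6 years of national income
capital_share = r * beta  # share of national income accruing to capital
print(capital_share)      # 0.30 -> 30% of income from capital, the remaining 70% from work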

...

The data are complicated and not easily comparable across time and space, but here is the flavor of Piketty’s summary picture. Capital is indeed very unequally distributed. Currently in the United States, the top 10 percent own about 70 percent of all the capital, half of that belonging to the top 1 percent; the next 40 percent—who compose the “middle class”—own about a quarter of the total (much of that in the form of housing), and the remaining half of the population owns next to nothing, about 5 percent of total wealth. Even that amount of middle-class property ownership is a new phenomenon in history. The typical European country is a little more egalitarian: the top 1 percent own 25 percent of the total capital, and the middle class 35 percent. (A century ago the European middle class owned essentially no wealth at all.) If the ownership of wealth in fact becomes even more concentrated during the rest of the twenty-first century, the outlook is pretty bleak unless you have a taste for oligarchy.

Income from wealth is probably even more concentrated than wealth itself because, as Piketty notes, large blocks of wealth tend to earn a higher return than small ones. Some of this advantage comes from economies of scale, but more may come from the fact that very big investors have access to a wider range of investment opportunities than smaller investors. Income from work is naturally less concentrated than income from wealth. In Piketty’s stylized picture of the United States today, the top 1 percent earns about 12 percent of all labor income, the next 9 percent earn 23 percent, the middle class gets about 40 percent, and the bottom half about a quarter of income from work. Europe is not very different: the top 10 percent collect somewhat less and the other two groups a little more.

You get the picture: modern capitalism is an unequal society, and the rich-get-richer dynamic strongly suggest that it will get more so. But there is one more loose end to tie up, already hinted at, and it has to do with the advent of very high wage incomes. First, here are some facts about the composition of top incomes. About 60 percent of the income of the top 1 percent in the United States today is labor income. Only when you get to the top tenth of 1 percent does income from capital start to predominate. The income of the top hundredth of 1 percent is 70 percent from capital. The story for France is not very different, though the proportion of labor income is a bit higher at every level. Evidently there are some very high wage incomes, as if you didn’t know.

This is a fairly recent development. In the 1960s, the top 1 percent of wage earners collected a little more than 5 percent of all wage incomes. This fraction has risen pretty steadily until nowadays, when the top 1 percent of wage earners receive 10–12 percent of all wages. This time the story is rather different in France. There the share of total wages going to the top percentile was steady at 6 percent until very recently, when it climbed to 7 percent. The recent surge of extreme inequality at the top of the wage distribution may be primarily an American development. Piketty, who with Emmanuel Saez has made a careful study of high-income tax returns in the United States, attributes this to the rise of what he calls “supermanagers.” The very highest income class consists to a substantial extent of top executives of large corporations, with very rich compensation packages. (A disproportionate number of these, but by no means all of them, come from the financial services industry.) With or without stock options, these large pay packages get converted to wealth and future income from wealth. But the fact remains that much of the increased income (and wealth) inequality in the United States is driven by the rise of these supermanagers.

and Deirdre McCloskey (p critical): https://ejpe.org/journal/article/view/170
nice discussion of empirical economics, economic history, market failures and statism, etc., with several bon mots

Piketty’s great splash will undoubtedly bring many young economically interested scholars to devote their lives to the study of the past. That is good, because economic history is one of the few scientifically quantitative branches of economics. In economic history, as in experimental economics and a few other fields, the economists confront the evidence (as they do not for example in most macroeconomics or industrial organization or international trade theory nowadays).

...

Piketty gives a fine example of how to do it. He does not get entangled as so many economists do in the sole empirical tool they are taught, namely, regression analysis on someone else’s “data” (one of the problems is the word data, meaning “things given”: scientists should deal in capta, “things seized”). Therefore he does not commit one of the two sins of modern economics, the use of meaningless “tests” of statistical significance (he occasionally refers to “statistically insignificant” relations between, say, tax rates and growth rates, but I am hoping he does not suppose that a large coefficient is “insignificant” because R. A. Fisher in 1925 said it was). Piketty constructs or uses statistics of aggregate capital and of inequality and then plots them out for inspection, which is what physicists, for example, also do in dealing with their experiments and observations. Nor does he commit the other sin, which is to waste scientific time on existence theorems. Physicists, again, don’t. If we economists are going to persist in physics envy let us at least learn what physicists actually do. Piketty stays close to the facts, and does not, for example, wander into the pointless worlds of non-cooperative game theory, long demolished by experimental economics. He also does not have recourse to non-computable general equilibrium, which never was of use for quantitative economic science, being a branch of philosophy, and a futile one at that. On both points, bravissimo.

...

Since those founding geniuses of classical economics, a market-tested betterment (a locution to be preferred to “capitalism”, with its erroneous implication that capital accumulation, not innovation, is what made us better off) has enormously enriched large parts of a humanity now seven times larger in population than in 1800, and bids fair in the next fifty years or so to enrich everyone on the planet. [Not SSA or MENA...]

...

Then economists, many on the left but some on the right, in quick succession from 1880 to the present—at the same time that market-tested betterment was driving real wages up and up and up—commenced worrying about, to name a few of the pessimisms concerning “capitalism” they discerned: greed, alienation, racial impurity, workers’ lack of bargaining strength, workers’ bad taste in consumption, immigration of lesser breeds, monopoly, unemployment, business cycles, increasing returns, externalities, under-consumption, monopolistic competition, separation of ownership from control, lack of planning, post-War stagnation, investment spillovers, unbalanced growth, dual labor markets, capital insufficiency (William Easterly calls it “capital fundamentalism”), peasant irrationality, capital-market imperfections, public … [more]
news  org:mag  big-peeps  econotariat  economics  books  review  capital  capitalism  inequality  winner-take-all  piketty  wealth  class  labor  mobility  redistribution  growth-econ  rent-seeking  history  mostly-modern  trends  compensation  article  malaise  🎩  the-bones  whiggish-hegelian  cjones-like  multi  mokyr-allen-mccloskey  expert  market-failure  government  broad-econ  cliometrics  aphorism  lens  gallic  clarity  europe  critique  rant  optimism  regularizer  pessimism  ideology  behavioral-econ  authoritarianism  intervention  polanyi-marx  politics  left-wing  absolute-relative  regression-to-mean  legacy  empirical  data-science  econometrics  methodology  hypothesis-testing  physics  iron-age  mediterranean  the-classics  quotes  krugman  world  entrepreneurialism  human-capital  education  supply-demand  plots  manifolds  intersection  markets  evolution  darwinian  giants  old-anglo  egalitarianism-hierarchy  optimate  morality  ethics  envy  stagnation  nl-and-so-can-you  expert-experience  courage  stats  randy-ayndy  reason  intersection-connectedness  detail-architect 
april 2017 by nhaliday
Eebers and robbers | West Hunter
A year or so ago I was on a review committee for a department of biology. It was a pleasant and productive department, but it soon became apparent to us that it was in effect two departments sharing the same building. One was eeb (ecology and evolutionary biology), while the other was, in their jargon, rob (the rest of biology.) Relations were cordial between the two for the most part but there was almost no interaction nor interest across the divide.

The same divide is increasingly apparent in genetics, genomics, and human evolution. Several years ago a colleague suggested to me that the idea that mathematics is the language of science was no longer very accurate. There are, he said, two languages of science, one being mathematics and the other organic chemistry. He was onto something. People who speak mathematics and models, eebers, and people who speak organic chemistry, robbers, are more and more out of touch with each other.
west-hunter  science  academia  tribalism  lens  things  bio  evolution  genetics  population-genetics  big-picture  reflection  scitariat 
march 2017 by nhaliday