
nhaliday : critique   578

Square Signals : Successful habits through smoothly ratcheting...
But in 2017, I shifted strategies and successfully built four new habits (of five attempted): piano practice, internetless mornings, carbless workdays, and meditation. In past years I’d feel lucky if I built just one new habit! I’d like to share my approach: smoothly ratcheted targets, in moving weekly windows, with teeth. Before I unpack that, let’s cover some background.
techtariat  michael-nielsen  advice  recommendations  habit  productivity  discipline  self-control  smoothness  wire-guided  🦉  metabuch  checklists  growth  track-record  reflection  reinforcement  quantified-self  software  critique  ui  ux  beeminder 
10 weeks ago by nhaliday
Zettlr | "Wtf is a Zettelkasten?"
The Zettelkasten Manifesto
In case you're still wondering what a Zettelkasten is and need a little more incentive to get started, please have a look at a video we made earlier this week, where we outline why the notion of a Zettelkasten has become so intrinsically linked to the name of Niklas Luhmann, why we think that this is bad, and how we think we should think of Zettelkästen:
techtariat  org:com  project  software  tools  exocortex  notetaking  workflow  thinking  dbs  structure  network-structure  critique  graphs  stay-organized  germanic  metabuch 
12 weeks ago by nhaliday
The Open Steno Project | Hacker News
https://web.archive.org/web/20170315133208/http://www.danieljosephpetersen.com/posts/programming-and-stenography.html
I think at the end of the day, the Plover guys are trying to solve the wrong problem. Stenography is a dying field. I don’t wish anyone to lose their livelihood, but realistically speaking, the job should not exist once speech-to-text technology advances far enough. I’m not claiming that the field will be replaced by it, but I also don’t love the idea of people having to learn such an inane and archaic system.
hn  commentary  keyboard  speed  efficiency  writing  language  maker  homepage  project  multi  techtariat  cost-benefit  critique  expert-experience  programming  backup  contrarianism 
november 2019 by nhaliday
Advantages and disadvantages of building a single page web application - Software Engineering Stack Exchange
Advantages
- All data has to be available via some sort of API - this is a big advantage for my use case as I want to have an API to my application anyway. Right now about 60-70% of my calls to get/update data are done through a REST API. Doing a single page application will allow me to better test my REST API since the application itself will use it. It also means that as the application grows, the API itself will grow since that is what the application uses; no need to maintain the API as an add-on to the application.
- More responsive application - since all data loaded after the initial page is kept to a minimum and transmitted in a compact format (like JSON), data requests should generally be faster, and the server will do slightly less processing.

Disadvantages
- Duplication of code - for example, model code. I am going to have to create models both on the server side (PHP in this case) and the client side in Javascript.
- Business logic in Javascript - I can't give any concrete examples on why this would be bad but it just doesn't feel right to me having business logic in Javascript that anyone can read.
- Javascript memory leaks - since the page never reloads, Javascript memory leaks can happen, and I would not even know where to begin to debug them.

--

Disadvantages I often see with Single Page Web Applications:
- Inability to link to a specific part of the site; there's often only one entry point.
- Dysfunctional back and forward buttons.
- The use of tabs is limited or non-existent.
(especially mobile:)
- Take very long to load.
- Don't function at all.
- Can't reload a page, a sudden loss of network takes you back to the start of the site.

This answer is outdated; most single-page application frameworks have a way to deal with the issues above. – Luis May 27 '14 at 1:41
@Luis while the technology is there, too often it isn't used. – Pieter B Jun 12 '14 at 6:53

https://softwareengineering.stackexchange.com/questions/201838/building-a-web-application-that-is-almost-completely-rendered-by-javascript-whi

https://softwareengineering.stackexchange.com/questions/143194/what-advantages-are-conferred-by-using-server-side-page-rendering
Server-side HTML rendering:
- Fastest browser rendering
- Page caching is possible as a quick-and-dirty performance boost
- For "standard" apps, many UI features are pre-built
- Sometimes considered more stable because components are usually subject to compile-time validation
- Leans on backend expertise
- Sometimes faster to develop*
*When UI requirements fit the framework well.

Client-side HTML rendering:
- Lower bandwidth usage
- Slower initial page render. May not even be noticeable in modern desktop browsers. If you need to support IE6-7, or many mobile browsers (mobile webkit is not bad) you may encounter bottlenecks.
- Building API-first means the client can just as easily be a proprietary app, thin client, another web service, etc.
- Leans on JS expertise
- Sometimes faster to develop**
**When the UI is largely custom, with more interesting interactions. Also, I find coding in the browser with interpreted code noticeably speedier than waiting for compiles and server restarts.
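
A minimal sketch of the same tradeoff (using Flask purely as an assumed stand-in; the answers above talk about PHP and JavaScript, and none of this code comes from them): the same data exposed once as server-rendered HTML and once as a JSON endpoint that an SPA, a native app, or another service could consume.

```python
# Hedged sketch: server-side rendering vs. an API-first JSON endpoint (Flask assumed).
from flask import Flask, jsonify, render_template_string

app = Flask(__name__)
ITEMS = [{"id": 1, "name": "foo"}, {"id": 2, "name": "bar"}]  # stand-in data

@app.route("/items")
def items_page():
    # Server-side rendering: the browser receives finished HTML, no JS required.
    template = "<ul>{% for it in items %}<li>{{ it.name }}</li>{% endfor %}</ul>"
    return render_template_string(template, items=ITEMS)

@app.route("/api/items")
def items_api():
    # API-first: the SPA, a mobile app, or another service all hit this endpoint.
    return jsonify(items=ITEMS)

if __name__ == "__main__":
    app.run()
```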

https://softwareengineering.stackexchange.com/questions/237537/progressive-enhancement-vs-single-page-apps

https://stackoverflow.com/questions/21862054/single-page-application-advantages-and-disadvantages
=== ADVANTAGES ===
1. SPAs are extremely good for very responsive sites.
2. With an SPA we don't need extra requests to the server to download pages.
3. Maybe there are other advantages? I haven't heard of any others.

=== DISADVANTAGES ===
1. Client must enable JavaScript.
2. Only one entry point to the site.
3. Security.

https://softwareengineering.stackexchange.com/questions/287819/should-you-write-your-back-end-as-an-api
focused on .NET

https://softwareengineering.stackexchange.com/questions/337467/is-it-normal-design-to-completely-decouple-backend-and-frontend-web-applications
A SPA comes with a few issues associated with it. Here are just a few that pop in my mind now:
- it's mostly JavaScript. One error in a section of your application might prevent other sections of the application from working because of that JavaScript error.
- CORS.
- SEO.
- separate front-end application means separate projects, deployment pipelines, extra tooling, etc;
- security is harder to do when all the code is on the client;

- completely interact in the front-end with the user and only load data as needed from the server. So better responsiveness and user experience;
- depending on the application, some processing done on the client means you spare the server of those computations.
- have a better flexibility in evolving the back-end and front-end (you can do it separately);
- if your back-end is essentially an API, you can have other clients in front of it like native Android/iPhone applications;
- the separation might make it easier for front-end developers to do CSS/HTML without needing to have a server application running on their machine.

Create your own dysfunctional single-page app: https://news.ycombinator.com/item?id=18341993
I think there are three broadly assumed user benefits of single-page apps:
1. Improved user experience.
2. Improved perceived performance.
3. It’s still the web.

5 mistakes to create a dysfunctional single-page app
Mistake 1: Under-estimate long-term development and maintenance costs
Mistake 2: Use the single-page app approach unilaterally
Mistake 3: Under-invest in front end capability
Mistake 4: Use naïve dev practices
Mistake 5: Surf the waves of framework hype

The disadvantages of single page applications: https://news.ycombinator.com/item?id=9879685
You probably don't need a single-page app: https://news.ycombinator.com/item?id=19184496
https://news.ycombinator.com/item?id=20384738
MPA advantages:
- Stateless requests
- The browser knows how to deal with a traditional architecture
- Fewer, more mature tools
- SEO for free

When to go for the single page app:
- Core functionality is real-time (e.g Slack)
- Rich UI interactions are core to the product (e.g Trello)
- Lots of state shared between screens (e.g. Spotify)

Hybrid solutions
...
Github uses this hybrid approach.
...

Ask HN: Is it ok to use traditional server-side rendering these days?: https://news.ycombinator.com/item?id=13212465

https://www.reddit.com/r/webdev/comments/cp9vb8/are_people_still_doing_ssr/
https://www.reddit.com/r/webdev/comments/93n60h/best_javascript_modern_approach_to_multi_page/
https://www.reddit.com/r/webdev/comments/aax4k5/do_you_develop_solely_using_spa_these_days/
The SEO issues with SPAs are a persistent concern you hear about a lot, yet nobody ever quantifies them. That is because search engines keep the operation of their crawler bots and indexing secret. I have read into it some, and it seems the problem used to exist, somewhat, but is more or less gone now. Bots can deal with SPAs fine.
--
I try to avoid building a SPA nowadays if possible. Not because of SEO (there are now server-side solutions to help with that), but because a SPA increases the complexity of the code base by an order of magnitude. State management with Redux... Async this and that... URL routing... And don't forget to manage page history.

How about just render pages with templates and be done?

If I need a highly dynamic UI for a particular feature, then I'd probably build an embeddable JS widget for it.
q-n-a  stackex  programming  engineering  tradeoffs  system-design  design  web  frontend  javascript  cost-benefit  analysis  security  state  performance  traces  measurement  intricacy  code-organizing  applicability-prereqs  multi  comparison  smoothness  shift  critique  techtariat  chart  ui  coupling-cohesion  interface-compatibility  hn  commentary  best-practices  discussion  trends  client-server  api  composition-decomposition  cycles  frameworks  ecosystem  degrees-of-freedom  dotnet  working-stiff  reddit  social  project-management 
october 2019 by nhaliday
Ask HN: Favorite note-taking software? | Hacker News
Ask HN: What is your ideal note-taking software and/or hardware?: https://news.ycombinator.com/item?id=13221158

my wishlist as of 2019:
- web + desktop macOS + mobile iOS (at least viewing on the last but ideally also editing)
- sync across all those
- open-source data format that's easy to manipulate for scripting purposes
- flexible organization: mostly tree hierarchical (subsuming linear/unorganized) but with the option for directed (acyclic) graph (possibly a second layer of structure/linking)
- can store plain text, LaTeX, diagrams, sketches, and raster/vector images (video prob not necessary except as links to elsewhere)
- full-text search
- somehow digest/import data from Pinboard, Workflowy, Papers 3/Bookends, Skim, and iBooks/e-readers (esp. Kobo), ideally absorbing most of their functionality
- so, eg, track notes/annotations side-by-side w/ original PDF/DjVu/ePub documents (to replace Papers3/Bookends/Skim), and maybe web pages too (to replace Pinboard)
- OCR of handwritten notes (how to handle equations/diagrams?)
- various forms of NLP analysis of everything (topic models, clustering, etc)
- maybe version control (less important than export)

candidates?:
- Evernote prob ruled out due to heavy use of proprietary data formats (unless I can find some way to export with tolerably clean output)
- Workflowy/Dynalist are good but only cover a subset of functionality I want
- org-mode doesn't interact w/ mobile well (and I haven't evaluated it in detail otherwise)
- TiddlyWiki/Zim are in the running, but not sure about mobile
- idk about vimwiki but I'm not that wedded to vim and it seems less widely used than org-mode/TiddlyWiki/Zim so prob pass on that
- Quiver/Joplin/Inkdrop look similar and cover a lot of bases, TODO: evaluate more
- Trilium looks especially promising, tho read-only mobile and for macOS desktop look at this: https://github.com/zadam/trilium/issues/511
- RocketBook is interesting scanning/OCR solution but prob not sufficient due to proprietary data format
- TODO: many more candidates, eg, TreeSheets, Gingko, OneNote (macOS?...), Notion (proprietary data format...), Zotero, Nodebook (https://nodebook.io/landing), Polar (https://getpolarized.io), Roam (looks very promising)

Ask HN: What do you use for you personal note taking activity?: https://news.ycombinator.com/item?id=15736102

Ask HN: What are your note-taking techniques?: https://news.ycombinator.com/item?id=9976751

Ask HN: How do you take notes (useful note-taking strategies)?: https://news.ycombinator.com/item?id=13064215

Ask HN: How to get better at taking notes?: https://news.ycombinator.com/item?id=21419478

Ask HN: How do you keep your notes organized?: https://news.ycombinator.com/item?id=21810400

Ask HN: How did you build up your personal knowledge base?: https://news.ycombinator.com/item?id=21332957
nice comment from math guy on structure and difference between math and CS: https://news.ycombinator.com/item?id=21338628
useful comment collating related discussions: https://news.ycombinator.com/item?id=21333383
highlights:
Designing a Personal Knowledge base: https://news.ycombinator.com/item?id=8270759
Ask HN: How to organize personal knowledge?: https://news.ycombinator.com/item?id=17892731
Do you use a personal 'knowledge base'?: https://news.ycombinator.com/item?id=21108527
Ask HN: How do you share/organize knowledge at work and life?: https://news.ycombinator.com/item?id=21310030
Managing my personal knowledge base: https://news.ycombinator.com/item?id=22000791
The sad state of personal data and infrastructure: https://beepb00p.xyz/sad-infra.html
Building personal search infrastructure for your knowledge and code: https://beepb00p.xyz/pkm-search.html

How to annotate literally everything: https://beepb00p.xyz/annotating.html
Ask HN: How do you organize document digests / personal knowledge?: https://news.ycombinator.com/item?id=21642289
Ask HN: Good solution for storing notes/excerpts from books?: https://news.ycombinator.com/item?id=21920143
Ask HN: What's your cross-platform pdf / ePub reading workflow?: https://news.ycombinator.com/item?id=22170395
some related stuff in the reddit links at the bottom of this pin

https://beepb00p.xyz/grasp.html
How to capture information from your browser and stay sane

Ask HN: Best solutions for keeping a personal log?: https://news.ycombinator.com/item?id=21906650

other stuff:
plain text: https://news.ycombinator.com/item?id=21685660

https://www.getdnote.com/blog/how-i-built-personal-knowledge-base-for-myself/
Tiago Forte: https://www.buildingasecondbrain.com

hn search: https://hn.algolia.com/?query=notetaking&type=story

Slant comparison commentary: https://news.ycombinator.com/item?id=7011281

good comparison of options here in comments here (and Trilium itself looks good): https://news.ycombinator.com/item?id=18840990

https://en.wikipedia.org/wiki/Comparison_of_note-taking_software

stuff from Andy Matuschak and Michael Nielsen on general note-taking:
https://twitter.com/andy_matuschak/status/1202663202997170176
https://archive.is/1i9ep
Software interfaces undervalue peripheral vision! (a thread)
https://twitter.com/andy_matuschak/status/1199378287555829760
https://archive.is/J06UB
This morning I implemented PageRank to sort backlinks in my prototype note system. Mixed results!
https://twitter.com/andy_matuschak/status/1211487900505792512
https://archive.is/BOiCG
https://archive.is/4zB37
One way to dream up post-book media to make reading more effective and meaningful is to systematize "expert" practices (e.g. How to Read a Book), so more people can do them, more reliably and more cheaply. But… the most erudite people I know don't actually do those things!

the memex essay and comments from various people including Andy on it: https://pinboard.in/u:nhaliday/b:1cddf69c0b31

some more stuff specific to Roam below, and cf "Why books don't work": https://pinboard.in/u:nhaliday/b:b4d4461f6378

wikis:
https://www.slant.co/versus/5116/8768/~tiddlywiki_vs_zim
https://www.wikimatrix.org/compare/tiddlywiki+zim
http://tiddlymap.org/
https://www.zim-wiki.org/manual/Plugins/BackLinks_Pane.html
https://zim-wiki.org/manual/Plugins/Link_Map.html

apps:
Roam: https://news.ycombinator.com/item?id=21440289
https://www.reddit.com/r/RoamResearch/
https://twitter.com/hashtag/roamcult
https://twitter.com/search?q=RoamResearch%20fortelabs
https://twitter.com/search?q=from%3AQiaochuYuan%20RoamResearch&src=typd
https://twitter.com/vgr/status/1199391391803043840
https://archive.is/TJPQN
https://archive.is/CrNwZ
https://www.nateliason.com/blog/roam
https://twitter.com/andy_matuschak/status/1190102757430063106
https://archive.is/To30Q
https://archive.is/UrI1x
https://archive.is/Ww22V
Knowledge systems which display contextual backlinks to a node open up an interesting new behavior. You can bootstrap a new node extensionally (rather than intensionally) by simply linking to it from many other nodes—even before it has any content.
https://twitter.com/michael_nielsen/status/1220197017340612608
Curious: what are the most striking public @RoamResearch pages that you know? I'd like to see examples of people using it for interesting purposes, or in interesting ways.
https://acesounderglass.com/2019/10/24/epistemic-spot-check-the-fate-of-rome-round-2/
https://twitter.com/andy_matuschak/status/1206011493495513089
https://archive.is/xvaMh
If I weren't doing my own research on questions in knowledge systems (which necessitates tinkering with my own), and if I weren't allergic to doing serious work in webapps, I'd likely use Roam instead!
https://talk.dynalist.io/t/roam-research-new-web-based-outliner-that-supports-transclusion-wiki-features-thoughts/5911/16
http://forum.eastgate.com/t/roam-research-interesting-approach-to-note-taking/2713/10
interesting app: http://www.eastgate.com/Tinderbox/
https://www.theatlantic.com/notes/2016/09/labor-day-software-update-tinderbox-scrivener/498443/

intriguing but probably not appropriate for my needs: https://www.sophya.ai/

Inkdrop: https://news.ycombinator.com/item?id=20103589

Joplin: https://news.ycombinator.com/item?id=15815040
https://news.ycombinator.com/item?id=21555238

MindForgr: https://news.ycombinator.com/item?id=22088175
one comment links to this, mostly on Notion: https://tkainrad.dev/posts/managing-my-personal-knowledge-base/

https://wreeto.com/

Leo Editor (combines tree outlining w/ literate programming/scripting, I think?): https://news.ycombinator.com/item?id=17769892

Frame: https://news.ycombinator.com/item?id=18760079

https://www.reddit.com/r/TheMotte/comments/cb18sy/anyone_use_a_personal_wiki_software_to_catalog/
https://archive.is/xViTY
Notion: https://news.ycombinator.com/item?id=18904648
https://coda.io/welcome
https://news.ycombinator.com/item?id=15543181

accounting: https://news.ycombinator.com/item?id=19833881
Coda mentioned

https://www.reddit.com/r/slatestarcodex/comments/ap437v/modified_cornell_method_the_optimal_notetaking/
https://archive.is/e9oHu
https://www.reddit.com/r/slatestarcodex/comments/bt8a1r/im_about_to_start_a_one_month_journaling_test/
https://www.reddit.com/r/slatestarcodex/comments/9cot3m/question_how_do_you_guys_learn_things/
https://archive.is/HUH8V
https://www.reddit.com/r/slatestarcodex/comments/d7bvcp/how_to_read_a_book_for_understanding/
https://archive.is/VL2mi

Anki:
https://www.reddit.com/r/Anki/comments/as8i4t/use_anki_for_technical_books/
https://www.freecodecamp.org/news/how-anki-saved-my-engineering-career-293a90f70a73/
https://www.reddit.com/r/slatestarcodex/comments/ch24q9/anki_is_it_inferior_to_the_3x5_index_card_an/
https://archive.is/OaGc5
maybe not the best source for a review/advice

interesting comment(s) about tree outliners and spreadsheets: https://news.ycombinator.com/item?id=21170434
https://lightsheets.app/

tablet:
https://www.inkandswitch.com/muse-studio-for-ideas.html
https://www.inkandswitch.com/capstone-manuscript.html
https://news.ycombinator.com/item?id=20255457
hn  discussion  recommendations  software  tools  desktop  app  notetaking  exocortex  wkfly  wiki  productivity  multi  comparison  crosstab  properties  applicability-prereqs  nlp  info-foraging  chart  webapp  reference  q-n-a  retention  workflow  reddit  social  ratty  ssc  learning  studying  commentary  structure  thinking  network-structure  things  collaboration  ocr  trees  graphs  LaTeX  search  todo  project  money-for-time  synchrony  pinboard  state  duplication  worrydream  simplification-normalization  links  minimalism  design  neurons  ai-control  openai  miri-cfar  parsimony  intricacy  meta:reading  examples  prepping  new-religion  deep-materialism  techtariat  review  critique  mobile  integration-extension  interface-compatibility  api  twitter  backup  vgr  postrat  personal-finance  pragmatic  stay-organized  project-management  news  org:mag  epistemic  steel-man  explore-exploit  correlation  cost-benefit  convexity-curvature  michael-nielsen  hci  ux  oly  skunkworks  europe  germanic 
october 2019 by nhaliday
The Future of Mathematics? [video] | Hacker News
https://news.ycombinator.com/item?id=20909404
Kevin Buzzard (the Lean guy)

- general reflection on proof assistants/theorem provers
- Thomas Hales's Formal Abstracts project, etc.
- thinks that, of the available theorem provers, Lean is "[the only one currently available that may be capable of formalizing all of mathematics eventually]" (goes into more detail right at the end, eg, quotient types)
hn  commentary  discussion  video  talks  presentation  math  formal-methods  expert-experience  msr  frontier  state-of-art  proofs  rigor  education  higher-ed  optimism  prediction  lens  search  meta:research  speculation  exocortex  skunkworks  automation  research  math.NT  big-surf  software  parsimony  cost-benefit  intricacy  correctness  programming  pls  python  functional  haskell  heavyweights  research-program  review  reflection  multi  pdf  slides  oly  experiment  span-cover  git  vcs  teaching  impetus  academia  composition-decomposition  coupling-cohesion  database  trust  types  plt  lifts-projections  induction  critique  beauty  truth  elegance  aesthetics 
october 2019 by nhaliday
Overcoming Bias : What’s So Bad About Concentration?
And it occurs to me to mention that when these models allow “free entry”, i.e., when the number of firms is set by the constraint that they must all expect to make non-negative profits, then such models consistently predict that too many firms enter, not too few. These models suggest that we should worry more about insufficient, not excess, concentration.
ratty  hanson  economics  industrial-org  contrarianism  critique  models  GT-101  game-theory  examples  market-power  rent-seeking  regulation  increase-decrease  signum  error  markets  biases  efficiency 
september 2019 by nhaliday
Mars Direct | West Hunter
Send Mr Bezos. He even looks like a Martian.
--
Throw in Zuckerberg and it’s a deal…
--
We could send twice as many people half-way to Mars.

--

I don’t think that the space station has been worth anything at all.

As for a lunar base, many of the issues are difficult and one (effects of low-gee) is probably impossible to solve.

I don’t think that there are real mysteries about what is needed for a kind-of self-sufficient base – it’s just too hard and there’s not much prospect of a payoff.

That said, there may be other ways of going about this that are more promising.

--

Venus is worth terraforming: no gravity problems. Doable.

--

It’s not impossible that Mars might harbor microbial life – with some luck, life with a different chemical basis. That might be very valuable: there are endless industrial processes that depend upon some kind of fermentation.
Why, without acetone fermentation, there might not be a state of Israel.
--
If we used a reasonable approach, like Orion, I think that people would usefully supplement those robots.

https://westhunt.wordpress.com/2019/01/11/the-great-divorce/
Jeff Bezos isn’t my favorite guy, but he has ability and has built something useful. And an ugly, contested divorce would be harsh and unfair to the children, who have done nothing wrong.

But I don’t care. The thought of tens of billions of dollars being spent on lawyers and PIs offers the possibility of a spectacle that will live forever, far wilder than the antics of Nero or Caligula. It could make Suetonius look like Pilgrim’s Progress.

Have you ever wondered whether tens of thousands of divorce lawyers should be organized into legions or phalanxes? This is our chance to finally find out.
west-hunter  scitariat  commentary  current-events  trump  politics  troll  space  expansionism  frontier  cost-benefit  ideas  speculation  roots  deep-materialism  definite-planning  geoengineering  wild-ideas  gravity  barons  amazon  facebook  sv  tech  government  debate  critique  physics  mechanics  robotics  multi  lol  law  responsibility  drama  beginning-middle-end  direct-indirect 
september 2019 by nhaliday
Karol Kuczmarski's Blog – A Haskell retrospective
Even in this hypothetical scenario, I posit that the value proposition of Haskell would still be a tough sell.

There is this old quote from Bjarne Stroustrup (creator of C++) where he says that programming languages divide into those everyone complains about, and those that no one uses.
The first group consists of old, established technologies that managed to accrue significant complexity debt through years and decades of evolution. All the while, they’ve been adapting to the constantly shifting perspectives on what are the best industry practices. Traces of those adaptations can still be found today, sticking out like a leftover appendix or residual tail bone — or like the built-in support for XML in Java.

Languages that “no one uses”, on the other hand, haven’t yet passed the industry threshold of sufficient maturity and stability. Their ecosystems are still cutting edge, and their future is uncertain, but they sometimes champion some really compelling paradigm shifts. As long as you can bear with things that are rough around the edges, you can take advantage of their novel ideas.

Unfortunately for Haskell, it manages to combine the worst parts of both of these worlds.

On one hand, it is a surprisingly old language, clocking more than two decades of fruitful research around many innovative concepts. Yet on the other hand, it bears the signs of a fresh new technology, with relatively few production-grade libraries, scarce coverage of some domains (e.g. GUI programming), and not too many stories of commercial successes.

There are many ways to do it
String theory
Errors and how to handle them
Implicit is better than explicit
Leaky modules
Namespaces are apparently a bad idea
Wild records
Purity beats practicality
techtariat  reflection  functional  haskell  programming  pls  realness  facebook  pragmatic  cost-benefit  legacy  libraries  types  intricacy  engineering  tradeoffs  frontier  homo-hetero  duplication  strings  composition-decomposition  nitty-gritty  error  error-handling  coupling-cohesion  critique  ecosystem  c(pp)  aphorism 
august 2019 by nhaliday
Panel: Systems Programming in 2014 and Beyond | Lang.NEXT 2014 | Channel 9
- Bjarne Stroustrup, Niko Matsakis, Andrei Alexandrescu, Rob Pike
- 2014 so pretty outdated but rare to find a discussion with people like this together
- pretty sure Jonathan Blow asked a couple questions
- Rob Pike compliments Rust at one point. Also kinda softly rags on dynamic typing at one point ("unit testing is what they have instead of static types").

related:
What is Systems Programming, Really?: http://willcrichton.net/notes/systems-programming/
https://news.ycombinator.com/item?id=17948265
https://news.ycombinator.com/item?id=21731878
video  presentation  debate  programming  pls  c(pp)  systems  os  rust  d-lang  golang  computer-memory  legacy  devtools  formal-methods  concurrency  compilers  syntax  parsimony  google  intricacy  thinking  cost-benefit  degrees-of-freedom  facebook  performance  people  rsc  cracker-prog  critique  types  checking  api  flux-stasis  engineering  time  wire-guided  worse-is-better/the-right-thing  static-dynamic  latency-throughput  techtariat  multi  plt  hn  commentary  metal-to-virtual  functional  abstraction  contrarianism  jargon  definition  characterization  reflection 
july 2019 by nhaliday
Integrated vs type based shrinking - Hypothesis
The big difference is whether shrinking is integrated into generation.

In Haskell’s QuickCheck, shrinking is defined based on types: Any value of a given type shrinks the same way, regardless of how it is generated. In Hypothesis, test.check, etc. instead shrinking is part of the generation, and the generator controls how the values it produces shrinks (this works differently in Hypothesis and test.check, and probably differently again in EQC, but the user visible result is largely the same)

This is not a trivial distinction. Integrating shrinking into generation has two large benefits:
- Shrinking composes nicely, and you can shrink anything you can generate regardless of whether there is a defined shrinker for the type produced.
- You can _guarantee that shrinking satisfies the same invariants as generation_.
The first is mostly important from a convenience point of view: Although there are some things it lets you do that you can’t do in the type based approach, they’re mostly of secondary importance. It largely just saves you from the effort of having to write your own shrinkers.

But the second is really important, because the lack of it makes your test failures potentially extremely confusing.

...

[example: even_numbers = integers().map(lambda x: x * 2)]

...

In this example the problem was relatively obvious and so easy to work around, but as your invariants get more implicit and subtle it becomes really problematic: In Hypothesis it’s easy and convenient to generate quite complex data, and trying to recreate the invariants that are automatically satisfied with that in your tests and/or your custom shrinkers would quickly become a nightmare.

I don’t think it’s an accident that the main systems to get this right are in dynamic languages. It’s certainly not essential - the original proposal that led to the implementation for test.check was for Haskell, and Jack is an alternative property based system for Haskell that does this - but you feel the pain much more quickly in dynamic languages because the typical workaround for this problem in Haskell is to define a newtype, which lets you turn off the default shrinking for your types and possibly define your own.

But that’s a workaround for a problem that shouldn’t be there in the first place, and using it will still result in your having to encode the invariants into your shrinkers, which is more work and more brittle than just having it work automatically.

So although (as far as I know) none of the currently popular property based testing systems for statically typed languages implement this behaviour correctly, they absolutely can and they absolutely should. It will improve users’ lives significantly.
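
A runnable version of the even_numbers example from the article (the deliberately false property is mine, added only to force a shrink):

```python
# Hedged sketch of integrated shrinking, using Hypothesis's real strategies API.
from hypothesis import given, strategies as st

even_numbers = st.integers().map(lambda x: x * 2)

@given(even_numbers)
def test_all_even_numbers_are_small(n):
    assert n < 10  # deliberately false, so Hypothesis has a failure to shrink
```

Because Hypothesis shrinks the underlying integer and then re-applies the map, the minimal counterexample it reports (10 here) is still even; a type-based shrinker operating on the output value could just as easily propose an odd number and violate the generator's invariant.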

https://hypothesis.works/articles/compositional-shrinking/
In my last article about shrinking, I discussed the problems with basing shrinking on the type of the values to be shrunk.

In writing it though I forgot that there was a halfway house which is also somewhat bad (but significantly less so) that you see in a couple of implementations.

This is when the shrinking is not type based, but still follows the classic shrinking API that takes a value and returns a lazy list of shrinks of that value. Examples of libraries that do this are theft and QuickTheories.

This works reasonably well and solves the major problems with type directed shrinking, but it’s still somewhat fragile and importantly does not compose nearly as well as the approaches that Hypothesis or test.check take.

Ideally, as well as not being based on the types of the values being generated, shrinking should not be based on the actual values generated at all.

This may seem counter-intuitive, but it actually works pretty well.

...

We took a strategy and composed it with a function mapping over the values that that strategy produced to get a new strategy.

Suppose the Hypothesis strategy implementation looked something like the following:
...
i.e. we can generate a value and we can shrink a value that we’ve previously generated. By default we don’t know how to generate values (subclasses have to implement that) and we can’t shrink anything, which subclasses are able to fix if they want or leave as is if they’re fine with that.

(This is in fact how a very early implementation of it looked)

This is essentially the approach taken by theft or QuickTheories, and the problem with it is that under this implementation the ‘map’ function we used above is impossible to define in a way that preserves shrinking: In order to shrink a generated value, you need some way to invert the function you’re composing with (which is in general impossible even if your language somehow exposed the facilities to do it, which it almost certainly doesn’t) so you could take the generated value, map it back to the value that produced it, shrink that and then compose with the mapping function.

...

The key idea for fixing this is as follows: In order to shrink outputs it almost always suffices to shrink inputs. Although in theory you can get functions where simpler input leads to more complicated output, in practice this seems to be rare enough that it’s OK to just shrug and accept more complicated test output in those cases.

Given that, the _way to shrink the output of a mapped strategy is to just shrink the value generated from the first strategy and feed it to the mapping function_.

Which means that you need an API that can support that sort of shrinking.
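
A toy sketch of that idea (this is not Hypothesis's real internals, and for brevity it only remembers the most recent draw, but it shows how map can preserve shrinking when you shrink the input rather than the output):

```python
import random

class Strategy:
    """Toy strategy: generate values, shrink previously generated ones."""
    def generate(self, rnd):
        raise NotImplementedError
    def shrink(self, value):
        return iter(())                  # by default we don't know how to shrink
    def map(self, f):
        return MappedStrategy(self, f)

class Integers(Strategy):
    def generate(self, rnd):
        return rnd.randint(-1000, 1000)
    def shrink(self, value):
        while value:                     # move toward zero
            value = value // 2 if value > 0 else -(-value // 2)
            yield value

class MappedStrategy(Strategy):
    def __init__(self, base, f):
        self.base, self.f = base, f
    def generate(self, rnd):
        self._last_input = self.base.generate(rnd)   # remember the *input*
        return self.f(self._last_input)
    def shrink(self, value):
        # Shrink the remembered input and re-apply f, rather than trying to
        # invert f so the output could be shrunk directly.
        for smaller in self.base.shrink(self._last_input):
            yield self.f(smaller)

even = Integers().map(lambda x: x * 2)
value = even.generate(random.Random(0))
assert all(s % 2 == 0 for s in even.shrink(value))   # every shrink stays even
```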

https://hypothesis.works/articles/types-and-properties/
This happens a lot: Frequently there are properties that only hold in some restricted domain, and so you want more specific tests for that domain to complement your other tests for the larger range of data.

When this happens you need tools to generate something more specific, and those requirements don’t map naturally to types.

[ed.: Some examples of how this idea can be useful:
Have a type but want to test different distributions on it for different purposes. Eg, comparing worst-case and average-case guarantees for benchmarking time/memory complexity. Comparing a slow and fast implementation on small input sizes, then running some sanity checks for the fast implementation on large input sizes beyond what the slow implementation can handle.]

...

In Haskell, traditionally we would fix this with a newtype declaration which wraps the type. We could define a newtype NonEmptyList and a newtype FiniteFloat and then say that we actually wanted a NonEmptyList[FiniteFloat] there.

...

But why should we bother? Especially if we’re only using these in one test, we’re not actually interested in these types at all, and it just adds a whole bunch of syntactic noise when you could just pass the data generators directly. Defining new types for the data you want to generate is purely a workaround for a limitation of the API.

If you were working in a dependently typed language where you could already naturally express this in the type system it might be OK (I don’t have any direct experience of working in type systems that strong), but I’m sceptical of being able to make it work well - you’re unlikely to be able to automatically derive data generators in the general case, because the needs of data generation “go in the opposite direction” from types (a type is effectively a predicate which consumes a value, where a data generator is a function that produces a value, so in order to produce a generator for a type automatically you need to basically invert the predicate). I suspect most approaches here will leave you with a bunch of sharp edges, but I would be interested to see experiments in this direction.
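
For instance (a hedged sketch using Hypothesis's actual strategies API; the property being tested is just an arbitrary illustration), the NonEmptyList-of-FiniteFloat case above needs no new types, only composed generators:

```python
from hypothesis import given, strategies as st

finite_floats = st.floats(allow_nan=False, allow_infinity=False)
non_empty_float_lists = st.lists(finite_floats, min_size=1)  # "NonEmptyList[FiniteFloat]"

@given(non_empty_float_lists)
def test_extremes_are_elements(xs):
    assert min(xs) in xs and max(xs) in xs
```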

https://www.reddit.com/r/haskell/comments/646k3d/ann_hedgehog_property_testing/dg1485c/
techtariat  rhetoric  rant  programming  libraries  pls  types  functional  haskell  python  random  checking  design  critique  multi  composition-decomposition  api  reddit  social  commentary  system-design  arrows  lifts-projections  DSL  static-dynamic 
july 2019 by nhaliday
Cleaner, more elegant, and harder to recognize | The Old New Thing
Really easy
Writing bad error-code-based code
Writing bad exception-based code

Hard
Writing good error-code-based code

Really hard
Writing good exception-based code

--

Really easy
Recognizing that error-code-based code is badly-written
Recognizing the difference between bad error-code-based code and
not-bad error-code-based code.

Hard
Recognizing that error-code-based code is not badly-written

Really hard
Recognizing that exception-based code is badly-written
Recognizing that exception-based code is not badly-written
Recognizing the difference between bad exception-based code
and not-bad exception-based code

https://ra3s.com/wordpress/dysfunctional-programming/2009/07/15/return-code-vs-exception-handling/
https://nedbatchelder.com/blog/200501/more_exception_handling_debate.html
techtariat  org:com  microsoft  working-stiff  pragmatic  carmack  error  error-handling  programming  rhetoric  debate  critique  pls  search  structure  cost-benefit  comparison  summary  intricacy  certificates-recognition  commentary  multi  contrarianism  correctness  quality  code-dive  cracker-prog  code-organizing 
july 2019 by nhaliday
Computer latency: 1977-2017
If we look at overall results, the fastest machines are ancient. Newer machines are all over the place. Fancy gaming rigs with unusually high refresh-rate displays are almost competitive with machines from the late 70s and early 80s, but “normal” modern computers can’t compete with thirty to forty year old machines.

...

If we exclude the game boy color, which is a different class of device than the rest, all of the quickest devices are Apple phones or tablets. The next quickest device is the blackberry q10. Although we don’t have enough data to really tell why the blackberry q10 is unusually quick for a non-Apple device, one plausible guess is that it’s helped by having actual buttons, which are easier to implement with low latency than a touchscreen. The other two devices with actual buttons are the gameboy color and the kindle 4.

After that iphones and non-kindle button devices, we have a variety of Android devices of various ages. At the bottom, we have the ancient palm pilot 1000 followed by the kindles. The palm is hamstrung by a touchscreen and display created in an era with much slower touchscreen technology and the kindles use e-ink displays, which are much slower than the displays used on modern phones, so it’s not surprising to see those devices at the bottom.

...

Almost every computer and mobile device that people buy today is slower than common models of computers from the 70s and 80s. Low-latency gaming desktops and the ipad pro can get into the same range as quick machines from thirty to forty years ago, but most off-the-shelf devices aren’t even close.

If we had to pick one root cause of latency bloat, we might say that it’s because of “complexity”. Of course, we all know that complexity is bad. If you’ve been to a non-academic non-enterprise tech conference in the past decade, there’s a good chance that there was at least one talk on how complexity is the root of all evil and we should aspire to reduce complexity.

Unfortunately, it's a lot harder to remove complexity than to give a talk saying that we should remove complexity. A lot of the complexity buys us something, either directly or indirectly. When we looked at the input of a fancy modern keyboard vs. the apple 2 keyboard, we saw that using a relatively powerful and expensive general purpose processor to handle keyboard inputs can be slower than dedicated logic for the keyboard, which would both be simpler and cheaper. However, using the processor gives people the ability to easily customize the keyboard, and also pushes the problem of “programming” the keyboard from hardware into software, which reduces the cost of making the keyboard. The more expensive chip increases the manufacturing cost, but considering how much of the cost of these small-batch artisanal keyboards is the design cost, it seems like a net win to trade manufacturing cost for ease of programming.

...

If you want a reference to compare the kindle against, a moderately quick page turn in a physical book appears to be about 200 ms.

https://twitter.com/gravislizard/status/927593460642615296
almost everything on computers is perceptually slower than it was in 1983
https://archive.is/G3D5K
https://archive.is/vhDTL
https://archive.is/a3321
https://archive.is/imG7S

linux terminals: https://lwn.net/Articles/751763/
techtariat  dan-luu  performance  time  hardware  consumerism  objektbuch  data  history  reflection  critique  software  roots  tainter  engineering  nitty-gritty  ui  ux  hci  ios  mobile  apple  amazon  sequential  trends  increase-decrease  measure  analysis  measurement  os  systems  IEEE  intricacy  desktop  benchmarks  rant  carmack  system-design  degrees-of-freedom  keyboard  terminal  editors  links  input-output  networking  world  s:**  multi  twitter  social  discussion  tech  programming  web  internet  speed  backup  worrydream  interface  metal-to-virtual  latency-throughput  workflow  form-design  interface-compatibility  org:junk  linux 
july 2019 by nhaliday
Which of Haskell and OCaml is more practical? For example, in which aspect will each play a key role? - Quora
- Tikhon Jelvis,

Haskell.

This is a question I'm particularly well-placed to answer because I've spent quite a bit of time with both Haskell and OCaml, seeing both in the real world (including working at Jane Street for a bit). I've also seen the languages in academic settings and know many people at startups using both languages. This gives me a good perspective on both languages, with a fairly similar amount of experience in the two (admittedly biased towards Haskell).

And so, based on my own experience rather than the languages' reputations, I can confidently say it's Haskell.

Parallelism and Concurrency

...

Libraries

...

Typeclasses vs Modules

...

In some sense, OCaml modules are better behaved and founded on a sounder theory than Haskell typeclasses, which have some serious drawbacks. However, the fact that typeclasses can be reliably inferred whereas modules have to be explicitly used all the time more than makes up for this. Moreover, extensions to the typeclass system enable much of the power provided by OCaml modules.

...

Of course, OCaml has some advantages of its own as well. It has a performance profile that's much easier to predict. The module system is awesome and often missed in Haskell. Polymorphic variants can be very useful for neatly representing certain situations, and don't have an obvious Haskell analog.

While both languages have a reasonable C FFI, OCaml's seems a bit simpler. It's hard for me to say this with any certainty because I've only used the OCaml FFI myself, but it was quite easy to use—a hard bar for Haskell's to clear. One really nice use of modules in OCaml is to pass around values directly from C as abstract types, which can help avoid extra marshalling/unmarshalling; that seemed very nice in OCaml.

However, overall, I still think Haskell is the more practical choice. Apart from the reasoning above, I simply have my own observations: my Haskell code tends to be clearer, simpler and shorter than my OCaml code. I'm also more productive in Haskell. Part of this is certainly a matter of having more Haskell experience, but the delta is limited especially as I'm working at my third OCaml company. (Of course, the first two were just internships.)

Both Haskell and OCaml are unequivocally superb options—miles ahead of any other languages I know. While I do prefer Haskell, I'd choose either one in a pinch.

--
I've looked at F# a bit, but it feels like it makes too many tradeoffs to be on .NET. You lose the module system, which is probably OCaml's best feature, in return for an unfortunate, nominally typed OOP layer.

I'm also not invested in .NET at all: if anything, I'd prefer to avoid it in favor of simplicity. I exclusively use Linux and, from the outside, Mono doesn't look as good as it could be. I'm also far more likely to interoperate with a C library than a .NET library.

If I had some additional reason to use .NET, I'd definitely go for F#, but right now I don't.

https://www.reddit.com/r/haskell/comments/3huexy/what_are_haskellers_critiques_of_f_and_ocaml/
https://www.reddit.com/r/haskell/comments/3huexy/what_are_haskellers_critiques_of_f_and_ocaml/cub5mmb/
Thinking about it now, it boils down to a single word: expressiveness. When I'm writing OCaml, I feel more constrained than when I'm writing Haskell. And that's important: unlike so many others, what first attracted me to Haskell was expressiveness, not safety. It's easier for me to write code that looks how I want it to look in Haskell. The upper bound on code quality is higher.

...

Perhaps it all boils down to OCaml and its community feeling more "worse is better" than Haskell, something I highly disfavor.

...

Laziness or, more strictly, non-strictness is big. A controversial start, perhaps, but I stand by it. Unlike some, I do not see non-strictness as a design mistake but as a leap in abstraction. Perhaps a leap before its time, but a leap nonetheless. Haskell lets me program without constantly keeping the code's order in my head. Sure, it's not perfect and sometimes performance issues jar the illusion, but they are the exception not the norm. Coming from imperative languages where order is omnipresent (I can't even imagine not thinking about execution order as I write an imperative program!) it's incredibly liberating, even accounting for the weird issues and jinks I'd never see in a strict language.

This is what I imagine life felt like with the first garbage collectors: they may have been slow and awkward, the abstraction might have leaked here and there, but, for all that, it was an incredible advance. You didn't have to constantly think about memory allocation any more. It took a lot of effort to get where we are now and garbage collectors still aren't perfect and don't fit everywhere, but it's hard to imagine the world without them. Non-strictness feels like it has the same potential, without anywhere near the work garbage collection saw put into it.

...

The other big thing that stands out are typeclasses. OCaml might catch up on this front with implicit modules or it might not (Scala implicits are, by many reports, awkward at best—ask Edward Kmett about it, not me) but, as it stands, not having them is a major shortcoming. Not having inference is a bigger deal than it seems: it makes all sorts of idioms we take for granted in Haskell awkward in OCaml which means that people simply don't use them. Haskell's typeclasses, for all their shortcomings (some of which I find rather annoying), are incredibly expressive.

In Haskell, it's trivial to create your own numeric type and operators work as expected. In OCaml, while you can write code that's polymorphic over numeric types, people simply don't. Why not? Because you'd have to explicitly convert your literals and because you'd have to explicitly open a module with your operators—good luck using multiple numeric types in a single block of code! This means that everyone uses the default types: (63/31-bit) ints and doubles. If that doesn't scream "worse is better", I don't know what does.

...

There's more. Haskell's effect management, brought up elsewhere in this thread, is a big boon. It makes changing things more comfortable and makes informal reasoning much easier. Haskell is the only language where I consistently leave code I visit better than I found it. Even if I hadn't worked on the project in years. My Haskell code has better longevity than my OCaml code, much less other languages.

http://blog.ezyang.com/2011/02/ocaml-gotchas/
One observation about purity and randomness: I think one of the things people frequently find annoying in Haskell is the fact that randomness involves mutation of state, and thus must be wrapped in a monad. This makes building probabilistic data structures a little clunkier, since you can no longer expose pure interfaces. OCaml is not pure, and as such you can query the random number generator whenever you want.

However, I think Haskell may get the last laugh in certain circumstances. In particular, if you are using a random number generator in order to generate random test cases for your code, you need to be able to reproduce a particular set of random tests. Usually, this is done by providing a seed which you can then feed back to the testing script, for deterministic behavior. But because OCaml's random number generator manipulates global state, it's very easy to accidentally break determinism by asking for a random number for something unrelated. You can work around it by manually bracketing the global state, but explicitly handling the randomness state means providing determinism is much more natural.
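
A small sketch of that last point (plain Python, nothing from the post itself): threading an explicit RNG object through the generator, instead of touching global state, means a seed alone reproduces the test cases even when unrelated code also asks for random numbers.

```python
import random

def make_test_case(rng):
    # Hypothetical generator: a random-length list of random ints.
    return [rng.randint(0, 100) for _ in range(rng.randint(1, 5))]

seed = 42
first = make_test_case(random.Random(seed))    # local, seeded state

random.random()                                # unrelated use of the *global* RNG
second = make_test_case(random.Random(seed))   # unaffected by the call above

assert first == second                         # reproducible from the seed alone
```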
q-n-a  qra  programming  pls  engineering  nitty-gritty  pragmatic  functional  haskell  ocaml-sml  dotnet  types  arrows  cost-benefit  tradeoffs  concurrency  libraries  performance  expert-experience  composition-decomposition  comparison  critique  multi  reddit  social  discussion  techtariat  reflection  review  random  data-structures  numerics  rand-approx  sublinear  syntax  volo-avolo  causation  scala  jvm  ecosystem  metal-to-virtual 
june 2019 by nhaliday
An Efficiency Comparison of Document Preparation Systems Used in Academic Research and Development
The choice of an efficient document preparation system is an important decision for any academic researcher. To assist the research community, we report a software usability study in which 40 researchers across different disciplines prepared scholarly texts with either Microsoft Word or LaTeX. The probe texts included simple continuous text, text with tables and subheadings, and complex text with several mathematical equations. We show that LaTeX users were slower than Word users, wrote less text in the same amount of time, and produced more typesetting, orthographical, grammatical, and formatting errors. On most measures, expert LaTeX users performed even worse than novice Word users. LaTeX users, however, more often report enjoying using their respective software. We conclude that even experienced LaTeX users may suffer a loss in productivity when LaTeX is used, relative to other document preparation systems. Individuals, institutions, and journals should carefully consider the ramifications of this finding when choosing document preparation strategies, or requiring them of authors.

...

However, our study suggests that LaTeX should be used as a document preparation system only in cases in which a document is heavily loaded with mathematical equations. For all other types of documents, our results suggest that LaTeX reduces the user’s productivity and results in more orthographical, grammatical, and formatting errors, more typos, and less written text than Microsoft Word over the same duration of time. LaTeX users may argue that the overall quality of the text that is created with LaTeX is better than the text that is created with Microsoft Word. Although this argument may be true, the differences between text produced in more recent editions of Microsoft Word and text produced in LaTeX may be less obvious than it was in the past. Moreover, we believe that the appearance of text matters less than the scientific content and impact to the field. In particular, LaTeX is also used frequently for text that does not contain a significant amount of mathematical symbols and formula. We believe that the use of LaTeX under these circumstances is highly problematic and that researchers should reflect on the criteria that drive their preferences to use LaTeX over Microsoft Word for text that does not require significant mathematical representations.

...

A second decision criterion that factors into the choice to use a particular software system is reflection about what drives certain preferences. A striking result of our study is that LaTeX users are highly satisfied with their system despite reduced usability and productivity. From a psychological perspective, this finding may be related to motivational factors, i.e., the driving forces that compel or reinforce individuals to act in a certain way to achieve a desired goal. A vital motivational factor is the tendency to reduce cognitive dissonance. According to the theory of cognitive dissonance, each individual has a motivational drive to seek consonance between their beliefs and their actual actions. If a belief set does not concur with the individual’s actual behavior, then it is usually easier to change the belief rather than the behavior [6]. The results from many psychological studies in which people have been asked to choose between one of two items (e.g., products, objects, gifts, etc.) and then asked to rate the desirability, value, attractiveness, or usefulness of their choice, report that participants often reduce unpleasant feelings of cognitive dissonance by rationalizing the chosen alternative as more desirable than the unchosen alternative [6, 7]. This bias is usually unconscious and becomes stronger as the effort to reject the chosen alternative increases, which is similar in nature to the case of learning and using LaTeX.

...

Given these numbers it remains an open question to determine the amount of taxpayer money that is spent worldwide for researchers to use LaTeX over a more efficient document preparation system, which would free up their time to advance their respective field. Some publishers may save a significant amount of money by requesting or allowing LaTeX submissions because a well-formed LaTeX document complying with a well-designed class file (template) is much easier to bring into their publication workflow. However, this is at the expense of the researchers’ labor time and effort. We therefore suggest that leading scientific journals should consider accepting submissions in LaTeX only if this is justified by the level of mathematics presented in the paper. In all other cases, we think that scholarly journals should request authors to submit their documents in Word or PDF format. We believe that this would be a good policy for two reasons. First, we think that the appearance of the text is secondary to the scientific merit of an article and its impact to the field. And, second, preventing researchers from producing documents in LaTeX would save time and money to maximize the benefit of research and development for both the research team and the public.

[ed.: I sense some salt.

And basically no description of how "# errors" was calculated.]

https://news.ycombinator.com/item?id=8797002
I question the validity of their methodology.
At no point in the paper is exactly what is meant by a "formatting error" or a "typesetting error" defined. From what I gather, the participants in the study were required to reproduce the formatting and layout of the sample text. In theory, a LaTeX file should strictly be a semantic representation of the content of the document; while TeX may have been a raw typesetting language, this is most definitely not the intended use case of LaTeX and is overall a very poor test of its relative advantages and capabilities.
The separation of the semantic definition of the content from the rendering of the document is, in my opinion, the most important feature of LaTeX. Like CSS, this allows the actual formatting to be abstracted away, allowing plain (marked-up) content to be written without worrying about typesetting.
Word has some similar capabilities with styles, and can be used in a similar manner, though few Word users actually use the software properly. This may sound like a relatively insignificant point, but in practice, almost every Word document I have seen has some form of inconsistent formatting. If Word disallowed local formatting changes (including things such as relative spacing of nested bullet points), forcing all formatting changes to be done in document-global styles, it would be a far better typesetting system. Also, the users would be very unhappy.
Yes, LaTeX can undeniably be a pain in the arse, especially when it comes to trying to get figures in the right place; however, the advantages of combining a simple, semantic plain-text representation with a flexible and professional typesetting and rendering engine are undeniable and completely unaddressed by this study.
--
It seems that the test was heavily biased in favor of WYSIWYG.
Of course that approach makes it very simple to reproduce something, as has been tested here. Even simpler would be to scan the document and run OCR. The massive problem with both approaches (WYSIWYG and scanning) is that you can't generalize any of it. You're doomed repeating it forever.
(I'll also note the other significant issue with this study: when the ratings provided by participants came out opposite of their test results, they attributed it to irrational bias.)

https://www.nature.com/articles/d41586-019-01796-1
Over the past few years however, the line between the tools has blurred. In 2017, Microsoft made it possible to use LaTeX’s equation-writing syntax directly in Word, and last year it scrapped Word’s own equation editor. Other text editors also support elements of LaTeX, allowing newcomers to use as much or as little of the language as they like.

https://news.ycombinator.com/item?id=20191348
study  hmm  academia  writing  publishing  yak-shaving  technical-writing  software  tools  comparison  latex  scholar  regularizer  idk  microsoft  evidence-based  science  desktop  time  efficiency  multi  hn  commentary  critique  news  org:sci  flux-stasis  duplication  metrics  biases 
june 2019 by nhaliday
The End of the Editor Wars » Linux Magazine
Moreover, even if you assume a broad margin of error, the polls aren't even close. With all the various text editors available today, Vi and Vim continue to be the choice of over a third of users, while Emacs is well back in the pack, no longer a competitor for the most popular text editor.

https://www.quora.com/Are-there-more-Emacs-or-Vim-users
I believe Vim is actually more popular, but it's hard to find any real data on it. The best source I've seen is the annual StackOverflow developer survey where 15.2% of developers used Vim compared to a mere 3.2% for Emacs.

Oddly enough, the report noted that "Data scientists and machine learning developers are about 3 times more likely to use Emacs than any other type of developer," which is not necessarily what I would have expected.

[ed. NB: Vim still dominates overall.]

https://pinboard.in/u:nhaliday/b:6adc1b1ef4dc

Time To End The vi/Emacs Debate: https://cacm.acm.org/blogs/blog-cacm/226034-time-to-end-the-vi-emacs-debate/fulltext

Vim, Emacs and their forever war. Does it even matter any more?: https://blog.sourcerer.io/vim-emacs-and-their-forever-war-does-it-even-matter-any-more-697b1322d510
Like an episode of “Silicon Valley”, a discussion of Emacs vs. Vim used to have a polarizing effect that would guarantee a stimulating conversation, regardless of an engineer’s actual alignment. But nowadays, diehard Emacs and Vim users are getting much harder to find. Maybe I’m in the wrong orbit, but looking around today, I see that engineers are equally or even more likely to choose any one of a number of great (for any given definition of ‘great’) modern editors or IDEs such as Sublime Text, Visual Studio Code, Atom, IntelliJ (… or one of its siblings), Brackets, Visual Studio or Xcode, to name a few. It’s not surprising really — many top engineers weren’t even born when these editors were at version 1.0, and GUIs (for better or worse) hadn’t been invented.

...

… both forums have high traffic and up-to-the-minute comment and discussion threads. Some of the available statistics paint a reasonably healthy picture — Stackoverflow’s 2016 developer survey ranks Vim 4th out of 24 with 26.1% of respondents in the development environments category claiming to use it. Emacs came 15th with 5.2%. In combination, over 30% is, actually, quite impressive considering they’ve been around for several decades.

What’s odd, however, is that if you ask someone — say a random developer — to express a preference, the likelihood is that they will favor one or the other even if they have used neither in anger. Maybe the meme has spread so widely that all responses are now predominantly ritualistic, and represent something more fundamental than people’s mere preference for an editor? There’s a rather obvious political hypothesis waiting to be made — that Emacs is the leftist, socialist, centralized state, while Vim represents the right and the free market, specialization and capitalism red in tooth and claw.

How is Emacs/Vim used in companies like Google, Facebook, or Quora? Are there any libraries or tools they share in public?: https://www.quora.com/How-is-Emacs-Vim-used-in-companies-like-Google-Facebook-or-Quora-Are-there-any-libraries-or-tools-they-share-in-public
In Google there's a fair amount of vim and emacs. I would say at least every other engineer uses one or another.

Among Software Engineers, emacs seems to be more popular, about 2:1. Among Site Reliability Engineers, vim is more popular, about 9:1.
--
People use both at Facebook, with (in my opinion) slightly better tooling for Emacs than Vim. We share a master.emacs and master.vimrc file, which contains the bare essentials (like syntactic highlighting for the Hack language). We also share a Ctags file that's updated nightly with a cron script.

Beyond the essentials, there’s a group for Emacs users at Facebook that provides tips, tricks, and major-modes created by people at Facebook. That’s where Adam Hupp first developed his excellent mural-mode (ahupp/mural), which does for Ctags what ido did for file finding and buffer switching.
--
For emacs, it was very informal at Google. There wasn't a huge community of Emacs users at Google, so there wasn't much more than a wiki and a couple language styles matching Google's style guides.

https://trends.google.com/trends/explore?date=all&geo=US&q=%2Fm%2F07zh7,%2Fm%2F01yp0m

https://www.quora.com/Why-is-interest-in-Emacs-dropping
And it is still that. It’s just that emacs is no longer unique, and neither is Lisp.

Dynamically typed scripting languages with garbage collection are a dime a dozen now. Anybody in their right mind developing an extensible text editor today would just use python, ruby, lua, or JavaScript as the extension language and get all the power of Lisp combined with vibrant user communities and millions of lines of ready-made libraries that Stallman and Steele could only dream of in the 70s.

In fact, in many ways emacs and elisp have fallen behind: 40 years after Lambda, the Ultimate Imperative, elisp is still dynamically scoped, and it still doesn’t support multithreading — when I try to use dired to list the files on a slow NFS mount, the entire editor hangs just as thoroughly as it might have in the 1980s. And when I say “doesn’t support multithreading,” I don’t mean there is some other clever trick for continuing to do work while waiting on a system call, like asynchronous callbacks or something. There’s start-process which forks a whole new process, and that’s about it. It’s a concurrency model straight out of 1980s UNIX land.

But being essentially just a decent text editor has robbed emacs of much of its competitive advantage. In a world where every developer tool is scriptable with languages and libraries an order of magnitude more powerful than cranky old elisp, the reason to use emacs is not that it lets a programmer hit a button and evaluate the current expression interactively (which must have been absolutely amazing at one point in the past).
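
[ed.: a toy sketch of the "any modern scripting language works as an extension language" architecture described above: a host exposing a command/hook registry that extensions register plain functions into. All names are invented for illustration; this is not any real editor's API.]

```python
from collections import defaultdict

class Editor:
    """Minimal host: a text buffer plus extension points."""

    def __init__(self):
        self.buffer = []
        self.commands = {}              # command name -> callable
        self.hooks = defaultdict(list)  # event name -> list of callbacks

    def register_command(self, name, fn):
        self.commands[name] = fn

    def add_hook(self, event, fn):
        self.hooks[event].append(fn)

    def insert_line(self, text):
        self.buffer.append(text)
        for fn in self.hooks["after-insert"]:
            fn(self, text)

    def run(self, name, *args):
        return self.commands[name](self, *args)

# An "extension" is just ordinary code in the host's scripting language.
def upcase_buffer(ed):
    ed.buffer = [line.upper() for line in ed.buffer]

ed = Editor()
ed.register_command("upcase-buffer", upcase_buffer)
ed.add_hook("after-insert", lambda ed, text: print("inserted:", text))
ed.insert_line("hello world")
ed.run("upcase-buffer")
print(ed.buffer)  # ['HELLO WORLD']
```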

https://www.reddit.com/r/emacs/comments/bh5kk7/why_do_many_new_users_still_prefer_vim_over_emacs/

more general comparison, not just popularity:
Differences between Emacs and Vim: https://stackoverflow.com/questions/1430164/differences-between-Emacs-and-vim

https://www.reddit.com/r/emacs/comments/9hen7z/what_are_the_benefits_of_emacs_over_vim/

https://unix.stackexchange.com/questions/986/what-are-the-pros-and-cons-of-vim-and-emacs

https://www.quora.com/Why-is-Vim-the-programmers-favorite-editor
- Adrien Lucas Ecoffet,

Because it is hard to use. Really.

However, the second part of this sentence applies to just about every good editor out there: if you really learn Sublime Text, you will become super productive. If you really learn Emacs, you will become super productive. If you really learn Visual Studio… you get the idea.

Here’s the thing though, you never actually need to really learn your text editor… Unless you use vim.

...

For many people new to programming, this is the first time they have been a power user of… well, anything! And because they’ve been told how great Vim is, many of them will keep at it and actually become productive, not because Vim is particularly more productive than any other editor, but because it didn’t provide them with a way to not be productive.

They then go on to tell their friends how great Vim is, and their friends go on to become power users and tell their friends in turn, and so forth. All these people believe they became productive because they changed their text editor. Little do they realize that they became productive because their text editor changed them[1].

This is in no way a criticism of Vim. I myself was a beneficiary of such a phenomenon when I learned to type using the Dvorak layout: at that time, I believed that Dvorak would help you type faster. Now I realize the evidence is mixed and that Dvorak might not be much better than Qwerty. However, learning Dvorak forced me to develop good typing habits because I could no longer rely on looking at my keyboard (since I was still using a Qwerty physical keyboard), and this has made me a much more productive typist.

Technical Interview Performance by Editor/OS/Language: https://triplebyte.com/blog/technical-interview-performance-by-editor-os-language
[ed.: I'm guessing this is confounded to all hell.]

The #1 most common editor we see used in interviews is Sublime Text, with Vim close behind.

Emacs represents a fairly small market share today at just about a quarter the userbase of Vim in our interviews. This nicely matches the 4:1 ratio of Google Search Trends for the two editors.

...

Vim takes the prize here, but PyCharm and Emacs are close behind. We’ve found that users of these editors tend to pass our interview at an above-average rate.

On the other end of the spectrum is Eclipse: it appears that someone using either Vim or Emacs is more than twice as likely to pass our technical interview as an Eclipse user.

...

In this case, we find that the average Ruby, Swift, and C# users tend to be stronger, with Python and Javascript in the middle of the pack.

...

Here’s what happens after we select engineers to work with and send them to onsites:

[Python does best.]

There are no wild outliers here, but let’s look at the C++ segment. While C++ programmers have the most challenging time passing Triplebyte’s technical interview on average, the ones we choose to work with tend to have a relatively easier time getting offers at each onsite.

The Rise of Microsoft Visual Studio Code: https://triplebyte.com/blog/editor-report-the-rise-of-visual-studio-code
This chart shows the rates at which each editor's users pass our interview compared to the mean pass rate for all candidates. First, notice the preeminence of Emacs and Vim! Engineers who use these editors pass our interview at significantly higher rates than other engineers. And the effect size is not small. Emacs users pass our interview at a rate 50… [more]
news  linux  oss  tech  editors  devtools  tools  comparison  ranking  flux-stasis  trends  ubiquity  unix  increase-decrease  multi  q-n-a  qra  data  poll  stackex  sv  facebook  google  integration-extension  org:med  politics  stereotypes  coalitions  decentralized  left-wing  right-wing  chart  scale  time-series  distribution  top-n  list  discussion  ide  parsimony  intricacy  cost-benefit  tradeoffs  confounding  analysis  crosstab  pls  python  c(pp)  jvm  microsoft  golang  hmm  correlation  debate  critique  quora  contrarianism  ecosystem  DSL  techtariat  org:com  org:nat  cs 
june 2019 by nhaliday
One week of bugs
If I had to guess, I'd say I probably work around hundreds of bugs in an average week, and thousands in a bad week. It's not unusual for me to run into a hundred new bugs in a single week. But I often get skepticism when I mention that I run into multiple new (to me) bugs per day, and that this is inevitable if we don't change how we write tests. Well, here's a log of one week of bugs, limited to bugs that were new to me that week. After a brief description of the bugs, I'll talk about what we can do to improve the situation. The obvious answer is to spend more effort on testing, but everyone already knows we should do that and no one does it. That doesn't mean it's hopeless, though.

...

Here's where I'm supposed to write an appeal to take testing more seriously and put real effort into it. But we all know that's not going to work. It would take 90k LOC of tests to get Julia to be as well tested as a poorly tested prototype (falsely assuming linear complexity in size). That's two person-years of work, not even including time to debug and fix bugs (which probably brings it closer to four or five years). Who's going to do that? No one. Writing tests is like writing documentation. Everyone already knows you should do it. Telling people they should do it adds zero information.

Given that people aren't going to put any effort into testing, what's the best way to do it?

Property-based testing. Generative testing. Random testing. Concolic Testing (which was done long before the term was coined). Static analysis. Fuzzing. Statistical bug finding. There are lots of options. Some of them are actually the same thing because the terminology we use is inconsistent and buggy. I'm going to arbitrarily pick one to talk about, but they're all worth looking into.

...

There are a lot of great resources out there, but if you're just getting started, I found this description of types of fuzzers to be one of the most helpful (and simplest) things I've read.

John Regehr has a Udacity course on software testing. I haven't worked through it yet (Pablo Torres just pointed to it), but given the quality of Dr. Regehr's writing, I expect the course to be good.

For more on my perspective on testing, there's this.

Everything's broken and nobody's upset: https://www.hanselman.com/blog/EverythingsBrokenAndNobodysUpset.aspx
https://news.ycombinator.com/item?id=4531549

https://hypothesis.works/articles/the-purpose-of-hypothesis/
From the perspective of a user, the purpose of Hypothesis is to make it easier for you to write better tests.

From my perspective as the primary author, that is of course also a purpose of Hypothesis. I write a lot of code, it needs testing, and the idea of trying to do that without Hypothesis has become nearly unthinkable.

But, on a large scale, the true purpose of Hypothesis is to drag the world kicking and screaming into a new and terrifying age of high quality software.

Software is everywhere. We have built a civilization on it, and it’s only getting more prevalent as more services move online and embedded and “internet of things” devices become cheaper and more common.

Software is also terrible. It’s buggy, it’s insecure, and it’s rarely well thought out.

This combination is clearly a recipe for disaster.

The state of software testing is even worse. It’s uncontroversial at this point that you should be testing your code, but it’s a rare codebase whose authors could honestly claim that they feel its testing is sufficient.

Much of the problem here is that it’s too hard to write good tests. Tests take up a vast quantity of development time, but they mostly just laboriously encode exactly the same assumptions and fallacies that the authors had when they wrote the code, so they miss exactly the same bugs that the authors missed when they wrote the code.
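
[ed.: for the curious, a minimal sketch of what a Hypothesis property-based test looks like, assuming `pip install hypothesis`; the round-trip property below is my own illustrative example, not taken from either article.]

```python
from hypothesis import given, strategies as st

def rle_encode(s):
    """Run-length encode a string into (char, count) pairs."""
    pairs = []
    for ch in s:
        if pairs and pairs[-1][0] == ch:
            pairs[-1] = (ch, pairs[-1][1] + 1)
        else:
            pairs.append((ch, 1))
    return pairs

def rle_decode(pairs):
    return "".join(ch * n for ch, n in pairs)

@given(st.text())  # Hypothesis generates many strings, including nasty edge cases
def test_decode_inverts_encode(s):
    # If this ever fails, Hypothesis shrinks the input to a minimal counterexample.
    assert rle_decode(rle_encode(s)) == s

if __name__ == "__main__":
    test_decode_inverts_encode()  # also runnable under pytest
    print("property held for all generated inputs")
```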

Preventing the Collapse of Civilization [video]: https://news.ycombinator.com/item?id=19945452
- Jonathan Blow

NB: DevGAMM is a game industry conference

- loss of technological knowledge (Antikythera mechanism, aqueducts, etc.)
- hardware driving most gains, not software
- software's actually less robust, often poorly designed and overengineered these days
- *list of bugs he's encountered recently*:
https://youtu.be/pW-SOdj4Kkk?t=1387
- knowledge of trivia becomes [ed.: missing the word "valued" here, I think?] more than general, deep knowledge
- does at least acknowledge value of DRY, reusing code, abstraction saving dev time
techtariat  dan-luu  tech  software  error  list  debugging  linux  github  robust  checking  oss  troll  lol  aphorism  webapp  email  google  facebook  games  julia  pls  compilers  communication  mooc  browser  rust  programming  engineering  random  jargon  formal-methods  expert-experience  prof  c(pp)  course  correctness  hn  commentary  video  presentation  carmack  pragmatic  contrarianism  pessimism  sv  unix  rhetoric  critique  worrydream  hardware  performance  trends  multiplicative  roots  impact  comparison  history  iron-age  the-classics  mediterranean  conquest-empire  gibbon  technology  the-world-is-just-atoms  flux-stasis  increase-decrease  graphics  hmm  idk  systems  os  abstraction  intricacy  worse-is-better/the-right-thing  build-packaging  microsoft  osx  apple  reflection  assembly  things  knowledge  detail-architecture  thick-thin  trivia  info-dynamics  caching  frameworks  generalization  systematic-ad-hoc  universalism-particularism  analytical-holistic  structure  tainter  libraries  tradeoffs  prepping  threat-modeling  network-structure  writing  risk  local-glob 
may 2019 by nhaliday
Is backing up a MySQL database in Git a good idea? - Software Engineering Stack Exchange
*no: list of alternatives*

https://stackoverflow.com/questions/115369/do-you-use-source-control-for-your-database-items
Top 2 answers contradict each other but both agree that you should at least version the schema and other scripts.

My impression is that the guy linked in the accepted answer is arguing for a minority practice.
q-n-a  stackex  programming  engineering  dbs  vcs  git  debate  critique  backup  best-practices  flux-stasis  nitty-gritty  gotchas  init  advice  code-organizing  multi  hmm  idk  contrarianism  rhetoric  links  system-design 
may 2019 by nhaliday
Why is reverse debugging rarely used? - Software Engineering Stack Exchange
(time travel)

For one, running in debug mode with recording on is very expensive compared to even normal debug mode; it also consumes a lot more memory.

It is easier to decrease the granularity from line level to function call level. For example, the standard debugger in eclipse allows you to "drop to frame," which is essentially a jump back to the start of the function with a reset of all the parameters (nothing done on the heap is reverted, and finally blocks are not executed, so it is not a true reverse debugger; be careful about that).

Note that this has been available for several years now and works hand in hand with hot-code replacement.
--
As mentioned already, performance is key e.g. with gdb's reversible debugging, running something like gzip sees a slowdown of 50,000x compared to running natively. There are commercial alternatives however: I work for Undo undo.io, and our UndoDB product does the same but with a slowdown of less than 2x. There are other commercial reversible debuggers available too.

https://undo.io
Based on GDB, UndoDB supports source-level debugging for applications written in any language supported by GDB, including C/C++, Rust and Ada.
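
[ed.: a toy Python sketch of why recording for reverse debugging costs so much: snapshotting program state at every executed line, which is roughly what the slowdowns quoted above are paying for. Illustration only; this is not how gdb's process record or UndoDB actually work.]

```python
import sys

class Recorder:
    """Record (function, line, locals) at every executed line so execution
    can be inspected "backwards" after the fact."""

    def __init__(self):
        self.history = []

    def _trace(self, frame, event, arg):
        if event == "line":
            # Copying locals on every single line is the expensive part.
            self.history.append(
                (frame.f_code.co_name, frame.f_lineno, dict(frame.f_locals))
            )
        return self._trace

    def run(self, func, *args):
        sys.settrace(self._trace)
        try:
            return func(*args)
        finally:
            sys.settrace(None)

def buggy_sum(xs):
    total = 0
    for x in xs:
        total += x * x  # suppose we meant plain x here
    return total

rec = Recorder()
print(rec.run(buggy_sum, [1, 2, 3]))  # 14, not the expected 6
# "Reverse debugging": walk the recorded snapshots from the end backwards.
for name, lineno, snapshot in reversed(rec.history):
    print(name, lineno, snapshot)
```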
q-n-a  stackex  programming  engineering  impetus  debugging  time  increase-decrease  worrydream  hci  devtools  direction  roots  money-for-time  review  comparison  critique  tools  software  multi  systems  c(pp)  rust  state 
may 2019 by nhaliday
Applied Cryptography Engineering — Quarrelsome
You should own Ferguson and Schneier’s follow-up, Cryptography Engineering (C.E.). Written partly in penance, the new book deftly handles material the older book stumbles over. C.E. wants to teach you the right way to work with cryptography without wasting time on GOST and El Gamal.
techtariat  books  recommendations  critique  security  crypto  best-practices  gotchas  programming  engineering  advice  hn 
may 2019 by nhaliday
The Architect as Totalitarian: Le Corbusier’s baleful influence | City Journal
Le Corbusier was to architecture what Pol Pot was to social reform. In one sense, he had less excuse for his activities than Pol Pot: for unlike the Cambodian, he possessed great talent, even genius. Unfortunately, he turned his gifts to destructive ends, and it is no coincidence that he willingly served both Stalin and Vichy.
news  org:mag  right-wing  albion  gnon  isteveish  architecture  essay  rhetoric  critique  contrarianism  communism  comparison  aphorism  modernity  authoritarianism  universalism-particularism  europe  gallic  history  mostly-modern  urban-rural  revolution  art  culture 
april 2019 by nhaliday
Links 3/19: Linkguini | Slate Star Codex
How did the descendants of the Mayan Indians end up in the Eastern Orthodox Church?

Does Parental Quality Matter? Study using three sources of parental variation that are mostly immune to genetic confounding finds that “the strong parent-child correlation in education is largely causal”. For example, “the parent-child correlation in education is stronger with the parent that spends more time with the child”.

Before and after pictures of tech leaders like Jeff Bezos, Elon Musk, and Sergey Brin suggest they’re taking supplemental testosterone. And though it may help them keep looking young, Palladium points out that there might be other effects from having some of our most powerful businessmen on a hormone that increases risk-taking and ambition. They ask whether the new availability of testosterone supplements is prolonging Silicon Valley businessmen’s “brash entrepreneur” phase well past the point where they would normally become mature respectable elders. But it also hints at an almost opposite take: average testosterone levels have been falling for decades, so at this point these businessmen would be the only “normal” (by 1950s standards) men out there, and everyone else would be unprecedentedly risk-averse and boring. Paging Peter Thiel and everyone else who talks about how things “just worked better” in Eisenhower’s day.

China’s SesameCredit social monitoring system, widely portrayed as dystopian, has an 80% approval rate in China (vs. 19% neutral and 1% disapproval). The researchers admit that although all data is confidential and they are not affiliated with the Chinese government, their participants might not believe that confidently enough to answer honestly.

I know how much you guys love attacking EAs for “pathological altruism” or whatever terms you’re using nowadays, so here’s an article where rationalist community member John Beshir describes his experience getting malaria on purpose to help researchers test a vaccine.

Some evidence against the theory that missing fathers cause earlier menarche.

John Nerst of EverythingStudies’ political compass.
ratty  yvain  ssc  links  multi  biodet  behavioral-gen  regularizer  causation  contrarianism  education  correlation  parenting  developmental  direct-indirect  time  religion  christianity  eastern-europe  russia  latin-america  other-xtian  endocrine  trends  malaise  stagnation  thiel  barons  tech  sv  business  rot  zeitgeist  outcome-risk  critique  environmental-effects  poll  china  asia  authoritarianism  alt-inst  sentiment  policy  n-factor  individualism-collectivism  pro-rata  technocracy  managerial-state  civil-liberty  effective-altruism  subculture  wtf  disease  parasites-microbiome  patho-altruism  self-interest  lol  africa  experiment  medicine  expression-survival  things  dimensionality  degrees-of-freedom  sex  composition-decomposition  analytical-holistic  systematic-ad-hoc  coordination  alignment  cooperate-defect  politics  coalitions  ideology  left-wing  right-wing  summary  exit-voice  redistribution  randy-ayndy  welfare-state 
march 2019 by nhaliday
Jordan Peterson is Wrong About the Case for the Left
I suggest that the tension of which he speaks is fully formed and self-contained completely within conservatism. Balancing those two forces is, in fact, what conservatism is all about. Thomas Sowell, in A Conflict of Visions: Ideological Origins of Political Struggles describes the conservative outlook as (paraphrasing): “There are no solutions, only tradeoffs.”

The real tension is between balance on the right and imbalance on the left.

In Towards a Cognitive Theory of Politics in the online magazine Quillette I make the case that left and right are best understood as psychological profiles consisting of 1) cognitive style, and 2) moral matrix.

There are two predominant cognitive styles and two predominant moral matrices.

The two cognitive styles are described by Arthur Herman in his book The Cave and the Light: Plato Versus Aristotle, and the Struggle for the Soul of Western Civilization, in which Plato and Aristotle serve as metaphors for them. These two quotes from the book summarize the two styles:

Despite their differences, Plato and Aristotle agreed on many things. They both stressed the importance of reason as our guide for understanding and shaping the world. Both believed that our physical world is shaped by certain eternal forms that are more real than matter. The difference was that Plato’s forms existed outside matter, whereas Aristotle’s forms were unrealizable without it. (p. 61)

The twentieth century’s greatest ideological conflicts do mark the violent unfolding of a Platonist versus Aristotelian view of what it means to be free and how reason and knowledge ultimately fit into our lives (p.539-540)

The Platonic cognitive style amounts to pure abstract reason, “unconstrained” by reality. It has no limiting principle. It is imbalanced. Aristotelian thinking also relies on reason, but it is “constrained” by empirical reality. It has a limiting principle. It is balanced.

The two moral matrices are described by Jonathan Haidt in his book The Righteous Mind: Why Good People Are Divided by Politics and Religion. Moral matrices are collections of moral foundations, which are psychological adaptations of social cognition created in us by hundreds of millions of years of natural selection as we evolved into the social animal. There are six moral foundations. They are:

Care/Harm
Fairness/Cheating
Liberty/Oppression
Loyalty/Betrayal
Authority/Subversion
Sanctity/Degradation
The first three moral foundations are called the “individualizing” foundations because they’re focused on the autonomy and well being of the individual person. The second three foundations are called the “binding” foundations because they’re focused on helping individuals form into cooperative groups.

One of the two predominant moral matrices relies almost entirely on the individualizing foundations, and of those mostly just care. It is all individualizing all the time. No balance. The other moral matrix relies on all of the moral foundations relatively equally; individualizing and binding in tension. Balanced.

The leftist psychological profile is made from the imbalanced Platonic cognitive style in combination with the first, imbalanced, moral matrix.

The conservative psychological profile is made from the balanced Aristotelian cognitive style in combination with the balanced moral matrix.

It is not true that the tension between left and right is a balance between the defense of the dispossessed and the defense of hierarchies.

It is true that the tension between left and right is between an imbalanced worldview unconstrained by empirical reality and a balanced worldview constrained by it.

A Venn Diagram of the two psychological profiles looks like this:
commentary  albion  canada  journos-pundits  philosophy  politics  polisci  ideology  coalitions  left-wing  right-wing  things  phalanges  reason  darwinian  tradition  empirical  the-classics  big-peeps  canon  comparison  thinking  metabuch  skeleton  lens  psychology  social-psych  morality  justice  civil-liberty  authoritarianism  love-hate  duty  tribalism  us-them  sanctity-degradation  revolution  individualism-collectivism  n-factor  europe  the-great-west-whale  pragmatic  prudence  universalism-particularism  analytical-holistic  nationalism-globalism  social-capital  whole-partial-many  pic  intersection-connectedness  links  news  org:mag  letters  rhetoric  contrarianism  intricacy  haidt  scitariat  critique  debate  forms-instances  reduction  infographic  apollonian-dionysian  being-becoming  essence-existence 
july 2018 by nhaliday
Who We Are | West Hunter
I’m going to review David Reich’s new book, Who We Are and How We Got Here. Extensively: in a sense I’ve already been doing this for a long time. Probably there will be a podcast. The GoFundMe link is here. You can also send money via Paypal (Use the donate button), or bitcoins to 1Jv4cu1wETM5Xs9unjKbDbCrRF2mrjWXr5. In-kind donations, such as orichalcum or mithril, are always appreciated.

This is the book about the application of ancient DNA to prehistory and history.

height difference between northern and southern europeans: https://westhunt.wordpress.com/2018/03/29/who-we-are-1/
mixing, genocide of males, etc.: https://westhunt.wordpress.com/2018/03/29/who-we-are-2-purity-of-essence/
rapid change in polygenic traits (appearance by Kevin Mitchell and funny jab at Brad Delong ("regmonkey")): https://westhunt.wordpress.com/2018/03/30/rapid-change-in-polygenic-traits/
schiz, bipolar, and IQ: https://westhunt.wordpress.com/2018/03/30/rapid-change-in-polygenic-traits/#comment-105605
Dan Graur being dumb: https://westhunt.wordpress.com/2018/04/02/the-usual-suspects/
prediction of neanderthal mixture and why: https://westhunt.wordpress.com/2018/04/03/who-we-are-3-neanderthals/
New Guineans tried to use Denisovan admixture to avoid UN sanctions (by "not being human"): https://westhunt.wordpress.com/2018/04/04/who-we-are-4-denisovans/
also some commentary on decline of Out-of-Africa, including:
"Homo Naledi, a small-brained homonin identified from recently discovered fossils in South Africa, appears to have hung around way later that you’d expect (up to 200,000 years ago, maybe later) than would be the case if modern humans had occupied that area back then. To be blunt, we would have eaten them."

Live Not By Lies: https://westhunt.wordpress.com/2018/04/08/live-not-by-lies/
Next he slams people that suspect that upcoming genetic analysis will, in most cases, confirm traditional stereotypes about race – the way the world actually looks.

The people Reich dumps on are saying perfectly reasonable things. He criticizes Henry Harpending for saying that he’d never seen an African with a hobby. Of course, Henry had actually spent time in Africa, and that’s what he’d seen. The implication is that people in Malthusian farming societies – which Africa was not – were selected to want to work, even where there was no immediate necessity to do so. Thus hobbies, something like a gerbil running in an exercise wheel.

He criticized Nicholas Wade, for saying that different races have different dispositions. Wade’s book wasn’t very good, but of course personality varies by race: Darwin certainly thought so. You can see differences at birth. Cover a baby’s nose with a cloth: Chinese and Navajo babies quietly breathe through their mouth, European and African babies fuss and fight.

Then he attacks Watson, for asking when Reich was going to look at Jewish genetics – the kind that has led to greater-than-average intelligence. Watson was undoubtedly trying to get a rise out of Reich, but it’s a perfectly reasonable question. Ashkenazi Jews are smarter than the average bear and everybody knows it. Selection is the only possible explanation, and the conditions in the Middle Ages – white-collar job specialization and a high degree of endogamy – were just what the doctor ordered.

Watson’s a prick, but he’s a great prick, and what he said was correct. Henry was a prince among men, and Nick Wade is a decent guy as well. Reich is totally out of line here: he’s being a dick.

Now Reich may be trying to burnish his anti-racist credentials, which surely need some renewal after having pointed out that race as colloquially used is pretty reasonable, there’s no reason pops can’t be different, people that said otherwise (like Lewontin, Gould, Montagu, etc.) were lying, Aryans conquered Europe and India, while we’re tied to the train tracks with scary genetic results coming straight at us. I don’t care: he’s being a weasel, slandering the dead and abusing the obnoxious old genius who laid the foundations of his field. Reich will also get old someday: perhaps he too will someday lose track of all the nonsense he’s supposed to say, or just stop caring. Maybe he already has… I’m pretty sure that Reich does not like lying – which is why he wrote this section of the book (not at all logically necessary for his exposition of the ancient DNA work) but the complex juggling of lies and truth required to get past the demented gatekeepers of our society may not be his forte. It has been said that if it was discovered that someone in the business was secretly an android, David Reich would be the prime suspect. No Talleyrand he.

https://westhunt.wordpress.com/2018/04/12/who-we-are-6-the-americas/
The population that accounts for the vast majority of Native American ancestry, which we will call Amerinds, came into existence somewhere in northern Asia. It was formed from a mix of Ancient North Eurasians and a population related to the Han Chinese – about 40% ANE and 60% proto-Chinese. It looks as if most of the paternal ancestry was from the ANE, while almost all of the maternal ancestry was from the proto-Han. [Aryan-Transpacific ?!?] This formation story – ANE boys, East-end girls – is similar to the formation story for the Indo-Europeans.

https://westhunt.wordpress.com/2018/04/18/who-we-are-7-africa/
In some ways, on some questions, learning more from genetics has left us less certain. At this point we really don’t know where anatomically modern humans originated. Greater genetic variety in sub-Saharan Africa has traditionally been considered a sign that AMH originated there, but it is possible that we originated elsewhere, perhaps in North Africa or the Middle East, and gained extra genetic variation when we moved into sub-Saharan Africa and mixed with various archaic groups that already existed. One consideration is that finding recent archaic admixture in a population may well be a sign that modern humans didn’t arise in that region (like language substrates) – which makes South Africa and West Africa look less likely. The long-continued existence of homo naledi in South Africa suggests that modern humans may not have been there for all that long – if we had co-existed with homo naledi, they probably wouldn’t have lasted long. The oldest known skull that is (probably) AMH was recently found in Morocco, while modern human remains, already known from about 100,000 years ago in Israel, have recently been found in northern Saudi Arabia.

While work by Nick Patterson suggests that modern humans were formed by a fusion between two long-isolated populations, a bit less than half a million years ago.

So: genomics has made the recent history of Africa pretty clear. Bantu agriculturalists expanded and replaced hunter-gatherers, farmers and herders from the Middle East settled North Africa, Egypt, and northeast Africa, while Nilotic herdsmen expanded south from the Sudan. There are traces of earlier patterns and peoples, but today, only traces. As for questions back further in time, such as the origins of modern humans – we thought we knew, and now we know we don’t. But that’s progress.

https://westhunt.wordpress.com/2018/04/18/reichs-journey/
David Reich’s professional path must have shaped his perspective on the social sciences. Look at the record. He starts his professional career examining the role of genetics in the elevated prostate cancer risk seen in African-American men. Various social-science fruitcakes oppose him even looking at the question of ancestry (African vs European). But they were wrong: certain African-origin alleles explain the increased risk. Anthropologists (and human geneticists) were sure (based on nothing) that modern humans hadn’t interbred with Neanderthals – but of course that happened. Anthropologists and archaeologists knew that Gustaf Kossinna couldn’t have been right when he said that widespread material culture corresponded to widespread ethnic groups, and that migration was the primary explanation for changes in the archaeological record – but he was right. They knew that the Indo-European languages just couldn’t have been imposed by fire and sword – but Reich’s work proved them wrong. Lots of people – the usual suspects plus Hindu nationalists – were sure that the AIT (Aryan Invasion Theory) was wrong, but it looks pretty good today.

Some sociologists believed that caste in India was somehow imposed or significantly intensified by the British – but it turns out that most jatis have been almost perfectly endogamous for two thousand years or more…

It may be that Reich doesn’t take these guys too seriously anymore. Why should he?

varnas, jatis, Aryan invasion theory: https://westhunt.wordpress.com/2018/04/22/who-we-are-8-india/

europe and EEF+WHG+ANE: https://westhunt.wordpress.com/2018/05/01/who-we-are-9-europe/

https://www.nationalreview.com/2018/03/book-review-david-reich-human-genes-reveal-history/
The massive mixture events that occurred in the recent past to give rise to Europeans and South Asians, to name just two groups, were likely “male mediated.” That’s another way of saying that men on the move took local women as brides or concubines. In the New World there are many examples of this, whether it be among African Americans, where most European ancestry seems to come through men, or in Latin America, where conquistadores famously took local women as paramours. Both of these examples are disquieting, and hint at the deep structural roots of patriarchal inequality and social subjugation that form the backdrop for the emergence of many modern peoples.
west-hunter  scitariat  books  review  sapiens  anthropology  genetics  genomics  history  antiquity  iron-age  world  europe  gavisti  aDNA  multi  politics  culture-war  kumbaya-kult  social-science  academia  truth  westminster  environmental-effects  embodied  pop-diff  nordic  mediterranean  the-great-west-whale  germanic  the-classics  shift  gene-flow  homo-hetero  conquest-empire  morality  diversity  aphorism  migration  migrant-crisis  EU  africa  MENA  gender  selection  speed  time  population-genetics  error  concrete  econotariat  economics  regression  troll  lol  twitter  social  media  street-fighting  methodology  robust  disease  psychiatry  iq  correlation  usa  obesity  dysgenics  education  track-record  people  counterexample  reason  thinking  fisher  giants  old-anglo  scifi-fantasy  higher-ed  being-right  stories  reflection  critique  multiplicative  iteration-recursion  archaics  asia  developing-world  civil-liberty  anglo  oceans  food  death  horror  archaeology  gnxp  news  org:mag  right-wing  age-of-discovery  latin-america  ea 
march 2018 by nhaliday
China’s Ideological Spectrum
We find that public preferences are weakly constrained, and the configuration of preferences is multidimensional, but the latent traits of these dimensions are highly correlated. Those who prefer authoritarian rule are more likely to support nationalism, state intervention in the economy, and traditional social values; those who prefer democratic institutions and values are more likely to support market reforms but less likely to be nationalistic and less likely to support traditional social values. This latter set of preferences appears more in provinces with higher levels of development and among wealthier and better-educated respondents.

Enlightened One-Party Rule? Ideological Differences between Chinese Communist Party Members and the Mass Public: https://journals.sagepub.com/doi/abs/10.1177/1065912919850342
A popular view of nondemocratic regimes is that they draw followers mainly from those with an illiberal, authoritarian mind-set. We challenge this view by arguing that there exist a different class of autocracies that rule with a relatively enlightened base. Leveraging multiple nationally representative surveys from China over the past decade, we substantiate this claim by estimating and comparing the ideological preferences of Chinese Communist Party members and ordinary citizens. We find that party members on average hold substantially more modern and progressive views than the public on issues such as gender equality, political pluralism, and openness to international exchange. We also explore two mechanisms that may account for this party–public value gap—selection and socialization. We find that while education-based selection is the most dominant mechanism overall, socialization also plays a role, especially among older and less educated party members.

https://twitter.com/chenchenzh/status/1140929230072623104
https://archive.is/ktcOY
Does this control for wealth and education?
--
Perhaps about half the best educated youth joined party.
pdf  study  economics  polisci  sociology  politics  ideology  coalitions  china  asia  things  phalanges  dimensionality  degrees-of-freedom  markets  democracy  capitalism  communism  authoritarianism  government  leviathan  tradition  values  correlation  exploratory  nationalism-globalism  heterodox  sinosphere  multi  antidemos  class  class-warfare  enlightenment-renaissance-restoration-reformation  left-wing  egalitarianism-hierarchy  gender  contrarianism  hmm  regularizer  poll  roots  causation  endogenous-exogenous  selection  network-structure  education  twitter  social  commentary  critique  backup 
march 2018 by nhaliday
Books 2017 | West Hunter
Arabian Sands
The Aryans
The Big Show
The Camel and the Wheel
Civil War on Western Waters
Company Commander
Double-edged Secrets
The Forgotten Soldier
Genes in Conflict
Hive Mind
The horse, the wheel, and language
The Penguin Atlas of Medieval History
Habitable Planets for Man
The genetical theory of natural selection
The Rise of the Greeks
To Lose a Battle
The Jewish War
Tropical Gangsters
The Forgotten Revolution
Egil’s Saga
Shapers
Time Patrol

Russo: https://westhunt.wordpress.com/2017/12/14/books-2017/#comment-98568
west-hunter  scitariat  books  recommendations  list  top-n  confluence  2017  info-foraging  canon  🔬  ideas  s:*  history  mostly-modern  world-war  britain  old-anglo  travel  MENA  frontier  reflection  europe  gallic  war  sapiens  antiquity  archaeology  technology  divergence  the-great-west-whale  transportation  nature  long-short-run  intel  tradecraft  japan  asia  usa  spearhead  garett-jones  hive-mind  economics  broad-econ  giants  fisher  space  iron-age  medieval  the-classics  civilization  judaism  conquest-empire  africa  developing-world  institutions  science  industrial-revolution  the-trenches  wild-ideas  innovation  speedometer  nordic  mediterranean  speculation  fiction  scifi-fantasy  time  encyclopedic  multi  poast  critique  cost-benefit  tradeoffs  quixotic 
december 2017 by nhaliday
Why do stars twinkle?
According to many astronomers and educators, twinkle (stellar scintillation) is caused by atmospheric structure that works like ordinary lenses and prisms. Pockets of variable temperature - and hence index of refraction - randomly shift and focus starlight, perceived by eye as changes in brightness. Pockets also disperse colors like prisms, explaining the flashes of color often seen in bright stars. Stars appear to twinkle more than planets because they are points of light, whereas the twinkling points on planetary disks are averaged to a uniform appearance. Below, figure 1 is a simulation in glass of the kind of turbulence structure posited in the lens-and-prism theory of stellar scintillation, shown over the Penrose tile floor to demonstrate the random lensing effects.

However appealing and ubiquitous on the internet, this popular explanation is wrong, and my aim is to debunk the myth. This research is mostly about showing that the lens-and-prism theory just doesn't work, but I also have a stellar list of references that explain the actual cause of scintillation, starting with two classic papers by C.G. Little and S. Chandrasekhar.
nibble  org:junk  space  sky  visuo  illusion  explanans  physics  electromag  trivia  cocktail  critique  contrarianism  explanation  waves  simulation  experiment  hmm  magnitude  atmosphere  roots  idk 
december 2017 by nhaliday
Lynn Margulis | West Hunter
Margulis went on to theorize that symbiotic relationships between organisms are the dominant driving force of evolution. There certainly are important examples of this: as far as I know, every complex organism that digests cellulose manages it thru a symbiosis with various prokaryotes. Many organisms with a restricted diet have symbiotic bacteria that provide essential nutrients – aphids, for example. Tall fescue, a popular turf grass on golf courses, carries an endosymbiotic fungus. And so on, and on on.

She went on to oppose neodarwinism, particularly rejecting inter-organismal competition (and population genetics itself). From Wiki: [She also believed that proponents of the standard theory “wallow in their zoological, capitalistic, competitive, cost-benefit interpretation of Darwin – having mistaken him… Neo-Darwinism, which insists on [the slow accrual of mutations by gene-level natural selection], is in a complete funk.”]

...

You might think that Lynn Margulis is an example of someone that could think outside the box because she’d never even been able to find it in the first place – but that’s more true of autistic types [like Dirac or Turing], which I doubt she was in any way. I’d say that some traditional prejudices [dislike of capitalism and individual competition], combined with the sort of general looniness that leaves one open to unconventional ideas, drove her in a direction that bore fruit, more or less by coincidence. A successful creative scientist does not have to be right about everything, or indeed about much of anything: they need to contribute at least one new, true, and interesting thing.

https://westhunt.wordpress.com/2017/11/25/lynn-margulis/#comment-98174
“A successful creative scientist does not have to be right about everything, or indeed about much of anything: they need to contribute at least one new, true, and interesting thing.” Yes – it’s like old bands. As long as they have just one song in heavy rotation on the classic rock stations, they can tour endlessly – it doesn’t matter that they have only one or even no original members performing. A scientific example of this phenomenon is Kary Mullis. He’ll always have PCR, even if a glowing raccoon did greet him with the words, “Good evening, Doctor.”

Nobel Savage: https://www.lrb.co.uk/v21/n13/steven-shapin/nobel-savage
Dancing Naked in the Mind Field by Kary Mullis

jet fuel can't melt steel beams: https://westhunt.wordpress.com/2017/11/25/lynn-margulis/#comment-98201
You have to understand a subject extremely well to make arguments why something couldn’t have happened. The easiest cases involve some purported explanation violating a conservation law of physics: that wasn’t the case here.

Do I think you’re a hotshot, deeply knowledgeable about structural engineering, properties of materials, using computer models, etc? A priori, pretty unlikely. What are the odds that you know as much simple mechanics as I do? a priori, still pretty unlikely. Most likely, you’re talking through your hat.

Next, the conspiracy itself is unlikely: quite a few people would be involved – unlikely that none of them would talk. It’s not that easy to find people that would go along with such a thing, believe it or not. The Communists were pretty good at conspiracy, but people defected, people talked: not just Whittaker Chambers, not just Igor Gouzenko.
west-hunter  scitariat  discussion  people  profile  science  the-trenches  innovation  discovery  ideas  turing  giants  autism  👽  bio  evolution  eden  roots  darwinian  capitalism  competition  cooperate-defect  being-right  info-dynamics  frontier  curiosity  creative  multi  poast  prudence  org:mag  org:anglo  letters  books  review  critique  summary  lol  genomics  social-science  sociology  psychology  psychiatry  ability-competence  rationality  epistemic  reason  events  terrorism  usa  islam  communism  coordination  organizing  russia  dirty-hands  degrees-of-freedom  alignment 
november 2017 by nhaliday
Review of Yuval Harari's Sapiens: A Brief History of Humankind.
https://twitter.com/whyvert/status/928472237052649472
https://archive.is/MPO5Q
Yuval Harari's prominent book Sapiens: A Brief History of Humankind gets a thorough and well deserved fisking by C.R. Hallpike.

For Harari the great innovation that separated us from the apes was what he calls the Cognitive Revolution, around 70,000 years ago when we started migrating out of Africa, which he thinks gave us the same sort of modern minds that we have now. 'At the individual level, ancient foragers were the most knowledgeable and skilful people in history...Survival in that era required superb mental abilities from everyone' (55), and 'The people who carved the Stadel lion-man some 30,000 years ago had the same physical, emotional, and intellectual abilities we have' (44). Not surprisingly, then, 'We'd be able to explain to them everything we know - from the adventures of Alice in Wonderland to the paradoxes of quantum physics - and they could teach us how their people view the world' (23).

It's a sweet idea, and something like this imagined meeting actually took place a few years ago between the linguist Daniel Everett and the Piraha foragers of the Amazon in Brazil (Everett 2008). But far from being able to discuss quantum theory with them, he found that the Piraha couldn't even count, and had no numbers of any kind. They could teach Everett how they saw the world, which was entirely confined to the immediate experience of the here-and-now, with no interest in past or future, or really in anything that could not be seen or touched. They had no myths or stories, so Alice in Wonderland would have fallen rather flat as well.

...

Summing up the book as a whole, one has often had to point out how surprisingly little he seems to have read on quite a number of essential topics. It would be fair to say that whenever his facts are broadly correct they are not new, and whenever he tries to strike out on his own he often gets things wrong, sometimes seriously. So we should not judge Sapiens as a serious contribution to knowledge but as 'infotainment', a publishing event to titillate its readers by a wild intellectual ride across the landscape of history, dotted with sensational displays of speculation, and ending with blood-curdling predictions about human destiny. By these criteria it is a most successful book.
pdf  books  review  expert-experience  critique  sapiens  history  antiquity  anthropology  multi  twitter  social  scitariat  commentary  quotes  attaq  westminster  backup  culture  realness  farmers-and-foragers  language  egalitarianism-hierarchy  inequality  learning  absolute-relative  malthus  tribalism  kinship  leviathan  government  leadership  volo-avolo  social-structure  taxes  studying  technology  religion  theos  sequential  universalism-particularism  antidemos  revolution  enlightenment-renaissance-restoration-reformation  science  europe  the-great-west-whale  age-of-discovery  iron-age  mediterranean  the-classics  reason  empirical  experiment  early-modern  islam  MENA  civic  institutions  the-trenches  innovation  agriculture  gnon 
november 2017 by nhaliday
Is the keyboard faster than the mouse?
Conclusion

It’s entirely possible that the mysterious studies Tog’s org spent $50M on prove that the mouse is faster than the keyboard for all tasks other than raw text input, but there doesn’t appear to be enough information to tell what the actual studies were. There are many public studies on user input, but I couldn’t find any that are relevant to whether or not I should use the mouse more or less at the margin.

When I look at various tasks myself, the results are mixed, and they’re mixed in the way that most programmers I polled predicted. This result is so boring that it would barely be worth mentioning if not for the large groups of people who believe that either the keyboard is always faster than the mouse or vice versa.

Please let me know if there are relevant studies on this topic that I should read! I’m not familiar with the relevant fields, so it’s possible that I’m searching with the wrong keywords and reading the wrong papers.

[ed.: Incidentally it looks like Dan uses Emacs.]
techtariat  dan-luu  engineering  programming  productivity  workflow  hci  hardware  working-stiff  benchmarks  time  time-use  keyboard  ui  ux  editors  critique  debate  meta-analysis  study  summary  commentary  comparison  bangbang 
november 2017 by nhaliday
Brush Up Your Aeschylus
For most of its 100-year heyday, Greek tragedy's preference for the stylized, the abstract and the formal over anything we might think of as naturalistic was reflected both in the circumstances of its production (which, with elaborate masks and costumes and rigidly formalized music and dances, seems to have been a cross between Noh and grand opera) and in the dense, allusive, often hieratic formal language that the tragedians employed, one that fully exploited the considerable suppleness of ancient Greek. Needless to say, this language has posed problems for translators, who usually have to choose between conveying the literal meaning of the Greek -- crucial to understanding tragedy's intellectual and moral themes -- and its considerable poetic allure.

David Grene and Richmond Lattimore, the classicist translators who edited the University of Chicago Press series, which for the past two generations has been the standard American translation, generally opted for clarity of sense -- no doubt in reaction to the flowery, high (and highfalutin) diction that notoriously characterized earlier translations, none more so than the widely consulted Loeb Library translations. ("How camest thou to Earth's prophetic navel?" the Loeb's Medea inquires of someone returning from Delphi.) The Chicago translations, with their crisp, straightforward and dignified English, manage on the whole to convey the meaning, if not always the varied styles, of the originals.

It is primarily in reaction to cautious translations like Chicago's that the University of Pennsylvania Press has begun its own ambitious new project: the reinterpretation of all the extant Greek drama -- both tragedy and comedy -- not by classics scholars but by contemporary poets, many of whom don't know any Greek at all and achieved their translations by consulting earlier English versions. According to the series' co-editor, David R. Slavitt, its aim is to freshen up the plays, providing the general public with translations that are "performable, faithful to the texture . . . and the moment on stage, rather than the individual sequence of adjectives." Yet even if you accept these versions as interpretations rather than straight translations, as Slavitt invites us to do (ignoring the fact that the Penn versions are presented as "translations" that are "loyal to the Greek originals" and will become "the standard for decades to come"), this new series is a disappointment thus far. Six of the projected 12 volumes are now available, and they are extremely uneven, providing neither the intellectual satisfactions of the original nor, it must be said, the compensatory pleasures of really distinguished English poetry. Indeed, the strained and unconvincing efforts at novelty that are this series' distinguishing characteristic suggest that the real goal here was, in the words of the press's director, Eric Halpern, "to create a publishing event."
news  org:rec  the-classics  literature  art  classic  language  foreign-lang  mediterranean  poetry  review  critique  canon 
october 2017 by nhaliday
[1709.06560] Deep Reinforcement Learning that Matters
https://twitter.com/WAWilsonIV/status/912505885565452288
I’ve been experimenting w/ various kinds of value function approaches to RL lately, and it’s striking how primitive and bad things seem to be
At first I thought it was just that my code sucks, but then I played with the OpenAI baselines and nope, it’s the children that are wrong.
And now, what comes across my desk but this fantastic paper: https://arxiv.org/abs/1709.06560 How long until the replication crisis hits AI?

https://twitter.com/WAWilsonIV/status/911318326504153088
Seriously I’m not blown away by the PhDs’ records over the last 30 years. I bet you’d get better payoff funding eccentrics and amateurs.
There are essentially zero fundamentally new ideas in AI, the papers are all grotesquely hyperparameter tuned, nobody knows why it works.

Deep Reinforcement Learning Doesn't Work Yet: https://www.alexirpan.com/2018/02/14/rl-hard.html
Once, on Facebook, I made the following claim.

Whenever someone asks me if reinforcement learning can solve their problem, I tell them it can’t. I think this is right at least 70% of the time.
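
[ed.: A minimal toy sketch of the evaluation protocol the paper argues for: run the same agent over several random seeds and more than one hyperparameter setting, then report the per-seed spread rather than a single curve. This is tabular Q-learning on a made-up corridor task, not the paper's code or benchmarks; the task, constants, and function names are all assumptions for illustration only.]

# Toy sketch (not from the paper): tabular Q-learning on a tiny corridor,
# re-run over several seeds and two learning rates, reporting per-seed results.
import random

N_STATES = 8        # 1-D corridor: start at state 0, goal at state 7
N_ACTIONS = 2       # 0 = step left, 1 = step right
EPISODES = 60
MAX_STEPS = 50

def greedy(qs, rng):
    # break ties randomly so the initial all-zero Q-table gives a random walk
    best = max(qs)
    return rng.choice([i for i, v in enumerate(qs) if v == best])

def final_return(seed, alpha, eps=0.1, gamma=0.99):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]
    returns = []
    for _ in range(EPISODES):
        s, total = 0, 0.0
        for _ in range(MAX_STEPS):
            # epsilon-greedy action selection
            a = rng.randrange(N_ACTIONS) if rng.random() < eps else greedy(q[s], rng)
            s2 = max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)
            r = 1.0 if s2 == N_STATES - 1 else -0.01   # small step cost, reward at the goal
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            total += r
            s = s2
            if s == N_STATES - 1:
                break
        returns.append(total)
    return sum(returns[-10:]) / 10                     # "final performance" = mean of last 10 episodes

if __name__ == "__main__":
    for alpha in (0.5, 0.05):
        per_seed = [round(final_return(seed, alpha), 2) for seed in range(5)]
        print(f"alpha={alpha}: final return per seed = {per_seed}")

Each printed line is one learning rate: the spread of the five numbers within a line is the seed sensitivity, and the gap between the two lines is the hyperparameter sensitivity, which is the kind of variation the paper recommends reporting explicitly.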
papers  preprint  machine-learning  acm  frontier  speedometer  deep-learning  realness  replication  state-of-art  survey  reinforcement  multi  twitter  social  discussion  techtariat  ai  nibble  org:mat  unaffiliated  ratty  acmtariat  liner-notes  critique  sample-complexity  cost-benefit  todo 
september 2017 by nhaliday
Which industries are the most liberal and most conservative?
How Democratic or Republican is your job? This tool tells you: https://www.washingtonpost.com/news/the-fix/wp/2015/06/03/how-democratic-or-republican-is-your-job-this-tool-tells-you/?utm_term=.e19707abd9f1

http://verdantlabs.com/politics_of_professions/index.html

What you do and how you vote: http://www.pleeps.org/2017/01/07/what-you-do-and-how-you-vote/

trending blue across white-collar professions:
https://www.nytimes.com/2019/09/18/opinion/trump-fundraising-donors.html
https://twitter.com/adam_bonica/status/1174536380329803776
https://archive.is/r7YB6

https://twitter.com/whyvert/status/1174735746088996864
https://archive.is/Cwrih
This is partly because the meaning of left and right changed during that period. Left used to be about protecting workers. Now it's mainly about increasing the power of the elite class over the working class - thus their increased support.
--
yes, it is a different kind of left now

academia:
https://en.wikipedia.org/wiki/Political_views_of_American_academics

The Legal Academy's Ideological Uniformity: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2953087

Homogenous: The Political Affiliations of Elite Liberal Arts College Faculty: https://sci-hub.tw/10.1007/s12129-018-9700-x
includes crosstab by discipline

https://www.conservativecriminology.com/uploads/5/6/1/7/56173731/lounsbery_9-25.pdf#page=28
Neil Gross, Solon Simmons
THE SOCIAL AND POLITICAL VIEWS OF AMERICAN PROFESSORS

another crosstab
description of data sampling on page 21, meant to be representative of all undergraduate degree-granting institutions

Computer science 32.3 58.1 9.7

It’s finally out–The big review paper on the lack of political diversity in social psychology: https://heterodoxacademy.org/2015/09/14/bbs-paper-on-lack-of-political-diversity/
https://heterodoxacademy.org/2015/09/21/political-diversity-response-to-33-critiques/
http://righteousmind.com/viewpoint-diversity/
http://www.nationalaffairs.com/publications/detail/real-academic-diversity
http://quillette.com/2017/07/06/social-sciences-undergoing-purity-spiral/
What’s interesting about Haidt’s alternative interpretation of the liberal progress narrative is that he mentions two elements central to the narrative—private property and nations. And what has happened to a large extent is that as the failures of communism have become increasingly apparent many on the left—including social scientists—have shifted their activism away from opposing private property and towards other aspects, for example globalism.

But how do we know a similarly disastrous thing is not going to happen with globalism as happened with communism? What if some form of national and ethnic affiliation is a deep-seated part of human nature, and that trying to forcefully suppress it will eventually lead to a disastrous counter-reaction? What if nations don’t create conflict, but alleviate it? What if a decentralised structure is the best way for human society to function?
news  org:lite  data  study  summary  politics  polisci  ideology  correlation  economics  finance  law  academia  media  tech  sv  heavy-industry  energy-resources  biophysical-econ  agriculture  pharma  things  visualization  crosstab  phalanges  housing  scale  money  elite  charity  class-warfare  coalitions  demographics  business  distribution  polarization  database  multi  org:rec  dynamic  tools  calculator  list  top-n  labor  management  leadership  government  hari-seldon  gnosis-logos  career  planning  jobs  dirty-hands  long-term  scitariat  haidt  org:ngo  commentary  higher-ed  psychology  social-psych  social-science  westminster  institutions  roots  chart  discrimination  debate  critique  biases  diversity  homo-hetero  replication  org:mag  letters  org:popup  ethnocentrism  error  communism  universalism-particularism  whiggish-hegelian  us-them  tribalism  wonkish  org:data  analysis  general-survey  exploratory  stylized-facts  elections  race  education  twitter  social  backup  journos-pundits  gnon  aphorism  impetus  interests  self-interest 
september 2017 by nhaliday
The Complacent Intellectual Class | David V. Johnson
I WOULD LIKE TO COIN A PHRASE, the complacent intellectual class, to describe the overwhelming number of pundits, thought leaders, and policy wonks who accept, welcome, or even enforce slovenly scholarship. These people might, in the abstract, like research that maintains the highest standards; they might even consider themselves academics or bona fide researchers, when in fact they have lost the capacity of maintaining even the most basic standards of rigor.

I am motivated to do so after reading Tyler Cowen’s new book The Complacent Class: The Self-Defeating Quest for the American Dream. I propose the term with some trepidation. Cowen—a George Mason University economist, libertarian theorist, and “legendary blogger” (to quote the book’s inset)—is often a smart commentator who puts his finger on a lot of interesting social phenomena, introduces novel ideas, and proves worth reading from time to time.

But books are different from blog posts and op-eds. And this book fails so glaringly that it makes me despair for this country’s literary culture and intellectual life in general. So let me use Cowen’s latest venture to illustrate what we should all demand from the work of our intellectual class, lest our nation continue to vegetate in the pretend-thinking of #AspenIdeas pseudo-academia.
news  org:lite  books  review  critique  rant  track-record  econotariat  marginal-rev  economics  usa  zeitgeist  trends  flexibility  flux-stasis  stagnation  malaise  class  realness  left-wing 
september 2017 by nhaliday
My Retort to the Adam Spliff Institute's Latest Effusion of Drivel - Mail Online - Peter Hitchens blog
It’s time to rename the ‘Adam Smith Institute’, now captured by drug legalisers. Something tells me the Scotch moralist wouldn’t have thought a stupefied and acquiescent population the best basis for civilisation, wealth or morality.

So I suggest calling this screeching nest of mentally pubescent drug zealots

The Adam Spliff Institute

I am moved to this suggestion by their latest attempt to debate the drug issue. I say ‘attempt’ because they really have very little idea of how to argue.
news  org:lite  org:anglo  lol  journos-pundits  right-wing  albion  org:ngo  commentary  critique  rant  rhetoric  debate  politics  ideology  civil-liberty  randy-ayndy  drugs  tradition  econotariat  aphorism  paleocon 
august 2017 by nhaliday
It made their brown eyes blue.. | West Hunter
Assuming population continuity, the selective advantage of the alleles they examined must have been very high. In order to see if there had been population continuity, they looked at the mtDNA frequencies of the ancient populations and compared them with mtDNA frequencies of modern populations in the same areas. Since they’re different, but not wildly different, they conclude that there has been population continuity, which was their null assumption.

That null assumption might have been reasonable, if someone had burned every history book ever written, while at the same time suppressing all the ancient DNA evidence.

Since that has not yet happened, I think their assumption is downright embarrassing. People have been moving in and out of this area for all of recorded history (as Razib Khan has also pointed out): Cimmerians, Scythians, Goths, Khazars, Kievan Rus, the Golden Horde, eventually Russians.

There is no logical reason for geneticists to ignore information outside their field. Ignorance is no excuse. I could say the same for every other discipline. Cross the streams – it would be good.
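
[ed.: A back-of-the-envelope sketch (mine, not from the post or the study) of why “different, but not wildly different” frequencies are weak evidence for continuity: with replacement fraction f, each haplogroup frequency moves by exactly f times the ancient-vs-migrant difference. The haplogroup labels and numbers below are hypothetical.]

# Back-of-the-envelope sketch (not from the post or the study): how far can
# mtDNA haplogroup frequencies move under large-scale population replacement?
# mixed = (1 - f) * ancient + f * migrant, for replacement fraction f.

ancient = {"H": 0.40, "U": 0.30, "J": 0.20, "T": 0.10}   # hypothetical ancient frequencies
migrant = {"H": 0.25, "U": 0.20, "J": 0.30, "T": 0.25}   # hypothetical incoming population

for f in (0.0, 0.25, 0.5):
    mixed = {h: round((1 - f) * ancient[h] + f * migrant[h], 3) for h in ancient}
    shift = max(abs(mixed[h] - ancient[h]) for h in ancient)
    print(f"replacement fraction {f:.2f}: {mixed}  (max per-haplogroup shift = {shift:.3f})")

In this toy setup even 50% replacement moves no haplogroup by more than about 7 percentage points, so moderate similarity between ancient and modern frequencies is compatible with substantial turnover.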
west-hunter  scitariat  commentary  study  summary  critique  rant  sapiens  genetics  genomics  aDNA  migration  gene-flow  europe  eastern-europe  gavisti  conquest-empire  recent-selection  science  interdisciplinary  knowledge  gnxp 
august 2017 by nhaliday