
nhaliday : discussion   612

As We May Think - Wikipedia
"As We May Think" is a 1945 essay by Vannevar Bush which has been described as visionary and influential, anticipating many aspects of information society. It was first published in The Atlantic in July 1945 and republished in an abridged version in September 1945—before and after the atomic bombings of Hiroshima and Nagasaki. Bush expresses his concern for the direction of scientific efforts toward destruction, rather than understanding, and explicates a desire for a sort of collective memory machine with his concept of the memex that would make knowledge more accessible, believing that it would help fix these problems. Through this machine, Bush hoped to transform an information explosion into a knowledge explosion.[1]

https://twitter.com/michael_nielsen/status/979193577229004800
https://archive.is/FrF8Q
https://archive.is/19hHT
https://archive.is/G7yLl
https://archive.is/wFbbj
A few notes on Vannevar Bush's amazing essay, "As We May Think", from the 1945(!) @TheAtlantic :

https://twitter.com/andy_matuschak/status/1147928384510390277
https://archive.is/tm6fB
https://archive.is/BIok9
When I first read As We May Think* as a teenager, I was astonished by how much it predicted of the computer age in 1945—but recently I’ve been feeling wistful about some pieces it predicts which never came to pass. [thread]

* http://ceasarbautista.com/posts/memex_meetup_2.html
wiki  org:mag  essay  big-peeps  history  mostly-modern  classic  ideas  worrydream  exocortex  thinking  network-structure  graphs  internet  structure  notetaking  design  skunkworks  multi  techtariat  twitter  social  discussion  reflection  backup  speedometer  software  org:junk  michael-nielsen 
9 weeks ago by nhaliday
Ask HN: Getting into NLP in 2018? | Hacker News
syllogism (spaCy author):
I think it's probably a bad strategy to try to be the "NLP guy" to potential employers. You'd do much better off being a software engineer on a project with people with ML or NLP expertise.

NLP projects fail a lot. If you line up a job as a company's first NLP person, you'll probably be setting yourself up for failure. You'll get handed an idea that can't work, you won't know enough about how to push back to change it into something that might, etc. After the project fails, you might get a chance to fail at a second one, but maybe not a third. This isn't a great way to move into any new field.

I think a cunning plan would be to angle to be the person who "productionises" models.
...
--
...

Basically, don't just work on having more powerful solutions. Make sure you've tried hard to have easier problems as well --- that part tends to be higher leverage.

https://news.ycombinator.com/item?id=14008752
https://news.ycombinator.com/item?id=12916498
https://algorithmia.com/blog/introduction-natural-language-processing-nlp
hn  q-n-a  discussion  tech  programming  machine-learning  nlp  strategy  career  planning  human-capital  init  advice  books  recommendations  course  unit  links  automation  project  examples  applications  multi  mooc  lectures  video  data-science  org:com  roadmap  summary  error  applicability-prereqs  ends-means  telos-atelos  cost-benefit 
november 2019 by nhaliday
REST is the new SOAP | Hacker News
hn  commentary  techtariat  org:ngo  programming  engineering  web  client-server  networking  rant  rhetoric  contrarianism  idk  org:med  best-practices  working-stiff  api  models  protocol-metadata  internet  state  structure  chart  multi  q-n-a  discussion  expert-experience  track-record  reflection  cost-benefit  design  system-design  comparison  code-organizing  flux-stasis  interface-compatibility  trends  gotchas  stackex  state-of-art  distributed  concurrency  abstraction  concept  conceptual-vocab  python  ubiquity  list  top-n  duplication  synchrony  performance  caching 
november 2019 by nhaliday
Ask HN: What's a promising area to work on? | Hacker News
hn  discussion  q-n-a  ideas  impact  trends  the-bones  speedometer  technology  applications  tech  cs  programming  list  top-n  recommendations  lens  machine-learning  deep-learning  security  privacy  crypto  software  hardware  cloud  biotech  CRISPR  bioinformatics  biohacking  blockchain  cryptocurrency  crypto-anarchy  healthcare  graphics  SIGGRAPH  vr  automation  universalism-particularism  expert-experience  reddit  social  arbitrage  supply-demand  ubiquity  cost-benefit  compensation  chart  career  planning  strategy  long-term  advice  sub-super  commentary  rhetoric  org:com  techtariat  human-capital  prioritizing  tech-infrastructure  working-stiff  data-science 
november 2019 by nhaliday
Advantages and disadvantages of building a single page web application - Software Engineering Stack Exchange
Advantages
- All data has to be available via some sort of API - this is a big advantage for my use case as I want to have an API to my application anyway. Right now about 60-70% of my calls to get/update data are done through a REST API. Doing a single page application will allow me to better test my REST API since the application itself will use it. It also means that as the application grows, the API itself will grow since that is what the application uses; no need to maintain the API as an add-on to the application.
- More responsive application - since all data loaded after the initial page is kept to a minimum and transmitted in a compact format (like JSON), data requests should generally be faster, and the server will do slightly less processing.

Disadvantages
- Duplication of code - for example, model code. I am going to have to create models both on the server side (PHP in this case) and the client side in Javascript.
- Business logic in Javascript - I can't give any concrete examples on why this would be bad but it just doesn't feel right to me having business logic in Javascript that anyone can read.
- Javascript memory leaks - since the page never reloads, Javascript memory leaks can happen, and I would not even know where to begin to debug them.

--

Disadvantages I often see with Single Page Web Applications:
- Inability to link to a specific part of the site, there's often only 1 entry point.
- Dysfunctional back and forward buttons.
- The use of tabs is limited or non-existent.
(especially mobile:)
- Take very long to load.
- Don't function at all.
- Can't reload a page, a sudden loss of network takes you back to the start of the site.

This answer is outdated. Most single-page application frameworks have a way to deal with the issues above – Luis May 27 '14 at 1:41
@Luis while the technology is there, too often it isn't used. – Pieter B Jun 12 '14 at 6:53

https://softwareengineering.stackexchange.com/questions/201838/building-a-web-application-that-is-almost-completely-rendered-by-javascript-whi

https://softwareengineering.stackexchange.com/questions/143194/what-advantages-are-conferred-by-using-server-side-page-rendering
Server-side HTML rendering:
- Fastest browser rendering
- Page caching is possible as a quick-and-dirty performance boost
- For "standard" apps, many UI features are pre-built
- Sometimes considered more stable because components are usually subject to compile-time validation
- Leans on backend expertise
- Sometimes faster to develop*
*When UI requirements fit the framework well.

Client-side HTML rendering:
- Lower bandwidth usage
- Slower initial page render. May not even be noticeable in modern desktop browsers. If you need to support IE6-7, or many mobile browsers (mobile webkit is not bad) you may encounter bottlenecks.
- Building API-first means the client can just as easily be a proprietary app, thin client, another web service, etc.
- Leans on JS expertise
- Sometimes faster to develop**
**When the UI is largely custom, with more interesting interactions. Also, I find coding in the browser with interpreted code noticeably speedier than waiting for compiles and server restarts.
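
[ed.: a minimal sketch, assuming a hypothetical Flask app (routes and toy data made up), of the same data served both ways discussed above: one server-rendered page, one JSON endpoint for a single-page client to fetch and render itself.]

from flask import Flask, jsonify, render_template_string

app = Flask(__name__)

ITEMS = [{"id": 1, "name": "alpha"}, {"id": 2, "name": "beta"}]  # toy data

# server-side rendering: the browser gets finished HTML; page caching is easy
@app.route("/items")
def items_page():
    template = "<ul>{% for it in items %}<li>{{ it.name }}</li>{% endfor %}</ul>"
    return render_template_string(template, items=ITEMS)

# API-first / client-side rendering: an SPA (or any other client) fetches JSON
# and builds the DOM itself; the same endpoint doubles as the app's public API
@app.route("/api/items")
def items_api():
    return jsonify(ITEMS)

if __name__ == "__main__":
    app.run(debug=True)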

https://softwareengineering.stackexchange.com/questions/237537/progressive-enhancement-vs-single-page-apps

https://stackoverflow.com/questions/21862054/single-page-application-advantages-and-disadvantages
=== ADVANTAGES ===
1. SPA is extremely good for very responsive sites.
2. With an SPA we don't need extra queries to the server to download whole pages.
3. Maybe there are other advantages? I haven't heard of any others.

=== DISADVANTAGES ===
1. Client must enable javascript.
2. Only one entry point to the site.
3. Security.

https://softwareengineering.stackexchange.com/questions/287819/should-you-write-your-back-end-as-an-api
focused on .NET

https://softwareengineering.stackexchange.com/questions/337467/is-it-normal-design-to-completely-decouple-backend-and-frontend-web-applications
A SPA comes with a few issues associated with it. Here are just a few that pop in my mind now:
- it's mostly JavaScript. One error in a section of your application might prevent other sections of the application from working because of that JavaScript error.
- CORS.
- SEO.
- separate front-end application means separate projects, deployment pipelines, extra tooling, etc;
- security is harder to do when all the code is on the client;

- completely interact in the front-end with the user and only load data as needed from the server. So better responsiveness and user experience;
- depending on the application, some processing done on the client means you spare the server of those computations.
- have a better flexibility in evolving the back-end and front-end (you can do it separately);
- if your back-end is essentially an API, you can have other clients in front of it like native Android/iPhone applications;
- the separation might make it easier for front-end developers to do CSS/HTML without needing to have a server application running on their machine.

Create your own dysfunctional single-page app: https://news.ycombinator.com/item?id=18341993
I think there are three broadly assumed user benefits of single-page apps:
1. Improved user experience.
2. Improved perceived performance.
3. It’s still the web.

5 mistakes to create a dysfunctional single-page app
Mistake 1: Under-estimate long-term development and maintenance costs
Mistake 2: Use the single-page app approach unilaterally
Mistake 3: Under-invest in front end capability
Mistake 4: Use naïve dev practices
Mistake 5: Surf the waves of framework hype

The disadvantages of single page applications: https://news.ycombinator.com/item?id=9879685
You probably don't need a single-page app: https://news.ycombinator.com/item?id=19184496
https://news.ycombinator.com/item?id=20384738
MPA advantages:
- Stateless requests
- The browser knows how to deal with a traditional architecture
- Fewer, more mature tools
- SEO for free

When to go for the single page app:
- Core functionality is real-time (e.g. Slack)
- Rich UI interactions are core to the product (e.g. Trello)
- Lots of state shared between screens (e.g. Spotify)

Hybrid solutions
...
Github uses this hybrid approach.
...

Ask HN: Is it ok to use traditional server-side rendering these days?: https://news.ycombinator.com/item?id=13212465

https://www.reddit.com/r/webdev/comments/cp9vb8/are_people_still_doing_ssr/
https://www.reddit.com/r/webdev/comments/93n60h/best_javascript_modern_approach_to_multi_page/
https://www.reddit.com/r/webdev/comments/aax4k5/do_you_develop_solely_using_spa_these_days/
The SEO issues with SPAs are a persistent concern you hear about a lot, yet nobody ever quantifies the issues. That is because search engines keep the operation of their crawler bots and indexing secret. I have read into it some, and it seems the problem used to exist, somewhat, but is more or less gone now. Bots can deal with SPAs fine.
--
I try to avoid building a SPA nowadays if possible. Not because of SEO (there are now server-side solutions to help with that), but because a SPA increases the complexity of the code base by a magnitude. State management with Redux... Async this and that... URL routing... And don't forget to manage page history.

How about just render pages with templates and be done?

If I need a highly dynamic UI for a particular feature, then I'd probably build an embeddable JS widget for it.
q-n-a  stackex  programming  engineering  tradeoffs  system-design  design  web  frontend  javascript  cost-benefit  analysis  security  state  performance  traces  measurement  intricacy  code-organizing  applicability-prereqs  multi  comparison  smoothness  shift  critique  techtariat  chart  ui  coupling-cohesion  interface-compatibility  hn  commentary  best-practices  discussion  trends  client-server  api  composition-decomposition  cycles  frameworks  ecosystem  degrees-of-freedom  dotnet  working-stiff  reddit  social  project-management 
october 2019 by nhaliday
Ask HN: How do you manage your one-man project? | Hacker News
The main thing is to not fall into the "productivity porn" trap of trying to find the best tool instead of actually getting stuff done - when something simple is more than enough.
hn  discussion  productivity  workflow  exocortex  management  prioritizing  parsimony  recommendations  software  desktop  app  webapp  notetaking  discipline  q-n-a  stay-organized  project-management 
october 2019 by nhaliday
Ask HN: Learning modern web design and CSS | Hacker News
Ask HN: Best way to learn HTML and CSS for web design?: https://news.ycombinator.com/item?id=11048409
Ask HN: How to learn design as a hacker?: https://news.ycombinator.com/item?id=8182084

Ask HN: How to learn front-end beyond the basics?: https://news.ycombinator.com/item?id=19468043
Ask HN: What is the best JavaScript stack for a beginner to learn?: https://news.ycombinator.com/item?id=8780385
Free resources for learning full-stack web development: https://news.ycombinator.com/item?id=13890114

Ask HN: What is essential reading for learning modern web development?: https://news.ycombinator.com/item?id=14888251
Ask HN: A Syllabus for Modern Web Development?: https://news.ycombinator.com/item?id=2184645

Ask HN: Modern day web development for someone who last did it 15 years ago: https://news.ycombinator.com/item?id=20656411
hn  discussion  design  form-design  frontend  web  tutorial  links  recommendations  init  pareto  efficiency  minimum-viable  move-fast-(and-break-things)  advice  roadmap  multi  hacker  games  puzzles  learning  guide  dynamic  retention  DSL  working-stiff  q-n-a  javascript  frameworks  ecosystem  libraries  client-server  hci  ux  books  chart 
october 2019 by nhaliday
Ask HN: Favorite note-taking software? | Hacker News
Ask HN: What is your ideal note-taking software and/or hardware?: https://news.ycombinator.com/item?id=13221158

my wishlist as of 2019:
- web + desktop macOS + mobile iOS (at least viewing on the last but ideally also editing)
- sync across all those
- open-source data format that's easy to manipulate for scripting purposes
- flexible organization: mostly tree hierarchical (subsuming linear/unorganized) but with the option for directed (acyclic) graph (possibly a second layer of structure/linking); see the sketch after this list
- can store plain text, LaTeX, diagrams, sketches, and raster/vector images (video prob not necessary except as links to elsewhere)
- full-text search
- somehow digest/import data from Pinboard, Workflowy, Papers 3/Bookends, Skim, and iBooks/e-readers (esp. Kobo), ideally absorbing most of their functionality
- so, eg, track notes/annotations side-by-side w/ original PDF/DjVu/ePub documents (to replace Papers3/Bookends/Skim), and maybe web pages too (to replace Pinboard)
- OCR of handwritten notes (how to handle equations/diagrams?)
- various forms of NLP analysis of everything (topic models, clustering, etc)
- maybe version control (less important than export)
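
[ed.: a minimal sketch of the organization model in the wishlist above: a primary tree hierarchy plus an optional second layer of cross-links. Names are illustrative, not any particular app's data model.]

from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Note:
    title: str
    body: str = ""
    children: list[Note] = field(default_factory=list)  # primary tree structure
    links: list[Note] = field(default_factory=list)     # cross-links: the extra graph layer

    def add_child(self, child: Note) -> Note:
        self.children.append(child)
        return child

root = Note("index")
math = root.add_child(Note("math"))
cs = root.add_child(Note("cs"))
provers = cs.add_child(Note("theorem provers"))
provers.links.append(math)  # link outside the tree: the second layer of structure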

candidates?:
- Evernote prob ruled out due to heavy use of proprietary data formats (unless I can find some way to export with tolerably clean output)
- Workflowy/Dynalist are good but only cover a subset of functionality I want
- org-mode doesn't interact w/ mobile well (and I haven't evaluated it in detail otherwise)
- TiddlyWiki/Zim are in the running, but not sure about mobile
- idk about vimwiki but I'm not that wedded to vim and it seems less widely used than org-mode/TiddlyWiki/Zim so prob pass on that
- Quiver/Joplin/Inkdrop look similar and cover a lot of bases, TODO: evaluate more
- Trilium looks especially promising, tho read-only mobile and for macOS desktop look at this: https://github.com/zadam/trilium/issues/511
- RocketBook is interesting scanning/OCR solution but prob not sufficient due to proprietary data format
- TODO: many more candidates, eg, TreeSheets, Gingko, OneNote (macOS?...), Notion (proprietary data format...), Zotero, Nodebook (https://nodebook.io/landing), Polar (https://getpolarized.io), Roam (looks very promising)

Ask HN: What do you use for you personal note taking activity?: https://news.ycombinator.com/item?id=15736102

Ask HN: What are your note-taking techniques?: https://news.ycombinator.com/item?id=9976751

Ask HN: How do you take notes (useful note-taking strategies)?: https://news.ycombinator.com/item?id=13064215

Ask HN: How to get better at taking notes?: https://news.ycombinator.com/item?id=21419478

Ask HN: How do you keep your notes organized?: https://news.ycombinator.com/item?id=21810400

Ask HN: How did you build up your personal knowledge base?: https://news.ycombinator.com/item?id=21332957
nice comment from math guy on structure and difference between math and CS: https://news.ycombinator.com/item?id=21338628
useful comment collating related discussions: https://news.ycombinator.com/item?id=21333383
highlights:
Designing a Personal Knowledge base: https://news.ycombinator.com/item?id=8270759
Ask HN: How to organize personal knowledge?: https://news.ycombinator.com/item?id=17892731
Do you use a personal 'knowledge base'?: https://news.ycombinator.com/item?id=21108527
Ask HN: How do you share/organize knowledge at work and life?: https://news.ycombinator.com/item?id=21310030
Managing my personal knowledge base: https://news.ycombinator.com/item?id=22000791
The sad state of personal data and infrastructure: https://beepb00p.xyz/sad-infra.html
Building personal search infrastructure for your knowledge and code: https://beepb00p.xyz/pkm-search.html

How to annotate literally everything: https://beepb00p.xyz/annotating.html
Ask HN: How do you organize document digests / personal knowledge?: https://news.ycombinator.com/item?id=21642289
Ask HN: Good solution for storing notes/excerpts from books?: https://news.ycombinator.com/item?id=21920143
Ask HN: What's your cross-platform pdf / ePub reading workflow?: https://news.ycombinator.com/item?id=22170395
some related stuff in the reddit links at the bottom of this pin

https://beepb00p.xyz/grasp.html
How to capture information from your browser and stay sane

Ask HN: Best solutions for keeping a personal log?: https://news.ycombinator.com/item?id=21906650

other stuff:
plain text: https://news.ycombinator.com/item?id=21685660

https://www.getdnote.com/blog/how-i-built-personal-knowledge-base-for-myself/
Tiago Forte: https://www.buildingasecondbrain.com

hn search: https://hn.algolia.com/?query=notetaking&type=story

Slant comparison commentary: https://news.ycombinator.com/item?id=7011281

good comparison of options here in comments here (and Trilium itself looks good): https://news.ycombinator.com/item?id=18840990

https://en.wikipedia.org/wiki/Comparison_of_note-taking_software

stuff from Andy Matuschak and Michael Nielsen on general note-taking:
https://twitter.com/andy_matuschak/status/1202663202997170176
https://archive.is/1i9ep
Software interfaces undervalue peripheral vision! (a thread)
https://twitter.com/andy_matuschak/status/1199378287555829760
https://archive.is/J06UB
This morning I implemented PageRank to sort backlinks in my prototype note system. Mixed results!
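[ed.: a toy PageRank over a made-up note-link graph, in the spirit of sorting backlinks as in the tweet above; the damping factor and iteration count are the usual defaults, not anything specified in the thread.]

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping node -> list of nodes it links to."""
    nodes = set(links) | {t for ts in links.values() for t in ts}
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new_rank = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for src, targets in links.items():
            if targets:
                share = damping * rank[src] / len(targets)
                for t in targets:
                    new_rank[t] += share
            else:  # dangling node: spread its rank evenly
                for n in nodes:
                    new_rank[n] += damping * rank[src] / len(nodes)
        rank = new_rank
    return rank

notes = {"memex": ["hypertext", "bush1945"],
         "hypertext": ["bush1945"],
         "bush1945": [],
         "roam": ["hypertext", "bush1945"]}
ranks = pagerank(notes)
print(sorted(ranks, key=ranks.get, reverse=True))  # most-linked-to notes first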
https://twitter.com/andy_matuschak/status/1211487900505792512
https://archive.is/BOiCG
https://archive.is/4zB37
One way to dream up post-book media to make reading more effective and meaningful is to systematize "expert" practices (e.g. How to Read a Book), so more people can do them, more reliably and more cheaply. But… the most erudite people I know don't actually do those things!

the memex essay and comments from various people including Andy on it: https://pinboard.in/u:nhaliday/b:1cddf69c0b31

some more stuff specific to Roam below, and cf "Why books don't work": https://pinboard.in/u:nhaliday/b:b4d4461f6378

wikis:
https://www.slant.co/versus/5116/8768/~tiddlywiki_vs_zim
https://www.wikimatrix.org/compare/tiddlywiki+zim
http://tiddlymap.org/
https://www.zim-wiki.org/manual/Plugins/BackLinks_Pane.html
https://zim-wiki.org/manual/Plugins/Link_Map.html

apps:
Roam: https://news.ycombinator.com/item?id=21440289
https://www.reddit.com/r/RoamResearch/
https://twitter.com/hashtag/roamcult
https://twitter.com/search?q=RoamResearch%20fortelabs
https://twitter.com/search?q=from%3AQiaochuYuan%20RoamResearch&src=typd
https://twitter.com/vgr/status/1199391391803043840
https://archive.is/TJPQN
https://archive.is/CrNwZ
https://www.nateliason.com/blog/roam
https://twitter.com/andy_matuschak/status/1190102757430063106
https://archive.is/To30Q
https://archive.is/UrI1x
https://archive.is/Ww22V
Knowledge systems which display contextual backlinks to a node open up an interesting new behavior. You can bootstrap a new node extensionally (rather than intensionally) by simply linking to it from many other nodes—even before it has any content.
https://twitter.com/michael_nielsen/status/1220197017340612608
Curious: what are the most striking public @RoamResearch pages that you know? I'd like to see examples of people using it for interesting purposes, or in interesting ways.
https://acesounderglass.com/2019/10/24/epistemic-spot-check-the-fate-of-rome-round-2/
https://twitter.com/andy_matuschak/status/1206011493495513089
https://archive.is/xvaMh
If I weren't doing my own research on questions in knowledge systems (which necessitates tinkering with my own), and if I weren't allergic to doing serious work in webapps, I'd likely use Roam instead!
https://talk.dynalist.io/t/roam-research-new-web-based-outliner-that-supports-transclusion-wiki-features-thoughts/5911/16
http://forum.eastgate.com/t/roam-research-interesting-approach-to-note-taking/2713/10
interesting app: http://www.eastgate.com/Tinderbox/
https://www.theatlantic.com/notes/2016/09/labor-day-software-update-tinderbox-scrivener/498443/

intriguing but probably not appropriate for my needs: https://www.sophya.ai/

Inkdrop: https://news.ycombinator.com/item?id=20103589

Joplin: https://news.ycombinator.com/item?id=15815040
https://news.ycombinator.com/item?id=21555238

MindForgr: https://news.ycombinator.com/item?id=22088175
one comment links to this, mostly on Notion: https://tkainrad.dev/posts/managing-my-personal-knowledge-base/

https://wreeto.com/

Leo Editor (combines tree outlining w/ literate programming/scripting, I think?): https://news.ycombinator.com/item?id=17769892

Frame: https://news.ycombinator.com/item?id=18760079

https://www.reddit.com/r/TheMotte/comments/cb18sy/anyone_use_a_personal_wiki_software_to_catalog/
https://archive.is/xViTY
Notion: https://news.ycombinator.com/item?id=18904648
https://coda.io/welcome
https://news.ycombinator.com/item?id=15543181

accounting: https://news.ycombinator.com/item?id=19833881
Coda mentioned

https://www.reddit.com/r/slatestarcodex/comments/ap437v/modified_cornell_method_the_optimal_notetaking/
https://archive.is/e9oHu
https://www.reddit.com/r/slatestarcodex/comments/bt8a1r/im_about_to_start_a_one_month_journaling_test/
https://www.reddit.com/r/slatestarcodex/comments/9cot3m/question_how_do_you_guys_learn_things/
https://archive.is/HUH8V
https://www.reddit.com/r/slatestarcodex/comments/d7bvcp/how_to_read_a_book_for_understanding/
https://archive.is/VL2mi

Anki:
https://www.reddit.com/r/Anki/comments/as8i4t/use_anki_for_technical_books/
https://www.freecodecamp.org/news/how-anki-saved-my-engineering-career-293a90f70a73/
https://www.reddit.com/r/slatestarcodex/comments/ch24q9/anki_is_it_inferior_to_the_3x5_index_card_an/
https://archive.is/OaGc5
maybe not the best source for a review/advice

interesting comment(s) about tree outliners and spreadsheets: https://news.ycombinator.com/item?id=21170434
https://lightsheets.app/

tablet:
https://www.inkandswitch.com/muse-studio-for-ideas.html
https://www.inkandswitch.com/capstone-manuscript.html
https://news.ycombinator.com/item?id=20255457
hn  discussion  recommendations  software  tools  desktop  app  notetaking  exocortex  wkfly  wiki  productivity  multi  comparison  crosstab  properties  applicability-prereqs  nlp  info-foraging  chart  webapp  reference  q-n-a  retention  workflow  reddit  social  ratty  ssc  learning  studying  commentary  structure  thinking  network-structure  things  collaboration  ocr  trees  graphs  LaTeX  search  todo  project  money-for-time  synchrony  pinboard  state  duplication  worrydream  simplification-normalization  links  minimalism  design  neurons  ai-control  openai  miri-cfar  parsimony  intricacy  meta:reading  examples  prepping  new-religion  deep-materialism  techtariat  review  critique  mobile  integration-extension  interface-compatibility  api  twitter  backup  vgr  postrat  personal-finance  pragmatic  stay-organized  project-management  news  org:mag  epistemic  steel-man  explore-exploit  correlation  cost-benefit  convexity-curvature  michael-nielsen  hci  ux  oly  skunkworks  europe  germanic 
october 2019 by nhaliday
Software Testing Anti-patterns | Hacker News
I haven't read this but both the article and commentary/discussion look interesting from a glance

hmm: https://news.ycombinator.com/item?id=16896390
In small companies where there is no time to "waste" on tests, my view is that 80% of the problems can be caught with 20% of the work by writing integration tests that cover large areas of the application. Writing unit tests would be ideal, but time-consuming. For a web project, that would involve testing all pages for HTTP 200 (< 1 hour bash script that will catch most major bugs), automatically testing most interfaces to see if filling data and clicking "save" works. Of course, for very important/dangerous/complex algorithms in the code, unit tests are useful, but generally, that represents a very low fraction of a web application's code.
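
[ed.: a Python version of the kind of smoke test described above; the URL list is hypothetical and the requests library is assumed to be installed.]

import sys
import requests

PAGES = [
    "https://example.com/",
    "https://example.com/login",
    "https://example.com/dashboard",
]

def main() -> int:
    failures = []
    for url in PAGES:
        try:
            resp = requests.get(url, timeout=10)
            if resp.status_code != 200:
                failures.append(f"{url} -> HTTP {resp.status_code}")
        except requests.RequestException as exc:
            failures.append(f"{url} -> {exc}")
    for line in failures:
        print(line)
    return 1 if failures else 0  # nonzero exit fails the CI job

if __name__ == "__main__":
    sys.exit(main())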
hn  commentary  techtariat  discussion  programming  engineering  methodology  best-practices  checklists  thinking  correctness  api  interface-compatibility  jargon  list  metabuch  objektbuch  workflow  documentation  debugging  span-cover  checking  metrics  abstraction  within-without  characterization  error  move-fast-(and-break-things)  minimum-viable  efficiency  multi  poast  pareto  coarse-fine 
october 2019 by nhaliday
The Future of Mathematics? [video] | Hacker News
https://news.ycombinator.com/item?id=20909404
Kevin Buzzard (the Lean guy)

- general reflection on proof assistants/theorem provers
- Thomas Hales's Formal Abstracts project, etc.
- thinks that, of the available theorem provers, Lean is "[the only one currently available that may be capable of formalizing all of mathematics eventually]" (goes into more detail right at the end, eg, quotient types)
hn  commentary  discussion  video  talks  presentation  math  formal-methods  expert-experience  msr  frontier  state-of-art  proofs  rigor  education  higher-ed  optimism  prediction  lens  search  meta:research  speculation  exocortex  skunkworks  automation  research  math.NT  big-surf  software  parsimony  cost-benefit  intricacy  correctness  programming  pls  python  functional  haskell  heavyweights  research-program  review  reflection  multi  pdf  slides  oly  experiment  span-cover  git  vcs  teaching  impetus  academia  composition-decomposition  coupling-cohesion  database  trust  types  plt  lifts-projections  induction  critique  beauty  truth  elegance  aesthetics 
october 2019 by nhaliday
Carryover vs “Far Transfer” | West Hunter
It used to be thought that studying certain subjects (like Latin) made you better at learning others, or smarter generally – “They supple the mind, sir; they render it pliant and receptive.” This doesn’t appear to be the case, certainly not for Latin – although it seems to me that math can help you understand other subjects?

A different question: to what extent does being (some flavor of) crazy, or crazy about one subject, or being really painfully wrong about some subject, predict how likely you are to be wrong on other things? We know that someone can be strange, downright crazy, or utterly unsound on some topic and still do good mathematics… but that is not the same as saying that there is no statistical tendency for people on crazy-train A to be more likely to be wrong about subject B. What do the data suggest?
west-hunter  scitariat  discussion  reflection  learning  thinking  neurons  intelligence  generalization  math  abstraction  truth  prudence  correlation  psychology  cog-psych  education  quotes  aphorism  foreign-lang  mediterranean  the-classics  contiguity-proximity 
october 2019 by nhaliday
Computer latency: 1977-2017
If we look at overall results, the fastest machines are ancient. Newer machines are all over the place. Fancy gaming rigs with unusually high refresh-rate displays are almost competitive with machines from the late 70s and early 80s, but “normal” modern computers can’t compete with thirty to forty year old machines.

...

If we exclude the game boy color, which is a different class of device than the rest, all of the quickest devices are Apple phones or tablets. The next quickest device is the blackberry q10. Although we don’t have enough data to really tell why the blackberry q10 is unusually quick for a non-Apple device, one plausible guess is that it’s helped by having actual buttons, which are easier to implement with low latency than a touchscreen. The other two devices with actual buttons are the gameboy color and the kindle 4.

After that iphones and non-kindle button devices, we have a variety of Android devices of various ages. At the bottom, we have the ancient palm pilot 1000 followed by the kindles. The palm is hamstrung by a touchscreen and display created in an era with much slower touchscreen technology and the kindles use e-ink displays, which are much slower than the displays used on modern phones, so it’s not surprising to see those devices at the bottom.

...

Almost every computer and mobile device that people buy today is slower than common models of computers from the 70s and 80s. Low-latency gaming desktops and the ipad pro can get into the same range as quick machines from thirty to forty years ago, but most off-the-shelf devices aren’t even close.

If we had to pick one root cause of latency bloat, we might say that it’s because of “complexity”. Of course, we all know that complexity is bad. If you’ve been to a non-academic non-enterprise tech conference in the past decade, there’s a good chance that there was at least one talk on how complexity is the root of all evil and we should aspire to reduce complexity.

Unfortunately, it's a lot harder to remove complexity than to give a talk saying that we should remove complexity. A lot of the complexity buys us something, either directly or indirectly. When we looked at the input of a fancy modern keyboard vs. the apple 2 keyboard, we saw that using a relatively powerful and expensive general purpose processor to handle keyboard inputs can be slower than dedicated logic for the keyboard, which would both be simpler and cheaper. However, using the processor gives people the ability to easily customize the keyboard, and also pushes the problem of “programming” the keyboard from hardware into software, which reduces the cost of making the keyboard. The more expensive chip increases the manufacturing cost, but considering how much of the cost of these small-batch artisanal keyboards is the design cost, it seems like a net win to trade manufacturing cost for ease of programming.

...

If you want a reference to compare the kindle against, a moderately quick page turn in a physical book appears to be about 200 ms.

https://twitter.com/gravislizard/status/927593460642615296
almost everything on computers is perceptually slower than it was in 1983
https://archive.is/G3D5K
https://archive.is/vhDTL
https://archive.is/a3321
https://archive.is/imG7S

linux terminals: https://lwn.net/Articles/751763/
techtariat  dan-luu  performance  time  hardware  consumerism  objektbuch  data  history  reflection  critique  software  roots  tainter  engineering  nitty-gritty  ui  ux  hci  ios  mobile  apple  amazon  sequential  trends  increase-decrease  measure  analysis  measurement  os  systems  IEEE  intricacy  desktop  benchmarks  rant  carmack  system-design  degrees-of-freedom  keyboard  terminal  editors  links  input-output  networking  world  s:**  multi  twitter  social  discussion  tech  programming  web  internet  speed  backup  worrydream  interface  metal-to-virtual  latency-throughput  workflow  form-design  interface-compatibility  org:junk  linux 
july 2019 by nhaliday
Why is Google Translate so bad for Latin? A longish answer. : latin
hmm:
> All it does is correlate sequences of up to five consecutive words in texts that have been manually translated into two or more languages.
That sort of system ought to be perfect for a dead language, though. Dump all the Cicero, Livy, Lucretius, Vergil, and Oxford Latin Course into a database and we're good.

We're not exactly inundated with brand new Latin to translate.
--
> Dump all the Cicero, Livy, Lucretius, Vergil, and Oxford Latin Course into a database and we're good.
What makes you think that the Google folks haven't done so and used that to create the language models they use?
> That sort of system ought to be perfect for a dead language, though.
Perhaps. But it will be bad at translating novel English sentences to Latin.
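
[ed.: a toy illustration of the "five consecutive words" point: even in a modest corpus almost every 5-gram is unique, which is why a phrase-correlation system stays data-starved on the extant Latin corpus. The sample text is just a stand-in.]

from collections import Counter

text = ("arma virumque cano troiae qui primus ab oris "
        "italiam fato profugus laviniaque venit litora").split()

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

for n in range(1, 6):
    counts = Counter(ngrams(text, n))
    singletons = sum(1 for c in counts.values() if c == 1)
    print(f"{n}-grams: {len(counts)} distinct, {singletons} seen only once")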
foreign-lang  reddit  social  discussion  language  the-classics  literature  dataset  measurement  roots  traces  syntax  anglo  nlp  stackex  links  q-n-a  linguistics  lexical  deep-learning  sequential  hmm  project  arrows  generalization  state-of-art  apollonian-dionysian  machine-learning  google 
june 2019 by nhaliday
Which of Haskell and OCaml is more practical? For example, in which aspect will each play a key role? - Quora
- Tikhon Jelvis,

Haskell.

This is a question I'm particularly well-placed to answer because I've spent quite a bit of time with both Haskell and OCaml, seeing both in the real world (including working at Jane Street for a bit). I've also seen the languages in academic settings and know many people at startups using both languages. This gives me a good perspective on both languages, with a fairly similar amount of experience in the two (admittedly biased towards Haskell).

And so, based on my own experience rather than the languages' reputations, I can confidently say it's Haskell.

Parallelism and Concurrency

...

Libraries

...

Typeclasses vs Modules

...

In some sense, OCaml modules are better behaved and founded on a sounder theory than Haskell typeclasses, which have some serious drawbacks. However, the fact that typeclasses can be reliably inferred whereas modules have to be explicitly used all the time more than makes up for this. Moreover, extensions to the typeclass system enable much of the power provided by OCaml modules.

...

Of course, OCaml has some advantages of its own as well. It has a performance profile that's much easier to predict. The module system is awesome and often missed in Haskell. Polymorphic variants can be very useful for neatly representing certain situations, and don't have an obvious Haskell analog.

While both languages have a reasonable C FFI, OCaml's seems a bit simpler. It's hard for me to say this with any certainty because I've only used the OCaml FFI myself, but it was quite easy to use—a hard bar for Haskell's to clear. One really nice use of modules in OCaml is to pass around values directly from C as abstract types, which can help avoid extra marshalling/unmarshalling; that seemed very nice in OCaml.

However, overall, I still think Haskell is the more practical choice. Apart from the reasoning above, I simply have my own observations: my Haskell code tends to be clearer, simpler and shorter than my OCaml code. I'm also more productive in Haskell. Part of this is certainly a matter of having more Haskell experience, but the delta is limited especially as I'm working at my third OCaml company. (Of course, the first two were just internships.)

Both Haskell and OCaml are unequivocally superb options—miles ahead of any other languages I know. While I do prefer Haskell, I'd choose either one in a pinch.

--
I've looked at F# a bit, but it feels like it makes too many tradeoffs to be on .NET. You lose the module system, which is probably OCaml's best feature, in return for an unfortunate, nominally typed OOP layer.

I'm also not invested in .NET at all: if anything, I'd prefer to avoid it in favor of simplicity. I exclusively use Linux and, from the outside, Mono doesn't look as good as it could be. I'm also far more likely to interoperate with a C library than a .NET library.

If I had some additional reason to use .NET, I'd definitely go for F#, but right now I don't.

https://www.reddit.com/r/haskell/comments/3huexy/what_are_haskellers_critiques_of_f_and_ocaml/
https://www.reddit.com/r/haskell/comments/3huexy/what_are_haskellers_critiques_of_f_and_ocaml/cub5mmb/
Thinking about it now, it boils down to a single word: expressiveness. When I'm writing OCaml, I feel more constrained than when I'm writing Haskell. And that's important: unlike so many others, what first attracted me to Haskell was expressiveness, not safety. It's easier for me to write code that looks how I want it to look in Haskell. The upper bound on code quality is higher.

...

Perhaps it all boils down to OCaml and its community feeling more "worse is better" than Haskell, something I highly disfavor.

...

Laziness or, more strictly, non-strictness is big. A controversial start, perhaps, but I stand by it. Unlike some, I do not see non-strictness as a design mistake but as a leap in abstraction. Perhaps a leap before its time, but a leap nonetheless. Haskell lets me program without constantly keeping the code's order in my head. Sure, it's not perfect and sometimes performance issues jar the illusion, but they are the exception not the norm. Coming from imperative languages where order is omnipresent (I can't even imagine not thinking about execution order as I write an imperative program!) it's incredibly liberating, even accounting for the weird issues and jinks I'd never see in a strict language.

This is what I imagine life felt like with the first garbage collectors: they may have been slow and awkward, the abstraction might have leaked here and there, but, for all that, it was an incredible advance. You didn't have to constantly think about memory allocation any more. It took a lot of effort to get where we are now and garbage collectors still aren't perfect and don't fit everywhere, but it's hard to imagine the world without them. Non-strictness feels like it has the same potential, without anywhere near the work garbage collection saw put into it.

...

The other big thing that stands out are typeclasses. OCaml might catch up on this front with implicit modules or it might not (Scala implicits are, by many reports, awkward at best—ask Edward Kmett about it, not me) but, as it stands, not having them is a major shortcoming. Not having inference is a bigger deal than it seems: it makes all sorts of idioms we take for granted in Haskell awkward in OCaml which means that people simply don't use them. Haskell's typeclasses, for all their shortcomings (some of which I find rather annoying), are incredibly expressive.

In Haskell, it's trivial to create your own numeric type and operators work as expected. In OCaml, while you can write code that's polymorphic over numeric types, people simply don't. Why not? Because you'd have to explicitly convert your literals and because you'd have to explicitly open a module with your operators—good luck using multiple numeric types in a single block of code! This means that everyone uses the default types: (63/31-bit) ints and doubles. If that doesn't scream "worse is better", I don't know what does.

...

There's more. Haskell's effect management, brought up elsewhere in this thread, is a big boon. It makes changing things more comfortable and makes informal reasoning much easier. Haskell is the only language where I consistently leave code I visit better than I found it. Even if I hadn't worked on the project in years. My Haskell code has better longevity than my OCaml code, much less other languages.

http://blog.ezyang.com/2011/02/ocaml-gotchas/
One observation about purity and randomness: I think one of the things people frequently find annoying in Haskell is the fact that randomness involves mutation of state, and thus must be wrapped in a monad. This makes building probabilistic data structures a little clunkier, since you can no longer expose pure interfaces. OCaml is not pure, and as such you can query the random number generator whenever you want.

However, I think Haskell may get the last laugh in certain circumstances. In particular, if you are using a random number generator in order to generate random test cases for your code, you need to be able to reproduce a particular set of random tests. Usually, this is done by providing a seed which you can then feed back to the testing script, for deterministic behavior. But because OCaml's random number generator manipulates global state, it's very easy to accidentally break determinism by asking for a random number for something unrelated. You can work around it by manually bracketing the global state, but in Haskell, explicitly handling the randomness state means providing determinism is much more natural.
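
[ed.: a sketch of that determinism point in Python rather than OCaml/Haskell: a global RNG breaks reproducibility the moment an unrelated draw sneaks in, while an explicitly threaded generator keeps the seed-to-test-case mapping stable.]

import random

def random_case(rng: random.Random):
    return [rng.randint(0, 9) for _ in range(5)]

# fragile: global state, as in the OCaml situation described above
random.seed(42)
first = [random.randint(0, 9) for _ in range(5)]
random.seed(42)
random.random()  # some unrelated draw sneaks in...
second = [random.randint(0, 9) for _ in range(5)]
print(first == second)  # False: determinism broken

# robust: the RNG is an explicit value threaded through the code
print(random_case(random.Random(42)) == random_case(random.Random(42)))  # True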
q-n-a  qra  programming  pls  engineering  nitty-gritty  pragmatic  functional  haskell  ocaml-sml  dotnet  types  arrows  cost-benefit  tradeoffs  concurrency  libraries  performance  expert-experience  composition-decomposition  comparison  critique  multi  reddit  social  discussion  techtariat  reflection  review  random  data-structures  numerics  rand-approx  sublinear  syntax  volo-avolo  causation  scala  jvm  ecosystem  metal-to-virtual 
june 2019 by nhaliday
The End of the Editor Wars » Linux Magazine
Moreover, even if you assume a broad margin of error, the polls aren't even close. With all the various text editors available today, Vi and Vim continue to be the choice of over a third of users, while Emacs is well back in the pack, no longer a competitor for the most popular text editor.

https://www.quora.com/Are-there-more-Emacs-or-Vim-users
I believe Vim is actually more popular, but it's hard to find any real data on it. The best source I've seen is the annual StackOverflow developer survey where 15.2% of developers used Vim compared to a mere 3.2% for Emacs.

Oddly enough, the report noted that "Data scientists and machine learning developers are about 3 times more likely to use Emacs than any other type of developer," which is not necessarily what I would have expected.

[ed. NB: Vim still dominates overall.]

https://pinboard.in/u:nhaliday/b:6adc1b1ef4dc

Time To End The vi/Emacs Debate: https://cacm.acm.org/blogs/blog-cacm/226034-time-to-end-the-vi-emacs-debate/fulltext

Vim, Emacs and their forever war. Does it even matter any more?: https://blog.sourcerer.io/vim-emacs-and-their-forever-war-does-it-even-matter-any-more-697b1322d510
Like an episode of “Silicon Valley”, a discussion of Emacs vs. Vim used to have a polarizing effect that would guarantee a stimulating conversation, regardless of an engineer’s actual alignment. But nowadays, diehard Emacs and Vim users are getting much harder to find. Maybe I’m in the wrong orbit, but looking around today, I see that engineers are equally or even more likely to choose any one of a number of great (for any given definition of ‘great’) modern editors or IDEs such as Sublime Text, Visual Studio Code, Atom, IntelliJ (… or one of its siblings), Brackets, Visual Studio or Xcode, to name a few. It’s not surprising really — many top engineers weren’t even born when these editors were at version 1.0, and GUIs (for better or worse) hadn’t been invented.

...

… both forums have high traffic and up-to-the-minute comment and discussion threads. Some of the available statistics paint a reasonably healthy picture — Stackoverflow’s 2016 developer survey ranks Vim 4th out of 24 with 26.1% of respondents in the development environments category claiming to use it. Emacs came 15th with 5.2%. In combination, over 30% is, actually, quite impressive considering they’ve been around for several decades.

What’s odd, however, is that if you ask someone — say a random developer — to express a preference, the likelihood is that they will favor for one or the other even if they have used neither in anger. Maybe the meme has spread so widely that all responses are now predominantly ritualistic, and represent something more fundamental than peoples’ mere preference for an editor? There’s a rather obvious political hypothesis waiting to be made — that Emacs is the leftist, socialist, centralized state, while Vim represents the right and the free market, specialization and capitalism red in tooth and claw.

How is Emacs/Vim used in companies like Google, Facebook, or Quora? Are there any libraries or tools they share in public?: https://www.quora.com/How-is-Emacs-Vim-used-in-companies-like-Google-Facebook-or-Quora-Are-there-any-libraries-or-tools-they-share-in-public
In Google there's a fair amount of vim and emacs. I would say at least every other engineer uses one or another.

Among Software Engineers, emacs seems to be more popular, about 2:1. Among Site Reliability Engineers, vim is more popular, about 9:1.
--
People use both at Facebook, with (in my opinion) slightly better tooling for Emacs than Vim. We share a master.emacs and master.vimrc file, which contains the bare essentials (like syntactic highlighting for the Hack language). We also share a Ctags file that's updated nightly with a cron script.

Beyond the essentials, there's a group for Emacs users at Facebook that provides tips, tricks, and major-modes created by people at Facebook. That's where Adam Hupp first developed his excellent mural-mode (ahupp/mural), which does for Ctags what iDo did for file finding and buffer switching.
--
For emacs, it was very informal at Google. There wasn't a huge community of Emacs users at Google, so there wasn't much more than a wiki and a couple language styles matching Google's style guides.

https://trends.google.com/trends/explore?date=all&geo=US&q=%2Fm%2F07zh7,%2Fm%2F01yp0m

https://www.quora.com/Why-is-interest-in-Emacs-dropping
And it is still that. It’s just that emacs is no longer unique, and neither is Lisp.

Dynamically typed scripting languages with garbage collection are a dime a dozen now. Anybody in their right mind developing an extensible text editor today would just use python, ruby, lua, or JavaScript as the extension language and get all the power of Lisp combined with vibrant user communities and millions of lines of ready-made libraries that Stallman and Steele could only dream of in the 70s.

In fact, in many ways emacs and elisp have fallen behind: 40 years after Lambda, the Ultimate Imperative, elisp is still dynamically scoped, and it still doesn’t support multithreading — when I try to use dired to list the files on a slow NFS mount, the entire editor hangs just as thoroughly as it might have in the 1980s. And when I say “doesn’t support multithreading,” I don’t mean there is some other clever trick for continuing to do work while waiting on a system call, like asynchronous callbacks or something. There’s start-process which forks a whole new process, and that’s about it. It’s a concurrency model straight out of 1980s UNIX land.

But being essentially just a decent text editor has robbed emacs of much of its competitive advantage. In a world where every developer tool is scriptable with languages and libraries an order of magnitude more powerful than cranky old elisp, the reason to use emacs is not that it lets a programmer hit a button and evaluate the current expression interactively (which must have been absolutely amazing at one point in the past).

https://www.reddit.com/r/emacs/comments/bh5kk7/why_do_many_new_users_still_prefer_vim_over_emacs/

more general comparison, not just popularity:
Differences between Emacs and Vim: https://stackoverflow.com/questions/1430164/differences-between-Emacs-and-vim

https://www.reddit.com/r/emacs/comments/9hen7z/what_are_the_benefits_of_emacs_over_vim/

https://unix.stackexchange.com/questions/986/what-are-the-pros-and-cons-of-vim-and-emacs

https://www.quora.com/Why-is-Vim-the-programmers-favorite-editor
- Adrien Lucas Ecoffet,

Because it is hard to use. Really.

However, the second part of this sentence applies to just about every good editor out there: if you really learn Sublime Text, you will become super productive. If you really learn Emacs, you will become super productive. If you really learn Visual Studio… you get the idea.

Here’s the thing though, you never actually need to really learn your text editor… Unless you use vim.

...

For many people new to programming, this is the first time they have been a power user of… well, anything! And because they’ve been told how great Vim is, many of them will keep at it and actually become productive, not because Vim is particularly more productive than any other editor, but because it didn’t provide them with a way to not be productive.

They then go on to tell their friends how great Vim is, and their friends go on to become power users and tell their friends in turn, and so forth. All these people believe they became productive because they changed their text editor. Little do they realize that they became productive because their text editor changed them[1].

This is in no way a criticism of Vim. I myself was a beneficiary of such a phenomenon when I learned to type using the Dvorak layout: at that time, I believed that Dvorak would help you type faster. Now I realize the evidence is mixed and that Dvorak might not be much better than Qwerty. However, learning Dvorak forced me to develop good typing habits because I could no longer rely on looking at my keyboard (since I was still using a Qwerty physical keyboard), and this has made me a much more productive typist.

Technical Interview Performance by Editor/OS/Language: https://triplebyte.com/blog/technical-interview-performance-by-editor-os-language
[ed.: I'm guessing this is confounded to all hell.]

The #1 most common editor we see used in interviews is Sublime Text, with Vim close behind.

Emacs represents a fairly small market share today at just about a quarter the userbase of Vim in our interviews. This nicely matches the 4:1 ratio of Google Search Trends for the two editors.

...

Vim takes the prize here, but PyCharm and Emacs are close behind. We’ve found that users of these editors tend to pass our interview at an above-average rate.

On the other end of the spectrum is Eclipse: it appears that someone using either Vim or Emacs is more than twice as likely to pass our technical interview as an Eclipse user.

...

In this case, we find that the average Ruby, Swift, and C# users tend to be stronger, with Python and Javascript in the middle of the pack.

...

Here’s what happens after we select engineers to work with and send them to onsites:

[Python does best.]

There are no wild outliers here, but let’s look at the C++ segment. While C++ programmers have the most challenging time passing Triplebyte’s technical interview on average, the ones we choose to work with tend to have a relatively easier time getting offers at each onsite.

The Rise of Microsoft Visual Studio Code: https://triplebyte.com/blog/editor-report-the-rise-of-visual-studio-code
This chart shows the rates at which each editor's users pass our interview compared to the mean pass rate for all candidates. First, notice the preeminence of Emacs and Vim! Engineers who use these editors pass our interview at significantly higher rates than other engineers. And the effect size is not small. Emacs users pass our interview at a rate 50… [more]
news  linux  oss  tech  editors  devtools  tools  comparison  ranking  flux-stasis  trends  ubiquity  unix  increase-decrease  multi  q-n-a  qra  data  poll  stackex  sv  facebook  google  integration-extension  org:med  politics  stereotypes  coalitions  decentralized  left-wing  right-wing  chart  scale  time-series  distribution  top-n  list  discussion  ide  parsimony  intricacy  cost-benefit  tradeoffs  confounding  analysis  crosstab  pls  python  c(pp)  jvm  microsoft  golang  hmm  correlation  debate  critique  quora  contrarianism  ecosystem  DSL  techtariat  org:com  org:nat  cs 
june 2019 by nhaliday
Reverse salients | West Hunter
Edison thought in terms of reverse salients and critical problems.

“Reverse salients are areas of research and development that are lagging in some obvious way behind the general line of advance. Critical problems are the research questions, cast in terms of the concrete particulars of currently available knowledge and technique and of specific exemplars or models that are solvable and whose solutions would eliminate the reverse salients. ”

What strikes you as as important current example of a reverse salient, and the associated critical problem or problems?
west-hunter  scitariat  discussion  science  technology  innovation  low-hanging  list  top-n  research  open-problems  the-world-is-just-atoms  marginal  definite-planning  frontier  🔬  speedometer  ideas  the-trenches  hi-order-bits  prioritizing  judgement 
may 2019 by nhaliday
Why books don’t work | Andy Matuschak
https://www.spreaker.com/user/10197011/designing-and-developing-new-tools-for-t
https://twitter.com/andy_matuschak/status/1190675776036687878
https://archive.is/hNIFG
https://archive.is/f9Bwh
hmm: "zettelkasten like note systems have you do a linear search for connections, that gets exponentially more expensive as your note body grows",
https://twitter.com/Meaningness/status/1210309788141117440
https://archive.is/P6PH2
https://archive.is/uD9ls
https://archive.is/Sb9Jq

https://twitter.com/Scholars_Stage/status/1199702832728948737
https://archive.is/cc4zf
I reviewed today my catalogue of 420~ books I have read over the last six years and I am in despair. There are probably 100~ whose contents I can tell you almost nothing about—nothing noteworthy anyway.
techtariat  worrydream  learning  education  teaching  higher-ed  neurons  thinking  rhetoric  essay  michael-nielsen  retention  better-explained  bounded-cognition  info-dynamics  info-foraging  books  communication  lectures  contrarianism  academia  scholar  design  meta:reading  studying  form-design  writing  technical-writing  skunkworks  multi  broad-econ  wonkish  unaffiliated  twitter  social  discussion  backup  reflection  metameta  podcast  audio  interview  impetus  space  open-problems  questions  tech  hard-tech  startups  commentary  postrat  europe  germanic  notetaking  graphs  network-structure  similarity  intersection-connectedness  magnitude  cost-benefit  multiplicative 
may 2019 by nhaliday
A Recipe for Training Neural Networks
acmtariat  org:bleg  nibble  machine-learning  deep-learning  howto  tutorial  guide  nitty-gritty  gotchas  init  list  checklists  expert-experience  abstraction  composition-decomposition  gradient-descent  data-science  error  debugging  benchmarks  programming  engineering  best-practices  dataviz  checking  plots  generalization  regularization  unsupervised  optimization  ensembles  random  methodology  multi  twitter  social  discussion  techtariat  links  org:med  pdf  visualization  python  recommendations  advice  devtools 
april 2019 by nhaliday
Language Log » English or Mandarin as the World Language?
- writing system frequently mentioned as barrier
- also imprecision of Chinese might hurt its use for technical writing
- most predicting it won't (but English might be replaced by absence of lingua franca per Nicholas Ostler)
linguistics  language  foreign-lang  china  asia  anglo  world  trends  prediction  speculation  expert-experience  analytical-holistic  writing  network-structure  science  discussion  commentary  flux-stasis  nationalism-globalism  comparison  org:edu 
february 2019 by nhaliday
Timothy Heath - China's New Governing Party Paradigm - YouTube
https://twitter.com/GarettJones/status/1079807448741863425
https://archive.is/NnO9U
What percentage of CCP elites sincerely believe in the official ideology?

https://twitter.com/BennettJonah/status/1153757516867633152
https://archive.is/PI3QS
One of the most useful things to aid understanding is reading the other side in their own words, rather than reading yet more vague analyses about "what the Chinese are up to."

Which is why you need to read this Xi Jinping speech:
https://palladiummag.com/2019/05/31/xi-jinping-in-translation-chinas-guiding-ideology/
--
I like this speech because it is a clear expression of Marxism as an "organizing philosophy of the state" - nothing about equality, barely even anything about "workers"
video  presentation  china  asia  government  institutions  communism  polisci  ideology  technocracy  leviathan  management  science  polanyi-marx  economics  growth-econ  multi  twitter  social  discussion  speculation  backup  realness  revolution  history  mostly-modern  poll  impetus  garett-jones  quotes  statesmen 
february 2019 by nhaliday
T. Greer on Twitter: "Genesis 1st half of Exodus Basic passages of the Deuteronomic Covenant Select scenes from Numbers-Judges Samuel I-II Job Ecclesiastes Proverbs Select Psalms Select passages of Isaiah, Jeremiah, and Ezekiel Jonah 4 Gospels+Acts Romans
https://archive.is/YtwVb
I would pair letters from Paul with Flannery O'Connor's "A Good Man is Hard to Find."

I designed a hero's journey course that included Gilgamesh, Odyssey, and Gawain and the Green Knight. Before reading Gawain you'd read the Sermon on the Mount + few parts of gospels.
The idea with that last one being that Gawain was an attempt to make a hero who (unlike Odysseus) accorded with Christian ethics. As one of its discussion points, the class can debate over how well it actually did that.
...
So I would preface Lord of the Flies with a stylized account of Hobbes and Rousseau, and we would read a great deal of Genesis alongside LOTF.

Same approach was taken to Greece and Rome. Classical myths would be paired with poems from the 1600s-1900s that alluded to them.
...
Genesis
1st half of Exodus
Basic passages of the Deuteronomic Covenant
Select scenes from Numbers-Judges
Samuel I-II
Job
Ecclesiastes
Proverbs
Select Psalms
Select passages of Isaiah, Jeremiah, and Ezekiel
Jonah
4 Gospels+Acts
Romans
1 Corinthians
Hebrews
Revelation
twitter  social  discussion  backup  literature  letters  reading  canon  the-classics  history  usa  europe  the-great-west-whale  religion  christianity  ideology  philosophy  ethics  medieval  china  asia  sinosphere  comparison  culture  civilization  roots  spreading  multi 
february 2019 by nhaliday
The first ethical revolution – Gene Expression
Fifty years ago Julian Jaynes published The Origin of Consciousness in the Breakdown of the Bicameral Mind. Seventy years ago Karl Jaspers introduced the concept of the Axial Age. Both point to the same dynamic historically.

Something happened in the centuries around 500 BCE all around the world. Great religions and philosophies arose. The Indian religious traditions, the Chinese philosophical-political ones, and the roots of what we can recognize as Judaism. In Greece, the precursors of many modern philosophical streams emerged formally, along with a variety of political systems.

The next few centuries saw some more innovation. Rabbinical Judaism transformed a ritualistic tribal religion into an ethical one, and Christianity universalized Jewish religious thought, as well as infusing it with Greek systematic concepts. Meanwhile, Indian and Chinese thought continued to evolve, often due to interactions with each other (it is hard to imagine certain later developments in Confucianism without the Buddhist stimulus). Finally, in the 7th century, Islam emerges as the last great world religion.

...

Living in large complex societies with social stratification posed challenges. A religion such as Christianity was not a coincidence; something of its broad outlines may have been inevitable. Universal, portable, ethical, and infused with transcendence and coherency. Similarly, god-kings seem to have universally transformed themselves into the human who binds heaven to earth in some fashion.

The second wave of social-ethical transformation occurred in the early modern period, starting in Europe. My own opinion is that economic growth triggered by innovation and gains in productivity unleashed constraints which had dampened further transformations in the domain of ethics. But the new developments ultimately were simply extensions and modifications on the earlier “source code” (e.g., whereas for nearly two thousand years Christianity had had to make peace with the existence of slavery, in the 19th century anti-slavery activists began marshaling Christian language against the institution).
gnxp  scitariat  discussion  reflection  religion  christianity  theos  judaism  china  asia  sinosphere  orient  india  the-great-west-whale  occident  history  antiquity  iron-age  mediterranean  the-classics  canon  philosophy  morality  ethics  universalism-particularism  systematic-ad-hoc  analytical-holistic  confucian  big-peeps  innovation  stagnation  technology  economics  biotech  enhancement  genetics  bio  flux-stasis  automation  ai  low-hanging  speedometer  time  distribution  smoothness  shift  dennett  simler  volo-avolo  👽  mystic  marginal  farmers-and-foragers  wealth  egalitarianism-hierarchy  values  formal-values  ideology  good-evil 
april 2018 by nhaliday
Theories of humor - Wikipedia
There are many theories of humor which attempt to explain what humor is, what social functions it serves, and what would be considered humorous. Among the prevailing types of theories that attempt to account for the existence of humor, there are psychological theories, the vast majority of which consider humor to be very healthy behavior; there are spiritual theories, which consider humor to be an inexplicable mystery, very much like a mystical experience.[1] Although various classical theories of humor and laughter may be found, in contemporary academic literature, three theories of humor appear repeatedly: relief theory, superiority theory, and incongruity theory.[2] Among current humor researchers, there is no consensus about which of these three theories of humor is most viable.[2] Proponents of each one originally claimed their theory to be capable of explaining all cases of humor.[2][3] However, they now acknowledge that although each theory generally covers its own area of focus, many instances of humor can be explained by more than one theory.[2][3][4][5] Incongruity and superiority theories, for instance, seem to describe complementary mechanisms which together create humor.[6]

...

Relief theory
Relief theory maintains that laughter is a homeostatic mechanism by which psychological tension is reduced.[2][3][7] Humor may thus for example serve to facilitate relief of the tension caused by one's fears.[8] Laughter and mirth, according to relief theory, result from this release of nervous energy.[2] Humor, according to relief theory, is used mainly to overcome sociocultural inhibitions and reveal suppressed desires. It is believed that this is the reason we laugh whilst being tickled, due to a buildup of tension as the tickler "strikes".[2][9] According to Herbert Spencer, laughter is an "economical phenomenon" whose function is to release "psychic energy" that had been wrongly mobilized by incorrect or false expectations. The latter point of view was supported also by Sigmund Freud.

Superiority theory
The superiority theory of humor traces back to Plato and Aristotle, and Thomas Hobbes' Leviathan. The general idea is that a person laughs about misfortunes of others (so called schadenfreude), because these misfortunes assert the person's superiority on the background of shortcomings of others.[10] Socrates was reported by Plato as saying that the ridiculous was characterized by a display of self-ignorance.[11] For Aristotle, we laugh at inferior or ugly individuals, because we feel a joy at feeling superior to them.[12]

Incongruous juxtaposition theory
The incongruity theory states that humor is perceived at the moment of realization of incongruity between a concept involved in a certain situation and the real objects thought to be in some relation to the concept.[10]

Since the main point of the theory is not the incongruity per se, but its realization and resolution (i.e., putting the objects in question into the real relation), it is often called the incongruity-resolution theory.[10]

...

Detection of mistaken reasoning
In 2011, three researchers, Hurley, Dennett and Adams, published a book that reviews previous theories of humor and many specific jokes. They propose the theory that humor evolved because it strengthens the ability of the brain to find mistakes in active belief structures, that is, to detect mistaken reasoning.[46] This is somewhat consistent with the sexual selection theory, because, as stated above, humor would be a reliable indicator of an important survival trait: the ability to detect mistaken reasoning. However, the three researchers argue that humor is fundamentally important because it is the very mechanism that allows the human brain to excel at practical problem solving. Thus, according to them, humor did have survival value even for early humans, because it enhanced the neural circuitry needed to survive.

Misattribution theory
Misattribution is one theory of humor that describes an audience's inability to identify exactly why they find a joke to be funny. The formal theory is attributed to Zillmann & Bryant (1980) in their article, "Misattribution Theory of Tendentious Humor", published in Journal of Experimental Social Psychology. They derived the critical concepts of the theory from Sigmund Freud's Wit and Its Relation to the Unconscious (note: from a Freudian perspective, wit is separate from humor), originally published in 1905.

Benign violation theory
The benign violation theory (BVT) is developed by researchers A. Peter McGraw and Caleb Warren.[47] The BVT integrates seemingly disparate theories of humor to predict that humor occurs when three conditions are satisfied: 1) something threatens one's sense of how the world "ought to be", 2) the threatening situation seems benign, and 3) a person sees both interpretations at the same time.

From an evolutionary perspective, humorous violations likely originated as apparent physical threats, like those present in play fighting and tickling. As humans evolved, the situations that elicit humor likely expanded from physical threats to other violations, including violations of personal dignity (e.g., slapstick, teasing), linguistic norms (e.g., puns, malapropisms), social norms (e.g., strange behaviors, risqué jokes), and even moral norms (e.g., disrespectful behaviors). The BVT suggests that anything that threatens one's sense of how the world "ought to be" will be humorous, so long as the threatening situation also seems benign.

...

Sense of humor, sense of seriousness
One must have a sense of humor and a sense of seriousness to distinguish what is supposed to be taken literally or not. An even more keen sense is needed when humor is used to make a serious point.[48][49] Psychologists have studied how humor is intended to be taken as having seriousness, as when court jesters used humor to convey serious information. Conversely, when humor is not intended to be taken seriously, bad taste in humor may cross a line after which it is taken seriously, though not intended.[50]

Philosophy of humor bleg: http://marginalrevolution.com/marginalrevolution/2017/03/philosophy-humor-bleg.html

Inside Jokes: https://mitpress.mit.edu/books/inside-jokes
humor as reward for discovering inconsistency in inferential chain

https://twitter.com/search?q=comedy%20OR%20humor%20OR%20humour%20from%3Asarahdoingthing&src=typd
https://twitter.com/sarahdoingthing/status/500000435529195520

https://twitter.com/sarahdoingthing/status/568346955811663872
https://twitter.com/sarahdoingthing/status/600792582453465088
https://twitter.com/sarahdoingthing/status/603215362033778688
https://twitter.com/sarahdoingthing/status/605051508472713216
https://twitter.com/sarahdoingthing/status/606197597699604481
https://twitter.com/sarahdoingthing/status/753514548787683328

https://en.wikipedia.org/wiki/Humour
People of all ages and cultures respond to humour. Most people are able to experience humour—be amused, smile or laugh at something funny—and thus are considered to have a sense of humour. The hypothetical person lacking a sense of humour would likely find the behaviour inducing it to be inexplicable, strange, or even irrational.

...

Ancient Greece
Western humour theory begins with Plato, who attributed to Socrates (as a semi-historical dialogue character) in the Philebus (p. 49b) the view that the essence of the ridiculous is an ignorance in the weak, who are thus unable to retaliate when ridiculed. Later, in Greek philosophy, Aristotle, in the Poetics (1449a, pp. 34–35), suggested that an ugliness that does not disgust is fundamental to humour.

...

China
Confucianist Neo-Confucian orthodoxy, with its emphasis on ritual and propriety, has traditionally looked down upon humour as subversive or unseemly. The Confucian "Analects" itself, however, depicts the Master as fond of humorous self-deprecation, once comparing his wanderings to the existence of a homeless dog.[10] Early Daoist philosophical texts such as "Zhuangzi" pointedly make fun of Confucian seriousness and make Confucius himself a slow-witted figure of fun.[11] Joke books containing a mix of wordplay, puns, situational humor, and play with taboo subjects like sex and scatology, remained popular over the centuries. Local performing arts, storytelling, vernacular fiction, and poetry offer a wide variety of humorous styles and sensibilities.

...

Physical attractiveness
90% of men and 81% of women, all college students, report that having a sense of humour is a crucial characteristic looked for in a romantic partner.[21] Humour and honesty were ranked as the two most important attributes in a significant other.[22] It has since been recorded that humour becomes more evident and significantly more important as the level of commitment in a romantic relationship increases.[23] Recent research suggests expressions of humour in relation to physical attractiveness are two major factors in the desire for future interaction.[19] Women regard physical attractiveness less highly than men do when it comes to dating, a serious relationship, and sexual intercourse.[19] However, women rate humorous men more desirable than nonhumorous individuals for a serious relationship or marriage, but only when these men were physically attractive.[19]

Furthermore, humorous people are perceived by others to be more cheerful but less intellectual than nonhumorous people. Self-deprecating humour has been found to increase the desirability of physically attractive others for committed relationships.[19] The results of a study conducted by McMaster University suggest humour can positively affect one’s desirability for a specific relationship partner, but this effect is only most likely to occur when men use humour and are evaluated by women.[24] No evidence was found to suggest men prefer women with a sense of humour as partners, nor women preferring other women with a sense of humour as potential partners.[24] When women were given the forced-choice design in the study, they chose funny men as potential … [more]
article  list  wiki  reference  psychology  cog-psych  social-psych  emotion  things  phalanges  concept  neurons  instinct  👽  comedy  models  theory-of-mind  explanans  roots  evopsych  signaling  humanity  logic  sex  sexuality  cost-benefit  iq  intelligence  contradiction  homo-hetero  egalitarianism-hierarchy  humility  reinforcement  EEA  eden  play  telos-atelos  impetus  theos  mystic  philosophy  big-peeps  the-classics  literature  inequality  illusion  within-without  dennett  dignity  social-norms  paradox  parallax  analytical-holistic  multi  econotariat  marginal-rev  discussion  speculation  books  impro  carcinisation  postrat  cool  twitter  social  quotes  commentary  search  farmers-and-foragers  🦀  evolution  sapiens  metameta  insight  novelty  wire-guided  realness  chart  beauty  nietzschean  class  pop-diff  culture  alien-character  confucian  order-disorder  sociality  🐝  integrity  properties  gender  gender-diff  china  asia  sinosphere  long-short-run  trust  religion  ideology  elegance  psycho-atoms 
april 2018 by nhaliday
Charade | West Hunter
I was watching Charade the other day, not for the first time, and was noticing that the action scenes with Cary Grant (human fly, and fighting George Kennedy) really weren’t very convincing.  Age. But think what it would be like today: we’d see Audrey Hepburn kicking the living shit out of Kennedy, probably cutting his throat with his own claw – while still being utterly adorable.

https://westhunt.wordpress.com/2018/03/04/shtrafbats/
Was thinking about how there are far too many reviewers, and far too few movies worth reviewing. It might be fun to review the movies that should have been made, instead. Someone ought to make a movie about the life of Konstantin Rokossovsky – an officer arrested and tortured by Stalin (ended up with denailed fingers and steel teeth) who became one of the top Soviet generals. The story would be focused on his command of 16th Army in the final defense of Moscow – an army group composed entirely of penal battalions. The Legion of the Damned.

https://westhunt.wordpress.com/2018/03/04/shtrafbats/#comment-103767
There hasn’t been a good Gulag Archipelago movie, has there?

One historical movie that I’d really like to see would be about the defense of Malta by the Knights of St. John. That or the defense of Vienna. Either one would be very “timely”, which is a word many reviewers seem to misuse quite laughably these days.
--
My oldest son made the same suggestion – The Great Siege

Siege of Vienna – Drawing of the Dark?

https://westhunt.wordpress.com/2018/03/04/shtrafbats/#comment-103846
The Conquest of New Spain.
--
Only Cortez was fully awake. Him and von Neumann.
west-hunter  scitariat  reflection  discussion  aphorism  troll  film  culture  classic  multi  review  history  mostly-modern  world-war  russia  communism  authoritarianism  military  war  poast  medieval  conquest-empire  expansionism  age-of-discovery  europe  eastern-europe  mediterranean  usa  latin-america  big-peeps  von-neumann  giants 
march 2018 by nhaliday
akira on Twitter: "It is almost impressive how quickly we destroyed everything worth preserving."
Now that we have nothing left to fight for, we are utterly free to choose the path forward. There is nothing to hold us back.
Our advice to you: Save who you can. Ditch anyone who has chosen poorly. Do not get left behind. Do not look back.
Things are in fact very, very bad, and you should not be living in cities come the turn of the decade.
twitter  social  discussion  gnon  politics  current-events  rant  urban-rural  usa 
march 2018 by nhaliday
Diving into Chinese philosophy – Gene Expression
Back when I was in college one of my roommates was taking a Chinese philosophy class for a general education requirement. A double major in mathematics and economics (he went on to get an economics Ph.D.), he found the lack of formal rigor in the field rather maddening. I thought this was fair, but I suggested to him that the this-worldly and often non-metaphysical orientation of much of Chinese philosophy made it less amenable to formal and logical analysis.

...

IMO the much more problematic thing about premodern Chinese political philosophy from the point of view of the West is its lack of interest in constitutionalism and the rule of law, stemming from a generally less rationalist approach than that of the Classical Westerners, rather than from any sort of inherent anti-individualism or collectivism or whatever. For someone like Aristotle the constitutional rule of law was the highest moral good in itself and the definition of justice; very much not so for Confucius or for Zhu Xi. They still believed in Justice in the sense of people getting what they deserve, but they didn’t really consider the written rule of law an appropriate way to conceptualize it. OG Confucius leaned more towards the unwritten traditions and rituals passed down from the ancestors, and Neoconfucianism leaned more towards a sort of Universal Reason that could be accessed by the individual’s subjective understanding but which again need not be written down necessarily (although unlike Kant/the Enlightenment it basically implies that such subjective reasoning will naturally lead one to reaffirm the ancient traditions). In left-right political spectrum terms IMO this leads to a well-defined right and left and a big old hole in the center where classical republicanism would be in the West. This resonates pretty well with modern East Asian political history IMO

https://www.radicalphilosophy.com/article/is-logos-a-proper-noun
Is logos a proper noun?
Or, is Aristotelian Logic translatable into Chinese?
gnxp  scitariat  books  recommendations  discussion  reflection  china  asia  sinosphere  philosophy  logic  rigor  rigidity  flexibility  leviathan  law  individualism-collectivism  analytical-holistic  systematic-ad-hoc  the-classics  canon  morality  ethics  formal-values  justice  reason  tradition  government  polisci  left-wing  right-wing  order-disorder  eden-heaven  analogy  similarity  comparison  thinking  summary  top-n  n-factor  universalism-particularism  duality  rationality  absolute-relative  subjective-objective  the-self  apollonian-dionysian  big-peeps  history  iron-age  antidemos  democracy  institutions  darwinian  multi  language  concept  conceptual-vocab  inference  linguistics  foreign-lang  mediterranean  europe  germanic  mostly-modern  gallic  culture 
march 2018 by nhaliday
Existential Risks: Analyzing Human Extinction Scenarios
https://twitter.com/robinhanson/status/981291048965087232
https://archive.is/dUTD5
Would you endorse choosing policy to max the expected duration of civilization, at least as a good first approximation?
Can anyone suggest a different first approximation that would get more votes?

https://twitter.com/robinhanson/status/981335898502545408
https://archive.is/RpygO
How useful would it be to agree on a relatively-simple first-approximation observable-after-the-fact metric for what we want from the future universe, such as total life years experienced, or civilization duration?

We're Underestimating the Risk of Human Extinction: https://www.theatlantic.com/technology/archive/2012/03/were-underestimating-the-risk-of-human-extinction/253821/
An Oxford philosopher argues that we are not adequately accounting for technology's risks—but his solution to the problem is not for Luddites.

Anderson: You have argued that we underrate existential risks because of a particular kind of bias called observation selection effect. Can you explain a bit more about that?

Bostrom: The idea of an observation selection effect is maybe best explained by first considering the simpler concept of a selection effect. Let's say you're trying to estimate how large the largest fish in a given pond is, and you use a net to catch a hundred fish and the biggest fish you find is three inches long. You might be tempted to infer that the biggest fish in this pond is not much bigger than three inches, because you've caught a hundred of them and none of them are bigger than three inches. But if it turns out that your net could only catch fish up to a certain length, then the measuring instrument that you used would introduce a selection effect: it would only select from a subset of the domain you were trying to sample.

Now that's a kind of standard fact of statistics, and there are methods for trying to correct for it and you obviously have to take that into account when considering the fish distribution in your pond. An observation selection effect is a selection effect introduced not by limitations in our measurement instrument, but rather by the fact that all observations require the existence of an observer. This becomes important, for instance, in evolutionary biology. For instance, we know that intelligent life evolved on Earth. Naively, one might think that this piece of evidence suggests that life is likely to evolve on most Earth-like planets. But that would be to overlook an observation selection effect. For no matter how small the proportion of all Earth-like planets that evolve intelligent life, we will find ourselves on a planet that did. Our data point, that intelligent life arose on our planet, is predicted equally well by the hypothesis that intelligent life is very improbable even on Earth-like planets as by the hypothesis that intelligent life is highly probable on Earth-like planets. When it comes to human extinction and existential risk, there are certain controversial ways that observation selection effects might be relevant.
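
A quick way to see the selection effect Bostrom describes is to simulate the fish-and-net story; this is a toy sketch of my own (the exponential size distribution, the 3-inch cutoff, and the sample of 100 are invented for illustration, not taken from the interview):

import random

# The net's 3-inch limit, not the pond, determines the largest fish you ever see.
random.seed(0)
pond = [random.expovariate(1 / 4) for _ in range(10_000)]   # true fish sizes, in inches
catchable = [f for f in pond if f <= 3.0]                   # the net only holds small fish
sample = random.sample(catchable, 100)                      # "catch a hundred fish"

print("largest fish in the sample:", round(max(sample), 2))  # close to 3 inches
print("largest fish in the pond:  ", round(max(pond), 2))    # far larger

Naively inferring the pond's biggest fish from the sample's biggest fish fails because the instrument truncated the distribution; the observation selection effect has the same structure, with "observers must exist" playing the role of the net.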
bostrom  ratty  miri-cfar  skunkworks  philosophy  org:junk  list  top-n  frontier  speedometer  risk  futurism  local-global  scale  death  nihil  technology  simulation  anthropic  nuclear  deterrence  environment  climate-change  arms  competition  ai  ai-control  genetics  genomics  biotech  parasites-microbiome  disease  offense-defense  physics  tails  network-structure  epidemiology  space  geoengineering  dysgenics  ems  authoritarianism  government  values  formal-values  moloch  enhancement  property-rights  coordination  cooperate-defect  flux-stasis  ideas  prediction  speculation  humanity  singularity  existence  cybernetics  study  article  letters  eden-heaven  gedanken  multi  twitter  social  discussion  backup  hanson  metrics  optimization  time  long-short-run  janus  telos-atelos  poll  forms-instances  threat-modeling  selection  interview  expert-experience  malthus  volo-avolo  intel  leviathan  drugs  pharma  data  estimate  nature  longevity  expansionism  homo-hetero  utopia-dystopia 
march 2018 by nhaliday
Effects of Education on Political Opinions: An International Study | International Journal of Public Opinion Research | Oxford Academic
Education and Political Party: The Effects of College or Social Class?: https://www.jstor.org/stable/2778029
The impact of education on political ideology: Evidence from European compulsory education reforms: https://www.sciencedirect.com/science/article/pii/S0272775716301704
correlation is with leftism, causal effect is shift to right

Greg thinks there are some effects: https://pinboard.in/u:nhaliday/b:5adca8f16265

https://twitter.com/GarettJones/status/964209775419457536
https://archive.is/oFELz
https://archive.is/f1DBF
https://archive.is/5iiqn

http://econlog.econlib.org/archives/2008/12/education_ideol.html

https://twitter.com/pseudoerasmus/status/963451867912130561
https://archive.is/sHI7g
https://archive.is/B5Gdv
https://archive.is/hFERC
https://archive.is/8IUDm
Bryan Caplan has written a very persuasive book suggesting that retention/transfer of learning is very low. How do we know it’s not the same with the “PoMo ethos”

https://twitter.com/whyvert/status/1056674832530714628
https://archive.is/xRmZ4
Why are people with degrees more culturally liberal? A selection effect (such people more likely to want to go to college) or an education/university effect (eg the influence of professors)? Article uses longitudinal data to show the answer is: both.
study  polisci  sociology  education  higher-ed  intervention  branches  politics  ideology  world  general-survey  correlation  causation  left-wing  right-wing  phalanges  multi  coalitions  history  mostly-modern  usa  cold-war  europe  EU  natural-experiment  endogenous-exogenous  direction  west-hunter  scitariat  twitter  social  discussion  backup  econotariat  garett-jones  cracker-econ  data  analysis  regression  org:econlib  biodet  behavioral-gen  variance-components  environmental-effects  counter-revolution  strategy  tactics  pseudoE  demographics  race  gender  markets  impetus  roots  explanans  migration  social-norms  persuasion  selection  confounding 
february 2018 by nhaliday
Adam Smith, David Hume, Liberalism, and Esotericism - Call for Papers - Elsevier
https://twitter.com/davidmanheim/status/963071765995032576
https://archive.is/njT4P
A very good economics journal--famously an outlet for rigorous, outside the box thinking--is publishing a special issue on hidden meanings in the work of two of the world's greatest thinkers.

Another sign the new Straussian age is upon us: Bayesians update accordingly!
big-peeps  old-anglo  economics  hmm  roots  politics  ideology  political-econ  philosophy  straussian  history  early-modern  britain  anglo  speculation  questions  events  multi  twitter  social  commentary  discussion  backup  econotariat  garett-jones  spearhead 
february 2018 by nhaliday
Information Processing: US Needs a National AI Strategy: A Sputnik Moment?
FT podcasts on US-China competition and AI: http://infoproc.blogspot.com/2018/05/ft-podcasts-on-us-china-competition-and.html

A new recommended career path for effective altruists: China specialist: https://80000hours.org/articles/china-careers/
Our rough guess is that it would be useful for there to be at least ten people in the community with good knowledge in this area within the next few years.

By “good knowledge” we mean they’ve spent at least 3 years studying these topics and/or living in China.

We chose ten because that would be enough for several people to cover each of the major areas listed (e.g. 4 within AI, 2 within biorisk, 2 within foreign relations, 1 in another area).

AI Policy and Governance Internship: https://www.fhi.ox.ac.uk/ai-policy-governance-internship/

https://www.fhi.ox.ac.uk/deciphering-chinas-ai-dream/
https://www.fhi.ox.ac.uk/wp-content/uploads/Deciphering_Chinas_AI-Dream.pdf
Deciphering China’s AI Dream
The context, components, capabilities, and consequences of
China’s strategy to lead the world in AI

Europe’s AI delusion: https://www.politico.eu/article/opinion-europes-ai-delusion/
Brussels is failing to grasp threats and opportunities of artificial intelligence.
By BRUNO MAÇÃES

When the computer program AlphaGo beat the Chinese professional Go player Ke Jie in a three-part match, it didn’t take long for Beijing to realize the implications.

If algorithms can already surpass the abilities of a master Go player, it can’t be long before they will be similarly supreme in the activity to which the classic board game has always been compared: war.

As I’ve written before, the great conflict of our time is about who can control the next wave of technological development: the widespread application of artificial intelligence in the economic and military spheres.

...

If China’s ambitions sound plausible, that’s because the country’s achievements in deep learning are so impressive already. After Microsoft announced that its speech recognition software surpassed human-level language recognition in October 2016, Andrew Ng, then head of research at Baidu, tweeted: “We had surpassed human-level Chinese recognition in 2015; happy to see Microsoft also get there for English less than a year later.”

...

One obvious advantage China enjoys is access to almost unlimited pools of data. The machine-learning technologies boosting the current wave of AI expansion are only as good as the amount of data they can use. That could be the number of people driving cars, photos labeled on the internet or voice samples for translation apps. With 700 or 800 million Chinese internet users and fewer data protection rules, China is as rich in data as the Gulf States are in oil.

How can Europe and the United States compete? They will have to be commensurately better in developing algorithms and computer power. Sadly, Europe is falling behind in these areas as well.

...

Chinese commentators have embraced the idea of a coming singularity: the moment when AI surpasses human ability. At that point a number of interesting things happen. First, future AI development will be conducted by AI itself, creating exponential feedback loops. Second, humans will become useless for waging war. At that point, the human mind will be unable to keep pace with robotized warfare. With advanced image recognition, data analytics, prediction systems, military brain science and unmanned systems, devastating wars might be waged and won in a matter of minutes.

...

The argument in the new strategy is fully defensive. It first considers how AI raises new threats and then goes on to discuss the opportunities. The EU and Chinese strategies follow opposite logics. Already on its second page, the text frets about the legal and ethical problems raised by AI and discusses the “legitimate concerns” the technology generates.

The EU’s strategy is organized around three concerns: the need to boost Europe’s AI capacity, ethical issues and social challenges. Unfortunately, even the first dimension quickly turns out to be about “European values” and the need to place “the human” at the center of AI — forgetting that the first word in AI is not “human” but “artificial.”

https://twitter.com/mr_scientism/status/983057591298351104
https://archive.is/m3Njh
US military: "LOL, China thinks it's going to be a major player in AI, but we've got all the top AI researchers. You guys will help us develop weapons, right?"

US AI researchers: "No."

US military: "But... maybe just a computer vision app."

US AI researchers: "NO."

https://www.theverge.com/2018/4/4/17196818/ai-boycot-killer-robots-kaist-university-hanwha
https://www.nytimes.com/2018/04/04/technology/google-letter-ceo-pentagon-project.html
https://twitter.com/mr_scientism/status/981685030417326080
https://archive.is/3wbHm
AI-risk was a mistake.
hsu  scitariat  commentary  video  presentation  comparison  usa  china  asia  sinosphere  frontier  technology  science  ai  speedometer  innovation  google  barons  deepgoog  stories  white-paper  strategy  migration  iran  human-capital  corporation  creative  alien-character  military  human-ml  nationalism-globalism  security  investing  government  games  deterrence  defense  nuclear  arms  competition  risk  ai-control  musk  optimism  multi  news  org:mag  europe  EU  80000-hours  effective-altruism  proposal  article  realness  offense-defense  war  biotech  altruism  language  foreign-lang  philosophy  the-great-west-whale  enhancement  foreign-policy  geopolitics  anglo  jobs  career  planning  hmm  travel  charity  tech  intel  media  teaching  tutoring  russia  india  miri-cfar  pdf  automation  class  labor  polisci  society  trust  n-factor  corruption  leviathan  ethics  authoritarianism  individualism-collectivism  revolution  economics  inequality  civic  law  regulation  data  scale  pro-rata  capital  zero-positive-sum  cooperate-defect  distribution  time-series  tre 
february 2018 by nhaliday
National Defense Strategy of the United States of America
National Defense Strategy released with clear priority: Stay ahead of Russia and China: https://www.defensenews.com/breaking-news/2018/01/19/national-defense-strategy-released-with-clear-priority-stay-ahead-of-russia-and-china/

https://twitter.com/AngloRemnant/status/985341571410341893
https://archive.is/RhBdG
https://archive.is/wRzRN
A saner allocation of US 'defense' funds would be something like 10% nuclear trident, 10% border patrol, & spend the rest inoculating against cyber & biological attacks.
and since the latter 2 are hopeless, just refund 80% of the defense budget.
--
Monopoly on force at sea is arguably worthwhile.
--
Given the value of the US market to any would-be adversary, id be willing to roll the dice & let it ride.
--
subs are part of the triad, surface ships are sitting ducks this day and age
--
But nobody does sink them, precisely because of the monopoly on force. It's a path-dependent equilibrium where (for now) no other actor can reap the benefits of destabilizing the monopoly, and we're probably drastically underestimating the ramifications if/when it goes away.
--
can lethal autonomous weapon systems get some
pdf  white-paper  org:gov  usa  government  trump  policy  nascent-state  foreign-policy  realpolitik  authoritarianism  china  asia  russia  antidemos  military  defense  world  values  enlightenment-renaissance-restoration-reformation  democracy  chart  politics  current-events  sulla  nuclear  arms  deterrence  strategy  technology  sky  oceans  korea  communism  innovation  india  europe  EU  MENA  multi  org:foreign  war  great-powers  thucydides  competition  twitter  social  discussion  backup  gnon  🐸  markets  trade  nationalism-globalism  equilibrium  game-theory  tactics  top-n  hi-order-bits  security  hacker  biotech  terrorism  disease  parasites-microbiome  migration  walls  internet 
january 2018 by nhaliday
The Western Elite from a Chinese Perspective - American Affairs Journal
I don’t claim to be a modern-day Alexis de Tocqueville, nor do I have much in common with this famous observer of American life. He grew up in Paris, a city renowned for its culture and architecture. I grew up in Shijiazhuang, a city renowned for being the headquarters of the company that produced toxic infant formula. He was a child of aristocrats; I am the child of modest workers.

Nevertheless, I hope my candid observations can provide some insights into the elite institutions of the West. Certain beliefs are as ubiquitous among the people I went to school with as smog was in Shijiazhuang. The doctrines that shape the worldviews and cultural assumptions at elite Western institutions like Cambridge, Stanford, and Goldman Sachs have become almost religious. Nevertheless, I hope that the perspective of a candid Chinese atheist can be of some instruction to them.

...

So I came to the UK in 2001, when I was 16 years old. Much to my surprise, I found the UK’s exam-focused educational system very similar to the one in China. What is more, in both countries, going to the “right schools” and getting the “right job” are seen as very important by a large group of eager parents. As a result, scoring well on exams and doing well in school interviews—or even the play session for the nursery or pre-prep school—become the most important things in the world. Even at the university level, the undergraduate degree from the University of Cambridge depends on nothing else but an exam at the end of the last year.

On the other hand, although the UK’s university system is considered superior to China’s, with a population that is only one-twentieth the size of my native country, competition, while tough, is less intimidating. For example, about one in ten applicants gets into Oxbridge in the UK, and Stanford and Harvard accept about one in twenty-five applicants. But in Hebei province in China, where I am from, only one in fifteen hundred applicants gets into Peking or Qinghua University.

Still, I found it hard to believe how much easier everything became. I scored first nationwide in the GCSE (high school) math exam, and my photo was printed in a national newspaper. I was admitted into Trinity College, University of Cambridge, once the home of Sir Isaac Newton, Francis Bacon, and Prince Charles.

I studied economics at Cambridge, a field which has become more and more mathematical since the 1970s. The goal is always to use a mathematical model to find a closed-form solution to a real-world problem. Looking back, I’m not sure why my professors were so focused on these models. I have since found that the mistake of blindly relying on models is quite widespread in both trading and investing—often with disastrous results, such as the infamous collapse of the hedge fund Long-Term Capital Management. Years later, I discovered the teaching of Warren Buffett: it is better to be approximately right than precisely wrong. But our professors taught us to think of the real world as a math problem.

The culture of Cambridge followed the dogmas of the classroom: a fervent adherence to rules and models established by tradition. For example, at Cambridge, students are forbidden to walk on grass. This right is reserved for professors only. The only exception is for those who achieve first class honors in exams; they are allowed to walk on one area of grass on one day of the year.

The behavior of my British classmates demonstrated an even greater herd mentality than what is often mocked in American MBAs. For example, out of the thirteen economists in my year at Trinity, twelve would go on to join investment banks, and five of us went to work for Goldman Sachs.

...

To me, Costco represents the best of American capitalism. It is a corporation known for having its customers and employees in mind, while at the same time it has compensated its shareholders handsomely over the years. To the customers, it offers the best combination of quality and low cost. Whenever it manages to reduce costs, it passes the savings on to customers immediately. Achieving a 10 percent gross margin with prices below Amazon’s is truly incredible. After I had been there once, I found it hard to shop elsewhere.

Meanwhile, its salaries are much higher than similar retail jobs. When the recession hit in 2008, the company increased salaries to help employees cope with the difficult environment. From the name tags the staff wear, I have seen that frontline employees work there for decades, something hard to imagine elsewhere.

Stanford was for me a distant second to Costco in terms of the American capitalist experience. Overall, I enjoyed the curriculum at the GSB. Inevitably I found some classes less interesting, but the professors all seemed to be quite understanding, even when they saw me reading my kindle during class.

One class was about strategy. It focused on how corporate mottos and logos could inspire employees. Many of the students had worked for nonprofits or health care or tech companies, all of which had mottos about changing the world, saving lives, saving the planet, etc. The professor seemed to like these mottos. I told him that at Goldman our motto was “be long-term greedy.” The professor couldn’t understand this motto or why it was inspiring. I explained to him that everyone else in the market was short-term greedy and, as a result, we took all their money. Since traders like money, this was inspiring. He asked if perhaps there was another motto or logo that my other classmates might connect with. I told him about the black swan I kept on my desk as a reminder that low probability events happen with high frequency. He didn’t like that motto either and decided to call on another student, who had worked at Pfizer. Their motto was “all people deserve to live healthy lives.” The professor thought this was much better. I didn’t understand how it would motivate employees, but this was exactly why I had come to Stanford: to learn the key lessons of interpersonal communication and leadership.

On the communication and leadership front, I came to the GSB knowing I was not good and hoped to get better. My favorite class was called “Interpersonal Dynamics” or, as students referred to it, “Touchy Feely.” In “Touchy Feely,” students get very candid feedback on how their words and actions affect others in a small group that meets several hours per week for a whole quarter.

We talked about microaggressions and feelings and empathy and listening. Sometimes in class the professor would say things to me like “Puzhong, when Mary said that, I could see you were really feeling something,” or “Puzhong, I could see in your eyes that Peter’s story affected you.” And I would tell them I didn’t feel anything. I was quite confused.

One of the papers we studied mentioned that subjects are often not conscious of their own feelings when fully immersed in a situation. But body indicators such as heart rate would show whether the person is experiencing strong emotions. I thought that I generally didn’t have a lot of emotions and decided that this might be a good way for me to discover my hidden emotions that the professor kept asking about.

So I bought a heart rate monitor and checked my resting heart rate. Right around 78. And when the professor said to me in class “Puzhong, I can see that story brought up some emotions in you,” I rolled up my sleeve and checked my heart rate. It was about 77. And so I said, “nope, no emotion.” The experiment seemed to confirm my prior belief: my heart rate hardly moved, even when I was criticized, though it did jump when I became excited or laughed.

This didn’t land well with some of my classmates. They felt I was not treating these matters with the seriousness that they deserved. The professor was very angry. My takeaway was that my interpersonal skills were so bad that I could easily offend people unintentionally, so I concluded that after graduation I should do something that involved as little human interaction as possible.

Therefore, I decided I needed to return to work in financial markets rather than attempting something else. I went to the career service office and told them that my primary goal after the MBA was to make money. I told them that $500,000 sounded like a good number. They were very confused, though, as they said their goal was to help me find my passion and my calling. I told them that my calling was to make money for my family. They were trying to be helpful, but in my case, their advice didn’t turn out to be very helpful.

Eventually I was able to meet the chief financial officer of my favorite company, Costco. He told me that they don’t hire any MBAs. Everyone starts by pushing trolleys. (I have seriously thought about doing just that. But my wife is strongly against it.) Maybe, I thought, that is why the company is so successful—no MBAs!

...

Warren Buffett has said that the moment one was born in the United States or another Western country, that person has essentially won a lottery. If someone is born a U.S. citizen, he or she enjoys a huge advantage in almost every aspect of life, including expected wealth, education, health care, environment, safety, etc., when compared to someone born in developing countries. For someone foreign to “purchase” these privileges, the price tag at the moment is $1 million dollars (the rough value of the EB-5 investment visa). Even at this price level, the demand from certain countries routinely exceeds the annual allocated quota, resulting in long waiting times. In that sense, American citizens were born millionaires!

Yet one wonders how long such luck will last. This brings me back to the title of Rubin’s book, his “uncertain world.” In such a world, the vast majority of things are outside our control, determined by God or luck. After we have given our best and once the final card is drawn, we should neither become too excited by what we have achieved nor too depressed by what we failed to … [more]
news  org:mag  org:popup  letters  lol  :/  china  asia  sinosphere  orient  usa  the-great-west-whale  occident  rot  zeitgeist  tocqueville  culture  comparison  malaise  aphorism  random  realness  hypocrisy  emotion  success  counter-revolution  nascent-state  communism  capitalism  education  higher-ed  britain  anglosphere  competition  oxbridge  tradition  flux-stasis  finance  innovation  autism  👽  near-far  within-without  business  gnon  🐸  twitter  social  commentary  discussion  backup  mena4  futurism  trends  elite  institutions  religion  christianity  theos  truth  scale  population  courage  vitality  models  map-territory  long-short-run  time-preference  patience  temperance  virtu  cultural-dynamics  input-output  impact  investing  monetary-fiscal  is-ought  pic  unaffiliated  right-wing  analytical-holistic  systematic-ad-hoc  stanford  n-factor  civilization  management  industrial-org  people  stream  alien-character  pro-rata  tails  gnosis-logos  signal-noise  pragmatic 
january 2018 by nhaliday
Team *Decorations Until Epiphany* on Twitter: "@RoundSqrCupola maybe just C https://t.co/SFPXb3qrAE"
https://archive.is/k0fsS
Remember ‘BRICs’? Now it’s just ICs.
--
maybe just C
Solow predicts that if 2 countries have the same TFP, then the poorer nation should grow faster. But poorer India grows more slowly than China.

Solow thinking leads one to suspect India has substantially lower TFP.

Recent growth is great news, but alas 5 years isn't the long run!

FWIW under Solow conditional convergence assumptions--historically robust--the fact that a country as poor as India grows only a few % faster than the world average is a sign they'll end up poorer than S Europe.

see his spreadsheet here: http://mason.gmu.edu/~gjonesb/SolowForecast.xlsx
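
To make the conditional-convergence arithmetic concrete, a minimal sketch assuming the textbook relation growth_gap ~ lam * (ln(y_star) - ln(y)); the 2%/yr convergence speed, the 3-point growth premium, and the income figure are rough assumptions of mine, not numbers from the linked spreadsheet:

from math import exp

lam = 0.02              # conventional ~2%/yr speed of conditional convergence (assumption)
growth_premium = 0.03   # India growing ~3 pp faster than the frontier (assumption)
y_india = 7_000         # rough PPP income per capita, USD (assumption)

log_gap = growth_premium / lam     # implied log-distance to India's own steady state
y_star = y_india * exp(log_gap)    # about 4.5x current income

print(f"implied steady-state income: ~${y_star:,.0f}")   # roughly $31k

A modest growth premium at a very low income level implies a nearby steady state rather than convergence to the frontier, which is how the thread gets "poorer than S Europe" out of Solow-style arithmetic.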
spearhead  econotariat  garett-jones  unaffiliated  twitter  social  discussion  india  asia  china  economics  macro  growth-econ  econ-metrics  wealth  wealth-of-nations  convergence  world  developing-world  trends  time-series  cjones-like  prediction  multi  backup  the-bones  long-short-run  europe  mediterranean  comparison  simulation  econ-productivity  great-powers  thucydides  broad-econ  pop-diff  microfoundations  🎩  marginal  hive-mind  rindermann-thompson  hari-seldon  tools  calculator  estimate 
december 2017 by nhaliday
Asabiyyah in Steve King’s Iowa – Gene Expression
What will happen if and when institutions collapse? I do not believe much of America has the social capital of Orange City, Iowa. We have become rational actors, utility optimizers. To some extent, bureaucratic corporate life demands that we behave in this manner. Individual attainment and achievement are lionized, while sacrifice for the public good is the lot of the exceptional saint.
gnxp  scitariat  discussion  usa  culture  society  cultural-dynamics  american-nations  cohesion  trust  social-capital  trends  institutions  data  education  human-capital  britain  anglo  europe  germanic  nordic  individualism-collectivism  values  language  trivia  cocktail  shakespeare  religion  christianity  protestant-catholic  community 
december 2017 by nhaliday
How sweet it is! | West Hunter
This has probably been going on for a long, long time. It may well go back before anatomically modern humans. I say that because of the greater honeyguide, which guides people to beehives in Africa. After we take the honey, the honeyguide eats the grubs and wax. A guiding bird attracts your attention with wavering, chattering ‘tya’ notes compounded with peeps and pipes. It flies towards an occupied hive and then stops and calls again. It has only been seen to guide humans.

I would not be surprised to find that this symbiotic relationship is far older than the domestication of dogs. But it is not domestication: we certainly don’t control their reproduction. I wouldn’t count on it, but if you could determine the genetic basis of this signaling behavior, you might be able to get an idea of how old it is.

Honeyguides may be mankind’s oldest buds, but they’re nasty little creatures: brood parasites, like cuckoos.
west-hunter  scitariat  discussion  trivia  cocktail  africa  speculation  history  antiquity  sapiens  farmers-and-foragers  food  nature  domestication  cooperate-defect  ed-yong  org:sci  popsci  survival  outdoors 
december 2017 by nhaliday
Sources on Technical History | Salo Forum - Chic Nihilism
This is a thread where people can chip in and list some good sources for the history of technology and mechanisms (hopefully with illustrations), books on infrastructure or industrial geography, or survey books in engineering. This is a thread that remains focused on the "technical" and not historical side.

Now, on the history of technology alone if I comprehensively listed every book, paper, etc., I've read on the subject since childhood then this thread would run well over 100 pages (seriously). I'll try to compress it by dealing with entire authors, journals, and publishers even.

First, a note on preliminaries: the best single-volume primer on the physics, internal components and subsystems of military weapons (including radar, submarines) is Craig Payne's Principles of Naval Weapons Systems. Make sure to get the second edition, the first edition is useless.
gnon  🐸  chan  poast  links  reading  technology  dirty-hands  the-world-is-just-atoms  military  defense  letters  discussion  list  books  recommendations  confluence  arms  war  heavy-industry  mostly-modern  world-war  history  encyclopedic  meta:war  offense-defense  quixotic  war-nerd 
november 2017 by nhaliday
Lynn Margulis | West Hunter
Margulis went on to theorize that symbiotic relationships between organisms are the dominant driving force of evolution. There certainly are important examples of this: as far as I know, every complex organism that digests cellulose manages it thru a symbiosis with various prokaryotes. Many organisms with a restricted diet have symbiotic bacteria that provide essential nutrients – aphids, for example. Tall fescue, a popular turf grass on golf courses, carries an endosymbiotic fungus. And so on, and on and on.

She went on to oppose neodarwinism, particularly rejecting inter-organismal competition (and population genetics itself). From Wiki: She also believed that proponents of the standard theory “wallow in their zoological, capitalistic, competitive, cost-benefit interpretation of Darwin – having mistaken him… Neo-Darwinism, which insists on [the slow accrual of mutations by gene-level natural selection], is in a complete funk.”[8]

...

You might think that Lynn Margulis is an example of someone that could think outside the box because she’d never even been able to find it in the first place – but that’s more true of autistic types [like Dirac or Turing], which I doubt she was in any way. I’d say that some traditional prejudices [dislike of capitalism and individual competition], combined with the sort of general looniness that leaves one open to unconventional ideas, drove her in a direction that bore fruit, more or less by coincidence. A successful creative scientist does not have to be right about everything, or indeed about much of anything: they need to contribute at least one new, true, and interesting thing.

https://westhunt.wordpress.com/2017/11/25/lynn-margulis/#comment-98174
“A successful creative scientist does not have to be right about everything, or indeed about much of anything: they need to contribute at least one new, true, and interesting thing.” Yes – it’s like old bands. As long as they have just one song in heavy rotation on the classic rock stations, they can tour endlessly – it doesn’t matter that they have only one or even no original members performing. A scientific example of this phenomenon is Kary Mullis. He’ll always have PCR, even if a glowing raccoon did greet him with the words, “Good evening, Doctor.”

Nobel Savage: https://www.lrb.co.uk/v21/n13/steven-shapin/nobel-savage
Dancing Naked in the Mind Field by Kary Mullis

jet fuel can't melt steel beams: https://westhunt.wordpress.com/2017/11/25/lynn-margulis/#comment-98201
You have to understand a subject extremely well to make arguments why something couldn’t have happened. The easiest cases involve some purported explanation violating a conservation law of physics: that wasn’t the case here.

Do I think you’re a hotshot, deeply knowledgeable about structural engineering, properties of materials, using computer models, etc? A priori, pretty unlikely. What are the odds that you know as much simple mechanics as I do? a priori, still pretty unlikely. Most likely, you’re talking through your hat.

Next, the conspiracy itself is unlikely: quite a few people would be involved – unlikely that none of them would talk. It’s not that easy to find people that would go along with such a thing, believe it or not. The Communists were pretty good at conspiracy, but people defected, people talked: not just Whittaker Chambers, not just Igor Gouzenko.
west-hunter  scitariat  discussion  people  profile  science  the-trenches  innovation  discovery  ideas  turing  giants  autism  👽  bio  evolution  eden  roots  darwinian  capitalism  competition  cooperate-defect  being-right  info-dynamics  frontier  curiosity  creative  multi  poast  prudence  org:mag  org:anglo  letters  books  review  critique  summary  lol  genomics  social-science  sociology  psychology  psychiatry  ability-competence  rationality  epistemic  reason  events  terrorism  usa  islam  communism  coordination  organizing  russia  dirty-hands  degrees-of-freedom  alignment 
november 2017 by nhaliday
SEXUAL DIMORPHISM, SEXUAL SELECTION, AND ADAPTATION IN POLYGENIC CHARACTERS - Lande - 1980 - Evolution - Wiley Online Library
https://twitter.com/gcochran99/status/970758341990367232
https://archive.is/mcKvr
Lol, that's nothing: my biology teacher in high school told me sex differences couldn't evolve, since all of us inherit 50% of our genes from parents of both sexes. Being a raucous Hispanic kid, I burst out laughing; she was not pleased.
--
Sex differences actually evolve more slowly because of that: something like 80 times more slowly.
...
Doesn't have that number, but in the same ballpark.

Sexual Dimorphism, Sexual Selection, And Adaptation In Polygenic Characters

Russell Lande

https://twitter.com/gcochran99/status/999189778867208193
https://archive.is/AR8FY
I believe it, because sex differences [in cases where the trait is not sex-limited] evolve far more slowly than other things, on the order of 100 times more slowly. Lande 1980: https://onlinelibrary.wiley.com/doi/pdf/10.1111/j.1558-5646.1980.tb04817.x

The deep past has a big vote in such cases.
...
as for the extent to which women were voluntarily choosing mates 20k years ago, or 100k years ago – I surely don't know.
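
For reference, here is a sketch of the two-sex quantitative-genetic bookkeeping behind the "~100 times more slowly" figure (my reconstruction in the spirit of Lande 1980, not a quote from the paper; the notation and the assumption of equal trait variances in the two sexes are mine). Let G be the additive genetic variance of the trait in each sex, B the between-sex additive genetic covariance, r_mf = B/G the between-sex genetic correlation, and beta_m, beta_f the selection gradients on males and females:

\[
\Delta\bar{z}_m = \tfrac{1}{2}\left(G\,\beta_m + B\,\beta_f\right),
\qquad
\Delta\bar{z}_f = \tfrac{1}{2}\left(B\,\beta_m + G\,\beta_f\right)
\]
\[
\Delta\left(\bar{z}_m - \bar{z}_f\right)
  = \tfrac{1}{2}\,(G - B)\,(\beta_m - \beta_f)
  = \tfrac{1}{2}\,G\,(1 - r_{mf})\,(\beta_m - \beta_f)
\]

The population mean responds in proportion to G + B, while the dimorphism responds in proportion to G(1 − r_mf). With r_mf around 0.98–0.99, the male–female difference moves something like fifty to a hundred times more slowly than the mean under comparable selection – the ballpark quoted above, and why the biology teacher's argument gets the direction right (inheritance from both parents does slow the evolution of dimorphism) but the conclusion wrong (it doesn't stop it).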

other time mentioned: https://pinboard.in/u:nhaliday/b:3a7c5b42dd50
study  article  bio  biodet  gender  gender-diff  evolution  genetics  population-genetics  methodology  nibble  sex  🌞  todo  pdf  piracy  marginal  comparison  pro-rata  data  multi  twitter  social  discussion  backup  west-hunter  scitariat  farmers-and-foragers  sexuality  evopsych  EEA 
november 2017 by nhaliday
Bouncing Off the Bottom | West Hunter
Actually going extinct would seem to be a bad thing, but a close call can, in principle, be a good thing.

Pathogens can be a heavy burden on a species, worse than a 50-lb sack of cement. Lifting that burden can have a big effect: we know that many species flourish madly once they escape their typical parasites. That’s often the case with invasive species. It’s also a major strategy in agriculture: crops often do best in a country far away from their place of origin – where the climate is familiar, but most parasites have been left behind. For example, rubber trees originated in South America, but they’re a lot easier to grow in Liberia or Malaysia.

Consider a situation with a really burdensome pathogen – one that specializes in and depends on a single host species. That pathogen has to find new host individuals every so often in order to survive, and in order for that to happen, the host population has to exceed a certain number, usually called the critical community size. That size depends on the parasite’s persistence and mode of propagation: it can vary over a huge range. CCS is something like a quarter of a million for measles, ~300 for chickenpox, surely smaller than that for Epstein-Barr.
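
The threshold behavior is easy to see in a toy model. Below is a minimal sketch (mine, not from the post): a stochastic SIR simulation in which every death is replaced by a susceptible newborn, so the host population stays fixed at N. All parameter values are illustrative assumptions rather than estimates for measles or chickenpox; the point is only the qualitative pattern that a host-specialist pathogen fades out reliably in small host populations and persists more often as N grows.

import random

def pathogen_persists(N, beta=300.0, gamma=52.0, mu=0.02, years=10, seed=0):
    """Gillespie-style SIR with host turnover: each host dies at rate mu and is
    replaced by a susceptible newborn, so the population stays exactly N.
    Returns True if any infectives remain after `years` years."""
    rng = random.Random(seed)
    I = 10                       # a small seeded outbreak
    S = N - I
    t = 0.0
    while t < years:
        if I == 0:
            return False         # fadeout: the pathogen is gone from this host population
        R = N - S - I
        rates = (
            beta * S * I / N,    # transmission: S -> I
            gamma * I,           # recovery:     I -> R
            mu * I,              # an infective dies, replaced by a susceptible newborn
            mu * R,              # a recovered dies, replaced by a susceptible newborn
        )
        total = sum(rates)
        t += rng.expovariate(total)      # time to the next event
        x = rng.uniform(0.0, total)      # pick which event happened
        if x < rates[0]:
            S -= 1; I += 1
        elif x < rates[0] + rates[1]:
            I -= 1
        elif x < rates[0] + rates[1] + rates[2]:
            I -= 1; S += 1
        else:
            S += 1               # R -= 1 implicitly, since R = N - S - I
    return True

if __name__ == "__main__":
    for N in (2_000, 20_000, 200_000):   # larger N takes noticeably longer to simulate
        runs = 10
        persisted = sum(pathogen_persists(N, seed=s) for s in range(runs))
        print(f"N = {N:>7,}: pathogen persisted in {persisted}/{runs} runs")

In models like this, fadeout typically happens in the deep trough of infectives after the first epidemic wave, when new susceptibles are scarce – which is why the threshold depends so strongly on the parasite's persistence and mode of propagation.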

A brush with extinction – say, from an asteroid strike – might well take a species below the CCS for a number of its pathogens. If those pathogens were limited to that species, they’d go extinct: no more burden. That alone might be enough to generate a rapid recovery from the population bottleneck. Or a single, highly virulent pathogen might cause a population crash that resulted in the extinction of several of that species’s major pathogens – quite possibly including the virulent pathogen itself. It’s a bottleneck in time, rather than one in space as you often see in colonization.

Such positive effects could last a long time – things need not go back to the old normal. The flea-unbitten species might be able to survive and prosper in ecological niches that it couldn’t before. You might see a range expansion. New evolutionary paths could open up. That brush with extinction could be the making of them.

When you add it all up, you begin to wonder if a population crash isn’t just what the doctor ordered. Sure, it wouldn’t be fun to be one of the billions of casualties, but just think how much better off the billions living after the bottleneck will be. Don’t be selfish.
west-hunter  scitariat  ideas  speculation  discussion  parasites-microbiome  spreading  disease  scale  population  density  bio  nature  long-short-run  nihil  equilibrium  death  unintended-consequences  red-queen  tradeoffs  cost-benefit  gedanken 
november 2017 by nhaliday
Austria-Hungary | West Hunter
Diversity is our strength? I’m Hungarian, and I can tell you one thing: in the case of Austria-Hungary, diversity was a weakness.

It was much different from the modern US & Western Europe, where it is indeed a strength. At least, that’s what politicians and cultural Marxist intellectuals over there are saying, and I’m sure I can trust them, because if you can’t trust politicians and cultural Marxist intellectuals, who can you trust at all?
--
It’s a hodgepodge of languages, like “mi” is “we” in Hungarian (and if you say “a mi erőnk”, it’d mean “our strength”, so I guess it stands for “our” here), or “Stärke” is “strength” in German. The first word is some Slavic language, could be Czech or Slovakian.
--
Resilient until a serious war came around with a mass conscripted army. It did better in the 18th century with small professional armies. It’d do better again in the 21st century, with the return of the small professional armies.
west-hunter  scitariat  discussion  troll  europe  eastern-europe  diversity  putnam-like  conquest-empire  aphorism  lol  history  mostly-modern  pre-ww2  world-war  cohesion  early-modern  cost-benefit  war  military  context  germanic  meta:war  defense 
november 2017 by nhaliday
The Same Old Story | West Hunter
People often reinterpret past events, recast them in terms of some contemporary ideology. When historians talk about the Monophysites in Byzantine times, they often suggest that those struggles were a mask for a kind of proto-nationalism. Maybe they were: and maybe nobody involved was thinking anything remotely like that. The Communists tried to come up with Marxist interpretations of ancient history, which led them to spend way too much time talking about Mazdakites in Sassanian Persia and the Zealots of Thessalonika. And Spartacus: but at least Spartacus was cool.

Then there are feminist versions of history. Let us never speak of them again.

Generally, this is all crap. But we could at least hope for something new along these lines: bullshit perhaps, but at least fresh bullshit. Obviously the reality underlying both the Punic Wars and the Crusades is the ancient struggle between EEF and ANE.
west-hunter  scitariat  discussion  rant  troll  letters  academia  politics  ideology  biases  is-ought  history  iron-age  medieval  mediterranean  the-classics  MENA  class  class-warfare  polanyi-marx  communism  gender  europe  war  genetics  genomics  sapiens  pop-structure 
november 2017 by nhaliday
The weirdest people in the world?
Abstract: Behavioral scientists routinely publish broad claims about human psychology and behavior in the world’s top journals based on samples drawn entirely from Western, Educated, Industrialized, Rich, and Democratic (WEIRD) societies. Researchers – often implicitly – assume that either there is little variation across human populations, or that these “standard subjects” are as representative of the species as any other population. Are these assumptions justified? Here, our review of the comparative database from across the behavioral sciences suggests both that there is substantial variability in experimental results across populations and that WEIRD subjects are particularly unusual compared with the rest of the species – frequent outliers. The domains reviewed include visual perception, fairness, cooperation, spatial reasoning, categorization and inferential induction, moral reasoning, reasoning styles, self-concepts and related motivations, and the heritability of IQ. The findings suggest that members of WEIRD societies, including young children, are among the least representative populations one could find for generalizing about humans. Many of these findings involve domains that are associated with fundamental aspects of psychology, motivation, and behavior – hence, there are no obvious a priori grounds for claiming that a particular behavioral phenomenon is universal based on sampling from a single subpopulation. Overall, these empirical patterns suggest that we need to be less cavalier in addressing questions of human nature on the basis of data drawn from this particularly thin, and rather unusual, slice of humanity. We close by proposing ways to structurally re-organize the behavioral sciences to best tackle these challenges.

https://twitter.com/JoHenrich/status/1143322655178801152
https://archive.is/D2QZ5
When I discuss my concern that psychologists and behavioral economists rely on a thin and peculiar slice of humanity in order to understand HUMAN psychology, they often reply with the strong intuition that they (but perhaps not others) are studying “basic processes,” etc.
To assess how difficult it is to identify these “basic processes” without both evolutionary theory and serious cross-cultural research, let’s put aside psychology and focus on physiology and anatomy. Surely, those are “basic.” #WEIRDPeopleProblem
...
pdf  study  microfoundations  anthropology  cultural-dynamics  sociology  psychology  social-psych  cog-psych  iq  biodet  behavioral-gen  variance-components  psychometrics  psych-architecture  visuo  spatial  morality  individualism-collectivism  n-factor  justice  egalitarianism-hierarchy  cooperate-defect  outliers  homo-hetero  evopsych  generalization  henrich  europe  the-great-west-whale  occident  organizing  🌞  universalism-particularism  applicability-prereqs  hari-seldon  extrema  comparison  GT-101  ecology  EGT  reinforcement  anglo  language  gavisti  heavy-industry  marginal  absolute-relative  reason  stylized-facts  nature  systematic-ad-hoc  analytical-holistic  science  modernity  behavioral-econ  s:*  illusion  cool  hmm  coordination  self-interest  social-norms  population  density  humanity  sapiens  farmers-and-foragers  free-riding  anglosphere  cost-benefit  china  asia  sinosphere  MENA  world  developing-world  neurons  theory-of-mind  network-structure  nordic  orient  signum  biases  usa  optimism  hypocrisy  humility  within-without  volo-avolo  domes 
november 2017 by nhaliday
Fish on Friday | West Hunter
There are parts of Europe, Switzerland and Bavaria for example, that are seriously iodine deficient. This used to be a problem. I wonder if fish on Friday ameliorated it: a three-ounce serving of cod provides your body with 99 micrograms of iodine, or 66% of the recommended amount per day.
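
A quick arithmetic check (assuming the standard adult reference intake of about 150 µg/day, which the post does not state):

\[
\frac{99\ \mu\mathrm{g}}{150\ \mu\mathrm{g}} \approx 0.66
\]

so the quoted 66% figure is consistent with the usual 150 µg/day adult recommendation.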

Thinking further, it wasn’t just Fridays: there were ~130 days a year when the Catholic Church banned flesh.

Gwern on modern iodine-deficiency: https://westhunt.wordpress.com/2017/10/28/fish-on-friday/#comment-97137
population surveys indicate lots of people are iodine-insufficient even in the US or UK, where the problem should’ve been permanently solved a century ago
west-hunter  scitariat  discussion  ideas  speculation  sapiens  europe  the-great-west-whale  history  medieval  germanic  religion  christianity  protestant-catholic  institutions  food  diet  nutrition  metabolic  iq  neuro  unintended-consequences  multi  gwern  poast  hmm  planning  parenting  developmental  public-health  gotchas  biodet  deep-materialism  health  embodied-street-fighting  ritual  roots  explanans 
october 2017 by nhaliday
Measles and immunological amnesia | West Hunter
A new paper in Science, by Michael Mina et al., strongly suggests that measles messes up your immunological defenses for two or three years. This is the likely explanation for the fact that measles inoculation causes much greater decreases in child morbidity and mortality than you’d expect from preventing the deaths directly due to measles infection. The thought is that measles whacks the cells that carry immunological memory, leaving the kid ripe for reinfections. I think there can be a similar effect with anti-cancer chemotherapy.

If correct, this means that measles is much nastier than previously thought. It must have played a significant role in the demographic collapse of long-isolated peoples (such as the Amerindians). Its advent may have played a role in the population decrease associated with the decline of the Classical world. Even though it is relatively new (having split off from rinderpest a couple of thousand years ago), strong selection for resistance may have favored some fairly expensive genetic defenses (something like sickle-cell) in Eurasian populations.

We already know of quite a few complex side effects of infectious disease, such as the different kind of immunosuppression we see with AIDS, Burkitt’s lymphoma hitting kids with severe Epstein-Barr infections followed by malaria, acute dengue fever that requires a previous infection by a different strain of dengue, etc.: there may well be other important interactions and side effects, news of which has not yet come to Harvard.
west-hunter  scitariat  discussion  ideas  commentary  study  summary  org:nat  epidemiology  bio  health  immune  disease  parasites-microbiome  unintended-consequences  cancer  medicine  long-short-run  usa  farmers-and-foragers  age-of-discovery  speculation  nihil  history  iron-age  mediterranean  the-classics  demographics  population  gibbon  rot  harvard  elite  low-hanging  info-dynamics  being-right  heterodox 
october 2017 by nhaliday