
nhaliday : cost-benefit   408

Saturday assorted links - Marginal REVOLUTION
2. Who are the workers most needing support and how can we get cash to them?

3. Recommended occupational licensing reforms. And Certificate of Need and nurse practitioner laws. And the case for relaxing pharmacy regulations.

4. Why did U.S. testing get so held up? (quite good)

5. Covid-19 forecasting site from The Future of Humanity Institute, Oxford.

6. How slim are restaurant margins?

...

8. On why the German death rate is lower.

9. New Haven asks for coronavirus housing help, Yale says no.

10. Are Italian deaths being undercounted? And it seems Spanish deaths are being undercounted (in Spanish).

11. Japan now admits the situation there is much worse than had been recognized.
econotariat  marginal-rev  links  current-events  wuhan-coronavirus  regulation  usa  checking  medicine  public-health  error  ratty  bostrom  org:ngo  miri-cfar  prediction  economics  money  cost-benefit  data  objektbuch  food  business  europe  germanic  death  pro-rata  mediterranean  map-territory  japan  asia  twitter  social  commentary  quotes  statesmen  northeast  higher-ed 
8 days ago by nhaliday
Michael Akilian: Worker-in-the-loop Retrospective
Over the last ten years, many companies have created human-in-the-loop services that combine a mix of humans and algorithms. Now that some time has passed, we can tease out some patterns from their collective successes and failures. As someone who started a company in this space, I hope this retrospective can help prospective founders, investors, and companies navigating it save time and fund more impactful projects.

A service is considered human-in-the-loop if it organizes its workflows with the intent to introduce models or heuristics that learn from the work of the humans executing the workflows. In this post, I will make reference to two common forms of human-in-the-loop:

User-in-the-loop (UITL): The end-user is interacting with suggestions from a software heuristic/ML system.
Worker-in-the-loop (WITL): A worker is paid to monitor suggestions from a software heuristic/ML system developed by the same company that pays the worker, but for the ultimate benefit of an end-user.
techtariat  reflection  business  tech  postmortem  automation  startups  hard-tech  ai  machine-learning  human-ml  cost-benefit  analysis  thinking  business-models  things  dimensionality  exploratory  markets  labor  economics  tech-infrastructure  gig-econ 
12 weeks ago by nhaliday
Depreciation Calculator - Insurance Claims Tools & Databases - Claims Pages
depreciation calculator for different product categories/commodities (tbh I would prefer just a table of rates)
tools  calculator  personal-finance  money  increase-decrease  flux-stasis  cost-benefit  economics  time  correlation  manifolds  data  database  objektbuch  quality 
january 2020 by nhaliday
The role of exercise capacity in the health and longevity of centenarians
Life-long spontaneous exercise does not prolong lifespan but improves health span in mice: https://www.ncbi.nlm.nih.gov/pubmed/24472376
Life-long spontaneous exercise did not prolong longevity but prevented several signs of frailty (that is, decrease in strength, endurance and motor coordination).
Survival of the fittest: VO2max, a key predictor of longevity?: https://www.bioscience.org/2018/v23/af/4657/2.htm
7. CONCLUSION

As yet, it is not possible to extend the genetically fixed lifespan with regular exercise training, but the chance to reach the later end of natural lifespan increases with higher physical fitness in midlife, where targeted preventative efforts may be launched. CRF (VO2max) is the strongest independent predictor of future life expectancy in both healthy and cardiorespiratory-diseased individuals. In addition, muscle stimulation is essential in order to prevent muscle wasting, disability, and increased hospitalization in old age, all crucial ways to avoid long-term care, thereby promoting quality of life in aging humans (Figure 2). Thus, extending life is not as important as giving those years more life. This is where physical fitness plays an important role.
study  survey  health  fitness  fitsci  longevity  aging  embodied  death  multi  model-organism  endurance  metabolic  metrics  epidemiology  public-health  regularizer  cost-benefit 
january 2020 by nhaliday
The combination of cardiorespiratory fitness and muscle strength, and mortality risk | SpringerLink
Compared with the lowest CRF category, the hazard ratio (HR) for all-cause mortality was 0.76 [95% confidence interval (CI) 0.64–0.89] and 0.65 (95% CI 0.55–0.78) for the middle and highest CRF categories, respectively, after adjustment for confounders and GS. The highest GS category had an HR of 0.79 (95% CI 0.66–0.95) for all-cause mortality compared with the lowest, after adjustment for confounders and CRF. Similar results were found for cardiovascular and cancer mortality. The HRs for the combination of highest CRF and GS were 0.53 (95% CI 0.39–0.72) for all-cause mortality and 0.31 (95% CI 0.14–0.67) for cardiovascular mortality, compared with the reference category of lowest CRF and GS: no significant association for cancer mortality (HR 0.70; 95% CI 0.48–1.02). CRF and GS are both independent predictors of mortality. Improving both CRF and muscle strength, as opposed to either of the two alone, may be the most effective behavioral strategy to reduce all-cause and cardiovascular mortality risk.

Cardiorespiratory fitness, muscular strength, and obesity in adolescence and later chronic disability due to cardiovascular disease: a cohort study of 1 million men: https://academic.oup.com/eurheartj/advance-article/doi/10.1093/eurheartj/ehz774/5618700
This population-based cohort study included 1 078 685 male adolescents (16–19 years) from the Swedish military conscription register from 1972 to 1994. Cardiorespiratory fitness (bicycle ergometer test), muscular strength (knee extension strength), and BMI were measured during the conscription examination. Information about disability pension due to CVD was retrieved from the Social Insurance Agency during a mean follow-up of 28.4 years. Cardiorespiratory fitness was strongly and inversely associated with later risk of chronic CVD disability for all investigated causes. The association was particularly strong for ischaemic heart diseases (hazard ratio 0.11, 95% confidence interval 0.05–0.29 for highest vs. lowest fitness-quintiles). Furthermore, overweight/obesity were associated with CVD disability for all investigated causes. Conversely, associations of muscular strength with CVD disability were generally weak.

Association between V̇O2max, handgrip strength, and musculoskeletal pain among construction and health care workers: https://bmcpublichealth.biomedcentral.com/articles/10.1186/s12889-017-4173-3
study  epidemiology  public-health  health  fitness  fitsci  longevity  aging  embodied  endurance  metabolic  evidence-based  weightlifting  multi  metrics  solid-study  nordic  longitudinal  cardio  cost-benefit 
january 2020 by nhaliday
Do Cardio and Strength Training Work Against Each Other? | ISSA
- If the client’s primary goal is to improve power (e.g. improving sprint speed, vertical jumping, Olympic Lifting, etc.), long duration/low intensity aerobic training should be kept to a minimum.

- If the client’s primary goal is to improve strength and/or hypertrophy and he/she wishes to train concurrently with aerobic training, it is best to keep the duration of aerobic training to less than 30 minutes and the frequency of aerobic training to fewer than 3 days per week. Furthermore, a low-impact mode of aerobic training such as cycling or rowing appears to be a more appropriate option than running.

...

- If the client’s primary goal is to improve aerobic performance, concurrent training is advisable as resistance training has not been shown to significantly interfere with aerobic capacity gains.

https://sci-hub.tw/10.1519/JSC.0b013e31823a3e2d
org:health  health  fitness  fitsci  get-fit  evidence-based  study  summary  commentary  tradeoffs  endurance  weightlifting  strategy  running  multi  pdf  piracy  meta-analysis  intervention  metabolic  embodied  cycling  metrics  cost-benefit 
january 2020 by nhaliday
Climate Change Worst-Case Scenario Now Looks Unrealistic
As best as we can understand and project the medium- and long-term trajectories of energy use and emissions, the window of possible climate futures is probably narrowing, with both the most optimistic scenarios and the most pessimistic ones seeming, now, less likely.
news  org:mag  org:local  environment  climate-change  trends  current-events  prediction  cost-benefit 
december 2019 by nhaliday
You’re Probably Asking the Wrong People For Career Advice | Hunter Walk
Here’s what I believe: when considering a specific career path decision or evaluating an offer with a particular company, I’ve found people tend to concentrate mostly on the opinions and inputs of two groups: their friends in similar jobs and the most “successful” people they know within the industry. Seems like a reasonable strategy, right? Depends.

...

Ok, so who do advice seekers usually *undervalue*? (A) People who know you very deeply regardless of expertise in your specific professional work and (B) individuals who have direct experience with the company, role and people you’re considering.
techtariat  career  advice  communication  strategy  working-stiff  tech  judgement  decision-making  theory-of-mind  expert-experience  track-record  arbitrage  cost-benefit  contrarianism  rhetoric 
december 2019 by nhaliday
Is the bounty system effective? - Meta Stack Exchange
https://math.meta.stackexchange.com/questions/20155/how-effective-are-bounties
could do some kinda econometric analysis using the data explorer to determine this once and for all: https://pinboard.in/u:nhaliday/b:c0cd449b9e69
maybe some kinda RDD in time, or difference-in-differences?
I don't think answer quality/quantity by time meets the common trend assumption for DD, tho... Questions that eventually receive a bounty are prob higher quality in the first place, and higher-quality questions accumulate more and better answers regardless. Hmm.
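rough sketch of the DD regression I have in mind, assuming a hypothetical panel pulled from the data explorer (one row per question per period; all column names invented):

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("bounty_panel.csv")  # invented export
# treated = question eventually got a bounty; post = period after bounty posted
m = smf.ols("answers ~ treated + post + treated:post", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["question_id"]})
print(m.params["treated:post"])  # DD estimate of the bounty effect
# only meaningful if never-bountied questions trace the counterfactual trend --
# exactly the common-trend worry above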
q-n-a  stackex  forum  community  info-foraging  efficiency  cost-benefit  data  analysis  incentives  attention  quality  ubiquity  supply-demand  multi  math  causation  endogenous-exogenous  intervention  branches  control  tactics  sleuthin  hmm  idk  todo  data-science  overflow  dbs  regression  shift  methodology  econometrics 
november 2019 by nhaliday
Ask HN: Getting into NLP in 2018? | Hacker News
syllogism (spaCy author):
I think it's probably a bad strategy to try to be the "NLP guy" to potential employers. You'd be much better off being a software engineer on a project with people with ML or NLP expertise.

NLP projects fail a lot. If you line up a job as a company's first NLP person, you'll probably be setting yourself up for failure. You'll get handed an idea that can't work, you won't know enough about how to push back to change it into something that might, etc. After the project fails, you might get a chance to fail at a second one, but maybe not a third. This isn't a great way to move into any new field.

I think a cunning plan would be to angle to be the person who "productionises" models.
...
--
...

Basically, don't just work on having more powerful solutions. Make sure you've tried hard to have easier problems as well --- that part tends to be higher leverage.

https://news.ycombinator.com/item?id=14008752
https://news.ycombinator.com/item?id=12916498
https://algorithmia.com/blog/introduction-natural-language-processing-nlp
hn  q-n-a  discussion  tech  programming  machine-learning  nlp  strategy  career  planning  human-capital  init  advice  books  recommendations  course  unit  links  automation  project  examples  applications  multi  mooc  lectures  video  data-science  org:com  roadmap  summary  error  applicability-prereqs  ends-means  telos-atelos  cost-benefit 
november 2019 by nhaliday
REST is the new SOAP | Hacker News
hn  commentary  techtariat  org:ngo  programming  engineering  web  client-server  networking  rant  rhetoric  contrarianism  idk  org:med  best-practices  working-stiff  api  models  protocol-metadata  internet  state  structure  chart  multi  q-n-a  discussion  expert-experience  track-record  reflection  cost-benefit  design  system-design  comparison  code-organizing  flux-stasis  interface-compatibility  trends  gotchas  stackex  state-of-art  distributed  concurrency  abstraction  concept  conceptual-vocab  python  ubiquity  list  top-n  duplication  synchrony  performance  caching 
november 2019 by nhaliday
Ask HN: What's a promising area to work on? | Hacker News
hn  discussion  q-n-a  ideas  impact  trends  the-bones  speedometer  technology  applications  tech  cs  programming  list  top-n  recommendations  lens  machine-learning  deep-learning  security  privacy  crypto  software  hardware  cloud  biotech  CRISPR  bioinformatics  biohacking  blockchain  cryptocurrency  crypto-anarchy  healthcare  graphics  SIGGRAPH  vr  automation  universalism-particularism  expert-experience  reddit  social  arbitrage  supply-demand  ubiquity  cost-benefit  compensation  chart  career  planning  strategy  long-term  advice  sub-super  commentary  rhetoric  org:com  techtariat  human-capital  prioritizing  tech-infrastructure  working-stiff  data-science 
november 2019 by nhaliday
The Open Steno Project | Hacker News
https://web.archive.org/web/20170315133208/http://www.danieljosephpetersen.com/posts/programming-and-stenography.html
I think at the end of the day, the Plover guys are trying to solve the wrong problem. Stenography is a dying field. I don’t wish anyone to lose their livelihood, but realistically speaking, the job should not exist once speech-to-text technology advances far enough. I’m not claiming that the field will be replaced by it, but I also don’t love the idea of people having to learn such an inane and archaic system.
hn  commentary  keyboard  speed  efficiency  writing  language  maker  homepage  project  multi  techtariat  cost-benefit  critique  expert-experience  programming  backup  contrarianism 
november 2019 by nhaliday
Reasoning From First Principles: The Dumbest Thing Smart People Do
Most middle-class Americans at least act as if:
- Exactly four years of higher education is precisely the right level of training for the overwhelming majority of good careers.
- You should spend most of your waking hours most days of the week for the previous twelve+ years preparing for those four years. In your free time, be sure to do the kinds of things guidance counselors think are impressive; we as a society know that these people are the best arbiters of arete.
- Forty hours per week is exactly how long it takes to be reasonably successful in most jobs.
- On the margin, the cost of paying for money management exceeds the cost of adverse selection from not paying for it.
- You will definitely learn important information about someone’s spousal qualifications in years two through five of dating them.
- Human beings need about 50% more square feet per capita than they did a generation or two ago, and you should probably buy rather than rent it.
- Books are very boring, but TV is interesting.

All of these sound kind of dumb when you write them out. Even if they’re arguably true, you’d expect a good argument. You can be a low-risk contrarian by just picking a handful of these, articulating an alternative — either a way to get 80% of the benefit at 20% of the cost, or a way to pay a higher cost to get massively more benefits — and then living it.[1]
techtariat  econotariat  unaffiliated  wonkish  org:med  thinking  skeleton  being-right  paying-rent  rationality  pareto  cost-benefit  arbitrage  spock  epistemic  contrarianism  finance  personal-finance  investing  stories  metameta  advice  metabuch  strategy  education  higher-ed  labor  sex  housing  tv  meta:reading  axioms  truth  worse-is-better/the-right-thing  human-bean 
october 2019 by nhaliday
Scarred Consumption
Abstract: We show that prior lifetime experiences can “scar” consumers. Consumers who have lived through times of high unemployment exhibit persistent pessimism about their future financial situation and spend significantly less, controlling for the standard life-cycle consumption factors, even though their actual future income is uncorrelated with past experiences. Due to their experience-induced frugality, scarred consumers build up more wealth.

The Career Effects Of Graduating In A Recession: http://www.econ.ucla.edu/tvwachter/papers/grad_recession_vonwachter_oreopoulos_heisz_final.pdf
https://www.nber.org/digest/nov06/w12159.html
https://siepr.stanford.edu/research/publications/recession-graduates-effects-unlucky

Do youths graduating in a recession incur permanent losses?: https://pdfs.semanticscholar.org/e30a/190bd49364623c76f4e4b86e079e86acbcc6.pdf
pdf  study  economics  longitudinal  branches  long-short-run  labor  pessimism  time-preference  investing  wealth  cycles  expert-experience  behavioral-econ  microfoundations  cost-benefit  regularizer  increase-decrease  multi  crosstab  nonlinearity  mediterranean  usa  japan  asia  comparison  culture  n-factor  individualism-collectivism  markets  matching  flux-stasis  flexibility  rigidity  europe  gallic  germanic  nordic  anglosphere  mobility  education  class  health  death  age-generation  pro-rata  effect-size  data 
october 2019 by nhaliday
Advantages and disadvantages of building a single page web application - Software Engineering Stack Exchange
Advantages
- All data has to be available via some sort of API - this is a big advantage for my use case as I want to have an API to my application anyway. Right now about 60-70% of my calls to get/update data are done through a REST API. Doing a single page application will allow me to better test my REST API since the application itself will use it. It also means that as the application grows, the API itself will grow since that is what the application uses; no need to maintain the API as an add-on to the application.
- More responsive application - since all data loaded after the initial page is kept to a minimum and transmitted in a compact format (like JSON), data requests should generally be faster, and the server will do slightly less processing.

Disadvantages
- Duplication of code - for example, model code. I am going to have to create models both on the server side (PHP in this case) and the client side in Javascript.
- Business logic in Javascript - I can't give any concrete examples of why this would be bad, but it just doesn't feel right to me having business logic in Javascript that anyone can read.
- Javascript memory leaks - since the page never reloads, Javascript memory leaks can happen, and I would not even know where to begin to debug them.

--

Disadvantages I often see with Single Page Web Applications:
- Inability to link to a specific part of the site, there's often only 1 entry point.
- Dysfunctional back and forward buttons.
- The use of tabs is limited or non-existent.
(especially mobile:)
- Take very long to load.
- Don't function at all.
- Can't reload a page, a sudden loss of network takes you back to the start of the site.

This answer is outdated. Most single page application frameworks have a way to deal with the issues above – Luis May 27 '14 at 1:41
@Luis while the technology is there, too often it isn't used. – Pieter B Jun 12 '14 at 6:53

https://softwareengineering.stackexchange.com/questions/201838/building-a-web-application-that-is-almost-completely-rendered-by-javascript-whi

https://softwareengineering.stackexchange.com/questions/143194/what-advantages-are-conferred-by-using-server-side-page-rendering
Server-side HTML rendering:
- Fastest browser rendering
- Page caching is possible as a quick-and-dirty performance boost
- For "standard" apps, many UI features are pre-built
- Sometimes considered more stable because components are usually subject to compile-time validation
- Leans on backend expertise
- Sometimes faster to develop*
*When UI requirements fit the framework well.

Client-side HTML rendering:
- Lower bandwidth usage
- Slower initial page render. May not even be noticeable in modern desktop browsers. If you need to support IE6-7, or many mobile browsers (mobile webkit is not bad) you may encounter bottlenecks.
- Building API-first means the client can just as easily be a proprietary app, thin client, another web service, etc.
- Leans on JS expertise
- Sometimes faster to develop**
**When the UI is largely custom, with more interesting interactions. Also, I find coding in the browser with interpreted code noticeably speedier than waiting for compiles and server restarts.

https://softwareengineering.stackexchange.com/questions/237537/progressive-enhancement-vs-single-page-apps

https://stackoverflow.com/questions/21862054/single-page-application-advantages-and-disadvantages
=== ADVANTAGES ===
1. SPA is extremely good for very responsive sites.
2. With SPA we don't need extra queries to the server to download pages.
3. Maybe there are other advantages? I haven't heard of any others.

=== DISADVANTAGES ===
1. Client must enable javascript.
2. Only one entry point to the site.
3. Security.

https://softwareengineering.stackexchange.com/questions/287819/should-you-write-your-back-end-as-an-api
focused on .NET

https://softwareengineering.stackexchange.com/questions/337467/is-it-normal-design-to-completely-decouple-backend-and-frontend-web-applications
A SPA comes with a few issues associated with it. Here are just a few that pop into my mind now:
- it's mostly JavaScript. One error in a section of your application might prevent other sections of the application from working because of that JavaScript error.
- CORS.
- SEO.
- separate front-end application means separate projects, deployment pipelines, extra tooling, etc;
- security is harder to do when all the code is on the client;

- completely interact in the front-end with the user and only load data as needed from the server. So better responsiveness and user experience;
- depending on the application, some processing done on the client means you spare the server those computations.
- have more flexibility in evolving the back-end and front-end (you can do it separately);
- if your back-end is essentially an API, you can have other clients in front of it like native Android/iPhone applications;
- the separation might make it easier for front-end developers to do CSS/HTML without needing to have a server application running on their machine.

Create your own dysfunctional single-page app: https://news.ycombinator.com/item?id=18341993
I think there are three broadly assumed user benefits of single-page apps:
1. Improved user experience.
2. Improved perceived performance.
3. It’s still the web.

5 mistakes to create a dysfunctional single-page app
Mistake 1: Under-estimate long-term development and maintenance costs
Mistake 2: Use the single-page app approach unilaterally
Mistake 3: Under-invest in front end capability
Mistake 4: Use naïve dev practices
Mistake 5: Surf the waves of framework hype

The disadvantages of single page applications: https://news.ycombinator.com/item?id=9879685
You probably don't need a single-page app: https://news.ycombinator.com/item?id=19184496
https://news.ycombinator.com/item?id=20384738
MPA advantages:
- Stateless requests
- The browser knows how to deal with a traditional architecture
- Fewer, more mature tools
- SEO for free

When to go for the single page app:
- Core functionality is real-time (e.g Slack)
- Rich UI interactions are core to the product (e.g Trello)
- Lots of state shared between screens (e.g. Spotify)

Hybrid solutions
...
Github uses this hybrid approach.
...

Ask HN: Is it ok to use traditional server-side rendering these days?: https://news.ycombinator.com/item?id=13212465

https://www.reddit.com/r/webdev/comments/cp9vb8/are_people_still_doing_ssr/
https://www.reddit.com/r/webdev/comments/93n60h/best_javascript_modern_approach_to_multi_page/
https://www.reddit.com/r/webdev/comments/aax4k5/do_you_develop_solely_using_spa_these_days/
The SEO issues with SPAs are a persistent concern you hear about a lot, yet nobody ever quantifies them. That is because search engines keep the operation of their crawler bots and indexing secret. I have read into it some, and it seems that problem used to exist, somewhat, but is more or less gone now. Bots can deal with SPAs fine.
--
I try to avoid building a SPA nowadays if possible. Not because of SEO (there are now server-side solutions to help with that), but because a SPA increases the complexity of the code base by an order of magnitude. State management with Redux... Async this and that... URL routing... And don't forget to manage page history.

How about just render pages with templates and be done?

If I need a highly dynamic UI for a particular feature, then I'd probably build an embeddable JS widget for it.
q-n-a  stackex  programming  engineering  tradeoffs  system-design  design  web  frontend  javascript  cost-benefit  analysis  security  state  performance  traces  measurement  intricacy  code-organizing  applicability-prereqs  multi  comparison  smoothness  shift  critique  techtariat  chart  ui  coupling-cohesion  interface-compatibility  hn  commentary  best-practices  discussion  trends  client-server  api  composition-decomposition  cycles  frameworks  ecosystem  degrees-of-freedom  dotnet  working-stiff  reddit  social  project-management 
october 2019 by nhaliday
Ask HN: Favorite note-taking software? | Hacker News
Ask HN: What is your ideal note-taking software and/or hardware?: https://news.ycombinator.com/item?id=13221158

my wishlist as of 2019:
- web + desktop macOS + mobile iOS (at least viewing on the last but ideally also editing)
- sync across all those
- open-source data format that's easy to manipulate for scripting purposes
- flexible organization: mostly tree hierarchical (subsuming linear/unorganized) but with the option for directed (acyclic) graph (possibly a second layer of structure/linking)
- can store plain text, LaTeX, diagrams, sketches, and raster/vector images (video prob not necessary except as links to elsewhere)
- full-text search
- somehow digest/import data from Pinboard, Workflowy, Papers 3/Bookends, Skim, and iBooks/e-readers (esp. Kobo), ideally absorbing most of their functionality
- so, eg, track notes/annotations side-by-side w/ original PDF/DjVu/ePub documents (to replace Papers3/Bookends/Skim), and maybe web pages too (to replace Pinboard)
- OCR of handwritten notes (how to handle equations/diagrams?)
- various forms of NLP analysis of everything (topic models, clustering, etc)
- maybe version control (less important than export)

candidates?:
- Evernote prob ruled out due to heavy use of proprietary data formats (unless I can find some way to export with tolerably clean output)
- Workflowy/Dynalist are good but only cover a subset of functionality I want
- org-mode doesn't interact w/ mobile well (and I haven't evaluated it in detail otherwise)
- TiddlyWiki/Zim are in the running, but not sure about mobile
- idk about vimwiki but I'm not that wedded to vim and it seems less widely used than org-mode/TiddlyWiki/Zim so prob pass on that
- Quiver/Joplin/Inkdrop look similar and cover a lot of bases, TODO: evaluate more
- Trilium looks especially promising, tho mobile is read-only, and for the macOS desktop app look at this: https://github.com/zadam/trilium/issues/511
- RocketBook is interesting scanning/OCR solution but prob not sufficient due to proprietary data format
- TODO: many more candidates, eg, TreeSheets, Gingko, OneNote (macOS?...), Notion (proprietary data format...), Zotero, Nodebook (https://nodebook.io/landing), Polar (https://getpolarized.io), Roam (looks very promising)

Ask HN: What do you use for you personal note taking activity?: https://news.ycombinator.com/item?id=15736102

Ask HN: What are your note-taking techniques?: https://news.ycombinator.com/item?id=9976751

Ask HN: How do you take notes (useful note-taking strategies)?: https://news.ycombinator.com/item?id=13064215

Ask HN: How to get better at taking notes?: https://news.ycombinator.com/item?id=21419478

Ask HN: How do you keep your notes organized?: https://news.ycombinator.com/item?id=21810400

Ask HN: How did you build up your personal knowledge base?: https://news.ycombinator.com/item?id=21332957
nice comment from math guy on structure and difference between math and CS: https://news.ycombinator.com/item?id=21338628
useful comment collating related discussions: https://news.ycombinator.com/item?id=21333383
highlights:
Designing a Personal Knowledge base: https://news.ycombinator.com/item?id=8270759
Ask HN: How to organize personal knowledge?: https://news.ycombinator.com/item?id=17892731
Do you use a personal 'knowledge base'?: https://news.ycombinator.com/item?id=21108527
Ask HN: How do you share/organize knowledge at work and life?: https://news.ycombinator.com/item?id=21310030
Managing my personal knowledge base: https://news.ycombinator.com/item?id=22000791
The sad state of personal data and infrastructure: https://beepb00p.xyz/sad-infra.html
Building personal search infrastructure for your knowledge and code: https://beepb00p.xyz/pkm-search.html

How to annotate literally everything: https://beepb00p.xyz/annotating.html
Ask HN: How do you organize document digests / personal knowledge?: https://news.ycombinator.com/item?id=21642289
Ask HN: Good solution for storing notes/excerpts from books?: https://news.ycombinator.com/item?id=21920143
Ask HN: What's your cross-platform pdf / ePub reading workflow?: https://news.ycombinator.com/item?id=22170395
some related stuff in the reddit links at the bottom of this pin

https://beepb00p.xyz/grasp.html
How to capture information from your browser and stay sane

Ask HN: Best solutions for keeping a personal log?: https://news.ycombinator.com/item?id=21906650

other stuff:
plain text: https://news.ycombinator.com/item?id=21685660

https://www.getdnote.com/blog/how-i-built-personal-knowledge-base-for-myself/
Tiago Forte: https://www.buildingasecondbrain.com

hn search: https://hn.algolia.com/?query=notetaking&type=story

Slant comparison commentary: https://news.ycombinator.com/item?id=7011281

good comparison of options here in comments here (and Trilium itself looks good): https://news.ycombinator.com/item?id=18840990

https://en.wikipedia.org/wiki/Comparison_of_note-taking_software

stuff from Andy Matuschak and Michael Nielsen on general note-taking:
https://twitter.com/andy_matuschak/status/1202663202997170176
https://archive.is/1i9ep
Software interfaces undervalue peripheral vision! (a thread)
https://twitter.com/andy_matuschak/status/1199378287555829760
https://archive.is/J06UB
This morning I implemented PageRank to sort backlinks in my prototype note system. Mixed results!
https://twitter.com/andy_matuschak/status/1211487900505792512
https://archive.is/BOiCG
https://archive.is/4zB37
One way to dream up post-book media to make reading more effective and meaningful is to systematize "expert" practices (e.g. How to Read a Book), so more people can do them, more reliably and more cheaply. But… the most erudite people I know don't actually do those things!

the memex essay and comments from various people including Andy on it: https://pinboard.in/u:nhaliday/b:1cddf69c0b31

some more stuff specific to Roam below, and cf "Why books don't work": https://pinboard.in/u:nhaliday/b:b4d4461f6378

wikis:
https://www.slant.co/versus/5116/8768/~tiddlywiki_vs_zim
https://www.wikimatrix.org/compare/tiddlywiki+zim
http://tiddlymap.org/
https://www.zim-wiki.org/manual/Plugins/BackLinks_Pane.html
https://zim-wiki.org/manual/Plugins/Link_Map.html

apps:
Roam: https://news.ycombinator.com/item?id=21440289
https://www.reddit.com/r/RoamResearch/
https://twitter.com/hashtag/roamcult
https://twitter.com/search?q=RoamResearch%20fortelabs
https://twitter.com/search?q=from%3AQiaochuYuan%20RoamResearch&src=typd
https://twitter.com/vgr/status/1199391391803043840
https://archive.is/TJPQN
https://archive.is/CrNwZ
https://www.nateliason.com/blog/roam
https://twitter.com/andy_matuschak/status/1190102757430063106
https://archive.is/To30Q
https://archive.is/UrI1x
https://archive.is/Ww22V
Knowledge systems which display contextual backlinks to a node open up an interesting new behavior. You can bootstrap a new node extensionally (rather than intensionally) by simply linking to it from many other nodes—even before it has any content.
https://twitter.com/michael_nielsen/status/1220197017340612608
Curious: what are the most striking public @RoamResearch pages that you know? I'd like to see examples of people using it for interesting purposes, or in interesting ways.
https://acesounderglass.com/2019/10/24/epistemic-spot-check-the-fate-of-rome-round-2/
https://twitter.com/andy_matuschak/status/1206011493495513089
https://archive.is/xvaMh
If I weren't doing my own research on questions in knowledge systems (which necessitates tinkering with my own), and if I weren't allergic to doing serious work in webapps, I'd likely use Roam instead!
https://talk.dynalist.io/t/roam-research-new-web-based-outliner-that-supports-transclusion-wiki-features-thoughts/5911/16
http://forum.eastgate.com/t/roam-research-interesting-approach-to-note-taking/2713/10
interesting app: http://www.eastgate.com/Tinderbox/
https://www.theatlantic.com/notes/2016/09/labor-day-software-update-tinderbox-scrivener/498443/

intriguing but probably not appropriate for my needs: https://www.sophya.ai/

Inkdrop: https://news.ycombinator.com/item?id=20103589

Joplin: https://news.ycombinator.com/item?id=15815040
https://news.ycombinator.com/item?id=21555238

MindForgr: https://news.ycombinator.com/item?id=22088175
one comment links to this, mostly on Notion: https://tkainrad.dev/posts/managing-my-personal-knowledge-base/

https://wreeto.com/

Leo Editor (combines tree outlining w/ literate programming/scripting, I think?): https://news.ycombinator.com/item?id=17769892

Frame: https://news.ycombinator.com/item?id=18760079

https://www.reddit.com/r/TheMotte/comments/cb18sy/anyone_use_a_personal_wiki_software_to_catalog/
https://archive.is/xViTY
Notion: https://news.ycombinator.com/item?id=18904648
https://coda.io/welcome
https://news.ycombinator.com/item?id=15543181

accounting: https://news.ycombinator.com/item?id=19833881
Coda mentioned

https://www.reddit.com/r/slatestarcodex/comments/ap437v/modified_cornell_method_the_optimal_notetaking/
https://archive.is/e9oHu
https://www.reddit.com/r/slatestarcodex/comments/bt8a1r/im_about_to_start_a_one_month_journaling_test/
https://www.reddit.com/r/slatestarcodex/comments/9cot3m/question_how_do_you_guys_learn_things/
https://archive.is/HUH8V
https://www.reddit.com/r/slatestarcodex/comments/d7bvcp/how_to_read_a_book_for_understanding/
https://archive.is/VL2mi

Anki:
https://www.reddit.com/r/Anki/comments/as8i4t/use_anki_for_technical_books/
https://www.freecodecamp.org/news/how-anki-saved-my-engineering-career-293a90f70a73/
https://www.reddit.com/r/slatestarcodex/comments/ch24q9/anki_is_it_inferior_to_the_3x5_index_card_an/
https://archive.is/OaGc5
maybe not the best source for a review/advice

interesting comment(s) about tree outliners and spreadsheets: https://news.ycombinator.com/item?id=21170434
https://lightsheets.app/

tablet:
https://www.inkandswitch.com/muse-studio-for-ideas.html
https://www.inkandswitch.com/capstone-manuscript.html
https://news.ycombinator.com/item?id=20255457
hn  discussion  recommendations  software  tools  desktop  app  notetaking  exocortex  wkfly  wiki  productivity  multi  comparison  crosstab  properties  applicability-prereqs  nlp  info-foraging  chart  webapp  reference  q-n-a  retention  workflow  reddit  social  ratty  ssc  learning  studying  commentary  structure  thinking  network-structure  things  collaboration  ocr  trees  graphs  LaTeX  search  todo  project  money-for-time  synchrony  pinboard  state  duplication  worrydream  simplification-normalization  links  minimalism  design  neurons  ai-control  openai  miri-cfar  parsimony  intricacy  meta:reading  examples  prepping  new-religion  deep-materialism  techtariat  review  critique  mobile  integration-extension  interface-compatibility  api  twitter  backup  vgr  postrat  personal-finance  pragmatic  stay-organized  project-management  news  org:mag  epistemic  steel-man  explore-exploit  correlation  cost-benefit  convexity-curvature  michael-nielsen  hci  ux  oly  skunkworks  europe  germanic 
october 2019 by nhaliday
The Future of Mathematics? [video] | Hacker News
https://news.ycombinator.com/item?id=20909404
Kevin Buzzard (the Lean guy)

- general reflection on proof assistants/theorem provers
- Thomas Hales's Formal Abstracts project, etc
- thinks that, of the available theorem provers, Lean is "[the only one currently available that may be capable of formalizing all of mathematics eventually]" (goes into more detail right at the end, eg, quotient types)
hn  commentary  discussion  video  talks  presentation  math  formal-methods  expert-experience  msr  frontier  state-of-art  proofs  rigor  education  higher-ed  optimism  prediction  lens  search  meta:research  speculation  exocortex  skunkworks  automation  research  math.NT  big-surf  software  parsimony  cost-benefit  intricacy  correctness  programming  pls  python  functional  haskell  heavyweights  research-program  review  reflection  multi  pdf  slides  oly  experiment  span-cover  git  vcs  teaching  impetus  academia  composition-decomposition  coupling-cohesion  database  trust  types  plt  lifts-projections  induction  critique  beauty  truth  elegance  aesthetics 
october 2019 by nhaliday
Why You Should Learn Cultures, Not Languages | LinkedIn
Given the nature of my work, I’m constantly asked what languages people should invest their energies learning. If you do have a knack for languages, and lots of time on your hands, study Chinese. It’s a business language of the future, and China remains the most politically disruptive force in the world order today. That’s where the smart money is.

But looking back, I would have spent less time on learning a language. Seriously. Though I studied Russian for many years, my Russian skills have atrophied over time. What remains is the insight I gathered into the mentality, familiarizing myself with how other people in the world live by interacting with them directly. That’s never gone away. It gave me the type of perspective that remains critical to my work to this day.

A lot has changed since those days. The rise of technology plays a big role in the advice I’m handing out here. Technology will soon make language barriers obsolete, probably in the next five to ten years. Google Translate has already removed most of the barriers of the written word, though with occasionally hilarious results, and new generations of smartphone apps will do the same for verbal communication in short order. Of course, it’s important to show people that you care enough about their country to learn their language.
journos-pundits  world  developing-world  language  foreign-lang  cost-benefit  prioritizing  culture  cultural-dynamics  alien-character  china  asia  travel  tip-of-tongue  human-capital  strategy 
october 2019 by nhaliday
Measuring actual learning versus feeling of learning in response to being actively engaged in the classroom | PNAS
This article addresses the long-standing question of why students and faculty remain resistant to active learning. Comparing passive lectures with active learning using a randomized experimental approach and identical course materials, we find that students in the active classroom learn more, but they feel like they learn less. We show that this negative correlation is caused in part by the increased cognitive effort required during active learning.

https://news.ycombinator.com/item?id=21164005
study  org:nat  psychology  cog-psych  education  learning  studying  teaching  productivity  higher-ed  cost-benefit  aversion  🦉  growth  stamina  multi  hn  commentary  sentiment  thinking  neurons  wire-guided  emotion  subjective-objective  self-report  objective-measure  comparison 
october 2019 by nhaliday
The Effect of High-Tech Clusters on the Productivity of Top Inventors
I use longitudinal data on top inventors based on the universe of US patents 1971 - 2007 to quantify the productivity advantages of Silicon-Valley style clusters and their implications for the overall production of patents in the US. I relate the number of patents produced by an inventor in a year to the size of the local cluster, defined as a city × research field × year. I first study the experience of Rochester NY, whose high-tech cluster declined due to the demise of its main employer, Kodak. Due to the growth of digital photography, Kodak employment collapsed after 1996, resulting in a 49.2% decline in the size of the Rochester high-tech cluster. I test whether the change in cluster size affected the productivity of inventors outside Kodak and the photography sector. I find that between 1996 and 2007 the productivity of non-Kodak inventors in Rochester declined by 20.6% relative to inventors in other cities, conditional on inventor fixed effects. In the second part of the paper, I turn to estimates based on all the data in the sample. I find that when an inventor moves to a larger cluster she experiences significant increases in the number of patents produced and the number of citations received.

...

In a counterfactual scenario where the quality of U.S. inventors is held constant but their geographical location is changed so that all cities have the same number of inventors in each field, inventor productivity would increase in small clusters and decline in large clusters. On net, the overall number of patents produced in the US in a year would be 11.07% smaller.

[ed.: I wonder whether the benefits of less concentration (eg, lower cost of living propping up demographics) are actually smaller than the downsides overall.]
study  economics  growth-econ  innovation  roots  branches  sv  tech  econ-productivity  density  urban-rural  winner-take-all  polarization  top-n  pro-rata  distribution  usa  longitudinal  intellectual-property  northeast  natural-experiment  population  endogenous-exogenous  intervention  counterfactual  cost-benefit 
september 2019 by nhaliday
Mars Direct | West Hunter
Send Mr Bezos. He even looks like a Martian.
--
Throw in Zuckerberg and it’s a deal…
--
We could send twice as many people half-way to Mars.

--

I don’t think that the space station has been worth anything at all.

As for a lunar base, many of the issues are difficult and one (effects of low-gee) is probably impossible to solve.

I don’t think that there are real mysteries about what is needed for a kind-of self-sufficient base – it’s just too hard and there’s not much prospect of a payoff.

That said, there may be other ways of going about this that are more promising.

--

Venus is worth terraforming: no gravity problems. Doable.

--

It’s not impossible that Mars might harbor microbial life – with some luck, life with a different chemical basis. That might be very valuable: there are endless industrial processes that depend upon some kind of fermentation.
Why, without acetone fermentation, there might not be a state of Israel.
--
If we used a reasonable approach, like Orion, I think that people would usefully supplement those robots.

https://westhunt.wordpress.com/2019/01/11/the-great-divorce/
Jeff Bezos isn’t my favorite guy, but he has ability and has built something useful. And an ugly, contested divorce would be harsh and unfair to the children, who have done nothing wrong.

But I don’t care. The thought of tens of billions of dollars being spent on lawyers and PIs offers the possibility of a spectacle that will live forever, far wilder than the antics of Nero or Caligula. It could make Suetonius look like Pilgrim’s Progress.

Have you ever wondered whether tens of thousands of divorce lawyers should be organized into legions or phalanxes? This is our chance to finally find out.
west-hunter  scitariat  commentary  current-events  trump  politics  troll  space  expansionism  frontier  cost-benefit  ideas  speculation  roots  deep-materialism  definite-planning  geoengineering  wild-ideas  gravity  barons  amazon  facebook  sv  tech  government  debate  critique  physics  mechanics  robotics  multi  lol  law  responsibility  drama  beginning-middle-end  direct-indirect 
september 2019 by nhaliday
Overcoming Bias : How Idealists Aid Cheaters
I’ve been reading the book Moral Mazes for the last few months; it is excellent, but also depressing, which is why it takes so long to read. It makes a strong case, through many detailed examples, that in typical business organizations, norms are actually enforced far less than members pretend. The typical level of checking is in fact far too little to effectively enforce common norms, such as against self-dealing, bribery, accounting lies, fair evaluation of employees, and treating similar customers differently. Combining this data with other things I know, I’m convinced that this applies not only in business, but in human behavior more generally.

We often argue about this key parameter of how hard or necessary it is to enforce norms. Cynics tend to say that it is hard and necessary, while idealists tend to say that it is easy and unnecessary. This data suggests that cynics tend more to be right, even as idealists tend to win our social arguments.

One reason idealists tend to win arguments is that they impugn the character and motives of cynics. They suggest that cynics can more easily see opportunities for cheating because cynics in fact intend to and do cheat more, or that cynics are losers who seek to make excuses for their failures, by blaming the cheating of others. Idealists also tend to say what while other groups may have norm enforcement problems, our group is better, which suggests that cynics are disloyal to our group.

Norm enforcement is expensive, but worth it if we have good social norms, that discourage harmful behaviors. Yet if we under-estimate how hard norms are to enforce, we won’t check enough, and cheaters will get away with cheating, canceling much of the benefit of the norm. People who privately know this fact will gain by cheating often, as they know they can get away with it. Conversely, people who trust norm enforcement to work will be cheated on, and lose.

When confronted with data, idealists often argue, successfully, that it is good if people tend to overestimate the effectiveness of norm enforcement, as this will make them obey norms more, to everyone’s benefit. They give this as a reason to teach this overestimate in schools and in our standard public speeches. And so that is what societies tend to do. Which benefits those who, even if they give lip service to this claim in public, are privately selfish enough to know it is a lie, and are willing to cheat on the larger pool of gullible victims that this policy creates.

That is, idealists aid cheaters.
ratty  hanson  books  review  speculation  branches  ideology  cooperate-defect  coordination  alignment  social-norms  cynicism-idealism  unintended-consequences  propaganda  info-dynamics  cost-benefit  realness  truth  summary  contrarianism  things  rhetoric  phalanges 
september 2019 by nhaliday
Two Performance Aesthetics: Never Miss a Frame and Do Almost Nothing - Tristan Hume
I’ve noticed when I think about performance nowadays that I think in terms of two different aesthetics. One aesthetic, which I’ll call Never Miss a Frame, comes from the world of game development and is focused on writing code that has good worst case performance by making good use of the hardware. The other aesthetic, which I’ll call Do Almost Nothing comes from a more academic world and is focused on algorithmically minimizing the work that needs to be done to the extent that there’s barely any work left, paying attention to the performance at all scales.

[ed.: Neither of these exactly matches TCS performance PoV but latter is closer (the focus on diffs is kinda weird).]

...

Never Miss a Frame

In game development the most important performance criterion is that your game doesn’t miss frame deadlines. You have a target frame rate and if you miss the deadline for the screen to draw a new frame your users will notice the jank. This leads to focusing on the worst case scenario and often having fixed maximum limits for various quantities. This property can also be important in areas other than game development, like other graphical applications, real-time audio, safety-critical systems and many embedded systems. A similar dynamic occurs in distributed systems where one server needs to query 100 others and combine the results: you’ll wait for the slowest of the 100 every time, so speeding up some of them doesn’t make the query faster, and queries occasionally taking longer (e.g. because of garbage collection) will impact almost every request!
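[ed.: quick back-of-envelope check of that fan-out claim, with made-up numbers:]

p_one_slow = 0.01  # chance a single backend misses its latency budget
n = 100            # backends queried per request
print(1 - (1 - p_one_slow) ** n)  # ~0.63: a 1-in-100 event per server hits ~63% of fan-out queries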

...

In this kind of domain you’ll often run into situations where in the worst case you can’t avoid processing a huge number of things. This means you need to focus your effort on making the best use of the hardware by writing code at a low level and paying attention to properties like cache size and memory bandwidth.

Projects with inviolable deadlines need to adjust factors other than speed if the code runs too slow. For example a game might decrease the size of a level or use a more efficient but less pretty rendering technique.

Aesthetically: Data should be tightly packed, fixed size, and linear. Transcoding data to and from different formats is wasteful. Strings and their variable lengths and inefficient operations must be avoided. Only use tools that allow you to work at a low level, even if they’re annoying, because that’s the only way you can avoid piles of fixed costs making everything slow. Understand the machine and what your code does to it.

Personally I identify this aesthetic most with Jonathan Blow. He has a very strong personality and I’ve watched enough of his videos that I find imagining “What would Jonathan Blow say?” a good way to tap into this aesthetic. My favourite articles about designs following this aesthetic are on the Our Machinery Blog.

...

Do Almost Nothing

Sometimes, it’s important to be as fast as you can in all cases and not just orient around one deadline. The most common case is when you simply have to do something that’s going to take an amount of time noticeable to a human, and if you can make that time shorter in some situations that’s great. Alternatively each operation could be fast but you may run a server that runs tons of them and you’ll save on server costs if you can decrease the load of some requests. Another important case is when you care about power use, for example your text editor not rapidly draining a laptop’s battery, in this case you want to do the least work you possibly can.

A key technique for this approach is to never recompute something from scratch when it’s possible to re-use or patch an old result. This often involves caching: keeping a store of recent results in case the same computation is requested again.

The ultimate realization of this aesthetic is for the entire system to deal only in differences between the new state and the previous state, updating data structures with only the newly needed data and discarding data that’s no longer needed. This way each part of the system does almost no work because ideally the difference from the previous state is very small.
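[ed.: a toy Python sketch of that diff-based pattern (not code from Xi or Adapton): keep per-line results keyed by content and recompute only the lines an edit actually changed.]

def update(cache, new_lines, f):
    # cache maps line text -> f(line); only new/changed lines do real work
    out = []
    for line in new_lines:
        if line not in cache:
            cache[line] = f(line)  # cache miss: recompute just this line
        out.append(cache[line])
    return out

cache = {}
update(cache, ["fn main() {", "  print(1)", "}"], len)  # computes all 3 lines
update(cache, ["fn main() {", "  print(2)", "}"], len)  # recomputes only the edited line
# a real system would also evict entries for deleted lines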

Aesthetically: Data must be in whatever structure scales best for the way it is accessed, lots of trees and hash maps. Computations are graphs of inputs and results so we can use all our favourite graph algorithms to optimize them! Designing optimal systems is hard so you should use whatever tools you can to make it easier, any fixed cost they incur will be made negligible when you optimize away all the work they need to do.

Personally I identify this aesthetic most with my friend Raph Levien and his articles about the design of the Xi text editor, although Raph also appreciates the other aesthetic and taps into it himself sometimes.

...

_I’m conflating the axes of deadline-oriented vs time-oriented and low-level vs algorithmic optimization, but part of my point is that while they are different, I think these axes are highly correlated._

...

Text Editors

Sublime Text is a text editor that mostly follows the Never Miss a Frame approach. ...

The Xi Editor is designed to solve this problem by being designed from the ground up to grapple with the fact that some operations, especially those interacting with slow compilers written by other people, can’t be made instantaneous. It does this using a fancy asynchronous plugin model and lots of fancy data structures.
...

...

Compilers

Jonathan Blow’s Jai compiler is clearly designed with the Never Miss a Frame aesthetic. It’s written to be extremely fast at every level, and the language doesn’t have any features that necessarily lead to slow compiles. The LLVM backend wasn’t fast enough to hit his performance goals so he wrote an alternative backend that directly writes x86 code to a buffer without doing any optimizations. Jai compiles something like 100,000 lines of code per second. Designing both the language and compiler to not do anything slow led to clean build performance 10-100x faster than other commonly-used compilers. Jai is so fast that its clean builds are faster than most compilers’ incremental builds on common project sizes, due to limitations in how incremental the other compilers are.

However, Jai’s compiler is still O(n) in the codebase size, whereas incremental compilers can be O(n) in the size of the change. Some compilers like the work-in-progress rust-analyzer and I think also Roslyn for C# take a different approach and focus incredibly hard on making everything fully incremental. For small changes (the common case) this can let them beat Jai and respond in milliseconds on arbitrarily large projects, even if they’re slower on clean builds.

Conclusion
I find both of these aesthetics appealing, but I also think there are real trade-offs that incentivize leaning one way or the other for a given project. I think people having different performance aesthetics, often because one aesthetic really is better suited for their domain, is the source of a lot of online arguments about making fast systems. The different aesthetics also require different bases of knowledge to pursue, like knowledge of data-oriented programming in C++ vs knowledge of abstractions for incrementality like Adapton, so different people may find that one approach seems way easier and better for them than the other.

I try to choose how to dedicate my effort to pursuing each aesthetic on a per-project basis by trying to predict how effort in each direction would help. For some projects I know that if I code them efficiently they will always hit the performance deadline; for others I know a way to drastically cut down on work by investing time in algorithmic design; some projects need a mix of both. Personally I find it helpful to think of different programmers where I have a good sense of their aesthetic and ask myself how they’d solve the problem. One reason I like Rust is that it can do both low-level optimization and also has a good ecosystem and type system for algorithmic optimization, so I can more easily mix approaches in one project. In the end the best approach to follow depends not only on the task, but on your skills or the skills of the team working on it, as well as how much time you have to work towards an ambitious design that may take longer for a better result.
techtariat  reflection  things  comparison  lens  programming  engineering  cracker-prog  carmack  games  performance  big-picture  system-design  constraint-satisfaction  metrics  telos-atelos  distributed  incentives  concurrency  cost-benefit  tradeoffs  systems  metal-to-virtual  latency-throughput  abstraction  marginal  caching  editors  strings  ideas  ui  common-case  examples  applications  flux-stasis  nitty-gritty  ends-means  thinking  summary  correlation  degrees-of-freedom  c(pp)  rust  interface  integration-extension  aesthetics  interface-compatibility  efficiency  adversarial 
september 2019 by nhaliday
python - Why do some languages like C++ and Java have a built-in LinkedList datastructure? - Stack Overflow
I searched through Guido's Python History blog, because I was sure he'd written about this, but apparently that's not where he did so. So, this is based on a combination of reasoning (aka educated guessing) and memory (possibly faulty).

Let's start from the end: Without knowing why Guido didn't add linked lists in Python 0.x, do we at least know why the core devs haven't added them since then, even though they've added a bunch of other types from OrderedDict to set?

Yes, we do. The short version is: Nobody has asked for it, in over two decades. Almost all of what's been added to builtins or the standard library over the years has been (a variation on) something that's proven to be useful and popular on PyPI or in the ActiveState recipes. That's where OrderedDict and defaultdict came from, for example, and enum and dataclass (based on attrs). There are popular libraries for a few other container types—various permutations of sorted dict/set, OrderedSet, trees and tries, etc., and both SortedContainers and blist have been proposed, but rejected, for inclusion in the stdlib.

But there are no popular linked list libraries, and that's why they're never going to be added.

So, that brings the question back a step: Why are there no popular linked list libraries?
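Part of the answer, for context (my addition, not from the quoted answer): the stdlib's collections.deque already covers the most common linked-list motivations, namely cheap operations at both ends, so a dedicated linked list has little left to offer:

```python
from collections import deque

d = deque([1, 2, 3])
d.appendleft(0)  # O(1); list.insert(0, x) would be O(n)
d.append(4)      # O(1) at the right end too
d.popleft()      # O(1)
d.rotate(1)      # shift everything right by one, also cheap

# What deque does NOT give you: O(1) insertion/removal at an arbitrary
# interior position via a held node reference -- the classic linked-list
# trick, which in practice is rarely worth the loss of cache locality.
```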
q-n-a  stackex  impetus  roots  programming  pls  python  tradeoffs  cost-benefit  design  data-structures 
august 2019 by nhaliday
Friends with malefit. The effects of keeping dogs and cats, sustaining animal-related injuries and Toxoplasma infection on health and quality of life | bioRxiv
The main problem of many studies was autoselection – participants were informed about the aims of the study during recruitment and later likely described their health and wellbeing according to their personal beliefs and wishes, not according to their real status. To avoid this source of bias, we did not mention pets during participant recruitment and hid the pet-related questions among many hundreds of questions in an 80-minute Internet questionnaire. Results of our study, performed on a sample of 10,858 subjects, showed that liking cats and dogs has a weak positive association with quality of life. However, keeping pets, especially cats, and even more being injured by pets, were strongly negatively associated with many facets of quality of life. Our data also confirmed that infection by the cat parasite Toxoplasma had a very strong negative effect on quality of life, especially on mental health. However, the infection was not responsible for the observed negative effects of keeping pets, as these effects were much stronger in 1,527 Toxoplasma-free subjects than in the whole population. Any cross-sectional study cannot discriminate between a cause and an effect. However, because of the large and still growing popularity of keeping pets, the existence and nature of the reverse pet phenomenon deserve the utmost attention.
study  bio  preprint  wut  psychology  social-psych  nature  regularizer  cost-benefit  emotion  sentiment  poll  methodology  sampling-bias  confounding  happy-sad  intervention  sociology  disease  parasites-microbiome  correlation  contrarianism  branches  increase-decrease  measurement  internet  weird  🐸 
august 2019 by nhaliday
[Tutorial] A way to Practice Competitive Programming : From Rating 1000 to 2400+ - Codeforces
this guy really didn't take that long to reach red..., as of today he's done 20 contests in 2y to my 44 contests in 7y (w/ a long break)...>_>

tho he has 3 times as many submissions as me. maybe he does a lot of virtual rounds?

some snippets from the PDF guide linked:
1400-1900:
To be rating 1900, skills as follows are needed:
- You know and can use major algorithms like these:
  Brute force, DP, DFS, BFS, Dijkstra, Binary Indexed Tree, nCr/nPr, Mod inverse, Bitmasks, Binary Search
- You can code fast (for example, 5 minutes for R1100 problems, 10 minutes for R1400 problems)

If you are not good at fast-coding and fast-debugging, you should solve AtCoder problems. Actually, and statistically, many Japanese are relatively good at fast-coding while not so good at solving difficult problems. I think that's because of AtCoder.

I recommend to solve problem C and D in AtCoder Beginner Contest. On average, if you can solve problem C of AtCoder Beginner Contest within 10 minutes and problem D within 20 minutes, you are Div1 in FastCodingForces :)

...

Interestingly, typical problems are concentrated in Div2-only round problems. If you are not good at Div2-only rounds, it is likely that you are not good at using typical algorithms, especially the 10 algorithms listed above.

If you can use the typical algorithms but are not good at solving problems above R1500 in Codeforces, you should begin TopCoder. This type of practice is effective for people who are good at Div.2-only rounds but not at Div.1+Div.2 combined or Div.1+Div.2 separated rounds.

Sometimes, especially in Div1+Div2 rounds, some problems need mathematical concepts or thinking. Since there are a lot of problems in TopCoder which use them (and are also light on implementation!), you should solve TopCoder problems.

I recommend solving the Div1Easy of the most recent 100 SRMs. But some problems are really difficult (e.g. even red-ranked coders could not solve them), so before you attempt one, you should check what percentage of people solved it. You can use https://competitiveprogramming.info/ to find this information.

1900-2200:
To be rating 2200, skills as follows are needed:
- You know and can use the 10 algorithms I listed on p. 11, plus segment trees (including lazy propagation)
- You can solve problems very fast: for example, 5 mins for R1100, 10 mins for R1500, 15 mins for R1800, 40 mins for R2000.
- You have decent skills for mathematical thinking or considering problems
- You have the mental strength to think about a solution for more than an hour, and you don't give up even if you are below average in Div1 in the middle of the contest

This is only my way to practice, but I did many virtual contests when I was rating 2000. In this page, a virtual contest does not mean “Virtual Participation” in Codeforces. It means choosing 4 or 5 problems whose difficulty is near your rating (for example, if you are rating 2000, choose R2000 problems in Codeforces) and solving them within 2 hours. You can use https://vjudge.net/. On this website, you can make virtual contests from problems on many online judges. (e.g. AtCoder, Codeforces, Hackerrank, Codechef, POJ, ...)

If you cannot solve a problem within the virtual contest and could not find the solution during the contest, you should read the editorial. Google it. (e.g. if you want the editorial of Codeforces Round #556 (Div. 1), search “Codeforces Round #556 editorial” on Google.) There is one more important thing for gaining rating in Codeforces: to solve problems fast, you should build up a coding library (or template code). For example, I think that having segment tree libraries, lazy segment tree libraries, a modint library, an FFT library, a geometry library, etc. is very effective.
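A minimal example of the kind of library snippet he means: a Binary Indexed Tree, one of the 10 algorithms named earlier, in Python for illustration (contest templates are more often C++):

```python
class FenwickTree:
    """Binary Indexed Tree: point update and prefix sum, both O(log n)."""

    def __init__(self, n):
        self.n = n
        self.tree = [0] * (n + 1)  # 1-indexed internally

    def update(self, i, delta):
        """Add delta to element i (1-indexed)."""
        while i <= self.n:
            self.tree[i] += delta
            i += i & (-i)  # move to the next node covering index i

    def query(self, i):
        """Return the sum of elements 1..i."""
        s = 0
        while i > 0:
            s += self.tree[i]
            i -= i & (-i)  # strip the lowest set bit
        return s

ft = FenwickTree(10)
ft.update(3, 5)
ft.update(7, 2)
assert ft.query(6) == 5 and ft.query(10) == 7
```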

2200 to 2400:
Ratings 2200 and 2400 are actually very different ...

To be rating 2400, skills as follows are needed:
- You should have skills that stated in previous section (rating 2200)
- You should solve difficult problems that are solved by fewer than 100 people in Div1 contests

...

At first, there are a lot of educational problems on AtCoder. I recommend solving problems E and F (especially the 700-900 point problems) of AtCoder Regular Contest, especially ARC058-ARC090. Though old AtCoder Regular Contests are balanced between “considering” and “typical” problems, sadly AtCoder Grand Contest and recent AtCoder Regular Contest problems are too biased toward “considering”, I think, so I don’t recommend them if your goal is to gain rating in Codeforces. (Though if you want to gain a rating of more than 2600, you should solve problems from AtCoder Grand Contest.)

For me, actually, after solving AtCoder Regular Contests, my average performance in CF virtual contests increased from 2100 to 2300 (I could not reach 2400 because my start was early).

If you cannot solve a problem, I recommend giving up and reading the editorial on the following schedule:

Point value        600      700      800      900      1000-
CF rating          R2000    R2200    R2400    R2600    R2800
Time to editorial  40 min   50 min   60 min   70 min   80 min

If you solve AtCoder educational problems, your competitive programming skills will increase. But there is one more problem: without practical skills, your rating won’t increase. So, you should do 50+ virtual participations (especially Div.1) in Codeforces. In virtual participation, you can learn how to compete as a purple/orange-ranked coder (e.g. strategy) and how to use the skills you learned on AtCoder in Codeforces contests. I strongly recommend reading the editorials of all problems except the too-difficult ones (e.g. fewer than 30 people solved it in contest) after the virtual contest. I also recommend writing reflections about strategy, lessons, and improvements in a notebook after reading the editorials after each contest/virtual.

In addition, about once a week, I recommend making time to think about a much more difficult problem (e.g. R2800 in Codeforces) for a couple of hours. If you cannot reach the solution after thinking for a couple of hours, I recommend reading the editorial because you can learn a lot. Solving high-level problems may give you a chance to gain over 100 rating points in a single contest, but it can also give you the chance to solve easier problems faster.
oly  oly-programming  problem-solving  learning  practice  accretion  strategy  hmm  pdf  guide  reflection  advice  wire-guided  marginal  stylized-facts  speed  time  cost-benefit  tools  multi  sleuthin  review  comparison  puzzles  contest  aggregator  recommendations  objektbuch  time-use  growth  studying  🖥  👳  yoga 
august 2019 by nhaliday
The 'science' of training in competitive programming - Codeforces
"Hard problems" is subjective. A good rule of thumb for learning problem solving (at least according to me) is that your problem selection is good if you fail to solve roughly 50% of problems you attempt. Anything in [20%,80%] should still be fine, although many people have problems staying motivated if they fail too often. Read solutions for problems you fail to solve.

(There is some actual math behind this. Hopefully one day I'll have the time to write it down.)
- misof in a comment
--
I don't believe in any of these things like "either you solve it in 30 mins to a few hours, or you never solve it at all". There are some magic-at-first-glance algorithms like polynomial hashing, interval trees, or FFT (which is magic even at tenth glance :P), but there are not many of them, and the vast majority of algorithms can be invented on your own, for example dp. In high school I used to solve many problems from the IMO and PMO, and when I didn't solve a problem I tried it again after some time. I have solved some problems on something like the third attempt. Though, if we restrict ourselves to beginners, I think it still holds true, but it would be better to read solutions after some time, because there are so many other things we can learn; better not to get stuck on one particular problem when there are hundreds of other important concepts to be learnt.
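misof's 50% rule above has a natural quantitative reading if you assume Codeforces ratings behave like Elo, so that solve probability is a logistic function of the rating gap (an assumption of mine, not something from the thread):

```python
def solve_prob(my_rating, problem_rating):
    """Elo-style estimate of the chance of solving a problem."""
    return 1 / (1 + 10 ** ((problem_rating - my_rating) / 400))

def good_practice_problem(my_rating, problem_rating, lo=0.2, hi=0.8):
    """misof's rule of thumb: practice where you fail 20-80% of attempts."""
    return lo <= solve_prob(my_rating, problem_rating) <= hi

# At rating 1800 this band covers problems rated roughly 1560-2040:
print([r for r in range(1200, 2600, 100) if good_practice_problem(1800, r)])
# -> [1600, 1700, 1800, 1900, 2000]
```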
oly  oly-programming  problem-solving  learning  practice  accretion  strategy  marginal  wire-guided  stylized-facts  hmm  advice  tactics  time  time-use  cost-benefit  growth  studying  🖥  👳 
august 2019 by nhaliday
How good are decisions?
A statement I commonly hear in tech-utopian circles is that some seeming inefficiency can’t actually be inefficient because the market is efficient and inefficiencies will quickly be eliminated. A contentious example of this is the claim that companies can’t be discriminating because the market is too competitive to tolerate discrimination. A less contentious example is that when you see a big company doing something that seems bizarrely inefficient, maybe it’s not inefficient and you just lack the information necessary to understand why the decision was efficient.

Unfortunately, arguments like this are difficult to settle because, even in retrospect, it’s usually not possible to get enough information to determine the precise “value” of a decision. Even in cases where the decision led to an unambiguous success or failure, there are so many factors that led to the result that it’s difficult to figure out precisely why something happened.

One nice thing about sports is that they often have detailed play-by-play data and well-defined win criteria which lets us tell, on average, what the expected value of a decision is. In this post, we’ll look at the cost of bad decision making in one sport and then briefly discuss why decision quality in sports might be the same or better as decision quality in other fields.

Just to have a concrete example, we’re going to look at baseball, but you could do the same kind of analysis for football, hockey, basketball, etc., and my understanding is that you’d get a roughly similar result in all of those cases.

We’re going to model baseball as a state machine, both because that makes it easy to understand the expected value of particular decisions and because this lets us talk about the value of decisions without having to go over most of the rules of baseball.
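A minimal sketch of the state-machine framing (my illustration, not Dan Luu's actual analysis), using rough run-expectancy numbers of the kind computed from historical play-by-play data; treat the exact values as placeholders:

```python
# Expected runs scored in the rest of the inning from a few (bases, outs)
# states; illustrative approximations of published MLB run-expectancy tables.
RUN_EXP = {
    ("1st", 0): 0.86,
    ("2nd", 0): 1.10,
    ("empty", 1): 0.25,
}

def steal_ev(p_success):
    """Expected runs after trying to steal 2nd with a runner on 1st, 0 outs."""
    return p_success * RUN_EXP[("2nd", 0)] + (1 - p_success) * RUN_EXP[("empty", 1)]

# Break-even success rate: attempting the steal only helps above this probability.
p_star = (RUN_EXP[("1st", 0)] - RUN_EXP[("empty", 1)]) / (
    RUN_EXP[("2nd", 0)] - RUN_EXP[("empty", 1)]
)
print(f"break-even steal success rate ~ {p_star:.0%}")  # ~72% with these numbers
```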

exactly the kinda thing Dad likes
techtariat  dan-luu  data  analysis  examples  nitty-gritty  sports  street-fighting  automata-languages  models  optimization  arbitrage  data-science  cost-benefit  tactics  baseball  low-hanging 
august 2019 by nhaliday
Treadmill desk observations - Gwern.net
Notes relating to my use of a treadmill desk and 2 self-experiments showing walking treadmill use interferes with typing and memory performance.

...

While the result seems highly likely to be true for me, I don’t know how well it might generalize to other people. For example, perhaps more fit people can use a treadmill without harm and the negative effect is due to the treadmill usage tiring & distracting me; I try to walk 2 miles a day, but that’s not much compared to some people.

Given this harmful impact, I will avoid doing spaced repetition on my treadmill in the future, and given this & the typing result, will relegate any computer+treadmill usage to non-intellectually-demanding work like watching movies. This niche use turned out to be one I didn’t care about, and I hardly ever used my treadmill afterwards, so in October 2016 I sold my treadmill for $70. I might investigate standing desks next for providing some exercise beyond sitting but without the distracting movement of walking on a treadmill.
ratty  gwern  data  analysis  quantified-self  health  fitness  get-fit  working-stiff  intervention  cost-benefit  psychology  cog-psych  retention  iq  branches  keyboard  ergo  efficiency  accuracy  null-result  increase-decrease  experiment  hypothesis-testing 
august 2019 by nhaliday
Karol Kuczmarski's Blog – A Haskell retrospective
Even in this hypothetical scenario, I posit that the value proposition of Haskell would still be a tough sell.

There is this old quote from Bjarne Stroustrup (creator of C++) where he says that programming languages divide into those everyone complains about, and those that no one uses.
The first group consists of old, established technologies that managed to accrue significant complexity debt through years and decades of evolution. All the while, they’ve been adapting to the constantly shifting perspectives on what are the best industry practices. Traces of those adaptations can still be found today, sticking out like a leftover appendix or residual tail bone — or like the built-in support for XML in Java.

Languages that “no one uses”, on the other hand, haven’t yet passed the industry threshold of sufficient maturity and stability. Their ecosystems are still cutting edge, and their future is uncertain, but they sometimes champion some really compelling paradigm shifts. As long as you can bear with things that are rough around the edges, you can take advantage of their novel ideas.

Unfortunately for Haskell, it manages to combine the worst parts of both of these worlds.

On one hand, it is a surprisingly old language, clocking more than two decades of fruitful research around many innovative concepts. Yet on the other hand, it bears the signs of a fresh new technology, with relatively few production-grade libraries, scarce coverage of some domains (e.g. GUI programming), and not too many stories of commercial successes.

The post's section headings give a flavor of the complaints:
- There are many ways to do it
- String theory
- Errors and how to handle them
- Implicit is better than explicit
- Leaky modules
- Namespaces are apparently a bad idea
- Wild records
- Purity beats practicality
techtariat  reflection  functional  haskell  programming  pls  realness  facebook  pragmatic  cost-benefit  legacy  libraries  types  intricacy  engineering  tradeoffs  frontier  homo-hetero  duplication  strings  composition-decomposition  nitty-gritty  error  error-handling  coupling-cohesion  critique  ecosystem  c(pp)  aphorism 
august 2019 by nhaliday
Sage: Open Source Mathematics Software: You don't really think that Sage has failed, do you?
> P.S. You don't _really_ think that Sage has failed, do you?

After almost exactly 10 years of working on the Sage project, I absolutely do think it has failed to accomplish the stated goal of the mission statement: "Create a viable free open source alternative to Magma, Maple, Mathematica and Matlab.".     When it was only a few years into the project, it was really hard to evaluate progress against such a lofty mission statement.  However, after 10 years, it's clear to me that not only have we not got there, we are not going to ever get there before I retire.   And that's definitely a failure.   
mathtariat  reflection  failure  cost-benefit  oss  software  math  CAS  tools  state-of-art  expert-experience  review  comparison  saas  cloud  :/ 
july 2019 by nhaliday
The returns to speaking a second language
Does speaking a foreign language have an impact on earnings? The authors use a variety of empirical strategies to address this issue for a representative sample of U.S. college graduates. OLS regressions with a complete set of controls to minimize concerns about omitted variable biases, propensity score methods, and panel data techniques all lead to similar conclusions. The hourly earnings of those who speak a foreign language are more than 2 percent higher than the earnings of those who do not. The authors obtain higher and more imprecise point estimates using state high school graduation and college entry and graduation requirements as instrumental variables.

...

We find that college graduates who speak a second language earn, on average, wages that are 2 percent higher than those who don’t. We include a complete set of controls for general ability using information on grades and college admission tests and reduce the concern that selection drives the results controlling for the academic major chosen by the student. We obtain similar results with simple regression methods if we use nonparametric methods based on the propensity score and if we exploit the temporal variation in the knowledge of a second language. The estimates, thus, are not driven by observable differences in the composition of the pools of bilinguals and monolinguals, by the linear functional form that we impose in OLS regressions, or by constant unobserved heterogeneity. To reduce the concern that omitted variables bias our estimates, we make use of several instrumental variables (IVs). Using high school and college graduation requirements as instruments, we estimate more substantial returns to learning a second language, on the order of 14 to 30 percent. These results have high standard errors, but they suggest that OLS estimates may actually be biased downward.

...

In separate (unreported) regressions, we explore the labor market returns to speaking specific languages. We estimate OLS regressions following the previous specifications but allow the coefficient to vary by language spoken. In our sample, German is the language that obtains the highest rewards in the labor market. The returns to speaking German are 3.8 percent, while they are 2.3 for speaking French and 1.5 for speaking Spanish. In fact, only the returns to speaking German remain statistically significant in this regression. The results indicate that those who speak languages known by a smaller number of people obtain higher rewards in the labor market.14
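A minimal sketch of the baseline specification (synthetic data and my own variable names, not the authors' code), showing how a roughly 2 percent premium would be estimated:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "second_lang": rng.integers(0, 2, n),  # speaks a second language?
    "ability": rng.normal(0, 1, n),        # proxy for grades / admission tests
    "stem_major": rng.integers(0, 2, n),   # stand-in for major controls
})
# Synthetic log hourly wages with a built-in 2% bilingual premium.
df["log_wage"] = (2.5 + 0.02 * df.second_lang + 0.10 * df.ability
                  + 0.15 * df.stem_major + rng.normal(0, 0.3, n))

# OLS of log wage on the bilingual indicator plus controls.
fit = smf.ols("log_wage ~ second_lang + ability + stem_major", data=df).fit()
print(fit.params["second_lang"])  # recovers ~0.02, i.e. a ~2% wage premium
```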

The Relative Importance of the European Languages: https://ideas.repec.org/p/kud/kuiedp/0623.html
study  economics  labor  cost-benefit  hmm  language  foreign-lang  usa  empirical  evidence-based  education  human-capital  compensation  correlation  endogenous-exogenous  natural-experiment  policy  wonkish  🎩  french  germanic  latin-america  multi  spanish  china  asia  japan 
july 2019 by nhaliday
Foreign-Born Teaching Assistants and the Academic Performance of Undergraduates
The data suggest that foreign-born Teaching Assistants have an adverse impact on the class performance of undergraduate students.
study  economics  education  higher-ed  borjas  migration  labor  cost-benefit  tradeoffs  branches  language  foreign-lang  grad-school  teaching  attaq  wonkish  lol 
july 2019 by nhaliday