
nhaliday : trivia   314

Skin turgor: MedlinePlus Medical Encyclopedia
To check for skin turgor, the health care provider grasps the skin between two fingers so that it is tented up, most commonly on the lower arm or abdomen. The skin is held for a few seconds and then released.

Skin with normal turgor snaps rapidly back to its normal position. Skin with poor turgor takes time to return to its normal position.
tip-of-tongue  prepping  fluid  embodied  trivia  survival  howto  medicine  safety  measurement 
november 2019 by nhaliday
javascript - What is the purpose of double curly braces in React's JSX syntax? - Stack Overflow
The exterior set of curly braces lets JSX know you want a JS expression. The interior set of curly braces represents a JavaScript object, meaning you’re passing an object to the style attribute.
q-n-a  stackex  programming  explanation  trivia  gotchas  syntax  javascript  frontend  DSL  intricacy  facebook  libraries  frameworks 
october 2019 by nhaliday
How is definiteness expressed in languages with no definite article, clitic or affix? - Linguistics Stack Exchange
All languages, as far as we know, do something to mark information status. Basically this means that when you refer to an X, you have to do something to indicate the answer to questions like:
1. Do you have a specific X in mind?
2. If so, do you think your hearer is familiar with the X you're talking about?
3. If so, have you already been discussing that X for a while, or is it new to the conversation?
4. If you've been discussing the X for a while, has it been the main topic of conversation?

Question #2 is more or less what we mean by "definiteness."
...

But there are lots of other information-status-marking strategies that don't directly involve definiteness marking. For example:
...
q-n-a  stackex  language  foreign-lang  linguistics  lexical  syntax  concept  conceptual-vocab  thinking  things  span-cover  direction  degrees-of-freedom  communication  anglo  japan  china  asia  russia  mediterranean  grokkability-clarity  intricacy  uniqueness  number  universalism-particularism  whole-partial-many  usa  latin-america  farmers-and-foragers  nordic  novelty  trivia  duplication  dependence-independence  spanish  context  orders  water  comparison 
october 2019 by nhaliday
What does it mean when a CSS rule is grayed out in Chrome's element inspector? - Stack Overflow
It seems that a strike-through indicates that a rule was overridden, but what does it mean when a style is grayed out?

--

Greyed/dimmed-out text can mean either

1. it's a default rule/property the browser applies, which includes defaulted shorthand properties, or
2. it involves inheritance, which is a bit more complicated.

...

In the case where a rule is applied to the currently selected element due to inheritance (i.e. the rule was applied to an ancestor, and the selected element inherited it), Chrome will again display the entire ruleset.

The rules which are applied to the currently selected element appear in normal text.

If a rule exists in that set but is not applied because it's a non-inheritable property (e.g. background color), it will appear as greyed/dimmed text.

https://stackoverflow.com/questions/34712218/what-does-it-mean-when-chrome-dev-tools-shows-a-computed-property-greyed-out
Please note, not the Styles panel (I know what greyed-out means in that context—not applied), but the next panel over, the Computed properties panel.
--
The gray calculated properties are neither default, nor inherited. This only occurs on properties that were not defined for the element, but _calculated_ from either its children or parent _based on runtime layout rendering_.

Take this simple page as an example, display is default and font-size is inherited:
q-n-a  stackex  programming  frontend  web  DSL  form-design  devtools  explanation  trivia  tip-of-tongue  direct-indirect  trees  spreading  multi  nitty-gritty  static-dynamic  constraint-satisfaction  ui  browser  properties 
october 2019 by nhaliday
exponential function - Feynman's Trick for Approximating $e^x$ - Mathematics Stack Exchange
1. e^2.3 ~ 10
2. e^.7 ~ 2
3. e^x ~ 1+x
e = 2.71828...

errors (absolute, relative):
1. +0.0258, 0.26%
2. -0.0138, -0.68%
3. 1 + x approximates e^x on [-.3, .3] with absolute error < .05, and relative error < 5.6% (3.7% for [0, .3]).
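
A quick worked example combining the three facts (my own arithmetic, not part of the quoted answer):
e^3 = e^2.3 * e^0.7 ~ 10 * 2 = 20 (true value 20.09)
e^3.3 = e^2.3 * e^0.7 * e^0.3 ~ 10 * 2 * 1.3 = 26 (true value 27.11, so about 4% low)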
nibble  q-n-a  overflow  math  feynman  giants  mental-math  calculation  multiplicative  AMT  identity  objektbuch  explanation  howto  estimate  street-fighting  stories  approximation  data  trivia  nitty-gritty 
october 2019 by nhaliday
Japanese sound symbolism - Wikipedia
Japanese has a large inventory of sound symbolic or mimetic words, known in linguistics as ideophones.[1][2] Sound symbolic words are found in written as well as spoken Japanese.[3] Known popularly as onomatopoeia, these words are not just imitative of sounds but cover a much wider range of meanings;[1] indeed, many sound-symbolic words in Japanese are for things that don't make any noise originally, most clearly demonstrated by shiinto (しいんと), meaning "silently".
language  foreign-lang  trivia  wiki  reference  audio  hmm  alien-character  culture  list  objektbuch  japan  asia  writing 
october 2019 by nhaliday
online resources - How to write special set notation by hand? - Mathematics Stack Exchange
Your ℕ is “incorrect” in that a capital N in any serif font has the diagonal thickened, not the verticals. In fact, the rule (in Latin alphabet) is that negative slopes are thick, positive ones are thin. Verticals are sometimes thin, sometimes thick. Unique exception: Z. Just look in a newspaper at A, V, X, M, and N.
nibble  q-n-a  overflow  math  writing  notetaking  howto  pic  notation  trivia 
october 2019 by nhaliday
2019 Growth Theory Conference - May 11-12 | Economics Department at Brown University
Guillaume Blanc (Brown) and Romain Wacziarg (UCLA and NBER) "Change and Persistence in the Age of Modernization:
Saint-Germain-d’Anxure, 1730-1895∗"

Figure 4.1.1.1 – Fertility
Figure 4.2.1.1 – Mortality
Figure 5.1.0.1 – Literacy

https://twitter.com/GarettJones/status/1127999888359346177
https://archive.is/1EnZg
Short pre-modern lives weren't overwhelmingly about infant mortality:

From this weekend's excellent Deep Roots conference at @Brown_Economics, new evidence from a small French town, an ancestral home of coauthor Romain Wacziarg:
--
European Carpe Diem poems made a lot more sense when 20-year-olds were halfway done with life:
...
--
...
N.B. that's not a correction at all, it's telling the same story as the above figure:

Conditioned on surviving childhood, usually living to less than 50 years total in 1750s France and in medieval times.
study  economics  broad-econ  cliometrics  demographics  history  early-modern  europe  gallic  fertility  longevity  mobility  human-capital  garett-jones  writing  class  data  time-series  demographic-transition  regularizer  lived-experience  gender  gender-diff  pro-rata  trivia  cocktail  econotariat  twitter  social  backup  commentary  poetry  medieval  modernity  alien-character 
september 2019 by nhaliday
Shuffling - Wikipedia
The Gilbert–Shannon–Reeds model provides a mathematical model of the random outcomes of riffling, that has been shown experimentally to be a good fit to human shuffling[2] and that forms the basis for a recommendation that card decks be riffled seven times in order to randomize them thoroughly.[3] Later, mathematicians Lloyd M. Trefethen and Lloyd N. Trefethen authored a paper using a tweaked version of the Gilbert-Shannon-Reeds model showing that the minimum number of riffles for total randomization could also be 5, if the method of defining randomness is changed.[4][5]
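
A quick sketch of one standard formulation of the GSR riffle (binomial cut, then drop cards from either packet with probability proportional to its remaining size); a hedged C++ illustration, not the exact construction from the cited papers:

#include <random>
#include <vector>

// One Gilbert-Shannon-Reeds riffle: cut the deck Binomial(n, 1/2), then repeatedly
// drop the next card from the left or right packet with probability proportional
// to how many cards each packet still holds.
std::vector<int> gsr_riffle(const std::vector<int>& deck, std::mt19937& rng) {
    const int n = static_cast<int>(deck.size());
    const int k = std::binomial_distribution<int>(n, 0.5)(rng);   // left packet size
    std::vector<int> left(deck.begin(), deck.begin() + k);
    std::vector<int> right(deck.begin() + k, deck.end());
    std::vector<int> out;
    out.reserve(n);
    std::size_t i = 0, j = 0;
    while (i < left.size() || j < right.size()) {
        const std::size_t a = left.size() - i, b = right.size() - j;
        if (std::uniform_int_distribution<std::size_t>(1, a + b)(rng) <= a)
            out.push_back(left[i++]);                             // chance a/(a+b)
        else
            out.push_back(right[j++]);
    }
    return out;
}
// Applying this ~7 times to a 52-card deck is the "seven riffles" recommendation above.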
nibble  tidbits  trivia  cocktail  wiki  reference  games  howto  random  models  math  applications  probability  math.CO  mixing  markov  sampling  best-practices  acm 
august 2019 by nhaliday
Could diving into water save you from a hail of bullets like in movies? - Quora
I believe that while water could help keep you safe from a hail of bullets, there are many flaws, such as being stuck in a low-depth pool or running out of oxygen.
q-n-a  qra  embodied  safety  death  fighting  stylized-facts  trivia  fluid  swimming  survival  arms  martial  prepping  short-circuit 
august 2019 by nhaliday
history - Why are UNIX/POSIX system call namings so illegible? - Unix & Linux Stack Exchange
It's due to the technical constraints of the time. The POSIX standard was created in the 1980s and referred to UNIX, which was born in the 1970s. Several C compilers at that time were limited to identifiers that were 6 or 8 characters long, so that settled the standard for the length of variable and function names.

http://neverworkintheory.org/2017/11/26/abbreviated-full-names.html
We carried out a family of controlled experiments to investigate whether the use of abbreviated identifier names, with respect to full-word identifier names, affects fault fixing in C and Java source code. This family consists of an original (or baseline) controlled experiment and three replications. We involved 100 participants with different backgrounds and experiences in total. Overall results suggested that there is no difference in terms of effort, effectiveness, and efficiency to fix faults, when source code contains either only abbreviated or only full-word identifier names. We also conducted a qualitative study to understand the values, beliefs, and assumptions that inform and shape fault fixing when identifier names are either abbreviated or full-word. We involved in this qualitative study six professional developers with 1--3 years of work experience. A number of insights emerged from this qualitative study and can be considered a useful complement to the quantitative results from our family of experiments. One of the most interesting insights is that developers, when working on source code with abbreviated identifier names, adopt a more methodical approach to identify and fix faults by extending their focus point and only in a few cases do they expand abbreviated identifiers.
q-n-a  stackex  trivia  programming  os  systems  legacy  legibility  ux  libraries  unix  linux  hacker  cracker-prog  multi  evidence-based  empirical  expert-experience  engineering  study  best-practices  comparison  quality  debugging  efficiency  time  code-organizing  grokkability  grokkability-clarity 
july 2019 by nhaliday
Laurence Tratt: What Challenges and Trade-Offs do Optimising Compilers Face?
Summary
It's important to be realistic: most people don't care about program performance most of the time. Modern computers are so fast that most programs run fast enough even with very slow language implementations. In that sense, I agree with Daniel's premise: optimising compilers are often unimportant. But “often” is often unsatisfying, as it is here. Users find themselves transitioning from not caring at all about performance to suddenly really caring, often in the space of a single day.

This, to me, is where optimising compilers come into their own: they mean that even fewer people need to care about program performance. And I don't mean that they get us from, say, 98 to 99 people out of 100 not needing to care: it's probably more like going from 80 to 99 people out of 100 not needing to care. This is, I suspect, more significant than it seems: it means that many people can go through an entire career without worrying about performance. Martin Berger reminded me of A N Whitehead’s wonderful line that “civilization advances by extending the number of important operations which we can perform without thinking about them” and this seems a classic example of that at work. Even better, optimising compilers are widely tested and thus generally much more reliable than the equivalent optimisations performed manually.

But I think that those of us who work on optimising compilers need to be honest with ourselves, and with users, about what performance improvement one can expect to see on a typical program. We have a tendency to pick the maximum possible improvement and talk about it as if it's the mean, when there's often a huge difference between the two. There are many good reasons for that gap, and I hope in this blog post I've at least made you think about some of the challenges and trade-offs that optimising compilers are subject to.

[1]
Most readers will be familiar with Knuth’s quip that “premature optimisation is the root of all evil.” However, I doubt that any of us have any real idea what proportion of time is spent in the average part of the average program. In such cases, I tend to assume that Pareto’s principle won't be too far wrong (i.e. that 80% of execution time is spent in 20% of code). In 1971 a study by Knuth and others of Fortran programs found that 50% of execution time was spent in 4% of code. I don't know of modern equivalents of this study, and for them to be truly useful, they'd have to be rather big. If anyone knows of something along these lines, please let me know!
techtariat  programming  compilers  performance  tradeoffs  cost-benefit  engineering  yak-shaving  pareto  plt  c(pp)  rust  golang  trivia  data  objektbuch  street-fighting  estimate  distribution  pro-rata 
july 2019 by nhaliday
Rational Sines of Rational Multiples of π
For which rational multiples of π is the sine rational? We have the three trivial cases
[0, π/2, π/6]
and we wish to show that these are essentially the only distinct rational sines of rational multiples of π.

The assertion about rational sines of rational multiples of π follows from two fundamental lemmas. The first is

Lemma 1: For any rational number q the value of sin(qπ) is a root of a monic polynomial with integer coefficients.

[Pf uses some ideas unfamiliar to me: similarity parameter of Moebius (linear fraction) transformations, and finding a polynomial for a desired root by constructing a Moebius transformation with a finite period.]

...

Lemma 2: Any root of a monic polynomial f(x) with integer coefficients must either be an integer or irrational.

[Gauss's Lemma, cf Dummit-Foote.]

...
nibble  tidbits  org:junk  analysis  trivia  math  algebra  polynomials  fields  characterization  direction  math.CA  math.CV  ground-up 
july 2019 by nhaliday
The Existential Risk of Math Errors - Gwern.net
How big is this upper bound? Mathematicians have often made errors in proofs. But it’s rarer for ideas to be accepted for a long time and then rejected. But we can divide errors into 2 basic cases corresponding to type I and type II errors:

1. Mistakes where the theorem is still true, but the proof was incorrect (type I)
2. Mistakes where the theorem was false, and the proof was also necessarily incorrect (type II)

Before someone comes up with a final answer, a mathematician may have many levels of intuition in formulating & working on the problem, but we’ll consider the final end-product where the mathematician feels satisfied that he has solved it. Case 1 is perhaps the most common case, with innumerable examples; this is sometimes due to mistakes in the proof that anyone would accept is a mistake, but many of these cases are due to changing standards of proof. For example, when David Hilbert discovered errors in Euclid’s proofs which no one noticed before, the theorems were still true, and the gaps more due to Hilbert being a modern mathematician thinking in terms of formal systems (which of course Euclid did not think in). (David Hilbert himself turns out to be a useful example of the other kind of error: his famous list of 23 problems was accompanied by definite opinions on the outcome of each problem and sometimes timings, several of which were wrong or questionable.) Similarly, early calculus used ‘infinitesimals’ which were sometimes treated as being 0 and sometimes treated as an indefinitely small non-zero number; this was incoherent and strictly speaking, practically all of the calculus results were wrong because they relied on an incoherent concept - but of course the results were some of the greatest mathematical work ever conducted and when later mathematicians put calculus on a more rigorous footing, they immediately re-derived those results (sometimes with important qualifications), and doubtless as modern math evolves other fields have sometimes needed to go back and clean up the foundations and will in the future.

...

Isaac Newton, incidentally, gave two proofs of the same solution to a problem in probability, one via enumeration and the other more abstract; the enumeration was correct, but the other proof totally wrong and this was not noticed for a long time, leading Stigler to remark:

...

TYPE I > TYPE II?
“Lefschetz was a purely intuitive mathematician. It was said of him that he had never given a completely correct proof, but had never made a wrong guess either.”
- Gian-Carlo Rota

Case 2 is disturbing, since it is a case in which we wind up with false beliefs and also false beliefs about our beliefs (we no longer know that we don’t know). Case 2 could lead to extinction.

...

Except, errors do not seem to be evenly & randomly distributed between case 1 and case 2. There seem to be far more case 1s than case 2s, as already mentioned in the early calculus example: far more than 50% of the early calculus results were correct when checked more rigorously. Richard Hamming attributes to Ralph Boas a comment that while editing Mathematical Reviews that “of the new results in the papers reviewed most are true but the corresponding proofs are perhaps half the time plain wrong”.

...

Gian-Carlo Rota gives us an example with Hilbert:

...

Olga labored for three years; it turned out that all mistakes could be corrected without any major changes in the statement of the theorems. There was one exception, a paper Hilbert wrote in his old age, which could not be fixed; it was a purported proof of the continuum hypothesis, you will find it in a volume of the Mathematische Annalen of the early thirties.

...

Leslie Lamport advocates for machine-checked proofs and a more rigorous style of proofs similar to natural deduction, noting a mathematician acquaintance guesses at a broad error rate of 1/3 and that he routinely found mistakes in his own proofs and, worse, believed false conjectures.

[more on these "structured proofs":
https://academia.stackexchange.com/questions/52435/does-anyone-actually-publish-structured-proofs
https://mathoverflow.net/questions/35727/community-experiences-writing-lamports-structured-proofs
]

We can probably add software to that list: early software engineering work found that, dismayingly, bug rates seem to be simply a function of lines of code, and one would expect diseconomies of scale. So one would expect that in going from the ~4,000 lines of code of the Microsoft DOS operating system kernel to the ~50,000,000 lines of code in Windows Server 2003 (with full systems of applications and libraries being even larger: the comprehensive Debian repository in 2007 contained ~323,551,126 lines of code) that the number of active bugs at any time would be… fairly large. Mathematical software is hopefully better, but practitioners still run into issues (eg Durán et al 2014, Fonseca et al 2017) and I don’t know of any research pinning down how buggy key mathematical systems like Mathematica are or how much published mathematics may be erroneous due to bugs. This general problem led to predictions of doom and spurred much research into automated proof-checking, static analysis, and functional languages.

[related:
https://mathoverflow.net/questions/11517/computer-algebra-errors
I don't know any interesting bugs in symbolic algebra packages but I know a true, enlightening and entertaining story about something that looked like a bug but wasn't.

Define sinc(x) = (sin x)/x.

Someone found the following result in an algebra package: ∫_0^∞ sinc(x) dx = π/2
They then found the following results:

...

So of course when they got:

∫_0^∞ sinc(x) sinc(x/3) sinc(x/5) ⋯ sinc(x/15) dx = (467807924713440738696537864469/935615849440640907310521750000) π

hmm:
Which means that nobody knows Fourier analysis nowadays. Very sad and discouraging story... – fedja Jan 29 '10 at 18:47

--

Because the most popular systems are all commercial, they tend to guard their bug database rather closely -- making them public would seriously cut their sales. For example, for the open source project Sage (which is quite young), you can get a list of all the known bugs from this page. 1582 known issues on Feb.16th 2010 (which includes feature requests, problems with documentation, etc).

That is an order of magnitude less than the commercial systems. And it's not because it is better, it is because it is younger and smaller. It might be better, but until SAGE does a lot of analysis (about 40% of CAS bugs are there) and a fancy user interface (another 40%), it is too hard to compare.

I once ran a graduate course whose core topic was studying the fundamental disconnect between the algebraic nature of CAS and the analytic nature of what it is mostly used for. There are issues of logic -- CASes work more or less in an intensional logic, while most of analysis is stated in a purely extensional fashion. There is no well-defined 'denotational semantics' for expressions-as-functions, which strongly contributes to the deeper bugs in CASes.]

...

Should such widely-believed conjectures as P≠NP or the Riemann hypothesis turn out to be false, then because they are assumed by so many existing proofs, a far larger math holocaust would ensue - and our previous estimates of error rates will turn out to have been substantial underestimates. But it may be a cloud with a silver lining, if it doesn’t come at a time of danger.

https://mathoverflow.net/questions/338607/why-doesnt-mathematics-collapse-down-even-though-humans-quite-often-make-mista

more on formal methods in programming:
https://www.quantamagazine.org/formal-verification-creates-hacker-proof-code-20160920/
https://intelligence.org/2014/03/02/bob-constable/

https://softwareengineering.stackexchange.com/questions/375342/what-are-the-barriers-that-prevent-widespread-adoption-of-formal-methods
Update: measured effort
In the October 2018 issue of Communications of the ACM there is an interesting article about Formally verified software in the real world with some estimates of the effort.

Interestingly (based on OS development for military equipment), it seems that producing formally proved software requires 3.3 times more effort than with traditional engineering techniques. So it's really costly.

On the other hand, it requires 2.3 times less effort to get high security software this way than with traditionally engineered software if you add the effort to make such software certified at a high security level (EAL 7). So if you have high reliability or security requirements there is definitively a business case for going formal.

WHY DON'T PEOPLE USE FORMAL METHODS?: https://www.hillelwayne.com/post/why-dont-people-use-formal-methods/
You can see examples of how all of these look at Let’s Prove Leftpad. HOL4 and Isabelle are good examples of “independent theorem” specs, SPARK and Dafny have “embedded assertion” specs, and Coq and Agda have “dependent type” specs.

If you squint a bit it looks like these three forms of code spec map to the three main domains of automated correctness checking: tests, contracts, and types. This is not a coincidence. Correctness is a spectrum, and formal verification is one extreme of that spectrum. As we reduce the rigour (and effort) of our verification we get simpler and narrower checks, whether that means limiting the explored state space, using weaker types, or pushing verification to the runtime. Any means of total specification then becomes a means of partial specification, and vice versa: many consider Cleanroom a formal verification technique, which primarily works by pushing code review far beyond what’s humanly possible.

...

The question, then: “is 90/95/99% correct significantly cheaper than 100% correct?” The answer is very yes. We all are comfortable saying that a codebase we’ve well-tested and well-typed is mostly correct modulo a few fixes in prod, and we’re even writing more than four lines of code a day. In fact, the vast… [more]
ratty  gwern  analysis  essay  realness  truth  correctness  reason  philosophy  math  proofs  formal-methods  cs  programming  engineering  worse-is-better/the-right-thing  intuition  giants  old-anglo  error  street-fighting  heuristic  zooming  risk  threat-modeling  software  lens  logic  inference  physics  differential  geometry  estimate  distribution  robust  speculation  nonlinearity  cost-benefit  convexity-curvature  measure  scale  trivia  cocktail  history  early-modern  europe  math.CA  rigor  news  org:mag  org:sci  miri-cfar  pdf  thesis  comparison  examples  org:junk  q-n-a  stackex  pragmatic  tradeoffs  cracker-prog  techtariat  invariance  DSL  chart  ecosystem  grokkability  heavyweights  CAS  static-dynamic  lower-bounds  complexity  tcs  open-problems  big-surf  ideas  certificates-recognition  proof-systems  PCP  mediterranean  SDP  meta:prediction  epistemic  questions  guessing  distributed  overflow  nibble  soft-question  track-record  big-list  hmm  frontier  state-of-art  move-fast-(and-break-things)  grokkability-clarity  technical-writing  trust 
july 2019 by nhaliday
Regex cheatsheet
Many programs use regular expression to find & replace text. However, they tend to come with their own different flavor.

You can probably expect most modern software and programming languages to be using some variation of the Perl flavor, "PCRE"; however command-line tools (grep, less, ...) will often use the POSIX flavor (sometimes with an extended variant, e.g. egrep or sed -r). Vim also comes with its own syntax (a superset of what Vi accepts).

This cheatsheet lists the respective syntax of each flavor, and the software that uses it.

accidental complexity galore
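
As a small illustration of how much the flavor matters, C++'s <regex> can be switched between grammars; a hedged sketch (the behavior shown follows the POSIX definitions of BRE vs. ERE):

#include <cassert>
#include <regex>

int main() {
    // ERE / Perl-style: '+' means "one or more of the preceding atom".
    std::regex ere("a+", std::regex::extended);
    assert(std::regex_match("aaa", ere));

    // POSIX BRE (plain grep/sed without -E/-r): '+' is just a literal character.
    std::regex bre("a+", std::regex::basic);
    assert(!std::regex_match("aaa", bre));
    assert(std::regex_match("a+", bre));
    return 0;
}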
techtariat  reference  cheatsheet  documentation  howto  yak-shaving  editors  strings  syntax  examples  crosstab  objektbuch  python  comparison  gotchas  tip-of-tongue  automata-languages  pls  trivia  properties  libraries  nitty-gritty  intricacy  degrees-of-freedom  DSL  programming 
june 2019 by nhaliday
c - Aligning to cache line and knowing the cache line size - Stack Overflow
To know the sizes, you need to look it up using the documentation for the processor; afaik there is no programmatic way to do it. On the plus side however, most cache lines are of a standard size, based on Intel's standards. On x86 cache lines are 64 bytes; however, to prevent false sharing, you need to follow the guidelines of the processor you are targeting (Intel has some special notes on its NetBurst-based processors). Generally you need to align to 64 bytes for this (Intel states that you should also avoid crossing 16-byte boundaries).

To do this in C or C++ requires that you use the standard aligned_alloc function or one of the compiler-specific specifiers such as __attribute__((aligned(64))) or __declspec(align(64)). To pad between members in a struct to split them onto different cache lines, you need to insert a member big enough to align it to the next 64-byte boundary.

...

sysctl hw.cachelinesize
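
A minimal C++11 sketch of the alignment part (assumes a 64-byte line, as on typical x86; PaddedCounter is a made-up name):

#include <atomic>

// alignas(64) puts each counter at the start of its own (assumed) 64-byte cache line,
// so two threads updating neighbouring counters in an array don't false-share a line.
struct alignas(64) PaddedCounter {
    std::atomic<long> value{0};
};

static_assert(alignof(PaddedCounter) == 64, "alignment request honoured");
static_assert(sizeof(PaddedCounter) == 64, "size rounded up to one full line");

C++17 also exposes std::hardware_destructive_interference_size in <new> as a portable stand-in for the hard-coded 64 above, though compiler support arrived late.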
q-n-a  stackex  trivia  systems  programming  c(pp)  assembly  howto  caching 
may 2019 by nhaliday
c++ - What is the difference between #include <filename> and #include "filename"? - Stack Overflow
In practice, the difference is in the location where the preprocessor searches for the included file.

For #include <filename> the preprocessor searches in an implementation dependent manner, normally in search directories pre-designated by the compiler/IDE. This method is normally used to include standard library header files.

For #include "filename" the preprocessor searches first in the same directory as the file containing the directive, and then follows the search path used for the #include <filename> form. This method is normally used to include programmer-defined header files.
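
A minimal illustration (the quoted project header path is made up):

#include <vector>          // angle form: searched only in the compiler's pre-designated include directories
#include "util/config.h"   // quoted form: searched next to this source file first, then falls back to the <...> path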
q-n-a  stackex  programming  c(pp)  trivia  pls  code-organizing 
may 2019 by nhaliday
One week of bugs
If I had to guess, I'd say I probably work around hundreds of bugs in an average week, and thousands in a bad week. It's not unusual for me to run into a hundred new bugs in a single week. But I often get skepticism when I mention that I run into multiple new (to me) bugs per day, and that this is inevitable if we don't change how we write tests. Well, here's a log of one week of bugs, limited to bugs that were new to me that week. After a brief description of the bugs, I'll talk about what we can do to improve the situation. The obvious answer is to spend more effort on testing, but everyone already knows we should do that and no one does it. That doesn't mean it's hopeless, though.

...

Here's where I'm supposed to write an appeal to take testing more seriously and put real effort into it. But we all know that's not going to work. It would take 90k LOC of tests to get Julia to be as well tested as a poorly tested prototype (falsely assuming linear complexity in size). That's two person-years of work, not even including time to debug and fix bugs (which probably brings it closer to four or five years). Who's going to do that? No one. Writing tests is like writing documentation. Everyone already knows you should do it. Telling people they should do it adds zero information.

Given that people aren't going to put any effort into testing, what's the best way to do it?

Property-based testing. Generative testing. Random testing. Concolic Testing (which was done long before the term was coined). Static analysis. Fuzzing. Statistical bug finding. There are lots of options. Some of them are actually the same thing because the terminology we use is inconsistent and buggy. I'm going to arbitrarily pick one to talk about, but they're all worth looking into.
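
For a taste of the property-based style, here is a sketch using the C++ rapidcheck library's canonical reverse-twice example (assumes rapidcheck is installed and linked):

#include <algorithm>
#include <vector>
#include <rapidcheck.h>

int main() {
    // rapidcheck generates many random vectors and shrinks any counterexample it finds.
    rc::check("reversing a vector twice yields the original vector",
              [](const std::vector<int> &l0) {
                  auto l1 = l0;
                  std::reverse(begin(l1), end(l1));
                  std::reverse(begin(l1), end(l1));
                  RC_ASSERT(l0 == l1);
              });
    return 0;
}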

...

There are a lot of great resources out there, but if you're just getting started, I found this description of types of fuzzers to be one of the most helpful (and simplest) things I've read.

John Regehr has a udacity course on software testing. I haven't worked through it yet (Pablo Torres just pointed to it), but given the quality of Dr. Regehr's writing, I expect the course to be good.

For more on my perspective on testing, there's this.

Everything's broken and nobody's upset: https://www.hanselman.com/blog/EverythingsBrokenAndNobodysUpset.aspx
https://news.ycombinator.com/item?id=4531549

https://hypothesis.works/articles/the-purpose-of-hypothesis/
From the perspective of a user, the purpose of Hypothesis is to make it easier for you to write better tests.

From my perspective as the primary author, that is of course also a purpose of Hypothesis. I write a lot of code, it needs testing, and the idea of trying to do that without Hypothesis has become nearly unthinkable.

But, on a large scale, the true purpose of Hypothesis is to drag the world kicking and screaming into a new and terrifying age of high quality software.

Software is everywhere. We have built a civilization on it, and it’s only getting more prevalent as more services move online and embedded and “internet of things” devices become cheaper and more common.

Software is also terrible. It’s buggy, it’s insecure, and it’s rarely well thought out.

This combination is clearly a recipe for disaster.

The state of software testing is even worse. It’s uncontroversial at this point that you should be testing your code, but it’s a rare codebase whose authors could honestly claim that they feel its testing is sufficient.

Much of the problem here is that it’s too hard to write good tests. Tests take up a vast quantity of development time, but they mostly just laboriously encode exactly the same assumptions and fallacies that the authors had when they wrote the code, so they miss exactly the same bugs that you missed when you wrote the code.

Preventing the Collapse of Civilization [video]: https://news.ycombinator.com/item?id=19945452
- Jonathan Blow

NB: DevGAMM is a game industry conference

- loss of technological knowledge (Antikythera mechanism, aqueducts, etc.)
- hardware driving most gains, not software
- software's actually less robust, often poorly designed and overengineered these days
- *list of bugs he's encountered recently*:
https://youtu.be/pW-SOdj4Kkk?t=1387
- knowledge of trivia becomes [ed.: missing the word "valued" here, I think?] more than general, deep knowledge
- does at least acknowledge value of DRY, reusing code, abstraction saving dev time
techtariat  dan-luu  tech  software  error  list  debugging  linux  github  robust  checking  oss  troll  lol  aphorism  webapp  email  google  facebook  games  julia  pls  compilers  communication  mooc  browser  rust  programming  engineering  random  jargon  formal-methods  expert-experience  prof  c(pp)  course  correctness  hn  commentary  video  presentation  carmack  pragmatic  contrarianism  pessimism  sv  unix  rhetoric  critique  worrydream  hardware  performance  trends  multiplicative  roots  impact  comparison  history  iron-age  the-classics  mediterranean  conquest-empire  gibbon  technology  the-world-is-just-atoms  flux-stasis  increase-decrease  graphics  hmm  idk  systems  os  abstraction  intricacy  worse-is-better/the-right-thing  build-packaging  microsoft  osx  apple  reflection  assembly  things  knowledge  detail-architecture  thick-thin  trivia  info-dynamics  caching  frameworks  generalization  systematic-ad-hoc  universalism-particularism  analytical-holistic  structure  tainter  libraries  tradeoffs  prepping  threat-modeling  network-structure  writing  risk  local-glob 
may 2019 by nhaliday
Dump include paths from g++ - Stack Overflow
g++ -E -x c++ - -v < /dev/null      # -E: stop after preprocessing; -x c++: treat stdin ("-") as C++; -v: print the include search paths; /dev/null supplies an empty input
clang++ -E -x c++ - -v < /dev/null  # same flags work with clang
q-n-a  stackex  trivia  howto  programming  c(pp)  debugging 
may 2019 by nhaliday
What’s In A Name? Understanding Classical Music Titles | Parker Symphony Orchestra
Composition Type:
Symphony, sonata, piano quintet, concerto – these are all composition types. Classical music composers wrote works in many of these forms and often the same composer wrote multiple pieces in the same type. This is why saying you enjoy listening to “the Serenade” or “the Concerto” or “the Mazurka” is confusing. Even using the composer name often does not narrow down which piece you are referring to. For example, it is not enough to say “Beethoven Symphony”. He wrote 9 of them!

Generic Name:
Compositions often have a generic name that can describe the work’s composition type, key signature, featured instruments, etc. This could be something as simple as Symphony No. 2 (meaning the 2nd symphony written by that composer), Minuet in G major (minuet being a type of dance), or Concerto for Two Cellos (an orchestral work featuring two cellos as soloists). The problem with referring to a piece by the generic name, even along with the composer, is that, again, that may not be enough to identify the exact work. While Symphony No. 2 by Mahler is sufficient since it is his only 2nd symphony, Minuet by Bach is not since he wrote many minuets over his lifetime.

Non-Generic Names:
Non-generic names, or classical music nicknames and sub-titles, are often more well-known than generic names. They can even be so famous that the composer name is not necessary to clarify which piece you are referring to. Eine Kleine Nachtmusik, the Trout Quintet, and the Surprise Symphony are all examples of non-generic names.

Who gave classical music works their non-generic names? Sometimes the composer added a subsidiary name to a work. These are called sub-titles and are considered part of the work’s formal title. The sub-title for Tchaikovsky’s Symphony No. 6 in B minor is “Pathetique”.

A nickname, on the other hand, is not part of the official title and was not assigned by the composer. It is a name that has become associated with a work. For example, Bach’s “Six Concerts à plusieurs instruments” are commonly known as the Brandenburg Concertos because they were presented as a gift to the Margrave of Brandenburg. The name was given by Bach’s biographer, Philipp Spitta, and it stuck. Mozart’s Symphony No. 41 earned the nickname Jupiter most likely because of its exuberant energy and grand scale. Schubert’s Symphony No. 8 is known as the Unfinished Symphony because he died and left it with only 2 complete movements.

In many cases, referring to a work by its non-generic name, especially with the composer name, is enough to identify a piece. Most classical music fans know which work you are referring to when you say “Beethoven’s Eroica Symphony”.

Non-Numeric Titles:
Some classical compositions do not have a generic name, but rather a non-numeric title. These are formal titles given by the composer that do not follow a sequential numeric naming convention. Works that fall into this category include the Symphony Fantastique by Berlioz, Handel’s Messiah, and Also Sprach Zarathustra by Richard Strauss.

Opus Number:
Opus numbers, abbreviated op., are used to distinguish compositions with similar titles and indicate the chronological order of production. Some composers assigned numbers to their own works, but many were inconsistent in their methods. As a result, some composers’ works are referred to with a catalogue number assigned by musicologists. The various catalogue-number systems commonly used include Köchel-Verzeichnis for Mozart (K) and Bach-Werke-Verzeichnis (BWV).

https://music.stackexchange.com/questions/6688/why-is-the-key-included-in-classical-music-titles
I was always curious why classical composers use names like this: Étude in E-flat minor (Frédéric Chopin) or Missa in G major (Johann Sebastian Bach). Is this from the scales of these songs? Weren't they blocked from ever using this scale again? Why didn't they create unique titles?

--

Using a key did not prohibit a composer from using that key again (there are only thirty keys). Using a key did not prohibit them from using the same key on a work with the same form either. Bach wrote over thirty Prelude and Fugues. Four of these were Prelude and Fugue in A minor. They are now differentiated by their own BWV catalog numbers (assigned in 1950). Many pieces did have unique titles, but with the amounts of pieces the composers composed, unique titles were difficult to come up with. Also, most pieces had no lyrics. It is much easier to come up with a title when there are lyrics. So, they turned to this technique. It was used frequently during the Common Practice Period.

https://fredericksymphony.org/how-are-classical-music-compositions-named/
explanation  music  classical  trivia  duplication  q-n-a  stackex  music-theory  init  notation  multi  jargon 
may 2019 by nhaliday
Information Processing: Moore's Law and AI
Hint to technocratic planners: invest more in physicists, chemists, and materials scientists. The recent explosion in value from technology has been driven by physical science -- software gets way too much credit. From the former we got a factor of a million or more in compute power, data storage, and bandwidth. From the latter, we gained (perhaps) an order of magnitude or two in effectiveness: how much better are current OSes and programming languages than Unix and C, both of which are ~50 years old now?

...

Of relevance to this discussion: a big chunk of AlphaGo's performance improvement over other Go programs is due to raw compute power (link via Jess Riedel). The vertical axis is ELO score. You can see that without multi-GPU compute, AlphaGo has relatively pedestrian strength.
hsu  scitariat  comparison  software  hardware  performance  sv  tech  trends  ai  machine-learning  deep-learning  deepgoog  google  roots  impact  hard-tech  multiplicative  the-world-is-just-atoms  technology  trivia  cocktail  big-picture  hi-order-bits 
may 2019 by nhaliday
c++ - Pointer to class data member "::*" - Stack Overflow
[ed.: First encountered in emil-e/rapidcheck (gen::set).]

Is this checked statically? That is, does the compiler allow me to pass an arbitrary value, or does it check that every passed pointer to member pFooMember is created using &T::fooMember? I think it's feasible to do that?
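
A minimal sketch of the syntax in question (Foo and its members are made up):

struct Foo { int x = 1; int y = 2; };

int main() {
    int Foo::*pm = &Foo::x;  // "pointer to the int member x of Foo" -- not bound to any object yet
    Foo  f;
    Foo *p = &f;
    int a = f.*pm;           // reads f.x  (1)
    int b = p->*pm;          // same member, through a pointer to the object
    pm = &Foo::y;            // can be re-seated to any other int member of Foo
    int c = f.*pm;           // now reads f.y  (2)
    return a + b + c;        // 4
}

And yes, it is checked statically: &Foo::x has type int Foo::*, so the compiler rejects anything that isn't a pointer to an int member of Foo (or of a base of Foo).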
q-n-a  stackex  programming  pls  c(pp)  gotchas  weird  trivia  hmm  explanation  types  oop  static-dynamic  direct-indirect  atoms  lexical 
may 2019 by nhaliday
language design - Why does C++ need a separate header file? - Stack Overflow
C++ does it that way because C did it that way, so the real question is why did C do it that way? Wikipedia speaks a little to this.

Newer compiled languages (such as Java, C#) do not use forward declarations; identifiers are recognized automatically from source files and read directly from dynamic library symbols. This means header files are not needed.
q-n-a  stackex  programming  pls  c(pp)  compilers  trivia  roots  yak-shaving  flux-stasis  comparison  jvm  code-organizing 
may 2019 by nhaliday
c++ - Why are forward declarations necessary? - Stack Overflow
C++, while created almost 17 years later, was defined as a superset of C, and therefore had to use the same mechanism.

By the time Java rolled around in 1995, average computers had enough memory that holding a symbolic table, even for a complex project, was no longer a substantial burden. And Java wasn't designed to be backwards-compatible with C, so it had no need to adopt a legacy mechanism. C# was similarly unencumbered.

As a result, their designers chose to shift the burden of compartmentalizing symbolic declaration back off the programmer and put it on the computer again, since its cost in proportion to the total effort of compilation was minimal.
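
A small sketch of the mechanism described here and in the previous entry (file and class names are made up):

// widget.h -- declarations only; any .cpp that includes this can compile against Widget
class Gadget;                  // forward declaration: the name is known, the layout is not
class Widget {
public:
    void attach(Gadget *g);    // a pointer suffices here; Gadget's size isn't needed yet
};

// widget.cpp -- the one place the definition lives
#include "widget.h"
#include "gadget.h"            // full definition required to actually use a Gadget
void Widget::attach(Gadget *g) { /* ... */ }

Java and C# skip this arrangement because, as the answers note, their compilers read declarations directly out of the compiled class files/assemblies.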
q-n-a  stackex  programming  pls  c(pp)  trivia  yak-shaving  roots  compilers  flux-stasis  comparison  jvm  static-dynamic 
may 2019 by nhaliday
ellipsis - Why is the subject omitted in sentences like "Thought you'd never ask"? - English Language & Usage Stack Exchange
This is due to a phenomenon that occurs in intimate conversational spoken English called "Conversational Deletion". It was discussed and exemplified quite thoroughly in a 1974 PhD dissertation in linguistics at the University of Michigan that I had the honor of directing.

Thrasher, Randolph H. Jr. 1974. Shouldn't Ignore These Strings: A Study of Conversational Deletion, Ph.D. Dissertation, Linguistics, University of Michigan, Ann Arbor

...

"The phenomenon can be viewed as erosion of the beginning of sentences, deleting (some, but not all) articles, dummies, auxiliaries, possessives, conditional if, and [most relevantly for this discussion -jl] subject pronouns. But it only erodes up to a point, and only in some cases.

"Whatever is exposed (in sentence initial position) can be swept away. If erosion of the first element exposes another vulnerable element, this too may be eroded. The process continues until a hard (non-vulnerable) element is encountered." [ibidem p.9]

Dad calls this and some similar omissions "Kiplinger style": https://en.wikipedia.org/wiki/Kiplinger
q-n-a  stackex  anglo  language  writing  speaking  linguistics  thesis  trivia  cocktail  parsimony  compression  multi  wiki  organization  technical-writing  protocol-metadata  simplification-normalization 
march 2019 by nhaliday
A cross-language perspective on speech information rate
Figure 2.

English (IR_EN = 1.08) shows a higher Information Rate than Vietnamese (IR_VI = 1). On the contrary, Japanese exhibits the lowest IR_L value of the sample. Moreover, one can observe that several languages may reach very close IR_L with different encoding strategies: Spanish is characterized by a fast rate of low-density syllables while Mandarin exhibits a 34% slower syllabic rate with syllables ‘denser’ by a factor of 49%. Finally, their Information Rates differ only by 4%.

Is spoken English more efficient than other languages?: https://linguistics.stackexchange.com/questions/2550/is-spoken-english-more-efficient-than-other-languages
As a translator, I can assure you that English is no more efficient than other languages.
--
[some comments on a different answer:]
Russian, when spoken, is somewhat less efficient than English, and that is for sure. No one who has ever worked as an interpreter can deny it. You can convey somewhat more information in English than in Russian within an hour. The English language is not constrained by the rigid case and gender systems of the Russian language, which somewhat reduce the information density of the Russian language. The rules of the Russian language force the speaker to incorporate sometimes unnecessary details in his speech, which can be problematic for interpreters – user74809 Nov 12 '18 at 12:48
But in writing, though, I do think that Russian is somewhat superior. However, when it comes to common daily speech, I do not think that anyone can claim that English is less efficient than Russian. As a matter of fact, I also find Russian to be somewhat more mentally taxing than English when interpreting. I mean, anyone who has lived in the world of Russian and then moved to the world of English is certain to notice that English is somewhat more efficient in everyday life. It is not a night-and-day difference, but it is certainly noticeable. – user74809 Nov 12 '18 at 13:01
...
By the way, I am not knocking Russian. I love Russian, it is my mother tongue and the only language, in which I sound like a native speaker. I mean, I still have a pretty thick Russian accent. I am not losing it anytime soon, if ever. But like I said, living in both worlds, the Moscow world and the Washington D.C. world, I do notice that English is objectively more efficient, even if I am myself not as efficient in it as most other people. – user74809 Nov 12 '18 at 13:40

Do most languages need more space than English?: https://english.stackexchange.com/questions/2998/do-most-languages-need-more-space-than-english
Speaking as a translator, I can share a few rules of thumb that are popular in our profession:
- Hebrew texts are usually shorter than their English equivalents by approximately 1/3. To a large extent, that can be attributed to cheating, what with no vowels and all.
- Spanish, Portuguese and French (I guess we can just settle on Romance) texts are longer than their English counterparts by about 1/5 to 1/4.
- Scandinavian languages are pretty much on par with English. Swedish is a tiny bit more compact.
- Whether or not Russian (and by extension, Ukrainian and Belorussian) is more compact than English is subject to heated debate, and if you ask five people, you'll be presented with six different opinions. However, everybody seems to agree that the difference is just a couple percent, be it this way or the other.

--

A point of reference from the website I maintain. The files where we store the translations have the following sizes:

English: 200k
Portuguese: 208k
Spanish: 209k
German: 219k
And the translations are out of date. That is, there are strings in the English file that aren't yet in the other files.

For Chinese, the situation is a bit different because the character encoding comes into play. Chinese text will have shorter strings, because most words are one or two characters, but each character takes 3–4 bytes (for UTF-8 encoding), so each word is 3–12 bytes long on average. So visually the text takes less space but in terms of the information exchanged it uses more space. This Language Log post suggests that if you account for the encoding and remove redundancy in the data using compression you find that English is slightly more efficient than Chinese.

Is English more efficient than Chinese after all?: https://languagelog.ldc.upenn.edu/nll/?p=93
[Executive summary: Who knows?]

This follows up on a series of earlier posts about the comparative efficiency — in terms of text size — of different languages ("One world, how many bytes?", 8/5/2005; "Comparing communication efficiency across languages", 4/4/2008; "Mailbag: comparative communication efficiency", 4/5/2008). Hinrich Schütze wrote:
pdf  study  language  foreign-lang  linguistics  pro-rata  bits  communication  efficiency  density  anglo  japan  asia  china  mediterranean  data  multi  comparison  writing  meta:reading  measure  compression  empirical  evidence-based  experiment  analysis  chart  trivia  cocktail  org:edu 
february 2019 by nhaliday
WHO | Priority environment and health risks
also: http://www.who.int/heli/risks/vectors/vector/en/

Environmental factors are a root cause of a significant disease burden, particularly in developing countries. An estimated 25% of death and disease globally, and nearly 35% in regions such as sub-Saharan Africa, is linked to environmental hazards. Some key areas of risk include the following:

- Unsafe water, poor sanitation and hygiene kill an estimated 1.7 million people annually, particularly as a result of diarrhoeal disease.
- Indoor smoke from solid fuels kills an estimated 1.6 million people annually due to respiratory diseases.
- Malaria kills over 1.2 million people annually, mostly African children under the age of five. Poorly designed irrigation and water systems, inadequate housing, poor waste disposal and water storage, deforestation and loss of biodiversity, all may be contributing factors to the most common vector-borne diseases including malaria, dengue and leishmaniasis.
- Urban air pollution generated by vehicles, industries and energy production kills approximately 800 000 people annually.
- Unintentional acute poisonings kill 355 000 people globally each year. In developing countries, where two-thirds of these deaths occur, such poisonings are associated strongly with excessive exposure to, and inappropriate use of, toxic chemicals and pesticides present in occupational and/or domestic environments.
- Climate change impacts including more extreme weather events, changed patterns of disease and effects on agricultural production, are estimated to cause over 150 000 deaths annually.

ed.:
Note the high point at human origin (Africa, Middle East) and Asia. Low points in New World and Europe/Russia. Probably key factor in explaining human psychological variation (Haidt axes, individualism-collectivism, kinship structure, etc.). E.g., compare Islam/Judaism (circumcision, food preparation/hygiene rules) and Christianity (orthodoxy more than orthopraxy, no arbitrary practices for group-marking).

I wonder if the dietary and hygiene laws of Christianity get up-regulated in higher parasite load places (the US South, Middle Eastern Christianity, etc.)?

Also the reason for this variation probably basically boils down how long local microbes have had time to adapt to the human immune system.

obv. correlation: https://pinboard.in/u:nhaliday/b:074ecdf30c50

Tropical disease: https://en.wikipedia.org/wiki/Tropical_disease
Tropical diseases are diseases that are prevalent in or unique to tropical and subtropical regions.[1] The diseases are less prevalent in temperate climates, due in part to the occurrence of a cold season, which controls the insect population by forcing hibernation. However, many were present in northern Europe and northern America in the 17th and 18th centuries before modern understanding of disease causation. The initial impetus for tropical medicine was to protect the health of colonialists, notably in India under the British Raj.[2] Insects such as mosquitoes and flies are by far the most common disease carrier, or vector. These insects may carry a parasite, bacterium or virus that is infectious to humans and animals. Most often disease is transmitted by an insect "bite", which causes transmission of the infectious agent through subcutaneous blood exchange. Vaccines are not available for most of the diseases listed here, and many do not have cures.

cf. Galton: https://pinboard.in/u:nhaliday/b:f72f8e03e729
org:gov  org:ngo  trivia  maps  data  visualization  pro-rata  demographics  death  disease  spreading  parasites-microbiome  world  developing-world  africa  MENA  asia  china  sinosphere  orient  europe  the-great-west-whale  occident  explanans  individualism-collectivism  n-factor  things  phalanges  roots  values  anthropology  cultural-dynamics  haidt  scitariat  morality  correlation  causation  migration  sapiens  history  antiquity  time  bio  EEA  eden-heaven  religion  christianity  islam  judaism  theos  ideology  database  list  tribalism  us-them  archaeology  environment  nature  climate-change  atmosphere  health  fluid  farmers-and-foragers  age-of-discovery  usa  the-south  speculation  questions  flexibility  epigenetics  diet  food  sanctity-degradation  multi  henrich  kinship  gnon  temperature  immune  investing  cost-benefit  tradeoffs  org:davos 
july 2018 by nhaliday
Does left-handedness occur more in certain ethnic groups than others?
Yes. There are some aboriginal tribes in Australia who have about 70% of their population being left-handed. It’s also more than 50% for some South American tribes.

The reason is the same in both cases: a recent past of extreme aggression with other tribes. Left-handedness is caused by recessive genes, but being left-handed is a boost in hand-to-hand combat against a right-handed opponent (who has usually trained extensively with other right-handed fighters, since that disposition is genetically dominant and right-handers are the majority in most human populations, and so lacks experience against a left-hander). Should a particular tribe go through too many periods of war, its proportion of left-handers will naturally rise. As the enemy tribe's proportion of left-handed people rises as well, there's a point at which the natural advantage they get in fighting dissipates, and it can only climb higher should they continuously find new groups to fight, who are also majority right-handed.

...

So the natural question is: given their advantages in 1-on-1 combat, why doesn’t the percentage grow all the way up to 50% or slightly higher? Because there are COSTS associated with being left-handed, as apparently our neural network is pre-wired towards right-handedness - showing as a reduced life expectancy for lefties. So a mathematical model was proposed to explain their distribution among different societies

THE FIGHTING HYPOTHESIS: STABILITY OF POLYMORPHISM IN HUMAN HANDEDNESS

http://gepv.univ-lille1.fr/downl...

Further, it appears the average left-handedness for humans (~10%) hasn't changed in thousands of years (judging by the paintings of hands on cave walls)

Frequency-dependent maintenance of left handedness in humans.

Handedness frequency over more than 10,000 years

[ed.: Compare with Julius Evola's "left-hand path".]
q-n-a  qra  trivia  cocktail  farmers-and-foragers  history  antiquity  race  demographics  bio  EEA  evolution  context  peace-violence  war  ecology  EGT  unintended-consequences  game-theory  equilibrium  anthropology  cultural-dynamics  sapiens  data  database  trends  cost-benefit  strategy  time-series  art  archaeology  measurement  oscillation  pro-rata  iteration-recursion  gender  male-variability  cliometrics  roots  explanation  explanans  correlation  causation  branches 
july 2018 by nhaliday
Psychopathy by U.S. State by Ryan Murphy :: SSRN
Rentfrow et al. (2013) constructs a cross-section of the “Big Five” personality traits and demonstrates their relationship with outcomes variables for the continental United States and the District of Columbia. Hyatt et al. (Forthcoming) creates a means of describing psychopathy in terms of the Big Five personality traits. When these two findings are combined, a state-level estimate of psychopathy is produced. Among the typical predictions made regarding psychopathy, the variable with the closest univariate relationship with this new statistical aggregate is the percentage of the population in the state living in an urban area. There is not a clear univariate relationship with homicide rates.

Washington, D.C., harbors the greatest share of psychopaths in the US, "a fact that can be readily explained either by its very high population density or by the type of person who may be drawn to a literal seat of power."
study  psychology  cog-psych  personality  disease  psychiatry  guilt-shame  the-devil  usa  the-south  virginia-DC  government  politics  institutions  leadership  power  trivia  cocktail  pro-rata  maps  within-group  geography  urban-rural  correlation  northeast  population  density  sociology  stylized-facts  data  database  objektbuch  psych-architecture 
june 2018 by nhaliday
Is the human brain analog or digital? - Quora
The brain is neither analog nor digital, but works using a signal processing paradigm that has some properties in common with both.
 
Unlike a digital computer, the brain does not use binary logic or binary addressable memory, and it does not perform binary arithmetic. Information in the brain is represented in terms of statistical approximations and estimations rather than exact values. The brain is also non-deterministic and cannot replay instruction sequences with error-free precision. So in all these ways, the brain is definitely not "digital".
 
At the same time, the signals sent around the brain are "either-or" states that are similar to binary. A neuron fires or it does not. These all-or-nothing pulses are the basic language of the brain. So in this sense, the brain is computing using something like binary signals. Instead of 1s and 0s, or "on" and "off", the brain uses "spike" or "no spike" (referring to the firing of a neuron).
q-n-a  qra  expert-experience  neuro  neuro-nitgrit  analogy  deep-learning  nature  discrete  smoothness  IEEE  bits  coding-theory  communication  trivia  bio  volo-avolo  causation  random  order-disorder  ems  models  methodology  abstraction  nitty-gritty  computation  physics  electromag  scale  coarse-fine 
april 2018 by nhaliday
Bragging Rights: Does Corporate Boasting Imply Value Creation? by Pratik Kothari, Don M. Chance, Stephen P. Ferris :: SSRN
We examine all S&P 500 firms over 1999-2014 that publicly characterize their annual performance with extreme positive language. We find that only 18% of such firms increase shareholder value, while nearly 75% have insignificant performance, and the remaining 7% actually destroy shareholder value. Our evidence suggests that firms often base their positive claims on high raw returns or strong relative accounting performance. In comparison to firms that generate positive abnormal returns without boasting, our sample firms tend to have superior accounting performance. These results show that boasting about performance is rarely associated with value creation and is consistent with executive narcissism.
study  economics  business  management  stylized-facts  trivia  leadership  finance  investing  objektbuch  correlation  language  emotion 
april 2018 by nhaliday
Which Countries Create the Most Ocean Trash? - WSJ
China and Indonesia Are Top Sources of Plastic Garbage Reaching Oceans, Researchers Say
news  org:rec  china  asia  developing-world  environment  oceans  attaq  trivia  cocktail  data  visualization  maps  world  scale  top-n  ranking 
january 2018 by nhaliday
orbit - Best approximation for Sun's trajectory around galactic center? - Astronomy Stack Exchange
The Sun orbits in the Galactic potential. The motion is complex; it takes about 230 million years to make a circuit with an orbital speed of around 220 km/s, but at the same time it oscillates up and down with respect to the Galactic plane every ∼70 million years and also wobbles in and out every ∼150 million years (this is called epicyclic motion). The spatial amplitudes of these oscillations are around 100 pc vertically and 300 pc in the radial direction inwards and outwards around an average orbital radius (I am unable to locate a precise figure for the latter).
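
[ed.: Back-of-the-envelope check, not from the answer: the quoted period and speed imply a galactocentric orbital radius of roughly 8 kpc, which is the usual figure for the Sun.]

import math

# Orbital radius implied by a ~230 Myr period at ~220 km/s, assuming a roughly circular orbit.
SECONDS_PER_YEAR = 3.156e7
PARSEC_M = 3.086e16

period_s = 230e6 * SECONDS_PER_YEAR   # ~230 million years in seconds
speed_ms = 220e3                      # ~220 km/s in m/s

radius_m = speed_ms * period_s / (2 * math.pi)
print(f"implied orbital radius: {radius_m / (1e3 * PARSEC_M):.1f} kpc")  # ~8.2 kpc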
nibble  q-n-a  overflow  space  oscillation  time  cycles  spatial  trivia  manifolds 
december 2017 by nhaliday
Why do stars twinkle?
According to many astronomers and educators, twinkle (stellar scintillation) is caused by atmospheric structure that works like ordinary lenses and prisms. Pockets of variable temperature - and hence index of refraction - randomly shift and focus starlight, perceived by eye as changes in brightness. Pockets also disperse colors like prisms, explaining the flashes of color often seen in bright stars. Stars appear to twinkle more than planets because they are points of light, whereas the twinkling points on planetary disks are averaged to a uniform appearance. Below, figure 1 is a simulation in glass of the kind of turbulence structure posited in the lens-and-prism theory of stellar scintillation, shown over the Penrose tile floor to demonstrate the random lensing effects.

However appealing and ubiquitous on the internet, this popular explanation is wrong, and my aim is to debunk the myth. This research is mostly about showing that the lens-and-prism theory just doesn't work, but I also have a stellar list of references that explain the actual cause of scintillation, starting with two classic papers by C.G. Little and S. Chandrasekhar.
nibble  org:junk  space  sky  visuo  illusion  explanans  physics  electromag  trivia  cocktail  critique  contrarianism  explanation  waves  simulation  experiment  hmm  magnitude  atmosphere  roots  idk 
december 2017 by nhaliday
light - Why doesn't the moon twinkle? - Astronomy Stack Exchange
As you mention, when light enters our atmosphere, it goes through several parcels of gas with varying density, temperature, pressure, and humidity. These differences make the refractive index of the parcels different, and since they move around (the scientific term for air moving around is "wind"), the light rays take slightly different paths through the atmosphere.

Stars are point sources
…the Moon is not
nibble  q-n-a  overflow  space  physics  trivia  cocktail  navigation  sky  visuo  illusion  measure  random  electromag  signal-noise  flux-stasis  explanation  explanans  magnitude  atmosphere  roots 
december 2017 by nhaliday
Behaving Discretely: Heuristic Thinking in the Emergency Department
I find compelling evidence of heuristic thinking in this setting: patients arriving in the emergency department just after their 40th birthday are roughly 10% more likely to be tested for and 20% more likely to be diagnosed with ischemic heart disease (IHD) than patients arriving just before this date, despite the fact that the incidence of heart disease increases smoothly with age.

Figure 1: Proportion of ED patients tested for heart attack
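
[ed.: A minimal sketch of the comparison behind that figure, with invented numbers (not the paper's data or code): testing rates in narrow age bands just below and just above the 40th-birthday cutoff, where a smooth age-risk relationship would predict essentially no jump.]

# Hypothetical counts, invented purely to illustrate the threshold comparison.
just_under_40 = {"patients": 5000, "tested": 450}   # e.g. ages 39.75-40.00
just_over_40 = {"patients": 5000, "tested": 495}    # e.g. ages 40.00-40.25

rate_under = just_under_40["tested"] / just_under_40["patients"]
rate_over = just_over_40["tested"] / just_over_40["patients"]

print(f"testing rate just under 40: {rate_under:.1%}")                   # 9.0%
print(f"testing rate just over 40:  {rate_over:.1%}")                    # 9.9%
print(f"relative jump at the cutoff: {rate_over / rate_under - 1:.0%}")  # ~10%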
pdf  study  economics  behavioral-econ  field-study  biases  heuristic  error  healthcare  medicine  meta:medicine  age-generation  aging  cardio  bounded-cognition  shift  trivia  cocktail  pro-rata 
december 2017 by nhaliday
Asabiyyah in Steve King’s Iowa – Gene Expression
What will happen if and when institutions collapse? I do not believe much of America has the social capital of Orange City, Iowa. We have become rational actors, utility optimizers. To some extent, bureaucratic corporate life demands us to behave in this manner. Individual attainment and achievement are lionized, while sacrifice in the public good is the lot of the exceptional saint.
gnxp  scitariat  discussion  usa  culture  society  cultural-dynamics  american-nations  cohesion  trust  social-capital  trends  institutions  data  education  human-capital  britain  anglo  europe  germanic  nordic  individualism-collectivism  values  language  trivia  cocktail  shakespeare  religion  christianity  protestant-catholic  community 
december 2017 by nhaliday
How sweet it is! | West Hunter
This has probably been going on for a long, long, time. It may well go back before anatomically modern humans. I say that because of the greater honeyguide, which guides people to beehives in Africa. After we take the honey, the honeyguide eats the grubs and wax. A guiding bird attracts your attention with wavering, chattering ‘tya’ notes compounded with peeps and pipes. It flies towards an occupied hive and then stops and calls again. It has only been seen to guide humans.

I would not be surprised to find that this symbiotic relationship is far older than the domestication of dogs. But it is not domestication: we certainly don’t control their reproduction. I wouldn’t count on it, but if you could determine the genetic basis of this signaling behavior, you might be able to get an idea of how old it is.

Honeyguides may be mankind’s oldest buds, but they’re nasty little creatures: brood parasites, like cuckoos.
west-hunter  scitariat  discussion  trivia  cocktail  africa  speculation  history  antiquity  sapiens  farmers-and-foragers  food  nature  domestication  cooperate-defect  ed-yong  org:sci  popsci  survival  outdoors 
december 2017 by nhaliday
The Long-run Effects of Agricultural Productivity on Conflict, 1400-1900∗
This paper provides evidence of the long-run effects of a permanent increase in agricultural productivity on conflict. We construct a newly digitized and geo-referenced dataset of battles in Europe, the Near East and North Africa covering the period between 1400 and 1900 CE. For variation in permanent improvements in agricultural productivity, we exploit the introduction of potatoes from the Americas to the Old World after the Columbian Exchange. We find that the introduction of potatoes permanently reduced conflict for roughly two centuries. The results are driven by a reduction in civil conflicts

http://marginalrevolution.com/marginalrevolution/2017/12/monday-assorted-links-135.html#comment-159746885
#4 An obvious counterfactual is of course the potato blight (1844 and beyond) in Europe. Here’s the Wikipedia page ‘revolutions of 1848’ https://en.wikipedia.org/wiki/Revolutions_of_1848
pdf  study  marginal-rev  economics  broad-econ  cliometrics  history  medieval  early-modern  age-of-discovery  branches  innovation  discovery  agriculture  food  econ-productivity  efficiency  natural-experiment  europe  the-great-west-whale  MENA  war  revolution  peace-violence  trivia  cocktail  stylized-facts  usa  endogenous-exogenous  control  geography  cost-benefit  multi  econotariat  links  poast  wiki  reference  events  roots 
december 2017 by nhaliday
Random Thought Depository — digging-holes-in-the-river: This is a video about...
“Much of the science of modern orthodontics is devoted to creating - through rubber bands, wires, and braces - the perfect “overbite.” An overbite refers to the way our top layer of incisors hang over the bottom layer, like a lid on a box. This is the ideal human occlusion. The opposite of an overbite is an “edge-to-edge” bite seen in primates such as chimpanzees, where the top incisors clash against the bottom ones, like a guillotine blade.

What the orthodontists don’t tell you is that the overbite is a very recent aspect of human anatomy and probably results from the way we use our table knives. Based on surviving skeletons, this has only been a “normal” alignment of the human jaw for 200 to 250 years in the Western world. Before that, most human beings had an edge-to-edge bite, comparable to apes. The overbite is not a product of evolution - the time frame is far too short. Rather, it seems likely to be a response to the way we cut our food during our formative years. The person who worked this out is Professor Charles Loring Brace (born 1930), a remarkable American anthropologist whose main intellectual passion was Neanderthal man. Over decades, Brace built up the world’s largest database on the evolution of hominid teeth. He possibly held more ancient human jaws in his hand than anyone else in the twentieth century.

It’s not that your teeth are too big: your jaw is too small: https://aeon.co/ideas/its-not-that-your-teeth-are-too-big-your-jaw-is-too-small
tumblr  social  trivia  cocktail  quotes  dental  embodied  history  medieval  early-modern  sapiens  archaeology  europe  comparison  china  asia  food  multi  news  org:mag  org:popup 
november 2017 by nhaliday
Why ancient Rome kept choosing bizarre and perverted emperors - Vox
Why so many bizarre emperors were able to run a vast empire
Many of these emperors had extremely small circles of advisers who often did the grunt work of running the vast empire. "The number of people who had direct access to the emperor ... was actually rather small," says Ando. The emperors ruled through networks of officials, and those officials were often more competent. They propped up the insanity at the top.

What's more, most people scattered across the vast Roman Empire didn't pay much attention. "It didn't matter how nutty Caligula was," Ando says, "unless he did something crazy with tax policy." While those living in military provinces could have been affected by an emperor's decree, those in far-flung civilian provinces might have barely noticed the change from one emperor to another.

All that underlines the real truth about imperial power in Rome: yes, there were some crazy emperors, and some of the rumors were probably true. But the most bizarre thing about the Roman Empire wasn't the emperors — it was the political structure that made them so powerful in the first place.
news  org:data  org:lite  history  iron-age  mediterranean  the-classics  trivia  conquest-empire  government  polisci  power  leadership  prudence  list  top-n  people  statesmen  institutions  organizing  antidemos  regression-to-mean  big-peeps  benevolence  alignment 
november 2017 by nhaliday
Hyperbolic angle - Wikipedia
A unit circle x^2 + y^2 = 1 has a circular sector with an area half of the circular angle in radians. Analogously, a unit hyperbola x^2 - y^2 = 1 has a hyperbolic sector with an area half of the hyperbolic angle.
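
[ed.: A short standard derivation of the hyperbolic half of that statement (not quoted from the article), parametrizing the unit hyperbola by the hyperbolic angle and using the Green's-theorem sector-area formula:]

% area of the sector swept from (1, 0) to (\cosh u, \sinh u) on x^2 - y^2 = 1
\[
A(u) = \frac{1}{2}\int_{0}^{u}\left(x\,\frac{dy}{dt} - y\,\frac{dx}{dt}\right)dt
     = \frac{1}{2}\int_{0}^{u}\left(\cosh^{2}t - \sinh^{2}t\right)dt
     = \frac{u}{2}
\]
% exactly parallel to the circular case x = \cos t, y = \sin t, which gives A(\theta) = \theta/2.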
nibble  math  trivia  wiki  reference  physics  relativity  concept  atoms  geometry  ground-up  characterization  measure  definition  plots  calculation  nitty-gritty  direction  metrics  manifolds 
november 2017 by nhaliday
The Evil Dead | West Hunter
Someone asked me to go over a chapter he wrote, about the impact of certain customs on human health. One of them was the health advantages of quick burial: the problem is, usually there aren’t any.   People seem to think that the organisms causing decomposition are pathogenic, but they’re not.  People killed by trauma (earthquakes,  floods, bullets) are dead enough, but not a threat.  Sometimes, the body of someone that died of an infectious disease is contagious – smallpox scabs have been known to remain infectious for a long, long time – but most causative agents are unable to survive for long after the host’s death. Now if you’re dissecting someone,  especially if they’re fresh, you probably don’t want to nick yourself with the scalpel – but if you just walk past the corpse and refrain from playing with it, you’re usually OK.
west-hunter  scitariat  ideas  trivia  death  embodied  disease  parasites-microbiome  spreading  public-health  epidemiology  medicine  sanctity-degradation  regularizer 
november 2017 by nhaliday
Religion in ancient Rome - Wikipedia
Religious persecution in the Roman Empire: https://en.wikipedia.org/wiki/Religious_persecution_in_the_Roman_Empire
The religion of the Christians and Jews was monotheistic in contrast to the polytheism of the Romans.[16] The Romans tended towards syncretism, seeing the same gods under different names in different places of the Empire. This being so, they were generally tolerant and accommodating towards new deities and the religious experiences of other peoples who formed part of their wider Empire.[17] This general tolerance was not extended to religions that were hostile to the state nor any that claimed exclusive rights to religious beliefs and practice.[17]

By its very nature the exclusive faith of the Jews and Christians set them apart from other people, but whereas the former group was in the main contained within a single national, ethnic grouping, in the Holy Land and Jewish diaspora—the non-Jewish adherents of the sect such as Proselytes and God-fearers being considered negligible—the latter was active and successful in seeking converts for the new religion and made universal claims not limited to a single geographical area.[17] Whereas the Masoretic Text, of which the earliest surviving copy dates from the 9th century AD, teaches that "the Gods of the gentiles are nothing", the corresponding passage in the Greek Septuagint, used by the early Christian Church, asserted that "all the gods of the heathens are devils."[18] The same gods whom the Romans believed had protected and blessed their city and its wider empire during the many centuries they had been worshipped were now demonized[19] by the early Christian Church.[20][21]

Persecution of Christians in the Roman Empire: https://en.wikipedia.org/wiki/Persecution_of_Christians_in_the_Roman_Empire
"The exclusive sovereignty of Christ clashed with Caesar's claims to his own exclusive sovereignty."[4]:87 The Roman empire practiced religious syncretism and did not demand loyalty to one god, but they did demand preeminent loyalty to the state, and this was expected to be demonstrated through the practices of the state religion with numerous feast and festival days throughout the year.[6]:84-90[7] The nature of Christian monotheism prevented Christians from participating in anything involving 'other gods'.[8]:60 Christians did not participate in feast days or processionals or offer sacrifices or light incense to the gods; this produced hostility.[9] They refused to offer incense to the Roman emperor, and in the minds of the people, the "emperor, when viewed as a god, was ... the embodiment of the Roman empire"[10], so Christians were seen as disloyal to both.[4]:87[11]:23 In Rome, "religion could be tolerated only as long as it contributed to the stability of the state" which would "brook no rival for the allegiance of its subjects. The state was the highest good in a union of state and religion."[4]:87 In Christian monotheism the state was not the highest good.[4]:87[8]:60

...

According to the Christian apologist Tertullian, some governors in Africa helped accused Christians secure acquittals or refused to bring them to trial.[15]:117 Overall, Roman governors were more interested in making apostates than martyrs: one proconsul of Asia, Arrius Antoninus, when confronted with a group of voluntary martyrs during one of his assize tours, sent a few to be executed and snapped at the rest, "If you want to die, you wretches, you can use ropes or precipices."[15]:137

...

Political leaders in the Roman Empire were also public cult leaders. Roman religion revolved around public ceremonies and sacrifices; personal belief was not as central an element as it is in many modern faiths. Thus while the private beliefs of Christians may have been largely immaterial to many Roman elites, this public religious practice was in their estimation critical to the social and political well-being of both the local community and the empire as a whole. Honoring tradition in the right way — pietas — was key to stability and success.[25]
history  iron-age  mediterranean  the-classics  wiki  reference  article  letters  religion  theos  institutions  culture  society  lived-experience  gender  christianity  judaism  conquest-empire  time  sequential  social-capital  multi  rot  zeitgeist  domestication  gibbon  alien-character  the-founding  janus  alignment  government  hmm  aphorism  quotes  tradition  duty  leviathan  ideology  ritual  myth  individualism-collectivism  privacy  trivia  cocktail  death  realness  fire  paganism 
november 2017 by nhaliday
Homebrew: List only installed top level formulas - Stack Overflow
Use brew leaves: show installed formulae that are not dependencies of another installed formula.
q-n-a  stackex  howto  yak-shaving  programming  osx  terminal  network-structure  graphs  trivia  tip-of-tongue  workflow  build-packaging 
november 2017 by nhaliday
GPS and Relativity
The nominal GPS configuration consists of a network of 24 satellites in high orbits around the Earth, but up to 30 or so satellites may be on station at any given time. Each satellite in the GPS constellation orbits at an altitude of about 20,000 km from the ground, and has an orbital speed of about 14,000 km/hour (the orbital period is roughly 12 hours - contrary to popular belief, GPS satellites are not in geosynchronous or geostationary orbits). The satellite orbits are distributed so that at least 4 satellites are always visible from any point on the Earth at any given instant (with up to 12 visible at one time). Each satellite carries with it an atomic clock that "ticks" with a nominal accuracy of 1 nanosecond (1 billionth of a second). A GPS receiver in an airplane determines its current position and course by comparing the time signals it receives from the currently visible GPS satellites (usually 6 to 12) and trilaterating on the known positions of each satellite[1]. The precision achieved is remarkable: even a simple hand-held GPS receiver can determine your absolute position on the surface of the Earth to within 5 to 10 meters in only a few seconds. A GPS receiver in a car can give accurate readings of position, speed, and course in real-time!

More sophisticated techniques, like Differential GPS (DGPS) and Real-Time Kinematic (RTK) methods, deliver centimeter-level positions with a few minutes of measurement. Such methods allow use of GPS and related satellite navigation system data to be used for high-precision surveying, autonomous driving, and other applications requiring greater real-time position accuracy than can be achieved with standard GPS receivers.

To achieve this level of precision, the clock ticks from the GPS satellites must be known to an accuracy of 20-30 nanoseconds. However, because the satellites are constantly moving relative to observers on the Earth, effects predicted by the Special and General theories of Relativity must be taken into account to achieve the desired 20-30 nanosecond accuracy.

Because an observer on the ground sees the satellites in motion relative to them, Special Relativity predicts that we should see their clocks ticking more slowly (see the Special Relativity lecture). Special Relativity predicts that the on-board atomic clocks on the satellites should fall behind clocks on the ground by about 7 microseconds per day because of the slower ticking rate due to the time dilation effect of their relative motion [2].

Further, the satellites are in orbits high above the Earth, where the curvature of spacetime due to the Earth's mass is less than it is at the Earth's surface. A prediction of General Relativity is that clocks closer to a massive object will seem to tick more slowly than those located further away (see the Black Holes lecture). As such, when viewed from the surface of the Earth, the clocks on the satellites appear to be ticking faster than identical clocks on the ground. A calculation using General Relativity predicts that the clocks in each GPS satellite should get ahead of ground-based clocks by 45 microseconds per day.

The combination of these two relativistic effects means that the clocks on-board each satellite should tick faster than identical clocks on the ground by about 38 microseconds per day (45-7=38)! This sounds small, but the high precision required of the GPS system demands nanosecond accuracy, and 38 microseconds is 38,000 nanoseconds. If these effects were not properly taken into account, a navigational fix based on the GPS constellation would be false after only 2 minutes, and errors in global positions would continue to accumulate at a rate of about 10 kilometers each day! The whole system would be utterly worthless for navigation in a very short time.
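
[ed.: The quoted numbers can be reproduced to leading order with a few lines of arithmetic. This sketch uses round figures from the text and ignores the finer corrections (orbital eccentricity, the ground clock's rotation with the Earth) that a real GPS error budget includes.]

# Leading-order special- and general-relativistic clock-rate offsets for a GPS satellite.
C = 2.998e8              # speed of light, m/s
GM_EARTH = 3.986e14      # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6        # Earth's radius, m
R_SAT = R_EARTH + 2.0e7  # ~20,000 km altitude
V_SAT = 14_000e3 / 3600  # ~14,000 km/h in m/s
SECONDS_PER_DAY = 86_400

# Special relativity: a moving clock runs slow by roughly v^2 / (2 c^2).
sr_us_per_day = -(V_SAT**2 / (2 * C**2)) * SECONDS_PER_DAY * 1e6

# General relativity: a clock higher in the potential runs fast by roughly GM/c^2 * (1/R_earth - 1/R_sat).
gr_us_per_day = (GM_EARTH / C**2) * (1 / R_EARTH - 1 / R_SAT) * SECONDS_PER_DAY * 1e6

print(f"special relativity: {sr_us_per_day:+.1f} microseconds/day")                 # about -7
print(f"general relativity: {gr_us_per_day:+.1f} microseconds/day")                 # about +46
print(f"net offset:         {sr_us_per_day + gr_us_per_day:+.1f} microseconds/day") # about +38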
nibble  org:junk  org:edu  explanation  trivia  cocktail  physics  gravity  relativity  applications  time  synchrony  speed  space  navigation  technology 
november 2017 by nhaliday
Scotland’s many subcultures - Demos Quarterly
surname method

Turning to our analysis of the YouGov results, it was much to our surprise that the strongest majority support for independence was not among ‘pure’ historic Scots, but among people of Irish Catholic descent: with the latter being only 6 per cent net against independence, and historic Scots 16 per cent against. On the surface one might suppose this group would take its political lead from the Labour Party, for which it votes more consistently than any other group in Scotland.
org:ngo  org:mag  news  britain  anglo  language  trivia  cocktail  exploratory  geography  within-group  poll  values  culture  sociology  elections  polisci  politics  broad-econ  cliometrics  data  demographics  pop-structure 
november 2017 by nhaliday
Indiana Jones, Economist?! - Marginal REVOLUTION
In a stunningly original paper Gojko Barjamovic, Thomas Chaney, Kerem A. Coşar, and Ali Hortaçsu use the gravity model of trade to infer the location of lost cities from Bronze age Assyria! The simplest gravity model makes predictions about trade flows based on the sizes of cities and the distances between them. More complicated models add costs based on geographic barriers. The authors have data from ancient texts on trade flows between all the cities, they know the locations of some of the cities, and they know the geography of the region. Using this data they can invert the gravity model and, triangulating from the known cities, find the lost cities that would best “fit” the model. In other words, by assuming the model is true the authors can predict where the lost cities should be located. To test the idea the authors pretend that some known cities are lost and amazingly the model is able to accurately rediscover those cities.
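
[ed.: A toy version of the inversion idea (my own illustration, not the authors' estimation procedure or data): with a simple gravity model T_ij = S_i * S_j / d_ij^2 (up to a constant), known coordinates for a few cities, and observed flows involving one lost city, choose the lost city's coordinates to minimize the squared log-difference between predicted and observed flows.]

import math

# Three "known" cities with invented coordinates and sizes, plus one "lost" city whose
# location we pretend not to know; its true position is used only to fabricate observed flows.
known = {"A": (0.0, 0.0, 5.0), "B": (4.0, 0.0, 3.0), "C": (0.0, 3.0, 2.0)}  # name: (x, y, size)
lost_size = 1.0
true_lost = (3.0, 2.0)

def gravity_flow(size_i, size_j, xi, yi, xj, yj):
    d2 = (xi - xj) ** 2 + (yi - yj) ** 2
    return size_i * size_j / d2

observed = {name: gravity_flow(s, lost_size, x, y, *true_lost) for name, (x, y, s) in known.items()}

def loss(x, y):
    return sum(
        (math.log(gravity_flow(s, lost_size, kx, ky, x, y)) - math.log(observed[name])) ** 2
        for name, (kx, ky, s) in known.items()
    )

# Coarse grid search over candidate locations (a real analysis would use proper optimization).
best = min(((i / 10, j / 10) for i in range(1, 60) for j in range(1, 60)), key=lambda p: loss(*p))
print(f"recovered location: {best}, true location: {true_lost}")

The real paper does this at scale, with distance costs over actual geography and formal estimation; the point here is only that observed flows plus known anchor cities pin down the unknown coordinates.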
econotariat  marginal-rev  commentary  study  summary  economics  broad-econ  cliometrics  interdisciplinary  letters  history  antiquity  MENA  urban  geography  models  prediction  archaeology  trade  trivia  cocktail  links  cool  tricks  urban-rural  inference  traces 
november 2017 by nhaliday
functions - What are the use cases for different scoping constructs? - Mathematica Stack Exchange
As you mentioned there are many things to consider and a detailed discussion is possible. But here are some rules of thumb that I apply the majority of the time:

Module[{x}, ...] is the safest and may be needed if either

There are existing definitions for x that you want to avoid breaking during the evaluation of the Module, or
There is existing code that relies on x being undefined (for example code like Integrate[..., x]).
Module is also the only choice for creating and returning a new symbol. In particular, Module is sometimes needed in advanced Dynamic programming for this reason.

If you are confident there aren't important existing definitions for x or any code relying on it being undefined, then Block[{x}, ...] is often faster. (Note that, in a project entirely coded by you, being confident of these conditions is a reasonable "encapsulation" standard that you may wish to enforce anyway, and so Block is often a sound choice in these situations.)

With[{x = ...}, expr] is the only scoping construct that injects the value of x inside Hold[...]. This is useful and important. With can be either faster or slower than Block depending on expr and the particular evaluation path that is taken. With is less flexible, however, since you can't change the definition of x inside expr.
q-n-a  stackex  programming  CAS  trivia  howto  best-practices  checklists  pls  atoms 
november 2017 by nhaliday