nhaliday : org:sci   230

The Existential Risk of Math Errors - Gwern.net
How big is this upper bound? Mathematicians have often made errors in proofs. But it’s rarer for ideas to be accepted for a long time and then rejected. We can divide errors into 2 basic cases corresponding to type I and type II errors:

1. Mistakes where the theorem is still true, but the proof was incorrect (type I)
2. Mistakes where the theorem was false, and the proof was also necessarily incorrect (type II)

Before someone comes up with a final answer, a mathematician may have many levels of intuition in formulating & working on the problem, but we’ll consider the final end-product where the mathematician feels satisfied that he has solved it. Case 1 is perhaps the most common case, with innumerable examples; this is sometimes due to mistakes in the proof that anyone would accept as a mistake, but many of these cases are due to changing standards of proof. For example, when David Hilbert discovered errors in Euclid’s proofs which no one noticed before, the theorems were still true, and the gaps were more due to Hilbert being a modern mathematician thinking in terms of formal systems (which of course Euclid did not think in). (David Hilbert himself turns out to be a useful example of the other kind of error: his famous list of 23 problems was accompanied by definite opinions on the outcome of each problem and sometimes timings, several of which were wrong or questionable5.) Similarly, early calculus used ‘infinitesimals’ which were sometimes treated as being 0 and sometimes treated as an indefinitely small non-zero number; this was incoherent and strictly speaking, practically all of the calculus results were wrong because they relied on an incoherent concept - but of course the results were some of the greatest mathematical work ever conducted6 and when later mathematicians put calculus on a more rigorous footing, they immediately re-derived those results (sometimes with important qualifications), and doubtless as modern math evolves other fields have sometimes needed to go back and clean up the foundations and will in the future.7

...

Isaac Newton, incidentally, gave two proofs of the same solution to a problem in probability, one via enumeration and the other more abstract; the enumeration was correct, but the other proof totally wrong and this was not noticed for a long time, leading Stigler to remark:

...

TYPE I > TYPE II?
“Lefschetz was a purely intuitive mathematician. It was said of him that he had never given a completely correct proof, but had never made a wrong guess either.”
- Gian-Carlo Rota13

Case 2 is disturbing, since it is a case in which we wind up with false beliefs and also false beliefs about our beliefs (we no longer know that we don’t know). Case 2 could lead to extinction.

...

Except, errors do not seem to be evenly & randomly distributed between case 1 and case 2. There seem to be far more case 1s than case 2s, as already mentioned in the early calculus example: far more than 50% of the early calculus results were correct when checked more rigorously. Richard Hamming attributes to Ralph Boas a comment that, while editing Mathematical Reviews, “of the new results in the papers reviewed most are true but the corresponding proofs are perhaps half the time plain wrong”.

...

Gian-Carlo Rota gives us an example with Hilbert:

...

Olga labored for three years; it turned out that all mistakes could be corrected without any major changes in the statement of the theorems. There was one exception, a paper Hilbert wrote in his old age, which could not be fixed; it was a purported proof of the continuum hypothesis, you will find it in a volume of the Mathematische Annalen of the early thirties.

...

Leslie Lamport advocates for machine-checked proofs and a more rigorous style of proofs similar to natural deduction, noting that a mathematician acquaintance guesses at a broad error rate of 1/3 and that he routinely found mistakes in his own proofs and, worse, believed false conjectures.

[more on these "structured proofs":
https://mathoverflow.net/questions/35727/community-experiences-writing-lamports-structured-proofs
]
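Lamport's point is easiest to see in a proof assistant, where every step must pass a mechanical checker rather than a human referee. A toy illustration in Lean 4 (my example, not Lamport's; it assumes the built-in `omega` decision procedure for linear arithmetic):

```lean
-- A trivially machine-checked fact: the kernel verifies each step,
-- so a wrong inference cannot slip through the way it can in an
-- informal "proof by intimidation".
theorem double (n : Nat) : n + n = 2 * n := by
  omega  -- decision procedure for linear arithmetic over Nat/Int
```

The interesting cases are of course the proofs where `omega` does not apply and each structured step must be justified explicitly, which is exactly the discipline Lamport's "structured proofs" impose on paper proofs as well.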

We can probably add software to that list: early software engineering work found that, dismayingly, bug rates seem to be simply a function of lines of code, and one would expect diseconomies of scale. So one would expect that in going from the ~4,000 lines of code of the Microsoft DOS operating system kernel to the ~50,000,000 lines of code in Windows Server 2003 (with full systems of applications and libraries being even larger: the comprehensive Debian repository in 2007 contained ~323,551,126 lines of code) that the number of active bugs at any time would be… fairly large. Mathematical software is hopefully better, but practitioners still run into issues (eg Durán et al 2014, Fonseca et al 2017) and I don’t know of any research pinning down how buggy key mathematical systems like Mathematica are or how much published mathematics may be erroneous due to bugs. This general problem led to predictions of doom and spurred much research into automated proof-checking, static analysis, and functional languages31.
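The scaling claim can be made concrete. Taking a constant defect density per line (the density of 10 defects/KLOC here is an assumed mid-range industry figure, not from the source), linear scaling alone already gives alarming totals — and the diseconomies of scale mentioned above would make the large systems even worse:

```python
# Rough expected-bug counts assuming a constant defect density per line.
DEFECTS_PER_KLOC = 10  # assumption: mid-range of commonly quoted 1-25/KLOC

systems = {
    "DOS kernel": 4_000,
    "Windows Server 2003": 50_000_000,
    "Debian (2007)": 323_551_126,
}

for name, loc in systems.items():
    bugs = loc / 1000 * DEFECTS_PER_KLOC
    print(f"{name}: ~{bugs:,.0f} expected defects")
```

Even at the optimistic end of the density range, the big systems carry thousands of latent defects at any moment.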

[related:
https://mathoverflow.net/questions/11517/computer-algebra-errors
I don't know any interesting bugs in symbolic algebra packages but I know a true, enlightening and entertaining story about something that looked like a bug but wasn't.

Define sinc(x) = (sin x)/x.

Someone found the following result in an algebra package: ∫_0^∞ sinc(x) dx = π/2
They then found the following results:

...

So of course when they got:

∫_0^∞ sinc(x) sinc(x/3) sinc(x/5) ⋯ sinc(x/15) dx = (467807924713440738696537864469/935615849440640907310521750000) π

hmm:
Which means that nobody knows Fourier analysis nowadays. Very sad and discouraging story... – fedja Jan 29 '10 at 18:47
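This is the Borwein integral phenomenon: the identity holds exactly as long as the reciprocals of the odd denominators 1/3 + 1/5 + ... stay below 1, and the first failure is at 1/15. A quick exact check (my code, not from the thread):

```python
from fractions import Fraction

# The product integral equals pi/2 exactly while the sum of the
# reciprocal odd denominators stays below 1; it first dips below
# pi/2 when adding 1/15 pushes the sum over 1.
through_13 = sum(Fraction(1, k) for k in range(3, 14, 2))  # 1/3 + ... + 1/13
through_15 = through_13 + Fraction(1, 15)

print(through_13 < 1, through_15 > 1)  # True True
```

So the package was right and the "bug report" wrong: the pattern genuinely breaks at sinc(x/15).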

--

Because the most popular systems are all commercial, they tend to guard their bug database rather closely -- making them public would seriously cut their sales. For example, for the open source project Sage (which is quite young), you can get a list of all the known bugs from this page. 1582 known issues on Feb.16th 2010 (which includes feature requests, problems with documentation, etc).

That is an order of magnitude less than the commercial systems. And it's not because it is better, it is because it is younger and smaller. It might be better, but until SAGE does a lot of analysis (about 40% of CAS bugs are there) and a fancy user interface (another 40%), it is too hard to compare.

I once ran a graduate course whose core topic was studying the fundamental disconnect between the algebraic nature of CAS and the analytic nature of what it is mostly used for. There are issues of logic -- CASes work more or less in an intensional logic, while most of analysis is stated in a purely extensional fashion. There is no well-defined 'denotational semantics' for expressions-as-functions, which strongly contributes to the deeper bugs in CASes.]

...

Should such widely-believed conjectures as P≠NP or the Riemann hypothesis turn out to be false, then because they are assumed by so many existing proofs, a far larger math holocaust would ensue38 - and our previous estimates of error rates will turn out to have been substantial underestimates. But it may be a cloud with a silver lining, if it doesn’t come at a time of danger.

https://mathoverflow.net/questions/338607/why-doesnt-mathematics-collapse-down-even-though-humans-quite-often-make-mista

more on formal methods in programming:
https://www.quantamagazine.org/formal-verification-creates-hacker-proof-code-20160920/
https://intelligence.org/2014/03/02/bob-constable/

Update: measured effort
In the October 2018 issue of Communications of the ACM there is an interesting article about Formally verified software in the real world with some estimates of the effort.

Interestingly (based on OS development for military equipment), it seems that producing formally proved software requires 3.3 times more effort than with traditional engineering techniques. So it's really costly.

On the other hand, it requires 2.3 times less effort to get high-security software this way than with traditionally engineered software, if you add the effort to make such software certified at a high security level (EAL 7). So if you have high reliability or security requirements, there is definitely a business case for going formal.
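Taking the two multipliers at face value implies a striking figure (my arithmetic, not the article's): traditional development plus EAL 7 certification must cost about 3.3 × 2.3 ≈ 7.6 times plain traditional development, so certification, not proving, is the dominant cost:

```python
# Back-of-envelope from the two ratios quoted above (assumed exact).
traditional_dev = 1.0
formal = 3.3 * traditional_dev        # formal proof: 3.3x plain dev effort
certified_traditional = 2.3 * formal  # traditional dev + EAL 7 = 2.3x formal
certification_overhead = certified_traditional - traditional_dev

print(f"traditional dev + EAL 7 ≈ {certified_traditional:.1f}x dev effort")
print(f"certification alone ≈ {certification_overhead:.1f}x dev effort")
```

Under these assumptions, formal verification (3.3x) is cheap relative to certifying conventionally built software (roughly 6.6x on top of development).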

WHY DON'T PEOPLE USE FORMAL METHODS?: https://www.hillelwayne.com/post/why-dont-people-use-formal-methods/
You can see examples of how all of these look at Let’s Prove Leftpad. HOL4 and Isabelle are good examples of “independent theorem” specs, SPARK and Dafny have “embedded assertion” specs, and Coq and Agda have “dependent type” specs.6

If you squint a bit it looks like these three forms of code spec map to the three main domains of automated correctness checking: tests, contracts, and types. This is not a coincidence. Correctness is a spectrum, and formal verification is one extreme of that spectrum. As we reduce the rigour (and effort) of our verification we get simpler and narrower checks, whether that means limiting the explored state space, using weaker types, or pushing verification to the runtime. Any means of total specification then becomes a means of partial specification, and vice versa: many consider Cleanroom a formal verification technique, which primarily works by pushing code review far beyond what’s humanly possible.

...

The question, then: “is 90/95/99% correct significantly cheaper than 100% correct?” The answer is very yes. We all are comfortable saying that a codebase we’ve well-tested and well-typed is mostly correct modulo a few fixes in prod, and we’re even writing more than four lines of code a day. In fact, the vast… [more]
ratty  gwern  analysis  essay  realness  truth  correctness  reason  philosophy  math  proofs  formal-methods  cs  programming  engineering  worse-is-better/the-right-thing  intuition  giants  old-anglo  error  street-fighting  heuristic  zooming  risk  threat-modeling  software  lens  logic  inference  physics  differential  geometry  estimate  distribution  robust  speculation  nonlinearity  cost-benefit  convexity-curvature  measure  scale  trivia  cocktail  history  early-modern  europe  math.CA  rigor  news  org:mag  org:sci  miri-cfar  pdf  thesis  comparison  examples  org:junk  q-n-a  stackex  pragmatic  tradeoffs  cracker-prog  techtariat  invariance  DSL  chart  ecosystem  grokkability  heavyweights  CAS  static-dynamic  lower-bounds  complexity  tcs  open-problems  big-surf  ideas  certificates-recognition  proof-systems  PCP  mediterranean  SDP  meta:prediction  epistemic  questions  guessing  distributed  overflow  nibble  soft-question  track-record  big-list  hmm  frontier  state-of-art  move-fast-(and-break-things)  grokkability-clarity  technical-writing  trust
july 2019 by nhaliday
An Efficiency Comparison of Document Preparation Systems Used in Academic Research and Development
The choice of an efficient document preparation system is an important decision for any academic researcher. To assist the research community, we report a software usability study in which 40 researchers across different disciplines prepared scholarly texts with either Microsoft Word or LaTeX. The probe texts included simple continuous text, text with tables and subheadings, and complex text with several mathematical equations. We show that LaTeX users were slower than Word users, wrote less text in the same amount of time, and produced more typesetting, orthographical, grammatical, and formatting errors. On most measures, expert LaTeX users performed even worse than novice Word users. LaTeX users, however, more often report enjoying using their respective software. We conclude that even experienced LaTeX users may suffer a loss in productivity when LaTeX is used, relative to other document preparation systems. Individuals, institutions, and journals should carefully consider the ramifications of this finding when choosing document preparation strategies, or requiring them of authors.

...

However, our study suggests that LaTeX should be used as a document preparation system only in cases in which a document is heavily loaded with mathematical equations. For all other types of documents, our results suggest that LaTeX reduces the user’s productivity and results in more orthographical, grammatical, and formatting errors, more typos, and less written text than Microsoft Word over the same duration of time. LaTeX users may argue that the overall quality of the text that is created with LaTeX is better than the text that is created with Microsoft Word. Although this argument may be true, the differences between text produced in more recent editions of Microsoft Word and text produced in LaTeX may be less obvious than it was in the past. Moreover, we believe that the appearance of text matters less than the scientific content and impact to the field. In particular, LaTeX is also used frequently for text that does not contain a significant amount of mathematical symbols and formula. We believe that the use of LaTeX under these circumstances is highly problematic and that researchers should reflect on the criteria that drive their preferences to use LaTeX over Microsoft Word for text that does not require significant mathematical representations.

...

A second decision criterion that factors into the choice to use a particular software system is reflection about what drives certain preferences. A striking result of our study is that LaTeX users are highly satisfied with their system despite reduced usability and productivity. From a psychological perspective, this finding may be related to motivational factors, i.e., the driving forces that compel or reinforce individuals to act in a certain way to achieve a desired goal. A vital motivational factor is the tendency to reduce cognitive dissonance. According to the theory of cognitive dissonance, each individual has a motivational drive to seek consonance between their beliefs and their actual actions. If a belief set does not concur with the individual’s actual behavior, then it is usually easier to change the belief rather than the behavior [6]. The results from many psychological studies in which people have been asked to choose between one of two items (e.g., products, objects, gifts, etc.) and then asked to rate the desirability, value, attractiveness, or usefulness of their choice, report that participants often reduce unpleasant feelings of cognitive dissonance by rationalizing the chosen alternative as more desirable than the unchosen alternative [6, 7]. This bias is usually unconscious and becomes stronger as the effort to reject the chosen alternative increases, which is similar in nature to the case of learning and using LaTeX.

...

Given these numbers it remains an open question to determine the amount of taxpayer money that is spent worldwide for researchers to use LaTeX over a more efficient document preparation system, which would free up their time to advance their respective field. Some publishers may save a significant amount of money by requesting or allowing LaTeX submissions because a well-formed LaTeX document complying with a well-designed class file (template) is much easier to bring into their publication workflow. However, this is at the expense of the researchers’ labor time and effort. We therefore suggest that leading scientific journals should consider accepting submissions in LaTeX only if this is justified by the level of mathematics presented in the paper. In all other cases, we think that scholarly journals should request authors to submit their documents in Word or PDF format. We believe that this would be a good policy for two reasons. First, we think that the appearance of the text is secondary to the scientific merit of an article and its impact to the field. And, second, preventing researchers from producing documents in LaTeX would save time and money to maximize the benefit of research and development for both the research team and the public.

[ed.: I sense some salt.

And basically no description of how "# errors" was calculated.]

https://news.ycombinator.com/item?id=8797002
I question the validity of their methodology.
At no point in the paper is exactly what is meant by a "formatting error" or a "typesetting error" defined. From what I gather, the participants in the study were required to reproduce the formatting and layout of the sample text. In theory, a LaTeX file should strictly be a semantic representation of the content of the document; while TeX may have been a raw typesetting language, this is most definitely not the intended use case of LaTeX and is overall a very poor test of its relative advantages and capabilities.
The separation of the semantic definition of the content from the rendering of the document is, in my opinion, the most important feature of LaTeX. Like CSS, this allows the actual formatting to be abstracted away, allowing plain (marked-up) content to be written without worrying about typesetting.
Word has some similar capabilities with styles, and can be used in a similar manner, though few Word users actually use the software properly. This may sound like a relatively insignificant point, but in practice, almost every Word document I have seen has some form of inconsistent formatting. If Word disallowed local formatting changes (including things such as relative spacing of nested bullet points), forcing all formatting changes to be done in document-global styles, it would be a far better typesetting system. Also, the users would be very unhappy.
Yes, LaTeX can undeniably be a pain in the arse, especially when it comes to trying to get figures in the right place; however, the advantages of combining a simple, semantic plain-text representation with a flexible and professional typesetting and rendering engine are undeniable and completely unaddressed by this study.
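The commenter's separation of semantics from presentation looks like this in practice (a minimal illustrative fragment of my own, not from the thread):

```latex
% Content is marked up by meaning; the class and preamble decide how a
% \section or \emph actually renders, so restyling never touches the text.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\section{Results}  % numbered and styled by the class, not by hand
We show that \emph{every} ideal projectile path is a parabola:
\begin{equation}
  y = x\tan\theta - \frac{g x^2}{2 v^2 \cos^2\theta}.
\end{equation}
\end{document}
```

Swapping `article` for a journal's class file restyles the whole document without editing a single sentence, which is exactly the abstraction the study's copy-the-layout task never exercises.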
--
It seems that the test was heavily biased in favor of WYSIWYG.
Of course that approach makes it very simple to reproduce something, as has been tested here. Even simpler would be to scan the document and run OCR. The massive problem with both approaches (WYSIWYG and scanning) is that you can't generalize any of it. You're doomed repeating it forever.
(I'll also note the other significant issue with this study: when the ratings provided by participants came out opposite of their test results, they attributed it to irrational bias.)

https://www.nature.com/articles/d41586-019-01796-1
Over the past few years however, the line between the tools has blurred. In 2017, Microsoft made it possible to use LaTeX’s equation-writing syntax directly in Word, and last year it scrapped Word’s own equation editor. Other text editors also support elements of LaTeX, allowing newcomers to use as much or as little of the language as they like.

https://news.ycombinator.com/item?id=20191348
study  hmm  academia  writing  publishing  yak-shaving  technical-writing  software  tools  comparison  latex  scholar  regularizer  idk  microsoft  evidence-based  science  desktop  time  efficiency  multi  hn  commentary  critique  news  org:sci  flux-stasis  duplication  metrics  biases
june 2019 by nhaliday
How sweet it is! | West Hunter
This has probably been going on for a long, long, time. It may well go back before anatomically modern humans. I say that because of the greater honeyguide, which guides people to beehives in Africa. After we take the honey, the honeyguide eats the grubs and wax. A guiding bird attracts your attention with wavering, chattering ‘tya’ notes compounded with peeps and pipes. It flies towards an occupied hive and then stops and calls again. It has only been seen to guide humans.

I would not be surprised to find that this symbiotic relationship is far older than the domestication of dogs. But it is not domestication: we certainly don’t control their reproduction. I wouldn’t count on it, but if you could determine the genetic basis of this signaling behavior, you might be able to get an idea of how old it is.

Honeyguides may be mankind’s oldest buds, but they’re nasty little creatures: brood parasites, like cuckoos.
west-hunter  scitariat  discussion  trivia  cocktail  africa  speculation  history  antiquity  sapiens  farmers-and-foragers  food  nature  domestication  cooperate-defect  ed-yong  org:sci  popsci  survival  outdoors
december 2017 by nhaliday
Genome Editing
This collection of articles from the Nature Research journals provides an overview of current progress in developing targeted genome editing technologies. A selection of protocols for using and adapting these tools in your own lab is also included.
news  org:sci  org:nat  list  links  aggregator  chart  info-foraging  frontier  technology  CRISPR  biotech  🌞  survey  state-of-art  article  study  genetics  genomics  speedometer
october 2017 by nhaliday
New Theory Cracks Open the Black Box of Deep Learning | Quanta Magazine
A new idea called the “information bottleneck” is helping to explain the puzzling success of today’s artificial-intelligence algorithms — and might also explain how human brains learn.

sounds like he's just talking about autoencoders?
news  org:mag  org:sci  popsci  announcement  research  deep-learning  machine-learning  acm  information-theory  bits  neuro  model-class  big-surf  frontier  nibble  hmm  signal-noise  deepgoog  expert  ideas  wild-ideas  summary  talks  video  israel  roots  physics  interdisciplinary  ai  intelligence  shannon  giants  arrows  preimage  lifts-projections  composition-decomposition  characterization  markov  gradient-descent  papers  liner-notes  experiment  hi-order-bits  generalization  expert-experience  explanans  org:inst  speedometer
september 2017 by nhaliday
Fermat's Library | Cassini, Rømer and the velocity of light annotated/explained version.
Abstract: The discovery of the finite nature of the velocity of light is usually attributed to Rømer. However, a text at the Paris Observatory confirms the minority opinion according to which Cassini was first to propose the ‘successive motion’ of light, while giving a rather correct order of magnitude for the duration of its propagation from the Sun to the Earth. We examine this question, and discuss why, in spite of the criticisms of Halley, Cassini abandoned this hypothesis while leaving Rømer free to publish it.
liner-notes  papers  essay  history  early-modern  europe  the-great-west-whale  giants  the-trenches  mediterranean  nordic  science  innovation  discovery  physics  electromag  space  speed  nibble  org:sci  org:mat
september 2017 by nhaliday
GALILEO'S STUDIES OF PROJECTILE MOTION
During the Renaissance, the focus, especially in the arts, was on representing as accurately as possible the real world, whether on a 2-dimensional surface or a solid such as marble or granite. This required two things. The first was new methods for drawing or painting, e.g., perspective. The second, relevant to this topic, was careful observation.

With the spread of cannon in warfare, the study of projectile motion had taken on greater importance, and now, with more careful observation and more accurate representation, came the realization that projectiles did not move the way Aristotle and his followers had said they did: the path of a projectile did not consist of two consecutive straight line components but was instead a smooth curve. [1]

Now someone needed to come up with a method to determine if there was a special curve a projectile followed. But measuring the path of a projectile was not easy.

Using an inclined plane, Galileo had performed experiments on uniformly accelerated motion, and he now used the same apparatus to study projectile motion. He placed an inclined plane on a table and provided it with a curved piece at the bottom which deflected an inked bronze ball into a horizontal direction. The ball thus accelerated rolled over the table-top with uniform motion and then fell off the edge of the table. Where it hit the floor, it left a small mark. The mark allowed the horizontal and vertical distances traveled by the ball to be measured. [2]

By varying the ball's horizontal velocity and vertical drop, Galileo was able to determine that the path of a projectile is parabolic.
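The conclusion follows from composing uniform horizontal motion with uniformly accelerated fall: y/x² is a constant, which is precisely a parabola. A small numerical check (illustrative values of my own, not Galileo's data):

```python
# Uniform horizontal motion x = v*t combined with free fall y = (g/2)*t^2
# gives y proportional to x^2, i.e. a parabolic path.
g, v = 9.8, 2.0  # illustrative gravity (m/s^2) and launch speed (m/s)

for t in (0.1, 0.2, 0.3, 0.4):
    x = v * t
    y = 0.5 * g * t * t
    # y / x^2 equals the same constant g / (2 v^2) at every instant
    print(f"t={t}: y/x^2 = {y / (x * x):.4f}")
```

Every line prints the same ratio g/(2v²), which is the algebraic signature of the parabola Galileo read off his floor marks.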

https://www.scientificamerican.com/author/stillman-drake/

Galileo's Discovery of the Parabolic Trajectory: http://www.jstor.org/stable/24949756

Galileo's Experimental Confirmation of Horizontal Inertia: Unpublished Manuscripts (Galileo
Gleanings XXII): https://sci-hub.tw/https://www.jstor.org/stable/229718
- Drake Stillman

MORE THAN A DECADE HAS ELAPSED since Thomas Settle published a classic paper in which Galileo's well-known statements about his experiments on inclined planes were completely vindicated.1 Settle's paper replied to an earlier attempt by Alexandre Koyré to show that Galileo could not have obtained the results he claimed in his Two New Sciences by actual observations using the equipment there described. The practical ineffectiveness of Settle's painstaking repetition of the experiments in altering the opinion of historians of science is only too evident. Koyré's paper was reprinted years later in book form without so much as a note by the editors concerning Settle's refutation of its thesis.2 And the general literature continues to belittle the role of experiment in Galileo's physics.

More recently James MacLachlan has repeated and confirmed a different experiment reported by Galileo-one which has always seemed highly exaggerated and which was also rejected by Koyré with withering sarcasm.3 In this case, however, it was accuracy of observation rather than precision of experimental data that was in question. Until now, nothing has been produced to demonstrate Galileo's skill in the design and the accurate execution of physical experiment in the modern sense.

Part of a page of Galileo's unpublished manuscript notes, written late in 1608, corroborating his inertial assumption and leading directly to his discovery of the parabolic trajectory. (Folio 116v, Vol. 72, MSS Galileiani; courtesy of the Biblioteca Nazionale di Firenze.)

...

(The same skeptical historians, however, believe that to show that Galileo could have used the medieval mean-speed theorem suffices to prove that he did use it, though it is found nowhere in his published or unpublished writings.)

...

Now, it happens that among Galileo's manuscript notes on motion there are many pages that were not published by Favaro, since they contained only calculations or diagrams without attendant propositions or explanations. Some pages that were published had first undergone considerable editing, making it difficult if not impossible to discern their full significance from their printed form. This unpublished material includes at least one group of notes which cannot satisfactorily be accounted for except as representing a series of experiments designed to test a fundamental assumption, which led to a new, important discovery. In these documents precise empirical data are given numerically, comparisons are made with calculated values derived from theory, a source of discrepancy from still another expected result is noted, a new experiment is designed to eliminate this, and further empirical data are recorded. The last-named data, although proving to be beyond Galileo's powers of mathematical analysis at the time, when subjected to modern analysis turn out to be remarkably precise. If this does not represent the experimental process in its fully modern sense, it is hard to imagine what standards historians require to be met.

The discovery of these notes confirms the opinion of earlier historians. They read only Galileo's published works, but did so without a preconceived notion of continuity in the history of ideas. The opinion of our more sophisticated colleagues has its sole support in philosophical interpretations that fit with preconceived views of orderly long-term scientific development. To find manuscript evidence that Galileo was at home in the physics laboratory hardly surprises me. I should find it much more astonishing if, by reasoning alone, working only from fourteenth-century theories and conclusions, he had continued along lines so different from those followed by profound philosophers in earlier centuries. It is to be hoped that, warned by these examples, historians will begin to restore the old cautionary clauses in analogous instances in which scholarly opinions are revised without new evidence, simply to fit historical theories.

In what follows, the newly discovered documents are presented in the context of a hypothetical reconstruction of Galileo's thought.

...

As early as 1590, if we are correct in ascribing Galileo's juvenile De motu to that date, it was his belief that an ideal body resting on an ideal horizontal plane could be set in motion by a force smaller than any previously assigned force, however small. By "horizontal plane" he meant a surface concentric with the earth but which for reasonable distances would be indistinguishable from a level plane. Galileo noted at the time that experiment did not confirm this belief that the body could be set in motion by a vanishingly small force, and he attributed the failure to friction, pressure, the imperfection of material surfaces and spheres, and the departure of level planes from concentricity with the earth.5

It followed from this belief that under ideal conditions the motion so induced would also be perpetual and uniform. Galileo did not mention these consequences until much later, and it is impossible to say just when he perceived them. They are, however, so evident that it is safe to assume that he saw them almost from the start. They constitute a trivial case of the proposition he seems to have been teaching before 1607-that a mover is required to start motion, but that absence of resistance is then sufficient to account for its continuation.6

In mid-1604, following some investigations of motions along circular arcs and motions of pendulums, Galileo hit upon the law that in free fall the times elapsed from rest are as the smaller distance is to the mean proportional between two distances fallen.7 This gave him the times-squared law as well as the rule of odd numbers for successive distances and speeds in free fall. During the next few years he worked out a large number of theorems relating to motion along inclined planes, later published in the Two New Sciences. He also arrived at the rule that the speed terminating free fall from rest was double the speed of the fall itself. These theorems survive in manuscript notes of the period 1604-1609. (Work during these years can be identified with virtual certainty by the watermarks in the paper used, as I have explained elsewhere.8)
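Galileo's mean-proportional statement is equivalent to the modern t ∝ √d, and it yields both the times-squared law and the rule of odd numbers. A quick check of both claims (my formulation, not Drake's):

```python
from math import isclose, sqrt

# Mean-proportional law: t1/t2 = d1 / sqrt(d1*d2), which is the same
# ratio as sqrt(d1)/sqrt(d2), i.e. time grows as the square root of distance.
d1, d2 = 1.0, 4.0
assert isclose(d1 / sqrt(d1 * d2), sqrt(d1) / sqrt(d2))

# Rule of odd numbers: with cumulative distance d ∝ t^2, the distances
# covered in successive equal time intervals go as 1, 3, 5, 7, ...
totals = [t * t for t in range(5)]                     # d at t = 0, 1, 2, 3, 4
increments = [b - a for a, b in zip(totals, totals[1:])]
print(increments)  # [1, 3, 5, 7]
```

Doubling the times quadruples the distances, and the interval-by-interval gains march through the odd numbers — the two fingerprints of uniform acceleration that recur throughout the manuscript notes.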

In the autumn of 1608, after a summer at Florence, Galileo seems to have interested himself in the question whether the actual slowing of a body moving horizontally followed any particular rule. On folio 117i of the manuscripts just mentioned, the numbers 196, 155, 121, 100 are noted along the horizontal line near the middle of the page (see Fig. 1). I believe that this was the first entry on this leaf, for reasons that will appear later, and that Galileo placed his grooved plane in the level position and recorded distances traversed in equal times along it. Using a metronome, and rolling a light wooden ball about 4 3/4 inches in diameter along a plane with a groove 1 3/4 inches wide, I obtained similar relations over a distance of 6 feet. The figures obtained vary greatly for balls of different materials and weights and for greatly different initial speeds.9 But it suffices for my present purposes that Galileo could have obtained the figures noted by observing the actual deceleration of a ball along a level plane. It should be noted that the watermark on this leaf is like that on folio 116, to which we shall come presently, and it will be seen later that the two sheets are closely connected in time in other ways as well.

The relatively rapid deceleration is obviously related to the contact of ball and groove. Were the ball to roll right off the end of the plane, all resistance to horizontal motion would be virtually removed. If, then, there were any way to have a given ball leave the plane at different speeds of which the ratios were known, Galileo's old idea that horizontal motion would continue uniformly in the absence of resistance could be put to test. His law of free fall made this possible. The ratios of speeds could be controlled by allowing the ball to fall vertically through known heights, at the ends of which it would be deflected horizontally. Falls through given heights … [more]
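Galileo's mean-proportional rule and the times-squared law are algebraically equivalent. A minimal numerical sketch (assuming uniform acceleration s = ½gt², with an arbitrary value of g) confirms both the odd-number rule for successive distances and the mean-proportional relation for times:

```python
import math

g = 9.8  # assumed uniform acceleration; any positive value works

def fall_time(s):
    """Time to fall distance s from rest, from s = 0.5 * g * t**2."""
    return math.sqrt(2 * s / g)

# Odd-number rule: distances covered in successive equal time intervals
# from rest stand as 1 : 3 : 5 : 7 : ...
def interval_distances(n, dt=1.0):
    totals = [0.5 * g * (k * dt) ** 2 for k in range(n + 1)]
    return [totals[k + 1] - totals[k] for k in range(n)]

d = interval_distances(4)
ratios = [di / d[0] for di in d]  # approximately [1, 3, 5, 7]

# Mean-proportional rule: for distances s1 < s2 fallen from rest,
# t1 / t2 = s1 / sqrt(s1 * s2), i.e. the smaller distance over the
# mean proportional -- the same content as s proportional to t**2.
s1, s2 = 3.0, 12.0
lhs = fall_time(s1) / fall_time(s2)
rhs = s1 / math.sqrt(s1 * s2)
```

Here lhs and rhs both come out to 0.5, illustrating why Drake can treat the mean-proportional statement and the times-squared law as one discovery.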
nibble  org:junk  org:edu  physics  mechanics  gravity  giants  the-trenches  discovery  history  early-modern  europe  mediterranean  the-great-west-whale  frontier  science  empirical  experiment  arms  technology  lived-experience  time  measurement  dirty-hands  iron-age  the-classics  medieval  sequential  wire-guided  error  wiki  reference  people  quantitative-qualitative  multi  pdf  piracy  study  essay  letters  discrete  news  org:mag  org:sci  popsci
august 2017 by nhaliday
How & Why Solar Eclipses Happen | Solar Eclipse Across America - August 21, 2017
Cosmic Coincidence
The Sun’s diameter is about 400 times that of the Moon. The Sun is also (on average) about 400 times farther away. As a result, the two bodies appear almost exactly the same angular size in the sky — about ½°, roughly half the width of your pinky finger seen at arm's length. This truly remarkable coincidence is what gives us total solar eclipses. If the Moon were slightly smaller or orbited a little farther away from Earth, it would never completely cover the solar disk. If the Moon were a little larger or orbited a bit closer to Earth, it would block much of the solar corona during totality, and eclipses wouldn’t be nearly as spectacular.
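The 400-to-400 coincidence is easy to verify directly. A short sketch, using rounded mean values for the diameters and distances (standard reference figures, not from the article):

```python
import math

# Mean diameters and distances in km (rounded standard values)
SUN_DIAMETER = 1.3914e6
SUN_DISTANCE = 1.496e8   # 1 AU
MOON_DIAMETER = 3474.8
MOON_DISTANCE = 3.844e5

def angular_diameter_deg(diameter, distance):
    """Apparent angular diameter in degrees of a distant sphere."""
    return math.degrees(2 * math.atan(diameter / (2 * distance)))

sun_deg = angular_diameter_deg(SUN_DIAMETER, SUN_DISTANCE)    # ~0.53 deg
moon_deg = angular_diameter_deg(MOON_DIAMETER, MOON_DISTANCE) # ~0.52 deg

size_ratio = SUN_DIAMETER / MOON_DIAMETER       # ~400
distance_ratio = SUN_DISTANCE / MOON_DISTANCE   # ~389
```

Both angular diameters land near half a degree; because the Moon's orbit is elliptical, its apparent size actually swings a few percent either way, which is why some eclipses are annular rather than total.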

https://blogs.scientificamerican.com/life-unbounded/the-solar-eclipse-coincidence/
nibble  org:junk  org:edu  space  physics  mechanics  spatial  visuo  data  scale  measure  volo-avolo  earth  multi  news  org:mag  org:sci  popsci  sky  cycles  pro-rata  navigation  degrees-of-freedom
august 2017 by nhaliday
Divorce demography - Wikipedia
https://en.wikipedia.org/wiki/Divorce_in_the_United_States#Rates_of_divorce
https://psychcentral.com/lib/the-myth-of-the-high-rate-of-divorce/

Marriage update: less divorce, and less sex: https://familyinequality.wordpress.com/2017/04/14/marriage-update-less-divorce-and-less-sex/

Breaking Up Is Hard to Count: The Rise of Divorce in the United States, 1980–2010: https://link.springer.com/article/10.1007%2Fs13524-013-0270-9
Divorce rates have doubled over the past two decades among persons over age 35. Among the youngest couples, however, divorce rates are stable or declining. If current trends continue, overall age-standardized divorce rates could level off or even decline over the next few decades. We argue that the leveling of divorce among persons born since 1980 probably reflects the increasing selectivity of marriage.
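"Age-standardized" here means weighting age-specific rates by a fixed standard age distribution, so that comparisons across years are not confounded by population aging. A minimal sketch of direct standardization, with hypothetical rates and weights (illustrative numbers only, not the paper's data):

```python
# Hypothetical age-specific divorce rates (per 1,000 married persons)
rates_1990 = {"<35": 30.0, "35-54": 10.0, "55+": 4.0}
rates_2010 = {"<35": 28.0, "35-54": 20.0, "55+": 8.0}

# Fixed standard age distribution of married persons (weights sum to 1)
standard_weights = {"<35": 0.35, "35-54": 0.45, "55+": 0.20}

def age_standardized_rate(rates, weights):
    """Directly standardized rate: age-specific rates times fixed weights."""
    return sum(rates[a] * weights[a] for a in weights)

r1990 = age_standardized_rate(rates_1990, standard_weights)  # 15.8
r2010 = age_standardized_rate(rates_2010, standard_weights)  # 20.4
```

With the weights held fixed, the rise from 15.8 to 20.4 reflects the rate changes themselves (here, the doubling among older groups), not a shift in the age mix of the married population.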
sociology  methodology  demographics  social-science  social-structure  life-history  sex  wiki  reference  pro-rata  metrics  longitudinal  intricacy  multi  org:sci  wonkish  sexuality  trends  data  analysis  general-survey  study  history  mostly-modern  usa  selection  age-generation  chart  beginning-middle-end
august 2017 by nhaliday
The Flynn effect for verbal and visuospatial short-term and working memory: A cross-temporal meta-analysis
Specifically, the Flynn effect was found for forward digit span (r = 0.12, p < 0.01) and forward Corsi block span (r = 0.10, p < 0.01). Moreover, an anti-Flynn effect was found for backward digit span (r = − 0.06, p < 0.01) and for backward Corsi block span (r = − 0.17, p < 0.01). Overall, the results support co-occurrence theories that predict simultaneous secular gains in specialized abilities and declines in g. The causes of the differential trajectories are further discussed.

http://www.unz.com/jthompson/working-memory-bombshell/
https://www.newscientist.com/article/2146752-we-seem-to-be-getting-stupider-and-population-ageing-may-be-why/
study  psychology  cog-psych  psychometrics  iq  trends  dysgenics  flynn  psych-architecture  meta-analysis  multi  albion  scitariat  summary  commentary  blowhards  mental-math  science-anxiety  news  org:sci
august 2017 by nhaliday
Controversial New Theory Suggests Life Wasn't a Fluke of Biology—It Was Physics | WIRED
First Support for a Physics Theory of Life: https://www.quantamagazine.org/first-support-for-a-physics-theory-of-life-20170726/
Take chemistry, add energy, get life. The first tests of Jeremy England’s provocative origin-of-life hypothesis are in, and they appear to show how order can arise from nothing.
news  org:mag  profile  popsci  bio  xenobio  deep-materialism  roots  eden  physics  interdisciplinary  applications  ideas  thermo  complex-systems  cybernetics  entropy-like  order-disorder  arrows  phys-energy  emergent  empirical  org:sci  org:inst  nibble  chemistry  fixed-point  wild-ideas  multi
august 2017 by nhaliday
Correlated Equilibria in Game Theory | Azimuth
Given this, it’s not surprising that Nash equilibria can be hard to find. Last September a paper came out making this precise, in a strong way:

• Yakov Babichenko and Aviad Rubinstein, Communication complexity of approximate Nash equilibria.

The authors show there’s no guaranteed method for players to find even an approximate Nash equilibrium unless they tell each other almost everything about their preferences. This makes a Nash equilibrium prohibitively difficult to find when there are lots of players… in general. There are particular games where it’s not difficult, and that makes these games important: for example, if you’re trying to run a government well. (A laughable notion these days, but still one can hope.)


Klarreich’s article in Quanta gives a nice readable account of this work and also a more practical alternative to the concept of Nash equilibrium. It’s called a ‘correlated equilibrium’, and it was invented by the mathematician Robert Aumann in 1974. You can see an attempt to define it here:
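For intuition, here is a minimal check of a correlated equilibrium in the classic game of Chicken (a standard textbook example, not taken from the post): a mediator draws one of three joint recommendations with equal probability, and neither player gains by deviating from their own recommendation.

```python
C, D = "C", "D"  # C = swerve ("chicken"), D = dare

# payoff[(a1, a2)] = (u1, u2)
payoff = {
    (C, C): (6, 6), (C, D): (2, 7),
    (D, C): (7, 2), (D, D): (0, 0),
}

# Mediator's distribution over joint recommendations: never (D, D)
dist = {(C, C): 1/3, (C, D): 1/3, (D, C): 1/3, (D, D): 0.0}

def is_correlated_equilibrium(dist, payoff, actions=(C, D), tol=1e-9):
    """Check the CE inequalities: for each player, conditional on any
    recommendation, no unilateral deviation raises expected payoff."""
    for player in (0, 1):
        for rec in actions:          # the recommended action
            for dev in actions:      # a candidate deviation
                gain = 0.0
                for joint, p in dist.items():
                    if joint[player] != rec:
                        continue
                    other = joint[1 - player]
                    obeyed = (rec, other) if player == 0 else (other, rec)
                    devd = (dev, other) if player == 0 else (other, dev)
                    gain += p * (payoff[devd][player] - payoff[obeyed][player])
                if gain > tol:
                    return False
    return True
```

Under this distribution each player's expected payoff is 5, better than the symmetric mixed Nash equilibrium of this particular payoff matrix, and the mediator never recommends the crash outcome (D, D).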
baez  org:bleg  nibble  mathtariat  commentary  summary  news  org:mag  org:sci  popsci  equilibrium  GT-101  game-theory  acm  conceptual-vocab  concept  definition  thinking  signaling  coordination  tcs  complexity  communication-complexity  lower-bounds  no-go  liner-notes  big-surf  papers  research  algorithmic-econ  volo-avolo
july 2017 by nhaliday
Corrupting cooperation and how anti-corruption strategies may backfire | Nature Human Behaviour
https://images.nature.com/original/nature-assets/nathumbehav/2017/s41562-017-0138/extref/s41562-017-0138-s1.pdf
Exposure to Norms: https://images.nature.com/original/nature-assets/nathumbehav/2017/s41562-017-0138/extref/s41562-017-0138-s1.pdf#page=114
Here we test how exposure to corruption norms affects behavior in our game. We do so by using our exposure score (a mean of the corruption perceptions of the countries the participant has lived in) and the heritage corruption score (a mean of the corruption perceptions of the countries in which the participant has an ethnic heritage). Since there is no incentive to offer bribes or contribute, except when compelled to do so by punishment, we predict that exposure to norms should primarily affect Leader decisions. Nonetheless, internalized norms may also affect the behavior of players in contributing and bribing.

...

The correlation between the direct exposure and heritage measures of corruption is r = 0.67, p < .001.

...

Then we see that direct exposure to corruption norms results in increased corrupt behavior—i.e. in our Canadian sample, those who have lived in corrupt countries from which they do not derive their heritage behave in more corrupt ways.

hard to interpret

http://psych.ubc.ca/when-less-is-best/

I don't think the solution is to just do nothing. Should look to history for ideas; process of "getting to Denmark" took centuries in NW Euro. Try to replicate and don't expect fast results.

Trust and Bribery: The Role of the Quid Pro Quo and the Link with Crime: http://www.nber.org/papers/w10510
I study data on bribes actually paid by individuals to public officials, viewing the results through a theoretical lens that considers the implications of trust networks. A bond of trust may permit an implicit quid pro quo to substitute for a bribe, which reduces corruption. Appropriate networks are more easily established in small towns, by long-term residents of areas with many other long-term residents, and by individuals in regions with many residents their own age. I confirm that the prevalence of bribery is lower under these circumstances, using the International Crime Victim Surveys. I also find that older people, who have had time to develop a network, bribe less. These results highlight the uphill nature of the battle against corruption faced by policy-makers in rapidly urbanizing countries with high fertility. I show that victims of (other) crimes bribe all types of public officials more than non-victims, and argue that both their victimization and bribery stem from a distrustful environment.

Kinship, Fractionalization and Corruption: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2847222
The theory of kin selection provides a straightforward justification for norms of nepotism and favoritism among relatives; more subtly, it also implies that the returns to such norms may be influenced by mating practices. Specifically, in societies with high levels of sub-ethnic fractionalization, where endogamous (and consanguineous) mating within kin-group, clan and tribe increases the local relatedness of individuals, the relative returns to norms of nepotism and favoritism are high. In societies with exogamous marriage practices, the relative returns to norms of impartial cooperation with non-relatives and strangers are increased. Using cross-country and within-country regression analyses and a cross-country lab experiment, we provide evidence for this account.

Ethnic favouritism: Not just an African phenomenon: http://voxeu.org/article/ethnic-favouritism-not-just-african-phenomenon
Ethnic favouritism is a global phenomenon
We find robust evidence for ethnic favouritism – ethnographic regions that are the current political leader’s ethnic homeland enjoy 7%-10% more intense night-time light, corresponding to 2%-3% higher regional GDP. Furthermore, we show that ethnic favouritism extends to ethnic groups that are linguistically close to the political leader.

Most significantly, these effects are as strong outside of Africa as they are within, challenging the preconception that ethnic favouritism is mainly or even entirely a sub-Saharan African phenomenon. For example, Bolivian presidents tended to favour areas populated by European descendants and Criollos, largely at the expense of the indigenous population. After the election of Evo Morales, a member of the indigenous Aymara ethnic group, luminosity in indigenous areas grew substantially. Notably, critics suggest Morales gave special attention to the interests and values of the Aymara at the expense of other indigenous peoples (e.g. Albro 2010, Postero 2010).

Democratisation is not a panacea
Our results further suggest that, while democratic institutions have a weak tendency to reduce ethnic favouritism, their effect is limited. In particular, a change from autocratic regimes to weak democracies does not seem to reduce ethnic favouritism (and may even increase it).

This result could in part be explained by political leaders’ motivations for engaging in ethnic favouritism. We find that the practice intensifies around election years in which the political leader's office is contested, suggesting that leaders may target policies towards their ethnic homelands to improve their re-election prospects, and not solely out of co-ethnic altruism. To the extent that political leaders engage in ethnic favouritism for electoral purposes, democratisation is not likely to be effective in curbing the practice.

Though Facebook will occasionally talk about the transparency of governments and corporations, what it really wants to advance is the transparency of individuals – or what it has called, at various moments, “radical transparency” or “ultimate transparency”. The theory holds that the sunshine of sharing our intimate details will disinfect the moral mess of our lives. With the looming threat that our embarrassing information will be broadcast, we’ll behave better. And perhaps the ubiquity of incriminating photos and damning revelations will prod us to become more tolerant of one another’s sins. “The days of you having a different image for your work friends or co-workers and for the other people you know are probably coming to an end pretty quickly,” Zuckerberg has said. “Having two identities for yourself is an example of a lack of integrity.”

The point is that Facebook has a strong, paternalistic view on what’s best for you, and it’s trying to transport you there. “To get people to this point where there’s more openness – that’s a big challenge. But I think we’ll do it,” Zuckerberg has said. He has reason to believe that he will achieve that goal. With its size, Facebook has amassed outsized powers. “In a lot of ways Facebook is more like a government than a traditional company,” Zuckerberg has said. “We have this large community of people, and more than other technology companies we’re really setting policies.”

- HENRY DAMPIER

The key value of privacy, which tends to be lost amid all the technological babble about the concept, is that it makes social cooperation more feasible among people who disagree, have different tastes, or hold fundamentally different points of view.

...

This is especially an issue with democracy. The reason why the United States has anonymous voting laws is because without them, people are persecuted for their party affiliations by people with rival party loyalties. This being forgotten, the age of Facebook and similar technologies has opened up ordinary people to this sort of ordinary political persecution. Moderating influences like that of the respect for privacy put a brake on some of the more rapacious, violent aspects of party politics.

...

The impulse for this comes less from the availability of the technology, and more because of the preexisting social trends. When there is a family life, there is communication and closeness within the family.

With more people living without a family life, they go to the public square to get their needs for social validation met. This doesn’t work so well, because strangers have no skin in the life of the atomized individual that only exists as an image on their screens.
study  org:nat  polisci  sociology  government  corruption  law  leviathan  crooked  world  developing-world  policy  cooperate-defect  free-riding  cultural-dynamics  anthropology  multi  twitter  social  commentary  scitariat  public-goodish  institutions  behavioral-econ  org:sci  summary  n-factor  trust  media  social-norms  spreading  equilibrium  🎩  🌞  broad-econ  GT-101  economics  growth-econ  org:edu  microfoundations  open-closed  leaks  canada  anglo  migration  pdf  polarization  longitudinal  social-capital  network-structure  cohesion  social-structure  axelrod  anomie  tribalism  group-level  kinship  econometrics  field-study  sapiens  stylized-facts  divergence  cliometrics  anglosphere  incentives  biodet  the-great-west-whale  populism  roots  putnam-like  behavioral-gen  sex  chart  wealth-of-nations  political-econ  polanyi-marx  eden  path-dependence  variance-components  correlation  assimilation  ethics  org:ngo  econotariat  africa  ethnocentrism  race  democracy  latin-america  asia  news  org:lite
july 2017 by nhaliday
The Greatest Generation | West Hunter
But when you consider that people must have had 48 chromosomes back then, rather than the current measly 46, much is explained.

Theophilus Painter, a prominent cytologist, had investigated human chromosome number in 1923. He thought that there were 24 in sperm cells, resulting in a count of 48, which is entirely reasonable. That is definitely the case for all our closest relatives (chimpanzees, bonobos, gorillas, and orangutans).

The authorities say that Painter made a mistake, and that humans always had 46 chromosomes. But then, for 30 years after Painter’s work, the authorities said that people had 48.  Textbooks in genetics continued to say that Man has 48 chromosomes up until the mid 1950s.  Many cytologists and geneticists studied human chromosomes during that period, but they knew that there were 48, and that’s what they saw. Now they know that there are 46, and that’s what every student sees.

Either the authorities are fallible and most people are sheep, or human chromosome number actually changed sometime after World War II.  No one could believe the first alternative: it would hurt our feelings, and therefore cannot be true.  No, we have a fascinating result: people today are fundamentally different from the Greatest Generation, biologically different: we’re two chromosomes shy of a load. So it’s not our fault!

funny comment: https://westhunt.wordpress.com/2014/11/19/the-greatest-generation/#comment-62920
“some social environments are better than others at extracting the best from its people”

That’s very true – we certainly don’t seem to be doing a very good job of it. It’s a minor matter, but threatening brilliant engineers with death or professional ruin because of their sexist sartorial choices probably isn’t helping…

I used to do some engineering, and if someone had tried that on me, I’d have told him to go fuck himself. Is that a lost art?

https://www.theguardian.com/science/2014/nov/14/rosetta-comet-dr-matt-taylor-apology-sexist-shirt
west-hunter  scitariat  stories  history  mostly-modern  usa  world-war  pre-ww2  science  bounded-cognition  error  being-right  info-dynamics  genetics  genomics  bio  troll  multi  news  org:sci  popsci  nature  evolution  the-trenches  poast  rant  aphorism  gender  org:lite  alt-inst  tip-of-tongue  org:anglo
july 2017 by nhaliday
World Without Stars | West Hunter
Seems to me that forming in a galaxy might give a solar system enough heavy elements, while being flung into the intergalactic deeps would protect you from cosmic catastrophes like gamma-ray bursts.   Such stars might be good homes for complex life, especially a few billion years ago.

Interstellar travel is hard enough for us, but for these guys, it would be a bitch. That first step is a doozy.
west-hunter  scitariat  speculation  ideas  commentary  study  summary  news  org:nat  org:sci  space  xenobio  death  anthropic  fermi
june 2017 by nhaliday
On Pinkglossianism | Wandering Near Sawtry
Steven Pinker is not wrong to say that some things have got better – or even that some things are getting better. We live longer. We have more food. We have more medicine. We have more free time. We have less chance of dying at another’s hands. My main objection to his arguments is not that some things have got worse as well (family life, for example, or social trust). It is not that he emphasises proportion when scale is more significant (such as with animal suffering). It is the fragility of these peaceful, prosperous conditions.

Antibiotics have made us healthier but antibiotic resistance threatens to plunge us into epidemics. Globalisation has made us richer but is also a powder-keg of cultural unease. Industrialisation has brought material wealth but it is also damaging the environment. Nuclear weapons have averted international conflict but it would only take one error for them to wreak havoc.

At his best, Pinker reminds us of how much we have to treasure, then. At his worst, he is like a co-passenger in a car – pointing out the sunny weather and the beautiful surroundings as it hurtles towards the edge of a cliff.

http://takimag.com/article/dusting_off_the_crystal_ball_john_derbyshire/print
http://blogs.discovermagazine.com/gnxp/2011/11/the-new-york-times-on-violence-and-pinker/
albion  rhetoric  contrarianism  critique  pinker  peace-violence  domestication  crime  criminology  trends  whiggish-hegelian  optimism  pessimism  cynicism-idealism  multi  news  org:lite  gnon  isteveish  futurism  list  top-n  eric-kaufmann  dysgenics  nihil  nationalism-globalism  nuclear  robust  scale  risk  gnxp  scitariat  faq  modernity  tetlock  the-bones  paleocon  journos-pundits  org:sci
june 2017 by nhaliday
Electroconvulsive therapy: a crude, controversial out-of-favor treatme – Coyne of the Realm
various evidence that ECT works

I will soon be offering e-books providing skeptical looks at mindfulness and positive psychology, as well as scientific writing courses on the web as I have been doing face-to-face for almost a decade.

https://www.coyneoftherealm.com/collections/frontpage

Mind the Hype: A Critical Evaluation and Prescriptive Agenda for Research on Mindfulness and Meditation: http://journals.sagepub.com/doi/10.1177/1745691617709589
Where's the Proof That Mindfulness Meditation Works?: https://www.scientificamerican.com/article/wheres-the-proof-that-mindfulness-meditation-works1/
scitariat  psychology  cog-psych  psychiatry  medicine  evidence-based  mindful  the-monster  announcement  attention  regularizer  contrarianism  meta-analysis  multi  critique  books  attaq  replication  realness  study  news  org:mag  org:sci  popsci  absolute-relative  backup  intervention  psycho-atoms
june 2017 by nhaliday
An Economic Analysis of the Protestant Reformation
- Ekelund, Hébert, Tollison

This paper seeks to explain the initial successes and failures of Protestantism on economic grounds. It argues that the medieval Roman Catholic Church, through doctrinal manipulation, the exclusion of rivals, and various forms of price discrimination, ultimately placed members seeking the Z good "spiritual services" on the margin of defection. These monopolistic practices encouraged entry by rival firms, some of which were aligned with civil governments. The paper hypothesizes that Protestant entry was facilitated in emergent entrepreneurial societies characterized by the decline of feudalism and relatively unstable distribution of wealth and repressed in more homogeneous, rent-seeking societies that were mostly dissipating rather than creating wealth. In these societies the Roman Church was more able to continue the practice of price discrimination. Informal tests of this proposition are conducted by considering primogeniture and urban growth as proxies for wealth stability.

Causes and Consequences of the Protestant Reformation: https://pseudoerasmus.files.wordpress.com/2017/01/becker-pfaff-rubin-2016.pdf
- Sascha O. Becker, Steven Pfaff, Jared Rubin

The Protestant Reformation is one of the defining events of the last millennium. Nearly 500 years after the Reformation, its causes and consequences have seen a renewed interest in the social sciences. Research in economics, sociology, and political science increasingly uses detailed individual-level, city-level, and regional-level data to identify drivers of the adoption of the Reformation, its diffusion pattern, and its socioeconomic consequences. We take stock of this research, pointing out what we know and what we do not know and suggesting the most promising areas for future research.

Table 1: Studies of the Supply and Demand-Side Factors of the Reformation
Table 2: Studies on the Consequences of the Reformation: Human Capital
Table 3: Studies on the Consequences of the Reformation: Work and Work Ethic
Table 4: Studies on the Consequences of the Reformation: Economic Development
Table 5: Studies on the Consequences of the Reformation: Governance
Table 6: Studies on the “Dark” Consequences of the Reformation

LUTHER AND SULEYMAN: http://www.jstor.org.sci-hub.tw/stable/40506214
- Murat Iyigun

Various historical accounts have suggested that the Ottomans' rise helped the Protestant Reformation as well as its offshoots, such as Zwinglianism, Anabaptism, and Calvinism, survive their infancy and mature. Utilizing a comprehensive data set on violent confrontations for the interval between 1401 and 1700 CE, I show that the incidence of military engagements between the Protestant Reformers and the Counter-Reformation forces between the 1520s and 1650s depended negatively on the Ottomans' military activities in Europe. Furthermore, I document that the impact of the Ottomans on Europe went beyond suppressing ecclesiastical conflicts only: at the turn of the sixteenth century, Ottoman conquests lowered the number of all newly initiated conflicts among the Europeans roughly by 25 percent, while they dampened all longer-running feuds by more than 15 percent. The Ottomans' military activities influenced the length of intra-European feuds too, with each Ottoman-European military engagement shortening the duration of intra-European conflicts by more than 50 percent. Thus, while the Protestant Reformation might have benefited from - and perhaps even capitalized on - the Ottoman advances in Europe, the latter seems to have played some role in reducing conflicts within Europe more generally.

Religious Competition and Reallocation: The Political Economy of Secularization in the Protestant Reformation: http://www.jeremiahdittmar.com/files/RRR_20170919.pdf
- Davide Cantoni, Jeremiah Dittmar, Noam Yuchtman*

Using novel microdata, we document an unintended, first-order consequence of the Protestant Reformation: a massive reallocation of resources from religious to secular purposes. To understand this process, we propose a conceptual framework in which the introduction of religious competition shifts political markets where religious authorities provide legitimacy to rulers in exchange for control over resources. Consistent with our framework, religious competition changed the balance of power between secular and religious elites: secular authorities acquired enormous amounts of wealth from monasteries closed during the Reformation, particularly in Protestant regions. This transfer of resources had important consequences. First, it shifted the allocation of upper-tail human capital. Graduates of Protestant universities increasingly took secular, especially administrative, occupations. Protestant university students increasingly studied secular subjects, especially degrees that prepared students for public sector jobs, rather than church sector-specific theology. Second, it affected the sectoral composition of fixed investment. Particularly in Protestant regions, new construction shifted from religious toward secular purposes, especially the building of palaces and administrative buildings, which reflected the increased wealth and power of secular lords. Reallocation was not driven by pre-existing economic or cultural differences. Our findings indicate that the Reformation played an important causal role in the secularization of the West.

look at Figure 4, holy shit

History: Science and the Reformation: http://www.nature.com/nature/journal/v550/n7677/full/550454a.html?WT.mc_id=TWT_NatureNews&sf126429621=1
The scientific and religious revolutions that began 500 years ago were not causally related, but were both stimulated by printing, argues David Wootton.
https://archive.is/JElPv
No, the Reformation did not cause the scientific revolution. Nice brief article. 👍

No RCT = No causal claims, for or against ;)
Though I'm open to a regression discontinuity design! cc: @pseudoerasmus
pdf  study  economics  growth-econ  broad-econ  history  medieval  early-modern  religion  christianity  protestant-catholic  revolution  institutions  cliometrics  🎩  europe  the-great-west-whale  chart  roots  entrepreneurialism  wealth-of-nations  rent-seeking  inequality  market-power  industrial-org  political-econ  anglosphere  sociology  polisci  egalitarianism-hierarchy  flexibility  supply-demand  models  analysis  path-dependence  divergence  leviathan  theos  enlightenment-renaissance-restoration-reformation  cultural-dynamics  s:*  multi  pseudoE  piracy  conquest-empire  war  islam  mediterranean  eastern-europe  incentives  modernity  north-weingast-like  open-closed  MENA  time  density  unintended-consequences  correlation  article  survey  marginal  equilibrium  competition  distribution  wealth  comparison  things  homo-hetero  discrimination  legacy  urban  trust  corruption  morality  ethics  n-factor  diversity  redistribution  welfare-state  flux-stasis  data  scale  causation  endo-exo  natural-experiment  meta-analysis  list  education  hum
may 2017 by nhaliday
Genetically engineered humans will arrive sooner than you think. And we're not ready. - Vox
lol "epigenetics" makes an appearance ofc

https://www.theatlantic.com/science/archive/2017/06/the-moral-question-that-stanfords-bioengineering-students-get/531876/

For now, that’s prohibitively expensive, but it won’t always be that way. In 2003, it cost 4 dollars to press one of the keys on Endy’s hypothetical synthesizer. This month, it costs just two cents—a 200-fold decrease in price in just 14 years. In the same time frame, the cost of tuition at Stanford has doubled, and is now around $50,000. Given all of that, the first question that Stanford’s budding bioengineers get is this:

At what point will the cost of printing DNA to create a human equal the cost of teaching a student in Stanford?
And the answer is: 19 years from today.

But the follow-up question is a little more complicated:

If you and your future partner are planning to have kids, would you start saving money for college tuition, or for printing the genome of your offspring?
The question tends to split students down the line, says Endy. About 60 percent say that printing a genome is wrong, and flies in the face of what it means to be a parent. They prize the special nature of education and would opt to save for the tuition. But around 40 percent of the class will say that the value of education may change in the future, and if genetic technology becomes mature, and allows them to secure advantages for themselves and their lineage, they might as well do that.
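The arithmetic behind the “19 years” answer can be sketched. Assuming a roughly 3-billion-base genome, the quoted 2 cents per base, a steady 200-fold price decline every 14 years, and a ballpark 4-year tuition bill (these inputs are all assumptions; the article doesn’t show its own), the crossover lands in the same ballpark:

```python
import math

BASES = 3e9                 # assumed genome length in bases
COST_PER_BASE = 0.02        # quoted: 2 cents per base today
DECLINE_FACTOR = 200        # quoted: 200-fold price drop...
DECLINE_YEARS = 14          # ...over 14 years
TUITION_TOTAL = 200_000     # assumed 4-year bill, ~4 x $50k/year

genome_cost_today = BASES * COST_PER_BASE  # ~$60 million

# Years until the exponential price decline brings genome-printing
# cost down to the tuition bill
annual_factor = DECLINE_FACTOR ** (1 / DECLINE_YEARS)
years = math.log(genome_cost_today / TUITION_TOTAL) / math.log(annual_factor)
```

Under these assumptions the crossover comes out around 15 years; slightly different inputs for genome size or tuition shift it toward Endy’s 19.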

https://www.nytimes.com/2016/05/14/science/synthetic-human-genome.html
http://www.nature.com/news/plan-to-synthesize-human-genome-triggers-mixed-response-1.20028

https://ipscell.com/2017/06/crispr-human-genetic-modification-a-needed-course-correction/
news  org:data  org:lite  enhancement  biotech  longevity  genetics  CRISPR  epigenetics  westminster  morality  ethics  inequality  futurism  patho-altruism  speedometer  multi  org:mag  stanford  higher-ed  values  poll  elite  org:nat  org:sci  frontier  genomics  🌞  🔬  current-events  org:rec  science  events  announcement  scitariat
may 2017 by nhaliday
Secular decline in testosterone levels - Rogue Health and Fitness
A Population-Level Decline in Serum Testosterone Levels in American Men: http://sci-hub.tw/10.1210/jc.2006-1375
Secular trends in sex hormones and fractures in men and women: http://www.eje-online.org/content/166/5/887.full.pdf
https://archive.is/dcruu
Small n and older sample, but interesting that while testosterone decreases have been large for men they’ve been even larger (in % terms) for women; wonder if this contributes to declining pregnancy and sexual frequency, rising depression.

https://www.labcorp.com/assets/11476
http://www.theamericanconservative.com/articles/sperm-killers-and-rising-male-infertility/
https://www.theguardian.com/lifeandstyle/2017/jul/25/sperm-counts-among-western-men-have-halved-in-last-40-years-study
https://www.weforum.org/agenda/2017/08/most-men-in-the-us-and-europe-could-be-infertile-by-2060
Strangelove: https://youtu.be/N1KvgtEnABY?t=67

https://www.scientificamerican.com/article/sperm-count-dropping-in-western-world/
https://news.ycombinator.com/item?id=14855796
https://news.ycombinator.com/item?id=14857588
People offering human-centric explanations like cell phones: Note also that the sperm quality of dogs has decreased 30% since 1988.

https://news.ycombinator.com/item?id=20636757

mendelian rand.:
https://www.ncbi.nlm.nih.gov/pubmed/28448539
1 SD genetically instrumented increase in BMI was associated with a 0.25 SD decrease in serum testosterone
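A toy sketch of the Wald-ratio logic behind such "genetically instrumented" estimates, on synthetic data (the effect sizes and the simple one-variant setup are illustrative assumptions, not the study's actual data or method):

```python
import numpy as np

# Toy Mendelian-randomization sketch (synthetic data, not the study's).
# A genetic score G shifts BMI (X); an unobserved confounder U distorts
# the observational X-Y slope but leaves the G-Y path clean, so the
# Wald ratio slope(G,Y)/slope(G,X) recovers the causal effect.
rng = np.random.default_rng(0)
n = 200_000
g = rng.binomial(2, 0.4, n).astype(float)  # instrument: allele count
u = rng.normal(size=n)                     # unobserved confounder
x = 0.3 * g + u + rng.normal(size=n)       # "BMI" (SD-ish units)
y = -0.25 * x + u + rng.normal(size=n)     # "testosterone"; true effect -0.25

def slope(a, b):
    # OLS slope of b regressed on a
    return np.cov(a, b)[0, 1] / np.var(a)

naive = slope(x, y)               # confounded: comes out positive here
wald = slope(g, y) / slope(g, x)  # instrumented: recovers ~ -0.25
print(f"naive OLS {naive:+.2f}  vs  Wald ratio {wald:+.2f}")
```

The point of the contrast: the raw BMI-testosterone slope is pushed the wrong way by the confounder, while the genotype-based ratio is not, which is why an MR estimate like the 0.25 SD figure is read as causal rather than merely correlational.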

http://www.pnas.org/content/115/4/E715.full

Tucker Carlson: "Men Seem To Be Becoming Less Male": https://www.realclearpolitics.com/video/2018/03/08/tucker_carlson_men_seem_to_be_becoming_less_male.html
Carlson interviewed Dr. Jordan Peterson, who blamed an "insidious" movement driven by the "radical left" that teaches there is a problem of "toxic masculinity." He said ideological policies that focus on "de-emphasizing masculinity" may be part of the problem.

...

Those are the numbers. They paint a very clear picture: American men are failing, in body, mind and spirit. This is a crisis. Yet our leaders pretend it’s not happening. They tell us the opposite is true: Women are victims, men are oppressors. To question that assumption is to risk punishment. Even as women far outpace men in higher education, virtually every college campus supports a women’s studies department, whose core goal is to attack male power. Our politicians and business leaders internalize and amplify that message. Men are privileged. Women are oppressed. Hire and promote and reward accordingly.

https://pinboard.in/u:nhaliday/b:bd7b0a50d741
But it also hints at an almost opposite take: average testosterone levels have been falling for decades, so at this point these businessmen would be the only “normal” (by 1950s standards) men out there, and everyone else would be unprecedentedly risk-averse and boring.
org:health  fitsci  health  endocrine  trends  public-health  science-anxiety  gender  commentary  multi  study  pdf  data  piracy  white-paper  gnon  news  org:mag  right-wing  fertility  dysgenics  drugs  psychiatry  stress  politics  government  hypochondria  idk  embodied  FDA  externalities  epidemiology  video  film  classic  org:lite  org:anglo  genetics  endo-exo  mendel-randomization  obesity  fitness  scitariat  🌞  medicine  correlation  intervention  causation  GWAS  environmental-effects  hn  org:sci  popsci  model-organism  embodied-cognition  hmm  org:davos  communism  memes(ew)  fluid  endogenous-exogenous  roots  explanans  org:local  summary  modernity  rot  org:nat  chart  the-bones  albion  canada  journos-pundits  philosophy  iq  coming-apart  malaise  gender-diff  attention  disease  opioids  death  interview  current-events  tv  higher-ed  labor  management  compensation  grad-school  law  twitter  social  backup  ratty  unaffiliated  yvain  ssc  get-fit
may 2017 by nhaliday
Environmental Cancer? | In the Pipeline
And while I take the point that endocrine disruptors and the like need to be watched (and that we really do need to study these things more), I don’t see why the alarm bells need to be rung quite this loudly.
scitariat  org:nat  commentary  critique  expert  chemistry  endocrine  health  medicine  cancer  embodied-street-fighting  org:sci  science-anxiety  regularizer  public-health  expert-experience
may 2017 by nhaliday
Sequencing a genome for less than the cost of an X-ray? Not quite yet
A \$100 genome will cost \$100 in the same way that the \$1,000 genome costs \$1,000. As in, it won’t, at least not soon. “The \$1,000 genome” — which sequencer makers began promising about five years ago — “costs us \$3,000,” said Richard Gibbs, founder of the Baylor College of Medicine Human Genome Sequencing Center and one of the leaders of the original Human Genome Project in the 1990s.
news  org:sci  scaling-up  data  scale  genetics  genomics  biotech  money  efficiency  bioinformatics  cost-benefit  frontier  speedometer  measurement
april 2017 by nhaliday
The Future of the Global Muslim Population | Pew Research Center
http://www.pewforum.org/2011/01/27/future-of-the-global-muslim-population-regional-europe/
http://www.pewforum.org/2011/01/27/the-future-of-the-global-muslim-population/#the-americas

Europe’s Growing Muslim Population: http://www.pewforum.org/2017/11/29/europes-growing-muslim-population/

https://www.gnxp.com/WordPress/2017/11/30/crescent-over-the-north-sea/
Pew has a nice new report up, Europe’s Growing Muslim Population. Though it is important to read the whole thing, including the methods.

I laugh when people take projections of the year 2100 seriously. That’s because we don’t have a good sense of what might occur over 70+ years (read social and demographic projections from the 1940s and you’ll understand what I mean). Thirty years though is different. In the year 2050 children born today, such as my youngest son, will be entering the peak of their powers.

[cf.: http://blogs.discovermagazine.com/gnxp/2012/12/population-projects-50-years-into-the-future-fantasy/]

...

The problem with this is that there is a wide range of religious commitment and identification across Europe’s Muslim communities. On the whole, they are more religiously observant than non-Muslims in their nations of residence, but, for example, British Muslims are consistently more religious than French Muslims on surveys (or express views consistent with greater religious conservatism).

Poll item — “People in Western countries are violent” (% agreeing): 29 / 52 / 34
lmao that's just ridiculous from the UK

https://www.gnxp.com/WordPress/2006/03/03/poll-of-british-muslims/
In short, read the poll closely: this isn’t a black & white community. It seems clear that some people simultaneously support Western society on principle while leaning toward separatism, while a subset, perhaps as large as 10%, is violently and radically hostile to the surrounding society.
news  org:data  data  analysis  database  religion  islam  population  demographics  fertility  world  developing-world  europe  usa  MENA  prediction  trends  migration  migrant-crisis  asia  africa  chart  multi  the-bones  white-paper  EU  gnxp  scitariat  poll  values  descriptive  hypocrisy  britain  gallic  germanic  pro-rata  maps  visualization  counterfactual  assimilation  iraq-syria  india  distribution  us-them  tribalism  peace-violence  order-disorder  terrorism  events  scale  meta:prediction  accuracy  time  org:sci
april 2017 by nhaliday
Harvest of Fears: Farm-Raised Fish May Not Be Free of Mercury and Other Pollutants - Scientific American
In fact, studies have shown that farm-raised fish have more toxins overall than their wild-caught cousins, though exceptions of course do exist.
news  org:mag  org:sci  q-n-a  data  health  food  oceans  cooking  hmm  hypochondria  human-bean  comparison
april 2017 by nhaliday
Spiders do not seem to be cognitively limited, displaying a large diversity of learning processes, from habituation to contextual learning, including a sense of numerosity. To tease apart the central from the extended cognition, we apply the mutual manipulability criterion, testing the existence of reciprocal causal links between the putative elements of the system. We conclude that the web threads and configurations are integral parts of the cognitive systems. The extension of cognition to the web helps to explain some puzzling features of spider behaviour and seems to promote evolvability within the group, enhancing innovation through cognitive connectivity to variable habitat features. Graded changes in relative brain size could also be explained by outsourcing information processing to environmental features. More generally, niche-constructed structures emerge as prime candidates for extending animal cognition, generating the selective pressures that help to shape the evolving cognitive system.

https://www.quantamagazine.org/the-thoughts-of-a-spiderweb-20170523/
study  cocktail  bio  nature  neuro  eden  evolution  intelligence  exocortex  retrofit  deep-materialism  quantitative-qualitative  multi  org:mag  org:sci  popsci  summary  nibble  org:inst
april 2017 by nhaliday
Genome-Wide Association Study Reveals Multiple Loci Influencing Normal Human Facial Morphology
http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0099009
http://www.biorxiv.org/content/early/2017/09/07/185330
https://www.technologyreview.com/s/608813/does-your-genome-predict-your-face-not-quite-yet/

http://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.1000451
Domestic dogs exhibit tremendous phenotypic diversity, including a greater variation in body size than any other terrestrial mammal. Here, we generate a high density map of canine genetic variation by genotyping 915 dogs from 80 domestic dog breeds, 83 wild canids, and 10 outbred African shelter dogs across 60,968 single-nucleotide polymorphisms (SNPs). Coupling this genomic resource with external measurements from breed standards and individuals as well as skeletal measurements from museum specimens, we identify 51 regions of the dog genome associated with phenotypic variation among breeds in 57 traits. The complex traits include average breed body size and external body dimensions and cranial, dental, and long bone shape and size with and without allometric scaling. In contrast to the results from association mapping of quantitative traits in humans and domesticated plants, we find that across dog breeds, a small number of quantitative trait loci (≤3) explain the majority of phenotypic variation for most of the traits we studied. In addition, many genomic regions show signatures of recent selection, with most of the highly differentiated regions being associated with breed-defining traits such as body size, coat characteristics, and ear floppiness. Our results demonstrate the efficacy of mapping multiple traits in the domestic dog using a database of genotyped individuals and highlight the important role human-directed selection has played in altering the genetic architecture of key traits in this important species.
study  biodet  sapiens  embodied  GWAS  genetics  multi  regularizer  QTL  sex  developmental  genetic-load  evopsych  null-result  nature  model-organism  genomics  twitter  social  scitariat  discussion  publishing  realness  drama  preprint  debate  critique  news  org:mag  org:sci  org:biz
april 2017 by nhaliday
Epidemiology of autism - Wikipedia
https://spectrumnews.org/news/school-survey-india-reveals-low-autism-prevalence/
This Is How Much of Autism Is Genetic: http://time.com/4956316/how-much-of-autism-is-genetic/
Indeed, when Sandin tracked autism diagnoses over time among the sibling pairs, he found that genetics likely accounts for around 83% of the disorder. That compares to nearly 90% reported in previous studies of twins only. Using the new model, environmental factors probably contribute around 17% to the risk of developing autism.
sapiens  medicine  genetics  variance-components  science-anxiety  psychiatry  disease  neuro  autism  👽  epidemiology  wiki  reference  biodet  paternal-age  behavioral-gen  public-health  multi  news  org:mag  org:sci  india  asia  data  sib-study  study  summary  org:lite
march 2017 by nhaliday
Thursday assorted links - Marginal REVOLUTION
2. “A new study of English spelling practices demonstrates that the way we spell words is much more orderly and self-organizing than previously thought.”

3. Why we cry, and the economics of weeping. And Michael Cannon on the new health care bill.

4. Economic ideas we should forget (keep on clicking through to see the whole list). By no means do I always agree — the Coase theorem??
econotariat  marginal-rev  links  language  news  org:sci  nlp  emergent  history  medieval  early-modern  mostly-modern  economics  error  map-territory  simler  status  signaling  anthropology  evopsych  postrat  current-events
march 2017 by nhaliday
Evolution Runs Faster on Short Timescales | Quanta Magazine
But if more splashes of paint appear on a wall, they will gradually conceal some of the original color beneath new layers. Similarly, evolution and natural selection write over the initial mutations that appear over short timescales. Over millions of years, an A in the DNA may become a T, but in the intervening time it may be a C or a G for a while. Ho believes that this mutational saturation is a major cause of what he calls the time-dependent rate phenomenon.

“Think of it like the stock market,” he said. Look at the hourly or daily fluctuations of Standard & Poor’s 500 index, and it will appear wildly unstable, swinging this way and that. Zoom out, however, and the market appears much more stable as the daily shifts start to average out. In the same way, the forces of natural selection weed out the less advantageous and more deleterious mutations over time.
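Ho's saturation point can be illustrated with a toy simulation (a hypothetical constant mutation rate, not his actual analysis): every site keeps mutating at the same per-step rate, yet the *apparent* rate — observed differences divided by elapsed time — falls at longer timescales because later hits overwrite earlier ones:

```python
import numpy as np

# Toy illustration of mutational saturation: sites mutate at a constant
# rate mu, but the apparent rate (observed differences / elapsed time)
# shrinks over long timescales as new hits overwrite old ones.
rng = np.random.default_rng(1)
n_sites, mu = 100_000, 0.01   # hypothetical per-site, per-step rate

start = rng.integers(0, 4, n_sites)  # 0..3 stand for A, C, G, T
seq = start.copy()
apparent = {}
for t in range(1, 201):
    hit = rng.random(n_sites) < mu   # sites mutating this step
    # shift by 1-3 mod 4: always lands on a different base
    seq[hit] = (seq[hit] + rng.integers(1, 4, hit.sum())) % 4
    if t in (10, 50, 200):
        apparent[t] = np.mean(seq != start) / t

print(apparent)  # apparent rate declines as t grows
```

The measured divergence per unit time is highest over the shortest window and drops steadily, even though the underlying rate `mu` never changes — the same shape as the time-dependent rate phenomenon described in the article.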
news  org:mag  org:sci  evolution  bio  nature  mutation  selection  time  methodology  stylized-facts  genetics  population-genetics  genomics  speed  pigeonhole-markov  bits  nibble  org:inst
march 2017 by nhaliday