
nhaliday : overflow   445

Is the bounty system effective? - Meta Stack Exchange
https://math.meta.stackexchange.com/questions/20155/how-effective-are-bounties
could do some kinda econometric analysis using the data explorer to determine this once and for all: https://pinboard.in/u:nhaliday/b:c0cd449b9e69
maybe some kinda RDD in time, or difference-in-differences?
I don't think answer quality/quantity over time meets the common trend assumption for DD, tho... Questions that eventually receive a bounty are prob higher quality in the first place, and higher-quality questions accumulate more and better answers regardless. Hmm.
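For concreteness, a minimal sketch of what the two-by-two DD regression would look like, on synthetic data with hypothetical column names (this spec inherits exactly the selection worry above; it only identifies the effect if untreated questions are a valid counterfactual):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# synthetic panel: 'treated' = question eventually gets a bounty,
# 'post' = observation falls after the (would-be) bounty date
rng = np.random.default_rng(0)
n = 4000
df = pd.DataFrame({"treated": rng.integers(0, 2, n),
                   "post": rng.integers(0, 2, n)})
df["answers"] = (0.5 + 0.8 * df.treated + 0.2 * df.post
                 + 0.4 * df.treated * df.post      # true "bounty effect"
                 + rng.normal(0, 1, n))

m = smf.ols("answers ~ treated * post", data=df).fit()
print(m.params["treated:post"])   # DD estimate, should recover ~0.4
```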
q-n-a  stackex  forum  community  info-foraging  efficiency  cost-benefit  data  analysis  incentives  attention  quality  ubiquity  supply-demand  multi  math  causation  endogenous-exogenous  intervention  branches  control  tactics  sleuthin  hmm  idk  todo  data-science  overflow  dbs  regression  shift  methodology  econometrics 
november 2019 by nhaliday
exponential function - Feynman's Trick for Approximating $e^x$ - Mathematics Stack Exchange
1. e^2.3 ~ 10
2. e^.7 ~ 2
3. e^x ~ 1+x
e = 2.71828...

errors (absolute, relative):
1. +0.0258, 0.26%
2. -0.0138, -0.68%
3. 1 + x approximates e^x on [-.3, .3] with absolute error < .05, and relative error < 5.6% (3.7% for [0, .3]).
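A sketch of how the three facts compose (my gloss of the trick, not Feynman's own procedure): write x = 2.3a + 0.7b + r with integer a, b and small remainder r, so e^x ≈ 10^a · 2^b · (1 + r):

```python
import math

def feynman_exp(x):
    # peel off multiples of ln 10 ~ 2.3, then ln 2 ~ 0.7,
    # then linearize e^r ~ 1 + r on the small remainder
    a, r = divmod(x, 2.3)
    b, r = divmod(r, 0.7)
    return 10 ** a * 2 ** b * (1 + r)

for x in (1.0, 2.3, 3.3, 4.6):
    print(f"{x}: {feynman_exp(x):8.3f} vs {math.exp(x):8.3f}")
```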
nibble  q-n-a  overflow  math  feynman  giants  mental-math  calculation  multiplicative  AMT  identity  objektbuch  explanation  howto  estimate  street-fighting  stories  approximation  data  trivia  nitty-gritty 
october 2019 by nhaliday
online resources - How to write special set notation by hand? - Mathematics Stack Exchange
Your ℕ is “incorrect” in that a capital N in any serif font has the diagonal thickened, not the verticals. In fact, the rule (in the Latin alphabet) is that negative slopes are thick, positive ones are thin. Verticals are sometimes thin, sometimes thick. Unique exception: Z. Just look in a newspaper at A, V, X, M, and N.
nibble  q-n-a  overflow  math  writing  notetaking  howto  pic  notation  trivia 
october 2019 by nhaliday
The Existential Risk of Math Errors - Gwern.net
How big is this upper bound? Mathematicians have often made errors in proofs. But it’s rarer for ideas to be accepted for a long time and then rejected. We can divide errors into 2 basic cases corresponding to type I and type II errors:

1. Mistakes where the theorem is still true, but the proof was incorrect (type I)
2. Mistakes where the theorem was false, and the proof was also necessarily incorrect (type II)

Before someone comes up with a final answer, a mathematician may have many levels of intuition in formulating & working on the problem, but we’ll consider the final end-product where the mathematician feels satisfied that he has solved it. Case 1 is perhaps the most common case, with innumerable examples; this is sometimes due to mistakes in the proof that anyone would agree is a mistake, but many of these cases are due to changing standards of proof. For example, when David Hilbert discovered errors in Euclid’s proofs which no one noticed before, the theorems were still true, and the gaps more due to Hilbert being a modern mathematician thinking in terms of formal systems (which of course Euclid did not think in). (David Hilbert himself turns out to be a useful example of the other kind of error: his famous list of 23 problems was accompanied by definite opinions on the outcome of each problem and sometimes timings, several of which were wrong or questionable.) Similarly, early calculus used ‘infinitesimals’ which were sometimes treated as being 0 and sometimes treated as an indefinitely small non-zero number; this was incoherent and, strictly speaking, practically all of the calculus results were wrong because they relied on an incoherent concept - but of course the results were some of the greatest mathematical work ever conducted, and when later mathematicians put calculus on a more rigorous footing, they immediately re-derived those results (sometimes with important qualifications), and doubtless as modern math evolves other fields have sometimes needed to go back and clean up the foundations and will in the future.

...

Isaac Newton, incidentally, gave two proofs of the same solution to a problem in probability, one via enumeration and the other more abstract; the enumeration was correct, but the other proof was totally wrong, and this was not noticed for a long time, leading Stigler to remark:

...

TYPE I > TYPE II?
“Lefschetz was a purely intuitive mathematician. It was said of him that he had never given a completely correct proof, but had never made a wrong guess either.”
- Gian-Carlo Rota

Case 2 is disturbing, since it is a case in which we wind up with false beliefs and also false beliefs about our beliefs (we no longer know that we don’t know). Case 2 could lead to extinction.

...

Except, errors do not seem to be evenly & randomly distributed between case 1 and case 2. There seem to be far more case 1s than case 2s, as already mentioned in the early calculus example: far more than 50% of the early calculus results were correct when checked more rigorously. Richard Hamming attributes to Ralph Boas a comment that, while editing Mathematical Reviews, “of the new results in the papers reviewed most are true but the corresponding proofs are perhaps half the time plain wrong”.

...

Gian-Carlo Rota gives us an example with Hilbert:

...

Olga labored for three years; it turned out that all mistakes could be corrected without any major changes in the statement of the theorems. There was one exception, a paper Hilbert wrote in his old age, which could not be fixed; it was a purported proof of the continuum hypothesis, you will find it in a volume of the Mathematische Annalen of the early thirties.

...

Leslie Lamport advocates for machine-checked proofs and a more rigorous style of proofs similar to natural deduction, noting a mathematician acquaintance guesses at a broad error rate of 1/3 and that he routinely found mistakes in his own proofs and, worse, believed false conjectures.

[more on these "structured proofs":
https://academia.stackexchange.com/questions/52435/does-anyone-actually-publish-structured-proofs
https://mathoverflow.net/questions/35727/community-experiences-writing-lamports-structured-proofs
]

We can probably add software to that list: early software engineering work found that, dismayingly, bug rates seem to be simply a function of lines of code, and one would expect diseconomies of scale. So one would expect that in going from the ~4,000 lines of code of the Microsoft DOS operating system kernel to the ~50,000,000 lines of code in Windows Server 2003 (with full systems of applications and libraries being even larger: the comprehensive Debian repository in 2007 contained ~323,551,126 lines of code) that the number of active bugs at any time would be… fairly large. Mathematical software is hopefully better, but practitioners still run into issues (eg Durán et al 2014, Fonseca et al 2017) and I don’t know of any research pinning down how buggy key mathematical systems like Mathematica are or how much published mathematics may be erroneous due to bugs. This general problem led to predictions of doom and spurred much research into automated proof-checking, static analysis, and functional languages.

[related:
https://mathoverflow.net/questions/11517/computer-algebra-errors
I don't know any interesting bugs in symbolic algebra packages but I know a true, enlightening and entertaining story about something that looked like a bug but wasn't.

Define sinc x = (sin x)/x.

Someone found the following result in an algebra package: ∫₀^∞ sinc x dx = π/2
They then found the following results:

...

So of course when they got:

∫₀^∞ sinc x · sinc(x/3) · sinc(x/5) ⋯ sinc(x/15) dx = (467807924713440738696537864469/935615849440640907310521750000) π

hmm:
Which means that nobody knows Fourier analysis nowadays. Very sad and discouraging story... – fedja Jan 29 '10 at 18:47
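For the curious, a sketch that reproduces the pattern numerically. It leans on mpmath's quadosc for the oscillatory tail (my setup; treat the digits as approximate): every product up to sinc(x/13) should come out as π/2, with a shortfall of roughly 2.3e-11·π appearing at sinc(x/15):

```python
from functools import reduce
from mpmath import mp, mpf, sin, pi, inf, quadosc

mp.dps = 30

def sinc(x):
    return sin(x) / x if x != 0 else mpf(1)

def borwein(n):
    # integral over [0, inf) of sinc(x) * sinc(x/3) * ... * sinc(x/(2n+1))
    f = lambda x: reduce(lambda acc, k: acc * sinc(x / (2 * k + 1)),
                         range(n + 1), mpf(1))
    return quadosc(f, [0, inf], period=2 * pi)

for n in range(8):
    print(2 * n + 1, borwein(n) - pi / 2)   # ~0 until the last line
```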

--

Because the most popular systems are all commercial, they tend to guard their bug database rather closely -- making them public would seriously cut their sales. For example, for the open source project Sage (which is quite young), you can get a list of all the known bugs from this page. 1582 known issues on Feb.16th 2010 (which includes feature requests, problems with documentation, etc).

That is an order of magnitude less than the commercial systems. And it's not because it is better, it is because it is younger and smaller. It might be better, but until SAGE does a lot of analysis (about 40% of CAS bugs are there) and a fancy user interface (another 40%), it is too hard to compare.

I once ran a graduate course whose core topic was studying the fundamental disconnect between the algebraic nature of CAS and the analytic nature of what it is mostly used for. There are issues of logic -- CASes work more or less in an intensional logic, while most of analysis is stated in a purely extensional fashion. There is no well-defined 'denotational semantics' for expressions-as-functions, which strongly contributes to the deeper bugs in CASes.]

...

Should such widely-believed conjectures as P≠NP or the Riemann hypothesis turn out to be false, then because they are assumed by so many existing proofs, a far larger math holocaust would ensue - and our previous estimates of error rates will turn out to have been substantial underestimates. But it may be a cloud with a silver lining, if it doesn’t come at a time of danger.

https://mathoverflow.net/questions/338607/why-doesnt-mathematics-collapse-down-even-though-humans-quite-often-make-mista

more on formal methods in programming:
https://www.quantamagazine.org/formal-verification-creates-hacker-proof-code-20160920/
https://intelligence.org/2014/03/02/bob-constable/

https://softwareengineering.stackexchange.com/questions/375342/what-are-the-barriers-that-prevent-widespread-adoption-of-formal-methods
Update: measured effort
In the October 2018 issue of Communications of the ACM there is an interesting article about Formally verified software in the real world with some estimates of the effort.

Interestingly (based on OS development for military equipment), it seems that producing formally proved software requires 3.3 times more effort than with traditional engineering techniques. So it's really costly.

On the other hand, it requires 2.3 times less effort to get high security software this way than with traditionally engineered software if you add the effort to make such software certified at a high security level (EAL 7). So if you have high reliability or security requirements there is definitively a business case for going formal.

WHY DON'T PEOPLE USE FORMAL METHODS?: https://www.hillelwayne.com/post/why-dont-people-use-formal-methods/
You can see examples of how all of these look at Let’s Prove Leftpad. HOL4 and Isabelle are good examples of “independent theorem” specs, SPARK and Dafny have “embedded assertion” specs, and Coq and Agda have “dependent type” specs.

If you squint a bit it looks like these three forms of code spec map to the three main domains of automated correctness checking: tests, contracts, and types. This is not a coincidence. Correctness is a spectrum, and formal verification is one extreme of that spectrum. As we reduce the rigour (and effort) of our verification we get simpler and narrower checks, whether that means limiting the explored state space, using weaker types, or pushing verification to the runtime. Any means of total specification then becomes a means of partial specification, and vice versa: many consider Cleanroom a formal verification technique, which primarily works by pushing code review far beyond what’s humanly possible.

...

The question, then: “is 90/95/99% correct significantly cheaper than 100% correct?” The answer is very yes. We all are comfortable saying that a codebase we’ve well-tested and well-typed is mostly correct modulo a few fixes in prod, and we’re even writing more than four lines of code a day. In fact, the vast… [more]
ratty  gwern  analysis  essay  realness  truth  correctness  reason  philosophy  math  proofs  formal-methods  cs  programming  engineering  worse-is-better/the-right-thing  intuition  giants  old-anglo  error  street-fighting  heuristic  zooming  risk  threat-modeling  software  lens  logic  inference  physics  differential  geometry  estimate  distribution  robust  speculation  nonlinearity  cost-benefit  convexity-curvature  measure  scale  trivia  cocktail  history  early-modern  europe  math.CA  rigor  news  org:mag  org:sci  miri-cfar  pdf  thesis  comparison  examples  org:junk  q-n-a  stackex  pragmatic  tradeoffs  cracker-prog  techtariat  invariance  DSL  chart  ecosystem  grokkability  heavyweights  CAS  static-dynamic  lower-bounds  complexity  tcs  open-problems  big-surf  ideas  certificates-recognition  proof-systems  PCP  mediterranean  SDP  meta:prediction  epistemic  questions  guessing  distributed  overflow  nibble  soft-question  track-record  big-list  hmm  frontier  state-of-art  move-fast-(and-break-things)  grokkability-clarity  technical-writing  trust 
july 2019 by nhaliday
data structures - Why are Red-Black trees so popular? - Computer Science Stack Exchange
- AVL trees have smaller average depth than red-black trees, and thus searching for a value in AVL tree is consistently faster.
- Red-black trees make fewer structural changes to balance themselves than AVL trees, which could make them potentially faster for insert/delete. I'm saying potentially, because this would depend on the cost of the structural change to the tree, as this will depend a lot on the runtime and implementation (might also be completely different in a functional language when the tree is immutable?)

There are many benchmarks online that compare AVL and Red-black trees, but what struck me is that my professor basically said, that usually you'd do one of two things:
- Either you don't really care that much about performance, in which case the 10-20% difference of AVL vs Red-black in most cases won't matter at all.
- Or you really care about performance, in which case you'd ditch both AVL and red-black trees, and go with B-trees, which can be tweaked to work much better (or (a,b)-trees; I'm gonna put all of those in one basket.)

--

> For some kinds of binary search trees, including red-black trees but not AVL trees, the "fixes" to the tree can fairly easily be predicted on the way down and performed during a single top-down pass, making the second pass unnecessary. Such insertion algorithms are typically implemented with a loop rather than recursion, and often run slightly faster in practice than their two-pass counterparts.

So a red-black tree insert can be implemented without recursion. On some CPUs recursion is very expensive if you overrun the function-call cache (e.g. SPARC, due to its use of register windows).

--

There are some cases where you can't use B-trees at all.

One prominent case is std::map from the C++ STL. The standard requires that insert does not invalidate existing iterators.

...

I also believe that "single pass tail recursive" implementation is not the reason for red black tree popularity as a mutable data structure.

First of all, stack depth is irrelevant here, because (given log n height) you would run out of the main memory before you run out of stack space. Jemalloc is happy with preallocating worst case depth on the stack.
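Back-of-envelope supporting that point: a red-black tree of height h contains at least 2^(h/2) − 1 internal nodes (at least half the nodes on any root-to-leaf path are black), so memory explodes long before recursion depth does:

```python
# nodes implied by a given red-black tree height; ~48 bytes/node is a guess
for h in (40, 64, 80):
    n = 2 ** (h // 2) - 1
    print(f"height {h}: >= {n:.2e} nodes, ~{n * 48:.2e} bytes")
```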
nibble  q-n-a  overflow  cs  algorithms  tcs  data-structures  functional  orders  trees  cost-benefit  tradeoffs  roots  explanans  impetus  performance  applicability-prereqs  programming  pls  c(pp)  ubiquity 
june 2019 by nhaliday
What's the expected level of paper for top conferences in Computer Science - Academia Stack Exchange
Top. The top level.

My experience on program committees for STOC, FOCS, ITCS, SODA, SOCG, etc., is that there are FAR more submissions of publishable quality than can be accepted into the conference. By "publishable quality" I mean a well-written presentation of a novel, interesting, and non-trivial result within the scope of the conference.

...

There are several questions that come up over and over in the FOCS/STOC review cycle:

- How surprising / novel / elegant / interesting is the result?
- How surprising / novel / elegant / interesting / general are the techniques?
- How technically difficult is the result? Ironically, FOCS and STOC committees have a reputation for ignoring the distinction between trivial (easy to derive from scratch) and nondeterministically trivial (easy to understand after the fact).
- What is the expected impact of this result? Is this paper going to change the way people do theoretical computer science over the next five years?
- Is the result of general interest to the theoretical computer science community? Or is it only of interest to a narrow subcommunity? In particular, if the topic is outside the STOC/FOCS mainstream—say, for example, computational topology—does the paper do a good job of explaining and motivating the results to a typical STOC/FOCS audience?
nibble  q-n-a  overflow  academia  tcs  cs  meta:research  publishing  scholar  lens  properties  cost-benefit  analysis  impetus  increase-decrease  soft-question  motivation  proofs  search  complexity  analogy  problem-solving  elegance  synthesis  hi-order-bits  novelty  discovery 
june 2019 by nhaliday
classification - ImageNet: what is top-1 and top-5 error rate? - Cross Validated
Now, in the case of top-1 score, you check if the top class (the one having the highest probability) is the same as the target label.

In the case of top-5 score, you check if the target label is one of your top 5 predictions (the 5 ones with the highest probabilities).
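A minimal sketch of both metrics (array shapes and the function name are mine):

```python
import numpy as np

def topk_accuracy(probs, labels, k=1):
    # probs: (n_samples, n_classes) predicted probabilities;
    # labels: (n_samples,) integer target classes
    topk = np.argsort(probs, axis=1)[:, -k:]        # k highest-probability classes
    return (topk == labels[:, None]).any(axis=1).mean()

probs = np.array([[0.1, 0.6, 0.3],
                  [0.5, 0.2, 0.3]])
labels = np.array([2, 0])
print(topk_accuracy(probs, labels, k=1))  # 0.5 -> top-1 error rate 50%
print(topk_accuracy(probs, labels, k=2))  # 1.0 -> top-2 error rate 0%
```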
nibble  q-n-a  overflow  machine-learning  deep-learning  metrics  comparison  ranking  top-n  classification  computer-vision  benchmarks  dataset  accuracy  error  jargon 
june 2019 by nhaliday
soft question - What are good non-English languages for mathematicians to know? - MathOverflow
I'm with Deane here: I think learning foreign languages is not a very mathematically productive thing to do; of course, there are lots of good reasons to learn foreign languages, but doing mathematics is not one of them. Not only are there few modern mathematics papers written in languages other than English, but the primary other language they are written in (French) is pretty easy to read without actually knowing it.

Even though I've been to France several times, my spoken French mostly consists of "merci," "si vous plait," "d'accord" and some food words; I've still skimmed 100 page long papers in French without a lot of trouble.

If nothing else, think of reading a paper in French as a good opportunity to teach Google Translate some mathematical French.
q-n-a  overflow  math  academia  learning  foreign-lang  publishing  science  french  soft-question  math.AG  nibble  quixotic  comparison  language  china  asia  trends 
february 2019 by nhaliday
orbit - Best approximation for Sun's trajectory around galactic center? - Astronomy Stack Exchange
The Sun orbits in the Galactic potential. The motion is complex; it takes about 230 million years to make a circuit with an orbital speed of around 220 km/s, but at the same time it oscillates up and down with respect to the Galactic plane every ∼70 million years and also wobbles in and out every ∼150 million years (this is called epicyclic motion). The spatial amplitudes of these oscillations are around 100 pc vertically and 300 pc in the radial direction inwards and outwards around an average orbital radius (I am unable to locate a precise figure for the latter).
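The quoted numbers cross-check, assuming a solar orbital radius of ~8 kpc (my figure; the answer doesn't state one):

```python
import math

pc_km = 3.086e13                 # kilometres per parsec
R = 8_000 * pc_km                # assumed orbital radius, ~8 kpc
v = 220                          # orbital speed, km/s
T_years = 2 * math.pi * R / v / 3.156e7
print(f"{T_years / 1e6:.0f} Myr")   # ~220 Myr, consistent with "about 230"
```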
nibble  q-n-a  overflow  space  oscillation  time  cycles  spatial  trivia  manifolds 
december 2017 by nhaliday
light - Why doesn't the moon twinkle? - Astronomy Stack Exchange
As you mention, when light enters our atmosphere, it goes through several parcels of gas with varying density, temperature, pressure, and humidity. These differences make the refractive index of the parcels different, and since they move around (the scientific term for air moving around is "wind"), the light rays take slightly different paths through the atmosphere.

Stars are point sources
…the Moon is not
nibble  q-n-a  overflow  space  physics  trivia  cocktail  navigation  sky  visuo  illusion  measure  random  electromag  signal-noise  flux-stasis  explanation  explanans  magnitude  atmosphere  roots 
december 2017 by nhaliday
galaxy - How do astronomers estimate the total mass of dust in clouds and galaxies? - Astronomy Stack Exchange
Dust absorbs stellar light (primarily in the ultraviolet), and is heated up. Subsequently it cools by emitting infrared, "thermal" radiation. Assuming a dust composition and grain size distribution, the amount of emitted IR light per unit dust mass can be calculated as a function of temperature. Observing the object at several different IR wavelengths, a Planck curve can be fitted to the data points, yielding the dust temperature. The more UV light incident on the dust, the higher the temperature.

The result is somewhat sensitive to the assumptions, and thus the uncertainties are sometimes quite large. The more IR data points obtained, the better. If only one IR point is available, the temperature cannot be calculated. Then there's a degeneracy between incident UV light and the amount of dust, and the mass can only be estimated to within some orders of magnitude (I think).
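A toy version of the fitting step described here, assuming a plain Planck curve and made-up flux points (real fits use a modified blackbody with an emissivity index β; the amplitude A below stands in for dust mass × emissivity):

```python
import numpy as np
from scipy.optimize import curve_fit

h, c, kB = 6.626e-34, 2.998e8, 1.381e-23   # SI constants

def planck(lam, T, A):
    # A is a free amplitude standing in for dust mass x emissivity
    return A * (2 * h * c**2 / lam**5) / np.expm1(h * c / (lam * kB * T))

lam = np.array([70e-6, 160e-6, 250e-6, 350e-6, 500e-6])   # hypothetical IR bands (m)
rng = np.random.default_rng(1)
flux = planck(lam, 20.0, 1e-12) * (1 + 0.05 * rng.standard_normal(lam.size))

(T_fit, A_fit), _ = curve_fit(planck, lam, flux, p0=(15.0, 1e-12))
print(f"fitted dust temperature: {T_fit:.1f} K")   # recovers ~20 K
```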
nibble  q-n-a  overflow  space  measurement  measure  estimate  physics  electromag  visuo  methodology 
december 2017 by nhaliday
general relativity - What if the universe is rotating as a whole? - Physics Stack Exchange
To find out whether the universe is rotating, in principle the most straightforward test is to watch the motion of a gyroscope relative to the distant galaxies. If it rotates at an angular velocity -ω relative to them, then the universe is rotating at angular velocity ω. In practice, we do not have mechanical gyroscopes with small enough random and systematic errors to put a very low limit on ω. However, we can use the entire solar system as a kind of gyroscope. Solar-system observations put a model-independent upper limit of 10^-7 radians/year on the rotation,[Clemence 1957] which is an order of magnitude too lax to rule out the Gödel metric.
nibble  q-n-a  overflow  physics  relativity  gedanken  direction  absolute-relative  big-picture  space  experiment  measurement  volo-avolo 
november 2017 by nhaliday
What is the connection between special and general relativity? - Physics Stack Exchange
Special relativity is the "special case" of general relativity where spacetime is flat. The speed of light is essential to both.
nibble  q-n-a  overflow  physics  relativity  explanation  synthesis  hi-order-bits  ground-up  gravity  summary  aphorism  differential  geometry 
november 2017 by nhaliday
gn.general topology - Pair of curves joining opposite corners of a square must intersect---proof? - MathOverflow
In his 'Ordinary Differential Equations' (sec. 1.2) V.I. Arnold says "... every pair of curves in the square joining different pairs of opposite corners must intersect".

This is obvious geometrically but I was wondering how one could go about proving this rigorously. I have thought of a proof using Brouwer's Fixed Point Theorem which I describe below. I would greatly appreciate the group's comments on whether this proof is right and if a simpler proof is possible.

...

Since the full Jordan curve theorem is quite subtle, it might be worth pointing out that theorem in question reduces to the Jordan curve theorem for polygons, which is easier.

Suppose on the contrary that the curves A, B joining opposite corners do not meet. Since A, B are closed sets, their minimum distance apart is some ε > 0. By compactness, each of A, B can be partitioned into finitely many arcs, each of which lies in a disk of diameter < ε/3. Then, by a homotopy inside each disk we can replace A, B by polygonal paths A′, B′ that join the opposite corners of the square and are still disjoint.

Also, we can replace A′, B′ by simple polygonal paths A″, B″ by omitting loops. Now we can close A″ to a polygon, and B″ goes from its "inside" to "outside" without meeting it, contrary to the Jordan curve theorem for polygons.

- John Stillwell
nibble  q-n-a  overflow  math  geometry  topology  tidbits  intricacy  intersection  proofs  gotchas  oly  mathtariat  fixed-point  math.AT  manifolds  intersection-connectedness 
october 2017 by nhaliday
multivariate analysis - Is it possible to have a pair of Gaussian random variables for which the joint distribution is not Gaussian? - Cross Validated
The bivariate normal distribution is the exception, not the rule!

It is important to recognize that "almost all" joint distributions with normal marginals are not the bivariate normal distribution. That is, the common viewpoint that joint distributions with normal marginals that are not the bivariate normal are somehow "pathological", is a bit misguided.

Certainly, the multivariate normal is extremely important due to its stability under linear transformations, and so receives the bulk of attention in applications.

note: there is a multivariate central limit theorem, so such applications have no problem
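The standard counterexample, sketched: flip X's sign outside a threshold c. By symmetry Y remains exactly N(0,1), but X + Y equals 0 whenever |X| > c, a point mass no bivariate normal can have:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(200_000)
c = 1.54                                  # any c > 0 works
y = np.where(np.abs(x) <= c, x, -x)       # marginally still N(0,1)

print(np.mean(x + y == 0))                # ~0.12: atom at zero in X + Y
print(y.mean(), y.var())                  # ~0, ~1, as a normal marginal should be
```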
nibble  q-n-a  overflow  stats  math  acm  probability  distribution  gotchas  intricacy  characterization  structure  composition-decomposition  counterexample  limits  concentration-of-measure 
october 2017 by nhaliday
Karl Pearson and the Chi-squared Test
Pearson's paper of 1900 introduced what subsequently became known as the chi-squared test of goodness of fit. The terminology and allusions of 80 years ago create a barrier for the modern reader, who finds that the interpretation of Pearson's test procedure and the assessment of what he achieved are less than straightforward, notwithstanding the technical advances made since then. An attempt is made here to surmount these difficulties by exploring Pearson's relevant activities during the first decade of his statistical career, and by describing the work by his contemporaries and predecessors which seem to have influenced his approach to the problem. Not all the questions are answered, and others remain for further study.

original paper: http://www.economics.soton.ac.uk/staff/aldrich/1900.pdf

How did Karl Pearson come up with the chi-squared statistic?: https://stats.stackexchange.com/questions/97604/how-did-karl-pearson-come-up-with-the-chi-squared-statistic
He proceeds by working with the multivariate normal, and the chi-square arises as a sum of squared standardized normal variates.

You can see from the discussion on p160-161 he's clearly discussing applying the test to multinomial distributed data (I don't think he uses that term anywhere). He apparently understands the approximate multivariate normality of the multinomial (certainly he knows the margins are approximately normal - that's a very old result - and knows the means, variances and covariances, since they're stated in the paper); my guess is that most of that stuff is already old hat by 1900. (Note that the chi-squared distribution itself dates back to work by Helmert in the mid-1870s.)

Then by the bottom of p163 he derives a chi-square statistic as "a measure of goodness of fit" (the statistic itself appears in the exponent of the multivariate normal approximation).

He then goes on to discuss how to evaluate the p-value*, and then he correctly gives the upper tail area of a χ² with 12 degrees of freedom beyond 43.87 as 0.000016. [You should keep in mind, however, that he didn't correctly understand how to adjust degrees of freedom for parameter estimation at that stage, so some of the examples in his papers use too high a d.f.]
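Pearson's tail value checks out with modern software (one line, assuming SciPy):

```python
from scipy.stats import chi2

print(chi2.sf(43.87, df=12))   # ~1.6e-05, matching Pearson's 0.000016
```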
nibble  papers  acm  stats  hypothesis-testing  methodology  history  mostly-modern  pre-ww2  old-anglo  giants  science  the-trenches  stories  multi  q-n-a  overflow  explanation  summary  innovation  discovery  distribution  degrees-of-freedom  limits 
october 2017 by nhaliday
self study - Looking for a good and complete probability and statistics book - Cross Validated
I never had the opportunity to visit a stats course from a math faculty. I am looking for a probability theory and statistics book that is complete and self-sufficient. By complete I mean that it contains all the proofs and not just states results.
nibble  q-n-a  overflow  data-science  stats  methodology  books  recommendations  list  top-n  confluence  proofs  rigor  reference  accretion 
october 2017 by nhaliday
Variance of product of multiple random variables - Cross Validated
for independent X_i:
var[prod_i X_i] = prod_i (var[X_i] + (E[X_i])^2) - prod_i (E[X_i])^2

two-variable case: var[XY] = var[X] var[Y] + var[X] (E[Y])^2 + (E[X])^2 var[Y]
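A quick Monte Carlo check of the identity (note it needs independent factors):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(2.0, 1.5, 1_000_000)
y = rng.gamma(3.0, 0.5, 1_000_000)   # independent of x

lhs = (x * y).var()
rhs = (x.var() + x.mean()**2) * (y.var() + y.mean()**2) - (x.mean() * y.mean())**2
print(lhs, rhs)   # agree to a few tenths of a percent
```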
nibble  q-n-a  overflow  stats  probability  math  identity  moments  arrows  multiplicative  iidness  dependence-independence 
october 2017 by nhaliday
newtonian gravity - Newton's original proof of gravitation for non-point-mass objects - Physics Stack Exchange
This theorem is Proposition LXXI, Theorem XXXI in the Principia. To warm up, consider the more straightforward proof of the preceding theorem, that there's no inverse-square force inside of a spherical shell:

picture

The crux of the argument is that the triangles HPI and LPK are similar. The mass enclosed in the small-but-near patch of sphere HI goes like the square of the distance HP, while the mass enclosed in the large-but-far patch of sphere KL, with the same solid angle, goes like the square of the distance KP. This mass ratio cancels out the distance-squared ratio governing the strength of the force, and so the net force from those two patches vanishes.

For a point mass outside a shell, Newton's approach is essentially the same as the modern approach:

picture

One integral is removed because we're considering a thin spherical shell rather than a solid sphere. The second integral, "as the semi-circle AKB revolves about the diameter AB," trivially turns Newton's infinitesimal arcs HI and KL into annuli.

The third integral is over all the annuli in the sphere, over 0 ≤ φ ≤ τ/2 or over R − r ≤ s ≤ R + r. This one is a little bit hairy, even with the advantage of modern notation.

Newton's clever trick is to consider the relationship between the force due to the smaller, nearer annulus HI and the larger, farther annulus KL defined by the same viewing angle (in modern notation, dθ). If I understand correctly he argues again, based on lots of similar triangles with infinitesimal angles, that the smaller-but-nearer annulus and the larger-but-farther annulus exert the same force at P. Furthermore, he shows that the force doesn't depend on the distance PF, and thus doesn't depend on the radius of the sphere; the only parameter left is the distance PS (squared) between the particle and the sphere's center. Since the argument doesn't depend on the angle HPS, it's true for all the annuli, and the theorem is proved.
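Newton's pairing of near and far annuli is easy to check against the modern integral (my parameterization; G = 1, unit shell mass): the net pull from a shell of radius r on an external point at distance d comes out to exactly 1/d²:

```python
import numpy as np
from scipy.integrate import quad

def shell_force(d, r):
    # sum the axial pull of each annulus at polar angle phi; the annulus
    # carries mass (1/2) sin(phi) dphi for a unit-mass shell
    integrand = lambda phi: (0.5 * np.sin(phi) * (d - r * np.cos(phi))
                             / (d**2 + r**2 - 2 * d * r * np.cos(phi)) ** 1.5)
    return quad(integrand, 0, np.pi)[0]

print(shell_force(2.0, 1.0), 1 / 2.0**2)   # both 0.25
print(shell_force(0.5, 1.0))               # ~0 inside the shell: the warm-up theorem
```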
nibble  q-n-a  overflow  giants  old-anglo  the-trenches  physics  mechanics  gravity  proofs  symmetry  geometry  spatial 
september 2017 by nhaliday
electricity - Why is AC more "dangerous" than DC? - Physics Stack Exchange
One of the reasons that AC might be considered more dangerous is that it arguably has more ways of getting into your body. Since the voltage alternates, it can cause current to enter and exit your body even without a closed loop, since your body (and what ground it's attached to) has capacitance. DC cannot do that. Also, AC is quite easily stepped up to higher voltages using transformers, while with DC that requires some relatively elaborate electronics. Finally, while your skin has a fairly high resistance to protect you, and the air is also a terrific insulator as long as you're not touching any wires, sometimes the inductance of AC transformers can cause high-voltage sparks that break down the air and I imagine can get through your skin a bit as well.

Also, like you mentioned, the heart is controlled by electric pulses and repeated pulses of electricity can throw this off quite a bit and cause a heart attack. However, I don't think that this is unique to alternating current. I read once about an unfortunate young man that was learning about electricity and wanted to measure the resistance of his own body. He took a multimeter and set a lead to each thumb. By accident or by stupidity, he punctured both thumbs with the leads, and the small (I imagine it to be 9 V) battery in the multimeter caused a current in his bloodstream, and he died on the spot. So maybe ignorance is more dangerous than either AC or DC.
nibble  q-n-a  overflow  physics  electromag  dirty-hands  embodied  safety  short-circuit  IEEE  death 
september 2017 by nhaliday
Why is Earth's gravity stronger at the poles? - Physics Stack Exchange
The point is that if we approximate Earth with an oblate ellipsoid, then the surface of Earth is an equipotential surface; see e.g. this Phys.SE post.

Now, because the polar radius is smaller than the equatorial radius, the density of equipotential surfaces at the poles must be bigger than at the equator.

Or equivalently, the field strength g at the poles must be bigger than at the equator.
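Orders of magnitude, with standard values (mine, not the answer's): the measured pole-equator difference is ~0.05 m/s², of which the centrifugal term missing at the poles accounts for ~0.03, the rest coming from the poles sitting closer to the centre:

```python
g_pole, g_eq = 9.832, 9.780      # m/s^2, standard measured values
omega = 7.292e-5                 # Earth's rotation rate, rad/s
R_eq = 6.378e6                   # equatorial radius, m

centrifugal = omega**2 * R_eq    # ~0.034 m/s^2 at the equator
print(g_pole - g_eq, centrifugal, g_pole - g_eq - centrifugal)
```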
nibble  q-n-a  overflow  physics  mechanics  gravity  earth  space  intricacy  explanation  tidbits  spatial  direction  nitty-gritty  geography 
september 2017 by nhaliday
diffusion - Surviving under water in air bubble - Physics Stack Exchange
I get d ≈ 400 m.

It's interesting to note that this is independent of pressure: I've neglected the pressure dependence of D and of human resilience to carbon dioxide, and the maximum safe concentration of carbon dioxide is independent of pressure, just derived from measurements at STP.

Finally, a bubble this large will probably rapidly break up due to buoyancy and Plateau-Rayleigh instabilities.
nibble  q-n-a  overflow  physics  mechanics  h2o  safety  short-circuit  tidbits  gedanken  fluid  street-fighting 
august 2017 by nhaliday
rotational dynamics - Why do non-rigid bodies try to increase their moment of inertia? - Physics Stack Exchange
This happens to an isolated rotating system that is not a rigid body.

Inside such a body (for example, a steel chain in free fall) the parts move relative to each other and there is internal friction that dissipates kinetic energy of the system, while angular momentum is conserved. The dissipation goes on until the parts stop moving with respect to each other, so the body rotates as a rigid body, even if it is not rigid by constitution.

The rotating state of the body that has the lowest kinetic energy for given angular momentum is that in which the body has the greatest moment of inertia (with respect to center of mass). For example, a long chain thrown into free fall will twist and turn until it is all straight and rotating as rigid body.

...

If L is constant (net torque of external forces acting on the system is zero) and the constitution and initial conditions allow it, the system's dissipation will work to diminish energy until it has the minimum value, which happens for the maximum I_a possible.
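(The one-line version: for rotation about a principal axis E = L²/(2I), so at fixed L the minimum-energy configuration is the one with the largest moment of inertia.)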
nibble  q-n-a  overflow  physics  mechanics  tidbits  spatial  rigidity  flexibility  invariance  direction  stylized-facts  dynamical  volo-avolo  street-fighting  yoga 
august 2017 by nhaliday
gravity - Gravitational collapse and free fall time (spherical, pressure-free) - Physics Stack Exchange
the parenthetical regarding Gauss's law just involves noting a shell of radius r + symmetry (so a single parameter determines the field along the shell)
nibble  q-n-a  overflow  physics  mechanics  gravity  tidbits  time  phase-transition  symmetry  differential  identity  dynamical 
august 2017 by nhaliday
co.combinatorics - Classification of Platonic solids - MathOverflow
My question is very basic: where can I find a complete (and hopefully self-contained) proof of the classification of Platonic solids? In all the references that I found, they use Euler's formula v − e + f = 2 to show that there are exactly five possible triples (v, e, f). But of course this is not a complete proof because it does not rule out the possibility of different configurations or deformations. Has anyone ever written up a complete proof of this statement?!

...

This is a classical question. Here is my reading of it: Why is there a unique polytope with given combinatorics of faces, which are all regular polygons? Of course, for simple polytopes (tetrahedron, cube, dodecahedron) this is clear, but for the octahedron and icosahedron this is less clear.

The answer lies in Cauchy's theorem. It was Legendre who, while writing his Elements of Geometry and Trigonometry, noticed that Euclid's proof is incomplete in the Elements. Curiously, Euclid finds both radii of the inscribed and circumscribed spheres (correctly) without ever explaining why they exist. Cauchy worked out a proof while still a student in 1813, more or less specifically for this purpose. The proof also had a technical gap which was found and patched up by Steinitz in the 1920s.

The complete (corrected) proof can be found in the celebrated Proofs from the Book, or in Marcel Berger's Geometry. My book gives a bit more of historical context and some soft arguments (ch. 19). It's worth comparing this proof with (an erroneous) pre-Steinitz exposition, say in Hadamard's Leçons de Géométrie Elémentaire II, or with an early post-Steinitz correct but tedious proof given in (otherwise, excellent) Alexandrov's monograph (see also ch.26 in my book which compares all the approaches).

P.S. Note that Coxeter in Regular Polytopes can completely avoid this issue but taking a different (modern) definition of the regular polytopes (which are symmetric under group actions). For a modern exposition and the state of art of this approach, see McMullen and Schulte's Abstract Regular Polytopes.

https://en.wikipedia.org/wiki/Platonic_solid#Classification
https://mathoverflow.net/questions/46502/on-the-number-of-archimedean-solids
q-n-a  overflow  math  topology  geometry  math.CO  history  iron-age  mediterranean  the-classics  multi  curiosity  clarity  proofs  nibble  wiki  reference  characterization  uniqueness  list  ground-up  grokkability-clarity 
july 2017 by nhaliday