
nhaliday : differential   65

The Existential Risk of Math Errors - Gwern.net
How big is this upper bound? Mathematicians have often made errors in proofs, but it's rarer for ideas to be accepted for a long time and then rejected. We can divide errors into 2 basic cases, corresponding to type I and type II errors:

1. Mistakes where the theorem is still true, but the proof was incorrect (type I)
2. Mistakes where the theorem was false, and the proof was also necessarily incorrect (type II)

Before someone comes up with a final answer, a mathematician may have many levels of intuition in formulating & working on the problem, but we’ll consider the final end-product where the mathematician feels satisfied that he has solved it. Case 1 is perhaps the most common case, with innumerable examples; this is sometimes due to mistakes in the proof that anyone would accept as a mistake, but many of these cases are due to changing standards of proof. For example, when David Hilbert discovered errors in Euclid’s proofs which no one noticed before, the theorems were still true, and the gaps were more due to Hilbert being a modern mathematician thinking in terms of formal systems (which of course Euclid did not think in). (David Hilbert himself turns out to be a useful example of the other kind of error: his famous list of 23 problems was accompanied by definite opinions on the outcome of each problem and sometimes timings, several of which were wrong or questionable.) Similarly, early calculus used ‘infinitesimals’ which were sometimes treated as being 0 and sometimes treated as an indefinitely small non-zero number; this was incoherent and, strictly speaking, practically all of the calculus results were wrong because they relied on an incoherent concept - but of course the results were some of the greatest mathematical work ever conducted, and when later mathematicians put calculus on a more rigorous footing, they immediately re-derived those results (sometimes with important qualifications); as modern math has evolved, other fields have likewise needed to go back and clean up their foundations, and doubtless will in the future.

...

Isaac Newton, incidentally, gave two proofs of the same solution to a problem in probability, one via enumeration and the other more abstract; the enumeration was correct, but the other proof was totally wrong, and this was not noticed for a long time, leading Stigler to remark:

...

TYPE I > TYPE II?
“Lefschetz was a purely intuitive mathematician. It was said of him that he had never given a completely correct proof, but had never made a wrong guess either.”
- Gian-Carlo Rota

Case 2 is disturbing, since it is a case in which we wind up with false beliefs and also false beliefs about our beliefs (we no longer know that we don’t know). Case 2 could lead to extinction.

...

Except, errors do not seem to be evenly & randomly distributed between case 1 and case 2. There seem to be far more case 1s than case 2s, as already mentioned in the early calculus example: far more than 50% of the early calculus results were correct when checked more rigorously. Richard Hamming attributes to Ralph Boas a comment, from his time editing Mathematical Reviews, that “of the new results in the papers reviewed most are true but the corresponding proofs are perhaps half the time plain wrong”.

...

Gian-Carlo Rota gives us an example with Hilbert:

...

Olga labored for three years; it turned out that all mistakes could be corrected without any major changes in the statement of the theorems. There was one exception, a paper Hilbert wrote in his old age, which could not be fixed; it was a purported proof of the continuum hypothesis; you will find it in a volume of the Mathematische Annalen of the early thirties.

...

Leslie Lamport advocates for machine-checked proofs and a more rigorous style of proof similar to natural deduction, noting that a mathematician acquaintance guesses at a broad error rate of 1/3 and that he routinely found mistakes in his own proofs and, worse, believed false conjectures.

[more on these "structured proofs":
https://academia.stackexchange.com/questions/52435/does-anyone-actually-publish-structured-proofs
https://mathoverflow.net/questions/35727/community-experiences-writing-lamports-structured-proofs
]
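[a toy example of the hierarchical, numbered style Lamport advocates (my own sketch, not from his papers), proving √2 irrational:

    1. ASSUME: √2 = p/q for coprime positive integers p, q.
       PROVE:  FALSE
      1.1. p² = 2q²
           PROOF: square the assumption and clear denominators.
      1.2. p is even
           PROOF: p² is even by 1.1, and the square of an odd number is odd.
      1.3. q is even
           PROOF: by 1.2 write p = 2r; then 1.1 gives 4r² = 2q², i.e. q² = 2r².
      1.4. QED
           PROOF: 1.2 and 1.3 contradict the coprimality of p and q.]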

We can probably add software to that list: early software engineering work found that, dismayingly, bug rates seem to be simply a function of lines of code, and one would expect diseconomies of scale. So one would expect that in going from the ~4,000 lines of code of the Microsoft DOS operating system kernel to the ~50,000,000 lines of code in Windows Server 2003 (with full systems of applications and libraries being even larger: the comprehensive Debian repository in 2007 contained ~323,551,126 lines of code), the number of active bugs at any time would be… fairly large. Mathematical software is hopefully better, but practitioners still run into issues (eg Durán et al 2014, Fonseca et al 2017) and I don’t know of any research pinning down how buggy key mathematical systems like Mathematica are or how much published mathematics may be erroneous due to bugs. This general problem led to predictions of doom and spurred much research into automated proof-checking, static analysis, and functional languages.

[related:
https://mathoverflow.net/questions/11517/computer-algebra-errors
I don't know any interesting bugs in symbolic algebra packages but I know a true, enlightening and entertaining story about something that looked like a bug but wasn't.

Define $\operatorname{sinc}(x) = \sin(x)/x$.

Someone found the following result in an algebra package: $\int_0^\infty \operatorname{sinc}(x)\,dx = \pi/2$
They then found the following results:

...

So of course when they got:

$$\int_0^\infty \operatorname{sinc}(x)\,\operatorname{sinc}(x/3)\,\operatorname{sinc}(x/5)\cdots\operatorname{sinc}(x/15)\,dx = \frac{467807924713440738696537864469}{935615849440640907310521750000}\,\pi$$

hmm:
Which means that nobody knows Fourier analysis nowadays. Very sad and discouraging story... – fedja Jan 29 '10 at 18:47
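[a quick sanity check (mine, not from the thread), assuming Python with sympy installed: these are the Borwein integrals, where the π/2 pattern holds through sinc(x/13) and first fails at sinc(x/15); sympy reproduces the base integral exactly, and exact rational arithmetic confirms the reported coefficient really does fall just barely short of 1/2, so the package was right:

    from fractions import Fraction
    from sympy import symbols, sin, integrate, oo

    x = symbols('x')
    # base case of the pattern: integral of sin(x)/x over (0, oo) is pi/2
    print(integrate(sin(x)/x, (x, 0, oo)))  # -> pi/2
    # the coefficient of pi reported at the sinc(x/15) stage:
    c = Fraction(467807924713440738696537864469,
                 935615849440640907310521750000)
    print(float(c - Fraction(1, 2)))  # -> roughly -7.4e-12, just under 1/2
]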

--

Because the most popular systems are all commercial, they tend to guard their bug database rather closely -- making them public would seriously cut their sales. For example, for the open source project Sage (which is quite young), you can get a list of all the known bugs from this page. 1582 known issues on Feb. 16th, 2010 (which includes feature requests, problems with documentation, etc).

That is an order of magnitude less than the commercial systems. And it's not because it is better, it is because it is younger and smaller. It might be better, but until Sage does a lot of analysis (about 40% of CAS bugs are there) and a fancy user interface (another 40%), it is too hard to compare.

I once ran a graduate course whose core topic was studying the fundamental disconnect between the algebraic nature of CAS and the analytic nature of what it is mostly used for. There are issues of logic -- CASes work more or less in an intensional logic, while most of analysis is stated in a purely extensional fashion. There is no well-defined 'denotational semantics' for expressions-as-functions, which strongly contributes to the deeper bugs in CASes.]
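[a tiny illustration of that disconnect (my own, assuming Python with sympy): cancellation is correct for algebraic expressions, but the resulting function disagrees with the original at a point:

    from sympy import symbols, cancel

    x = symbols('x')
    expr = (x**2 - 1) / (x - 1)
    print(cancel(expr))     # -> x + 1: equal as rational expressions
    print(expr.subs(x, 1))  # -> nan: as a function, the original is 0/0 at x = 1
]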

...

Should such widely-believed conjectures as P≠NP or the Riemann hypothesis turn out to be false, then, because they are assumed by so many existing proofs, a far larger math holocaust would ensue - and our previous estimates of error rates will turn out to have been substantial underestimates. But it may be a cloud with a silver lining, if it doesn’t come at a time of danger.

https://mathoverflow.net/questions/338607/why-doesnt-mathematics-collapse-down-even-though-humans-quite-often-make-mista

more on formal methods in programming:
https://www.quantamagazine.org/formal-verification-creates-hacker-proof-code-20160920/
https://intelligence.org/2014/03/02/bob-constable/

https://softwareengineering.stackexchange.com/questions/375342/what-are-the-barriers-that-prevent-widespread-adoption-of-formal-methods
Update: measured effort
In the October 2018 issue of Communications of the ACM there is an interesting article about Formally verified software in the real world with some estimates of the effort.

Interestingly (based on OS development for military equipment), it seems that producing formally proved software requires 3.3 times more effort than with traditional engineering techniques. So it's really costly.

On the other hand, it requires 2.3 times less effort to get high security software this way than with traditionally engineered software if you add the effort to make such software certified at a high security level (EAL 7). So if you have high reliability or security requirements there is definitively a business case for going formal.

WHY DON'T PEOPLE USE FORMAL METHODS?: https://www.hillelwayne.com/post/why-dont-people-use-formal-methods/
You can see examples of how all of these look at Let’s Prove Leftpad. HOL4 and Isabelle are good examples of “independent theorem” specs, SPARK and Dafny have “embedded assertion” specs, and Coq and Agda have “dependent type” specs.

If you squint a bit it looks like these three forms of code spec map to the three main domains of automated correctness checking: tests, contracts, and types. This is not a coincidence. Correctness is a spectrum, and formal verification is one extreme of that spectrum. As we reduce the rigour (and effort) of our verification we get simpler and narrower checks, whether that means limiting the explored state space, using weaker types, or pushing verification to the runtime. Any means of total specification then becomes a means of partial specification, and vice versa: many consider Cleanroom a formal verification technique, which primarily works by pushing code review far beyond what’s humanly possible.
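[to make the tests/contracts/types mapping concrete, a minimal sketch (mine, not from the post), with each rung of the spectrum marked in comments; the leftpad spec follows Let's Prove Leftpad:

    def leftpad(s: str, pad: str, n: int) -> str:  # types: constrain the shapes
        out = pad * max(n - len(s), 0) + s  # pad assumed to be a single character
        # contracts: runtime assertions of the leftpad spec
        assert len(out) == max(n, len(s))
        assert out.endswith(s)
        assert set(out[:len(out) - len(s)]) <= {pad}
        return out

    # tests: spot-check the spec on a few inputs
    assert leftpad("foo", "!", 5) == "!!foo"
    assert leftpad("foobar", "!", 3) == "foobar"
]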

...

The question, then: “is 90/95/99% correct significantly cheaper than 100% correct?” The answer is very yes. We all are comfortable saying that a codebase we’ve well-tested and well-typed is mostly correct modulo a few fixes in prod, and we’re even writing more than four lines of code a day. In fact, the vast…
ratty  gwern  analysis  essay  realness  truth  correctness  reason  philosophy  math  proofs  formal-methods  cs  programming  engineering  worse-is-better/the-right-thing  intuition  giants  old-anglo  error  street-fighting  heuristic  zooming  risk  threat-modeling  software  lens  logic  inference  physics  differential  geometry  estimate  distribution  robust  speculation  nonlinearity  cost-benefit  convexity-curvature  measure  scale  trivia  cocktail  history  early-modern  europe  math.CA  rigor  news  org:mag  org:sci  miri-cfar  pdf  thesis  comparison  examples  org:junk  q-n-a  stackex  pragmatic  tradeoffs  cracker-prog  techtariat  invariance  DSL  chart  ecosystem  grokkability  heavyweights  CAS  static-dynamic  lower-bounds  complexity  tcs  open-problems  big-surf  ideas  certificates-recognition  proof-systems  PCP  mediterranean  SDP  meta:prediction  epistemic  questions  guessing  distributed  overflow  nibble  soft-question  track-record  big-list  hmm  frontier  state-of-art  move-fast-(and-break-things)  grokkability-clarity  technical-writing  trust 
july 2019 by nhaliday
ON THE GEOMETRY OF NASH EQUILIBRIA AND CORRELATED EQUILIBRIA
Abstract: It is well known that the set of correlated equilibrium distributions of an n-player noncooperative game is a convex polytope that includes all the Nash equilibrium distributions. We demonstrate an elementary yet surprising result: the Nash equilibria all lie on the boundary of the polytope.
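[for reference, the polytope in question (standard definition, not from the paper): a distribution $p$ over action profiles is a correlated equilibrium iff, for every player $i$ and every pair of actions $a_i, a_i'$,

$$\sum_{a_{-i}} p(a_i, a_{-i})\,\bigl[u_i(a_i, a_{-i}) - u_i(a_i', a_{-i})\bigr] \ge 0,$$

together with $p \ge 0$ and $\sum_a p(a) = 1$: finitely many linear inequalities, hence a convex polytope. The result says every Nash equilibrium distribution sits on that polytope's boundary, never in its interior.]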
pdf  nibble  papers  ORFE  game-theory  optimization  geometry  dimensionality  linear-algebra  equilibrium  structure  differential  correlation  iidness  acm  linear-programming  spatial  characterization  levers 
may 2019 by nhaliday
Antinomia Imediata – experiments in a reaction from the left
https://antinomiaimediata.wordpress.com/lrx/
So, what is the Left Reaction? First of all, it’s reaction: opposition to the modern rationalist establishment, the Cathedral. It opposes the universalist Jacobin program of global government, favoring a fractured geopolitics organized through long-evolved complex systems. It’s profoundly anti-socialist and anti-communist, favoring market economy and individualism. It abhors tribalism and seeks a realistic plan for dismantling it (primarily informed by HBD and HBE). It looks at modernity as a degenerative ratchet, whose only way out is intensification (hence clinging to crypto-marxist market-driven acceleration).

How can any of this still be on the *Left*? It defends equality of power, i.e. freedom. This radical understanding of liberty is deeply rooted in leftist tradition and has been consistently abhorred by the Right. LRx is not democratic, is not socialist, is not progressive, and is not even liberal (in its current, American use). But it defends equality of power. Its utopia is individual sovereignty. Its method is paleo-agorism. The anti-hierarchy of hunter-gatherer nomads is its understanding of the only realistic objective of equality.

...

In more cosmic terms, it seeks only to fulfill the Revolution’s side in the left-right intelligence pump: mutation or creation of paths. Proudhon’s antinomy is essentially about this: the collective force of the socius, evinced in moral standards and social organization, vs. the creative force of the individuals, who constantly revolutionize and disrupt the social body. The interplay of these forces creates reality (it’s a metaphysics indeed): the Absolute (socius) builds so that the (individualistic) Revolution can destroy so that the Absolute may adapt, and then repeat. The good old formula of ‘solve et coagula’.

Ultimately, if the Neoreaction promises eternal hell, the LRx sneers “but Satan is with us”.

https://antinomiaimediata.wordpress.com/2016/12/16/a-statement-of-principles/
Liberty is to be understood as the ability and right of all sentient beings to dispose of their persons and the fruits of their labor, and nothing else, as they see fit. This stems from their self-awareness and their ability to control and choose the content of their actions.

...

Equality is to be understood as the state of no imbalance of power, that is, of no subjection to another sentient being. This stems from their universal ability for empathy, and from their equal ability for reason.

...

It is important to notice that, contrary to usual statements of these two principles, my standpoint is that Liberty and Equality here are not merely compatible, meaning they could coexist in some possible universe, but rather they are two sides of the same coin, complementary and interdependent. There can be NO Liberty where there is no Equality, for the imbalance of power, the state of subjection, will render sentient beings unable to dispose of their persons and the fruits of their labor[1], and it will limit their ability to choose over their rightful jurisdiction. Likewise, there can be NO Equality without Liberty, for restraining sentient beings’ ability to choose and dispose of their persons and fruits of labor will render some more powerful than the rest, and establish a state of subjection.

https://antinomiaimediata.wordpress.com/2017/04/18/flatness/
equality is the founding principle of (and ultimately indistinguishable from) freedom. of course, it’s only in one specific sense of “equality” that this sentence is true.

to try and eliminate the bullshit, let’s turn to networks again:

a node’s degrees of freedom are the number of nodes it is connected to in a network. freedom is maximum when the network is symmetrically connected, i.e., when all nodes are connected to each other and thus there is no topographical hierarchy (middlemen) – in other words, flatness.

in this understanding, the maximization of freedom is the maximization of entropy production, that is, of intelligence. As Land puts it:

https://antinomiaimediata.wordpress.com/category/philosophy/mutualism/
gnon  blog  stream  politics  polisci  ideology  philosophy  land  accelerationism  left-wing  right-wing  paradox  egalitarianism-hierarchy  civil-liberty  power  hmm  revolution  analytical-holistic  mutation  selection  individualism-collectivism  tribalism  us-them  modernity  multi  tradeoffs  network-structure  complex-systems  cybernetics  randy-ayndy  insight  contrarianism  metameta  metabuch  characterization  cooperate-defect  n-factor  altruism  list  coordination  graphs  visual-understanding  cartoons  intelligence  entropy-like  thermo  information-theory  order-disorder  decentralized  distribution  degrees-of-freedom  analogy  graph-theory  extrema  evolution  interdisciplinary  bio  differential  geometry  anglosphere  optimate  nascent-state  deep-materialism  new-religion  cool  mystic  the-classics  self-interest  interests  reason  volo-avolo  flux-stasis  invariance  government  markets  paying-rent  cost-benefit  peace-violence  frontier  exit-voice  nl-and-so-can-you  war  track-record  usa  history  mostly-modern  world-war  military  justice  protestant-cathol 
march 2018 by nhaliday
What is the connection between special and general relativity? - Physics Stack Exchange
Special relativity is the "special case" of general relativity where spacetime is flat. The speed of light is essential to both.
nibble  q-n-a  overflow  physics  relativity  explanation  synthesis  hi-order-bits  ground-up  gravity  summary  aphorism  differential  geometry 
november 2017 by nhaliday
What is the difference between general and special relativity? - Quora
General Relativity is, quite simply, needed to explain gravity.

Special Relativity is the special case of GR, when the metric is flat — which means no gravity.

You need General Relativity when the metric gets all curvy, and when things start to experience gravitation.
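[concretely (standard formulas, my addition): the GR line element $ds^2 = g_{\mu\nu}\,dx^\mu dx^\nu$ reduces in the flat case to the Minkowski form

$$ds^2 = -c^2\,dt^2 + dx^2 + dy^2 + dz^2,$$

i.e. $g_{\mu\nu} = \eta_{\mu\nu}$; SR is GR restricted to this one fixed metric, with no curvature and hence no gravity.]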
nibble  q-n-a  qra  explanation  physics  relativity  synthesis  hi-order-bits  ground-up  gravity  summary  aphorism  differential  geometry 
november 2017 by nhaliday
gravity - Gravitational collapse and free fall time (spherical, pressure-free) - Physics Stack Exchange
the parenthetical regarding Gauss's law just involves noting a shell of radius r + symmetry (so single parameter determines field along shell)
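[for reference, the closed form this argument yields (textbook result, not quoted in the thread): each shell falls as in a point-mass problem with the mass interior to it, so for uniform initial density $\rho_0$ the collapse time is

$$t_{\mathrm{ff}} = \sqrt{\frac{3\pi}{32\,G\rho_0}},$$

independent of the shell's starting radius, which is why the pressure-free sphere collapses homologously.]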
nibble  q-n-a  overflow  physics  mechanics  gravity  tidbits  time  phase-transition  symmetry  differential  identity  dynamical 
august 2017 by nhaliday
Subgradients - S. Boyd and L. Vandenberghe
If f is convex and x ∈ int dom f, then ∂f(x) is nonempty and bounded. To establish that ∂f(x) ≠ ∅, we apply the supporting hyperplane theorem to the convex set epi f at the boundary point (x, f(x)), ...
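[the definition in play (standard, my gloss): $g \in \partial f(x)$ iff $f(y) \ge f(x) + g^{\mathsf{T}}(y - x)$ for all $y$. For example, with $f(x) = |x|$ on $\mathbb{R}$, $\partial f(0) = [-1, 1]$: every slope in that interval gives a line supporting the graph at the kink.]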
pdf  nibble  lecture-notes  acm  optimization  curvature  math.CA  estimate  linearity  differential  existence  proofs  exposition  atoms  math  marginal  convexity-curvature 
august 2017 by nhaliday
Lanchester's laws - Wikipedia
Lanchester's laws are mathematical formulae for calculating the relative strengths of a predator–prey pair, originally devised to analyse relative strengths of military forces.
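[the aimed-fire (“square law”) version, for concreteness (standard statement, not from the article): with force sizes $A, B$ and per-unit effectiveness $\alpha, \beta$,

$$\frac{dA}{dt} = -\beta B, \qquad \frac{dB}{dt} = -\alpha A,$$

so $\alpha A^2 - \beta B^2$ is conserved during the engagement: fighting strength scales with the square of numbers.]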
war  meta:war  models  plots  time  differential  street-fighting  methodology  strategy  tactics  wiki  reference  history  mostly-modern  pre-ww2  world-war  britain  old-anglo  giants  magnitude  arms  identity 
june 2017 by nhaliday
If there are 3 space dimensions and one time dimension, is it theoretically possible to have multiple time demensions and if so how would it work? : askscience
Yes, we can consider spacetimes with any number of temporal or spatial dimensions. The theory is set up essentially the same. Spacetime is modeled as a smooth n-dimensional manifold with a pseudo-Riemannian metric, and the metric satisfies the Einstein field equations (Einstein tensor = stress tensor).
A pseudo-Riemannian metric is characterized by its signature, i.e., the number of negative and positive terms in its quadratic form. The coordinates with negative forms correspond to temporal dimensions. (This is a convention that is fixed from the start.) In general relativity, spacetime is 4-dimensional, and the signature is (1,3), so there is 1 temporal dimension and 3 spatial dimensions.
Okay, so that's a lot of math, but it all basically means that, yes, it makes sense to ask questions like "what does a universe with 2 time dimensions and 3 spatial dimensions look like?" It turns out that spacetimes with more than 1 temporal dimension are very pathological. For one, initial value problems do not generally have unique solutions. There is also generally no canonical way to pick out 1 of the infinitely many solutions to the equations of physics. This means that predictability is impossible (e.g., how do you know which solution is the correct one?). Essentially, there is no meaningful physics in a spacetime with more than 1 temporal dimension.
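[for concreteness (standard notation, my addition): a signature-(2,3) metric gives the line element

$$ds^2 = -dt_1^2 - dt_2^2 + dx^2 + dy^2 + dz^2,$$

and the wave equation built from it is ultrahyperbolic, for which the Cauchy (initial-value) problem is not well-posed; that is one precise sense in which such spacetimes are pathological.]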
q-n-a  reddit  social  discussion  trivia  math  physics  relativity  curiosity  state  dimensionality  differential  geometry  gedanken  volo-avolo 
june 2017 by nhaliday
Archimedes Palimpsest - Wikipedia
Using this method, Archimedes was able to solve several problems now treated by integral calculus, which was given its modern form in the seventeenth century by Isaac Newton and Gottfried Leibniz. Among those problems were that of calculating the center of gravity of a solid hemisphere, the center of gravity of a frustum of a circular paraboloid, and the area of a region bounded by a parabola and one of its secant lines. (For explicit details, see Archimedes' use of infinitesimals.)

When rigorously proving theorems, Archimedes often used what are now called Riemann sums. In "On the Sphere and Cylinder," he gives upper and lower bounds for the surface area of a sphere by cutting the sphere into sections of equal width. He then bounds the area of each section by the area of an inscribed and circumscribed cone, which he proves have a larger and smaller area correspondingly. He adds the areas of the cones, which is a type of Riemann sum for the area of the sphere considered as a surface of revolution.

But there are two essential differences between Archimedes' method and 19th-century methods:

1. Archimedes did not know about differentiation, so he could not calculate any integrals other than those that came from center-of-mass considerations, by symmetry. While he had a notion of linearity, to find the volume of a sphere he had to balance two figures at the same time; he never figured out how to change variables or integrate by parts.

2. When calculating approximating sums, he imposed the further constraint that the sums provide rigorous upper and lower bounds. This was required because the Greeks lacked algebraic methods that could establish that error terms in an approximation are small.
big-peeps  history  iron-age  mediterranean  the-classics  innovation  discovery  knowledge  math  math.CA  finiteness  the-trenches  wiki  trivia  cocktail  stories  nibble  canon  differential 
may 2017 by nhaliday
Riemannian manifold - Wikipedia
In differential geometry, a (smooth) Riemannian manifold or (smooth) Riemannian space (M, g) is a real smooth manifold M equipped with an inner product g_p on the tangent space T_pM at each point p that varies smoothly from point to point, in the sense that if X and Y are vector fields on M, then p ↦ g_p(X(p), Y(p)) is a smooth function. The family g_p of inner products is called a Riemannian metric (tensor). These terms are named after the German mathematician Bernhard Riemann. The study of Riemannian manifolds constitutes the subject called Riemannian geometry.

A Riemannian metric (tensor) makes it possible to define various geometric notions on a Riemannian manifold, such as angles, lengths of curves, areas (or volumes), curvature, gradients of functions and divergence of vector fields.
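[a concrete example (standard, my addition): on the round sphere of radius $R$ with coordinates $(\theta, \varphi)$, the metric gives the line element

$$ds^2 = R^2\,(d\theta^2 + \sin^2\theta\,d\varphi^2),$$

and lengths of curves follow as $L(\gamma) = \int \sqrt{g(\gamma', \gamma')}\,dt$; angles, areas, and curvature come out of the same tensor.]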
concept  definition  math  differential  geometry  manifolds  inner-product  norms  measure  nibble 
february 2017 by nhaliday
Orthogonal — Greg Egan
In Yalda’s universe, light has no universal speed and its creation generates energy.

On Yalda’s world, plants make food by emitting their own light into the dark night sky.
greg-egan  fiction  gedanken  physics  electromag  differential  geometry  thermo  space  cool  curiosity  reading  exposition  init  stat-mech  waves  relativity  positivity  unit  wild-ideas  speed  gravity  big-picture  🔬  xenobio  ideas  scifi-fantasy  signum 
february 2017 by nhaliday
Sobolev space - Wikipedia
In mathematics, a Sobolev space is a vector space of functions equipped with a norm that is a combination of Lp-norms of the function itself and its derivatives up to a given order. The derivatives are understood in a suitable weak sense to make the space complete, thus a Banach space. Intuitively, a Sobolev space is a space of functions with sufficiently many derivatives for some application domain, such as partial differential equations, and equipped with a norm that measures both the size and regularity of a function.
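[the norm in question (standard definition, my addition):

$$\|f\|_{W^{k,p}} = \Bigl(\sum_{|\alpha| \le k} \|D^\alpha f\|_{L^p}^p\Bigr)^{1/p},$$

with the derivatives $D^\alpha$ taken in the weak (distributional) sense; completeness in this norm is what makes $W^{k,p}$ a Banach space.]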
math  concept  math.CA  math.FA  differential  inner-product  wiki  reference  regularity  smoothness  norms  nibble  zooming 
february 2017 by nhaliday
Mikhail Leonidovich Gromov - Wikipedia
Gromov's style of geometry often features a "coarse" or "soft" viewpoint, analyzing asymptotic or large-scale properties.

Gromov is also interested in mathematical biology, the structure of the brain and the thinking process, and the way scientific ideas evolve.
math  people  russia  differential  geometry  topology  math.GR  wiki  structure  meta:math  meta:science  interdisciplinary  bio  neuro  magnitude  limits  science  nibble  coarse-fine  wild-ideas  convergence  info-dynamics  ideas  heavyweights 
january 2017 by nhaliday
ho.history overview - Proofs that require fundamentally new ways of thinking - MathOverflow
my favorite:
Although this has already been said elsewhere on MathOverflow, I think it's worth repeating that Gromov is someone who has arguably introduced more radical thoughts into mathematics than anyone else. Examples involving groups with polynomial growth and holomorphic curves have already been cited in other answers to this question. I have two other obvious ones but there are many more.

I don't remember where I first learned about convergence of Riemannian manifolds, but I had to laugh because there's no way I would have ever conceived of such a notion. To be fair, all of the groundwork for this was laid out in Cheeger's thesis, but it was Gromov who reformulated everything as a convergence theorem and recognized its power.

Another time Gromov made me laugh was when I was reading what little I could understand of his book Partial Differential Relations. This book is probably full of radical ideas that I don't understand. The one I did was his approach to solving the linearized isometric embedding equation. His radical, absurd, but elementary idea was that if the system is sufficiently underdetermined, then the linear partial differential operator could be inverted by another linear partial differential operator. Both the statement and proof are for me the funniest in mathematics. Most of us view solving PDE's as something that requires hard work, involving analysis and estimates, and Gromov manages to do it using only elementary linear algebra. This then allows him to establish the existence of isometric embedding of Riemannian manifolds in a wide variety of settings.
q-n-a  overflow  soft-question  big-list  math  meta:math  history  insight  synthesis  gowers  mathtariat  hi-order-bits  frontier  proofs  magnitude  giants  differential  geometry  limits  flexibility  nibble  degrees-of-freedom  big-picture  novelty  zooming  big-surf  wild-ideas  metameta  courage  convergence  ideas  innovation  the-trenches  discovery  creative  elegance 
january 2017 by nhaliday
