When is proof by contradiction necessary? | Gowers's Weblog

nibble org:bleg gowers mathtariat math proofs contradiction volo-avolo structure math.CA math.NT algebra parsimony elegance minimalism efficiency technical-writing necessity-sufficiency degrees-of-freedom simplification-normalization

october 2019 by nhaliday

Operations on polynomials (on cp-algorithms) - Codeforces

august 2019 by nhaliday

https://stackoverflow.com/questions/44770632/fft-division-for-fast-polynomial-division

links to good lecture notes: http://web.cs.iastate.edu/~cs577/handouts/polydivide.pdf
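The linked notes reduce polynomial division to power-series inversion by Newton iteration. A minimal Python sketch of that idea, with a naive O(n^2) convolution standing in for the FFT/NTT multiply (swap it for an NTT to get the fast version); coefficients are low-degree-first modulo the usual NTT-friendly prime 998244353, and the function names are mine, not the notes':

```python
MOD = 998244353  # common NTT-friendly prime

def mul(a, b):
    """Naive O(n^2) convolution mod MOD; replace with an NTT for speed."""
    res = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            res[i + j] = (res[i + j] + x * y) % MOD
    return res

def inv_series(b, n):
    """First n coefficients of 1/b as a power series (needs b[0] != 0)."""
    q = [pow(b[0], -1, MOD)]
    k = 1
    while k < n:
        k *= 2
        # Newton step q <- q * (2 - b*q) mod x^k doubles the correct terms.
        t = mul(b[:k], q)[:k]
        t = [(-c) % MOD for c in t]
        t[0] = (t[0] + 2) % MOD
        q = mul(q, t)[:k]
    return q[:n]

def poly_divmod(a, b):
    """Quotient and remainder of a by b, coefficients low-degree-first."""
    n, m = len(a) - 1, len(b) - 1
    if n < m:
        return [0], a
    # Reverse-coefficient trick: rev(q) = rev(a) * rev(b)^-1 mod x^(n-m+1).
    q = mul(a[::-1], inv_series(b[::-1], n - m + 1))[:n - m + 1][::-1]
    r = [(x - y) % MOD for x, y in zip(a, mul(b, q))][:max(m, 1)]
    while len(r) > 1 and r[-1] == 0:
        r.pop()
    return q, r
```

For instance, dividing x^3 + 2x^2 + 3x + 4 by x + 1 gives quotient x^2 + x + 2 and remainder 2.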

oly
oly-programming
programming
python
examples
nitty-gritty
polynomials
algebra
math.CA
yoga
multi
pdf
lecture-notes
howto
algorithms
q-n-a
stackex
fourier
libraries
multiplicative
math.NT
calculation

big list - Are there proofs that you feel you did not "understand" for a long time? - MathOverflow

nibble q-n-a overflow soft-question big-list math proofs expert-experience heavyweights gowers mathtariat reflection learning intricacy grokkability intuition algebra math.GR motivation math.GN topology synthesis math.CT computation tcs logic iteration-recursion math.CA extrema smoothness span-cover grokkability-clarity

august 2019 by nhaliday

galois theory - Existence of irreducible polynomial of arbitrary degree over finite field without use of primitive element theorem? - Mathematics Stack Exchange

nibble q-n-a overflow math math.CA algebra multiplicative tidbits proofs existence pigeonhole-markov estimate fields identity measure

july 2019 by nhaliday
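The usual counting route to that existence question (which avoids the primitive element theorem) is the Möbius-inversion formula for the number of monic irreducibles of degree n over F_q. A small sketch; the function names are my own:

```python
def mobius(n):
    """Möbius function mu(n) by trial division."""
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0          # squared prime factor
            result = -result
        d += 1
    return -result if n > 1 else result

def count_irreducible(q, n):
    """Number of monic irreducible polynomials of degree n over F_q:
    (1/n) * sum over d | n of mu(d) * q^(n/d)."""
    total = sum(mobius(d) * q ** (n // d)
                for d in range(1, n + 1) if n % d == 0)
    assert total % n == 0
    return total // n
```

Since the d = 1 term q^n dominates the alternating sum, the count is positive for every n, which is exactly the existence statement; e.g. count_irreducible(2, 3) = 2, matching x^3 + x + 1 and x^3 + x^2 + 1.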

Rational Sines of Rational Multiples of π

july 2019 by nhaliday

For which rational multiples of π is the sine rational? We have the three trivial cases

0, π/6, and π/2 (with sines 0, 1/2, and 1),

and we wish to show that these are essentially the only distinct rational sines of rational multiples of π.

The assertion about rational sines of rational multiples of π follows from two fundamental lemmas. The first is

Lemma 1: For any rational number q the value of sin(qπ) is a root of a monic polynomial with integer coefficients.

[Pf uses some ideas unfamiliar to me: similarity parameter of Moebius (linear fractional) transformations, and finding a polynomial for a desired root by constructing a Moebius transformation with a finite period.]

...

Lemma 2: Any root of a monic polynomial f(x) with integer coefficients must either be an integer or irrational.

[Gauss's Lemma, cf Dummit-Foote.]
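Lemma 2 is what makes irrationality proofs mechanical: a rational root of a monic integer polynomial must be an integer, so it suffices to test the integer divisors of the constant term. A tiny illustrative sketch (assuming a nonzero constant term; the helper name is mine):

```python
def integer_roots(coeffs):
    """All integer roots of an integer polynomial, coeffs low-degree-first.
    Assumes a nonzero constant term (otherwise factor out x first)."""
    c0 = coeffs[0]
    # Any integer root must divide the constant term.
    candidates = {d for d in range(1, abs(c0) + 1) if c0 % d == 0}
    candidates |= {-d for d in candidates}
    return sorted(r for r in candidates
                  if sum(c * r ** i for i, c in enumerate(coeffs)) == 0)
```

For example, x^2 - 2 has no integer roots, so by Lemma 2 its roots (±√2) are irrational.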

...

nibble
tidbits
org:junk
analysis
trivia
math
algebra
polynomials
fields
characterization
direction
math.CA
math.CV
ground-up

The Existential Risk of Math Errors - Gwern.net

july 2019 by nhaliday

How big is this upper bound? Mathematicians have often made errors in proofs. But it’s rarer for ideas to be accepted for a long time and then rejected. We can divide errors into two basic cases, corresponding to type I and type II errors:

1. Mistakes where the theorem is still true, but the proof was incorrect (type I)

2. Mistakes where the theorem was false, and the proof was also necessarily incorrect (type II)

Before someone comes up with a final answer, a mathematician may have many levels of intuition in formulating & working on the problem, but we’ll consider the final end-product where the mathematician feels satisfied that he has solved it. Case 1 is perhaps the most common case, with innumerable examples; this is sometimes due to mistakes in the proof that anyone would accept as a mistake, but many of these cases are due to changing standards of proof. For example, when David Hilbert discovered errors in Euclid’s proofs which no one had noticed before, the theorems were still true, and the gaps were due more to Hilbert being a modern mathematician thinking in terms of formal systems (which of course Euclid did not think in). (David Hilbert himself turns out to be a useful example of the other kind of error: his famous list of 23 problems was accompanied by definite opinions on the outcome of each problem and sometimes timings, several of which were wrong or questionable.) Similarly, early calculus used ‘infinitesimals’ which were sometimes treated as being 0 and sometimes treated as an indefinitely small non-zero number; this was incoherent, and strictly speaking practically all of the calculus results were wrong because they relied on an incoherent concept - but of course the results were some of the greatest mathematical work ever conducted, and when later mathematicians put calculus on a more rigorous footing, they immediately re-derived those results (sometimes with important qualifications); doubtless, as modern math evolves, other fields have sometimes needed to go back and clean up their foundations, and will in the future.

...

Isaac Newton, incidentally, gave two proofs of the same solution to a problem in probability, one via enumeration and the other more abstract; the enumeration was correct, but the other proof was totally wrong, and this was not noticed for a long time, leading Stigler to remark:

...

TYPE I > TYPE II?

“Lefschetz was a purely intuitive mathematician. It was said of him that he had never given a completely correct proof, but had never made a wrong guess either.”

- Gian-Carlo Rota

Case 2 is disturbing, since it is a case in which we wind up with false beliefs and also false beliefs about our beliefs (we no longer know that we don’t know). Case 2 could lead to extinction.

...

Except, errors do not seem to be evenly & randomly distributed between case 1 and case 2. There seem to be far more case 1s than case 2s, as already mentioned in the early calculus example: far more than 50% of the early calculus results were correct when checked more rigorously. Richard Hamming attributes to Ralph Boas a comment that, while editing Mathematical Reviews, “of the new results in the papers reviewed most are true but the corresponding proofs are perhaps half the time plain wrong”.

...

Gian-Carlo Rota gives us an example with Hilbert:

...

Olga labored for three years; it turned out that all mistakes could be corrected without any major changes in the statement of the theorems. There was one exception, a paper Hilbert wrote in his old age, which could not be fixed; it was a purported proof of the continuum hypothesis, you will find it in a volume of the Mathematische Annalen of the early thirties.

...

Leslie Lamport advocates for machine-checked proofs and a more rigorous style of proofs similar to natural deduction, noting a mathematician acquaintance who guesses at a broad error rate of 1/3, and that he routinely found mistakes in his own proofs and, worse, believed false conjectures.

[more on these "structured proofs":

https://academia.stackexchange.com/questions/52435/does-anyone-actually-publish-structured-proofs

https://mathoverflow.net/questions/35727/community-experiences-writing-lamports-structured-proofs

]

We can probably add software to that list: early software engineering work found that, dismayingly, bug rates seem to be simply a function of lines of code, and one would expect diseconomies of scale. So one would expect that in going from the ~4,000 lines of code of the Microsoft DOS operating system kernel to the ~50,000,000 lines of code in Windows Server 2003 (with full systems of applications and libraries being even larger: the comprehensive Debian repository in 2007 contained ~323,551,126 lines of code) the number of active bugs at any time would be… fairly large. Mathematical software is hopefully better, but practitioners still run into issues (e.g. Durán et al 2014, Fonseca et al 2017) and I don’t know of any research pinning down how buggy key mathematical systems like Mathematica are or how much published mathematics may be erroneous due to bugs. This general problem led to predictions of doom and spurred much research into automated proof-checking, static analysis, and functional languages.

[related:

https://mathoverflow.net/questions/11517/computer-algebra-errors

I don't know any interesting bugs in symbolic algebra packages but I know a true, enlightening and entertaining story about something that looked like a bug but wasn't.

Define sinc x = (sin x)/x.

Someone found the following result in an algebra package: ∫₀^∞ sinc x dx = π/2

They then found the following results:

...

So of course when they got:

∫₀^∞ sinc x · sinc(x/3) · sinc(x/5) ⋯ sinc(x/15) dx = (467807924713440738696537864469 / 935615849440640907310521750000) π

hmm:

Which means that nobody knows Fourier analysis nowadays. Very sad and discouraging story... – fedja Jan 29 '10 at 18:47
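The punchline fraction can be checked with exact stdlib arithmetic: it falls just short of 1/2 (these are the Borwein integrals — the product stays exactly π/2 only while 1/3 + 1/5 + … + 1/(2k+1) remains below 1, which first fails at 1/15), so the numerical output looked like a floating-point bug but wasn't:

```python
from fractions import Fraction

# The exact coefficient of pi from the story above.
ratio = Fraction(467807924713440738696537864469,
                 935615849440640907310521750000)

# How far the coefficient falls below 1/2 -- a tiny positive rational.
deficit = Fraction(1, 2) - ratio
print(float(deficit))
```

Multiplying the deficit by π gives the amount (on the order of 10⁻¹¹) by which the integral misses π/2, far below what a quick numerical glance would notice.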

--

Because the most popular systems are all commercial, they tend to guard their bug databases rather closely -- making them public would seriously cut their sales. For example, for the open source project Sage (which is quite young), you can get a list of all the known bugs from this page: 1582 known issues on Feb 16th, 2010 (which includes feature requests, problems with documentation, etc.).

That is an order of magnitude less than the commercial systems. And it's not because it is better, it is because it is younger and smaller. It might be better, but until SAGE does a lot of analysis (about 40% of CAS bugs are there) and a fancy user interface (another 40%), it is too hard to compare.

I once ran a graduate course whose core topic was studying the fundamental disconnect between the algebraic nature of CAS and the analytic nature of what it is mostly used for. There are issues of logic -- CASes work more or less in an intensional logic, while most of analysis is stated in a purely extensional fashion. There is no well-defined 'denotational semantics' for expressions-as-functions, which strongly contributes to the deeper bugs in CASes.]

...

Should such widely-believed conjectures as P≠NP or the Riemann hypothesis turn out to be false, then because they are assumed by so many existing proofs, a far larger math holocaust would ensue - and our previous estimates of error rates will turn out to have been substantial underestimates. But it may be a cloud with a silver lining, if it doesn’t come at a time of danger.

https://mathoverflow.net/questions/338607/why-doesnt-mathematics-collapse-down-even-though-humans-quite-often-make-mista

more on formal methods in programming:

https://www.quantamagazine.org/formal-verification-creates-hacker-proof-code-20160920/

https://intelligence.org/2014/03/02/bob-constable/

https://softwareengineering.stackexchange.com/questions/375342/what-are-the-barriers-that-prevent-widespread-adoption-of-formal-methods

Update: measured effort

In the October 2018 issue of Communications of the ACM there is an interesting article, “Formally verified software in the real world”, with some estimates of the effort.

Interestingly (based on OS development for military equipment), it seems that producing formally proved software requires 3.3 times more effort than with traditional engineering techniques. So it's really costly.

On the other hand, it requires 2.3 times less effort to get high security software this way than with traditionally engineered software if you add the effort to make such software certified at a high security level (EAL 7). So if you have high reliability or security requirements there is definitely a business case for going formal.

WHY DON'T PEOPLE USE FORMAL METHODS?: https://www.hillelwayne.com/post/why-dont-people-use-formal-methods/

You can see examples of how all of these look at Let’s Prove Leftpad. HOL4 and Isabelle are good examples of “independent theorem” specs, SPARK and Dafny have “embedded assertion” specs, and Coq and Agda have “dependent type” specs.

If you squint a bit it looks like these three forms of code spec map to the three main domains of automated correctness checking: tests, contracts, and types. This is not a coincidence. Correctness is a spectrum, and formal verification is one extreme of that spectrum. As we reduce the rigour (and effort) of our verification we get simpler and narrower checks, whether that means limiting the explored state space, using weaker types, or pushing verification to the runtime. Any means of total specification then becomes a means of partial specification, and vice versa: many consider Cleanroom a formal verification technique, which primarily works by pushing code review far beyond what’s humanly possible.
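As a toy illustration of the “embedded assertion” end of that spectrum, here is leftpad with the three clauses of the Let’s Prove Leftpad spec written as runtime-checked pre/postconditions in Python — a partial check in the article’s sense (assertions at runtime, not a machine-checked proof):

```python
def leftpad(pad, n, s):
    """Pad s on the left with the character pad up to total length n."""
    assert len(pad) == 1 and n >= 0               # preconditions
    out = pad * max(n - len(s), 0) + s
    # The three clauses of the Leftpad spec, checked at runtime:
    assert len(out) == max(n, len(s))             # 1. output has the right length
    assert out.endswith(s)                        # 2. s is a suffix of the output
    assert set(out[:len(out) - len(s)]) <= {pad}  # 3. the prefix is all pad chars
    return out
```

For example, leftpad("0", 5, "42") returns "00042"; pushing the same three clauses through a prover instead of asserting them is exactly what the linked provers do.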

...

The question, then: “is 90/95/99% correct significantly cheaper than 100% correct?” The answer is very yes. We all are comfortable saying that a codebase we’ve well-tested and well-typed is mostly correct modulo a few fixes in prod, and we’re even writing more than four lines of code a day. In fact, the vast… [more]

ratty
gwern
analysis
essay
realness
truth
correctness
reason
philosophy
math
proofs
formal-methods
cs
programming
engineering
worse-is-better/the-right-thing
intuition
giants
old-anglo
error
street-fighting
heuristic
zooming
risk
threat-modeling
software
lens
logic
inference
physics
differential
geometry
estimate
distribution
robust
speculation
nonlinearity
cost-benefit
convexity-curvature
measure
scale
trivia
cocktail
history
early-modern
europe
math.CA
rigor
news
org:mag
org:sci
miri-cfar
pdf
thesis
comparison
examples
org:junk
q-n-a
stackex
pragmatic
tradeoffs
cracker-prog
techtariat
invariance
DSL
chart
ecosystem
grokkability
heavyweights
CAS
static-dynamic
lower-bounds
complexity
tcs
open-problems
big-surf
ideas
certificates-recognition
proof-systems
PCP
mediterranean
SDP
meta:prediction
epistemic
questions
guessing
distributed
overflow
nibble
soft-question
track-record
big-list
hmm
frontier
state-of-art
move-fast-(and-break-things)
grokkability-clarity
technical-writing
trust

Factorization of polynomials over finite fields - Wikipedia

july 2019 by nhaliday

In mathematics and computer algebra the factorization of a polynomial consists of decomposing it into a product of irreducible factors. This decomposition is theoretically possible and is unique for polynomials with coefficients in any field, but rather strong restrictions on the field of the coefficients are needed to allow the computation of the factorization by means of an algorithm. In practice, algorithms have been designed only for polynomials with coefficients in a finite field, in the field of rationals or in a finitely generated field extension of one of them.

All factorization algorithms, including the case of multivariate polynomials over the rational numbers, reduce the problem to this case; see polynomial factorization. It is also used for various applications of finite fields, such as coding theory (cyclic redundancy codes and BCH codes), cryptography (public key cryptography by means of elliptic curves), and computational number theory.

As the reduction of the factorization of multivariate polynomials to that of univariate polynomials does not have any specificity in the case of coefficients in a finite field, only polynomials with one variable are considered in this article.

...

In the algorithms that follow, the complexities are expressed in terms of the number of arithmetic operations in Fq, using classical algorithms for the arithmetic of polynomials.

[ed.: Interesting choice...]

...

Factoring algorithms

Many algorithms for factoring polynomials over finite fields include the following three stages:

Square-free factorization

Distinct-degree factorization

Equal-degree factorization

An important exception is Berlekamp's algorithm, which combines stages 2 and 3.

Berlekamp's algorithm

Main article: Berlekamp's algorithm

Berlekamp's algorithm is historically important as the first factorization algorithm which works well in practice. However, it contains a loop over the elements of the ground field, which implies that it is practicable only over small finite fields. For a fixed ground field, its time complexity is polynomial, but, for general ground fields, the complexity is exponential in the size of the ground field.

[ed.: This actually looks fairly implementable.]
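The [ed.] note is right that the stages need nothing beyond polynomial gcds and modular exponentiation. A minimal Python sketch of the d = 1 step of distinct-degree factorization over F_p (dense low-degree-first coefficient lists; gcd(f, x^p − x) is the product of the distinct linear factors of f — the function names are mine):

```python
def trim(a, p):
    """Drop trailing (leading-degree) zero coefficients in place."""
    while a and a[-1] % p == 0:
        a.pop()
    return a

def poly_rem(a, b, p):
    """Remainder of a divided by b over F_p; lists are low-degree-first."""
    a = trim([c % p for c in a], p)
    lead_inv = pow(b[-1], -1, p)
    while len(a) >= len(b):
        coef = a[-1] * lead_inv % p
        shift = len(a) - len(b)
        for i, bc in enumerate(b):
            a[shift + i] = (a[shift + i] - coef * bc) % p
        trim(a, p)
    return a

def poly_gcd(a, b, p):
    while b:
        a, b = b, poly_rem(a, b, p)
    lead_inv = pow(a[-1], -1, p)      # normalize to a monic gcd
    return [c * lead_inv % p for c in a]

def poly_mulmod(a, b, f, p):
    """Product of a and b reduced mod f over F_p."""
    res = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            res[i + j] = (res[i + j] + x * y) % p
    return poly_rem(res, f, p)

def x_pow_mod(e, f, p):
    """Compute x^e mod f by square-and-multiply on polynomials."""
    result, base = [1], [0, 1]
    while e:
        if e & 1:
            result = poly_mulmod(result, base, f, p)
        base = poly_mulmod(base, base, f, p)
        e >>= 1
    return result

def linear_factor_product(f, p):
    """gcd(f, x^p - x): the product of the distinct linear factors of f."""
    h = x_pow_mod(p, f, p)
    h = h + [0] * max(0, 2 - len(h))
    h[1] = (h[1] - 1) % p             # h = x^p - x, already reduced mod f
    h = trim(h, p)
    if not h:                         # f divides x^p - x: f splits into linears
        lead_inv = pow(f[-1], -1, p)
        return [c * lead_inv % p for c in f]
    return poly_gcd(f, h, p)
```

For example, over F_5 the polynomial x² + 1 factors as (x − 2)(x − 3), so the gcd recovers all of it; over F_3 it is irreducible and the gcd is 1. Iterating with x^(p^d) − x for d = 2, 3, … gives the full distinct-degree stage.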

wiki
reference
concept
algorithms
calculation
nibble
numerics
math
algebra
math.CA
fields
polynomials
levers
multiplicative
math.NT

Programming Languages - Hyperpolyglot

june 2019 by nhaliday

very detailed PL comparisons/cheatsheets, also CASes, sci-comp stuff, SQLs, and programmer tools

tools
reference
cheatsheet
comparison
programming
pls
python
javascript
howto
list
terminal
c(pp)
golang
jvm
rust
scala
functional
haskell
ocaml-sml
lisp
numerics
sci-comp
data-science
r-lang
CAS
nibble
tutorial
init
documentation
editors
vcs
git
hg
dbs
types
oop
syntax
linear-algebra
math
math.CA
differential
math.CO
math.NT
plots
dataviz
polynomials
unix
objektbuch
crosstab
track-record
dotnet
DSL
whole-partial-many
static-dynamic
error-handling
error
june 2019 by nhaliday

[chao-dyn/9907004] Quasi periodic motions from Hipparchus to Kolmogorov

november 2017 by nhaliday

The evolution of the conception of motion as composed by circular uniform motions is analyzed, stressing its continuity from antiquity to our days.

nibble
preprint
papers
math
physics
mechanics
space
history
iron-age
mediterranean
the-classics
science
the-trenches
fourier
math.CA
cycles
oscillation
interdisciplinary
early-modern
the-great-west-whale
composition-decomposition
series
time
sequential
article
exposition
explanation
math.DS
innovation
novelty
giants
waves
org:mat
november 2017 by nhaliday

Physics 152: Gravity, Fluids, Waves, Heat

september 2017 by nhaliday

lots of good lecture notes with pictures, worked examples, and simulations

unit
org:edu
org:junk
course
physics
mechanics
gravity
tidbits
symmetry
calculation
examples
lecture-notes
simulation
dynamic
dynamical
visualization
visual-understanding
ground-up
fluid
waves
oscillation
thermo
stat-mech
p:whenever
accretion
math.CA
hi-order-bits
nitty-gritty
linearity
spatial
space
entropy-like
temperature
proofs
yoga
plots
september 2017 by nhaliday

Isaac Newton: the first physicist.

august 2017 by nhaliday

[...] More fundamentally, Newton's mathematical approach has become so basic to all of physics that he is generally regarded as _the father of the clockwork universe_: the first, and perhaps the greatest, physicist.

The Alchemist

In fact, Newton was deeply opposed to the mechanistic conception of the world. A secretive alchemist [...]. His written work on the subject ran to more than a million words, far more than he ever produced on calculus or mechanics [21]. Obsessively religious, he spent years correlating biblical prophecy with historical events [319ff]. He became deeply convinced that Christian doctrine had been deliberately corrupted by _the false notion of the trinity_, and developed a vicious contempt for conventional (trinitarian) Christianity and for Roman Catholicism in particular [324]. [...] He believed that God mediated the gravitational force [511](353), and opposed any attempt to give a mechanistic explanation of chemistry or gravity, since that would diminish the role of God [646]. He consequently conceived such _a hatred of Descartes_, on whose foundations so many of his achievements were built, that at times _he refused even to write his name_ [399,401].

The Man

Newton was rigorously puritanical: when one of his few friends told him "a loose story about a nun", he ended their friendship (267). [...] He thought of himself as the sole inventor of the calculus, and hence the greatest mathematician since the ancients, and left behind a huge corpus of unpublished work, mostly alchemy and biblical exegesis, that he believed future generations would appreciate more than his own (199,511).

[...] Even though these unattractive qualities caused him to waste huge amounts of time and energy in ruthless vendettas against colleagues who in many cases had helped him (see below), they also drove him to the extraordinary achievements for which he is still remembered. And for all his arrogance, Newton's own summary of his life (574) was beautifully humble:

"I do not know how I may appear to the world, but to myself I seem to have been only like a boy, playing on the sea-shore, and diverting myself, in now and then finding a smoother pebble or prettier shell than ordinary, whilst the great ocean of truth lay all undiscovered before me."

Before Newton

...

1. Calculus. Descartes, in 1637, pioneered the use of coordinates to turn geometric problems into algebraic ones, a method that Newton was never to accept [399]. Descartes, Fermat, and others investigated methods of calculating the tangents to arbitrary curves [28-30]. Kepler, Cavalieri, and others used infinitesimal slices to calculate volumes and areas enclosed by curves [30], but no unified treatment of these problems had yet been found.

2. Mechanics & Planetary motion. The elliptical orbits of the planets having been established by Kepler, Descartes proposed the idea of a purely mechanical heliocentric universe, following deterministic laws, and with no need of any divine agency [15], another anathema to Newton. _No one imagined, however, that a single law might explain both falling bodies and planetary motion_. Galileo invented the concept of inertia, anticipating Newton's first and second laws of motion (293), and Huygens used it to analyze collisions and circular motion [11]. Again, these pieces of progress had not been synthesized into a general method for analyzing forces and motion.

3. Light. Descartes claimed that light was a pressure wave, Gassendi that it was a stream of particles (corpuscles) [13]. As might be guessed, Newton vigorously supported the corpuscular theory. _White light was universally believed to be the pure form_, and colors were some added property bequeathed to it upon reflection from matter (150). Descartes had discovered the sine law of refraction (94), but it was not known that some colors were refracted more than others. The pattern was the familiar one: many pieces of the puzzle were in place, but the overall picture was still unclear.

The Natural Philosopher

Between 1671 and 1690, Newton was to supply definitive treatments of most of these problems. By assiduous experimentation with prisms he established that colored light was actually fundamental, and that it could be recombined to create white light. He did not publish the result for 6 years, by which time it seemed so obvious to him that he found great difficulty in responding patiently to the many misunderstandings and objections with which it met [239ff].

He invented differential and integral calculus in 1665-6, but failed to publish it. Leibniz invented it independently 10 years later, and published it first [718]. This resulted in a priority dispute which degenerated into a feud characterized by extraordinary dishonesty and venom on both sides (542).

In discovering gravitation, Newton was also _barely ahead of the rest of the pack_. Hooke was the first to realize that orbital motion was produced by a centripetal force (268), and in 1679 _he suggested an inverse square law to Newton_ [387]. Halley and Wren came to the same conclusion, and turned to Newton for a proof, which he duly supplied [402]. Newton did not stop there, however. From 1684 to 1687 he worked continuously on a grand synthesis of the whole of mechanics, the "Philosophiae Naturalis Principia Mathematica," in which he developed his three laws of motion and showed in detail that the universal force of gravitation could explain the fall of an apple as well as the precise motions of planets and comets.

The "Principia" crystallized the new conceptions of force and inertia that had gradually been emerging, and marks the beginning of theoretical physics as the mathematical field that we know today. It is not an easy read: Newton had developed the idea that geometry and equations should never be combined [399], and therefore _refused to use simple analytical techniques in his proofs_, requiring classical geometric constructions instead [428]. He even made his Principia _deliberately abstruse in order to discourage amateurs from feeling qualified to criticize it_ [459].

[...] most of the rest of his life was spent in administrative work as Master of the Mint and as President of the Royal Society, _a position he ruthlessly exploited in the pursuit of vendettas_ against Hooke (300ff,500), Leibniz (510ff), and Flamsteed (490,500), among others. He kept secret his disbelief in Christ's divinity right up until his dying moment, at which point he refused the last rites, at last openly defying the church (576). [...]

org:junk
people
old-anglo
giants
physics
mechanics
gravity
books
religion
christianity
theos
science
the-trenches
britain
history
early-modern
the-great-west-whale
stories
math
math.CA
nibble
discovery
august 2017 by nhaliday

Separating Hyperplane Theorems

august 2017 by nhaliday

also has supporting hyperplane theorems
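One standard statement of the result (my sketch, not quoted from the notes):

```latex
% Separating hyperplane theorem: if C and D are disjoint nonempty convex
% subsets of R^n, then there exist a \ne 0 and b such that
a^{\mathsf{T}} x \le b \quad \forall x \in C,
\qquad
a^{\mathsf{T}} x \ge b \quad \forall x \in D .
```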

pdf
lecture-notes
nibble
exposition
caltech
acm
math
math.CA
curvature
optimization
proofs
existence
levers
atoms
yoga
convexity-curvature
august 2017 by nhaliday

Lecture 7: Convex Problems, Separation Theorems

august 2017 by nhaliday

Supporting Hyperplane Theorem

Separating Hyperplane Theorems

pdf
nibble
lectures
slides
exposition
proofs
acm
math
math.CA
optimization
curvature
existence
duality
levers
atoms
yoga
convexity-curvature
august 2017 by nhaliday

Subgradients - S. Boyd and L. Vandenberghe

august 2017 by nhaliday

If f is convex and x ∈ int dom f, then ∂f(x) is nonempty and bounded. To establish that ∂f(x) ≠ ∅, we apply the supporting hyperplane theorem to the convex set epi f at the boundary point (x, f(x)), ...
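A standard one-dimensional instance of the quoted result (my illustration, not from the notes): for f(x) = |x| the subdifferential is nonempty and bounded at every point, including the kink at 0.

```latex
f(x) = |x|, \qquad
\partial f(x) =
\begin{cases}
\{-1\} & x < 0,\\
[-1,\,1] & x = 0,\\
\{+1\} & x > 0.
\end{cases}
```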

pdf
nibble
lecture-notes
acm
optimization
curvature
math.CA
estimate
linearity
differential
existence
proofs
exposition
atoms
math
marginal
convexity-curvature
august 2017 by nhaliday

Archimedes Palimpsest - Wikipedia

may 2017 by nhaliday

Using this method, Archimedes was able to solve several problems now treated by integral calculus, which was given its modern form in the seventeenth century by Isaac Newton and Gottfried Leibniz. Among those problems were that of calculating the center of gravity of a solid hemisphere, the center of gravity of a frustum of a circular paraboloid, and the area of a region bounded by a parabola and one of its secant lines. (For explicit details, see Archimedes' use of infinitesimals.)

When rigorously proving theorems, Archimedes often used what are now called Riemann sums. In "On the Sphere and Cylinder," he gives upper and lower bounds for the surface area of a sphere by cutting the sphere into sections of equal width. He then bounds the area of each section by the area of an inscribed and circumscribed cone, which he proves have a larger and smaller area correspondingly. He adds the areas of the cones, which is a type of Riemann sum for the area of the sphere considered as a surface of revolution.

But there are two essential differences between Archimedes' method and 19th-century methods:

1. Archimedes did not know about differentiation, so he could not calculate any integrals other than those that came from center-of-mass considerations, by symmetry. While he had a notion of linearity, to find the volume of a sphere he had to balance two figures at the same time; he never figured out how to change variables or integrate by parts.

2. When calculating approximating sums, he imposed the further constraint that the sums provide rigorous upper and lower bounds. This was required because the Greeks lacked algebraic methods that could establish that error terms in an approximation are small.
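The two-sided bounds in point 2 have a simple modern analogue: bracket an area between a lower and an upper Riemann sum. A minimal sketch (my code, using the parabola example from the article, not Archimedes' actual construction):

```python
def riemann_bounds(n):
    """Lower and upper Riemann sums for the area under y = x^2 on [0, 1].

    x^2 is increasing there, so left endpoints give a lower sum and right
    endpoints an upper sum; the gap is exactly (f(1) - f(0)) / n = 1/n.
    """
    h = 1.0 / n
    lower = sum((i * h) ** 2 for i in range(n)) * h
    upper = sum(((i + 1) * h) ** 2 for i in range(n)) * h
    return lower, upper

lo, hi = riemann_bounds(1000)  # brackets the exact area 1/3
```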

big-peeps
history
iron-age
mediterranean
the-classics
innovation
discovery
knowledge
math
math.CA
finiteness
the-trenches
wiki
trivia
cocktail
stories
nibble
canon
differential
may 2017 by nhaliday

Chapter 2: Asymptotic Expansions

april 2017 by nhaliday

includes complementary error function
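As a quick numerical illustration (mine, not from the notes): the first terms of the standard asymptotic expansion erfc(x) ~ e^{-x^2}/(x sqrt(pi)) * sum_k (-1)^k (2k-1)!!/(2x^2)^k already track `math.erfc` closely for moderate x.

```python
import math

def erfc_asymptotic(x, terms=2):
    """Truncated asymptotic series for the complementary error function:
    erfc(x) ~ e^{-x^2} / (x sqrt(pi)) * sum_k (-1)^k (2k-1)!! / (2x^2)^k
    """
    prefactor = math.exp(-x * x) / (x * math.sqrt(math.pi))
    total, term = 0.0, 1.0
    for k in range(terms):
        total += term
        term *= -(2 * k + 1) / (2.0 * x * x)  # ratio of consecutive terms
    return prefactor * total

approx = erfc_asymptotic(4.0)
exact = math.erfc(4.0)  # relative error is already under 1% at x = 4
```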

pdf
nibble
exposition
math
acm
math.CA
approximation
limits
integral
magnitude
AMT
yoga
estimate
lecture-notes
april 2017 by nhaliday

Fourier transform - Wikipedia

april 2017 by nhaliday

https://en.wikipedia.org/wiki/Fourier_transform#Properties_of_the_Fourier_transform

https://en.wikipedia.org/wiki/Fourier_transform#Tables_of_important_Fourier_transforms
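One property from the linked table, checked in a discrete analogue (a sketch of mine, not from the article): Parseval/Plancherel for the unnormalized DFT, sum |x_j|^2 = (1/n) sum |X_k|^2.

```python
import cmath

def dft(x):
    """Naive unnormalized discrete Fourier transform."""
    n = len(x)
    return [sum(x[j] * cmath.exp(-2j * cmath.pi * j * k / n) for j in range(n))
            for k in range(n)]

x = [1.0, 2.0, 0.0, -1.0]
X = dft(x)
# Discrete Parseval/Plancherel: sum |x_j|^2 == (1/n) * sum |X_k|^2
lhs = sum(abs(v) ** 2 for v in x)
rhs = sum(abs(v) ** 2 for v in X) / len(x)
```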

nibble
math
acm
math.CA
fourier
list
identity
duality
math.CV
wiki
reference
multi
objektbuch
cheatsheet
calculation
nitty-gritty
concept
examples
integral
AMT
ground-up
IEEE
properties
april 2017 by nhaliday

An Introduction to Measure Theory - Terence Tao

books draft unit math gowers mathtariat measure math.CA probability yoga problem-solving pdf tricki local-global counterexample visual-understanding lifts-projections oscillation limits estimate quantifiers-sums synthesis coarse-fine p:someday s:** heavyweights

february 2017 by nhaliday

List of Laplace transforms - Wikipedia

february 2017 by nhaliday

= moment-generating function (evaluated at a negated argument)
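Spelled out (my gloss): for a random variable X >= 0 with density f,

```latex
\mathcal{L}\{f\}(s) \;=\; \int_0^\infty e^{-st} f(t)\,dt
\;=\; \mathbb{E}\bigl[e^{-sX}\bigr] \;=\; M_X(-s),
```

so tables of Laplace transforms double as tables of moment-generating functions.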

concept
math
acm
math.CA
probability
moments
wiki
reference
calculation
objektbuch
list
examples
nibble
integral
cheatsheet
identity
AMT
properties
february 2017 by nhaliday

The tensor power trick | Tricki

february 2017 by nhaliday

- Fubini's for integrals of tensored extension

- entropy digression is interesting
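The trick in one display (a sketch from memory of the article's setup): suppose one can prove, with C independent of n, a bound for all tensor powers, and both norms tensorize.

```latex
\|f^{\otimes n}\|_A \le C\,\|f^{\otimes n}\|_B
\quad\text{with}\quad
\|f^{\otimes n}\|_A = \|f\|_A^{\,n},\;
\|f^{\otimes n}\|_B = \|f\|_B^{\,n}
\;\Longrightarrow\;
\|f\|_A \le C^{1/n}\,\|f\|_B
\;\xrightarrow[\,n\to\infty\,]{}\;
\|f\|_A \le \|f\|_B ,
```

i.e. the constant is tensored away.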

nibble
tricki
exposition
problem-solving
yoga
estimate
magnitude
tensors
levers
algebraic-complexity
wiki
reference
metabuch
hi-order-bits
synthesis
tidbits
tightness
quantifiers-sums
integral
information-theory
entropy-like
stirling
binomial
concentration-of-measure
limits
stat-mech
additive-combo
math.CV
math.CA
math.FA
fourier
s:*
better-explained
org:mat
elegance
february 2017 by nhaliday

pr.probability - Identities and inequalities in analysis and probability - MathOverflow

february 2017 by nhaliday

interesting approach to proving Cauchy-Schwarz (symmetry+sum of squares)
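The symmetry + sum-of-squares route presumably meant here is Lagrange's identity:

```latex
\Bigl(\sum_i a_i^2\Bigr)\Bigl(\sum_j b_j^2\Bigr) - \Bigl(\sum_i a_i b_i\Bigr)^{\!2}
\;=\; \tfrac{1}{2}\sum_{i,j} (a_i b_j - a_j b_i)^2 \;\ge\; 0 .
```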

q-n-a
overflow
math
math.CA
math.FA
probability
list
big-list
estimate
yoga
synthesis
structure
examples
identity
nibble
sum-of-squares
positivity
tricki
inner-product
wisdom
integral
quantifiers-sums
tidbits
p:whenever
s:null
signum
elegance
february 2017 by nhaliday

What is Haar Measure? - Mathematics Stack Exchange

february 2017 by nhaliday

- group translation invariance

- always exists, often unique
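Concrete instance (my example, not from the answer): on the circle group, normalized Haar measure is d(theta)/2pi, and translation invariance reads

```latex
\int_0^{2\pi} f(\theta + \alpha)\,\frac{d\theta}{2\pi}
\;=\; \int_0^{2\pi} f(\theta)\,\frac{d\theta}{2\pi}
\qquad \text{for all } \alpha \in \mathbb{R}.
```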

q-n-a
overflow
math
algebra
measure
math.GR
math.CA
concept
explanation
nibble
integral
existence
uniqueness
february 2017 by nhaliday

Barrier function - Wikipedia

february 2017 by nhaliday

In constrained optimization, a field of mathematics, a barrier function is a continuous function whose value on a point increases to infinity as the point approaches the boundary of the feasible region of an optimization problem.[1] Such functions are used to replace inequality constraints by a penalizing term in the objective function that is easier to handle.
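A minimal sketch of the idea (my code; the toy problem, interval, and ternary search are illustrative choices, not from the article): minimize x^2 subject to x >= 1 by adding a logarithmic barrier -mu * log(x - 1) and shrinking mu.

```python
import math

def log_barrier_min(mu, lo=1.0 + 1e-9, hi=3.0, iters=200):
    """Minimize x^2 subject to x >= 1 by minimizing the barrier-augmented
    objective x^2 - mu * log(x - 1), here with a ternary search (the
    objective is convex, hence unimodal, on (1, 3))."""
    f = lambda t: t * t - mu * math.log(t - 1.0)
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if f(m1) < f(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2.0

# As mu shrinks, the unconstrained minimizer approaches the constrained
# optimum x* = 1 from inside the feasible region.
xs = [log_barrier_min(mu) for mu in (1.0, 0.1, 0.001)]
```

(For this toy problem the barrier minimizer is available in closed form, x = (1 + sqrt(1 + 2*mu))/2, which makes the convergence easy to check.)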

math
acm
concept
optimization
singularity
smoothness
relaxation
wiki
reference
regularization
math.CA
nibble
february 2017 by nhaliday

Sobolev space - Wikipedia

february 2017 by nhaliday

In mathematics, a Sobolev space is a vector space of functions equipped with a norm that is a combination of Lp-norms of the function itself and its derivatives up to a given order. The derivatives are understood in a suitable weak sense to make the space complete, thus a Banach space. Intuitively, a Sobolev space is a space of functions with sufficiently many derivatives for some application domain, such as partial differential equations, and equipped with a norm that measures both the size and regularity of a function.
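The norm being described, for W^{k,p}(Omega):

```latex
\|f\|_{W^{k,p}(\Omega)}
\;=\; \Bigl( \sum_{|\alpha| \le k} \|D^{\alpha} f\|_{L^p(\Omega)}^{p} \Bigr)^{1/p},
```

where the D^alpha are weak derivatives up to order k.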

math
concept
math.CA
math.FA
differential
inner-product
wiki
reference
regularity
smoothness
norms
nibble
zooming
february 2017 by nhaliday

A VERY BRIEF REVIEW OF MEASURE THEORY

february 2017 by nhaliday

A brief philosophical discussion:

Measure theory, as much as any branch of mathematics, is an area where it is important to be acquainted with the basic notions and statements, but not desperately important to be acquainted with the detailed proofs, which are often rather unilluminating. One should always have in mind a place where one could go and look if one ever did need to understand a proof: for me, that place is Rudin’s Real and Complex Analysis (Rudin’s “red book”).

gowers
pdf
math
math.CA
math.FA
philosophy
measure
exposition
synthesis
big-picture
hi-order-bits
ergodic
ground-up
summary
roadmap
mathtariat
proofs
nibble
unit
integral
zooming
p:whenever
february 2017 by nhaliday

measure theory - Continuous function a.e. - Mathematics Stack Exchange

january 2017 by nhaliday

- note (Lebesgue's criterion): a bounded function on a compact interval is Riemann integrable iff it is continuous a.e. (see Wheeden-Zygmund 5.54)

- equal a.e. to continuous f, but not continuous a.e.: characteristic function of rationals

- continuous a.e., but not equal a.e. to continuous f: step function

- continuous a.e., w/ uncountably many discontinuities: characteristic function of Cantor set

q-n-a
overflow
math
math.CA
counterexample
list
measure
smoothness
singularity
nibble
integral
january 2017 by nhaliday

set theory - What are interesting families of subsets of a given set? - MathOverflow

january 2017 by nhaliday

This fascinating essay by Gromov discusses the issue of "interesting" substructures in a very general way.

q-n-a
overflow
math
math.CA
topology
synthesis
soft-question
list
big-list
mathtariat
insight
links
giants
measure
structure
math.GN
nibble
wild-ideas
p:someday
closure
ideas
sub-super
january 2017 by nhaliday

soft question - What notions are used but not clearly defined in modern mathematics? - MathOverflow

q-n-a overflow math list big-list discussion mathtariat tcstariat conceptual-vocab vague rigor nibble thinking definition clarity physics quantum iteration-recursion algebra fields math.CA math.NT structure logic proofs math.DS grokkability-clarity

january 2017 by nhaliday

soft question - Fundamental Examples - MathOverflow

q-n-a overflow math examples list big-list ground-up synthesis big-picture nibble database top-n hi-order-bits logic physics math.CA math.CV differential math.FA algebra math.NT probability math.DS geometry topology graph-theory math.CO tcs cs social-science game-theory GT-101 stats elegance

january 2017 by nhaliday

functional analysis - Connections between metrics, norms and scalar products (for understanding e.g. Banach and Hilbert spaces) - Mathematics Stack Exchange

q-n-a overflow synthesis math math.CA math.FA linear-algebra concept inner-product norms metric-space nibble hierarchy measure

january 2017 by nhaliday

ho.history overview - Video lectures of mathematics courses available online for free - MathOverflow

january 2017 by nhaliday

interesting fourier transform notes

q-n-a
overflow
accretion
lecture-notes
video
math
exposition
links
list
big-list
nibble
p:someday
quixotic
advanced
fourier
math.CA
algebra
math.NT
applications
IEEE
january 2017 by nhaliday

ca.analysis and odes - Why do functions in complex analysis behave so well? (as opposed to functions in real analysis) - MathOverflow

january 2017 by nhaliday

Well, real-valued analytic functions are just as rigid as their complex-valued counterparts. The true question is why complex smooth (or complex differentiable) functions are automatically complex analytic, whilst real smooth (or real differentiable) functions need not be real analytic.
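The standard counterexample behind this distinction: a real function that is C^infinity everywhere but not analytic at 0,

```latex
f(x) =
\begin{cases}
e^{-1/x^{2}} & x \neq 0,\\
0 & x = 0,
\end{cases}
\qquad f^{(n)}(0) = 0 \ \text{for all } n,
```

so its Taylor series at 0 is identically zero even though f is not zero on any neighborhood of 0.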

q-n-a
overflow
math
math.CA
math.CV
synthesis
curiosity
gowers
oly
mathtariat
tcstariat
comparison
rigidity
smoothness
singularity
regularity
nibble
january 2017 by nhaliday

Soft analysis, hard analysis, and the finite convergence principle | What's new

january 2017 by nhaliday

It is fairly well known that the results obtained by hard and soft analysis respectively can be connected to each other by various “correspondence principles” or “compactness principles”. It is however my belief that the relationship between the two types of analysis is in fact much closer[3] than just this; in many cases, qualitative analysis can be viewed as a convenient abstraction of quantitative analysis, in which the precise dependencies between various finite quantities have been efficiently concealed from view by use of infinitary notation. Conversely, quantitative analysis can often be viewed as a more precise and detailed refinement of qualitative analysis. Furthermore, a method from hard analysis often has some analogue in soft analysis and vice versa, though the language and notation of the analogue may look completely different from that of the original. I therefore feel that it is often profitable for a practitioner of one type of analysis to learn about the other, as they both offer their own strengths, weaknesses, and intuition, and knowledge of one gives more insight[4] into the workings of the other. I wish to illustrate this point here using a simple but not terribly well known result, which I shall call the “finite convergence principle” (thanks to Ben Green for suggesting this name; Jennifer Chayes has also suggested the “metastability principle”). It is the finitary analogue of an utterly trivial infinitary result – namely, that every bounded monotone sequence converges – but sometimes, a careful analysis of a trivial result can be surprisingly revealing, as I hope to demonstrate here.

gowers
mathtariat
math
math.CA
expert
reflection
philosophy
meta:math
logic
math.CO
lens
big-picture
symmetry
limits
finiteness
nibble
org:bleg
coarse-fine
metameta
convergence
expert-experience
january 2017 by nhaliday

Cantor function - Wikipedia

january 2017 by nhaliday

- uniformly continuous but not absolutely continuous

- derivative zero almost everywhere but not constant

- see also: http://mathoverflow.net/questions/31603/why-do-probabilists-take-random-variables-to-be-borel-and-not-lebesgue-measura/31609#31609 (the exercise mentioned uses c(x)+x for c the Cantor function)
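The function itself is easy to evaluate numerically via its self-similar recursion (a sketch of mine; `depth` truncates the recursion): c(x) = c(3x)/2 on [0, 1/3], c = 1/2 on [1/3, 2/3], and c(x) = 1/2 + c(3x - 2)/2 on [2/3, 1].

```python
def cantor(x, depth=40):
    """Cantor function c: [0, 1] -> [0, 1] via its self-similar recursion,
    truncated after `depth` levels (error at most 2**-depth)."""
    c, scale = 0.0, 0.5
    for _ in range(depth):
        if x < 1.0 / 3.0:
            x *= 3.0                  # left third: c(x) = c(3x)/2
        elif x <= 2.0 / 3.0:
            return c + scale          # middle third: constant at this scale
        else:
            c += scale                # right third: c(x) = 1/2 + c(3x-2)/2
            x = 3.0 * x - 2.0
        scale *= 0.5
    return c
```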

math
math.CA
counterexample
wiki
reference
multi
math.FA
atoms
measure
smoothness
singularity
nibble
january 2017 by nhaliday

soft question - What are some slogans that express mathematical tricks? - MathOverflow

q-n-a overflow math list big-list soft-question synthesis yoga tricks aphorism big-picture proofs nibble tricki math.CA math.FA inner-product estimate local-global uniqueness synchrony math.AT symmetry extrema existence wisdom quantifiers-sums probabilistic-method concentration-of-measure p:whenever s:** elegance

january 2017 by nhaliday
