**nhaliday : heavyweights**
69

About This Website - Gwern.net

ratty gwern people summary workflow exocortex long-short-run software oss vcs internet web flux-stasis time sequential spreading longform discipline writing vulgar subculture scifi-fantasy fiction meta:reading tools priors-posteriors meta:prediction lesswrong planning info-foraging r-lang feynman giants heavyweights learning mindful retention notetaking pdf backup profile confidence epistemic rationality yak-shaving checking wire-guided hn forum aggregator quotes aphorism time-series data frontend minimalism form-design

october 2019 by nhaliday

CppCon 2015: Chandler Carruth "Tuning C++: Benchmarks, and CPUs, and Compilers! Oh My!" - YouTube

october 2019 by nhaliday

- very basics of benchmarking

- Q: why does preemptive reserve speed up push_back by 10x?

- favorite tool is Linux perf

- callgraph profiling

- important option: -fomit-frame-pointer

- perf has nice interface ('a' = "annotate") for reading assembly (good display of branches/jumps)

- A: optimized to no-op

- how to turn off optimizer

- profilers aren't infallible. a lot of the time samples are misattributed to neighboring ops

- fast mod example

- branch prediction hints (#define UNLIKELY(x), __builtin_expect, etc.)

video
presentation
c(pp)
pls
programming
unix
heavyweights
cracker-prog
benchmarks
engineering
best-practices
working-stiff
systems
expert-experience
google
llvm
common-case
stories
libraries
measurement
linux
performance
traces
graphs
static-dynamic
ui
assembly
compilers
methodology
techtariat

The Future of Mathematics? [video] | Hacker News

october 2019 by nhaliday

https://news.ycombinator.com/item?id=20909404

Kevin Buzzard (the Lean guy)

- general reflection on proof assistants/theorem provers

- Tom Hales's Formal Abstracts project, etc.

- thinks that, of the available theorem provers, Lean is "[the only one currently available that may be capable of formalizing all of mathematics eventually]" (goes into more detail right at the end, e.g., quotient types)
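As a toy illustration of what "formalizing mathematics" means in practice (an illustrative sketch, not taken from the talk; `Nat.add_comm` is a standard Lean 4 library lemma):

```lean
-- A toy formal statement and proof: the proof term is checked by
-- Lean's kernel, rather than by a human referee.
theorem add_comm' (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```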

hn
commentary
discussion
video
talks
presentation
math
formal-methods
expert-experience
msr
frontier
state-of-art
proofs
rigor
education
higher-ed
optimism
prediction
lens
search
meta:research
speculation
exocortex
skunkworks
automation
research
math.NT
big-surf
software
parsimony
cost-benefit
intricacy
correctness
programming
pls
python
functional
haskell
heavyweights
research-program
review
reflection
multi
pdf
slides
oly
experiment
span-cover
git
vcs
teaching
impetus
academia
composition-decomposition
coupling-cohesion
database
trust
types
plt
lifts-projections
induction
critique
beauty
truth
elegance
aesthetics

Making of Byrne’s Euclid - C82: Works of Nicholas Rougeux

october 2019 by nhaliday

https://www.math.ubc.ca/~cass/Euclid/byrne.html

Tufte: https://www.gwern.net/docs/statistics/1990-tufte-envisioninginformation-ch5-bryneseuclid.pdf

https://habr.com/ru/post/452520/

techtariat
reflection
project
summary
design
web
visuo
visual-understanding
math
geometry
worrydream
beauty
books
history
early-modern
classic
the-classics
britain
anglo
writing
technical-writing
gwern
elegance
virtu
:)
notation
the-great-west-whale
explanation
form-design
heavyweights
dataviz
org:junk
org:edu
pdf
essay
art
multi
latex

Leslie Lamport: Thinking Above the Code - YouTube

heavyweights cs distributed systems system-design formal-methods rigor correctness rhetoric contrarianism presentation video detail-architecture engineering programming thinking writing technical-writing concurrency protocol-metadata

august 2019 by nhaliday


big list - Are there proofs that you feel you did not "understand" for a long time? - MathOverflow

nibble q-n-a overflow soft-question big-list math proofs expert-experience heavyweights gowers mathtariat reflection learning intricacy grokkability intuition algebra math.GR motivation math.GN topology synthesis math.CT computation tcs logic iteration-recursion math.CA extrema smoothness span-cover grokkability-clarity

august 2019 by nhaliday


Epigrams in Programming | Computer Science

august 2019 by nhaliday

- Alan Perlis

nibble
quotes
aphorism
list
cs
computation
programming
pls
hi-order-bits
synthesis
lens
data-structures
arrows
algorithms
iteration-recursion
intricacy
strings
types
math
formal-methods
pic
visuo
visual-understanding
systems
state
structure
turing
cost-benefit
lisp
performance
software
language
plt
invariance
ends-means
ai
nitty-gritty
sci-comp
composition-decomposition
tradeoffs
grokkability
assembly
internet
egalitarianism-hierarchy
functional
impetus
roots
path-dependence
heavyweights
grokkability-clarity

Zero-based numbering - Wikipedia

july 2019 by nhaliday

https://www.cs.utexas.edu/users/EWD/transcriptions/EWD08xx/EWD831.html

https://softwareengineering.stackexchange.com/questions/110804/why-are-zero-based-arrays-the-norm

idk about this guy's competence. he seems to want to sensationalize the historical reasons while ignoring or being ignorant of the legitimate mathematical reasons:

http://exple.tive.org/blarg/2013/10/22/citation-needed/

nibble
concept
wiki
reference
programming
pls
c(pp)
systems
plt
roots
explanans
degrees-of-freedom
data-structures
multi
hmm
debate
critique
ideas
techtariat
rant
q-n-a
stackex
compilers
performance
calculation
correctness
mental-math
parsimony
whole-partial-many
tradition
worse-is-better/the-right-thing
mit
business
rhetoric
worrydream
flux-stasis
legacy
elegance
stories
blowhards
idk
org:junk
intricacy
measure
debugging
best-practices
notation
heavyweights
protocol-metadata

The Existential Risk of Math Errors - Gwern.net

july 2019 by nhaliday

How big is this upper bound? Mathematicians have often made errors in proofs. But it’s rarer for ideas to be accepted for a long time and then rejected. But we can divide errors into 2 basic cases corresponding to type I and type II errors:

1. Mistakes where the theorem is still true, but the proof was incorrect (type I)

2. Mistakes where the theorem was false, and the proof was also necessarily incorrect (type II)

Before someone comes up with a final answer, a mathematician may have many levels of intuition in formulating & working on the problem, but we’ll consider the final end-product where the mathematician feels satisfied that he has solved it. Case 1 is perhaps the most common case, with innumerable examples; this is sometimes due to mistakes in the proof that anyone would accept is a mistake, but many of these cases are due to changing standards of proof. For example, when David Hilbert discovered errors in Euclid’s proofs which no one noticed before, the theorems were still true, and the gaps more due to Hilbert being a modern mathematician thinking in terms of formal systems (which of course Euclid did not think in). (David Hilbert himself turns out to be a useful example of the other kind of error: his famous list of 23 problems was accompanied by definite opinions on the outcome of each problem and sometimes timings, several of which were wrong or questionable.) Similarly, early calculus used ‘infinitesimals’ which were sometimes treated as being 0 and sometimes treated as an indefinitely small non-zero number; this was incoherent and strictly speaking, practically all of the calculus results were wrong because they relied on an incoherent concept - but of course the results were some of the greatest mathematical work ever conducted and when later mathematicians put calculus on a more rigorous footing, they immediately re-derived those results (sometimes with important qualifications), and doubtless as modern math evolves other fields have sometimes needed to go back and clean up the foundations and will in the future.

...

Isaac Newton, incidentally, gave two proofs of the same solution to a problem in probability, one via enumeration and the other more abstract; the enumeration was correct, but the other proof totally wrong and this was not noticed for a long time, leading Stigler to remark:

...

TYPE I > TYPE II?

“Lefschetz was a purely intuitive mathematician. It was said of him that he had never given a completely correct proof, but had never made a wrong guess either.”

- Gian-Carlo Rota

Case 2 is disturbing, since it is a case in which we wind up with false beliefs and also false beliefs about our beliefs (we no longer know that we don’t know). Case 2 could lead to extinction.

...

Except, errors do not seem to be evenly & randomly distributed between case 1 and case 2. There seem to be far more case 1s than case 2s, as already mentioned in the early calculus example: far more than 50% of the early calculus results were correct when checked more rigorously. Richard Hamming attributes to Ralph Boas a comment that while editing Mathematical Reviews that “of the new results in the papers reviewed most are true but the corresponding proofs are perhaps half the time plain wrong”.

...

Gian-Carlo Rota gives us an example with Hilbert:

...

Olga labored for three years; it turned out that all mistakes could be corrected without any major changes in the statement of the theorems. There was one exception, a paper Hilbert wrote in his old age, which could not be fixed; it was a purported proof of the continuum hypothesis, you will find it in a volume of the Mathematische Annalen of the early thirties.

...

Leslie Lamport advocates for machine-checked proofs and a more rigorous style of proofs similar to natural deduction, noting a mathematician acquaintance guesses at a broad error rate of 1/3 and that he routinely found mistakes in his own proofs and, worse, believed false conjectures.

[more on these "structured proofs":

https://academia.stackexchange.com/questions/52435/does-anyone-actually-publish-structured-proofs

https://mathoverflow.net/questions/35727/community-experiences-writing-lamports-structured-proofs

]

We can probably add software to that list: early software engineering work found that, dismayingly, bug rates seem to be simply a function of lines of code, and one would expect diseconomies of scale. So one would expect that in going from the ~4,000 lines of code of the Microsoft DOS operating system kernel to the ~50,000,000 lines of code in Windows Server 2003 (with full systems of applications and libraries being even larger: the comprehensive Debian repository in 2007 contained ~323,551,126 lines of code) that the number of active bugs at any time would be… fairly large. Mathematical software is hopefully better, but practitioners still run into issues (eg Durán et al 2014, Fonseca et al 2017) and I don’t know of any research pinning down how buggy key mathematical systems like Mathematica are or how much published mathematics may be erroneous due to bugs. This general problem led to predictions of doom and spurred much research into automated proof-checking, static analysis, and functional languages.

[related:

https://mathoverflow.net/questions/11517/computer-algebra-errors

I don't know any interesting bugs in symbolic algebra packages but I know a true, enlightening and entertaining story about something that looked like a bug but wasn't.

Define sinc x = (sin x)/x.

Someone found the following result in an algebra package: ∫₀^∞ sinc x dx = π/2

They then found the following results:

...

So of course when they got:

∫₀^∞ sinc x · sinc(x/3) · sinc(x/5) ⋯ sinc(x/15) dx = (467807924713440738696537864469 / 935615849440640907310521750000) π

hmm:

Which means that nobody knows Fourier analysis nowdays. Very sad and discouraging story... – fedja Jan 29 '10 at 18:47

--

Because the most popular systems are all commercial, they tend to guard their bug database rather closely -- making them public would seriously cut their sales. For example, for the open source project Sage (which is quite young), you can get a list of all the known bugs from this page. 1582 known issues on Feb.16th 2010 (which includes feature requests, problems with documentation, etc).

That is an order of magnitude less than the commercial systems. And it's not because it is better, it is because it is younger and smaller. It might be better, but until SAGE does a lot of analysis (about 40% of CAS bugs are there) and a fancy user interface (another 40%), it is too hard to compare.

I once ran a graduate course whose core topic was studying the fundamental disconnect between the algebraic nature of CAS and the analytic nature of what it is mostly used for. There are issues of logic -- CASes work more or less in an intensional logic, while most of analysis is stated in a purely extensional fashion. There is no well-defined 'denotational semantics' for expressions-as-functions, which strongly contributes to the deeper bugs in CASes.]

...

Should such widely-believed conjectures as P≠NP or the Riemann hypothesis turn out to be false, then because they are assumed by so many existing proofs, a far larger math holocaust would ensue - and our previous estimates of error rates will turn out to have been substantial underestimates. But it may be a cloud with a silver lining, if it doesn’t come at a time of danger.

https://mathoverflow.net/questions/338607/why-doesnt-mathematics-collapse-down-even-though-humans-quite-often-make-mista

more on formal methods in programming:

https://www.quantamagazine.org/formal-verification-creates-hacker-proof-code-20160920/

https://intelligence.org/2014/03/02/bob-constable/

https://softwareengineering.stackexchange.com/questions/375342/what-are-the-barriers-that-prevent-widespread-adoption-of-formal-methods

Update: measured effort

In the October 2018 issue of Communications of the ACM there is an interesting article about Formally verified software in the real world with some estimates of the effort.

Interestingly (based on OS development for military equipment), it seems that producing formally proved software requires 3.3 times more effort than with traditional engineering techniques. So it's really costly.

On the other hand, it requires 2.3 times less effort to get high security software this way than with traditionally engineered software if you add the effort to make such software certified at a high security level (EAL 7). So if you have high reliability or security requirements there is definitively a business case for going formal.

WHY DON'T PEOPLE USE FORMAL METHODS?: https://www.hillelwayne.com/post/why-dont-people-use-formal-methods/

You can see examples of how all of these look at Let’s Prove Leftpad. HOL4 and Isabelle are good examples of “independent theorem” specs, SPARK and Dafny have “embedded assertion” specs, and Coq and Agda have “dependent type” specs.

If you squint a bit it looks like these three forms of code spec map to the three main domains of automated correctness checking: tests, contracts, and types. This is not a coincidence. Correctness is a spectrum, and formal verification is one extreme of that spectrum. As we reduce the rigour (and effort) of our verification we get simpler and narrower checks, whether that means limiting the explored state space, using weaker types, or pushing verification to the runtime. Any means of total specification then becomes a means of partial specification, and vice versa: many consider Cleanroom a formal verification technique, which primarily works by pushing code review far beyond what’s humanly possible.

...

The question, then: “is 90/95/99% correct significantly cheaper than 100% correct?” The answer is very yes. We all are comfortable saying that a codebase we’ve well-tested and well-typed is mostly correct modulo a few fixes in prod, and we’re even writing more than four lines of code a day. In fact, the vast… [more]
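As a toy sketch of the "embedded assertion" style of spec on the leftpad example above (assumptions: these are plain runtime `assert`s stating the Let's Prove Leftpad postconditions, not one of the actually verified implementations):

```cpp
#include <algorithm>
#include <cassert>
#include <string>

// leftpad(c, n, s): pad s on the left with c until its length is at
// least n. The asserts state the spec as runtime contracts: the output
// length is max(n, |s|), the suffix is s, and the prefix is all padding.
std::string leftpad(char c, std::size_t n, const std::string& s) {
    std::size_t pad = n > s.size() ? n - s.size() : 0;
    std::string out(pad, c);
    out += s;
    // postconditions (the "embedded assertion" style of spec)
    assert(out.size() == std::max(n, s.size()));
    assert(out.compare(out.size() - s.size(), s.size(), s) == 0);
    assert(out.substr(0, pad) == std::string(pad, c));
    return out;
}
```

A verifier like SPARK or Dafny discharges such contracts statically for all inputs; here they are only checked on the inputs that actually run, which is the partial end of the spectrum described above.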

ratty
gwern
analysis
essay
realness
truth
correctness
reason
philosophy
math
proofs
formal-methods
cs
programming
engineering
worse-is-better/the-right-thing
intuition
giants
old-anglo
error
street-fighting
heuristic
zooming
risk
threat-modeling
software
lens
logic
inference
physics
differential
geometry
estimate
distribution
robust
speculation
nonlinearity
cost-benefit
convexity-curvature
measure
scale
trivia
cocktail
history
early-modern
europe
math.CA
rigor
news
org:mag
org:sci
miri-cfar
pdf
thesis
comparison
examples
org:junk
q-n-a
stackex
pragmatic
tradeoffs
cracker-prog
techtariat
invariance
DSL
chart
ecosystem
grokkability
heavyweights
CAS
static-dynamic
lower-bounds
complexity
tcs
open-problems
big-surf
ideas
certificates-recognition
proof-systems
PCP
mediterranean
SDP
meta:prediction
epistemic
questions
guessing
distributed
overflow
nibble
soft-question
track-record
big-list
hmm
frontier
state-of-art
move-fast-(and-break-things)
grokkability-clarity
technical-writing
trust

One week of bugs

may 2019 by nhaliday

If I had to guess, I'd say I probably work around hundreds of bugs in an average week, and thousands in a bad week. It's not unusual for me to run into a hundred new bugs in a single week. But I often get skepticism when I mention that I run into multiple new (to me) bugs per day, and that this is inevitable if we don't change how we write tests. Well, here's a log of one week of bugs, limited to bugs that were new to me that week. After a brief description of the bugs, I'll talk about what we can do to improve the situation. The obvious answer is to spend more effort on testing, but everyone already knows we should do that and no one does it. That doesn't mean it's hopeless, though.

...

Here's where I'm supposed to write an appeal to take testing more seriously and put real effort into it. But we all know that's not going to work. It would take 90k LOC of tests to get Julia to be as well tested as a poorly tested prototype (falsely assuming linear complexity in size). That's two person-years of work, not even including time to debug and fix bugs (which probably brings it closer to four or five years). Who's going to do that? No one. Writing tests is like writing documentation. Everyone already knows you should do it. Telling people they should do it adds zero information.

Given that people aren't going to put any effort into testing, what's the best way to do it?

Property-based testing. Generative testing. Random testing. Concolic Testing (which was done long before the term was coined). Static analysis. Fuzzing. Statistical bug finding. There are lots of options. Some of them are actually the same thing because the terminology we use is inconsistent and buggy. I'm going to arbitrarily pick one to talk about, but they're all worth looking into.
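A minimal hand-rolled version of the property-based idea from the list above, assuming no library (Hypothesis, QuickCheck, and RapidCheck add input shrinking and smarter generators on top):

```cpp
#include <algorithm>
#include <random>
#include <vector>

// Generate many random inputs and check invariants, instead of
// hand-picking examples. Here the "code under test" is std::sort and
// the properties are: the output is sorted and is a permutation of
// the input.
bool sort_properties_hold(unsigned seed, int trials = 200) {
    std::mt19937 rng(seed);
    std::uniform_int_distribution<int> len(0, 64), val(-100, 100);
    for (int t = 0; t < trials; ++t) {
        std::vector<int> v(static_cast<std::size_t>(len(rng)));
        for (int& x : v) x = val(rng);
        std::vector<int> s = v;
        std::sort(s.begin(), s.end());
        if (!std::is_sorted(s.begin(), s.end())) return false;
        if (!std::is_permutation(s.begin(), s.end(), v.begin())) return false;
    }
    return true;
}
```

The payoff is coverage per line of test code: one invariant exercised on hundreds of inputs, rather than one hand-written example per case.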

...

There are a lot of great resources out there, but if you're just getting started, I found this description of types of fuzzers to be one of the most helpful (and simplest) things I've read.

John Regehr has a udacity course on software testing. I haven't worked through it yet (Pablo Torres just pointed to it), but given the quality of Dr. Regehr's writing, I expect the course to be good.

For more on my perspective on testing, there's this.

Everything's broken and nobody's upset: https://www.hanselman.com/blog/EverythingsBrokenAndNobodysUpset.aspx

https://news.ycombinator.com/item?id=4531549

https://hypothesis.works/articles/the-purpose-of-hypothesis/

From the perspective of a user, the purpose of Hypothesis is to make it easier for you to write better tests.

From my perspective as the primary author, that is of course also a purpose of Hypothesis. I write a lot of code, it needs testing, and the idea of trying to do that without Hypothesis has become nearly unthinkable.

But, on a large scale, the true purpose of Hypothesis is to drag the world kicking and screaming into a new and terrifying age of high quality software.

Software is everywhere. We have built a civilization on it, and it’s only getting more prevalent as more services move online and embedded and “internet of things” devices become cheaper and more common.

Software is also terrible. It’s buggy, it’s insecure, and it’s rarely well thought out.

This combination is clearly a recipe for disaster.

The state of software testing is even worse. It’s uncontroversial at this point that you should be testing your code, but it’s a rare codebase whose authors could honestly claim that they feel its testing is sufficient.

Much of the problem here is that it’s too hard to write good tests. Tests take up a vast quantity of development time, but they mostly just laboriously encode exactly the same assumptions and fallacies that the authors had when they wrote the code, so they miss exactly the same bugs that you missed when you wrote the code.
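The property-based idea these excerpts describe can be sketched with nothing but the standard library; Hypothesis adds smarter input generation and automatic shrinking of counterexamples on top of this basic loop. The run-length codec here is just a made-up example, not from the post:

```python
import random

def rle_encode(xs):
    """Run-length encode a list into (value, count) pairs."""
    out = []
    for x in xs:
        if out and out[-1][0] == x:
            out[-1] = (x, out[-1][1] + 1)
        else:
            out.append((x, 1))
    return out

def rle_decode(pairs):
    """Invert rle_encode: expand each (value, count) pair."""
    return [x for x, n in pairs for _ in range(n)]

def check_roundtrip(trials=1000, seed=0):
    """Property: decode(encode(xs)) == xs for many random inputs."""
    rng = random.Random(seed)
    for _ in range(trials):
        xs = [rng.randint(0, 3) for _ in range(rng.randint(0, 20))]
        assert rle_decode(rle_encode(xs)) == xs, f"counterexample: {xs}"
    return True
```

With Hypothesis the same property would be a `@given(st.lists(st.integers()))` test, and any failing input would be shrunk to a minimal counterexample automatically.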

Preventing the Collapse of Civilization [video]: https://news.ycombinator.com/item?id=19945452

- Jonathan Blow

NB: DevGAMM is a game industry conference

- loss of technological knowledge (Antikythera mechanism, aqueducts, etc.)

- hardware driving most gains, not software

- software's actually less robust, often poorly designed and overengineered these days

- *list of bugs he's encountered recently*:

https://youtu.be/pW-SOdj4Kkk?t=1387

- knowledge of trivia becomes [ed.: missing the word "valued" here, I think?] more than general, deep knowledge

- does at least acknowledge value of DRY, reusing code, abstraction saving dev time

techtariat
dan-luu
tech
software
error
list
debugging
linux
github
robust
checking
oss
troll
lol
aphorism
webapp
email
google
facebook
games
julia
pls
compilers
communication
mooc
browser
rust
programming
engineering
random
jargon
formal-methods
expert-experience
prof
c(pp)
course
correctness
hn
commentary
video
presentation
carmack
pragmatic
contrarianism
pessimism
sv
unix
rhetoric
critique
worrydream
hardware
performance
trends
multiplicative
roots
impact
comparison
history
iron-age
the-classics
mediterranean
conquest-empire
gibbon
technology
the-world-is-just-atoms
flux-stasis
increase-decrease
graphics
hmm
idk
systems
os
abstraction
intricacy
worse-is-better/the-right-thing
build-packaging
microsoft
osx
apple
reflection
assembly
things
knowledge
detail-architecture
thick-thin
trivia
info-dynamics
caching
frameworks
generalization
systematic-ad-hoc
universalism-particularism
analytical-holistic
structure
tainter
libraries
tradeoffs
prepping
threat-modeling
network-structure
writing
risk
local-glob
...

may 2019 by nhaliday

Research Debt

techtariat acmtariat rhetoric contrarianism research debt academia communication writing science meta:science better-explained worrydream coordination bret-victor michael-nielsen tech links thurston overflow discussion org:bleg nibble 🔬 🖥 🎓 visual-understanding the-trenches impact meta:research metameta big-picture hi-order-bits info-dynamics elegance meta:reading technical-writing heavyweights org:popup

march 2017 by nhaliday

Peter Norvig, the meaning of polynomials, debugging as psychotherapy | Quomodocumque

march 2017 by nhaliday

He briefly showed a demo where, given values of a polynomial, a machine can put together a few lines of code that successfully computes the polynomial. But the code looks weird to a human eye. To compute some quadratic, it nests for-loops and adds things up in a funny way that ends up giving the right output. So has it really ”learned” the polynomial? I think in computer science, you typically feel you’ve learned a function if you can accurately predict its value on a given input. For an algebraist like me, a function determines but isn’t determined by the values it takes; to me, there’s something about that quadratic polynomial the machine has failed to grasp. I don’t think there’s a right or wrong answer here, just a cultural difference to be aware of. Relevant: Norvig’s description of “the two cultures” at the end of this long post on natural language processing (which is interesting all the way through!)

mathtariat
org:bleg
nibble
tech
ai
talks
summary
philosophy
lens
comparison
math
cs
tcs
polynomials
nlp
debugging
psychology
cog-psych
complex-systems
deep-learning
analogy
legibility
interpretability
composition-decomposition
coupling-cohesion
apollonian-dionysian
heavyweights
march 2017 by nhaliday

reference request - Essays and thoughts on mathematics - MathOverflow

q-n-a overflow nibble list big-list math writing essay reflection soft-question links meta:math philosophy big-picture thurston gowers meaningness virtu metameta wisdom p:null heavyweights technical-writing communication

february 2017 by nhaliday

An Introduction to Measure Theory - Terence Tao

books draft unit math gowers mathtariat measure math.CA probability yoga problem-solving pdf tricki local-global counterexample visual-understanding lifts-projections oscillation limits estimate quantifiers-sums synthesis coarse-fine p:someday s:** heavyweights

february 2017 by nhaliday

Mikhail Leonidovich Gromov - Wikipedia

january 2017 by nhaliday

Gromov's style of geometry often features a "coarse" or "soft" viewpoint, analyzing asymptotic or large-scale properties.

Gromov is also interested in mathematical biology,[11] the structure of the brain and the thinking process, and the way scientific ideas evolve.[8]

math
people
russia
differential
geometry
topology
math.GR
wiki
structure
meta:math
meta:science
interdisciplinary
bio
neuro
magnitude
limits
science
nibble
coarse-fine
wild-ideas
convergence
info-dynamics
ideas
heavyweights
january 2017 by nhaliday

In Computers We Trust? | Quanta Magazine

january 2017 by nhaliday

As math grows ever more complex, will computers reign?

Shalosh B. Ekhad is a computer. Or, rather, it is any of a rotating cast of computers used by the mathematician Doron Zeilberger, from the Dell in his New Jersey office to a supercomputer whose services he occasionally enlists in Austria. The name — Hebrew for “three B one” — refers to the AT&T 3B1, Ekhad’s earliest incarnation.

“The soul is the software,” said Zeilberger, who writes his own code using a popular math programming tool called Maple.

news
org:mag
org:sci
popsci
math
culture
academia
automation
formal-methods
ai
debate
interdisciplinary
rigor
proofs
nibble
org:inst
calculation
bare-hands
heavyweights
contrarianism
computation
correctness
oss
replication
logic
frontier
state-of-art
technical-writing
trust
january 2017 by nhaliday

soft question - Thinking and Explaining - MathOverflow

january 2017 by nhaliday

- good question from Bill Thurston

- great answers by Terry Tao, fedja, Minhyong Kim, gowers, etc.

Terry Tao:

- symmetry as blurring/vibrating/wobbling, scale invariance

- anthropomorphization, adversarial perspective for estimates/inequalities/quantifiers, spending/economy

fedja walks through his thought-process from another answer

Minhyong Kim: anthropology of mathematical philosophizing

Per Vognsen: normality as isotropy

comment: conjugate subgroup gHg^-1 ~ "H but somewhere else in G"
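The conjugate-subgroup slogan can be checked concretely in a few lines (a sketch in S_3, with permutations as tuples; the specific subgroup H and element g are just illustrative choices):

```python
def compose(p, q):
    """(p ∘ q)(i) = p[q[i]]: apply q first, then p."""
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    """Inverse permutation: if p sends i to p[i], inv sends p[i] back to i."""
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

e = (0, 1, 2)        # identity in S_3
H = {e, (1, 0, 2)}   # subgroup generated by the transposition (0 1)
g = (1, 2, 0)        # a 3-cycle in G = S_3

# gHg^-1: "H, but somewhere else in G"
conj = {compose(compose(g, h), inverse(g)) for h in H}
assert conj == {e, (0, 2, 1)}  # identity plus the transposition (1 2)
assert len(conj) == len(H) and conj != H
```

The conjugate has the same structure as H (identity plus one transposition), but the transposition now swaps g(0) = 1 and g(1) = 2 instead of 0 and 1.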

gowers: hidden things in basic mathematics/arithmetic

comment by Ryan Budney: x sin(x) via x -> (x, sin(x)), (x, y) -> xy

I kinda get what he's talking about but needed to use Mathematica to get the initial visualization down.

To remind myself later:

- xy can be easily visualized by juxtaposing the two parabolae x^2 and -x^2 diagonally

- x sin(x) can be visualized along that surface by moving your finger along the line (x, 0) but adding some oscillations in y direction according to sin(x)
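The two reminders above can be sanity-checked numerically, taking f(x, y) = xy for the saddle surface and g(x) = (x, sin x) for the path (a quick sketch of my own, not from the thread):

```python
import math

def f(x, y):
    """The saddle surface z = xy."""
    return x * y

def g(x):
    """The path (x, sin x): move along (x, 0), oscillating in y by sin(x)."""
    return (x, math.sin(x))

def close(a, b, tol=1e-9):
    return abs(a - b) < tol

for x in [0.0, 0.5, 1.0, 2.0, -3.0]:
    # x*sin(x) is the surface z = xy evaluated along the path
    assert close(f(*g(x)), x * math.sin(x))
    for y in [0.0, 1.5, -2.0]:
        # xy as a difference of the two diagonal parabolae:
        # xy = ((x+y)^2 - (x-y)^2) / 4
        assert close(f(x, y), ((x + y) ** 2 - (x - y) ** 2) / 4)
```

The second assertion is the algebraic form of "juxtaposing the two parabolae x^2 and -x^2 diagonally": xy is literally a difference of squares along the rotated axes x+y and x-y.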

q-n-a
soft-question
big-list
intuition
communication
teaching
math
thinking
writing
thurston
lens
overflow
synthesis
hi-order-bits
👳
insight
meta:math
clarity
nibble
giants
cartoons
gowers
mathtariat
better-explained
stories
the-trenches
problem-solving
homogeneity
symmetry
fedja
examples
philosophy
big-picture
vague
isotropy
reflection
spatial
ground-up
visual-understanding
polynomials
dimensionality
math.GR
worrydream
scholar
🎓
neurons
metabuch
yoga
retrofit
mental-math
metameta
wisdom
wordlessness
oscillation
operational
adversarial
quantifiers-sums
exposition
explanation
tricki
concrete
s:***
manifolds
invariance
dynamical
info-dynamics
cool
direction
elegance
heavyweights
analysis
guessing
grokkability-clarity
technical-writing
january 2017 by nhaliday

On “local” and “global” errors in mathematical papers, and how to detect them

november 2016 by nhaliday

local vs. global errors in technical papers

old:

https://plus.google.com/+TerenceTao27/posts/78aoEHoPhpS

gowers
social
metabuch
thinking
problem-solving
math
advice
reflection
scholar
🎓
expert
mathtariat
lens
local-global
meta:math
cartoons
learning
the-trenches
meta:research
s:**
info-dynamics
studying
expert-experience
meta:reading
multi
heavyweights
november 2016 by nhaliday

On “compilation errors” in mathematical reading, and how to resolve them

november 2016 by nhaliday

compilation errors in academic papers

old:

[Google Buzz closed down for good recently, so I will be reprinting a small n...

https://plus.google.com/u/0/+TerenceTao27/posts/TGjjJPUdJjk

gowers
social
advice
reflection
math
thinking
problem-solving
metabuch
expert
scholar
🎓
mathtariat
lens
meta:math
cartoons
learning
lifts-projections
the-trenches
meta:research
s:**
info-dynamics
studying
expert-experience
meta:reading
analogy
compilers
multi
heavyweights
zooming
november 2016 by nhaliday

The capacity to be alone | Quomodocumque

september 2016 by nhaliday

In fact, most of these comrades who I gauged to be more brilliant than I have gone on to become distinguished mathematicians. Still, from the perspective of thirty or thirty-five years, I can state that their imprint upon the mathematics of our time has not been very profound. They’ve done all things, often beautiful things in a context that was already set out before them, which they had no inclination to disturb. Without being aware of it, they’ve remained prisoners of those invisible and despotic circles which delimit the universe of a certain milieu in a given era. To have broken these bounds they would have to rediscover in themselves that capability which was their birthright, as it was mine: The capacity to be alone.

math
reflection
quotes
scholar
mathtariat
lens
optimate
serene
individualism-collectivism
the-monster
humility
the-trenches
virtu
courage
emotion
extra-introversion
allodium
ascetic
heavyweights
psychiatry
september 2016 by nhaliday

On proof and progress in mathematics

pdf thurston math writing thinking synthesis papers essay unit nibble intuition worrydream communication proofs the-trenches reflection geometry meta:math better-explained stories virtu 🎓 scholar metameta wisdom narrative p:whenever inference cs programming rigor formal-methods meta:research info-dynamics elegance technical-writing heavyweights guessing trust

august 2016 by nhaliday

ho.history overview - Does any research mathematics involve solving functional equations? - MathOverflow

math reflection expert characterization tidbits q-n-a overflow oly mathtariat gowers motivation nibble expert-experience rec-math heavyweights explanation roots explanans properties

july 2016 by nhaliday

notation - Why does mathematical convention deal so ineptly with multisets? - Mathematics Stack Exchange

thinking language math history q-n-a worrydream thurston overflow soft-question notation meta:math intricacy conceptual-vocab nibble elegance worse-is-better/the-right-thing heavyweights form-design

july 2016 by nhaliday

soft question - Famous mathematical quotes - MathOverflow

math aphorism reflection list quotes q-n-a overflow soft-question big-list mathtariat stories lens nibble giants von-neumann darwinian old-anglo poetry letters troll lol creative algebra geometry linear-algebra thick-thin moments high-variance elegance heavyweights

june 2016 by nhaliday

soft question - How do you not forget old math? - MathOverflow

june 2016 by nhaliday

Terry Tao:

I find that blogging about material that I would otherwise forget eventually is extremely valuable in this regard. (I end up consulting my own blog posts on a regular basis.) EDIT: and now I remember I already wrote on this topic: terrytao.wordpress.com/career-advice/write-down-what-youve-done

fedja:

The only way to cope with this loss of memory I know is to do some reading on systematic basis. Of course, if you read one paper in algebraic geometry (or whatever else) a month (or even two months), you may not remember the exact content of all of them by the end of the year but, since all mathematicians in one field use pretty much the same tricks and draw from pretty much the same general knowledge, you'll keep the core things in your memory no matter what you read (provided it is not patented junk, of course) and this is about as much as you can hope for.

Relating abstract things to "real life stuff" (and vice versa) is automatic when you work as a mathematician. For me, the proof of the Chacon-Ornstein ergodic theorem is just a sandpile moving over a pit with the sand falling down after every shift. I often tell my students that every individual term in the sequence doesn't matter at all for the limit but somehow together they determine it like no individual human is of any real importance while together they keep this civilization running, etc. No special effort is needed here and, moreover, if the analogy is not natural but contrived, it'll not be helpful or memorable. The standard mnemonic techniques are pretty useless in math. IMHO (the famous "foil" rule for the multiplication of sums of two terms is inferior to the natural "pair each term in the first sum with each term in the second sum" and to the picture of a rectangle tiled with smaller rectangles, though, of course, the foil rule sounds way more sexy).

One thing that I don't think the other respondents have emphasized enough is that you should work on prioritizing what you choose to study and remember.

Timothy Chow:

As others have said, forgetting lots of stuff is inevitable. But there are ways you can mitigate the damage of this information loss. I find that a useful technique is to try to organize your knowledge hierarchically. Start by coming up with a big picture, and make sure you understand and remember that picture thoroughly. Then drill down to the next level of detail, and work on remembering that. For example, if I were trying to remember everything in a particular book, I might start by memorizing the table of contents, and then I'd work on remembering the theorem statements, and then finally the proofs. (Don't take this illustration too literally; it's better to come up with your own conceptual hierarchy than to slavishly follow the formal hierarchy of a published text. But I do think that a hierarchical approach is valuable.)

Organizing your knowledge like this helps you prioritize. You can then consciously decide that certain large swaths of knowledge are not worth your time at the moment, and just keep a "stub" in memory to remind you that that body of knowledge exists, should you ever need to dive into it. In areas of higher priority, you can plunge more deeply. By making sure you thoroughly internalize the top levels of the hierarchy, you reduce the risk of losing sight of entire areas of important knowledge. Generally it's less catastrophic to forget the details than to forget about a whole region of the big picture, because you can often revisit the details as long as you know what details you need to dig up. (This is fortunate since the details are the most memory-intensive.)

Having a hierarchy also helps you accrue new knowledge. Often when you encounter something new, you can relate it to something you already know, and file it in the same branch of your mental tree.

thinking
math
growth
advice
expert
q-n-a
🎓
long-term
tradeoffs
scholar
overflow
soft-question
gowers
mathtariat
ground-up
hi-order-bits
intuition
synthesis
visual-understanding
decision-making
scholar-pack
cartoons
lens
big-picture
ergodic
nibble
zooming
trees
fedja
reflection
retention
meta:research
wisdom
skeleton
practice
prioritizing
concrete
s:***
info-dynamics
knowledge
studying
the-trenches
chart
expert-experience
quixotic
elegance
heavyweights
june 2016 by nhaliday

Answer to What is it like to understand advanced mathematics? - Quora

may 2016 by nhaliday

thinking like a mathematician

some of the points:

- small # of tricks (echoes Rota)

- web of concepts and modularization (zooming out) allow quick reasoning

- comfort w/ ambiguity and lack of understanding, study high-dimensional objects via projections

- above is essential for research (and often what distinguishes research mathematicians from people who were good at math, or majored in math)

math
reflection
thinking
intuition
expert
synthesis
wormholes
insight
q-n-a
🎓
metabuch
tricks
scholar
problem-solving
aphorism
instinct
heuristic
lens
qra
soft-question
curiosity
meta:math
ground-up
cartoons
analytical-holistic
lifts-projections
hi-order-bits
scholar-pack
nibble
the-trenches
innovation
novelty
zooming
tricki
virtu
humility
metameta
wisdom
abstraction
skeleton
s:***
knowledge
expert-experience
elegance
judgement
advanced
heavyweights
guessing
may 2016 by nhaliday

Reflections on the recent solution of the cap-set problem I | Gowers's Weblog

may 2016 by nhaliday

As regular readers of this blog will know, I have a strong interest in the question of where mathematical ideas come from, and a strong conviction that they always result from a fairly systematic process — and that the opposite impression, that some ideas are incredible bolts from the blue that require “genius” or “sudden inspiration” to find, is an illusion that results from the way mathematicians present their proofs after they have discovered them.

math
research
academia
gowers
hmm
mathtariat
org:bleg
nibble
big-surf
algebraic-complexity
math.CO
questions
heavyweights
exposition
technical-writing
roots
problem-solving
polynomials
linear-algebra
motivation
guessing
may 2016 by nhaliday

The man who knew partition asymptotics | Annoying Precision

may 2016 by nhaliday

nice coverage of saddle point method
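For reference (a standard fact, stated here from memory rather than taken from the post): applying the saddle-point method to the partition generating function gives the Hardy–Ramanujan leading-order asymptotic

```latex
\sum_{n \ge 0} p(n)\, q^n = \prod_{k \ge 1} \frac{1}{1 - q^k},
\qquad
p(n) \sim \frac{1}{4n\sqrt{3}}\, e^{\pi \sqrt{2n/3}}
\quad (n \to \infty).
```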

yoga
acm
math.CA
math
exposition
tidbits
mathtariat
oly
limits
math.CO
alg-combo
film
giants
org:bleg
nibble
stirling
clever-rats
AMT
heavyweights
may 2016 by nhaliday

The Mathematician Ken Ono’s Life Inspired By Ramanujan | Quanta Magazine

may 2016 by nhaliday

This intellectual crucible produced the desired results — Ono studied mathematics and launched a promising academic career — but at great emotional cost. As a teenager, Ono became so desperate to escape his parents’ expectations that he dropped out of high school. He later earned admission to the University of Chicago but had an apathetic attitude toward his studies, preferring to party with his fraternity brothers. He eventually discovered a genuine enthusiasm for mathematics, became a professor, and started a family, but fear of failure still weighed so heavily on Ono that he attempted suicide while attending an academic conference. Only after he joined the Institute for Advanced Study himself did Ono begin to make peace with his upbringing.

profile
math
people
career
popsci
hmm
news
org:mag
org:sci
giants
math.NT
nibble
org:inst
heavyweights
may 2016 by nhaliday

Polymath 10 Emergency Post 5: The Erdos-Szemeredi Sunflower Conjecture is Now Proven. | Combinatorics and more

math gowers announcement algebraic-complexity tcs math.CO tcstariat mathtariat research org:mat polynomials additive-combo org:bleg nibble big-surf questions heavyweights

may 2016 by nhaliday

Making invisible understanding visible

may 2016 by nhaliday

I like the example of cyclic subgroups

visualization
worrydream
thinking
math
yoga
thurston
intuition
algebra
insight
👳
wormholes
visual-understanding
michael-nielsen
water
exocortex
2016
fourier
cartoons
tcstariat
techtariat
clarity
vague
org:bleg
nibble
better-explained
math.GR
bounded-cognition
metameta
wordlessness
meta:math
s:***
composition-decomposition
dynamical
info-dynamics
let-me-see
elegance
heavyweights
guessing
form-design
grokkability-clarity
skunkworks
may 2016 by nhaliday

On writing | What's new

april 2016 by nhaliday

also: on reading papers

writing
papers
academia
math
tcs
advice
reflection
thinking
expert
gowers
long-term
🎓
checklists
grad-school
scholar
mathtariat
learning
nibble
org:bleg
meta:research
info-foraging
studying
p:whenever
s:*
info-dynamics
expert-experience
meta:reading
technical-writing
heavyweights
april 2016 by nhaliday

Work hard | What's new

april 2016 by nhaliday

Similarly, to be a “professional” mathematician, you need to not only work on your research problem(s), but you should also constantly be working on learning new proofs and techniques, going over important proofs and papers time and again until you’ve mastered them. Don’t stay in your mathematical comfort zone, but expand your horizon by also reading (relevant) papers that are not at the heart of your own field. You should go to seminars to stay current and to challenge yourself to understand math in real time. And so on. All of these elements have to find their way into your daily work routine, because if you neglect any of them it will ultimately affect your research output negatively.

- from the comments

advice
academia
math
reflection
career
expert
gowers
long-term
🎓
aphorism
grad-school
phd
scholar
mathtariat
discipline
curiosity
🦉
nibble
org:bleg
the-trenches
meta:research
gtd
stamina
vitality
s:**
info-dynamics
expert-experience
heavyweights
april 2016 by nhaliday

Lean

january 2016 by nhaliday

https://lean-forward.github.io

The goal of the Lean Forward project is to collaborate with number theorists to formally prove theorems about research mathematics and to address the main usability issues hampering the adoption of proof assistants in mathematical circles. The theorems will be selected together with our collaborators to guide the development of formal libraries and verified tools.

mostly happening in the Netherlands

https://formalabstracts.github.io

A Review of the Lean Theorem Prover: https://jiggerwit.wordpress.com/2018/09/18/a-review-of-the-lean-theorem-prover/

- Thomas Hales

seems like Coq might be a better starter if I ever try to get into proof assistants/theorem provers

edit: on second thought this actually seems like a wash for beginners

An Argument for Controlled Natural Languages in Mathematics: https://jiggerwit.wordpress.com/2019/06/20/an-argument-for-controlled-natural-languages-in-mathematics/

By controlled natural language for mathematics (CNL), we mean an artificial language for the communication of mathematics that is (1) designed in a deliberate and explicit way with precise computer-readable syntax and semantics, (2) based on a single natural language (such as Chinese, Spanish, or English), and (3) broadly understood at least in an intuitive way by mathematically literate speakers of the natural language.

The definition of controlled natural language is intended to exclude invented languages such as Esperanto and Lojban that are not based on a single natural language. Programming languages are meant to be excluded, but a case might be made for TeX as the first broadly adopted controlled natural language for mathematics.

Perhaps it is best to start with an example. Here is a beautifully crafted CNL text created by Peter Koepke and Steffen Frerix. It reproduces a theorem and proof in Rudin’s Principles of mathematical analysis almost word for word. Their automated proof system is able to read and verify the proof.

https://github.com/Naproche/Naproche-SAD
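For a sense of what the formal side looks like, here is a minimal sketch of a Lean proof (Lean 4 syntax; `Nat.add_comm` is assumed to be the standard-library lemma of that name):

```lean
-- A toy example of the kind of statement a proof assistant verifies:
-- commutativity of addition on the natural numbers, discharged by
-- appealing to the library lemma Nat.add_comm.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

The contrast with the CNL approach above is the point: Lean proofs are precise and machine-checked but read nothing like Rudin, whereas a CNL aims to stay close to textbook prose while remaining computer-readable.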

research
math
formal-methods
msr
multi
homepage
research-program
skunkworks
math.NT
academia
ux
CAS
mathtariat
expert-experience
cost-benefit
nitty-gritty
review
critique
rant
types
learning
intricacy
functional
performance
c(pp)
ocaml-sml
comparison
ecosystem
DSL
tradeoffs
composition-decomposition
interdisciplinary
europe
germanic
grokkability
nlp
language
heavyweights
inference
rigor
automata-languages
repo
software
tools
syntax
frontier
state-of-art
pls
grokkability-clarity
technical-writing
database
lifts-projections
january 2016 by nhaliday

The Setup / Russ Cox

july 2014 by nhaliday

I swear by the small Apple keyboard (in stores they have one that size with a USB cable too) and the Evoluent mouse.

...

I run acme full screen as my day to day work environment. It serves the role of editor, terminal, and window system. It's hard to get a feel for it without using it, but this video helps a little.

Rob Pike's sam editor deserves special mention too. From a UI standpoint, it's a graphical version of ed, which you either love or hate, but it does two things better than any other editor I know. First, it is a true multi-file editor. I have used it to edit thousands of files at a time, interactively. Second, and even more important, it works insanely well over low-bandwidth, high-latency connections. I can run sam in Boston to edit files in Sydney over ssh connections where the round trip time would make vi or emacs unusable. Sam runs as two halves: the UI half runs locally and knows about the sections of the file that are on or near the screen, the back end half runs near the files, and the two halves communicate using a well-engineered custom protocol. The original target environment was 1200 bps modem lines in the early 1980s, so it's a little surprising how relevant the design remains, but in fact, it's the same basic design used by any significant JavaScript application on the web today. Finally, sam is the editor of choice for both Ken Thompson and Bjarne Stroustrup. If you can satisfy both of them, you're doing something right.
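The two-halves design Cox describes can be sketched roughly as follows: the local front end only ever requests the window of lines near the screen, so traffic over the slow link is proportional to the viewport, not the file. This is a toy illustration of that split, not sam's actual protocol; all names here are made up.

```python
# Toy sketch of a sam-like editor split (hypothetical, not sam's protocol):
# the back end holds the authoritative buffer near the files; the front end
# runs locally and fetches only the lines on or near the screen.

class Backend:
    """Runs near the files; owns the full buffer."""
    def __init__(self, text: str):
        self.lines = text.splitlines()

    def window(self, start: int, count: int) -> list[str]:
        # Serve just the requested slice over the (notional) wire.
        return self.lines[start:start + count]

class FrontEnd:
    """Runs locally; caches only what is on or near the screen."""
    def __init__(self, backend: Backend, viewport: int = 3):
        self.backend = backend
        self.viewport = viewport
        self.cache: dict[int, str] = {}

    def show(self, top: int) -> list[str]:
        # Fetch and cache the visible window; nothing else crosses the link.
        visible = self.backend.window(top, self.viewport)
        for i, line in enumerate(visible):
            self.cache[top + i] = line
        return visible

backend = Backend("\n".join(f"line {i}" for i in range(10_000)))
ui = FrontEnd(backend)
ui.show(5000)  # only 3 lines cross the "wire" for a 10,000-line file
```

The same shape (thin stateful client, authoritative server, chatty-but-small protocol) is what Cox means by the comparison to modern JavaScript applications.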

...

I use Unison to sync files between my various computers. Dropbox seems to be the hot new thing, but I like that Unison doesn't ever store my files on someone else's computers.

...

I want to be working on my home desktop, realize what time it is, run out the door to catch my train, open my laptop on the train, continue right where I left off, close the laptop, hop off the train, sit down at work, and have all my state sitting there on the monitor on my desk, all without even thinking about it.

programming
hardware
plan9
rsc
software
recommendations
techtariat
devtools
worse-is-better/the-right-thing
nostalgia
summer-2014
interview
ergo
osx
linux
desktop
consumerism
people
editors
tools
list
google
cloud
os
profile
summary
c(pp)
networking
performance
distributed
config
cracker-prog
heavyweights
unix
workflow
client-server
july 2014 by nhaliday
