**nhaliday : open-problems**
45

Products I Wish Existed | Hacker News

6 weeks ago by nhaliday

- Elad Gil

hn
commentary
techtariat
list
top-n
barons
questions
open-problems
impact
hi-order-bits
frontier
quixotic
business
startups
venture
sv
tech
working-stiff
social
media
internet
communication
age-generation
flux-stasis
games
vr
github
twitter
speculation
prediction
saas
devtools
security
crime
intel
atmosphere
environment
climate-change
energy-resources
nuclear
propaganda
marketing
government
military
software
machine-learning
computer-vision
applications
logistics
corporation
biotech
fertility
sex
duplication
longevity
aging
enhancement
medicine
hard-tech
business-models
6 weeks ago by nhaliday

IMO Grand Challenge | IMO Grand Challenge for Artificial Intelligence

10 weeks ago by nhaliday

this seems very ambitious. but at least the timeline is left unstated (which is realistic)

oly
math
rec-math
research
research-program
homepage
ai
machine-learning
skunkworks
wild-ideas
questions
open-problems
problem-solving
formal-methods
interdisciplinary
msr
quixotic
frontier
10 weeks ago by nhaliday

The Existential Risk of Math Errors - Gwern.net

july 2019 by nhaliday

How big is this upper bound? Mathematicians have often made errors in proofs. But it’s rarer for ideas to be accepted for a long time and then rejected. But we can divide errors into 2 basic cases corresponding to type I and type II errors:

1. Mistakes where the theorem is still true, but the proof was incorrect (type I)

2. Mistakes where the theorem was false, and the proof was also necessarily incorrect (type II)

Before someone comes up with a final answer, a mathematician may have many levels of intuition in formulating & working on the problem, but we’ll consider the final end-product where the mathematician feels satisfied that he has solved it. Case 1 is perhaps the most common case, with innumerable examples; this is sometimes due to mistakes in the proof that anyone would accept as a mistake, but many of these cases are due to changing standards of proof. For example, when David Hilbert discovered errors in Euclid’s proofs which no one noticed before, the theorems were still true, and the gaps were more due to Hilbert being a modern mathematician thinking in terms of formal systems (which of course Euclid did not think in). (David Hilbert himself turns out to be a useful example of the other kind of error: his famous list of 23 problems was accompanied by definite opinions on the outcome of each problem and sometimes timings, several of which were wrong or questionable.) Similarly, early calculus used ‘infinitesimals’ which were sometimes treated as being 0 and sometimes treated as an indefinitely small non-zero number; this was incoherent and strictly speaking, practically all of the calculus results were wrong because they relied on an incoherent concept - but of course the results were some of the greatest mathematical work ever conducted, and when later mathematicians put calculus on a more rigorous footing, they immediately re-derived those results (sometimes with important qualifications), and doubtless as modern math evolves other fields have sometimes needed to go back and clean up the foundations and will in the future.

...

Isaac Newton, incidentally, gave two proofs of the same solution to a problem in probability, one via enumeration and the other more abstract; the enumeration was correct, but the other proof totally wrong and this was not noticed for a long time, leading Stigler to remark:

...

TYPE I > TYPE II?

“Lefschetz was a purely intuitive mathematician. It was said of him that he had never given a completely correct proof, but had never made a wrong guess either.”

- Gian-Carlo Rota

Case 2 is disturbing, since it is a case in which we wind up with false beliefs and also false beliefs about our beliefs (we no longer know that we don’t know). Case 2 could lead to extinction.

...

Except, errors do not seem to be evenly & randomly distributed between case 1 and case 2. There seem to be far more case 1s than case 2s, as already mentioned in the early calculus example: far more than 50% of the early calculus results were correct when checked more rigorously. Richard Hamming attributes to Ralph Boas a comment that, while editing Mathematical Reviews, “of the new results in the papers reviewed most are true but the corresponding proofs are perhaps half the time plain wrong”.

...

Gian-Carlo Rota gives us an example with Hilbert:

...

Olga labored for three years; it turned out that all mistakes could be corrected without any major changes in the statement of the theorems. There was one exception, a paper Hilbert wrote in his old age, which could not be fixed; it was a purported proof of the continuum hypothesis, you will find it in a volume of the Mathematische Annalen of the early thirties.

...

Leslie Lamport advocates for machine-checked proofs and a more rigorous style of proofs similar to natural deduction, noting a mathematician acquaintance guesses at a broad error rate of 1/3 and that he routinely found mistakes in his own proofs and, worse, believed false conjectures.

[more on these "structured proofs":

https://academia.stackexchange.com/questions/52435/does-anyone-actually-publish-structured-proofs

https://mathoverflow.net/questions/35727/community-experiences-writing-lamports-structured-proofs

]
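A toy illustration of the machine-checked, structured style Lamport advocates, sketched in Lean 4 (the theorem and step decomposition are my own example, not from the essay): every `calc` step must carry its own justification, so a plausible-but-unjustified leap is rejected by the checker rather than waved through by a referee.

```lean
-- Each step below states an intermediate equality and its proof;
-- deleting any justification makes the whole proof fail to check.
theorem swap_middle (a b c : Nat) : a + (b + c) = b + (a + c) :=
  calc
    a + (b + c) = (a + b) + c := (Nat.add_assoc a b c).symm
    _ = (b + a) + c := by rw [Nat.add_comm a b]
    _ = b + (a + c) := Nat.add_assoc b a c
```

This mirrors Lamport's numbered-hierarchy proofs: the structure forces each gap to be either filled or flagged.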

We can probably add software to that list: early software engineering work found that, dismayingly, bug rates seem to be simply a function of lines of code, and one would expect diseconomies of scale. So one would expect that in going from the ~4,000 lines of code of the Microsoft DOS operating system kernel to the ~50,000,000 lines of code in Windows Server 2003 (with full systems of applications and libraries being even larger: the comprehensive Debian repository in 2007 contained ~323,551,126 lines of code) that the number of active bugs at any time would be… fairly large. Mathematical software is hopefully better, but practitioners still run into issues (eg Durán et al 2014, Fonseca et al 2017) and I don’t know of any research pinning down how buggy key mathematical systems like Mathematica are or how much published mathematics may be erroneous due to bugs. This general problem led to predictions of doom and spurred much research into automated proof-checking, static analysis, and functional languages.
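To make the scale concrete, a back-of-the-envelope sketch using the LOC figures quoted above; the defect densities are my assumption (industry folk estimates commonly run around 1-25 defects per KLOC), not numbers from the essay.

```python
# Naive linear model: bugs scale with lines of code, per the early
# software-engineering result cited above.
# ASSUMPTION: densities of 1 and 25 defects/KLOC are illustrative
# folk estimates, not figures from the essay.
loc = {
    "MS-DOS kernel": 4_000,
    "Windows Server 2003": 50_000_000,
    "Debian repository (2007)": 323_551_126,
}

def expected_bugs(lines: int, defects_per_kloc: float) -> int:
    """Expected bug count under a flat defects-per-KLOC rate."""
    return int(lines / 1000 * defects_per_kloc)

for name, n in loc.items():
    low, high = expected_bugs(n, 1), expected_bugs(n, 25)
    print(f"{name}: ~{low:,} to ~{high:,} expected bugs")
```

Even at the optimistic end of the assumed range, the larger codebases carry hundreds of thousands of expected defects, which is the point of the passage.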

[related:

https://mathoverflow.net/questions/11517/computer-algebra-errors

I don't know any interesting bugs in symbolic algebra packages but I know a true, enlightening and entertaining story about something that looked like a bug but wasn't.

Define sinc(x) = (sin x)/x.

Someone found the following result in an algebra package: ∫_0^∞ sinc(x) dx = π/2

They then found the following results:

...

So of course when they got:

∫_0^∞ sinc(x) sinc(x/3) sinc(x/5) ⋯ sinc(x/15) dx = (467807924713440738696537864469/935615849440640907310521750000) π

hmm:

Which means that nobody knows Fourier analysis nowadays. Very sad and discouraging story... – fedja Jan 29 '10 at 18:47

--

Because the most popular systems are all commercial, they tend to guard their bug database rather closely -- making them public would seriously cut their sales. For example, for the open source project Sage (which is quite young), you can get a list of all the known bugs from this page. 1582 known issues on Feb.16th 2010 (which includes feature requests, problems with documentation, etc).

That is an order of magnitude less than the commercial systems. And it's not because it is better, it is because it is younger and smaller. It might be better, but until SAGE does a lot of analysis (about 40% of CAS bugs are there) and a fancy user interface (another 40%), it is too hard to compare.

I once ran a graduate course whose core topic was studying the fundamental disconnect between the algebraic nature of CAS and the analytic nature of what it is mostly used for. There are issues of logic -- CASes work more or less in an intensional logic, while most of analysis is stated in a purely extensional fashion. There is no well-defined 'denotational semantics' for expressions-as-functions, which strongly contributes to the deeper bugs in CASes.]
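The base identity in the story above can be checked symbolically (assuming SymPy is available; the Borwein-style product integrals, which stay at π/2 until the sinc(x/15) factor enters, are too slow to verify this way, so only the first identity is checked here):

```python
# Symbolic check of the base sinc integral from the story above.
import sympy as sp

x = sp.symbols("x", positive=True)
base = sp.integrate(sp.sin(x) / x, (x, 0, sp.oo))
print(base)  # pi/2
```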

...

Should such widely-believed conjectures as P≠NP or the Riemann hypothesis turn out to be false, then because they are assumed by so many existing proofs, a far larger math holocaust would ensue - and our previous estimates of error rates will turn out to have been substantial underestimates. But it may be a cloud with a silver lining, if it doesn’t come at a time of danger.

https://mathoverflow.net/questions/338607/why-doesnt-mathematics-collapse-down-even-though-humans-quite-often-make-mista

more on formal methods in programming:

https://www.quantamagazine.org/formal-verification-creates-hacker-proof-code-20160920/

https://intelligence.org/2014/03/02/bob-constable/

https://softwareengineering.stackexchange.com/questions/375342/what-are-the-barriers-that-prevent-widespread-adoption-of-formal-methods

Update: measured effort

In the October 2018 issue of Communications of the ACM there is an interesting article about Formally verified software in the real world with some estimates of the effort.

Interestingly (based on OS development for military equipment), it seems that producing formally proved software requires 3.3 times more effort than with traditional engineering techniques. So it's really costly.

On the other hand, it requires 2.3 times less effort to get high security software this way than with traditionally engineered software if you add the effort to make such software certified at a high security level (EAL 7). So if you have high reliability or security requirements there is definitively a business case for going formal.
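The two ratios quoted above compose into a rough cost comparison (a sketch; normalizing traditional development to 1.0 is my assumption, not stated in the article):

```python
# Composing the effort ratios from the CACM article cited above.
formal_dev = 3.3    # formally proved software vs. traditional dev
ratio_at_eal7 = 2.3  # traditional + EAL 7 certification vs. formal

traditional_dev = 1.0  # ASSUMED baseline normalization
# If formal is 3.3x the baseline, and traditional-certified is 2.3x
# the formal route, traditional-certified lands near 7.6x baseline.
traditional_certified = formal_dev * ratio_at_eal7

print(f"formal: {formal_dev:.1f}x baseline, "
      f"traditional certified: {traditional_certified:.2f}x baseline")
```

So under these figures, going formal costs more than plain engineering but substantially less than certifying traditionally built software to the same assurance level.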

WHY DON'T PEOPLE USE FORMAL METHODS?: https://www.hillelwayne.com/post/why-dont-people-use-formal-methods/

You can see examples of how all of these look at Let’s Prove Leftpad. HOL4 and Isabelle are good examples of “independent theorem” specs, SPARK and Dafny have “embedded assertion” specs, and Coq and Agda have “dependent type” specs.

If you squint a bit it looks like these three forms of code spec map to the three main domains of automated correctness checking: tests, contracts, and types. This is not a coincidence. Correctness is a spectrum, and formal verification is one extreme of that spectrum. As we reduce the rigour (and effort) of our verification we get simpler and narrower checks, whether that means limiting the explored state space, using weaker types, or pushing verification to the runtime. Any means of total specification then becomes a means of partial specification, and vice versa: many consider Cleanroom a formal verification technique, which primarily works by pushing code review far beyond what’s humanly possible.
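The tests/contracts/types triad can be sketched on leftpad itself (an illustrative Python sketch, not from the post; the function shape follows Let's Prove Leftpad):

```python
def leftpad(c: str, n: int, s: str) -> str:  # types: the weakest spec
    """Pad s on the left with c to total length n (no-op if already longer)."""
    out = s.rjust(max(n, len(s)), c)
    # contracts: embedded assertions, a stronger partial spec
    assert len(out) == max(n, len(s))
    assert out.endswith(s)
    return out

# tests: point checks, the cheapest and narrowest spec
assert leftpad("0", 5, "42") == "00042"
assert leftpad("!", 1, "abc") == "abc"
```

Each layer checks strictly less than a full formal proof would, which is exactly the spectrum the passage describes.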

...

The question, then: “is 90/95/99% correct significantly cheaper than 100% correct?” The answer is very yes. We all are comfortable saying that a codebase we’ve well-tested and well-typed is mostly correct modulo a few fixes in prod, and we’re even writing more than four lines of code a day. In fact, the vast… [more]

ratty
gwern
analysis
essay
realness
truth
correctness
reason
philosophy
math
proofs
formal-methods
cs
programming
engineering
worse-is-better/the-right-thing
intuition
giants
old-anglo
error
street-fighting
heuristic
zooming
risk
threat-modeling
software
lens
logic
inference
physics
differential
geometry
estimate
distribution
robust
speculation
nonlinearity
cost-benefit
convexity-curvature
measure
scale
trivia
cocktail
history
early-modern
europe
math.CA
rigor
news
org:mag
org:sci
miri-cfar
pdf
thesis
comparison
examples
org:junk
q-n-a
stackex
pragmatic
tradeoffs
cracker-prog
techtariat
invariance
DSL
chart
ecosystem
grokkability
heavyweights
CAS
static-dynamic
lower-bounds
complexity
tcs
open-problems
big-surf
ideas
certificates-recognition
proof-systems
PCP
mediterranean
SDP
meta:prediction
epistemic
questions
guessing
distributed
overflow
nibble
soft-question
track-record
big-list
hmm
frontier
state-of-art
move-fast-(and-break-things)
grokkability-clarity
technical-writing
trust
july 2019 by nhaliday

Browse the State-of-the-Art in Machine Learning

aggregator machine-learning deep-learning ai benchmarks state-of-art ranking top-n competition nibble dataset computer-vision comparison marginal nlp speedometer frontier open-problems list papers links info-foraging graphs time-series audio games medicine applications accuracy foreign-lang arrows questions

june 2019 by nhaliday

Reverse salients | West Hunter

may 2019 by nhaliday

Edison thought in terms of reverse salients and critical problems.

“Reverse salients are areas of research and development that are lagging in some obvious way behind the general line of advance. Critical problems are the research questions, cast in terms of the concrete particulars of currently available knowledge and technique and of specific exemplars or models that are solvable and whose solutions would eliminate the reverse salients.”

What strikes you as an important current example of a reverse salient, and the associated critical problem or problems?

west-hunter
scitariat
discussion
science
technology
innovation
low-hanging
list
top-n
research
open-problems
the-world-is-just-atoms
marginal
definite-planning
frontier
🔬
speedometer
ideas
the-trenches
hi-order-bits
prioritizing
judgement
may 2019 by nhaliday

Why books don’t work | Andy Matuschak

may 2019 by nhaliday

https://www.spreaker.com/user/10197011/designing-and-developing-new-tools-for-t

https://twitter.com/andy_matuschak/status/1190675776036687878

https://archive.is/hNIFG

https://archive.is/f9Bwh

hmm: "zettelkasten like note systems have you do a linear search for connections, that gets exponentially more expensive as your note body grows",

https://twitter.com/Meaningness/status/1210309788141117440

https://archive.is/P6PH2

https://archive.is/uD9ls

https://archive.is/Sb9Jq

https://twitter.com/Scholars_Stage/status/1199702832728948737

https://archive.is/cc4zf

I reviewed today my catalogue of 420~ books I have read over the last six years and I am in despair. There are probably 100~ whose contents I can tell you almost nothing about—nothing noteworthy anyway.

techtariat
worrydream
learning
education
teaching
higher-ed
neurons
thinking
rhetoric
essay
michael-nielsen
retention
better-explained
bounded-cognition
info-dynamics
info-foraging
books
communication
lectures
contrarianism
academia
scholar
design
meta:reading
studying
form-design
writing
technical-writing
skunkworks
multi
broad-econ
wonkish
unaffiliated
twitter
social
discussion
backup
reflection
metameta
podcast
audio
interview
impetus
space
open-problems
questions
tech
hard-tech
startups
commentary
postrat
europe
germanic
notetaking
graphs
network-structure
similarity
intersection-connectedness
magnitude
cost-benefit
multiplicative
may 2019 by nhaliday

Computer Scientists Close In on Unique Games Conjecture Proof | Quanta Magazine

news org:mag org:sci popsci tcs cs computation UGC complexity approximation lower-bounds hardness nibble org:inst rand-approx proofs big-surf announcement SDP optimization open-problems questions research

june 2018 by nhaliday

Information Processing: Big, complicated data sets

february 2017 by nhaliday

This Times article profiles Nick Patterson, a mathematician whose career wandered from cryptography, to finance (7 years at Renaissance) and finally to bioinformatics. “I’m a data guy,” Dr. Patterson said. “What I know about is how to analyze big, complicated data sets.”

If you're a smart guy looking for something to do, there are 3 huge computational problems staring you in the face, for which the data is readily accessible.

1) human genome: 3 GB of data in a single genome; most data freely available on the Web (e.g., Hapmap stores patterns of sequence variation). Got a hypothesis about deep human history (evolution)? Test it yourself...

2) market prediction: every market tick available at zero or minimal subscription-service cost. Can you model short term movements? It's never been cheaper to build and test your model!

3) internet search: about 10^3 Terabytes of data (admittedly, a barrier to entry for an individual, but not for a startup). Can you come up with a better way to index or search it? What about peripheral problems like language translation or picture or video search?

The biggest barrier to entry is, of course, brainpower and a few years (a decade?) of concentrated learning. But the necessary books are all in the library :-)

Patterson has worked in 2 of the 3 areas listed above! Substituting crypto for internet search is understandable given his age, our cold war history, etc.

hsu
scitariat
quotes
links
news
org:rec
profile
giants
stories
huge-data-the-biggest
genomics
bioinformatics
finance
crypto
history
britain
interdisciplinary
the-trenches
🔬
questions
genetics
dataset
search
web
internet
scale
commentary
apollonian-dionysian
magnitude
examples
open-problems
big-surf
markets
securities
ORFE
nitty-gritty
quixotic
google
startups
ideas
measure
space-complexity
minimum-viable
move-fast-(and-break-things)
february 2017 by nhaliday

cc.complexity theory - The complexity of sampling (approximately) the Fourier transform of a Boolean function - Theoretical Computer Science Stack Exchange

q-n-a overflow tcs complexity quantum quantum-info sampling fourier boolean-analysis research aaronson tcstariat mathtariat hierarchy rand-complexity open-problems counting nibble questions

february 2017 by nhaliday

cc.complexity theory - Evidence that PPAD is hard? - Theoretical Computer Science Stack Exchange

q-n-a overflow tcs complexity algorithmic-econ game-theory motivation open-problems lower-bounds research relativization crypto rigorous-crypto nibble big-surf hardness questions equilibrium evidence

february 2017 by nhaliday

Overcoming Bias : Neglected Big Problems

ratty hanson frontier coordination list top-n big-picture open-problems farmers-and-foragers education matching questions hari-seldon learning age-generation duplication caching scale population density urban-rural evolution EEA duty flux-stasis uncertainty

february 2017 by nhaliday

(Gil Kalai) The weak epsilon-net problem | What's new

january 2017 by nhaliday

This is a problem in discrete and convex geometry. It seeks to quantify the intuitively obvious fact that large convex bodies are so “fat” that they cannot avoid “detection” by a small number of observation points.

gowers
mathtariat
tcstariat
tcs
math
concept
rounding
linear-programming
research
open-problems
geometry
math.CO
magnitude
probabilistic-method
math.MG
discrete
nibble
org:bleg
questions
curvature
pigeonhole-markov
convexity-curvature
january 2017 by nhaliday

fa.functional analysis - The mathematical theory of Feynman integrals - MathOverflow

january 2017 by nhaliday

another question: http://mathoverflow.net/questions/19495/mathematics-of-path-integral-state-of-the-art

q-n-a
overflow
physics
lens
math
math.FA
concept
rigor
open-problems
multi
nibble
questions
january 2017 by nhaliday

ds.algorithms - Evidence that matrix multiplication can be done in quadratic time? - Theoretical Computer Science Stack Exchange

q-n-a overflow tcs complexity algorithms linear-algebra algebraic-complexity motivation open-problems research frontier curiosity liner-notes math.GR alg-combo nibble big-surf questions

january 2017 by nhaliday

The probabilistic heuristic justification of the ABC conjecture | What's new

open-problems gowers thinking tricks intuition probability math tidbits yoga mathtariat models heuristic math.NT cartoons nibble org:bleg borel-cantelli big-surf additive multiplicative questions guessing

october 2016 by nhaliday

soft question - A Book You Would Like to Write - MathOverflow

october 2016 by nhaliday

- The Differential Topology of Loop Spaces

- Knot Theory: Kawaii examples for topological machines

- An Introduction to Forcing (for people who don't care about foundations.)

writing
math
q-n-a
discussion
books
list
synthesis
big-list
overflow
soft-question
techtariat
mathtariat
exposition
topology
open-problems
logic
nibble
fedja
questions
october 2016 by nhaliday

mg.metric geometry - Distributing points evenly on a sphere - MathOverflow

october 2016 by nhaliday

https://en.wikipedia.org/wiki/Spherical_code

http://www.uvm.edu/pdodds/files/papers/others/1994/mooers1994a.pdf

tidbits
open-problems
geometry
math
optimization
acm
cool
atoms
multi
overflow
q-n-a
math.MG
orourke
nibble
questions
constraint-satisfaction
discrete
october 2016 by nhaliday

Ten Semi-Grand Challenges for Quantum Computing Theory

open-problems research expert list quantum tcs aaronson quantum-info tcstariat frontier top-n learning-theory deep-learning circuits crypto rigorous-crypto computation proof-systems big-surf questions ideas expert-experience quixotic

august 2016 by nhaliday
