**nhaliday : boolean-analysis**

cc.complexity theory - The complexity of sampling (approximately) the Fourier transform of a Boolean function - Theoretical Computer Science Stack Exchange

q-n-a overflow tcs complexity quantum quantum-info sampling fourier boolean-analysis research aaronson tcstariat mathtariat hierarchy rand-complexity open-problems counting nibble questions

february 2017 by nhaliday

Superconcentration and Related Topics

february 2017 by nhaliday

when Var X_n = o(n) instead of Var X_n = O(n)
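A quick illustration of the distinction (my own sketch, not from the notes): the sum of n i.i.d. standard Gaussians has Var = n exactly, while the maximum of the same n Gaussians is superconcentrated, with variance O(1/log n) = o(n).

```python
# Illustrative sketch: Var(sum) = n (ordinary concentration), while
# Var(max) ~ 1/log n (superconcentration), for i.i.d. N(0,1) samples.
import numpy as np

rng = np.random.default_rng(0)

def empirical_var(statistic, n, trials=5000):
    """Monte Carlo estimate of Var(statistic(X_1, ..., X_n)) for i.i.d. N(0,1)."""
    samples = rng.standard_normal((trials, n))
    return statistic(samples, axis=1).var()

n = 1000
var_sum = empirical_var(np.sum, n)  # ~ n: CLT-scale fluctuations
var_max = empirical_var(np.max, n)  # ~ 1/log n: far smaller than n
print(var_sum, var_max)
```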

pdf lecture-notes math probability boolean-analysis concentration-of-measure limits magnitude concept yoga 👳 unit discrete phase-transition stat-mech percolation ising p:* quixotic advanced
february 2017 by nhaliday

COS 522: Complexity Theory - Spring 2007.

princeton course tcs complexity lecture-notes 👳 rand-approx approximation pcp rand-complexity coding-theory proof-systems random relativization boaz-barak tcstariat wigderson pseudorandomness crypto rigorous-crypto expanders boolean-analysis space-complexity unit p:** quixotic advanced

february 2017 by nhaliday

soft question - What kind of mathematical background is needed for complexity theory? - Theoretical Computer Science Stack Exchange

q-n-a overflow tcs complexity ground-up soft-question advice discussion oly linear-algebra probability probabilistic-method math.CO boolean-analysis coding-theory information-theory math.RT markov algebra fields nibble knowledge reading accretion recommendations list books

january 2017 by nhaliday

soft question - Why does Fourier analysis of Boolean functions "work"? - Theoretical Computer Science Stack Exchange

december 2016 by nhaliday

Here is my point of view, which I learned from Guy Kindler, though someone more experienced can probably give a better answer: Consider the linear space of functions f: {0,1}^n -> R and consider a linear operator of the form σ_w (for w in {0,1}^n), that maps a function f(x) as above to the function f(x+w). In many of the questions of TCS, there is an underlying need to analyze the effects that such operators have on certain functions.

Now, the point is that the Fourier basis is the basis that diagonalizes all those operators at the same time, which makes the analysis of those operators much simpler. More generally, the Fourier basis diagonalizes the convolution operator, which also underlies many of those questions. Thus, Fourier analysis is likely to be effective whenever one needs to analyze those operators.
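The simultaneous diagonalization is easy to verify numerically (my own illustration, not part of the answer): on {0,1}^n, the shift operator (σ_w f)(x) = f(x + w) (addition is XOR) has every parity character χ_S(x) = (-1)^{⟨S,x⟩} as an eigenfunction, with eigenvalue (-1)^{⟨S,w⟩}.

```python
# Check that every parity character chi_S is an eigenfunction of every
# shift operator sigma_w on {0,1}^n, with eigenvalue (-1)^{<S,w>}.
import itertools
import numpy as np

n = 3
points = list(itertools.product([0, 1], repeat=n))
index = {xs: i for i, xs in enumerate(points)}

def chi(S):
    """Parity character chi_S as a vector indexed by points of {0,1}^n."""
    return np.array([(-1) ** sum(s * x for s, x in zip(S, xs)) for xs in points])

def shift(w, f):
    """(sigma_w f)(x) = f(x XOR w), acting on a vector indexed by points."""
    return np.array([f[index[tuple(x ^ b for x, b in zip(xs, w))]] for xs in points])

for S in itertools.product([0, 1], repeat=n):
    for w in itertools.product([0, 1], repeat=n):
        eig = (-1) ** sum(s * b for s, b in zip(S, w))
        assert np.array_equal(shift(w, chi(S)), eig * chi(S))
print("every chi_S is a simultaneous eigenfunction of every sigma_w")
```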

q-n-a math tcs synthesis boolean-analysis fourier 👳 tidbits motivation intuition linear-algebra overflow hi-order-bits insight curiosity ground-up arrows nibble s:* elegance guessing

gt.geometric topology - Intuitive crutches for higher dimensional thinking - MathOverflow

december 2016 by nhaliday

Terry Tao:

I can't help you much with high-dimensional topology - it's not my field, and I've not picked up the various tricks topologists use to get a grip on the subject - but when dealing with the geometry of high-dimensional (or infinite-dimensional) vector spaces such as R^n, there are plenty of ways to conceptualise these spaces that do not require visualising more than three dimensions directly.

For instance, one can view a high-dimensional vector space as a state space for a system with many degrees of freedom. A megapixel image, for instance, is a point in a million-dimensional vector space; by varying the image, one can explore the space, and various subsets of this space correspond to various classes of images.

One can similarly interpret sound waves, a box of gases, an ecosystem, a voting population, a stream of digital data, trials of random variables, the results of a statistical survey, a probabilistic strategy in a two-player game, and many other concrete objects as states in a high-dimensional vector space, and various basic concepts such as convexity, distance, linearity, change of variables, orthogonality, or inner product can have very natural meanings in some of these models (though not in all).

It can take a bit of both theory and practice to merge one's intuition for these things with one's spatial intuition for vectors and vector spaces, but it can be done eventually (much as after one has enough exposure to measure theory, one can start merging one's intuition regarding cardinality, mass, length, volume, probability, cost, charge, and any number of other "real-life" measures).

For instance, the fact that most of the mass of a unit ball in high dimensions lurks near the boundary of the ball can be interpreted as a manifestation of the law of large numbers, using the interpretation of a high-dimensional vector space as the state space for a large number of trials of a random variable.

More generally, many facts about low-dimensional projections or slices of high-dimensional objects can be viewed from a probabilistic, statistical, or signal processing perspective.
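The "mass near the boundary" fact admits a one-line computation (my own illustration): the fraction of the unit ball in R^n lying within radius 1 − ε is exactly (1 − ε)^n, which vanishes geometrically in n.

```python
# Fraction of the n-dimensional unit ball's volume within radius 1 - eps.
# Volume scales as r^n, so the inner fraction is (1 - eps)^n.
def inner_fraction(n, eps=0.01):
    return (1 - eps) ** n

for n in [3, 100, 1000]:
    print(n, inner_fraction(n))
# Even with eps = 0.01, at n = 1000 only about 0.004% of the mass lies
# more than 1% away from the boundary.
```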

Scott Aaronson:

Here are some of the crutches I've relied on. (Admittedly, my crutches are probably much more useful for theoretical computer science, combinatorics, and probability than they are for geometry, topology, or physics. On a related note, I personally have a much easier time thinking about R^n than about, say, R^4 or R^5!)

1. If you're trying to visualize some 4D phenomenon P, first think of a related 3D phenomenon P', and then imagine yourself as a 2D being who's trying to visualize P'. The advantage is that, unlike with the 4D vs. 3D case, you yourself can easily switch between the 3D and 2D perspectives, and can therefore get a sense of exactly what information is being lost when you drop a dimension. (You could call this the "Flatland trick," after the most famous literary work to rely on it.)

2. As someone else mentioned, discretize! Instead of thinking about R^n, think about the Boolean hypercube {0,1}^n, which is finite and usually easier to get intuition about. (When working on problems, I often find myself drawing {0,1}^4 on a sheet of paper by drawing two copies of {0,1}^3 and then connecting the corresponding vertices.)

3. Instead of thinking about a subset S⊆R^n, think about its characteristic function f:R^n→{0,1}. I don't know why that trivial perspective switch makes such a big difference, but it does ... maybe because it shifts your attention to the process of computing f, and makes you forget about the hopeless task of visualizing S!

4. One of the central facts about R^n is that, while it has "room" for only n orthogonal vectors, it has room for exp(n) almost-orthogonal vectors. Internalize that one fact, and so many other properties of R^n (for example, that the n-sphere resembles a "ball with spikes sticking out," as someone mentioned before) will suddenly seem non-mysterious. In turn, one way to internalize the fact that R^n has so many almost-orthogonal vectors is to internalize Shannon's theorem that there exist good error-correcting codes.

5. To get a feel for some high-dimensional object, ask questions about the behavior of a process that takes place on that object. For example: if I drop a ball here, which local minimum will it settle into? How long does this random walk on {0,1}^n take to mix?
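Point 4 above is easy to see empirically (my own sketch, not from the answer): random sign vectors in R^n, normalized to unit length, have pairwise inner products concentrating around 0 at scale 1/√n, so many more than n of them fit almost orthogonally.

```python
# Pairwise inner products of m random unit sign-vectors in R^n stay far
# below 1, at scale ~1/sqrt(n), even though m >> what exact orthogonality
# would allow for small n (and exp(n) such vectors would fit for large n).
import numpy as np

rng = np.random.default_rng(1)
n, m = 1000, 200
V = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(n)  # m unit vectors
G = V @ V.T                                            # Gram matrix, 1 on diagonal
off_diag = G[~np.eye(m, dtype=bool)]
print(np.abs(off_diag).max())  # typically ~4/sqrt(n), far below 1
```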

Gil Kalai:

This is a slightly different point, but Vitali Milman, who works in high-dimensional convexity, likes to draw high-dimensional convex bodies in a non-convex way. This is to convey the point that if you take the convex hull of a few points on the unit sphere of R^n, then for large n very little of the measure of the convex body is anywhere near the corners, so in a certain sense the body is a bit like a small sphere with long thin "spikes".

q-n-a intuition math visual-understanding list discussion thurston tidbits aaronson tcs geometry problem-solving yoga 👳 big-list metabuch tcstariat gowers mathtariat acm overflow soft-question levers dimensionality hi-order-bits insight synthesis thinking models cartoons coding-theory information-theory probability concentration-of-measure magnitude linear-algebra boolean-analysis analogy arrows lifts-projections measure markov sampling shannon conceptual-vocab nibble degrees-of-freedom worrydream neurons retrofit oscillation paradox novelty tricki concrete high-dimension s:*** manifolds direction curvature convexity-curvature elegance guessing

The US Elections and Nate Silver: Information Aggregation, Noise Sensitivity, HEX, and Quantum Elections. | Combinatorics and more

politics 2016-election tcs boolean-analysis interdisciplinary tidbits exposition tcstariat social-choice mathtariat org:bleg nibble sensitivity current-events elections

november 2016 by nhaliday

Insensitive Intersections of Halfspaces – STOC 2014 Recaps (Part 11) – Not so Great Ideas in Theoretical Computer Science

conference summary tcs learning-theory boolean-analysis math.CA mit tcstariat research liner-notes org:bleg nibble intersection intersection-connectedness stoc

june 2016 by nhaliday

Cryptography at STOC/FOCS | in theory

june 2016 by nhaliday

On Sunday I also attended a cryptography session. One thing that impressed me was the lively discussion at the end of the talks, very different from the stunned silence that usually follows when the session chair asks if there are any questions. The other thing I noticed was that the session was attended almost exclusively by cryptographers.

Why is that? A first guess is that the field has become very technical. But this cannot be the point; after all, a typical paper on PCP is also very technical, but the audience is not made exclusively of PCP technicians. Maybe the point is that even, or especially, definitions are very technical in cryptography. One can go to a talk showing that sparsest cut does not have a constant-factor approximation assuming the Unique Games Conjecture, and be fairly satisfied that he understands what it would mean for sparsest cut to have a constant-factor approximation and what it would mean for the Unique Games Conjecture to be false. Then one sees some slides with clouds of vertices connected in various ways, one hears mentions of Gaussian distributions, influence of variables, and invariance principles, and one gets lost, but with an idea that there is a reduction that needs certain complicated mathematical techniques to be analyzed.

In a cryptography talk, however, one may get started with the problem of realizing primitive X under assumptions Y1 and Y2, according to security requirement Z, with no set-up assumptions, and it would require quite some expertise to realize that requirement Z is considerably harder to achieve than similarly sounding Z’, which was known to be achievable under assumptions of Y1 and Y’2, where Y’2 is incomparable to Y2, but intuitively stronger, and so on. Consider the recent breakthrough on the long-standing very clear-cut question to achieve statistically hiding commitments assuming only one-way functions. This is a statement that is an order of magnitude simpler than the typical result in cryptography, probably the most basic question that was still open in the 2000s, but even to unpack such a statement is not easy and requires to see various examples, discussion of applications and so on.

crypto rigorous-crypto research thinking tcs critique reflection tcstariat conference lens UGC boolean-analysis reduction conceptual-vocab ground-up luca-trevisan nibble org:bleg stoc focs

Talagrand’s concentration inequality | What's new

may 2016 by nhaliday

Proposition 1 follows easily from the following statement, that asserts that if a convex set $A \subset \mathbf{R}^n$ occupies a non-trivial fraction of the cube $\{-1,+1\}^n$, then the neighbourhood $A_t := \{ x \in \mathbf{R}^n : \mathrm{dist}(x, A) \leq t \}$ will occupy almost all of the cube for $t \gg 1$:
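A toy check of this statement (my own illustration, with the simplest possible convex set): take A = {x : Σ x_i ≤ 0}, a halfspace containing about half the corners of {−1,+1}^n. For a corner x, dist(x, A) = max(Σ x_i, 0)/√n, which by the CLT is O(1), so the neighbourhood A_t swallows almost all corners once t ≫ 1.

```python
# Fraction of random cube corners within distance t of the halfspace
# A = {x : sum(x) <= 0}; dist(x, A) = max(sum(x), 0) / sqrt(n).
import numpy as np

rng = np.random.default_rng(2)
n, trials = 400, 20000
corners = rng.choice([-1.0, 1.0], size=(trials, n))
dist = np.maximum(corners.sum(axis=1), 0.0) / np.sqrt(n)

for t in [0.0, 1.0, 3.0]:
    print(t, (dist <= t).mean())
# t = 0 covers ~50% of the cube; t = 3 already covers ~99.9%.
```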

exposition math.CA math gowers concentration-of-measure mathtariat random-matrices levers estimate probability math.MG geometry boolean-analysis nibble org:bleg high-dimension p:whenever dimensionality curvature convexity-curvature

Hypercontractivity and its Applications | tcs math

tcs tcstariat washington papers slides survey talks lectures exposition concept estimate math boolean-analysis yoga cool rand-approx optimization algorithms coding-theory math.GR SDP rounding complexity lower-bounds graphs graph-theory UGC org:bleg nibble

may 2016 by nhaliday
