**nhaliday : rand-approx**
41

How to fool the "try some test cases" heuristic: Algorithms that appear correct, but are actually incorrect - Computer Science Stack Exchange

nibble q-n-a overflow cs programming correctness error measurement random checking examples counterexample rigor rand-approx greedy debugging math.NT multiplicative algorithms intricacy heuristic methodology

august 2019 by nhaliday

Which of Haskell and OCaml is more practical? For example, in which aspect will each play a key role? - Quora

june 2019 by nhaliday

- Tikhon Jelvis:

Haskell.

This is a question I'm particularly well-placed to answer because I've spent quite a bit of time with both Haskell and OCaml, seeing both in the real world (including working at Jane Street for a bit). I've also seen the languages in academic settings and know many people at startups using both languages. This gives me a good perspective on both languages, with a fairly similar amount of experience in the two (admittedly biased towards Haskell).

And so, based on my own experience rather than the languages' reputations, I can confidently say it's Haskell.

Parallelism and Concurrency

...

Libraries

...

Typeclasses vs Modules

...

In some sense, OCaml modules are better behaved and founded on a sounder theory than Haskell typeclasses, which have some serious drawbacks. However, the fact that typeclasses can be reliably inferred whereas modules have to be explicitly used all the time more than makes up for this. Moreover, extensions to the typeclass system enable much of the power provided by OCaml modules.

...

Of course, OCaml has some advantages of its own as well. It has a performance profile that's much easier to predict. The module system is awesome and often missed in Haskell. Polymorphic variants can be very useful for neatly representing certain situations, and don't have an obvious Haskell analog.

While both languages have a reasonable C FFI, OCaml's seems a bit simpler. It's hard for me to say this with any certainty because I've only used the OCaml FFI myself, but it was quite easy to use—a high bar for Haskell's to clear. One really nice use of modules in OCaml is to pass around values directly from C as abstract types, which can help avoid extra marshalling/unmarshalling.

However, overall, I still think Haskell is the more practical choice. Apart from the reasoning above, I simply have my own observations: my Haskell code tends to be clearer, simpler and shorter than my OCaml code. I'm also more productive in Haskell. Part of this is certainly a matter of having more Haskell experience, but the delta is limited especially as I'm working at my third OCaml company. (Of course, the first two were just internships.)

Both Haskell and OCaml are unequivocally superb options—miles ahead of any other languages I know. While I do prefer Haskell, I'd choose either one in a pinch.

--

I've looked at F# a bit, but it feels like it makes too many tradeoffs to be on .NET. You lose the module system, which is probably OCaml's best feature, in return for an unfortunate, nominally typed OOP layer.

I'm also not invested in .NET at all: if anything, I'd prefer to avoid it in favor of simplicity. I exclusively use Linux and, from the outside, Mono doesn't look as good as it could be. I'm also far more likely to interoperate with a C library than a .NET library.

If I had some additional reason to use .NET, I'd definitely go for F#, but right now I don't.

https://www.reddit.com/r/haskell/comments/3huexy/what_are_haskellers_critiques_of_f_and_ocaml/

https://www.reddit.com/r/haskell/comments/3huexy/what_are_haskellers_critiques_of_f_and_ocaml/cub5mmb/

Thinking about it now, it boils down to a single word: expressiveness. When I'm writing OCaml, I feel more constrained than when I'm writing Haskell. And that's important: unlike so many others, what first attracted me to Haskell was expressiveness, not safety. It's easier for me to write code that looks how I want it to look in Haskell. The upper bound on code quality is higher.

...

Perhaps it all boils down to OCaml and its community feeling more "worse is better" than Haskell, something I highly disfavor.

...

Laziness or, more strictly, non-strictness is big. A controversial start, perhaps, but I stand by it. Unlike some, I do not see non-strictness as a design mistake but as a leap in abstraction. Perhaps a leap before its time, but a leap nonetheless. Haskell lets me program without constantly keeping the code's order in my head. Sure, it's not perfect and sometimes performance issues jar the illusion, but they are the exception, not the norm. Coming from imperative languages where order is omnipresent (I can't even imagine not thinking about execution order as I write an imperative program!) it's incredibly liberating, even accounting for the weird issues and kinks I'd never see in a strict language.

This is what I imagine life felt like with the first garbage collectors: they may have been slow and awkward, the abstraction might have leaked here and there, but, for all that, it was an incredible advance. You didn't have to constantly think about memory allocation any more. It took a lot of effort to get where we are now and garbage collectors still aren't perfect and don't fit everywhere, but it's hard to imagine the world without them. Non-strictness feels like it has the same potential, without anywhere near the work garbage collection saw put into it.

...

The other big thing that stands out is typeclasses. OCaml might catch up on this front with implicit modules or it might not (Scala implicits are, by many reports, awkward at best—ask Edward Kmett about it, not me) but, as it stands, not having them is a major shortcoming. Not having inference is a bigger deal than it seems: it makes all sorts of idioms we take for granted in Haskell awkward in OCaml, which means that people simply don't use them. Haskell's typeclasses, for all their shortcomings (some of which I find rather annoying), are incredibly expressive.

In Haskell, it's trivial to create your own numeric type and operators work as expected. In OCaml, while you can write code that's polymorphic over numeric types, people simply don't. Why not? Because you'd have to explicitly convert your literals and because you'd have to explicitly open a module with your operators—good luck using multiple numeric types in a single block of code! This means that everyone uses the default types: (63/31-bit) ints and doubles. If that doesn't scream "worse is better", I don't know what does.

...

There's more. Haskell's effect management, brought up elsewhere in this thread, is a big boon. It makes changing things more comfortable and makes informal reasoning much easier. Haskell is the only language where I consistently leave code I visit better than I found it. Even if I hadn't worked on the project in years. My Haskell code has better longevity than my OCaml code, much less other languages.

http://blog.ezyang.com/2011/02/ocaml-gotchas/

One observation about purity and randomness: I think one of the things people frequently find annoying in Haskell is the fact that randomness involves mutation of state, and thus must be wrapped in a monad. This makes building probabilistic data structures a little clunkier, since you can no longer expose pure interfaces. OCaml is not pure, and as such you can query the random number generator whenever you want.

However, I think Haskell may get the last laugh in certain circumstances. In particular, if you are using a random number generator to generate random test cases for your code, you need to be able to reproduce a particular set of random tests. Usually this is done by providing a seed which you can then feed back to the testing script, for deterministic behavior. But because OCaml's random number generator manipulates global state, it's very easy to accidentally break determinism by asking for a random number for something unrelated. You can work around this by manually bracketing the global state, but in Haskell, where the randomness state is handled explicitly, providing determinism is much more natural.
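The hazard described above can be sketched in Python, whose module-level `random` functions share one global generator much like OCaml's `Random` module; using a dedicated generator object keeps seeded test runs reproducible even when unrelated code consumes randomness. (The function name below is illustrative, not from any library discussed.)

```python
import random

def random_test_cases(seed, n=5):
    # A dedicated generator: unrelated draws from the module-level
    # global generator cannot perturb this stream.
    rng = random.Random(seed)
    return [rng.randint(0, 99) for _ in range(n)]

first = random_test_cases(42)
random.random()  # unrelated use of the *global* generator...
second = random_test_cases(42)
assert first == second  # ...does not break seed-based determinism
```

Had `random_test_cases` used the global `random.randint` instead, the interleaved `random.random()` call would have shifted the stream and silently broken reproducibility.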

q-n-a
qra
programming
pls
engineering
nitty-gritty
pragmatic
functional
haskell
ocaml-sml
dotnet
types
arrows
cost-benefit
tradeoffs
concurrency
libraries
performance
expert-experience
composition-decomposition
comparison
critique
multi
reddit
social
discussion
techtariat
reflection
review
random
data-structures
numerics
rand-approx
sublinear
syntax
volo-avolo
causation
scala
jvm
ecosystem
metal-to-virtual

Computer Scientists Close In on Unique Games Conjecture Proof | Quanta Magazine

news org:mag org:sci popsci tcs cs computation UGC complexity approximation lower-bounds hardness nibble org:inst rand-approx proofs big-surf announcement SDP optimization open-problems questions research

june 2018 by nhaliday

references - Mathematician wants the equivalent knowledge to a quality stats degree - Cross Validated

nibble q-n-a overflow lens acm stats hypothesis-testing limits confluence books recommendations list top-n accretion data-science roadmap p:whenever p:someday reading quixotic advanced markov monte-carlo convexity-curvature optimization topics linear-models linear-algebra machine-learning classification random rand-approx martingale regression time-series no-go

november 2017 by nhaliday

Algorithms for Finding Nearest Neighbors (and Relatives) - Piotr Indyk

september 2017 by nhaliday

kd-tree mostly

pdf
nibble
slides
talks
mit
tcs
algorithms
data-structures
geometry
spatial
metric-space
mihai
yoga
rand-approx
approximation
high-dimension
dimensionality
hashing
embeddings
trees
contiguity-proximity

Equivalence between counting and sampling

february 2017 by nhaliday

also: every counting problem either has an FPRAS (fully polynomial randomized approximation scheme) or cannot be approximated to within any polynomial factor
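The easy direction of the counting–sampling connection (counting via sampling, for a set that is a decent fraction of a known universe) can be sketched in Python; the set, sample size, and tolerance here are illustrative, not from the lecture notes.

```python
import random

def estimate_count(universe_size, predicate, samples=20000, seed=0):
    """Monte Carlo counting: estimate |{x : predicate(x)}| inside a
    universe of known size by drawing uniform samples and scaling
    the observed hit rate back up."""
    rng = random.Random(seed)
    hits = sum(predicate(rng.randrange(universe_size)) for _ in range(samples))
    return universe_size * hits / samples

# Toy check: multiples of 3 in [0, 30000) -- exact count is 10000.
est = estimate_count(30000, lambda x: x % 3 == 0)
assert abs(est - 10000) < 1000
```

This naive estimator only works when the set is not exponentially sparse in the universe; the nontrivial content of the equivalence is handling sparse sets via self-reducibility.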

pdf
exposition
lecture-notes
berkeley
nibble
tcs
counting
sampling
characterization
complexity
approximation
rand-approx
proofs

reference request - What are the recent TCS books whose drafts are available online? - Theoretical Computer Science Stack Exchange

q-n-a overflow books draft list big-list tcs links accretion 👳 encyclopedic unit nibble confluence p:someday quixotic advanced proof-systems average-case complexity iidness pseudorandomness rand-approx markov mixing sublinear crypto rigorous-crypto circuits properties checking

february 2017 by nhaliday

Advanced Complexity Theory, Fall 2012

course mit tcs complexity lower-bounds circuits relativization hierarchy communication-complexity boaz-barak random pseudorandomness rand-complexity madhu-sudan pcp approximation 👳 yoga dana-moshkovitz rand-approx naturality counting space-complexity expanders unit

february 2017 by nhaliday

COS 522: Complexity Theory - Spring 2007.

princeton course tcs complexity lecture-notes 👳 rand-approx approximation pcp rand-complexity coding-theory proof-systems random relativization boaz-barak tcstariat wigderson pseudorandomness crypto rigorous-crypto expanders boolean-analysis space-complexity unit p:** quixotic advanced

february 2017 by nhaliday

Lecture 16

january 2017 by nhaliday

In which we define a multi-commodity flow problem, and we see that its dual is the relaxation of a useful graph partitioning problem. The relaxation can be rounded to yield an approximate graph partitioning algorithm.

pdf
lecture-notes
exposition
optimization
linear-programming
graphs
graph-theory
algorithms
duality
rounding
stanford
approximation
rand-approx
luca-trevisan
relaxation
nibble
stock-flow
constraint-satisfaction
tcs
tcstariat

computational complexity - What is the easiest randomized algorithm to motivate to the layperson? - MathOverflow

january 2017 by nhaliday

- volume of shape in R^n

- polynomial identity testing
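The second bullet can be illustrated with a toy Python sketch in the spirit of Schwartz–Zippel: treat each polynomial as a black box and compare random evaluations. A single disagreement proves the polynomials differ; repeated agreement makes identity overwhelmingly likely, since a nonzero degree-d polynomial has at most d roots. (A sketch of the idea, not a production identity tester.)

```python
import random

def probably_equal(p, q, trials=20, bound=10**9):
    """Randomized polynomial identity test: evaluate both polynomials
    at random integer points.  A nonzero difference is a proof of
    inequality; per-trial error probability is at most deg/bound."""
    for _ in range(trials):
        x = random.randint(0, bound)
        if p(x) != q(x):
            return False  # definitely different
    return True  # identical with high probability

# (x+1)^2 and x^2 + 2x + 1 agree everywhere; x^2 + 2x differs everywhere.
assert probably_equal(lambda x: (x + 1)**2, lambda x: x*x + 2*x + 1)
assert not probably_equal(lambda x: (x + 1)**2, lambda x: x*x + 2*x)
```

The point for a layperson: no deterministic shortcut this simple is known, yet twenty random evaluations give near-certainty.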

q-n-a
overflow
tcs
algorithms
rand-approx
random
motivation
list
examples
aaronson
tcstariat
gowers
spatial
geometry
polynomials
teaching
nibble

Computational Complexity: Favorite Theorems: The Yao Principle

january 2017 by nhaliday

The Yao Principle applies when we don't consider the algorithmic complexity of the players. For example, in communication complexity we have two players who each have a separate half of an input string and they want to compute some function of the input with the minimum amount of communication between them. The Yao principle states that the best probabilistic strategies for the players will achieve exactly the same communication bounds as the best deterministic strategy over a worst-case distribution of inputs.

The Yao Principle plays a smaller role where we measure the running time of an algorithm since applying the Principle would require solving an extremely large linear program. But since so many of our bounds are in information-based models like communication and decision-tree complexity, the Yao Principle, though not particularly complicated, plays an important role in lower bounds in a large number of results in our field.
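Stated compactly, for a cost measure c over algorithms and inputs, Yao's principle is the minimax equality between randomized algorithms R against worst-case inputs and deterministic algorithms A against a worst-case input distribution μ:

```latex
\min_{R}\,\max_{x}\ \mathbb{E}\!\left[c(R,x)\right]
\;=\;
\max_{\mu}\,\min_{A\ \mathrm{det.}}\ \mathbb{E}_{x\sim\mu}\!\left[c(A,x)\right]
```

For finite games the equality is von Neumann's minimax theorem (equivalently, LP duality), which is why applying the principle in running-time settings would mean solving an enormous linear program, as the post notes.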

tcstariat
tcs
complexity
adversarial
rand-approx
algorithms
game-theory
yoga
levers
communication-complexity
random
lower-bounds
average-case
nibble
org:bleg

6.854/18.415J: Advanced Algorithms

mit course algorithms mihai yoga metabuch lecture-notes 👳 init tcs unit toolkit p:*** quixotic advanced hashing linear-programming optimization duality gradient-descent iterative-methods spectral stock-flow approximation rand-approx greedy adversarial data-structures trees latency-throughput state flux-stasis time sequential geometry amortization-potential

november 2016 by nhaliday

Programming books you might want to consider reading

october 2016 by nhaliday

- surprisingly theory-focused actually (w/ a smattering of OS/systems and hardware)

- cites among others: DPV, CLRS, Okasaki, Erik Demaine

- a bunch of AGT stuff

- some SWE stuff

- some business/tech culture stuff

- math (calc and prob.)

- he mentions Jukna's Extremal Combinatorics in passing at the end, wow

advice
dan-luu
engineering
books
list
recommendations
reading
accretion
🖥
2016
top-n
info-foraging
techtariat
confluence
p:null
quixotic
advanced
pragmatic
applications
applicability-prereqs
working-stiff
career
jobs
recruiting
algorithms
tcs
data-structures
functional
performance
time-complexity
random
rand-approx
complexity
cs
computation
learning-theory
machine-learning
acm
os
systems
linux
unix
concurrency
s:***
programming
nitty-gritty
problem-solving
hardware
algorithmic-econ
game-theory
mechanism-design
IEEE
erik-demaine
ground-up
legacy
code-dive
system-design
best-practices
business
microsoft
competition
culture
dark-arts
management
tech
twitter
sv
productivity
aging
art
math
probability
math.CO
math.CA
electromag
p:someday
intricacy
abstraction
composition-decomposition
coupling-cohesion
code-organizing
metal-to-virtual
age-generation
extrema
project-management
october 2016 by nhaliday

Probabilistic Filters By Example: Cuckoo Filter and Bloom Filters

september 2016 by nhaliday

Bloom filters have been in use since the 1970s and are well understood. Implementations are widely available. Variants exist that support deletion and counting, though with expanded storage requirements.

Cuckoo filters were described in Cuckoo Filter: Practically Better Than Bloom, a paper by researchers at CMU in 2014. Cuckoo filters improve on Bloom filters by supporting deletion, limited counting, and bounded FPP with similar storage efficiency as a standard Bloom filter.
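A minimal Bloom filter sketch in Python shows the mechanics the article visualizes: k hash functions set bits in an m-bit array, so queries can return false positives but never false negatives. (Parameters and the double-hashing-by-prefix trick here are illustrative choices, not from the article.)

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash functions over an m-bit array.
    Membership tests may report false positives, never false negatives."""

    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k
        self.bits = bytearray(m)  # one byte per "bit" for simplicity

    def _indexes(self, item):
        # Derive k indexes by salting one cryptographic hash.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.m

    def add(self, item):
        for idx in self._indexes(item):
            self.bits[idx] = 1

    def __contains__(self, item):
        return all(self.bits[idx] for idx in self._indexes(item))

bf = BloomFilter()
bf.add("hello")
assert "hello" in bf  # added items are always found
```

Deletion is unsupported here precisely because bits are shared between items; that is the gap the counting-Bloom variants and cuckoo filters close.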

comparison
data-structures
tutorial
visualization
explanation
engineering
mihai
visual-understanding
techtariat
rand-approx

The Power of Noise - Less Wrong

rationality complexity essay reflection synthesis philosophy lesswrong insight len:long spock 🤖 ratty 2014 clever-rats acmtariat rand-approx big-picture rand-complexity markov monte-carlo tcs rhetoric random clarity sampling unit nibble hi-order-bits p:whenever s:** grokkability-clarity

august 2016 by nhaliday

Hypercontractivity and its Applications | tcs math

tcs tcstariat washington papers slides survey talks lectures exposition concept estimate math boolean-analysis yoga cool rand-approx optimization algorithms coding-theory math.GR SDP rounding complexity lower-bounds graphs graph-theory UGC org:bleg nibble

may 2016 by nhaliday

LP Relaxations, Randomized Rounding

april 2016 by nhaliday

integrality vs. algorithmic gap
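The rounding step these notes cover can be sketched in Python, with the fractional LP solution supplied by hand rather than by a solver; repeating the rounds and taking the union is the standard trick for boosting the probability that every constraint ends up covered.

```python
import random

def randomized_round(fractional, rounds=3, seed=1):
    """Randomized rounding sketch: given a fractional LP solution
    x in [0,1]^n, include each variable independently with
    probability x_i, repeating a few times and taking the union."""
    rng = random.Random(seed)
    chosen = set()
    for _ in range(rounds):
        for i, xi in enumerate(fractional):
            if rng.random() < xi:
                chosen.add(i)
    return chosen

# Hypothetical fractional solution: variable 0 is essential (x_0 = 1),
# the others are partially selected.
sol = randomized_round([1.0, 0.5, 0.1])
assert 0 in sol  # variables at 1 are always rounded up
```

The integrality gap mentioned above bounds how much this loses: the rounded solution is compared against the fractional optimum, which is already at most the integral optimum.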

tcs
cmu
optimization
acm
atoms
algorithms
lecture-notes
pdf
exposition
rounding
linear-programming
rand-approx
random
approximation
relaxation
nibble
discrete

Bayesianism, frequentism, and the planted clique, or do algorithms believe in unicorns? | Windows On Theory

april 2016 by nhaliday

But if you consider probabilities as encoding beliefs, then it’s quite likely that a computationally bounded observer is not certain whether {17} is in the clique or not. After all, finding a maximum clique is a hard computational problem. So if T is much smaller than the time it takes to solve the k-clique problem (which is n^(const·k) as far as we know), then it might make sense for time-T observers to assign a probability between 0 and 1 to this event. Can we come up with a coherent theory of such probabilities?

research
tcs
complexity
probability
stats
algorithms
yoga
speculation
tcstariat
frontier
insight
exposition
rand-approx
synthesis
big-picture
boaz-barak
org:bleg
nibble
frequentist
bayesian
subjective-objective

L18: Five Essential Tools for the Analysis of Randomized Algorithms

tcs algorithms list yoga reflection concept expert 👳 metabuch hi-order-bits top-n stanford pdf lecture-notes exposition lens rand-approx big-picture ground-up random concentration-of-measure linearity rounding probabilistic-method tim-roughgarden nibble tricki problem-solving metameta skeleton s:*** expectancy toolkit chart knowledge expert-experience elegance

april 2016 by nhaliday
