
nhaliday : accretion   133

Introduction · CTF Field Guide
also has some decent looking career advice and links to books/courses if I ever get interested in infosec stuff
guide  security  links  list  recommendations  contest  puzzles  hacker  init  adversarial  systems  traces  accretion  programming  debugging  assembly  c(pp)  metal-to-virtual  career  planning  jobs  books  course  learning  threat-modeling  tech  working-stiff 
december 2019 by nhaliday
Unhappy Go Lucky!
- regularly publishes unofficial editorials for AtCoder
- also seems like an otaku >_>
oly  oly-programming  blog  stream  algorithms  problem-solving  accretion  puzzles  techtariat  japan  asia  quixotic  yoga 
september 2019 by nhaliday
[Tutorial] A way to Practice Competitive Programming : From Rating 1000 to 2400+ - Codeforces
this guy really didn't take that long to reach red... as of today he's done 20 contests in 2y to my 44 contests in 7y (w/ a long break) >_>

tho he has 3 times as many submissions as me. maybe he does a lot of virtual rounds?

some snippets from the PDF guide linked:
1400-1900:
To reach rating 1900, the following skills are needed:
- You know and can use major algorithms like these: brute force, DP, DFS, BFS, Dijkstra, binary indexed tree [ed.: sketch after this list], nCr/nPr, modular inverse, bitmasks, binary search
- You can code fast (for example, 5 minutes for R1100 problems, 10 minutes for R1400 problems)
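[ed.: for reference, a minimal binary indexed tree (Fenwick tree) sketch in C++, one of the algorithms named above; a standard textbook version, not code from the PDF:]

#include <vector>

// Fenwick tree: point update, prefix-sum query, both O(log n). 1-indexed.
struct Fenwick {
    std::vector<long long> t;
    Fenwick(int n) : t(n + 1, 0) {}
    void add(int i, long long v) {             // a[i] += v
        for (; i < (int)t.size(); i += i & -i) t[i] += v;
    }
    long long prefix(int i) const {            // sum of a[1..i]
        long long s = 0;
        for (; i > 0; i -= i & -i) s += t[i];
        return s;
    }
    long long range(int l, int r) const {      // sum of a[l..r]
        return prefix(r) - prefix(l - 1);
    }
};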

If you are not good at fast coding and fast debugging, you should solve AtCoder problems. Statistically, many Japanese competitors are relatively good at fast coding while not so good at solving difficult problems; I think that's because of AtCoder.

I recommend solving problems C and D of AtCoder Beginner Contests. On average, if you can solve problem C of an AtCoder Beginner Contest within 10 minutes and problem D within 20 minutes, you are Div1 in FastCodingForces :)

...

Interestingly, typical problems are concentrated in Div2-only rounds. If you are not good at Div2-only rounds, it is likely that you are not good at using typical algorithms, especially the 10 algorithms listed above.

If you can use the typical algorithms but are not good at solving problems above R1500 on Codeforces, you should start TopCoder. This type of practice is effective for people who are good at Div.2-only rounds but not at combined or separate Div.1 + Div.2 rounds.

Sometimes, especially in Div1+Div2 rounds, some problems need mathematical concepts or mathematical thinking. Since TopCoder has a lot of problems that use these (and are also light on implementation!), you should solve TopCoder problems.

I recommend solving the Div1 Easy problems of the most recent 100 SRMs. But some problems are really difficult (e.g. even red-ranked coders could not solve them), so before attempting one, you should check what percentage of people solved it. You can use https://competitiveprogramming.info/ to find this information.

1900-2200:
To reach rating 2200, the following skills are needed:
- You know and can use the 10 algorithms stated above (p. 11) plus segment trees, including lazy propagation [ed.: sketch after this list]
- You can solve problems very fast: for example, 5 min for R1100, 10 min for R1500, 15 min for R1800, 40 min for R2000
- You have decent skills for mathematical thinking and for analyzing problems
- You have the mental strength to keep thinking about a solution for more than an hour, and to not give up even if you are below average in Div1 midway through a contest
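[ed.: a minimal lazy-propagation segment tree sketch in C++ (range add, range sum); a generic textbook version of the structure named above, not code from the PDF:]

#include <vector>

// Segment tree with lazy propagation: range add, range sum, O(log n) each.
struct LazySeg {
    int n;
    std::vector<long long> sum, lz;
    LazySeg(int n) : n(n), sum(4 * n, 0), lz(4 * n, 0) {}
    void apply(int v, int l, int r, long long add) {
        sum[v] += add * (r - l + 1);   // every element in [l,r] grows by add
        lz[v] += add;                  // remember the pending add for children
    }
    void push(int v, int l, int r) {   // push the pending add one level down
        if (lz[v]) {
            int m = (l + r) / 2;
            apply(2 * v, l, m, lz[v]);
            apply(2 * v + 1, m + 1, r, lz[v]);
            lz[v] = 0;
        }
    }
    void update(int v, int l, int r, int ql, int qr, long long add) {
        if (qr < l || r < ql) return;
        if (ql <= l && r <= qr) { apply(v, l, r, add); return; }
        push(v, l, r);
        int m = (l + r) / 2;
        update(2 * v, l, m, ql, qr, add);
        update(2 * v + 1, m + 1, r, ql, qr, add);
        sum[v] = sum[2 * v] + sum[2 * v + 1];
    }
    long long query(int v, int l, int r, int ql, int qr) {
        if (qr < l || r < ql) return 0;
        if (ql <= l && r <= qr) return sum[v];
        push(v, l, r);
        int m = (l + r) / 2;
        return query(2 * v, l, m, ql, qr) + query(2 * v + 1, m + 1, r, ql, qr);
    }
    // entry points over the whole array [0, n-1]:
    void update(int l, int r, long long add) { update(1, 0, n - 1, l, r, add); }
    long long query(int l, int r) { return query(1, 0, n - 1, l, r); }
};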

This is only my way to practice, but I did many virtual contests when I was rated around 2000. Here, "virtual contest" does not mean "Virtual Participation" on Codeforces. It means choosing 4 or 5 problems whose difficulty is near your rating (for example, if you are rated 2000, choose R2000 problems on Codeforces) and solving them within 2 hours. You can use https://vjudge.net/: on this website, you can create virtual contests from problems on many online judges (e.g. AtCoder, Codeforces, Hackerrank, Codechef, POJ, ...).

If you cannot solve a problem within the virtual contest and cannot find the solution afterwards, you should read the editorial. Google it (e.g. if you want the editorial of Codeforces Round #556 (Div. 1), search "Codeforces Round #556 editorial"). There is one more important thing for gaining rating on Codeforces: to solve problems fast, you should equip yourself with a coding library (template code). For example, equipping segment tree, lazy segment tree, modint, FFT, and geometry libraries is very effective.
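[ed.: e.g., a minimal modint sketch in C++, the kind of library code meant above; the modulus 998244353 is just a common contest choice, and the inverse relies on the modulus being prime:]

// Minimal modular-arithmetic type; MOD is assumed prime (Fermat inverse).
struct Mint {
    static const long long MOD = 998244353;    // common contest modulus
    long long x;
    Mint(long long v = 0) : x((v % MOD + MOD) % MOD) {}
    Mint operator+(Mint o) const { return Mint(x + o.x); }
    Mint operator-(Mint o) const { return Mint(x - o.x); }
    Mint operator*(Mint o) const { return Mint(x * o.x); }
    Mint pow(long long e) const {              // binary exponentiation
        Mint b = *this, r = 1;
        for (; e > 0; e >>= 1, b = b * b)
            if (e & 1) r = r * b;
        return r;
    }
    Mint inv() const { return pow(MOD - 2); }  // valid because MOD is prime
    Mint operator/(Mint o) const { return *this * o.inv(); }
};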

2200 to 2400:
Ratings 2200 and 2400 are actually very different ...

To reach rating 2400, the following skills are needed:
- The skills stated in the previous section (rating 2200)
- The ability to solve difficult problems: ones solved by fewer than 100 people in Div1 contests

...

First, there are a lot of educational problems on AtCoder. I recommend solving problems E and F (especially the 700-900 point problems) of AtCoder Regular Contests, especially ARC058-ARC090. Old AtCoder Regular Contests are balanced between "thinking" and "typical" problems, but sadly, AtCoder Grand Contests and recent AtCoder Regular Contests are too biased toward thinking problems in my opinion, so I don't recommend them if your goal is gaining rating on Codeforces. (Though if you want a rating above 2600, you should solve AtCoder Grand Contest problems.)

For me, actually, after solving AtCoder Regular Contests, my average performance in CF virtual contests increased from 2100 to 2300 (I could not reach 2400 because the start was early).

If you cannot solve a problem, I recommend giving up and reading the editorial on roughly this schedule:

Point value          600     700     800     900     1000+
CF rating            R2000   R2200   R2400   R2600   R2800
Time to editorial    40 min  50 min  60 min  70 min  80 min

If you solve AtCoder's educational problems, your competitive programming skills will improve. But there is one more issue: without practical skills, your rating won't increase. So you should do 50+ virtual participations (especially Div.1) on Codeforces. In virtual participation, you can learn how to compete as a purple/orange-ranked coder (e.g. strategy) and how to apply the skills you learned on AtCoder in Codeforces contests. I strongly recommend reading the editorials of all problems after the virtual contest, except ones that are too difficult (e.g. solved by fewer than 30 people in-contest). I also recommend writing notes afterwards, reflecting on strategy, lessons, and improvements.

In addition, about once a week, I recommend making time to think about a much more difficult problem (e.g. R2800 on Codeforces) for a couple of hours. If you cannot reach the solution after thinking for a couple of hours, read the editorial, because you can learn a lot from it. Solving high-level problems may give you the chance to gain over 100 rating points in a single contest, and it also helps you solve easier problems faster.
oly  oly-programming  problem-solving  learning  practice  accretion  strategy  hmm  pdf  guide  reflection  advice  wire-guided  marginal  stylized-facts  speed  time  cost-benefit  tools  multi  sleuthin  review  comparison  puzzles  contest  aggregator  recommendations  objektbuch  time-use  growth  studying  🖥  👳  yoga 
august 2019 by nhaliday
The 'science' of training in competitive programming - Codeforces
"Hard problems" is subjective. A good rule of thumb for learning problem solving (at least according to me) is that your problem selection is good if you fail to solve roughly 50% of problems you attempt. Anything in [20%,80%] should still be fine, although many people have problems staying motivated if they fail too often. Read solutions for problems you fail to solve.

(There is some actual math behind this. Hopefully one day I'll have the time to write it down.)
- misof in a comment
--
I don't believe in claims like "either you solve it in 30 minutes to a few hours, or you never solve it at all". There are some algorithms that look like magic at first glance, like polynomial hashing, interval trees, or FFT (which is magic even at tenth glance :P), but there are not many of them, and the vast majority of algorithms can be invented on your own, for example DP. In high school I solved many problems from the IMO and the PMO, and when I didn't solve a problem I would try it again later; I solved some problems on something like the third attempt. If we restrict ourselves to beginners, I think this still holds true, but it would be better to read solutions after some time, because there are so many other things to learn; better not to get stuck on one particular problem when there are hundreds of other important concepts to pick up.
oly  oly-programming  problem-solving  learning  practice  accretion  strategy  marginal  wire-guided  stylized-facts  hmm  advice  tactics  time  time-use  cost-benefit  growth  studying  🖥  👳 
august 2019 by nhaliday
LeetCode - The World's Leading Online Programming Learning Platform
very much targeted toward interview prep
https://www.quora.com/Is-LeetCode-Online-Judges-premium-membership-really-worth-it
This data is especially valuable because you get to know a company's interview style beforehand. For example, most questions that appeared in Facebook interviews have short solutions, typically not more than 30 lines of code; their interview process focuses on your ability to write clean, concise code. On the other hand, Google-style interviews lean more to the analytical side and are algorithm-heavy, typically with multiple solutions to a question, each with a different runtime complexity.
programming  tech  career  working-stiff  recruiting  interview-prep  algorithms  problem-solving  oly-programming  multi  q-n-a  qra  comparison  stylized-facts  facebook  google  cost-benefit  homo-hetero  startups  organization  alien-character  🖥  contest  puzzles  accretion  transitions  money-for-time 
june 2019 by nhaliday
About - Project Euler
I've written my program but should it take days to get to the answer?
Absolutely not! Each problem has been designed according to a "one-minute rule", which means that although it may take several hours to design a successful algorithm with more difficult problems, an efficient implementation will allow a solution to be obtained on a modestly powered computer in less than one minute.
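[ed.: to illustrate the one-minute rule, Problem 10 (sum of all primes below two million) falls to a simple C++ sieve in a small fraction of a second:]

#include <cstdio>
#include <vector>

int main() {
    const int N = 2000000;
    std::vector<bool> composite(N, false);
    long long sum = 0;
    for (int i = 2; i < N; i++) {
        if (composite[i]) continue;            // i survived the sieve: prime
        sum += i;
        for (long long j = (long long)i * i; j < N; j += i)
            composite[j] = true;               // strike out multiples of i
    }
    printf("%lld\n", sum);
}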
math  rec-math  math.NT  math.CO  programming  oly  database  community  forum  stream  problem-solving  accretion  puzzles  contest  🖥  👳 
june 2019 by nhaliday
What every computer scientist should know about floating-point arithmetic
Floating-point arithmetic is considered an esoteric subject by many people. This is rather surprising, because floating-point is ubiquitous in computer systems: almost every language has a floating-point datatype; computers from PCs to supercomputers have floating-point accelerators; most compilers will be called upon to compile floating-point algorithms from time to time; and virtually every operating system must respond to floating-point exceptions such as overflow. This paper presents a tutorial on the aspects of floating-point that have a direct impact on designers of computer systems. It begins with background on floating-point representation and rounding error, continues with a discussion of the IEEE floating point standard, and concludes with examples of how computer system builders can better support floating point.

Float Toy: http://evanw.github.io/float-toy/
https://news.ycombinator.com/item?id=22113485

https://stackoverflow.com/questions/2729637/does-epsilon-really-guarantees-anything-in-floating-point-computations
"you must use an epsilon when dealing with floats" is a knee-jerk reaction of programmers with a superficial understanding of floating-point computations, for comparisons in general (not only to zero).

This is usually unhelpful because it doesn't tell you how to minimize the propagation of rounding errors, it doesn't tell you how to avoid cancellation or absorption problems, and even when your problem is indeed related to the comparison of two floats, it doesn't tell you what value of epsilon is right for what you are doing.
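[ed.: a small C++ illustration of the point; the values are chosen for demonstration. Any fixed epsilon is simultaneously too coarse for small operands and too fine for large ones, which is why a relative tolerance is usually what is actually wanted:]

#include <cmath>
#include <cstdio>

int main() {
    const double eps = 1e-9;                   // a typical knee-jerk epsilon
    double a = 1e-12, b = 2e-12;               // differ by a factor of two...
    printf("%d\n", std::fabs(a - b) < eps);    // ...yet compare "equal": 1
    double c = 1e18, d = 1e18 + 256;           // two ulps apart at this scale...
    printf("%d\n", std::fabs(c - d) < eps);    // ...yet compare "different": 0
    // A relative test scales the tolerance to the operands instead:
    //   fabs(a - b) <= relTol * fmax(fabs(a), fabs(b))
}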

...

Regarding the propagation of rounding errors, there exist specialized analyzers that can help you estimate it, because it is a tedious thing to do by hand.

https://www.di.ens.fr/~cousot/projects/DAEDALUS/synthetic_summary/CEA/Fluctuat/index.html

This was part of HW1 of CS24:
https://en.wikipedia.org/wiki/Kahan_summation_algorithm
In particular, simply summing n numbers in sequence has a worst-case error that grows proportional to n, and a root mean square error that grows as √n for random inputs (the roundoff errors form a random walk).[2] With compensated summation, the worst-case error bound is independent of n, so a large number of values can be summed with an error that only depends on the floating-point precision.[2]
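[ed.: the compensated summation described above, as a C++ sketch; note that aggressive compiler flags like -ffast-math can legally optimize the compensation away:]

#include <vector>

// Kahan (compensated) summation: c carries the low-order bits that were
// lost when a small term was added to the large running sum.
double kahan_sum(const std::vector<double>& xs) {
    double sum = 0.0, c = 0.0;
    for (double x : xs) {
        double y = x - c;        // re-inject previously lost bits
        double t = sum + y;      // big + small: low bits of y get dropped
        c = (t - sum) - y;       // algebraically zero; numerically, the loss
        sum = t;
    }
    return sum;
}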

cf:
https://en.wikipedia.org/wiki/Pairwise_summation
In numerical analysis, pairwise summation, also called cascade summation, is a technique to sum a sequence of finite-precision floating-point numbers that substantially reduces the accumulated round-off error compared to naively accumulating the sum in sequence.[1] Although there are other techniques such as Kahan summation that typically have even smaller round-off errors, pairwise summation is nearly as good (differing only by a logarithmic factor) while having much lower computational cost—it can be implemented so as to have nearly the same cost (and exactly the same number of arithmetic operations) as naive summation.

In particular, pairwise summation of a sequence of n numbers x_1, ..., x_n works by recursively breaking the sequence into two halves, summing each half, and adding the two sums: a divide and conquer algorithm. Its worst-case roundoff errors grow asymptotically as at most O(ε log n), where ε is the machine precision (assuming a fixed condition number, as discussed below).[1] In comparison, the naive technique of accumulating the sum in sequence (adding each x_i one at a time for i = 1, ..., n) has roundoff errors that grow at worst as O(εn).[1] Kahan summation has a worst-case error of roughly O(ε), independent of n, but requires several times more arithmetic operations.[1] If the roundoff errors are random, and in particular have random signs, then they form a random walk and the error growth is reduced to an average of O(ε √(log n)) for pairwise summation.[2]
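[ed.: a direct C++ transcription of that description; the base-case size of 8 is an arbitrary small constant:]

#include <cstddef>

// Pairwise (cascade) summation: split in half, sum each half recursively,
// add the two partial sums. Worst-case error O(eps * log n) versus O(eps * n)
// for the naive loop, at essentially the same arithmetic cost.
double pairwise_sum(const double* x, std::size_t n) {
    if (n <= 8) {                              // small base case: plain loop
        double s = 0.0;
        for (std::size_t i = 0; i < n; i++) s += x[i];
        return s;
    }
    std::size_t m = n / 2;
    return pairwise_sum(x, m) + pairwise_sum(x + m, n - m);
}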

A very similar recursive structure of summation is found in many fast Fourier transform (FFT) algorithms, and is responsible for the same slow roundoff accumulation of those FFTs.[2][3]

https://eng.libretexts.org/Bookshelves/Electrical_Engineering/Book%3A_Fast_Fourier_Transforms_(Burrus)/10%3A_Implementing_FFTs_in_Practice/10.8%3A_Numerical_Accuracy_in_FFTs
However, these encouraging error-growth rates only apply if the trigonometric "twiddle" factors in the FFT algorithm are computed very accurately. Many FFT implementations, including FFTW and common manufacturer-optimized libraries, therefore use precomputed tables of twiddle factors calculated by means of standard library functions (which compute trigonometric constants to roughly machine precision). The other common method to compute twiddle factors is to use a trigonometric recurrence formula—this saves memory (and cache), but almost all recurrences have errors that grow as O(√n), O(n), or even O(n²), which lead to corresponding errors in the FFT.

...

There are, in fact, trigonometric recurrences with the same logarithmic error growth as the FFT, but these seem more difficult to implement efficiently; they require that a table of Θ(log n) values be stored and updated as the recurrence progresses. Instead, in order to gain at least some of the benefits of a trigonometric recurrence (reduced memory pressure at the expense of more arithmetic), FFTW includes several ways to compute a much smaller twiddle table, from which the desired entries can be computed accurately on the fly using a bounded number (usually < 3) of complex multiplications. For example, instead of a twiddle table with n entries ω_n^k, FFTW can use two tables with Θ(√n) entries each, so that ω_n^k is computed by multiplying an entry in one table (indexed with the low-order bits of k) by an entry in the other table (indexed with the high-order bits of k).
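[ed.: a C++ sketch of that two-table split, assuming n is a power of two; real FFTW is considerably more careful, but the index decomposition k = (k/S)*S + (k%S) is the essence:]

#include <cmath>
#include <complex>
#include <cstddef>
#include <vector>

// Two-table twiddle lookup: w_n^k = exp(-2*pi*i*k/n) recovered from about
// 2*sqrt(n) precomputed values with a single complex multiplication.
struct TwiddleTable {
    std::size_t n, S;                          // S ~ sqrt(n), a power of two
    std::vector<std::complex<double>> lo, hi;  // lo[j] = w_n^j, hi[j] = w_n^(j*S)
    explicit TwiddleTable(std::size_t n) : n(n) {
        S = 1;
        while (S * S < n) S <<= 1;
        const double PI = std::acos(-1.0);
        lo.resize(S);
        hi.resize(n / S + 1);
        for (std::size_t j = 0; j < lo.size(); j++)
            lo[j] = std::polar(1.0, -2.0 * PI * (double)j / (double)n);
        for (std::size_t j = 0; j < hi.size(); j++)
            hi[j] = std::polar(1.0, -2.0 * PI * (double)(j * S) / (double)n);
    }
    std::complex<double> w(std::size_t k) const {
        return hi[k / S] * lo[k % S];          // high bits * low bits of k
    }
};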

[ed.: Nicholas Higham's "Accuracy and Stability of Numerical Algorithms" seems like a good reference for this kind of analysis.]
nibble  pdf  papers  programming  systems  numerics  nitty-gritty  intricacy  approximation  accuracy  types  sci-comp  multi  q-n-a  stackex  hmm  oly-programming  accretion  formal-methods  yak-shaving  wiki  reference  algorithms  yoga  ground-up  divide-and-conquer  fourier  books  tidbits  chart  caltech  nostalgia  dynamic  calculator  visualization  protocol-metadata  identity 
may 2019 by nhaliday
self study - Looking for a good and complete probability and statistics book - Cross Validated
I never had the opportunity to take a stats course from a math faculty. I am looking for a probability theory and statistics book that is complete and self-sufficient. By complete I mean that it contains all the proofs and does not just state results.
nibble  q-n-a  overflow  data-science  stats  methodology  books  recommendations  list  top-n  confluence  proofs  rigor  reference  accretion 
october 2017 by nhaliday
Talks
Quantum Supremacy: Office of Science and Technology Policy QIS Forum, Eisenhower Executive Office Building, White House Complex, Washington DC, October 18, 2016. Another version at UTCS Faculty Lunch, October 26, 2016. Another version at UT Austin Physics Colloquium, Austin, TX, November 9, 2016.

Complexity-Theoretic Foundations of Quantum Supremacy Experiments: Quantum Algorithms Workshop, Aspen Center for Physics, Aspen, CO, March 25, 2016

When Exactly Do Quantum Computers Provide A Speedup?: Yale Quantum Institute Seminar, Yale University, New Haven, CT, October 10, 2014. Another version at UT Austin Physics Colloquium, Austin, TX, November 19, 2014; Applied and Interdisciplinary Mathematics Seminar, Northeastern University, Boston, MA, November 25, 2014; Hebrew University Physics Colloquium, Jerusalem, Israel, January 5, 2015; Computer Science Colloquium, Technion, Haifa, Israel, January 8, 2015; Stanford University Physics Colloquium, January 27, 2015
tcstariat  aaronson  tcs  complexity  quantum  quantum-info  talks  list  slides  accretion  algorithms  applications  physics  nibble  frontier  computation  volo-avolo  speedometer  questions 
may 2017 by nhaliday
Main Page - Competitive Programming Algorithms: E-Maxx Algorithms in English
original russian version: http://e-maxx.ru/algo/

some notable stuff:
- O(N) factorization sieve
- discrete logarithm
- factorial N! (mod P) in O(P log N)
- flow algorithms
- enumerating submasks [ed.: snippet after this list]
- bridges, articulation points
- Ukkonen algorithm
- sqrt(N) trick, e.g., for range mode query
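[ed.: the submask-enumeration trick from the list above, as a standard C++ snippet; over all masks m of n bits the total work is O(3^n):]

#include <vector>

// Enumerate all submasks s of bitmask m, in decreasing order.
std::vector<int> submasks(int m) {
    std::vector<int> out;
    for (int s = m; ; s = (s - 1) & m) {
        out.push_back(s);
        if (s == 0) break;    // break after visiting 0, or the loop wraps
    }
    return out;
}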
explanation  programming  algorithms  russia  foreign-lang  oly  oly-programming  problem-solving  accretion  math.NT  graphs  graph-theory  optimization  data-structures  yoga  tidbits  multi  anglo  language  arrows  strings 
february 2017 by nhaliday
Information Processing: Learn to solve every problem that has been solved
While it may be impossible to achieve Feynman's goal, I'm surprised that more people don't attempt the importance threshold-modified version. Suppose we set the importance bar really, really high: what are the most important results that everyone should try to understand? Here's a very biased partial list: basic physics and mathematics (e.g., to the level of the Feynman Lectures); quantitative theory of genetics and evolution; information, entropy and probability; basic ideas about logic and computation (Godel and Turing?); ... What else? Dynamics of markets? Complex Systems? Psychometrics? Descriptive biology? Organic chemistry?
hsu  scitariat  feynman  giants  stories  aphorism  curiosity  interdisciplinary  frontier  signal-noise  top-n  discussion  caltech  problem-solving  big-picture  vitality  🎓  virtu  big-surf  courage  🔬  allodium  nietzschean  ideas  quixotic  accretion  learning  hi-order-bits 
february 2017 by nhaliday