
nhaliday : hacker   55

Introduction · CTF Field Guide
also has some decent looking career advice and links to books/courses if I ever get interested in infosec stuff
guide  security  links  list  recommendations  contest  puzzles  hacker  init  adversarial  systems  traces  accretion  programming  debugging  assembly  c(pp)  metal-to-virtual  career  planning  jobs  books  course  learning  threat-modeling  tech  working-stiff 
december 2019 by nhaliday
Ask HN: Learning modern web design and CSS | Hacker News
Ask HN: Best way to learn HTML and CSS for web design?:
Ask HN: How to learn design as a hacker?:

Ask HN: How to learn front-end beyond the basics?:
Ask HN: What is the best JavaScript stack for a beginner to learn?:
Free resources for learning full-stack web development:

Ask HN: What is essential reading for learning modern web development?:
Ask HN: A Syllabus for Modern Web Development?:

Ask HN: Modern day web development for someone who last did it 15 years ago:
hn  discussion  design  form-design  frontend  web  tutorial  links  recommendations  init  pareto  efficiency  minimum-viable  move-fast-(and-break-things)  advice  roadmap  multi  hacker  games  puzzles  learning  guide  dynamic  retention  DSL  working-stiff  q-n-a  javascript  frameworks  ecosystem  libraries  client-server  hci  ux  books  chart 
october 2019 by nhaliday
Anti-hash test. - Codeforces
- Thue-Morse sequence
- nice paper:
In general, polynomial string hashing is a useful technique in construction of efficient string algorithms. One simply needs to remember to carefully select the modulus M and the variable of the polynomial p depending on the application. A good rule of thumb is to pick both values as prime numbers, with M as large as possible so that no integer overflow occurs and p at least the size of the alphabet.
2.2. Upper Bound on M
[stuff about 32- and 64-bit integers]
2.3. Lower Bound on M
On the other side M is bounded due to the well-known birthday paradox: if we consider a collection of m keys with m ≥ 1.2√M then the chance of a collision occurring within this collection is at least 50% (assuming that the distribution of fingerprints is close to uniform on the set of all strings). Thus if the birthday paradox applies then one needs to choose M = ω(m^2) to have a fair chance of avoiding a collision. However, one should note that the birthday paradox does not always apply. As a benchmark consider the following two problems.
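A quick sanity check of the birthday bound (a minimal sketch; 10^9+7 is just a representative hashing prime, not something the quoted paper prescribes):

```python
import math

def collision_prob(m, M):
    """Exact probability that m independent uniform fingerprints in [0, M) collide."""
    p_no_collision = 1.0
    for i in range(m):
        p_no_collision *= (M - i) / M
    return 1.0 - p_no_collision

M = 10**9 + 7                 # a typical hashing prime
m = int(1.2 * math.sqrt(M))   # ~38k keys
print(collision_prob(m, M))   # ≈ 0.51, matching the "at least 50%" claim
```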

I generally prefer to use Schwartz-Zippel to reason about collision probabilities w/ this kind of thing, eg,

A good way to get more accurate results: just use multiple primes and the Chinese remainder theorem to get as large an M as you need w/o going beyond 64-bit arithmetic.
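A minimal sketch of that idea: hash under two independent primes, so by the Chinese remainder theorem the pair behaves like a single fingerprint modulo M1·M2 ≈ 10^18, while each update stays within 64-bit range. The particular primes and bases here are common competitive-programming choices, assumed for illustration:

```python
M1, M2 = 10**9 + 7, 998_244_353   # two distinct primes
P1, P2 = 131, 137                  # bases at least the alphabet size

def poly_hash(s):
    """Polynomial hash of s under two moduli; a collision must occur mod both."""
    h1 = h2 = 0
    for ch in s:
        h1 = (h1 * P1 + ord(ch)) % M1
        h2 = (h2 * P2 + ord(ch)) % M2
    return h1, h2

print(poly_hash("abc") == poly_hash("abc"))  # True
print(poly_hash("abc") == poly_hash("acb"))  # False
```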

more on this:
oly  oly-programming  gotchas  howto  hashing  algorithms  strings  random  best-practices  counterexample  multi  pdf  papers  nibble  examples  fields  polynomials  lecture-notes  yoga  probability  estimate  magnitude  hacker  adversarial  CAS  lattice  discrete 
august 2019 by nhaliday
history - Why are UNIX/POSIX system call namings so illegible? - Unix & Linux Stack Exchange
It's due to the technical constraints of the time. The POSIX standard was created in the 1980s and referred to UNIX, which was born in the 1970s. Several C compilers at that time were limited to identifiers that were 6 or 8 characters long, so that settled the standard for the length of variable and function names.
We carried out a family of controlled experiments to investigate whether the use of abbreviated identifier names, with respect to full-word identifier names, affects fault fixing in C and Java source code. This family consists of an original (or baseline) controlled experiment and three replications. We involved 100 participants with different backgrounds and experiences in total. Overall results suggested that there is no difference in terms of effort, effectiveness, and efficiency to fix faults, when source code contains either only abbreviated or only full-word identifier names. We also conducted a qualitative study to understand the values, beliefs, and assumptions that inform and shape fault fixing when identifier names are either abbreviated or full-word. We involved in this qualitative study six professional developers with 1--3 years of work experience. A number of insights emerged from this qualitative study and can be considered a useful complement to the quantitative results from our family of experiments. One of the most interesting insights is that developers, when working on source code with abbreviated identifier names, adopt a more methodical approach to identify and fix faults by extending their focus point and only in a few cases do they expand abbreviated identifiers.
q-n-a  stackex  trivia  programming  os  systems  legacy  legibility  ux  libraries  unix  linux  hacker  cracker-prog  multi  evidence-based  empirical  expert-experience  engineering  study  best-practices  comparison  quality  debugging  efficiency  time  code-organizing  grokkability  grokkability-clarity 
july 2019 by nhaliday
Home is a small, engineless sailboat (2018) | Hacker News
Her deck looked disorderly; metal pipes lying on either side of the cabin, what might have been a bed sheet or sail cover (or one and the same) bunched between oxidized turnbuckles and portlights. A purple hula hoop. A green bucket. Several small, carefully potted plants. At the stern, a weathered tree limb lashed to a metal cradle – the arm of a sculling oar. There was no motor. The transom was partially obscured by a wind vane and Alexandra’s years of exposure to the elements were on full display.


Sean is a programmer, a fervent believer in free open source code – software programs available to the public to use and/or modify free of charge. His only computer is the Raspberry Pi he uses to code and control his autopilot, which he calls pypilot. Sean is also a programmer for and regular contributor to OpenCPN Chart Plotter Navigation, free open source software for cruisers. “I mostly write the graphics or the way it draws the chart, but a lot more than that, like how it draws the weather patterns and how it can calculate routes, like you should sail this way.”

from the comments:
Have also read both; they're fascinating in different ways. Paul Lutus has a boat full of technology (diesel engine, laptop, radio, navigation tools, and more) but his book is an intensely - almost uncomfortably - personal voyage through his psyche, while he happens to be sailing around the world. A diary of reflections on life, struggles with people, views on science, observations on the stars and sky and waves, poignant writing on how being at sea affects people, while he happens to be sailing around the world. It's better for that, more relatable as a geek, sadder and more emotional; I consider it a good read, and I reflect on it a lot.
Captain Slocum's voyage of 1896(?) is so different; he took an old clock, and not much else, he lashes the tiller and goes down below for hours at a time to read or sleep without worrying about crashing into other boats, he tells stories of mouldy cheese induced nightmares during rough seas or chasing natives away from robbing him, or finding remote islands with communities of slightly odd people. Much of his writing is about the people he meets - they often know in advance he's making a historic voyage, so when he arrives anywhere, there's a big fuss, he's invited to dine with local dignitaries or captains of large ships, gifted interesting foods and boat parts; there's a lot of interesting things about the world of 1896. (There's also quite a bit of tedious place names and locations and passages where nothing much happens; I'm not that interested in the geography of it.)
hn  commentary  oceans  books  reflection  stories  track-record  world  minimum-viable  dirty-hands  links  frontier  allodium  prepping  navigation  oss  hacker 
july 2019 by nhaliday
unix - How can I profile C++ code running on Linux? - Stack Overflow
If your goal is to use a profiler, use one of the suggested ones.

However, if you're in a hurry and you can manually interrupt your program under the debugger while it's being subjectively slow, there's a simple way to find performance problems.

Just halt it several times, and each time look at the call stack. If there is some code that is wasting some percentage of the time, 20% or 50% or whatever, that is the probability that you will catch it in the act on each sample. So that is roughly the percentage of samples on which you will see it. There is no educated guesswork required. If you do have a guess as to what the problem is, this will prove or disprove it.
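The claim above can be quantified (a small sketch, not from the answer itself): if a spot accounts for fraction p of the runtime, each random pause lands in it with probability p, so the chance of seeing it at least once in n pauses is 1 − (1 − p)^n.

```python
def p_seen(p, n):
    """Chance that code wasting fraction p of runtime appears in >= 1 of n stack samples."""
    return 1 - (1 - p) ** n

print(round(p_seen(0.20, 10), 2))  # 0.89 -- even a 20% problem rarely hides from ten pauses
```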

You may have multiple performance problems of different sizes. If you clean out any one of them, the remaining ones will take a larger percentage, and be easier to spot, on subsequent passes. This magnification effect, when compounded over multiple problems, can lead to truly massive speedup factors.
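The magnification effect compounds multiplicatively; a small illustration with assumed numbers:

```python
def total_speedup(fractions):
    """Overall speedup after passes that each remove the given
    fraction of the runtime remaining at that point."""
    remaining = 1.0
    for f in fractions:
        remaining *= 1.0 - f
    return 1.0 / remaining

print(total_speedup([0.5, 0.5, 0.5]))  # 8.0: three 2x fixes compound to an 8x speedup
```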

Caveat: Programmers tend to be skeptical of this technique unless they've used it themselves. They will say that profilers give you this information, but that is only true if they sample the entire call stack, and then let you examine a random set of samples. (The summaries are where the insight is lost.) Call graphs don't give you the same information, because they don't summarize at the instruction level, and they give confusing summaries in the presence of recursion.
They will also say it only works on toy programs, when actually it works on any program, and it seems to work better on bigger programs, because they tend to have more problems to find. They will say it sometimes finds things that aren't problems, but that is only true if you see something once. If you see a problem on more than one sample, it is real.
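The manual pause-and-inspect technique can even be automated in-process. A rough Python sketch (the workload functions are invented for illustration, and `sys._current_frames` is a CPython-specific API) that samples a worker thread's stack and counts where the time goes:

```python
import collections
import sys
import threading
import time
import traceback

def slow_part():                 # busy loop standing in for the real hotspot
    end = time.time() + 0.5
    while time.time() < end:
        pass

def fast_part():
    time.sleep(0.01)

def work():
    fast_part()
    slow_part()

samples = collections.Counter()
done = threading.Event()

def sampler(thread_id, interval=0.01):
    # Periodically grab the worker's current stack and tally every frame on it.
    while not done.is_set():
        frame = sys._current_frames().get(thread_id)  # CPython-specific
        if frame is not None:
            for entry in traceback.extract_stack(frame):
                samples[entry.name] += 1
        time.sleep(interval)

worker = threading.Thread(target=work)
worker.start()
threading.Thread(target=sampler, args=(worker.ident,), daemon=True).start()
worker.join()
done.set()

print(samples["slow_part"], samples["fast_part"])  # the busy loop dominates the samples
```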

gprof, Valgrind and gperftools - an evaluation of some tools for application level CPU profiling on Linux:
gprof is the dinosaur among the evaluated profilers - its roots go back into the 1980’s. It seems it was widely used and a good solution during the past decades. But its limited support for multi-threaded applications, the inability to profile shared libraries and the need for recompilation with compatible compilers and special flags that produce a considerable runtime overhead, make it unsuitable for using it in today’s real-world projects.

Valgrind delivers the most accurate results and is well suited for multi-threaded applications. It’s very easy to use and there is KCachegrind for visualization/analysis of the profiling data, but the slow execution of the application under test disqualifies it for larger, longer running applications.

The gperftools CPU profiler has very little runtime overhead, provides some nice features like selectively profiling certain areas of interest, and has no problem with multi-threaded applications. KCachegrind can be used to analyze the profiling data. Like all sampling-based profilers, it suffers from statistical inaccuracy and therefore the results are not as accurate as with Valgrind, but in practice that's usually not a big problem (you can always increase the sampling frequency if you need more accurate results). I'm using this profiler on a large code-base and from my personal experience I can definitely recommend using it.
q-n-a  stackex  programming  engineering  performance  devtools  tools  advice  checklists  hacker  nitty-gritty  tricks  lol  multi  unix  linux  techtariat  analysis  comparison  recommendations  software  measurement  oly-programming  concurrency  debugging  metabuch 
may 2019 by nhaliday
Complexity no Bar to AI -
Critics of AI risk suggest diminishing returns to computing (formalized asymptotically) means AI will be weak; this argument relies on a large number of questionable premises and ignoring additional resources, constant factors, and nonlinear returns to small intelligence advantages, and is highly unlikely. (computer science, transhumanism, AI, R)
created: 1 June 2014; modified: 01 Feb 2018; status: finished; confidence: likely; importance: 10
ratty  gwern  analysis  faq  ai  risk  speedometer  intelligence  futurism  cs  computation  complexity  tcs  linear-algebra  nonlinearity  convexity-curvature  average-case  adversarial  article  time-complexity  singularity  iteration-recursion  magnitude  multiplicative  lower-bounds  no-go  performance  hardware  humanity  psychology  cog-psych  psychometrics  iq  distribution  moments  complement-substitute  hanson  ems  enhancement  parable  detail-architecture  universalism-particularism  neuro  ai-control  environment  climate-change  threat-modeling  security  theory-practice  hacker  academia  realness  crypto  rigorous-crypto  usa  government 
april 2018 by nhaliday
National Defense Strategy of the United States of America
National Defense Strategy released with clear priority: Stay ahead of Russia and China:
A saner allocation of US 'defense' funds would be something like 10% nuclear trident, 10% border patrol, & spend the rest inoculating against cyber & biological attacks.
and since the latter 2 are hopeless, just refund 80% of the defense budget.
Monopoly on force at sea is arguably worthwhile.
Given the value of the US market to any would-be adversary, id be willing to roll the dice & let it ride.
subs are part of the triad, surface ships are sitting ducks this day and age
But nobody does sink them, precisely because of the monopoly on force. It's a path-dependent equilibrium where (for now) no other actor can reap the benefits of destabilizing the monopoly, and we're probably drastically underestimating the ramifications if/when it goes away.
can lethal autonomous weapon systems get some
pdf  white-paper  org:gov  usa  government  trump  policy  nascent-state  foreign-policy  realpolitik  authoritarianism  china  asia  russia  antidemos  military  defense  world  values  enlightenment-renaissance-restoration-reformation  democracy  chart  politics  current-events  sulla  nuclear  arms  deterrence  strategy  technology  sky  oceans  korea  communism  innovation  india  europe  EU  MENA  multi  org:foreign  war  great-powers  thucydides  competition  twitter  social  discussion  backup  gnon  🐸  markets  trade  nationalism-globalism  equilibrium  game-theory  tactics  top-n  hi-order-bits  security  hacker  biotech  terrorism  disease  parasites-microbiome  migration  walls  internet 
january 2018 by nhaliday
The Membrane – spottedtoad
All of which is to say that the Internet, which shares many qualities in common with an assemblage of living things except for those clear boundaries and defenses, might well not trend toward increased usability or easier exchange of information over the longer term, even if that is what we have experienced heretofore. The history of evolution is every bit as much a history of parasitism and counterparasitism as it is any kind of story of upward movement toward greater complexity or order. There is no reason to think that we (and still less national or political entities) will necessarily experience technology as a means of enablement and Cool Stuff We Can Do rather than a perpetual set of defenses against scammers of our money and attention. There’s the respect that makes Fake News the news that matters forever more.

ai robocalls/phonetrees/Indian Ocean call centers~biologicalization of corporations thru automation&global com tech

fly-by-night scams double mitotically,covered by outer membrane slime&peptidoglycan

trillion $ corps w/nonspecific skin/neutrophils/specific B/T cells against YOU
ratty  unaffiliated  contrarianism  walls  internet  hacker  risk  futurism  speculation  wonkish  chart  red-queen  parasites-microbiome  analogy  prediction  unintended-consequences  security  open-closed  multi  pdf  white-paper  propaganda  ai  offense-defense  ecology  cybernetics  pessimism  twitter  social  discussion  backup  bio  automation  cooperate-defect  coordination  attention  crypto  money  corporation  accelerationism  threat-modeling  alignment  cost-benefit  interface  interface-compatibility 
december 2016 by nhaliday
Colin Percival's homepage
interesting links to Hacker News highlights
people  security  engineering  programming  links  blog  stream  hn  hacker  🖥  techtariat  cracker-prog 
october 2016 by nhaliday
Home-Built STM | Dan Berard
diy scanning tunneling microscope
diy  guide  hacker  maker  dirty-hands 
june 2016 by nhaliday