nhaliday : identity   76

exponential function - Feynman's Trick for Approximating $e^x$ - Mathematics Stack Exchange
1. e^2.3 ~ 10
2. e^.7 ~ 2
3. e^x ~ 1+x
e = 2.71828...

errors (absolute, relative):
1. +0.0258, 0.26%
2. -0.0138, -0.68%
3. 1 + x approximates e^x on [-.3, .3] with absolute error < .05, and relative error < 5.6% (3.7% for [0, .3]).
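
A minimal sketch of how the three rules compose for arbitrary x, assuming you peel off integer multiples of 2.3 and 0.7 and linearize the remainder (the function and decomposition are illustrative, not quoted from the answer):

def feynman_exp(x):
    # e^2.3 ~ 10 and e^0.7 ~ 2: strip integer multiples of 2.3, then 0.7
    a, rem = divmod(x, 2.3)
    b, r = divmod(rem, 0.7)
    # e^r ~ 1 + r for the remainder; r lands in [0, 0.7), so the relative
    # error can exceed the [-0.3, 0.3] bounds quoted above near r ~ 0.7
    return 10.0 ** a * 2.0 ** b * (1.0 + r)

print(feynman_exp(3.0))  # ~20.0 vs e^3 = 20.09
print(feynman_exp(1.0))  # ~2.6 vs e^1 = 2.718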
nibble  q-n-a  overflow  math  feynman  giants  mental-math  calculation  multiplicative  AMT  identity  objektbuch  explanation  howto  estimate  street-fighting  stories  approximation  data  trivia  nitty-gritty
october 2019 by nhaliday
What every computer scientist should know about floating-point arithmetic
Floating-point arithmetic is considered an esoteric subject by many people. This is rather surprising, because floating-point is ubiquitous in computer systems: Almost every language has a floating-point datatype; computers from PCs to supercomputers have floating-point accelerators; most compilers will be called upon to compile floating-point algorithms from time to time; and virtually every operating system must respond to floating-point exceptions such as overflow. This paper presents a tutorial on the aspects of floating-point that have a direct impact on designers of computer systems. It begins with background on floating-point representation and rounding error, continues with a discussion of the IEEE floating point standard, and concludes with examples of how computer system builders can better support floating point.

Float Toy: http://evanw.github.io/float-toy/
https://news.ycombinator.com/item?id=22113485

https://stackoverflow.com/questions/2729637/does-epsilon-really-guarantees-anything-in-floating-point-computations
"you must use an epsilon when dealing with floats" is a knee-jerk reaction of programmers with a superficial understanding of floating-point computations, for comparisons in general (not only to zero).

This is usually unhelpful because it doesn't tell you how to minimize the propagation of rounding errors, it doesn't tell you how to avoid cancellation or absorption problems, and even when your problem is indeed related to the comparison of two floats, it doesn't tell you what value of epsilon is right for what you are doing.

...

Regarding the propagation of rounding errors, there exist specialized analyzers that can help you estimate it, because it is a tedious thing to do by hand.

https://www.di.ens.fr/~cousot/projects/DAEDALUS/synthetic_summary/CEA/Fluctuat/index.html

This was part of HW1 of CS24:
https://en.wikipedia.org/wiki/Kahan_summation_algorithm
In particular, simply summing n numbers in sequence has a worst-case error that grows proportional to n, and a root mean square error that grows as √n for random inputs (the roundoff errors form a random walk).[2] With compensated summation, the worst-case error bound is independent of n, so a large number of values can be summed with an error that only depends on the floating-point precision.[2]
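
For reference, a straightforward Python rendering of the compensated loop (not the CS24 assignment code):

def kahan_sum(xs):
    total = 0.0
    c = 0.0                    # running compensation for lost low-order bits
    for x in xs:
        y = x - c              # fold in the error recovered last iteration
        t = total + y          # low-order digits of y may be lost here...
        c = (t - total) - y    # ...but are recovered algebraically into c
        total = t
    return total

# e.g. sum([1.0] + [1e-16] * 10) returns exactly 1.0, silently dropping the
# small terms; kahan_sum on the same input retains their contribution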

cf:
https://en.wikipedia.org/wiki/Pairwise_summation
In numerical analysis, pairwise summation, also called cascade summation, is a technique to sum a sequence of finite-precision floating-point numbers that substantially reduces the accumulated round-off error compared to naively accumulating the sum in sequence.[1] Although there are other techniques such as Kahan summation that typically have even smaller round-off errors, pairwise summation is nearly as good (differing only by a logarithmic factor) while having much lower computational cost—it can be implemented so as to have nearly the same cost (and exactly the same number of arithmetic operations) as naive summation.

In particular, pairwise summation of a sequence of n numbers x_1, ..., x_n works by recursively breaking the sequence into two halves, summing each half, and adding the two sums: a divide and conquer algorithm. Its worst-case roundoff errors grow asymptotically as at most O(ε log n), where ε is the machine precision (assuming a fixed condition number, as discussed below).[1] In comparison, the naive technique of accumulating the sum in sequence (adding each x_i one at a time for i = 1, ..., n) has roundoff errors that grow at worst as O(εn).[1] Kahan summation has a worst-case error of roughly O(ε), independent of n, but requires several times more arithmetic operations.[1] If the roundoff errors are random, and in particular have random signs, then they form a random walk and the error growth is reduced to an average of O(ε √(log n)) for pairwise summation.[2]

A very similar recursive structure of summation is found in many fast Fourier transform (FFT) algorithms, and is responsible for the same slow roundoff accumulation of those FFTs.[2][3]
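
A recursive sketch of the scheme just described (the cutoff below which it falls back to naive summation is an arbitrary implementation choice):

def pairwise_sum(xs, cutoff=8):
    if len(xs) <= cutoff:
        return sum(xs)       # naive accumulation for short runs
    mid = len(xs) // 2       # split, sum each half, add the two sums
    return pairwise_sum(xs[:mid], cutoff) + pairwise_sum(xs[mid:], cutoff)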

https://eng.libretexts.org/Bookshelves/Electrical_Engineering/Book%3A_Fast_Fourier_Transforms_(Burrus)/10%3A_Implementing_FFTs_in_Practice/10.8%3A_Numerical_Accuracy_in_FFTs
However, these encouraging error-growth rates only apply if the trigonometric “twiddle” factors in the FFT algorithm are computed very accurately. Many FFT implementations, including FFTW and common manufacturer-optimized libraries, therefore use precomputed tables of twiddle factors calculated by means of standard library functions (which compute trigonometric constants to roughly machine precision). The other common method to compute twiddle factors is to use a trigonometric recurrence formula—this saves memory (and cache), but almost all recurrences have errors that grow as O(√n), O(n) or even O(n²), which lead to corresponding errors in the FFT.

...

There are, in fact, trigonometric recurrences with the same logarithmic error growth as the FFT, but these seem more difficult to implement efficiently; they require that a table of Θ(log n) values be stored and updated as the recurrence progresses. Instead, in order to gain at least some of the benefits of a trigonometric recurrence (reduced memory pressure at the expense of more arithmetic), FFTW includes several ways to compute a much smaller twiddle table, from which the desired entries can be computed accurately on the fly using a bounded number (usually < 3) of complex multiplications. For example, instead of a twiddle table with n entries ω_n^k, FFTW can use two tables with Θ(√n) entries each, so that ω_n^k is computed by multiplying an entry in one table (indexed with the low-order bits of k) by an entry in the other table (indexed with the high-order bits of k).
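
A sketch of the two-table idea in Python; the split of k into high and low parts and the table sizes are illustrative (FFTW's actual scheme differs in detail):

import cmath

def build_tables(n, m):
    # split k = (k // m) * m + (k % m): one table strides by m, one by 1
    hi = [cmath.exp(-2j * cmath.pi * (j * m) / n) for j in range((n + m - 1) // m)]
    lo = [cmath.exp(-2j * cmath.pi * j / n) for j in range(m)]
    return hi, lo

def twiddle(k, m, hi, lo):
    # omega_n^k = omega_n^((k//m)*m) * omega_n^(k%m): one complex multiply
    return hi[k // m] * lo[k % m]

# with m ~ sqrt(n), each table holds ~sqrt(n) entries instead of n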

[ed.: Nicholas Higham's "Accuracy and Stability of Numerical Algorithms" seems like a good reference for this kind of analysis.]
nibble  pdf  papers  programming  systems  numerics  nitty-gritty  intricacy  approximation  accuracy  types  sci-comp  multi  q-n-a  stackex  hmm  oly-programming  accretion  formal-methods  yak-shaving  wiki  reference  algorithms  yoga  ground-up  divide-and-conquer  fourier  books  tidbits  chart  caltech  nostalgia  dynamic  calculator  visualization  protocol-metadata  identity
may 2019 by nhaliday
Eliminative materialism - Wikipedia
Eliminative materialism (also called eliminativism) is the claim that people's common-sense understanding of the mind (or folk psychology) is false and that certain classes of mental states that most people believe in do not exist.[1] It is a materialist position in the philosophy of mind. Some supporters of eliminativism argue that no coherent neural basis will be found for many everyday psychological concepts such as belief or desire, since they are poorly defined. Rather, they argue that psychological concepts of behaviour and experience should be judged by how well they reduce to the biological level.[2] Other versions entail the non-existence of conscious mental states such as pain and visual perceptions.[3]

Eliminativism about a class of entities is the view that that class of entities does not exist.[4] For example, materialism tends to be eliminativist about the soul; modern chemists are eliminativist about phlogiston; and modern physicists are eliminativist about the existence of luminiferous aether. Eliminative materialism is the relatively new (1960s–1970s) idea that certain classes of mental entities that common sense takes for granted, such as beliefs, desires, and the subjective sensation of pain, do not exist.[5][6] The most common versions are eliminativism about propositional attitudes, as expressed by Paul and Patricia Churchland,[7] and eliminativism about qualia (subjective interpretations about particular instances of subjective experience), as expressed by Daniel Dennett and Georges Rey.[3] These philosophers often appeal to an introspection illusion.

In the context of materialist understandings of psychology, eliminativism stands in opposition to reductive materialism which argues that mental states as conventionally understood do exist, and that they directly correspond to the physical state of the nervous system.[8] An intermediate position is revisionary materialism, which will often argue that the mental state in question will prove to be somewhat reducible to physical phenomena—with some changes needed to the common sense concept.

Since eliminative materialism claims that future research will fail to find a neuronal basis for various mental phenomena, it must necessarily wait for science to progress further. One might question the position on these grounds, but other philosophers like Churchland argue that eliminativism is often necessary in order to open the minds of thinkers to new evidence and better explanations.[8]
concept  conceptual-vocab  philosophy  ideology  thinking  metameta  weird  realness  psychology  cog-psych  neurons  neuro  brain-scan  reduction  complex-systems  cybernetics  wiki  reference  parallax  truth  dennett  within-without  the-self  subjective-objective  absolute-relative  deep-materialism  new-religion  identity  analytical-holistic  systematic-ad-hoc  science  theory-practice  theory-of-mind  applicability-prereqs  nihil  lexical
april 2018 by nhaliday
Harnessing Evolution - with Bret Weinstein | Virtual Futures Salon - YouTube
- ways to get out of Malthusian conditions: expansion to new frontiers, new technology, redistribution/theft
- some discussion of existential risk
- wants to change humanity's "purpose" to one that would be safe in the long run; important thing is it has to be ESS (maybe he wants a singleton?)
- not too impressed by transhumanism (wouldn't identify with a brain emulation)
video  interview  thiel  expert-experience  evolution  deep-materialism  new-religion  sapiens  cultural-dynamics  anthropology  evopsych  sociality  ecology  flexibility  biodet  behavioral-gen  self-interest  interests  moloch  arms  competition  coordination  cooperate-defect  frontier  expansionism  technology  efficiency  thinking  redistribution  open-closed  zero-positive-sum  peace-violence  war  dominant-minority  hypocrisy  dignity  sanctity-degradation  futurism  environment  climate-change  time-preference  long-short-run  population  scale  earth  hidden-motives  game-theory  GT-101  free-riding  innovation  leviathan  malthus  network-structure  risk  existence  civil-liberty  authoritarianism  tribalism  us-them  identity-politics  externalities  unintended-consequences  internet  social  media  pessimism  universalism-particularism  energy-resources  biophysical-econ  politics  coalitions  incentives  attention  epistemic  biases  blowhards  teaching  education  emotion  impetus  comedy  expression-survival  economics  farmers-and-foragers  ca
april 2018 by nhaliday
The Coming Technological Singularity
Within thirty years, we will have the technological
means to create superhuman intelligence. Shortly after,
the human era will be ended.

Is such progress avoidable? If not to be avoided, can
events be guided so that we may survive? These questions
are investigated. Some possible answers (and some further
dangers) are presented.

_What is The Singularity?_

The acceleration of technological progress has been the central
feature of this century. I argue in this paper that we are on the edge
of change comparable to the rise of human life on Earth. The precise
cause of this change is the imminent creation by technology of
entities with greater than human intelligence. There are several means
by which science may achieve this breakthrough (and this is another
reason for having confidence that the event will occur):
o The development of computers that are "awake" and
superhumanly intelligent. (To date, most controversy in the
area of AI relates to whether we can create human equivalence
in a machine. But if the answer is "yes, we can", then there
is little doubt that beings more intelligent can be constructed
shortly thereafter.)
o Large computer networks (and their associated users) may "wake
up" as a superhumanly intelligent entity.
o Computer/human interfaces may become so intimate that users
may reasonably be considered superhumanly intelligent.
o Biological science may find ways to improve upon the natural
human intellect.

The first three possibilities depend in large part on
improvements in computer hardware. Progress in computer hardware has
followed an amazingly steady curve in the last few decades [16]. Based
largely on this trend, I believe that the creation of greater than
human intelligence will occur during the next thirty years. (Charles
Platt [19] has pointed out that AI enthusiasts have been making claims
like this for the last thirty years. Just so I'm not guilty of a
relative-time ambiguity, let me be more specific: I'll be surprised if
this event occurs before 2005 or after 2030.)

What are the consequences of this event? When greater-than-human
intelligence drives progress, that progress will be much more rapid.
In fact, there seems no reason why progress itself would not involve
the creation of still more intelligent entities -- on a still-shorter
time scale. The best analogy that I see is with the evolutionary past:
Animals can adapt to problems and make inventions, but often no faster
than natural selection can do its work -- the world acts as its own
simulator in the case of natural selection. We humans have the ability
to internalize the world and conduct "what if's" in our heads; we can
solve many problems thousands of times faster than natural selection.
Now, by creating the means to execute those simulations at much higher
speeds, we are entering a regime as radically different from our human
past as we humans are from the lower animals.
org:junk  humanity  accelerationism  futurism  prediction  classic  technology  frontier  speedometer  ai  risk  internet  time  essay  rhetoric  network-structure  ai-control  morality  ethics  volo-avolo  egalitarianism-hierarchy  intelligence  scale  giants  scifi-fantasy  speculation  quotes  religion  theos  singularity  flux-stasis  phase-transition  cybernetics  coordination  cooperate-defect  moloch  communication  bits  speed  efficiency  eden-heaven  ecology  benevolence  end-times  good-evil  identity  the-self  whole-partial-many  density
march 2018 by nhaliday
Rare Earth hypothesis: https://en.wikipedia.org/wiki/Rare_Earth_hypothesis
Fine-tuned Universe: https://en.wikipedia.org/wiki/Fine-tuned_Universe
something to keep in mind:
Puddle theory is a term coined by Douglas Adams to satirize arguments that the universe is made for man.[54][55] As stated in Adams' book The Salmon of Doubt:[56]
Imagine a puddle waking up one morning and thinking, “This is an interesting world I find myself in, an interesting hole I find myself in, fits me rather neatly, doesn't it? In fact, it fits me staggeringly well, must have been made to have me in it!” This is such a powerful idea that as the sun rises in the sky and the air heats up and as, gradually, the puddle gets smaller and smaller, it's still frantically hanging on to the notion that everything's going to be all right, because this World was meant to have him in it, was built to have him in it; so the moment he disappears catches him rather by surprise. I think this may be something we need to be on the watch out for.
article  concept  paradox  wiki  reference  fermi  anthropic  space  xenobio  roots  speculation  ideas  risk  threat-modeling  civilization  nihil  🔬  deep-materialism  new-religion  futurism  frontier  technology  communication  simulation  intelligence  eden  war  nuclear  deterrence  identity  questions  multi  explanans  physics  theos  philosophy  religion  chemistry  bio  hmm  idk  degrees-of-freedom  lol  troll  existence
january 2018 by nhaliday
Autoignition temperature - Wikipedia
The autoignition temperature or kindling point of a substance is the lowest temperature at which it spontaneously ignites in normal atmosphere without an external source of ignition, such as a flame or spark. This temperature is required to supply the activation energy needed for combustion. The temperature at which a chemical ignites decreases as the pressure or oxygen concentration increases. It is usually applied to a combustible fuel mixture.

The time t_ig it takes for a material to reach its autoignition temperature T_ig when exposed to a heat flux q″ is, for a thermally thick solid, t_ig = (π/4) k ρ c [(T_ig − T_0)/q″]², where k, ρ, and c are the solid's thermal conductivity, density, and specific heat, and T_0 is its initial temperature.
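
A quick plug-in of the formula above, with made-up material properties for illustration (none of these numbers are from the article):

import math

k, rho, c = 0.2, 500.0, 2000.0   # W/(m K), kg/m^3, J/(kg K): illustrative solid
T_ig, T_0 = 600.0, 300.0         # K: ignition and initial temperatures
q = 30e3                         # W/m^2: incident heat flux
t_ig = (math.pi / 4) * k * rho * c * ((T_ig - T_0) / q) ** 2
print(t_ig, "s")                 # ~16 s for these values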
nibble  wiki  reference  concept  metrics  identity  physics  thermo  temperature  time  stock-flow  phys-energy  chemistry  article  street-fighting  fire  magnitude  data  list
november 2017 by nhaliday
Section 10 Chi-squared goodness-of-fit test.
- pf that chi-squared statistic for Pearson's test (multinomial goodness-of-fit) actually has chi-squared distribution asymptotically
- the gotcha: terms Z_j in sum aren't independent
- solution:
- compute the covariance matrix of the terms to be E[Z_iZ_j] = -sqrt(p_ip_j) for i ≠ j
- note that an equivalent way of sampling the Z_j is to take a random standard Gaussian and project onto the plane orthogonal to (sqrt(p_1), sqrt(p_2), ..., sqrt(p_r))
- that is equivalent to just sampling a Gaussian w/ 1 less dimension (hence df=r-1)
QED
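
A quick Monte Carlo check of the conclusion, comparing Pearson's statistic against chi-squared with r−1 degrees of freedom (cell probabilities arbitrary):

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
p = np.array([0.2, 0.3, 0.5])                           # r = 3 cells
n, trials = 1000, 20000
counts = rng.multinomial(n, p, size=trials)
chi2 = ((counts - n * p) ** 2 / (n * p)).sum(axis=1)    # Pearson statistic
print(np.quantile(chi2, [0.5, 0.9, 0.99]))              # empirical quantiles...
print(stats.chi2.ppf([0.5, 0.9, 0.99], df=len(p) - 1))  # ...track chi2(r-1)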
pdf  nibble  lecture-notes  mit  stats  hypothesis-testing  acm  probability  methodology  proofs  iidness  distribution  limits  identity  direction  lifts-projections
october 2017 by nhaliday
Variance of product of multiple random variables - Cross Validated
for independent X_i: var[prod_i X_i] = prod_i (var[X_i] + (E[X_i])^2) - prod_i (E[X_i])^2

two variable case: var[XY] = var[X] var[Y] + var[X] (E[Y])^2 + (E[X])^2 var[Y]
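
A numerical sanity check of the two-variable identity; note it needs X and Y independent (distributions below are arbitrary):

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(2.0, 1.0, 10**6)    # E[X] = 2, var[X] = 1
y = rng.normal(-1.0, 0.5, 10**6)   # E[Y] = -1, var[Y] = 0.25, independent of X
lhs = np.var(x * y)
rhs = np.var(x) * np.var(y) + np.var(x) * np.mean(y)**2 + np.mean(x)**2 * np.var(y)
print(lhs, rhs)                    # agree up to Monte Carlo error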
nibble  q-n-a  overflow  stats  probability  math  identity  moments  arrows  multiplicative  iidness  dependence-independence
october 2017 by nhaliday
Power of a point - Wikipedia
The power of a point P (see Figure 1) can be defined equivalently as the product of distances from the point P to the two intersection points of any ray emanating from P.
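
A numeric check that the product of the two intersection distances doesn't depend on the ray: by Vieta on |P + t u − O|² = r², the product of the roots is |P − O|² − r² (the points below are arbitrary):

import numpy as np

O, r = np.array([0.0, 0.0]), 2.0
P = np.array([5.0, 1.0])                    # external point
for target in ([0.5, -0.3], [-1.0, 1.2]):   # two rays that cut the circle
    u = np.array(target) - P
    u = u / np.linalg.norm(u)
    b = 2 * u @ (P - O)
    c = (P - O) @ (P - O) - r**2
    t1, t2 = np.roots([1.0, b, c])          # distances to the two intersections
    print(t1 * t2)                          # same for both rays
print((P - O) @ (P - O) - r**2)             # the power of P: 25 + 1 - 4 = 22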
nibble  math  geometry  spatial  ground-up  concept  metrics  invariance  identity  atoms  wiki  reference  measure  yoga  calculation
september 2017 by nhaliday
Drude model - Wikipedia
The Drude model of electrical conduction was proposed in 1900[1][2] by Paul Drude to explain the transport properties of electrons in materials (especially metals). The model, which is an application of kinetic theory, assumes that the microscopic behavior of electrons in a solid may be treated classically and looks much like _a pinball machine_, with a sea of constantly jittering electrons bouncing and re-bouncing off heavier, relatively immobile positive ions.

The two most significant results of the Drude model are an electronic equation of motion,

d<p(t)>/dt = q(E + 1/m <p(t)> x B) - <p(t)>/τ

and a linear relationship between current density J and electric field E,

J = (nq^2τ/m) E

latter is Ohm's law
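
Plugging rough copper numbers into J = (nq^2τ/m) E shows the conductivity σ = nq^2τ/m comes out near the measured value (carrier density and relaxation time are order-of-magnitude textbook figures, not precise):

n = 8.5e28       # conduction electrons per m^3 (copper, roughly)
q = 1.602e-19    # C, electron charge
tau = 2.5e-14    # s, relaxation time between collisions (rough)
m = 9.109e-31    # kg, electron mass
sigma = n * q**2 * tau / m
print(sigma)     # ~6e7 S/m, vs. measured ~5.96e7 S/m for copper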
nibble  physics  electromag  models  local-global  stat-mech  identity  atoms  wiki  reference  ground-up  cartoons
september 2017 by nhaliday
Flows With Friction
To see how the no-slip condition arises, and how the no-slip condition and the fluid viscosity lead to frictional stresses, we can examine the conditions at a solid surface on a molecular scale. When a fluid is stationary, its molecules are in a constant state of motion with a random velocity v. For a gas, v is equal to the speed of sound. When a fluid is in motion, there is superimposed on this random velocity a mean velocity V, sometimes called the bulk velocity, which is the velocity at which fluid moves from one place to another. At the interface between the fluid and the surface, there exists an attraction between the molecules or atoms that make up the fluid and those that make up the solid. This attractive force is strong enough to reduce the bulk velocity of the fluid to zero. So the bulk velocity of the fluid must change from whatever its value is far away from the wall to a value of zero at the wall (figure 7). This is called the no-slip condition.

http://www.engineeringarchives.com/les_fm_noslip.html
The fluid property responsible for the no-slip condition and the development of the boundary layer is viscosity.
https://www.quora.com/What-is-the-physics-behind-no-slip-condition-in-fluid-mechanics
https://www.researchgate.net/post/Can_someone_explain_what_exactly_no_slip_condition_or_slip_condition_means_in_terms_of_momentum_transfer_of_the_molecules
https://en.wikipedia.org/wiki/Boundary_layer_thickness
http://www.fkm.utm.my/~ummi/SME1313/Chapter%201.pdf
org:junk  org:edu  physics  mechanics  h2o  identity  atoms  constraint-satisfaction  volo-avolo  flux-stasis  chemistry  stat-mech  nibble  multi  q-n-a  reddit  social  discussion  dirty-hands  pdf  slides  lectures  qra  fluid  local-global  explanation
september 2017 by nhaliday
Constitutive equation - Wikipedia
In physics and engineering, a constitutive equation or constitutive relation is a relation between two physical quantities (especially kinetic quantities as related to kinematic quantities) that is specific to a material or substance, and approximates the response of that material to external stimuli, usually as applied fields or forces. They are combined with other equations governing physical laws to solve physical problems; for example in fluid mechanics the flow of a fluid in a pipe, in solid state physics the response of a crystal to an electric field, or in structural analysis, the connection between applied stresses or forces to strains or deformations.

Some constitutive equations are simply phenomenological; others are derived from first principles. A common approximate constitutive equation frequently is expressed as a simple proportionality using a parameter taken to be a property of the material, such as electrical conductivity or a spring constant. However, it is often necessary to account for the directional dependence of the material, and the scalar parameter is generalized to a tensor. Constitutive relations are also modified to account for the rate of response of materials and their non-linear behavior.[1] See the article Linear response function.
nibble  wiki  reference  article  physics  mechanics  electromag  identity  estimate  approximation  empirical  stylized-facts  list  dirty-hands  fluid  logos
august 2017 by nhaliday
gravity - Gravitational collapse and free fall time (spherical, pressure-free) - Physics Stack Exchange
the parenthetical regarding Gauss's law just involves noting a shell of radius r + symmetry (so single parameter determines field along shell)
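
For a uniform, pressure-free sphere the collapse time depends only on density, t_ff = sqrt(3π / (32 G ρ)); a quick evaluation (the density is an arbitrary illustrative value):

import math

G = 6.674e-11                # m^3 kg^-1 s^-2
rho = 1000.0                 # kg/m^3: a water-density sphere, for illustration
t_ff = math.sqrt(3 * math.pi / (32 * G * rho))
print(t_ff / 60, "minutes")  # ~35 min, independent of the sphere's radius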
nibble  q-n-a  overflow  physics  mechanics  gravity  tidbits  time  phase-transition  symmetry  differential  identity  dynamical
august 2017 by nhaliday
Introduction to Scaling Laws
http://galileo.phys.virginia.edu/classes/304/scaling.pdf

Galileo’s Discovery of Scaling Laws: https://www.mtholyoke.edu/~mpeterso/classes/galileo/scaling8.pdf
Days 1 and 2 of Two New Sciences

An example of such an insight is “the surface of a small solid is comparatively greater than that of a large one” because the surface goes like the square of a linear dimension, but the volume goes like the cube.[5] Thus as one scales down macroscopic objects, forces on their surfaces like viscous drag become relatively more important, and bulk forces like weight become relatively less important. Galileo uses this idea on the First Day in the context of resistance in free fall, as an explanation for why similar objects of different size do not fall exactly together, but the smaller one lags behind.
nibble  org:junk  exposition  lecture-notes  physics  mechanics  street-fighting  problem-solving  scale  magnitude  estimate  fermi  mental-math  calculation  nitty-gritty  multi  scitariat  org:bleg  lens  tutorial  guide  ground-up  tricki  skeleton  list  cheatsheet  identity  levers  hi-order-bits  yoga  metabuch  pdf  article  essay  history  early-modern  europe  the-great-west-whale  science  the-trenches  discovery  fluid  architecture  oceans  giants  tidbits  elegance
august 2017 by nhaliday
Inscribed angle - Wikipedia
pf:
- for triangle w/ one side = a diameter, draw isosceles triangle and use supplementary angle identities
- otherwise draw second triangle w/ side = a diameter, and use above result twice
nibble  math  geometry  spatial  ground-up  wiki  reference  proofs  identity  levers  yoga
august 2017 by nhaliday
Tidal locking - Wikipedia
The Moon's rotation and orbital periods are tidally locked with each other, so no matter when the Moon is observed from Earth the same hemisphere of the Moon is always seen. The far side of the Moon was not seen until 1959, when photographs of most of the far side were transmitted from the Soviet spacecraft Luna 3.[12]

nibble  wiki  reference  space  mechanics  gravity  navigation  explanation  flux-stasis  marginal  volo-avolo  spatial  direction  invariance  physics  flexibility  rigidity  time  identity  phase-transition  being-becoming
august 2017 by nhaliday
The Earth-Moon system
nice way of expressing Kepler's law (scaled by AU, solar mass, year, etc.) among other things; a quick check of the scaled form is sketched after the outline below

1. PHYSICAL PROPERTIES OF THE MOON
2. LUNAR PHASES
3. ECLIPSES
4. TIDES
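
The scaled form referenced above: with a in AU, M in solar masses, and P in years, Kepler's third law collapses to P² = a³/M. A two-line check:

def orbital_period_years(a_au, m_solar=1.0):
    # Kepler's third law in solar-system units: P^2 = a^3 / M
    return (a_au ** 3 / m_solar) ** 0.5

print(orbital_period_years(1.0))  # Earth: 1.0 yr
print(orbital_period_years(5.2))  # Jupiter: ~11.9 yr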
nibble  org:junk  explanation  trivia  data  objektbuch  space  mechanics  spatial  visualization  earth  visual-understanding  navigation  experiment  measure  marginal  gravity  scale  physics  nitty-gritty  tidbits  identity  cycles  time  magnitude  street-fighting  calculation  oceans  pro-rata  rhythm  flux-stasis
august 2017 by nhaliday
Roche limit - Wikipedia
In celestial mechanics, the Roche limit (pronounced /ʁɔʃ/) or Roche radius, is the distance within which a celestial body, held together only by its own gravity, will disintegrate due to a second celestial body's tidal forces exceeding the first body's gravitational self-attraction.[1] Inside the Roche limit, orbiting material disperses and forms rings whereas outside the limit material tends to coalesce. The term is named after Édouard Roche, who is the French astronomer who first calculated this theoretical limit in 1848.[2]
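
For scale, the simple rigid-body estimate d = R_M (2 ρ_M / ρ_m)^(1/3) applied to the Earth–Moon pair (a simplified formula; the fluid-body limit is larger, roughly 2.44 R_M (ρ_M/ρ_m)^(1/3)):

R_earth = 6.371e6     # m
rho_earth = 5514.0    # kg/m^3, mean density of Earth
rho_moon = 3344.0     # kg/m^3, mean density of the Moon
d = R_earth * (2 * rho_earth / rho_moon) ** (1 / 3)
print(d / 1e3, "km")  # ~9,500 km: a rigid Moon would disintegrate inside this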
space  physics  gravity  mechanics  wiki  reference  nibble  phase-transition  proofs  tidbits  identity  marginal
july 2017 by nhaliday
Lanchester's laws - Wikipedia
Lanchester's laws are mathematical formulae for calculating the relative strengths of a predator–prey pair, originally devised to analyse relative strengths of military forces.
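
A minimal Euler integration of the square-law form, dA/dt = −βB and dB/dt = −αA (effectiveness coefficients and initial strengths are arbitrary):

# square law: aimed fire, each side's losses proportional to enemy strength
alpha, beta = 0.8, 0.9          # per-unit effectiveness of forces A and B
A, B = 1000.0, 900.0
dt = 0.001
while A > 0 and B > 0:
    A, B = A - beta * B * dt, B - alpha * A * dt
print(A, B)                     # alpha*A^2 - beta*B^2 is (nearly) conserved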
war  meta:war  models  plots  time  differential  street-fighting  methodology  strategy  tactics  wiki  reference  history  mostly-modern  pre-ww2  world-war  britain  old-anglo  giants  magnitude  arms  identity
june 2017 by nhaliday
Beta function - Wikipedia
B(x, y) = int_0^1 t^{x-1}(1-t)^{y-1} dt = Γ(x)Γ(y)/Γ(x+y)
one misc. application: calculating pdf of Erlang distribution (sum of iid exponential r.v.s)
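
Checking both sides of the identity numerically (x, y arbitrary):

from math import gamma
from scipy.integrate import quad

x, y = 2.5, 3.0
integral, _ = quad(lambda t: t**(x - 1) * (1 - t)**(y - 1), 0, 1)
print(integral, gamma(x) * gamma(y) / gamma(x + y))  # the two sides agree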
concept  atoms  acm  math  calculation  integral  wiki  reference  identity  AMT  distribution  multiplicative
march 2017 by nhaliday
[1604.03640] Bridging the Gaps Between Residual Learning, Recurrent Neural Networks and Visual Cortex
We discuss relations between Residual Networks (ResNet), Recurrent Neural Networks (RNNs) and the primate visual cortex. We begin with the observation that a shallow RNN is exactly equivalent to a very deep ResNet with weight sharing among the layers. A direct implementation of such a RNN, although having orders of magnitude fewer parameters, leads to a performance similar to the corresponding ResNet. We propose 1) a generalization of both RNN and ResNet architectures and 2) the conjecture that a class of moderately deep RNNs is a biologically-plausible model of the ventral stream in visual cortex. We demonstrate the effectiveness of the architectures by testing them on the CIFAR-10 dataset.
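
The core observation is compact in code: iterating h ← h + f(h) with one shared weight matrix is simultaneously an unrolled RNN and a deep weight-tied ResNet. A toy numpy sketch (shapes and nonlinearity arbitrary):

import numpy as np

rng = np.random.default_rng(0)
W = 0.1 * rng.normal(size=(4, 4))  # one weight matrix, shared across depth/time

def residual_branch(h):
    return np.tanh(W @ h)

h = np.ones(4)
for _ in range(10):                # 10 residual blocks == RNN unrolled 10 steps
    h = h + residual_branch(h)     # h_{t+1} = h_t + f(h_t)
print(h)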
papers  preprint  neuro  biodet  interdisciplinary  deep-learning  model-class  identity  machine-learning  nibble  org:mat  computer-vision
february 2017 by nhaliday
Wald's equation - Wikipedia
important identity that simplifies the calculation of the expected value of the sum of a random number of random quantities
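
Monte Carlo illustration with N geometric and X_i i.i.d. exponential, independent of N (distributions arbitrary):

import numpy as np

rng = np.random.default_rng(0)
p, mean_x = 0.3, 2.0
totals = [rng.exponential(mean_x, rng.geometric(p)).sum() for _ in range(10**5)]
print(np.mean(totals), (1 / p) * mean_x)  # Wald: E[sum_{i<=N} X_i] = E[N] E[X]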
math  levers  probability  wiki  reference  nibble  expectancy  identity
january 2017 by nhaliday
Breeding the breeder's equation - Gene Expression
- interesting fact about normal distribution: when thresholding Gaussian r.v. X ~ N(0, σ^2) at X > t, the new mean μ_s satisfies μ_s = pdf(X,t)/(1-cdf(X,t)) σ^2 (checked numerically below the list)
- follows from direct calculation (any deeper reason?)
- note (using Taylor/asymptotic expansion of complementary error function) that this is Θ(t) as t -> 0 or ∞ (w/ different constants)
- for X ~ N(0, 1), can calculate 0 = cdf(X, t)μ_<t + (1-cdf(X, t))μ_>t => μ_<t = -pdf(X, t)/cdf(X, t)
- this declines quickly w/ t (like e^{-t^2/2}). as t -> 0, it goes like -sqrt(2/pi) + higher-order terms ~ -0.8.
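
A quick check of the first bullet against simulation (σ and t arbitrary):

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
sigma, t = 2.0, 1.0
x = rng.normal(0.0, sigma, 10**7)
print(x[x > t].mean())  # Monte Carlo mean of the selected tail
print(sigma**2 * norm.pdf(t, scale=sigma) / norm.sf(t, scale=sigma))  # formula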

Average of a tail of a normal distribution: https://stats.stackexchange.com/questions/26805/average-of-a-tail-of-a-normal-distribution

Truncated normal distribution: https://en.wikipedia.org/wiki/Truncated_normal_distribution
gnxp  explanation  concept  bio  genetics  population-genetics  agri-mindset  analysis  scitariat  org:sci  nibble  methodology  distribution  tidbits  probability  stats  acm  AMT  limits  magnitude  identity  integral  street-fighting  symmetry  s:*  tails  multi  q-n-a  overflow  wiki  reference  objektbuch  proofs
december 2016 by nhaliday
Overcoming Bias : A Future Of Pipes
The future of computing, after about 2035, is adiabatic reversible hardware. When such hardware runs at a cost-minimizing speed, half of the total budget is spent on computer hardware, and the other half is spent on energy and cooling for that hardware. Thus after 2035 or so, about as much will be spent on computer hardware and a physical space to place it as will be spent on hardware and space for systems to generate and transport energy into the computers, and to absorb and transport heat away from those computers. So if you seek a career for a futuristic world dominated by computers, note that a career making or maintaining energy or cooling systems may be just as promising as a career making or maintaining computing hardware.

We can imagine lots of futuristic ways to cheaply and compactly make and transport energy. These include thorium reactors and superconducting power cables. It is harder to imagine futuristic ways to absorb and transport heat. So we are likely to stay stuck with existing approaches to cooling. And the best of these, at least on large scales, is to just push cool fluids past the hardware. And the main expense in this approach is for the pipes to transport those fluids, and the space to hold those pipes.

Thus in future cities crammed with computer hardware, roughly half of the volume is likely to be taken up by pipes that move cooling fluids in and out. And the tech for such pipes will probably be more stable than tech for energy or computers. So if you want a stable career managing something that will stay very valuable for a long time, consider plumbing.

Will this focus on cooling limit city sizes? After all, the surface area of a city, where cooling fluids can go in and out, goes as the square of city scale, while the volume to be cooled goes as the cube of city scale. The ratio of volume to surface area is thus linear in city scale. So does our ability to cool cities fall inversely with city scale?

Actually, no. We have good fractal pipe designs to efficiently import fluids like air or water from outside a city to near every point in that city, and to then export hot fluids from near every point to outside the city. These fractal designs require cost overheads that are only logarithmic in the total size of the city. That is, when you double the city size, such overheads increase by only a constant amount, instead of doubling.
hanson  futurism  prediction  street-fighting  essay  len:short  ratty  computation  hardware  thermo  structure  composition-decomposition  complex-systems  magnitude  analysis  urban-rural  power-law  phys-energy  detail-architecture  efficiency  economics  supply-demand  labor  planning  long-term  physics  temperature  flux-stasis  fluid  measure  technology  frontier  speedometer  career  cost-benefit  identity  stylized-facts  objektbuch  data  trivia  cocktail  aphorism
august 2016 by nhaliday
Coefficient of relationship - Wikipedia, the free encyclopedia
relatedness by consanguinity

Average percent DNA shared between relatives – 23andMe Customer Care: https://customercare.23andme.com/hc/en-us/articles/212170668-Average-percent-DNA-shared-between-relatives
summary of relatedness by consanguinity
shouldn't it be 2^-4 ~ 6% for first cousins? (no: first cousins share two grandparents, so r = 2 × 2^-4 = 1/8 = 12.5%)
wiki  sapiens  reference  genetics  evolution  concept  cheatsheet  kinship  metrics  intersection-connectedness  multi  brands  biotech  data  graphs  trees  magnitude  identity  estimate  measurement  inference
july 2016 by nhaliday
