nhaliday : calculation   66

exponential function - Feynman's Trick for Approximating \$e^x\$ - Mathematics Stack Exchange
1. e^2.3 ~ 10
2. e^.7 ~ 2
3. e^x ~ 1+x
e = 2.71828...

errors (absolute, relative):
1. +0.0258, 0.26%
2. -0.0138, -0.68%
3. 1 + x approximates e^x on [-.3, .3] with absolute error < .05, and relative error < 5.6% (3.7% for [0, .3]).
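[ed.: The three rules compose into a serviceable mental exp. A quick check in Python (function name hypothetical): peel off factors of e^2.3 ≈ 10, then e^0.7 ≈ 2, then finish with 1 + x.]

```python
import math

def feynman_exp(x):
    """Approximate e^x using only the three rules above."""
    result = 1.0
    while x >= 2.3:                # peel off factors of e^2.3 ~ 10
        result *= 10.0
        x -= 2.3
    while x >= 0.7:                # then factors of e^0.7 ~ 2
        result *= 2.0
        x -= 0.7
    return result * (1.0 + x)      # finish with e^x ~ 1 + x

# e^3 = e^2.3 * e^0.7 ~ 10 * 2 = 20, vs. the true 20.09
assert abs(feynman_exp(3.0) - math.exp(3.0)) / math.exp(3.0) < 0.01
```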
nibble  q-n-a  overflow  math  feynman  giants  mental-math  calculation  multiplicative  AMT  identity  objektbuch  explanation  howto  estimate  street-fighting  stories  approximation  data  trivia  nitty-gritty
october 2019 by nhaliday
Factorization of polynomials over finite fields - Wikipedia
In mathematics and computer algebra, the factorization of a polynomial consists of decomposing it into a product of irreducible factors. This decomposition is theoretically possible and is unique for polynomials with coefficients in any field, but rather strong restrictions on the field of the coefficients are needed to allow the computation of the factorization by means of an algorithm. In practice, algorithms have been designed only for polynomials with coefficients in a finite field, in the field of rationals or in a finitely generated field extension of one of them.

All factorization algorithms, including the case of multivariate polynomials over the rational numbers, reduce the problem to this case; see polynomial factorization. It is also used for various applications of finite fields, such as coding theory (cyclic redundancy codes and BCH codes), cryptography (public key cryptography by means of elliptic curves), and computational number theory.

As the reduction of the factorization of multivariate polynomials to that of univariate polynomials is not specific to coefficients in a finite field, only polynomials in one variable are considered in this article.

...

In the algorithms that follow, the complexities are expressed in terms of number of arithmetic operations in Fq, using classical algorithms for the arithmetic of polynomials.

[ed.: Interesting choice...]

...

Factoring algorithms
Many algorithms for factoring polynomials over finite fields include the following three stages:

Square-free factorization
Distinct-degree factorization
Equal-degree factorization
An important exception is Berlekamp's algorithm, which combines stages 2 and 3.

Berlekamp's algorithm
Main article: Berlekamp's algorithm
Berlekamp's algorithm is historically important as the first factorization algorithm that works well in practice. However, it contains a loop over the elements of the ground field, which means it is practical only over small finite fields. For a fixed ground field its time complexity is polynomial, but for general ground fields the complexity is exponential in the size of the ground field.

[ed.: This actually looks fairly implementable.]
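[ed.: To make that concrete, here is a minimal sketch of stage 2 (distinct-degree factorization, not Berlekamp's algorithm itself) over F_p, assuming a monic squarefree input. Polynomials are coefficient lists, lowest degree first; all helper names are hypothetical. It relies on the fact that x^(p^d) - x is the product of all monic irreducibles whose degree divides d.]

```python
def trim(a):
    while a and a[-1] == 0:
        a.pop()
    return a

def polymul(a, b, p):
    if not a or not b:
        return []
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return trim(out)

def polysub(a, b, p):
    out = [0] * max(len(a), len(b))
    for i, c in enumerate(a):
        out[i] = c
    for i, c in enumerate(b):
        out[i] = (out[i] - c) % p
    return trim(out)

def polymod(a, m, p):
    a, inv = a[:], pow(m[-1], -1, p)
    while len(a) >= len(m):
        c, shift = (a[-1] * inv) % p, len(a) - len(m)
        for i, mi in enumerate(m):
            a[shift + i] = (a[shift + i] - c * mi) % p
        trim(a)
    return a

def polydiv_exact(a, b, p):          # quotient of an exact division
    a, q = a[:], [0] * (len(a) - len(b) + 1)
    inv = pow(b[-1], -1, p)
    for shift in range(len(a) - len(b), -1, -1):
        c = (a[shift + len(b) - 1] * inv) % p
        q[shift] = c
        for i, bi in enumerate(b):
            a[shift + i] = (a[shift + i] - c * bi) % p
    return trim(q)

def polygcd(a, b, p):
    while b:
        a, b = b, polymod(a, b, p)
    inv = pow(a[-1], -1, p)          # normalize to monic
    return [(c * inv) % p for c in a]

def polypow_mod(base, e, m, p):      # base^e mod m, by repeated squaring
    result, base = [1], polymod(base, m, p)
    while e:
        if e & 1:
            result = polymod(polymul(result, base, p), m, p)
        base = polymod(polymul(base, base, p), m, p)
        e >>= 1
    return result

def distinct_degree(f, p):
    """Split monic squarefree f into (product of degree-d irreducibles, d) pairs."""
    factors, h, d = [], [0, 1], 0    # h tracks x^(p^d) mod f
    while len(f) - 1 >= 2 * (d + 1):
        d += 1
        h = polypow_mod(h, p, f, p)
        g = polygcd(polysub(h, [0, 1], p), f, p)
        if len(g) > 1:               # nontrivial degree-d part found
            factors.append((g, d))
            f = polydiv_exact(f, g, p)
            h = polymod(h, f, p)
    if len(f) > 1:                   # leftover is a single irreducible
        factors.append((f, len(f) - 1))
    return factors

# x^4 + 2 = (x^2 + 2)(x^2 + 1) over F_3, with x^2 + 2 = (x + 1)(x + 2)
assert distinct_degree([2, 0, 0, 0, 1], 3) == [([2, 0, 1], 1), ([1, 0, 1], 2)]
```

Stage 1 (removing repeated factors via gcd(f, f')) feeds into this, and stage 3 (e.g. Cantor-Zassenhaus) then splits each equal-degree product into its irreducible factors.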
wiki  reference  concept  algorithms  calculation  nibble  numerics  math  algebra  math.CA  fields  polynomials  levers  multiplicative  math.NT
july 2019 by nhaliday
Bareiss algorithm - Wikipedia
During the execution of the Bareiss algorithm, every integer that is computed is the determinant of a submatrix of the input matrix. This allows the Hadamard inequality to be used to bound the size of these integers. Apart from that, the Bareiss algorithm may be viewed as a variant of Gaussian elimination and needs roughly the same number of arithmetic operations.
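[ed.: A sketch of the fraction-free elimination (function name hypothetical). Each pivot step divides by the previous pivot, and every such division is exact, so an integer determinant is computed without ever leaving the integers.]

```python
def bareiss_det(matrix):
    """Integer determinant via Bareiss fraction-free elimination.

    Every division below is exact: each intermediate entry is the
    determinant of a submatrix of the input (up to sign after row swaps).
    """
    a = [row[:] for row in matrix]
    n, sign, prev = len(a), 1, 1
    for k in range(n - 1):
        if a[k][k] == 0:                 # find a nonzero pivot
            for r in range(k + 1, n):
                if a[r][k] != 0:
                    a[k], a[r] = a[r], a[k]
                    sign = -sign
                    break
            else:
                return 0                 # whole column zero: singular
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                a[i][j] = (a[i][j] * a[k][k] - a[i][k] * a[k][j]) // prev
            a[i][k] = 0
        prev = a[k][k]
    return sign * a[n - 1][n - 1]

assert bareiss_det([[1, 2], [3, 4]]) == -2
assert bareiss_det([[2, 3, 1], [4, 1, 2], [1, 5, 3]]) == -25
```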
nibble  ground-up  cs  tcs  algorithms  complexity  linear-algebra  numerics  sci-comp  fields  calculation  nitty-gritty
june 2019 by nhaliday
Hyperbolic angle - Wikipedia
A unit circle x^2 + y^2 = 1 has a circular sector with an area half of the circular angle in radians. Analogously, a unit hyperbola x^2 - y^2 = 1 has a hyperbolic sector with an area half of the hyperbolic angle.
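[ed.: Easy to check numerically (helper name hypothetical): parametrize the unit hyperbola as (cosh t, sinh t) and accumulate area via (1/2)(x dy - y dx). The integrand is identically 1/2 because cosh^2 t - sinh^2 t = 1, so the sector area comes out to u/2.]

```python
import math

def hyperbolic_sector_area(u, n=10000):
    total, dt = 0.0, u / n
    for i in range(n):
        t = (i + 0.5) * dt
        x, y = math.cosh(t), math.sinh(t)
        dxdt, dydt = math.sinh(t), math.cosh(t)
        total += 0.5 * (x * dydt - y * dxdt) * dt
    return total

assert abs(hyperbolic_sector_area(1.4) - 0.7) < 1e-9
```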
nibble  math  trivia  wiki  reference  physics  relativity  concept  atoms  geometry  ground-up  characterization  measure  definition  plots  calculation  nitty-gritty  direction  metrics  manifolds
november 2017 by nhaliday
Power of a point - Wikipedia
The power of point P (see in Figure 1) can be defined equivalently as the product of distances from the point P to the two intersection points of any ray emanating from P.
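[ed.: A small check that the product is independent of the direction (names hypothetical), using signed distances along the line through P: substituting P + t*d into the circle equation gives t^2 + 2bt + c = 0, and by Vieta the product of the two roots is c = |PC|^2 - r^2, the power of P, for every direction d.]

```python
import math

def ray_product(px, py, cx, cy, r, angle):
    """Product of signed distances from P to the two line-circle intersections."""
    dx, dy = math.cos(angle), math.sin(angle)
    fx, fy = px - cx, py - cy
    b = dx * fx + dy * fy
    disc = b * b - (fx * fx + fy * fy - r * r)
    if disc < 0:
        return None                     # line misses the circle
    t1 = -b + math.sqrt(disc)
    t2 = -b - math.sqrt(disc)
    return t1 * t2

# P = (3, 0), unit circle at origin: power = 3^2 - 1^2 = 8, whatever the direction
for theta in (0.0, 0.1, 0.3):
    assert abs(ray_product(3, 0, 0, 0, 1, theta) - 8) < 1e-9
```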
nibble  math  geometry  spatial  ground-up  concept  metrics  invariance  identity  atoms  wiki  reference  measure  yoga  calculation
september 2017 by nhaliday
Historically significant lunar eclipses - Wikipedia
On 30 June 1503, Christopher Columbus beached his two last caravels and was stranded in Jamaica. The indigenous people of the island welcomed Columbus and his crew and fed them, but Columbus' sailors cheated and stole from the natives. After six months, the natives halted the food supply.[8]

Columbus had on board an almanac authored by Regiomontanus of astronomical tables covering the years 1475–1506; upon consulting the book, he noticed the date and the time of an upcoming lunar eclipse. He was able to use this information to his advantage. He requested a meeting for that day with the Cacique, the leader, and told him that his god was angry with the local people's treatment of Columbus and his men. Columbus said his god would provide a clear sign of his displeasure by making the rising full Moon appear "inflamed with wrath".

The lunar eclipse and the red moon appeared on schedule, and the indigenous people were impressed and frightened. The son of Columbus, Ferdinand, wrote that the people:

“ with great howling and lamentation came running from every direction to the ships laden with provisions, praying to the Admiral to intercede with his god on their behalf... ”
Columbus timed the eclipse with his hourglass, and shortly before the totality ended after 48 minutes, he told the frightened indigenous people that they were going to be forgiven.[8] When the moon started to reappear from the shadow of the Earth, he told them that his god had pardoned them.[9]
history  age-of-discovery  medieval  early-modern  europe  the-great-west-whale  conquest-empire  civilization  farmers-and-foragers  stories  cocktail  trivia  big-peeps  impro  persuasion  dark-arts  wiki  reference  space  nibble  leadership  sky  earth  cycles  navigation  street-fighting  calculation
august 2017 by nhaliday
Introduction to Scaling Laws
http://galileo.phys.virginia.edu/classes/304/scaling.pdf

Galileo’s Discovery of Scaling Laws: https://www.mtholyoke.edu/~mpeterso/classes/galileo/scaling8.pdf
Days 1 and 2 of Two New Sciences

An example of such an insight is “the surface of a small solid is comparatively greater than that of a large one” because the surface goes like the square of a linear dimension, but the volume goes like the cube.5 Thus as one scales down macroscopic objects, forces on their surfaces like viscous drag become relatively more important, and bulk forces like weight become relatively less important. Galileo uses this idea on the First Day in the context of resistance in free fall, as an explanation for why similar objects of different size do not fall exactly together, but the smaller one lags behind.
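[ed.: The square-cube point in one line (illustrative only): shrink lengths by k and surface forces gain a factor of k on bulk forces.]

```python
def force_ratio_change(k):
    """Factor by which (surface force)/(bulk force) grows when lengths shrink by k."""
    surface = 1 / k**2        # drag, adhesion scale like L^2
    bulk = 1 / k**3           # weight scales like L^3
    return surface / bulk     # = k

assert abs(force_ratio_change(10) - 10) < 1e-9
```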
nibble  org:junk  exposition  lecture-notes  physics  mechanics  street-fighting  problem-solving  scale  magnitude  estimate  fermi  mental-math  calculation  nitty-gritty  multi  scitariat  org:bleg  lens  tutorial  guide  ground-up  tricki  skeleton  list  cheatsheet  identity  levers  hi-order-bits  yoga  metabuch  pdf  article  essay  history  early-modern  europe  the-great-west-whale  science  the-trenches  discovery  fluid  architecture  oceans  giants  tidbits  elegance
august 2017 by nhaliday
The Earth-Moon system
nice way of expressing Kepler's law (scaled by AU, solar mass, year, etc.) among other things

1. PHYSICAL PROPERTIES OF THE MOON
2. LUNAR PHASES
3. ECLIPSES
4. TIDES
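[ed.: In those scaled units Kepler's third law is simply P^2 = a^3 / M, with the period in years, semi-major axis in AU, and central mass in solar masses. A one-liner (name hypothetical):]

```python
def orbital_period_years(a_au, m_solar=1.0):
    """Kepler's third law in solar units: P[yr] = sqrt(a[AU]^3 / M[Msun])."""
    return (a_au ** 3 / m_solar) ** 0.5

assert abs(orbital_period_years(1.0) - 1.0) < 1e-12   # Earth
assert abs(orbital_period_years(5.2) - 11.86) < 0.1   # Jupiter
```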
nibble  org:junk  explanation  trivia  data  objektbuch  space  mechanics  spatial  visualization  earth  visual-understanding  navigation  experiment  measure  marginal  gravity  scale  physics  nitty-gritty  tidbits  identity  cycles  time  magnitude  street-fighting  calculation  oceans  pro-rata  rhythm  flux-stasis
august 2017 by nhaliday
Kelly criterion - Wikipedia
In probability theory and intertemporal portfolio choice, the Kelly criterion, Kelly strategy, Kelly formula, or Kelly bet, is a formula used to determine the optimal size of a series of bets. In most gambling scenarios, and some investing scenarios under some simplifying assumptions, the Kelly strategy will do better than any essentially different strategy in the long run (that is, over a span of time in which the observed fraction of bets that are successful equals the probability that any given bet will be successful). It was described by J. L. Kelly Jr., a researcher at Bell Labs, in 1956.[1] The practical use of the formula has been demonstrated.[2][3][4]

The Kelly criterion prescribes betting a predetermined fraction of assets, and it can be counterintuitive. In one study,[5][6] each participant was given \$25 and asked to bet on a coin that would land heads 60% of the time. Participants had 30 minutes to play, so could place about 300 bets, and the prizes were capped at \$250. Behavior was far from optimal. "Remarkably, 28% of the participants went bust, and the average payout was just \$91. Only 21% of the participants reached the maximum. 18 of the 61 participants bet everything on one toss, while two-thirds gambled on tails at some stage in the experiment." Using the Kelly criterion and based on the odds in the experiment, the right approach would be to bet 20% of the pot on each throw (see first example in Statement below). If losing, the size of the bet gets cut; if winning, the stake increases.
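[ed.: The 20% figure falls out of the Kelly formula f* = p - q/b for a bet paying b-to-1 that wins with probability p. A quick check (function names hypothetical) that f* also maximizes expected log-growth:]

```python
import math

def kelly_fraction(p, b):
    """Kelly bet size for a wager paying b-to-1 that wins with probability p."""
    return p - (1 - p) / b

def log_growth(f, p, b):
    # expected log-growth of the bankroll per bet when staking fraction f
    return p * math.log(1 + b * f) + (1 - p) * math.log(1 - f)

# the coin-flip study above: p = 0.6, even money (b = 1) -> bet 20% of the pot
assert abs(kelly_fraction(0.6, 1.0) - 0.2) < 1e-12

# f* maximizes log growth: nearby fractions do strictly worse
g_star = log_growth(0.2, 0.6, 1.0)
assert g_star > log_growth(0.1, 0.6, 1.0)
assert g_star > log_growth(0.3, 0.6, 1.0)
```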
nibble  betting  investing  ORFE  acm  checklists  levers  probability  algorithms  wiki  reference  atoms  extrema  parsimony  tidbits  decision-theory  decision-making  street-fighting  mental-math  calculation
august 2017 by nhaliday
[1705.03394] That is not dead which can eternal lie: the aestivation hypothesis for resolving Fermi's paradox
If a civilization wants to maximize computation it appears rational to aestivate until the far future in order to exploit the low temperature environment: this can produce a 10^30 multiplier of achievable computation. We hence suggest the "aestivation hypothesis": the reason we are not observing manifestations of alien civilizations is that they are currently (mostly) inactive, patiently waiting for future cosmic eras. This paper analyzes the assumptions going into the hypothesis and how physical law and observational evidence constrain the motivations of aliens compatible with the hypothesis.

http://aleph.se/andart2/space/the-aestivation-hypothesis-popular-outline-and-faq/

simpler explanation (just different math for Drake equation):
Overall the argument is that point estimates should not be shoved into a Drake equation and then multiplied by each, as that requires excess certainty and masks much of the ambiguity of our knowledge about the distributions. Instead, a Bayesian approach should be used, after which the fate of humanity looks much better. Here is one part of the presentation:
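[ed.: A toy version of that argument, with factor ranges invented purely for illustration: multiplying the factors' means gives a huge point estimate, while sampling the full (log-uniform) distributions still leaves a sizable probability that the galaxy is empty (N < 1).]

```python
import math, random

random.seed(0)

def lu_sample(lo, hi):
    """Sample log-uniformly between lo and hi."""
    return 10 ** random.uniform(math.log10(lo), math.log10(hi))

def lu_mean(lo, hi):
    """Exact mean of that log-uniform distribution."""
    return (hi - lo) / math.log(hi / lo)

# invented ranges for five Drake-style factors (illustration only)
ranges = [(1, 100), (0.1, 1), (1e-3, 1), (1e-4, 1), (10, 1e7)]

point_estimate = math.prod(lu_mean(lo, hi) for lo, hi in ranges)

samples = [math.prod(lu_sample(lo, hi) for lo, hi in ranges)
           for _ in range(100_000)]
frac_empty = sum(s < 1 for s in samples) / len(samples)

# the point estimate is huge, yet an empty galaxy is quite probable
assert point_estimate > 1e4
assert frac_empty > 0.2
```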

Life Versus Dark Energy: How An Advanced Civilization Could Resist the Accelerating Expansion of the Universe: https://arxiv.org/abs/1806.05203
The presence of dark energy in our universe is causing space to expand at an accelerating rate. As a result, over the next approximately 100 billion years, all stars residing beyond the Local Group will fall beyond the cosmic horizon and become not only unobservable, but entirely inaccessible, thus limiting how much energy could one day be extracted from them. Here, we consider the likely response of a highly advanced civilization to this situation. In particular, we argue that in order to maximize its access to useable energy, a sufficiently advanced civilization would choose to expand rapidly outward, build Dyson Spheres or similar structures around encountered stars, and use the energy that is harnessed to accelerate those stars away from the approaching horizon and toward the center of the civilization. We find that such efforts will be most effective for stars with masses in the range of M∼(0.2−1)M⊙, and could lead to the harvesting of stars within a region extending out to several tens of Mpc in radius, potentially increasing the total amount of energy that is available to a future civilization by a factor of several thousand. We also discuss the observable signatures of a civilization elsewhere in the universe that is currently in this state of stellar harvesting.
preprint  study  essay  article  bostrom  ratty  anthropic  philosophy  space  xenobio  computation  physics  interdisciplinary  ideas  hmm  cocktail  temperature  thermo  information-theory  bits  🔬  threat-modeling  time  scale  insight  multi  commentary  liner-notes  pdf  slides  error  probability  ML-MAP-E  composition-decomposition  econotariat  marginal-rev  fermi  risk  org:mat  questions  paradox  intricacy  multiplicative  calculation  street-fighting  methodology  distribution  expectancy  moments  bayesian  priors-posteriors  nibble  measurement  existence  technology  geoengineering  magnitude  spatial  density  spreading  civilization  energy-resources  phys-energy  measure  direction  speculation  structure
may 2017 by nhaliday
A sense of where you are | West Hunter
Nobody at the Times noticed it at first. I don’t know that they ever did notice it by themselves- likely some reader brought it to their attention. But this happens all the time, because very few people have a picture of the world in their head that includes any numbers. Mostly they don’t even have a rough idea of relative size.

In much the same way, back in the 1980s, lots of writers were talking about 90,000 women a year dying of anorexia nervosa, another impossible number. Then there was the great scare about 1,000,000 kids being kidnapped in the US each year – also impossible and wrong. Practically all the talking classes bought into it.

You might think that the people at the top are different – but with a few exceptions, they’re just as clueless.
west-hunter  scitariat  commentary  discussion  reflection  bounded-cognition  realness  nitty-gritty  calculation  fermi  quantitative-qualitative  stories  street-fighting  mental-math  being-right  info-dynamics  knowledge  hi-order-bits  scale  dysgenics  drugs  death  coming-apart  opioids  elite  ability-competence  rant  decision-making
may 2017 by nhaliday
Beta function - Wikipedia
B(x, y) = int_0^1 t^{x-1}(1-t)^{y-1} dt = Γ(x)Γ(y)/Γ(x+y)
one misc. application: calculating pdf of Erlang distribution (sum of iid exponential r.v.s)
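[ed.: A quick numeric sanity check of the identity (helper names hypothetical), comparing the Gamma form against a midpoint-rule evaluation of the integral:]

```python
import math

def beta(x, y):
    # B(x, y) = Gamma(x) Gamma(y) / Gamma(x + y)
    return math.gamma(x) * math.gamma(y) / math.gamma(x + y)

def beta_integral(x, y, n=100000):
    # midpoint-rule evaluation of int_0^1 t^(x-1) (1-t)^(y-1) dt
    dt = 1.0 / n
    return sum(((i + 0.5) * dt) ** (x - 1) * (1 - (i + 0.5) * dt) ** (y - 1)
               for i in range(n)) * dt

# B(3, 4) = 2! * 3! / 6! = 1/60
assert abs(beta(3, 4) - 1 / 60) < 1e-12
assert abs(beta_integral(3, 4) - 1 / 60) < 1e-6
```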
concept  atoms  acm  math  calculation  integral  wiki  reference  identity  AMT  distribution  multiplicative
march 2017 by nhaliday
More on Multivariate Gaussians
Fact #1: mean and covariance uniquely determine distribution
Fact #3: closure under sum, marginalizing, and conditioning
covariance of conditional distribution is given by a Schur complement (independent of x_B. is that obvious?)
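[ed.: In the 2-D case the Schur complement, and its independence from the observed value, is concrete: Var(x1 | x2 = v) = s11 - s12^2/s22, with no v anywhere in the formula. A minimal check (names hypothetical) against the block-inverse identity Var(x1 | x2) = 1 / (Sigma^{-1})_{11}:]

```python
def conditional_variance(s11, s12, s22):
    # Var(x1 | x2): the Schur complement of s22 in the 2x2 covariance;
    # note the observed value of x2 never appears
    return s11 - s12 * s12 / s22

def precision_entry_11(s11, s12, s22):
    # (Sigma^{-1})_{11} for a 2x2 covariance matrix
    det = s11 * s22 - s12 * s12
    return s22 / det

s11, s12, s22 = 2.0, 0.8, 1.5
assert abs(conditional_variance(s11, s12, s22)
           - 1 / precision_entry_11(s11, s12, s22)) < 1e-12
```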
pdf  exposition  lecture-notes  stanford  nibble  distribution  acm  machine-learning  probability  levers  calculation  ground-up  characterization  rigidity  closure  nitty-gritty  linear-algebra  properties
february 2017 by nhaliday
Relationships among probability distributions - Wikipedia
- One distribution is a special case of another with a broader parameter space
- Transforms (function of a random variable);
- Combinations (function of several variables);
- Approximation (limit) relationships;
- Compound relationships (useful for Bayesian inference);
- Duality;
- Conjugate priors.
stats  probability  characterization  list  levers  wiki  reference  objektbuch  calculation  distribution  nibble  cheatsheet  closure  composition-decomposition  properties
february 2017 by nhaliday
probability - How to prove Bonferroni inequalities? - Mathematics Stack Exchange
- integrated version of inequalities for alternating sums of (N choose j), where r.v. N = # of events occurring
- inequalities for alternating binomial coefficients follow from general property of unimodal (increasing then decreasing) sequences, which can be gotten w/ two cases for increasing and decreasing resp.
- the final alternating zero sum property follows for binomial coefficients from expanding (1 - 1)^N = 0
- The idea of proving inequality by integrating simpler inequality of r.v.s is nice. Proof from CS 150 was more brute force from what I remember.
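[ed.: The alternating bounds are easy to check empirically on a small finite sample space (setup is illustrative): truncating inclusion-exclusion after an odd number of terms over-counts P(union of the A_i), after an even number it under-counts, and the full sum is exact.]

```python
import itertools, random

random.seed(1)
space = range(30)
events = [set(random.sample(space, 12)) for _ in range(3)]

def truncated_inclusion_exclusion(events, k):
    """Inclusion-exclusion for P(union), cut off after the first k terms."""
    total = 0.0
    for j in range(1, k + 1):
        s = sum(len(set.intersection(*combo))
                for combo in itertools.combinations(events, j))
        total += (-1) ** (j + 1) * s / len(space)
    return total

p_union = len(set.union(*events)) / len(space)
assert truncated_inclusion_exclusion(events, 1) >= p_union   # Bonferroni, k odd
assert truncated_inclusion_exclusion(events, 2) <= p_union   # k even
assert abs(truncated_inclusion_exclusion(events, 3) - p_union) < 1e-12
```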
q-n-a  overflow  math  probability  tcs  probabilistic-method  estimate  proofs  levers  yoga  multi  tidbits  metabuch  monotonicity  calculation  nibble  bonferroni  tricki  binomial  s:null  elegance
january 2017 by nhaliday
cv.complex variables - Absolute value inequality for complex numbers - MathOverflow
In general, once you've proven an inequality like this in R it holds automatically in any Euclidean space (including C) by averaging over projections. ("Inequality like this" = inequality where every term is the length of some linear combination of variable vectors in the space; here the vectors are a, b, c).

I learned this trick at MOP 30+ years ago, and don't know or remember who discovered it.
q-n-a  overflow  math  math.CV  estimate  tidbits  yoga  oly  mathtariat  math.FA  metabuch  inner-product  calculation  norms  nibble  tricki
january 2017 by nhaliday
In Computers We Trust? | Quanta Magazine
As math grows ever more complex, will computers reign?

Shalosh B. Ekhad is a computer. Or, rather, it is any of a rotating cast of computers used by the mathematician Doron Zeilberger, from the Dell in his New Jersey office to a supercomputer whose services he occasionally enlists in Austria. The name — Hebrew for “three B one” — refers to the AT&T 3B1, Ekhad’s earliest incarnation.

“The soul is the software,” said Zeilberger, who writes his own code using a popular math programming tool called Maple.
news  org:mag  org:sci  popsci  math  culture  academia  automation  formal-methods  ai  debate  interdisciplinary  rigor  proofs  nibble  org:inst  calculation  bare-hands  heavyweights  contrarianism  computation  correctness  oss  replication  logic  frontier  state-of-art  technical-writing  trust
january 2017 by nhaliday
Bounds on the Expectation of the Maximum of Samples from a Gaussian
σ/sqrt(pi log 2) sqrt(log n) <= E[Y] <= σ sqrt(2) sqrt(log n)

upper bound pf: Jensen's inequality+mgf+union bound+choose optimal t (Chernoff bound basically)
lower bound pf: more ad-hoc (and difficult)
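[ed.: The bounds are loose but easy to sanity-check by simulation (parameters arbitrary):]

```python
import math, random

random.seed(0)

def mean_max_gaussian(n, sigma=1.0, trials=500):
    """Monte Carlo estimate of E[max of n iid N(0, sigma^2) samples]."""
    total = 0.0
    for _ in range(trials):
        total += max(random.gauss(0.0, sigma) for _ in range(n))
    return total / trials

n, sigma = 1000, 1.0
estimate = mean_max_gaussian(n, sigma)
lower = sigma / math.sqrt(math.pi * math.log(2)) * math.sqrt(math.log(n))
upper = sigma * math.sqrt(2) * math.sqrt(math.log(n))
assert lower <= estimate <= upper
```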
pdf  tidbits  math  probability  concentration-of-measure  estimate  acm  tails  distribution  calculation  iidness  orders  magnitude  extrema  tightness  outliers  expectancy  proofs  elegance
october 2016 by nhaliday
Doomsday rule - Wikipedia, the free encyclopedia
It takes advantage of each year having a certain day of the week, called the doomsday, upon which certain easy-to-remember dates fall; for example, 4/4, 6/6, 8/8, 10/10, 12/12, and the last day of February all occur on the same day of the week in any year. Applying the Doomsday algorithm involves three steps:
1. Determination of the anchor day for the century.
2. Calculation of the doomsday for the year from the anchor day.
3. Selection of the closest date out of those that always fall on the doomsday, e.g., 4/4 and 6/6, and count of the number of days (modulo 7) between that date and the date in question to arrive at the day of the week.

This technique applies to both the Gregorian calendar A.D. and the Julian calendar, although their doomsdays are usually different days of the week.
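[ed.: The three steps fit in a few lines of Python for the Gregorian calendar (function name hypothetical; 0 = Sunday ... 6 = Saturday):]

```python
def day_of_week(year, month, day):
    """Day of week via the Doomsday rule (Gregorian calendar)."""
    anchor = (5 * ((year // 100) % 4) + 2) % 7          # step 1: century anchor
    y = year % 100
    doomsday = (anchor + y + y // 4) % 7                # step 2: year's doomsday
    leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    # step 3: a date in each month that always falls on the doomsday
    ref = [4 if leap else 3, 29 if leap else 28,
           14, 4, 9, 6, 11, 8, 5, 10, 7, 12][month - 1]
    return (doomsday + day - ref) % 7

assert day_of_week(2024, 4, 4) == 4      # Thursday
assert day_of_week(2000, 1, 1) == 6      # Saturday
```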

Easter date: https://en.wikipedia.org/wiki/Computus
https://www.tondering.dk/claus/cal/easter.php
*When is Easter? (Short answer)*
Easter Sunday is the first Sunday after the first full moon on or after the vernal equinox.

*When is Easter? (Long answer)*
The calculation of Easter is complicated because it is linked to (an inaccurate version of) the Hebrew calendar.

...

It was therefore decided to make Easter Sunday the first Sunday after the first full moon after vernal equinox. Or more precisely: Easter Sunday is the first Sunday after the “official” full moon on or after the “official” vernal equinox.

The official vernal equinox is always 21 March.

The official full moon may differ from the real full moon by one or two days.

...

The full moon that precedes Easter is called the Paschal full moon. Two concepts play an important role when calculating the Paschal full moon: The Golden Number and the Epact. They are described in the following sections.

...

*What is the Golden Number?*
Each year is associated with a Golden Number.

Considering that the relationship between the moon’s phases and the days of the year repeats itself every 19 years (as described in the section about astronomy), it is natural to associate a number between 1 and 19 with each year. This number is the so-called Golden Number. It is calculated thus:

GoldenNumber=(year mod 19) + 1

However, 19 tropical years is 234.997 synodic months, which is very close to an integer. So every 19 years the phases of the moon fall on the same dates (if it were not for the skewness introduced by leap years). 19 years is called a Metonic cycle (after Meton, an astronomer from Athens in the 5th century BC).

So, to summarise: There are three important numbers to note:

A tropical year is 365.24219 days.
A synodic month is 29.53059 days.
19 tropical years is close to an integral number of synodic months.

In years which have the same Golden Number, the new moon will fall on (approximately) the same date. The Golden Number is sufficient to calculate the Paschal full moon in the Julian calendar.

...

Under the Gregorian calendar, things became much more complicated. One of the changes made in the Gregorian calendar reform was a modification of the way Easter was calculated. There were two reasons for this. First, the 19 year cycle of the phases of moon (the Metonic cycle) was known not to be perfect. Secondly, the Metonic cycle fitted the Gregorian calendar year worse than it fitted the Julian calendar year.

It was therefore decided to base Easter calculations on the so-called Epact.

*What is the Epact?*
Each year is associated with an Epact.

The Epact is a measure of the age of the moon (i.e. the number of days that have passed since an “official” new moon) on a particular date.

...

In the Julian calendar, the Epact is the age of the moon on 22 March.

In the Gregorian calendar, the Epact is the age of the moon at the start of the year.

The Epact is linked to the Golden Number in the following manner:

Under the Julian calendar, 19 years were assumed to be exactly an integral number of synodic months, and the following relationship exists between the Golden Number and the Epact:

Epact=(11 × (GoldenNumber – 1)) mod 30
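[ed.: The two formulas above in executable form (Julian-calendar version; illustrative only):]

```python
def golden_number(year):
    return year % 19 + 1

def julian_epact(year):
    # valid for the Julian calendar, where 19 years are treated as exactly
    # an integral number of synodic months
    return (11 * (golden_number(year) - 1)) % 30

assert golden_number(2024) == 11
assert julian_epact(2024) == 20
```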

...

In the Gregorian calendar reform, some modifications were made to the simple relationship between the Golden Number and the Epact.

In the Gregorian calendar the Epact should be calculated thus: [long algorithm]

...

Suppose you know the Easter date of the current year, can you easily find the Easter date in the next year? No, but you can make a qualified guess.

If Easter Sunday in the current year falls on day X and the next year is not a leap year, Easter Sunday of next year will fall on one of the following days: X–15, X–8, X+13 (rare), or X+20.

...

If you combine this knowledge with the fact that Easter Sunday never falls before 22 March and never falls after 25 April, you can narrow the possibilities down to two or three dates.
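[ed.: For the Gregorian calendar the whole chain (Golden Number, corrected Epact, Paschal full moon, following Sunday) compresses into the widely published anonymous Gregorian algorithm (also attributed to Meeus/Jones/Butcher); a sketch:]

```python
def easter(year):
    """(month, day) of Gregorian Easter Sunday, anonymous Gregorian algorithm."""
    a = year % 19                          # position in the Metonic cycle
    b, c = divmod(year, 100)
    d, e = divmod(b, 4)
    f = (b + 8) // 25
    g = (b - f + 1) // 3
    h = (19 * a + b - d - g + 15) % 30     # epact-like quantity
    i, k = divmod(c, 4)
    l = (32 + 2 * e + 2 * i - h - k) % 7
    m = (a + 11 * h + 22 * l) // 451
    month, day = divmod(h + l - 7 * m + 114, 31)
    return month, day + 1

assert easter(2024) == (3, 31)
assert easter(2019) == (4, 21)
```

Consistent with the text, the result always lands between 22 March and 25 April.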
tricks  street-fighting  concept  wiki  reference  cheatsheet  trivia  nitty-gritty  objektbuch  time  calculation  mental-math  multi  religion  christianity  events  howto  cycles
august 2016 by nhaliday
