**tidbits**

The Reason Why | West Hunter

yesterday by nhaliday

There are odd things about the orbits of trans-Neptunian objects that suggest (to some) that there might be an undiscovered super-Earth-sized planet a few hundred AU from the Sun.

We haven’t seen it, but then it would be very hard to see. The underlying reason is simple enough, but I have never seen anyone mention it: the signal from such objects drops as the fourth power of distance from the Sun. Not the second power, as is the case with luminous objects like stars, or faraway objects that are close to a star. We can image close-in planets of other stars that are light-years distant, but it’s very difficult to see a fair-sized planet a few hundred AU out.
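The fourth-power falloff follows from two applications of the inverse-square law: sunlight reaching the planet dims as 1/d², and the reflected light dims by another 1/d² on the way back. A few lines of arithmetic (illustrative numbers only; albedo and size held fixed) show how brutal this is:

```python
# Reflected-light brightness of a solar-system body falls as 1/d^4:
# sunlight reaching the planet dims as 1/d^2, and the reflected light
# dims by another 1/d^2 on the trip back to our telescopes.

def relative_brightness(d_au: float) -> float:
    """Reflected flux seen from Earth, relative to a body at 1 AU."""
    return 1.0 / d_au**4

neptune = relative_brightness(30)        # Neptune, ~30 AU
planet_nine = relative_brightness(500)   # hypothetical body, a few hundred AU

# Moving a body from 30 AU to 500 AU costs a factor of (500/30)^4, ~77,000x.
print(neptune / planet_nine)
```

By contrast, a self-luminous star only pays the 1/d² penalty once, which is why imaging a close-in exoplanet light-years away can be easier than imaging a cold planet a few hundred AU out.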

--

interesting little fun fact

west-hunter
scitariat
nibble
tidbits
scale
magnitude
visuo
electromag
spatial
space
measurement
paradox
physics

Rational Sines of Rational Multiples of π

7 days ago by nhaliday

For which rational multiples of π is the sine rational? We have the three trivial cases

[0, π/2, π/6]

and we wish to show that these are essentially the only distinct rational sines of rational multiples of π.

The assertion about rational sines of rational multiples of π follows from two fundamental lemmas. The first is

Lemma 1: For any rational number q the value of sin(qπ) is a root of a monic polynomial with integer coefficients.

[Pf uses some ideas unfamiliar to me: the similarity parameter of Moebius (linear fractional) transformations, and finding a polynomial for a desired root by constructing a Moebius transformation with finite period.]

...

Lemma 2: Any root of a monic polynomial f(x) with integer coefficients must either be an integer or irrational.

[Gauss's Lemma; cf. Dummit & Foote.]

...
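As a stdlib-only sanity check of the conclusion (an illustration, not a proof — floating point cannot certify irrationality), one can scan sin(kπ/n) over small rationals k/n and flag any value that matches a small-denominator fraction to within tolerance. Only 0, ±1/2, ±1 survive:

```python
# Numerical sanity check: scan sin(k*pi/n) over small rationals k/n and
# collect values that look rational (agree with a small-denominator
# fraction to within tolerance). Illustrates, does not prove, the claim.
from fractions import Fraction
from math import sin, pi

hits = set()
for n in range(1, 30):
    for k in range(0, 2 * n + 1):          # arguments covering [0, 2*pi]
        s = sin(k * pi / n)
        guess = Fraction(s).limit_denominator(50)
        if abs(float(guess) - s) < 1e-12:  # looks rational
            hits.add(guess)

print([str(f) for f in sorted(hits)])  # ['-1', '-1/2', '0', '1/2', '1']
```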

nibble
tidbits
org:junk
analysis
trivia
math
algebra
polynomials
fields
characterization
direction
math.CA
math.CV
ground-up

George Jedenoff: A 101-year-old TidBITS Reader - TidBITS

29 days ago by TomasMartinez

We’ve known since the beginning that TidBITS readers are an unusual and fascinating group, but George Jedenoff stands out. As an infant, he escaped the Russian Revolution with his parents, and after picking up a Stanford MBA and serving with the US Navy Reserve during World War II, he worked his way through the ranks of the steel industry, retiring as president of Kaiser Steel. And that was all before Apple even existed—he started with the Mac in 1987.

101Años
TidBits
Interesante
Gruber

What every computer scientist should know about floating-point arithmetic

7 weeks ago by nhaliday

Floating-point arithmetic is considered an esoteric subject by many people. This is rather surprising, because floating-point is ubiquitous in computer systems: Almost every language has a floating-point datatype; computers from PCs to supercomputers have floating-point accelerators; most compilers will be called upon to compile floating-point algorithms from time to time; and virtually every operating system must respond to floating-point exceptions such as overflow. This paper presents a tutorial on the aspects of floating-point that have a direct impact on designers of computer systems. It begins with background on floating-point representation and rounding error, continues with a discussion of the IEEE floating point standard, and concludes with examples of how computer system builders can better support floating point.

https://stackoverflow.com/questions/2729637/does-epsilon-really-guarantees-anything-in-floating-point-computations

"you must use an epsilon when dealing with floats" is a knee-jerk reaction of programmers with a superficial understanding of floating-point computations, for comparisons in general (not only to zero).

This is usually unhelpful because it doesn't tell you how to minimize the propagation of rounding errors, it doesn't tell you how to avoid cancellation or absorption problems, and even when your problem is indeed related to the comparison of two floats, it doesn't tell you what value of epsilon is right for what you are doing.
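A quick illustration of why a single absolute epsilon is not a universal fix: a tolerance that seems reasonable at magnitude 1 is meaningless at magnitude 1e16 (where adjacent doubles are 2 apart) and far too coarse at magnitude 1e-12 (where it swallows 100% relative errors):

```python
# Why a fixed absolute epsilon is not a universal fix for float comparison.
EPS = 1e-9

a, b = 0.1 + 0.2, 0.3
print(a == b)             # False: the classic motivating example
print(abs(a - b) < EPS)   # True: here the epsilon "works"

big = 1e16
print(abs((big + 1.0) - big) < EPS)  # True -- but only because 1e16 + 1.0
                                     # rounds back to 1e16: adjacent doubles
                                     # at this magnitude are 2.0 apart, so
                                     # the epsilon is below the float spacing.

tiny1, tiny2 = 1e-12, 2e-12
print(abs(tiny1 - tiny2) < EPS)  # True, yet tiny2 is twice tiny1 --
                                 # the epsilon swallows a 100% relative error.
```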

...

Regarding the propagation of rounding errors, there exist specialized analyzers that can help you estimate it, because it is a tedious thing to do by hand.

https://www.di.ens.fr/~cousot/projects/DAEDALUS/synthetic_summary/CEA/Fluctuat/index.html

This was part of HW1 of CS24:

https://en.wikipedia.org/wiki/Kahan_summation_algorithm

In particular, simply summing n numbers in sequence has a worst-case error that grows proportional to n, and a root mean square error that grows as √n for random inputs (the roundoff errors form a random walk).[2] With compensated summation, the worst-case error bound is independent of n, so a large number of values can be summed with an error that only depends on the floating-point precision.[2]
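Kahan's compensated summation is short enough to quote in full: a running compensation term algebraically recovers the low-order bits lost each time an addend is folded into the sum. A minimal sketch:

```python
# Kahan (compensated) summation: a running compensation term recovers
# the low-order bits lost when each addend is folded into the total.
def kahan_sum(xs):
    total = 0.0
    c = 0.0                  # running compensation for lost low-order bits
    for x in xs:
        y = x - c            # add back the error from the previous step
        t = total + y        # low-order digits of y may be lost here...
        c = (t - total) - y  # ...and are recovered algebraically here
        total = t
    return total

# 10,000 copies of 0.1: naive accumulation drifts, Kahan does not.
data = [0.1] * 10_000
print(sum(data))        # slightly off 1000.0 (naive drift)
print(kahan_sum(data))  # 1000.0
```

Note that the `(t - total) - y` step is exactly the kind of expression an optimizing compiler must not "simplify" to zero, which is why compensated summation is fragile under aggressive floating-point optimization flags.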

cf:

https://en.wikipedia.org/wiki/Pairwise_summation

In numerical analysis, pairwise summation, also called cascade summation, is a technique to sum a sequence of finite-precision floating-point numbers that substantially reduces the accumulated round-off error compared to naively accumulating the sum in sequence.[1] Although there are other techniques such as Kahan summation that typically have even smaller round-off errors, pairwise summation is nearly as good (differing only by a logarithmic factor) while having much lower computational cost—it can be implemented so as to have nearly the same cost (and exactly the same number of arithmetic operations) as naive summation.

In particular, pairwise summation of a sequence of n numbers x_n works by recursively breaking the sequence into two halves, summing each half, and adding the two sums: a divide and conquer algorithm. Its worst-case roundoff errors grow asymptotically as at most O(ε log n), where ε is the machine precision (assuming a fixed condition number, as discussed below).[1] In comparison, the naive technique of accumulating the sum in sequence (adding each x_i one at a time for i = 1, ..., n) has roundoff errors that grow at worst as O(εn).[1] Kahan summation has a worst-case error of roughly O(ε), independent of n, but requires several times more arithmetic operations.[1] If the roundoff errors are random, and in particular have random signs, then they form a random walk and the error growth is reduced to an average of O(ε √(log n)) for pairwise summation.[2]

A very similar recursive structure of summation is found in many fast Fourier transform (FFT) algorithms, and is responsible for the same slow roundoff accumulation of those FFTs.[2][3]

https://eng.libretexts.org/Bookshelves/Electrical_Engineering/Book%3A_Fast_Fourier_Transforms_(Burrus)/10%3A_Implementing_FFTs_in_Practice/10.8%3A_Numerical_Accuracy_in_FFTs

However, these encouraging error-growth rates only apply if the trigonometric “twiddle” factors in the FFT algorithm are computed very accurately. Many FFT implementations, including FFTW and common manufacturer-optimized libraries, therefore use precomputed tables of twiddle factors calculated by means of standard library functions (which compute trigonometric constants to roughly machine precision). The other common method to compute twiddle factors is to use a trigonometric recurrence formula—this saves memory (and cache), but almost all recurrences have errors that grow as O(√n), O(n), or even O(n²), which lead to corresponding errors in the FFT.

...

There are, in fact, trigonometric recurrences with the same logarithmic error growth as the FFT, but these seem more difficult to implement efficiently; they require that a table of Θ(log n) values be stored and updated as the recurrence progresses. Instead, in order to gain at least some of the benefits of a trigonometric recurrence (reduced memory pressure at the expense of more arithmetic), FFTW includes several ways to compute a much smaller twiddle table, from which the desired entries can be computed accurately on the fly using a bounded number (usually < 3) of complex multiplications. For example, instead of a twiddle table with n entries ω_n^k, FFTW can use two tables with Θ(√n) entries each, so that ω_n^k is computed by multiplying an entry in one table (indexed with the low-order bits of k) by an entry in the other table (indexed with the high-order bits of k).
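The two-table trick can be sketched directly (the transform size n = 4096 and block size T = 64 below are assumptions for illustration, not FFTW's actual choices): split k into high and low parts, keep one table indexed by each, and recover the full root of unity with one complex multiplication.

```python
# Sketch of the two-table twiddle scheme: instead of one table of n roots
# of unity w_n^k, keep two tables of ~sqrt(n) entries and reconstruct
#   w_n^k = w_n^(T * (k // T)) * w_n^(k % T)
# with a single complex multiplication. (n and T chosen for illustration;
# with T a power of two, k // T and k % T are the high and low bits of k.)
import cmath

n = 4096
T = 64  # ~sqrt(n)
hi = [cmath.exp(-2j * cmath.pi * j * T / n) for j in range(n // T)]
lo = [cmath.exp(-2j * cmath.pi * j / n) for j in range(T)]

def twiddle(k):
    """w_n^k reconstructed from the two small tables."""
    return hi[k // T] * lo[k % T]

# Agrees with the directly computed root up to a rounding error or two.
k = 1234
direct = cmath.exp(-2j * cmath.pi * k / n)
print(abs(twiddle(k) - direct))  # tiny residual, on the order of 1e-16
```

Total table storage drops from n entries to n/T + T ≈ 2√n, at the cost of one extra multiply per twiddle lookup.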

[ed.: Nicholas Higham's "Accuracy and Stability of Numerical Algorithms" seems like a good reference for this kind of analysis.]

nibble
pdf
papers
programming
systems
numerics
nitty-gritty
intricacy
approximation
accuracy
types
sci-comp
multi
q-n-a
stackex
hmm
oly-programming
accretion
formal-methods
yak-shaving
wiki
reference
algorithms
yoga
ground-up
divide-and-conquer
fourier
books
tidbits
chart
caltech
nostalgia

URI Parsing with Javascript

april 2019 by skinnymuch

URI Parsing with Javascript (GitHub Gist).

snippets
tidbits
dev
$project_2018_js
javascript

What APFS Does for You, and What You Can Do with APFS - TidBITS

march 2019 by mcbakewl

JEFF CARLSON - 23 July 2018

OSX_13
APFS
TidBITS

Attach an iPad to Your MacBook as a Second Display with Mountie - TidBITS

march 2019 by handcoding

“The Mountie has poor documentation, and frustratingly, Ten One Design doesn’t provide any written materials on its Web site. Speaking as someone who could burn down a house while constructing an IKEA table, it took me an hour to get right. In the photo below, the colored clips attach to the back of the iPad and MacBook screens. My first time, I put them in front and thought I was supposed to put up with covering up the corner of my screen. The narrower other side, the actual front of the Mountie, covers the bezel but not the monitor. Someone with better mechanical skills would figure it out more quickly—also someone with better observational skills, as I was only later informed that there’s a demonstration video on the Web site I could have watched.”

mountie
tidbits
lunadisplay
2019

Locast Gives You No Frills Broadcast TV at No Cost - TidBITS

march 2019 by handcoding

“Now an organization named Locast is providing a similar service, thanks to a legal loophole: Aereo lost its lawsuit because it resold broadcast transmissions. Locast gives them away for free. So far, broadcasters haven’t shown interest in shutting Locast down.

“In short, Locast provides live television broadcasts that you can watch in a Web browser, or through an app for iOS or Android. It’s a nonprofit organization, and there’s no cost to sign up for an account or use the service. Instead, you can make a voluntary donation—the organization’s Web form defaults to suggesting $5 a month, but there’s no obligation to pay anything. This separation of revenue from services is the secret sauce protecting Locast from lawyers.”

internet
tv
2019
streaming
tidbits

A more efficient reducer for adding/editing/removing objects in an array using a hash table

january 2019 by skinnymuch

A more efficient reducer for adding/editing/removing objects in an array using a hash table - index.js

javascript
tidbits
short_tutorials
redux
best_of
deeper_understanding
bookmarked_on_site
dev

An inefficient reducer pattern for updating an array of objects

january 2019 by skinnymuch

An inefficient reducer pattern for updating an array of objects - index.js

javascript
tidbits
short_tutorials
redux
best_of
deeper_understanding
bookmarked_on_site
dev

Implement Redux as Vanilla JS in a Single React Component

january 2019 by skinnymuch

I’m currently working on a project where I’m patching a few React components piecemeal into an existing ASP.NET build. Because I’m limited in how much I can change in the overall project…

javascript
tidbits
short_tutorials
redux
best_of
deeper_understanding
bookmarked_on_site
dev