**floatingPoint**

Posit Arithmetic (PDF)

11 days ago by euler

This document focuses on the details of posit mode, which can be thought of as “beating floats at their own game.” It introduces a type of unum, Type III, that combines many of the advantages of Type I and Type II unums, but is less radical and is designed as a “drop-in” replacement for an IEEE 754 Standard float with no changes needed to source code at the application level. The following table may help clarify the terminology.

floatingpoint
posits
unum

Intel 8087 - Wikipedia

11 days ago by euler

Sales of the 8087 received a significant boost when IBM included a coprocessor socket on the IBM PC motherboard. Due to a shortage of chips, IBM did not actually offer the 8087 as an option for the PC until the machine had been on the market for six months. Development of the 8087 led to the IEEE 754-1985 standard for floating-point arithmetic. There were later x87 coprocessors for the 80186 (not used in PC-compatibles), 80286, 80386, and 80386SX processors. Starting with the 80486, Intel x86 processors no longer used a separate floating-point coprocessor; floating-point functions were integrated into the processor itself.

floatingpoint
posits
unum
8087

Unum (number format) - Wikipedia

11 days ago by euler

The unum (universal number[1]) is an arithmetic and a binary representation format for real numbers, analogous to floating point, proposed by John Gustafson as an alternative to the now-ubiquitous IEEE 754 arithmetic. The first version of unums, now officially known as Type I, was introduced in his book The End of Error.[2] Gustafson has since created two newer revisions of the unum format, Type II and Type III, in late 2016. Type III unums are also known as posits[3][4][5] and valids; posits provide arithmetic on single real values, while valids provide the interval-arithmetic version. This data type can serve as a replacement for IEEE 754 floats in programs that do not depend on specific features of IEEE 754. Details of valids have yet to be officially articulated by Gustafson.

floatingpoint
posits
unum

New Approach Could Sink Floating Point Computation

11 days ago by euler

We caught up with Gustafson at ISC19. For that particular crowd, the supercomputing set, one of the primary advantages of the posit format is that you can get more precision and dynamic range using fewer bits than IEEE 754 numbers. And not just a few fewer. Gustafson told us that a 32-bit posit can replace a 64-bit float in almost all cases, which would have profound implications for scientific computing. Cutting the number of bits in half not only reduces the amount of cache, memory, and storage needed to hold these values, but also substantially reduces the bandwidth needed to shuffle them to and from the processor. It's the main reason why he thinks posit-based arithmetic would deliver a two-fold to four-fold speedup compared to IEEE floats.

floatingpoint
posits
unum

New Approach Could Sink Floating Point Computation

floatingpoint

12 days ago by synergyfactor

In 1985, the Institute of Electrical and Electronics Engineers (IEEE) established IEEE 754, a standard for floating point formats and arithmetic that …

New Approach Could Sink Floating Point Computation

math
programming
floatingPoint
computerScience
cpus
everynumber
IFTTT
Pocket
posits
via:HackerNews

13 days ago by unconsidered

New Approach Could Sink Floating Point Computation

13 days ago by mcherm

There is ACTUALLY a credible alternative to IEEE 754 for floating point.

floatingpoint
programming
computerscience
via:HackerNews

New Approach Could Sink Floating Point Computation

floatingPoint
posits
math
cpus
computerScience
13 days ago by gojomo

Floating point error is the least of my worries

maths
modelling
computers
programming
floatingpoint

4 weeks ago by pozorvlak

Modeling error is usually several orders of magnitude greater than floating point error. People who nonchalantly model the real world and then sneer at floating point as just an approximation strain at gnats and swallow camels.

linux - bash + arithmetic calculation with bash - Unix & Linux Stack Exchange

6 weeks ago by kme

Since shell arithmetic expansion only does integer division, the proposed answer delegates to AWK: values are passed in as variables, '/dev/null' serves as the input file, and the math goes in a 'BEGIN' block.
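A minimal sketch of that technique (the values 7 and 2 and the variable names a and b are illustrative):

```shell
# Bash's $(( ... )) truncates: $((7 / 2)) gives 3, not 3.5.
# Delegating the math to awk gives floating-point division: values
# go in via -v variables, the expression lives in a BEGIN block,
# and /dev/null is the input file so awk never waits on stdin.
a=7
b=2
result=$(awk -v a="$a" -v b="$b" 'BEGIN { printf "%.2f", a / b }' /dev/null)
echo "$result"    # 3.50
```

Because all the work happens in the 'BEGIN' block, awk exits without processing any input; '/dev/null' just guarantees it has a file to (not) read.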

bash
awk
shellscripting
floatingpoint
math
tipsandtricks
solution