
jabley : compression (43)

Fast Lossless Compression of Scientific Floating-Point Data
In scientific computing environments, large amounts of floating-point data often need to
be transferred between computers as well as to and from storage devices. Compression
can reduce the number of bits that need to be transferred and stored. However, the runtime
overhead due to compression may be undesirable in high-performance settings
where short communication latencies and high bandwidths are essential. This paper describes
and evaluates a new compression algorithm that is tailored to such environments.
It typically compresses numeric floating-point values better and faster than other algorithms
do. On our data sets, it achieves compression ratios between 1.2 and 4.2 as well
as compression and decompression throughputs between 2.8 and 5.9 million 64-bit double-precision
numbers per second on a 3 GHz Pentium 4 machine.
paper  comp-sci  compression  algorithms  data  filetype:pdf 
april 2017 by jabley
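The abstract reports results but not the mechanism. The published algorithm (by Ratanaworabhan, Ke, and Burtscher) is built on value prediction: predict each 64-bit value from recent history, XOR the prediction with the actual bit pattern, and encode only the residual, which consists mostly of zero bytes when the prediction is close. A minimal C sketch of that idea follows; the table size, hash function, and names (dfcm, encode) are illustrative assumptions, not the paper's code.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Predictive compression sketch: similar consecutive values give
       accurate predictions, so the XOR residual has many leading zero
       bytes, which we elide. */
    #define TBL_SIZE (1 << 16)

    static uint64_t dfcm[TBL_SIZE];   /* table of recently seen deltas */
    static uint64_t hash, last;

    static uint64_t predict(void) {
        return dfcm[hash] + last;     /* last value plus predicted delta */
    }

    static void update(uint64_t actual) {
        uint64_t delta = actual - last;
        dfcm[hash] = delta;           /* learn the observed delta */
        hash = ((hash << 2) ^ (delta >> 40)) & (TBL_SIZE - 1);
        last = actual;
    }

    /* Encode one double: a zero-byte count, then the residual bytes. */
    static size_t encode(double v, uint8_t *out) {
        uint64_t bits, resid;
        memcpy(&bits, &v, sizeof bits);  /* view the double as raw bits */
        resid = bits ^ predict();
        update(bits);
        int zbytes = 0;
        while (zbytes < 7 && (resid >> 56) == 0) { resid <<= 8; zbytes++; }
        out[0] = (uint8_t)zbytes;        /* 3 bits would suffice here */
        int n = 8 - zbytes;
        for (int i = 0; i < n; i++)
            out[1 + i] = (uint8_t)(resid >> (56 - 8 * i));
        return (size_t)(1 + n);
    }

    int main(void) {
        double data[4] = { 1.00, 1.01, 1.02, 1.03 };
        uint8_t buf[9];
        for (int i = 0; i < 4; i++)
            printf("%g -> %zu bytes\n", data[i], encode(data[i], buf));
        return 0;
    }

The throughputs reported in the abstract are consistent with this structure: per value, the encoder does only a table lookup, an XOR, and a byte count, with no bit-level entropy coding.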
High Throughput Compression of Double-Precision Floating-Point Data
This paper describes FPC, a lossless compression algorithm for linear streams of 64-bit
floating-point data. FPC is designed to compress well while at the same time meeting the
high throughput demands of scientific computing environments. On our thirteen datasets,
it achieves a substantially higher average compression ratio than BZIP2, DFCM, FSD,
GZIP, and PLMI. At comparable compression ratios, it compresses and decompresses 8
to 300 times faster than the other five algorithms.
paper  comp-sci  compression  algorithms  data  filetype:pdf 
april 2017 by jabley
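Per the FPC paper, the full algorithm runs two such hash-based predictors in parallel (an FCM that predicts the value itself and a DFCM that predicts the delta from the previous value), keeps whichever residual has more leading zero bytes, and records the choice in a one-bit selector next to a three-bit zero-byte count. Decompression mirrors compression exactly, which is what makes the scheme lossless. Below is a hedged companion to the sketch above, showing the single-predictor decode side; the decoder starts from its own freshly zeroed tables so its state tracks the encoder's.

    /* Decode one value produced by encode() above: rebuild the residual,
       XOR it with this side's identical prediction, and update predictor
       state in lock-step with the encoder. */
    static double decode(const uint8_t *in, size_t *consumed) {
        int zbytes = in[0];                /* leading zero bytes elided */
        int n = 8 - zbytes;                /* residual bytes present    */
        uint64_t resid = 0;
        for (int i = 0; i < n; i++)
            resid = (resid << 8) | in[1 + i];
        uint64_t bits = resid ^ predict(); /* same prediction as encoder */
        update(bits);
        *consumed = (size_t)(1 + n);
        double v;
        memcpy(&v, &bits, sizeof v);
        return v;
    }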
Data Compression
This paper surveys a variety of data compression methods spanning almost forty years
of research, from the work of Shannon, Fano and Huffman in the late 40's to a technique
developed in 1986. The aim of data compression is to reduce redundancy in stored or
communicated data, thus increasing effective data density. Data compression has important
application in the areas of file storage and distributed systems.
Concepts from information theory, as they relate to the goals and evaluation of data
compression methods, are discussed briefly. A framework for evaluation and comparison of
methods is constructed and applied to the algorithms presented. Comparisons of both theoretical
and empirical natures are reported and possibilities for future research are suggested.
filetype:pdf  compression  mathematics  theory  huffman  shannon  information-theory  overview 
july 2015 by jabley
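The yardstick behind this survey's comparisons is Shannon's source entropy, which lower-bounds the average length of any uniquely decodable code (a standard statement, not quoted from the paper):

    H(S) = -\sum_{i=1}^{n} p_i \log_2 p_i \quad \text{bits/symbol}, \qquad \bar{\ell} \ge H(S)

For example, a source with symbol probabilities (1/2, 1/4, 1/4) has H = 1.5 bits, and the Huffman code {0, 10, 11} achieves an average length of exactly (1/2)(1) + (1/4)(2) + (1/4)(2) = 1.5 bits per symbol, meeting the bound.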
