Deep Lossless Compression Algorithm Based on Arithmetic Coding for Power Data

Bibliographic Details
Published in: Sensors (Basel, Switzerland), Vol. 22, No. 14, p. 5331
Main Authors: Ma, Zhoujun; Zhu, Hong; He, Zhuohao; Lu, Yue; Song, Fuyuan
Format: Journal Article
Language: English
Published: Basel: MDPI AG, 16.07.2022
ISSN: 1424-8220
DOI: 10.3390/s22145331

More Information
Summary: Classical lossless compression algorithms rely heavily on hand-designed, general-purpose encoding and quantization strategies. With the rapid development of deep learning, data-driven methods based on neural networks can learn features automatically and perform better on specific data domains. We propose an efficient deep lossless compression algorithm that uses arithmetic coding driven by the probability distributions output by the network. The scheme compares the training effects of Bi-directional Long Short-Term Memory (Bi-LSTM) and Transformer models on minute-level power data, which are not sparse in the time-frequency domain. The model automatically extracts features and adapts the quantization of the probability distribution. On minute-level power data, the average compression ratio (CR) is 4.06, higher than that of classical entropy coding methods.
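To make the role of arithmetic coding concrete, the sketch below (not taken from the paper) estimates the ideal arithmetic-coding cost when a predictive model assigns a probability to each next symbol: encoding a symbol costs roughly -log2 p bits, so the sharper the network's prediction, the higher the compression ratio. The 8-bit quantization, the synthetic load curve, and the Gaussian stand-in for the Bi-LSTM/Transformer predictor are all illustrative assumptions.

```python
import numpy as np

def code_length_bits(probs, symbols):
    """Ideal arithmetic-coding cost: sum of -log2 p(true next symbol)."""
    p_true = probs[np.arange(len(symbols)), symbols]
    return float(np.sum(-np.log2(np.clip(p_true, 1e-12, 1.0))))

# Toy minute-level load curve, quantized to 8-bit symbols (an assumption;
# the paper's preprocessing may differ).
rng = np.random.default_rng(0)
power = np.cumsum(rng.normal(0.0, 1.0, 1440))        # one day at 1-min steps
edges = np.linspace(power.min(), power.max(), 257)    # 256 quantization bins
symbols = np.clip(np.digitize(power, edges) - 1, 0, 255)

# Stand-in for the Bi-LSTM/Transformer predictor: a discretized Gaussian
# centred on the previous symbol. A trained network would output a sharper,
# data-adaptive conditional distribution and therefore a higher CR.
levels = np.arange(256)
prev = np.concatenate(([symbols[0]], symbols[:-1]))
probs = np.exp(-0.5 * ((levels[None, :] - prev[:, None]) / 6.0) ** 2)
probs /= probs.sum(axis=1, keepdims=True)

raw_bits = symbols.size * 8                           # 8 bits per raw symbol
coded_bits = code_length_bits(probs, symbols)
print(f"CR ~ {raw_bits / coded_bits:.2f}")            # higher is better
```

In the paper's setting, the Bi-LSTM or Transformer would replace the Gaussian stand-in and output a learned conditional distribution per time step, which the arithmetic coder then uses to encode the actual symbol.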