Need for external compression methods in PostgreSQL

Every modern database system has some way to compress its data at some level. The obvious reason for this feature is to reduce the size of the database, especially in today's world where data is growing exponentially. The less obvious reason is to improve query performance; the idea is that a smaller data size means fewer data pages to scan, which means less disk I/O and faster data access. In any case, data decompression should be fast enough not to hamper query performance, if not improve it.

Compression is offered at different levels: page compression, row compression, column compression, etc. Columnar databases have the advantage of very high compression ratios for their columns, because contiguous data in a column tends to follow a repetitive pattern. Another case is when, in a row-oriented database, the column values are so large that it makes sense to compress individual values of the column. Such values can even be kept separately if they do not fit in a single page, with the row holding pointers to the out-of-line compressed data. In PostgreSQL, this technique is called TOAST (The Oversized-Attribute Storage Technique): for columns that can contain variable-length data, the data is transparently compressed and stored in the same row, or else, if it is still too large, it is stored in smaller chunks as rows in a separate table called a TOAST table, where these chunks themselves may or may not be compressed.
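
As an aside, you can observe TOAST on a stock PostgreSQL installation. Here is a small illustration; the table name mytable and column payload are hypothetical:

-- Find the TOAST table backing a given table (if any)
SELECT reltoastrelid::regclass FROM pg_class WHERE oid = 'mytable'::regclass;

-- Tune the per-column TOAST strategy: EXTERNAL allows out-of-line
-- storage but disables compression; EXTENDED (the default for most
-- variable-length types) allows both.
ALTER TABLE mytable ALTER COLUMN payload SET STORAGE EXTERNAL;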

Compression is offered for different purposes, and is not restricted to data compression alone. For example, in a replication system, the transfer of redo logs from the master to the slave can become a huge network bottleneck, so many RDBMSs offer to compress redo logs.
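
PostgreSQL itself has a knob in this spirit: the wal_compression parameter makes the server compress full-page images written to the WAL (the same WAL that gets shipped to standbys), trading some CPU for a smaller WAL volume:

# In postgresql.conf (or via ALTER SYSTEM): compress full-page
# writes in the WAL using the built-in compression method.
wal_compression = on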

And then there are the compression algorithms that the RDBMS uses, or gives options to choose from. This applies especially to data compression. Since the data is the user's data, one pattern in it might suit a particular compression algorithm, while a different pattern might suit another. This implies that it would be far more beneficial if the RDBMS gave an option to choose a specific compression algorithm for a specific column or a specific user-defined type, out of a list of well-known standard compression libraries such as zlib, lz4, zstd, snappy, gzip, etc. Or the algorithm may very well be a completely customized one.

Secondly, there have been a lot of advancements in optimizing compression algorithms for specific platforms, and in providing hardware accelerators for compression, encryption, and SIMD that are closely coupled to CPU cores, which can then be leveraged by compression or encryption algorithms. One such example is the Kunpeng Zlib Acceleration Engine, which offers a hardware-enabled infrastructure for compression on the "Kunpeng 920" ARM64 processor. I haven't had a chance to test this capability, but it does sound promising.

Furthermore, compression/encryption algorithms inherently do repetitive tasks over the data, which is a natural fit for SIMD vectorization. There have been independent projects on both ARM64 and Intel to do such platform-specific enhancements in well-known libraries like zlib, lz4, etc. Check out this NEON Intrinsics case study that optimizes zlib's adler-32 algorithm using NEON intrinsics.
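
To make the point concrete, below is a minimal scalar sketch of Adler-32 written from its published definition (not zlib's actual code): the two running sums updated byte by byte in the loop are exactly the kind of regular, repetitive work that NEON or SSE intrinsics can process many bytes at a time.

#include <stddef.h>
#include <stdint.h>

#define ADLER_MOD 65521  /* largest prime below 2^16 */

/* Plain scalar Adler-32; a SIMD version would update the same two
 * running sums a whole vector of input bytes at a time. */
uint32_t
adler32_scalar(const uint8_t *buf, size_t len)
{
    uint32_t a = 1, b = 0;

    for (size_t i = 0; i < len; i++)
    {
        a = (a + buf[i]) % ADLER_MOD;
        b = (b + a) % ADLER_MOD;
    }
    return (b << 16) | a;
}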

All this directly points to an urgent need for RDBMS servers to give users a choice of specific native compression algorithms/libraries for specific tables or specific columns. As of this writing, PostgreSQL uses its own built-in LZ-based compression algorithm for TOAST compression. Imagine if there were an interface to select zlib instead of the built-in algorithm. Further, to select the zlib compression level. Still further, an interface for users to create an extension that uses a customized algorithm native to a specific platform, taking advantage of hardware acceleration.

Well, there is exactly such a proposed feature in the making. Check out this discussion thread in the PostgreSQL hackers community. It may have a long way to go (as of this writing), but I am very hopeful of this feature going in, because the use cases are strong enough as shown above, there are no fundamental objections to the functionality, and there are work-in-progress patches submitted.

I went ahead, applied this patch, and played around with it. Below is roughly what the interface looks like. After the patch set fully materializes the interface might be different, but I think the essence of it would remain more or less the same. Below is the output of my tests; please note that it is just to show with examples how useful this feature would be, and to make concrete what I explained above in this blog.

CREATE TABLE zlibtab(t TEXT COMPRESSION zlib WITH (level '4'));
CREATE TABLE lztab(t TEXT);                                                     
ALTER TABLE lztab ALTER COLUMN t SET COMPRESSION pglz;                          
                                                                                
pgg:s2:pg$ time psql -c "\copy zlibtab from text.data"                          
COPY 13050                                                                      
                                                                                
real    0m1.344s                                                                
user    0m0.031s                                                                
sys     0m0.026s                                                                

pgg:s2:pg$ time psql -c "\copy lztab from text.data"                            
COPY 13050                                                                      
                                                                                
real    0m2.088s                                                                
user    0m0.008s                                                                
sys     0m0.050s                                                                
                                                                                
                                                                                
pgg:s2:pg$ time psql -c "select pg_table_size('zlibtab'::regclass), pg_table_size('lztab'::regclass)"
 pg_table_size | pg_table_size                                                  
---------------+---------------                                                 
       1261568 |       1687552                                                  

pgg:s2:pg$ time psql -c "select NULL from zlibtab where t like '0000'"  > /dev/null

real    0m0.127s
user    0m0.000s
sys     0m0.002s

pgg:s2:pg$ time psql -c "select NULL from lztab where t like '0000'"  > /dev/null

real    0m0.050s
user    0m0.002s
sys     0m0.000s

Notice how the two compression algorithms differ in compressed size and in the speed of inserting data (compression) and selecting data (decompression).

You would even be able to create a new compression access method in the same way as we create a new index access method:
CREATE ACCESS METHOD pglz1 TYPE COMPRESSION HANDLER my_compression_handler;
where my_compression_handler is a PostgreSQL C function that could be created using a PostgreSQL extension. This function assigns its own implementation functions to a set of pre-defined hooks that define everything the PostgreSQL core needs to know to make use of the compression access method:

Datum
my_compression_handler(PG_FUNCTION_ARGS)
{
       CompressionAmRoutine *routine = makeNode(CompressionAmRoutine);

       /* Wire up the hooks the core calls for this compression method */
       routine->cmcheck = my_cmcheck;             /* validate use on a given column */
       routine->cminitstate = my_cminitstate;     /* set up per-operation state */
       routine->cmcompress = my_cmcompress;       /* compress one datum */
       routine->cmdecompress = my_cmdecompress;   /* decompress a whole datum */
       routine->cmdecompress_slice = NULL;        /* optional: decompress only a slice */

       PG_RETURN_POINTER(routine);
}

This is PostgreSQL's way of being highly extensible: allow users to use built-in methods, but also provide a way for them to define their own methods for doing the same job. All the above functions would be inside a PostgreSQL extension, which could be created using:
CREATE EXTENSION my_compression;
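
For completeness, here is a rough sketch of what the extension's install script might contain. Note that the handler's return type name (compression_am_handler) is my assumption, by analogy with index_am_handler used for index access methods; the final patch may well spell this differently:

-- my_compression--1.0.sql (hypothetical extension script)
CREATE FUNCTION my_compression_handler(internal)
RETURNS compression_am_handler
AS 'MODULE_PATHNAME', 'my_compression_handler'
LANGUAGE C STRICT;

CREATE ACCESS METHOD my_zlib
TYPE COMPRESSION HANDLER my_compression_handler;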

Comments

  1. Hi,

    Congrats on the article!

    A simple miss in the PostgreSQL compression area is a way to send a compressed base backup via pg_basebackup -h . I think that using a simple streaming compression algorithm like lz4 would be great.
    Best regards,
    Alexandre

    Replies
    1. There are some ideas floating on those lines in the community in the "parallel backup" thread. You might want to keep a watch on it:

      https://www.postgresql.org/message-id/flat/CA%2BTgmoZubLXYR%2BPd_gi3MVgyv5hQdLm-GBrVXkun-Lewaw12Kg%40mail.gmail.com

    2. Server-side compression plus parallelism is a great addition for making replicas outside the local network faster.
      Thanks!

    3. Hey Alexandre, FYI there's already a way to do compressed pg_basebackups using the TAR mode. See here for details - https://www.cybertec-postgresql.com/en/speeding-up-creation-of-postgres-replicas/

  2. See also http://blog.cleverelephant.ca/2020/04/developer-life.html

    Replies
    1. Thanks for pointing this out. Good read. And some Postgis internals too...

