Sunday, 30 August 2020

Need for external compression methods in PostgreSQL

Every modern database system has some way to compress its data at some level. The obvious reason for this feature is to reduce the size of its database, especially in today's world where data is growing exponentially. The less obvious reason is to improve query performance; the idea is: smaller data size means fewer data pages to scan, which means less disk I/O and faster data access. So, in any case, decompression should be fast enough not to hamper query performance, if not improve it.

Compression is offered at different levels: page compression, row compression, column compression, etc. Columnar databases achieve very high compression ratios for a column because contiguous values of a column tend to follow a repetitive pattern. Another case is when, in a row-oriented database, the column values are so large that it makes sense to compress individual values of the column. Such values can even be kept separately if they do not fit in a single page, with the row holding pointers to the out-of-line compressed data. In PostgreSQL, this technique is called TOAST (The Oversized-Attribute Storage Technique): for columns that can contain variable-length data, the data is transparently compressed and stored in the same row, or else, if it is still too large, it is stored in smaller chunks as rows in a separate table called a toast table, where these chunks themselves may or may not be compressed.

Compression is offered for different purposes; it is not restricted to table data alone. E.g. in a replication setup, the transfer of redo logs from the master to the slave can become a huge network bottleneck, so many RDBMSs offer an option to compress redo logs.

And then come the compression algorithms that the RDBMS uses or offers as options. This applies especially to data compression. Since the data belongs to the user, a specific pattern in the user data might suit one compression algorithm, while a different pattern might suit another. This implies it would be far more beneficial if the RDBMS let the user choose a specific compression algorithm for a specific column or a specific user-defined type, out of a list of well-known standard compression libraries such as zlib, lz4, zstd, snappy, gzip, etc. Or the algorithm may very well be a completely customized one.

Secondly, there have been a lot of advancements to optimize compression algorithms for specific platforms, and to provide hardware accelerators for compression, encryption and SIMD that are closely coupled to the CPU cores, which can then be leveraged by compression or encryption libraries. One such example is the Kunpeng Zlib Acceleration Engine, which offers a hardware-enabled infrastructure for compression on the "Kunpeng 920" ARM64 processor. I haven't got a chance to test this capability, but it does sound promising.

Furthermore, compression/encryption algorithms inherently do repetitive work over the data, which is a natural fit for SIMD vectorization. There have been independent projects on both ARM64 and Intel to do such platform-specific enhancements in well-known libraries like zlib, lz4, etc. Check out this NEON Intrinsics case study that optimizes zlib's adler-32 algorithm using NEON intrinsics.

All this points directly to an urgent need for RDBMS servers to give users a choice of specific native compression algorithms/libraries for specific tables or specific columns. As of this writing, PostgreSQL uses its own built-in LZ-family compression algorithm for toast table compression. Imagine if there were an interface to select zlib instead of the built-in algorithm. Further, to select the zlib compression level. Still further, an interface for users to create an extension that uses a customized algorithm native to a specific platform with hardware acceleration.

Well, there is exactly such a proposed feature in the making. Check out this discussion thread in the PostgreSQL hackers community. It may be a long way to go (as of this writing), but I am very hopeful of this feature going in, because the use-cases are strong enough as shown above, there are no fundamental objections to this functionality, and there are work-in-progress patches submitted.

I went ahead, applied this patch, and played around with it. Roughly, below is how the interface looks. After the patch-set fully materializes, the interface might be different, but I think the essence of it would remain more or less the same. Below is the output of my tests; please note that it is only meant to show with examples how useful this feature would be, and to make concrete what I explained above in this blog.

CREATE TABLE zlibtab(t TEXT COMPRESSION zlib WITH (level '4'));
CREATE TABLE lztab(t TEXT);                                                     
ALTER TABLE lztab ALTER COLUMN t SET COMPRESSION pglz;                          
                                                                                
pgg:s2:pg$ time psql -c "\copy zlibtab from text.data"                          
COPY 13050                                                                      
                                                                                
real    0m1.344s                                                                
user    0m0.031s                                                                
sys     0m0.026s                                                                

pgg:s2:pg$ time psql -c "\copy lztab from text.data"                            
COPY 13050                                                                      
                                                                                
real    0m2.088s                                                                
user    0m0.008s                                                                
sys     0m0.050s                                                                
                                                                                
                                                                                
pgg:s2:pg$ time psql -c "select pg_table_size('zlibtab'::regclass), pg_table_size('lztab'::regclass)"
 pg_table_size | pg_table_size                                                  
---------------+---------------                                                 
       1261568 |       1687552                                                  

pgg:s2:pg$ time psql -c "select NULL from zlibtab where t like '0000'"  > /dev/null

real    0m0.127s
user    0m0.000s
sys     0m0.002s

pgg:s2:pg$ time psql -c "select NULL from lztab where t like '0000'"  > /dev/null

real    0m0.050s
user    0m0.002s
sys     0m0.000s

Notice how the two compression algorithms differ in compressed size, and in the speed of inserting data (compression) and selecting data (decompression).

You would even be able to create a new compression access method, in the same way as we create a new index access method:
CREATE ACCESS METHOD pglz1 TYPE COMPRESSION HANDLER my_compression_handler;
where my_compression_handler is a PostgreSQL C function that could be created using a PostgreSQL extension. This function assigns its own implementation functions to a set of pre-defined hooks that define everything the PostgreSQL core needs to know to make use of the compression access method:

Datum
my_compression_handler(PG_FUNCTION_ARGS)
{
       CompressionAmRoutine *routine = makeNode(CompressionAmRoutine);

       routine->cmcheck = my_cmcheck;
       routine->cminitstate = my_cminitstate;
       routine->cmcompress = my_cmcompress;
       routine->cmdecompress = my_cmdecompress;
       routine->cmdecompress_slice = NULL;

       PG_RETURN_POINTER(routine);
}

This is PostgreSQL's way of being highly extensible: allow users to use built-in methods, but also provide a way for them to define their own methods for doing the same job. All the above functions would live inside a PostgreSQL extension, which could be created using:
CREATE EXTENSION my_compression;

Thursday, 30 July 2020

Backtraces in PostgreSQL

PostgreSQL 13 has introduced a simple but extremely useful capability to log a stack trace into the server logs when an error is reported. Let's see the details.

There is a GUC to enable stack trace generation: backtrace_functions. Set it to a comma-separated list of function names.

SET backtrace_functions TO 'func1,func2';

If the error is thrown from one of these functions, a backtrace will be generated and logged into the server log.

Note that only superusers can set the backtrace_functions GUC. It can be set locally in a session, or it can be included in the postgresql.conf file to set it globally.

It's easy to see how this would help in a situation where a customer reports an error message. We can find where it came from by grep'ing for it in the source code. But beyond that, it was all guesswork. Not anymore. Now, you can ask the customer to set backtrace_functions to all the functions which emit this error message, and get the stack trace. In most cases, the root cause of the error is not in the function which emits the error; it's located somewhere in the middle of the stack; hence the stack trace is critical.

This capability is already available in many other databases like MySQL, Greenplum, Oracle.

What's still missing in PostgreSQL - and is present in most of these other databases - is the ability to generate a stack trace when a server backend crashes with a segmentation fault or other such unexpected signal, or when the server PANICs for some reason. This capability would make a much bigger difference. We would get rid of having to explain the steps to generate a core file. More importantly, this helps in situations where the crash happens only randomly. Even with a single unexpected crash, the customer would always be ready with a backtrace. I am hopeful this will be implemented in the next major release of PostgreSQL.
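
Outside of PostgreSQL, the usual way to get this is to install a signal handler that dumps the stack using the same execinfo facilities discussed further below. Just as an illustration of that general technique (this is a standalone sketch, not PostgreSQL code), a crash handler could look like this:

#include <execinfo.h>
#include <signal.h>
#include <unistd.h>

static void
crash_handler(int sig)
{
    void   *addrs[100];
    int     nframes = backtrace(addrs, 100);

    /* backtrace_symbols_fd() writes directly to the fd and does not malloc,
     * which matters inside a signal handler */
    backtrace_symbols_fd(addrs, nframes, STDERR_FILENO);

    /* Restore the default action and re-raise, so a core file is still produced */
    signal(sig, SIG_DFL);
    raise(sig);
}

int
main(void)
{
    void   *dummy[1];

    /* Call backtrace() once up front so that its lazy initialization
     * (loading libgcc) does not happen inside the signal handler */
    backtrace(dummy, 1);

    signal(SIGSEGV, crash_handler);

    *(volatile int *) 0 = 42;   /* deliberately crash to demonstrate */
    return 0;
}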

Let's see what a PostgreSQL stack trace log looks like. We will try to use a non-existent type to create a table. Suppose we know that the "type does not exist" error comes from typenameType() in the source code. So we do this:

postgres=# set backtrace_functions TO 'typenameType';
postgres=# create table tab (id invalidtype);
ERROR:  type "invalidtype" does not exist
LINE 1: create table tab (id invalidtype);

Here's a snippet from the server log :
2020-07-28 20:17:01.482 CST [22454] ERROR:  type "invalidtype" does not exist at character 22
2020-07-28 20:17:01.482 CST [22454] BACKTRACE:                                 
    postgres: amit postgres [local] CREATE TABLE(typenameType+0xa4) [0xaaaaafcd2ac4]
    postgres: amit postgres [local] CREATE TABLE(+0x20f550) [0xaaaaafcd4550]   
    postgres: amit postgres [local] CREATE TABLE(transformCreateStmt+0x53c) [0xaaaaafcd7a10]
    postgres: amit postgres [local] CREATE TABLE(+0x44df20) [0xaaaaaff12f20]   
    postgres: amit postgres [local] CREATE TABLE(standard_ProcessUtility+0x16c) [0xaaaaaff1225c]
    postgres: amit postgres [local] CREATE TABLE(+0x44a4e4) [0xaaaaaff0f4e4]   
    postgres: amit postgres [local] CREATE TABLE(+0x44af88) [0xaaaaaff0ff88]   
    postgres: amit postgres [local] CREATE TABLE(PortalRun+0x198) [0xaaaaaff10ed8]
    postgres: amit postgres [local] CREATE TABLE(+0x44764c) [0xaaaaaff0c64c]    
    postgres: amit postgres [local] CREATE TABLE(PostgresMain+0x970) [0xaaaaaff0d3d4]
    postgres: amit postgres [local] CREATE TABLE(+0x3b3be4) [0xaaaaafe78be4]    
    postgres: amit postgres [local] CREATE TABLE(PostmasterMain+0xdc0) [0xaaaaafe79b70]
    postgres: amit postgres [local] CREATE TABLE(main+0x480) [0xaaaaafb82510]  
    /lib/aarch64-linux-gnu/libc.so.6(__libc_start_main+0xe0) [0xffffaac956e0]  
    postgres: amit postgres [local] CREATE TABLE(+0xbd5d8) [0xaaaaafb825d8]    
2020-07-29 18:01:02.726 CST [28776] STATEMENT:  create table tab (id invalidtype);    

Each line of the backtrace has the function name, an offset into that function, and the return address of that frame.

For some stack frames, the function name is not present; instead, only an offset into the executable is shown. These are static functions, whose names are not exposed in the dynamic symbol table. But we may be able to get their names from these offsets, with the help of the addr2line command-line tool:

$ addr2line  0x20f550 0x44df20 -a -f -e `which postgres`
0x000000000020f550
transformColumnDefinition
:?
0x000000000044df20
ProcessUtilitySlow.constprop.0
:?

If it's a debug build, even the source file name and line number are printed.



Now let's see how this simple feature is implemented.

In most of the RDBMSs, including PostgreSQL, Greenplum and MySQL, the feature is implemented using a simple library function, backtrace(), to generate the stack trace:

int backtrace(void **buffer, int size);

This function only returns the return addresses of the stack frames. So it should be followed by a call to backtrace_symbols(), which converts the addresses returned by backtrace() into strings that describe the addresses using the function names, if available:

char **backtrace_symbols(void *const *buffer, int size);

All the details of these functions are nicely described in their man pages. These functions are available on most platforms.
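
To make the flow concrete, here is a minimal standalone sketch of how the two calls are typically combined (the function and buffer names are mine; PostgreSQL's own implementation differs in detail):

#include <execinfo.h>
#include <stdio.h>
#include <stdlib.h>

static void
print_stack(void)
{
    void   *addrs[100];
    int     nframes;
    char  **symbols;

    /* Collect up to 100 return addresses of the current call stack */
    nframes = backtrace(addrs, 100);

    /* Translate the addresses into printable strings */
    symbols = backtrace_symbols(addrs, nframes);
    if (symbols == NULL)
        return;

    for (int i = 0; i < nframes; i++)
        fprintf(stderr, "%s\n", symbols[i]);

    /* The array of strings is malloc'ed by backtrace_symbols() */
    free(symbols);
}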

Note a couple of points :

1. For the function names to be available to backtrace_symbols(), the executable has to be built with linker options that add these symbols to a "dynamic symbol table". These options can be given in one of the following ways (these are gcc compiler options) :
gcc -rdynamic
gcc -Wl,-E

2. Sometimes particular stack frames might be missing when the code is compiled with gcc -O2 or a higher optimization level. E.g. check this sample program backtrace.c from the backtrace() man pages.

I compile it without -O2 :
amit:pg:error$ gcc -rdynamic -o backtrace backtrace.c
I get the full stack :
amit:pg:error$ ./backtrace 6
backtrace() returned 11 addresses
./backtrace(myfunc3+0x2c) [0xaaaad6b2edc0]
./backtrace(+0xe84) [0xaaaad6b2ee84]
./backtrace(myfunc+0x2c) [0xaaaad6b2eebc]
./backtrace(myfunc+0x24) [0xaaaad6b2eeb4]
./backtrace(myfunc+0x24) [0xaaaad6b2eeb4]
./backtrace(myfunc+0x24) [0xaaaad6b2eeb4]
./backtrace(myfunc+0x24) [0xaaaad6b2eeb4]
./backtrace(myfunc+0x24) [0xaaaad6b2eeb4]
./backtrace(main+0x60) [0xaaaad6b2ef28]
/lib/aarch64-linux-gnu/libc.so.6(__libc_start_main+0xe0) [0xffff8c5ba6e0]
./backtrace(+0xcc4) [0xaaaad6b2ecc4]

Now I compile it with -O2 :
amit:pg:error$ gcc -O2 -rdynamic -o backtrace backtrace.c
amit:pg:error$ ./backtrace 6
backtrace() returned 4 addresses
./backtrace(myfunc3+0x38) [0xaaaac7183e40]
./backtrace(main+0x4c) [0xaaaac7183cfc]
/lib/aarch64-linux-gnu/libc.so.6(__libc_start_main+0xe0) [0xffffb91286e0]
./backtrace(+0xd38) [0xaaaac7183d38]

There are no frames for myfunc2() and myfunc(). One possibility is that the compiler has replaced the recursive calls of myfunc() and the myfunc2() call with a direct jump to the tail call myfunc3(), which is known as tail call optimization.

The point being: we need to be aware of such missing frames in a few scenarios.


Tuesday, 23 June 2020

Leveraging SIMD Vectorization

With the advent of column-store databases, there was an urge to make use of SIMD vector processing, since it naturally fits the way table data is arranged. Let's first briefly check what SIMD is. It stands for Single Instruction Multiple Data. Today's CPU instruction sets support this kind of mechanism, where the same instruction can be executed simultaneously on multiple data elements. E.g. say you want to double all the column values, or remove the red component of the RGB values of the pixels of an image. For large data, these operations are CPU bottlenecks. SIMD cuts the CPU time significantly by operating simultaneously on 2, 4, 8, 16 or 32 (or more) data elements, depending on the size of each data element. So suppose we want to do "arr[i] *= 2" for each element of "int32 arr[]". Normally we would iterate through the elements to do this operation, and in the generated assembly code a MUL instruction would run on each element. With SIMD, we would arrange for loading 4 (or more) adjacent array elements into a 128-bit (or larger) CPU "vector" register, then arrange for a "vectorized" version of the MUL instruction to be executed on this register, and repeat this for each subsequent 4-element section of the array.
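
As a toy illustration of that idea (using the ARM NEON intrinsics introduced in the next section; the function name is mine), the "arr[i] *= 2" loop could be written like this:

#include <arm_neon.h>
#include <stdint.h>

/* Double each element of an int32 array, 4 elements at a time */
void
double_elements(int32_t *arr, int n)
{
    int     i = 0;

    /* Vectorized part: 4 int32 values per iteration */
    for (; i + 4 <= n; i += 4)
    {
        int32x4_t v = vld1q_s32(&arr[i]);   /* load 4 elements into a 128-bit register */

        v = vmulq_n_s32(v, 2);              /* vectorized MUL: multiply all 4 lanes by 2 */
        vst1q_s32(&arr[i], v);              /* store the 4 results back */
    }

    /* Scalar tail for the remaining 0-3 elements */
    for (; i < n; i++)
        arr[i] *= 2;
}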

How do we arrange for such vectorized assembly instructions to be generated? Well, one way is to write the assembly code ourselves. But in most cases we won't need that, thanks to the two methods below:

1. Vectorization Intrinsics

For a programmer, an intrinsic is just like any other function call. Underneath, the compiler replaces it with the appropriate assembly instruction. So instead of having to deal with registers using assembly instructions inside C/C++ code, we call the corresponding intrinsic function. Each CPU architecture has its own set of intrinsics and a corresponding header file. As an example, let's vectorize a snippet of PostgreSQL code using the ARM architecture's SIMD intrinsics, to see how big a difference vectorizing things makes. Before that, you might want to quickly go through the NEON architecture to understand the naming conventions for registers, lanes and vectors. NEON is ARM's brand name for its SIMD architecture, and the NEON unit is a mandatory part of an ARMv8 chip.

Here is a PostgreSQL code snippet from the mul_var() function, which is used to multiply two PostgreSQL NUMERIC values. As of this writing, it looks like this:

for (i2 = Min(var2ndigits - 1, res_ndigits - i1 - 3), i = i1 + i2 + 2;
     i2 >= 0; i2--)
   dig[i--] += var1digit * var2digits[i2];

where, the variables are declared as :
int32 *dig;
int16  var1digit, *var2digits;

Here, you can see that the loop iterates i2+1 times. On each iteration, both i and i2 are decremented. That means there is a fixed contiguous section of each of the two arrays where we want to repeatedly do the same arithmetic operation for every array element in that section. The arithmetic being done here is: multiply two int16 variables, and accumulate the product into an int32 variable. An assembly instruction is available that does exactly that: VMLA. The corresponding intrinsic is vmlal_s16().

Let's first simplify the above backward for-loop into an equivalent forward loop :

i2 = Min(var2ndigits - 1, res_ndigits - i1 - 3);
count = i2 + 1;
digptr = &dig[i1 + 2];
for (i = 0; i < count; i++)
   digptr[i] += var1digit * var2digits[i];

So we want to vectorize the above multiply+accumulate statement. We have this intrinsic :
int16x8_t   vmlaq_s16(int16x8_t a, int16x8_t b, int16x8_t c);
This does a+(b*c) and returns the result. a, b and c are vectors. The type int16x8_t signifies that the vector is in a 128-bit NEON register having 8 lanes, each lane holding a 16-bit signed integer. So vmlaq_s16() does the multiply+accumulate operation on all 8 lanes of the 3 vectors in parallel, and returns the 8 result values again in an int16x8_t vector. Each multiply+accumulate is confined to one particular lane of the 3 vectors.
To avoid overflow, as can be seen in the above C snippet, the multiplication is accumulated into a 32-bit integer. So instead of vmlaq_s16(), we have to use an intrinsic that operates on 16-bit values and returns 32-bit values:
int32x4_t  vmlal_s16(int32x4_t a, int16x4_t b, int16x4_t c);
Since only 4 32-bit data elements can be accommodated in a 128-bit vector, only 4 elements can be parallelized rather than 8.

As can be seen, all these operations use the 128-bit registers, even though they need not be fully occupied, as is the case with int16x4 vectors. We first need to load the C array element values into these registers, and at the end store the resultant values back from the registers into the result array elements. We have intrinsics for that as well. Although there are intrinsics that operate on a mix of scalars and vectors, the intrinsic used above takes only vectors, so the same var1digit value is loaded into all 4 lanes of a 16x4 vector.

With these intrinsics, the final code looks like this:

#include <arm_neon.h>
......
......
int i2 = Min(var2ndigits - 1, res_ndigits - i1 - 3);
int remainder;
int count = i2 + 1;
int32 *digptr = &dig[i1 + 2];

/* Load the same var1digit value into all lanes of 16x4 vector. */
int16x4_t   var1digit_16x4 = vdup_n_s16(var1digit);     // VDUP.16 d0,r0

/* Parallelize each group of 4 digits */
remainder = count%4;
count -= remainder;
for (i = 0; i < count; i += 4)
{
    /*
     * 1. Load required data into vectors
     * 2. Do multiply-accumulate-long operation using 16x4 vectors,
     *    whose output is a 32x4 vector which we need, because digptr[]
     *    is 32bit.
     * 3. Store back the result vector into digptr[]
     */

    /* Load 4 var2digits into 16x4 vector and digptr into 32x4 */
    int16x4_t    var2digits_16x4 = vld1_s16(&var2digits[i]);
    int32x4_t    dig_32x4 = vld1q_s32(&digptr[i]);

    /* Vector multiply-accumulate-long: vmlal_<type>. Vr[i] := Va[i] + Vb[i] * Vc[i] */
    dig_32x4 = vmlal_s16(dig_32x4, var1digit_16x4, var2digits_16x4);

    /* Store back the result into &digptr[i] */
    vst1q_s32(&digptr[i], dig_32x4);
}

/* Do the last remaining digits */
for (; remainder != 0; remainder--, i++)
    digptr[i] += var1digit * var2digits[i];


I created a schema that contains numerics with large precisions, as shown here, and ran the following query that multiplies t1.val and t2.val. With the non-vectorized code, the execution time was 0.874 milliseconds:
$ psql -c "explain analyze SELECT t1.id, t2.id, t1.val * t2.val FROM num_data t1, num_data t2"
                                                      QUERY PLAN                                                      
-----------------------------------------------------------------------------------------------------------------------
 Nested Loop  (cost=0.00..1039.85 rows=67600 width=40) (actual time=0.016..0.840 rows=100 loops=1)
   ->  Seq Scan on num_data t1  (cost=0.00..12.60 rows=260 width=275) (actual time=0.003..0.004 rows=10 loops=1)
   ->  Materialize  (cost=0.00..13.90 rows=260 width=275) (actual time=0.001..0.002 rows=10 loops=10)
         ->  Seq Scan on num_data t2  (cost=0.00..12.60 rows=260 width=275) (actual time=0.001..0.002 rows=10 loops=1)
 Planning Time: 0.156 ms
 Execution Time: 0.874 ms
(6 rows)

With the above vectorized code, the same query now executes in 0.360 ms, i.e. more than a 2x speedup:

$ psql -c "explain analyze SELECT t1.id, t2.id, t1.val * t2.val FROM num_data t1, num_data t2"
                                                      QUERY PLAN                                                      
-----------------------------------------------------------------------------------------------------------------------
 Nested Loop  (cost=0.00..1039.85 rows=67600 width=40) (actual time=0.016..0.322 rows=100 loops=1)
   ->  Seq Scan on num_data t1  (cost=0.00..12.60 rows=260 width=275) (actual time=0.007..0.008 rows=10 loops=1)
   ->  Materialize  (cost=0.00..13.90 rows=260 width=275) (actual time=0.001..0.002 rows=10 loops=10)
         ->  Seq Scan on num_data t2  (cost=0.00..12.60 rows=260 width=275) (actual time=0.001..0.002 rows=10 loops=1)
 Planning Time: 0.169 ms
 Execution Time: 0.360 ms
(6 rows)


Since individual digits of the number have to be multiplied by the digits of the other number, the benefit is greater for numerics with large precision. The schema I created has values with precisions in the range of 200-600. But on my ARM64 VM, the benefit starts showing up from around precision 20 onwards.


2. Auto-vectorization

It's not always necessary to write code that uses intrinsics. Often, if we arrange/simplify the code, today's compilers, with appropriate compiler options, can identify that the code can be vectorized and generate assembly instructions that leverage the CPU architecture's SIMD. In fact, above, where I simplified the backward for-loop into a forward for-loop with a single incrementing variable, gcc is able to auto-vectorize the simplified loop. Here are the changes again:

diff --git a/src/backend/utils/adt/numeric.c b/src/backend/utils/adt/numeric.c
index f3a725271e..4243242ad9 100644
--- a/src/backend/utils/adt/numeric.c
+++ b/src/backend/utils/adt/numeric.c
@@ -7226,6 +7226,7 @@ mul_var(const NumericVar *var1, const NumericVar *var2, NumericVar *result,
      int                res_weight;
      int                maxdigits;
      int            *dig;
+      int            *digptr;
      int                carry;
      int                maxdig;
      int                newdig;
@@ -7362,10 +7363,14 @@ mul_var(const NumericVar *var1, const NumericVar *var2, NumericVar *result,
             *
             * As above, digits of var2 can be ignored if they don't contribute,
             * so we only include digits for which i1+i2+2 <= res_ndigits - 1.
+            *
+            * For large precisions, this can become a bottleneck; so keep this for
+            * loop simple so that it can be auto-vectorized.
             */
-            for (i2 = Min(var2ndigits - 1, res_ndigits - i1 - 3), i = i1 + i2 + 2;
-                  i2 >= 0; i2--)
-                  dig[i--] += var1digit * var2digits[i2];
+            i2 = Min(var2ndigits - 1, res_ndigits - i1 - 3);
+            digptr = &dig[i1 + 2];
+            for (i = 0; i <= i2; i++)
+                  digptr[i] += var1digit * var2digits[i];
      }

With this change, in the mul_var() assembly code I could see the multiply-accumulate instructions that operate on NEON vectors (these are ARM64 instructions):
    smlal   v1.4s, v2.4h, v3.4h
    smlal2  v0.4s, v2.8h, v3.8h

The gcc compiler option to enable auto-vectorization is "-ftree-loop-vectorize". With gcc -O3, it is enabled by default.

Although there are examples where gcc is able to auto-vectorize even backward loops, in the above case it could not do so for the original code, seemingly because of the two decrementing variables. That's why I had to simplify it into a forward loop with a single incrementing variable, which is as simple as it gets.

To check whether gcc has been able to vectorize a particular code, use the  gcc -fopt-info-all option. This outputs info such as this :
numeric.c:7217:3: optimized: loop vectorized using 16 byte vectors
Or in case it can't vectorize, you would see something like this :
numeric.c:7380:3: missed: couldn't vectorize loop
numeric.c:7381:15: missed: not vectorized: relevant stmt not supported: _39 = *_38;

With this auto-vectorization method, the speedup I observed was around 2.7x. This is higher than with the intrinsics method, probably because the compiler used a better combination of vectorized assembly instructions than I did.

Conclusion

Vectorizing operations gives significant returns for repetitive operations. Although it is best suited to columnar data, there could be some regions in the current PostgreSQL code that might benefit from such tweaks to leverage SIMD. As far as possible, we should arrange for the compiler's auto-vectorization: such a change is cleaner and clearly portable. Compare this with method 1, where we had to use intrinsics specific to the CPU architecture; but that example was chosen for the sake of explaining how to make use of intrinsics. In cases where it is not possible for the compiler to vectorize the code, we should use compiler intrinsics. E.g. check this out.

Thursday, 21 May 2020

PostgreSQL on ARM


In my last blog, I wrote that applications that have been running on x86 might need to undergo some adaptation if they are to be run on a different architecture such as ARM. Let's see what it means exactly.

Recently I have been playing around with the PostgreSQL RDBMS on an ARM64 machine. A few months back, I didn't even know whether it could be compiled on ARM, being oblivious to the fact that there has been a regular ARM64 build farm member for quite a while. And now even the PostgreSQL apt repository has started making PostgreSQL packages available for the ARM64 architecture. But the real confidence in the reliability of PostgreSQL-on-ARM came after I tested it in different kinds of scenarios.

I started with read-only pgbench tests and compared the results on the x86_64 and ARM64 VMs available to me. The aim was not to compare any specific CPU implementation. The idea was to find scenarios where PostgreSQL on ARM does not perform as well, relative to x86, as it does in other scenarios.


Test Configuration

ARM64 VM:
   Ubuntu 18.04.3; 8 CPUs; CPU frequency: 2.6 GHz; available RAM : 11GB
x86_64 VM:
   Ubuntu 18.04.3; 8 CPUs; CPU frequency: 3.0 GHz; available RAM : 11GB

Following was common for all tests :

PostgreSQL parameters changed : shared_buffers = 8GB
pgbench scale factor : 30
pgbench command :
for num in 2 4 6 8 10 12 14
do
   pgbench [-S]  -c $num -j $num -M prepared -T 40
done
What this means is: pgbench is run with an increasing number of parallel clients, from 2 up to 14.


Select-only workload

pgbench -S option is used for read-only workload.


Between 2 and 4 threads, the x86 performance is 30% more than ARM, and the difference keeps rising. Between 4 and 6, the curves flatten a bit, and between 6 and 8, the curves suddenly become steep. After 8, the curves were expected to flatten out or dip, because the machines had 8 CPUs. But there is more to it. The pgbench clients were running on the same machines where the servers were installed, and with fully utilized CPUs, the clients took around 20% of the CPU. So they start to interfere from 6 threads onward. In spite of that, there is a steep rise between 6 and 8, for both ARM and x86. I don't yet understand this, but possibly it has something to do with the Linux scheduler and the interaction between the pgbench clients and the servers. Note that the curve shape is mostly similar on both x86 and ARM, so this behaviour is not specific to either architecture. One difference in the curves, though, is that the ARM curve has a somewhat bigger dip from 8 threads onward. Also, between 6 and 8, the sudden jump in transactions is not as steep for ARM as it is for x86. So the end result in this scenario is: as the CPUs become more and more busy, PostgreSQL on ARM lags behind x86 more and more. Let's see what happens if we remove the interference created by the pgbench clients.


select exec_query_in_loop(n)

So, to get rid of the noise caused by running both client and server on the same machine, I arranged to test exactly what I intended to test: query performance. For this, the pgbench clients could run on different machines, but that might create a different kind of noise: network latency. So instead, I wrote a PostgreSQL C-language user-defined function that keeps executing, in a loop, the exact same SQL query that this pgbench test runs. This function is executed using a pgbench custom script. Now the pgbench clients are mostly idle. Also, this doesn't take the commit/rollback time into account, because most of the time is spent inside the C function.

pgbench custom script : select exec_query_in_loop(n);
where n is the number of times the pgbench query will be executed on the server in a loop.
The loop query is the query that gets normally executed with pgbench -S option:
SELECT abalance FROM pgbench_accounts WHERE aid = $1
Check details in exec_query_in_loop()
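
For reference, below is a rough sketch of how such a function could be written using PostgreSQL's Server Programming Interface (SPI). The actual function is the one linked above; the aid range used here is my assumption based on the scale factor of 30 (3,000,000 accounts).

#include "postgres.h"
#include "catalog/pg_type.h"
#include "executor/spi.h"
#include "fmgr.h"

PG_MODULE_MAGIC;

PG_FUNCTION_INFO_V1(exec_query_in_loop);

/*
 * Execute the pgbench -S query n times on the server side, so that the
 * pgbench client stays idle while the server does all the work.
 */
Datum
exec_query_in_loop(PG_FUNCTION_ARGS)
{
    int32       n = PG_GETARG_INT32(0);
    Oid         argtypes[1] = {INT4OID};
    SPIPlanPtr  plan;
    int         i;

    SPI_connect();

    /* Prepare the query once, mimicking pgbench -M prepared */
    plan = SPI_prepare("SELECT abalance FROM pgbench_accounts WHERE aid = $1",
                       1, argtypes);

    for (i = 0; i < n; i++)
    {
        /* Cycle through aids; 3000000 accounts assumes scale factor 30 */
        Datum   values[1] = {Int32GetDatum(i % 3000000 + 1)};

        SPI_execute_plan(plan, values, NULL, true /* read-only */, 0);
    }

    SPI_finish();

    PG_RETURN_INT32(n);
}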




Now you see a very different curve. For both curves, up to 8 threads, the transaction rate is linearly proportional to the number of threads. After 8, as expected, the transaction rate doesn't rise. And it has not dipped, even for ARM. PostgreSQL on ARM is consistently around 35% slower than on x86. This sounds not that bad when we consider that the ARM CPU frequency is 2.6 GHz whereas x86 is 3.0 GHz. Note that the transaction rate is in single digits, because the function exec_query_in_loop(n) is executed with n=100000.

This experiment also shows that the previous results, using the built-in pgbench script, have to do with pgbench client interference, and that the dip in the ARM curve for contended threads is not caused by contention in the server. Note that the transaction rates are calculated on the client side. So even when a query's results are ready, there may be some delay in the client requesting the results, calculating the timestamp, etc., especially in high contention scenarios.


select exec_query_in_loop(n) - PLpgSQL function

Before using the user-defined C function, I had earlier used a PL/pgSQL function to do the same work. There, I stumbled across a different kind of performance behaviour.





Here, PostgreSQL on ARM is around 65% slower than on x86, regardless of the number of threads. Comparing this with the previous results that used a C function, it is clear that PL/pgSQL execution is remarkably slower on ARM, for some reason. I checked the perf report, but more or less the same hotspot functions are seen on both ARM and x86. Yet, for some reason, anything executed inside a PL/pgSQL function becomes much slower on ARM than on x86.

I have yet to check the cache misses to see if they are higher on ARM. As of this writing, what I did was this (some PostgreSQL internals here): exec_stmt_foreach_a() calls exec_stmt(). I cloned exec_stmt() into exec_stmt_clone(), and made exec_stmt_foreach_a() call exec_stmt_clone() instead. This sped up the overall execution, but it sped it up 20% more on ARM. Why just this change caused this behaviour is something of a mystery to me as of now. Maybe it has to do with the location of the function in the program; I am not sure.


Updates

By default, pgbench runs the built-in tpcb-like script, which performs updates on multiple tables.



Here, the transaction rate is only around 1-10% less on ARM compared to x86. This is probably because a major portion of the time goes into waiting for locks and into disk writes during commits, and the disks I used are non-SSD disks. But overall it looks like updates in PostgreSQL work well on ARM.


Next, I am going to test with aggregate queries, partitions, a high number of CPUs (32/64/128), larger RAM and a higher scale factor, to see how PostgreSQL scales on the two platforms relative to each other with large resources.


Conclusion

We saw that the PostgreSQL RDBMS works quite robustly on ARM64. While it is tricky to compare performance across two different platforms, we could still identify the areas where it is not doing well by comparing patterns of behaviour across different scenarios on the two platforms.

ARM: Points to be noted


The story of ARM began in 1990, when Apple, Acorn and VLSI formed the joint venture Advanced RISC Machines, whose processors went on to power the "Apple Newton" handheld launched in 1993. And the story continues today with news that Apple is going to switch their Macs to ARM processors. What has not changed in the story is ARM's reputation as a power-efficient processor. This is the primary reason why it is so popular in smartphones, and why it has made its way into smart cars, drones and other internet-of-things devices where it is crucial to preserve battery life and minimize heat generation. Today even data centers can run on ARM. With such widespread market disruption happening, I thought of putting down some specific points which I think are good to know for users and software developers who have just begun using the ARM ecosystem ...

The reason why ARM's power consumption is lower has to do with the inherent nature of the RISC architecture on which ARM is based. RISC instructions are simple enough that most of them execute in a single clock cycle; so they require fewer transistors, and hence less power is required and less heat is generated.

OK, but then why did ARM processors start making their way into data centers? After all, mobile phones and data centers don't have anything in common. Or do they?

Well, both consume power, and both need to perform well for a given price. Even though data centers are huge compared to a mobile phone, their CPU usage is also huge, so power efficiency is equally important. And so is the price for a given performance.


Divide and conquer

So, just replace the existing expensive processors with a larger number of cheaper ARM processors, so that the total CPU power equals the existing power? Yes, this does work. Suppose there are 4 CPUs serving 16 parallel processes; it's better for them to instead be served by 8 or 16 lower-performing CPUs. Overall throughput will likely be higher.

But what if there is a single long database query which needs high CPU power? Even here, the database can use multiple CPUs to run a parallelized query. Here we see that even the software needs to adapt to this paradigm shift: divide the task into a number of parallel tasks wherever possible. More than the power of a single CPU, what counts is the total power of all the CPUs.

Another thing is that the workloads are not always high. For instance, cloud service workloads are typically mixed, frequently with numerous small tasks, where again a server with a large number of low-power CPUs fits well.


big.LITTLE

In ARM's big.LITTLE architecture, there can be two or more types of cores with different performance capacities in the same SoC. If the workload processed by one of them changes, the other one can take over that workload on the fly if it is more suitable for the changed workload. This way, unnecessary power usage and heat generation are prevented, because the low-power core type gets chosen whenever it is sufficient. The Linux kernel has had support for doing such scheduling specifically for big.LITTLE.


ARM's licensing model

As many of you might know, ARM does not manufacture chips; it designs them. Its clients buy a license to manufacture chips based on ARM's designs. There are two kinds of licenses.

One is the core license. When a company buys the core license, it has to manufacture the complete CPU core using ARM's in-house core design without modifying it. ARM's family of core designs that it licenses is named Cortex-A**. E.g. in Qualcomm's Snapdragon 855 chipset, all CPU cores are based on the Cortex-A series, which means Qualcomm used the ARM core license.

The other is the ARM architecture license. When a company buys this license rather than the core license, it has to design its own core, but the core design has to be compatible with the ARM instruction set. Such cores are often called custom cores, because they have their own micro-architecture that is not designed by ARM. This gives big companies the flexibility to build cores to suit their own needs. Companies like Qualcomm, Huawei, Apple and Samsung have built such custom cores.

The beauty of this licensing model is that the ready-made core design is available to just about anybody (of course, a license has to be bought). Hence there are a number of vendors who have all manufactured compatible chips. This drives innovation and competition.


Applications

Applications for mobile devices were already written from scratch for ARM processors. But what about the software running on servers? Well, the Linux kernel has support for ARM, so OSes like Ubuntu, CentOS and Debian already have officially supported ARM images. Furthermore, if you are running on, say, Ubuntu, almost all the usual x86 packages present in the Ubuntu repository are already there for ARM as well, at least for ARMv8. I was able to install the PostgreSQL database package, and have been running pgbench with high contention, and it runs just fine. (Probably in later blogs, I will elaborate further on PostgreSQL.) Also, compilers like gcc/g++ are already tuned for the ARM architecture, so most of the hardware-specific compiler optimizations are done transparently for ARM.

But when it comes to running software meant for data servers, a lot of adaptation might be required to get reasonable performance. For instance, applications have to be aware of the implications of ARM's weak memory model, especially for code synchronization. Secondly, they should leverage built-in ARM capabilities like NEON (ARM's brand name for SIMD) to parallelize the same operation over multiple data elements; and so on.
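
To illustrate the memory model point with a small, hypothetical example: a publish/consume pattern that often happens to work with plain loads and stores on x86's stronger memory model needs explicit ordering (or proper locking primitives) on ARM, e.g. with C11 atomics:

#include <stdatomic.h>

static int          payload;        /* data shared between two threads */
static atomic_int   ready;          /* flag that publishes the payload */

/* Producer thread */
void
publish(int value)
{
    payload = value;
    /* Release store: the payload write must become visible before the flag */
    atomic_store_explicit(&ready, 1, memory_order_release);
}

/* Consumer thread */
int
consume(void)
{
    /* Acquire load: pairs with the release store above */
    while (!atomic_load_explicit(&ready, memory_order_acquire))
        ;                           /* spin until published */
    return payload;
}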

A lot of research and analysis is going on to optimize software running in the ARM ecosystem as a whole. But we are already seeing a gradual transition and adaptation to this ecosystem.
