Oracle patch set 12.1.0.2 has been released and with it comes an important new feature: the In-Memory option. CERN has been involved in testing this feature since an early stage, so I'd like to take the occasion to share my experience with you!
What is it?
It is a new static pool in the System Global Area (SGA) that keeps a copy of the data in memory in columnar format:
How to use it?
The new static parameter "inmemory_size" (default: 0) controls the amount of memory assigned to the In-Memory Column Store. For example, to assign 100 GB of memory:
alter system set inmemory_size=100G scope=spfile;
Now let's see how the dynamic memory components in the SGA change before and after restarting the database:
select substr(component,1,20) as component,
       current_size/1024/1024/1024 as current_size_GB
from v$memory_dynamic_components
where current_size != 0
  and component not like 'PGA%'
order by current_size desc;
My SGA target was set to 186G. Before assigning memory to the IMC store, the default buffer cache was using the bulk of the SGA memory: 169 GB. After giving 100G to the IMC store, the buffer cache was left with 69 GB.
Of course, the memory used by the new column store has to come from somewhere! One of the performance challenges when using the new In-Memory option will be deciding how to use the memory available in your system. Keep in mind that OLTP workloads work best with row-based storage and benefit from data stored in the buffer cache, while the columnar store helps to optimize analytics-type workloads.
Once the inmemory_size is defined, we need to indicate which tables are candidates to be stored in the IMC store, using ALTER TABLE ... INMEMORY.
To indicate that the table "electron" is to be populated into the In-Memory Column Store after the first full-table scan:
ALTER TABLE "electron" INMEMORY ;
To indicate that the table "electron" is to be populated into the In-Memory Column Store immediately after instance start-up:
ALTER TABLE "electron" INMEMORY PRIORITY CRITICAL ;
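The same ALTER TABLE syntax also works in the other direction: you can take a table out of the column store again, or revert it to the default on-demand population. A quick sketch, using the same "electron" table:

ALTER TABLE "electron" NO INMEMORY ; --remove the table from the IMC store
ALTER TABLE "electron" INMEMORY PRIORITY NONE ; --revert to on-demand population (the default)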
Similarly, you can define various compression options as follows:
ALTER TABLE "electron" INMEMORY MEMCOMPRESS FOR QUERY; --default compression
ALTER TABLE "electron" INMEMORY MEMCOMPRESS FOR CAPACITY HIGH; --capacity high compression
ALTER TABLE "electron" INMEMORY MEMCOMPRESS FOR CAPACITY LOW; --capacity low compression
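To verify which in-memory attributes are currently set on your tables, you can query the data dictionary; in 12.1.0.2 the *_TABLES views carry the new INMEMORY, INMEMORY_PRIORITY and INMEMORY_COMPRESSION columns. For example:

select table_name, inmemory, inmemory_priority, inmemory_compression
from user_tables
where inmemory = 'ENABLED';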
In-Memory Column Store population
Some work has to be done to transform the row-based storage into the columnar format. Whenever In-Memory population is triggered, the background workers spring to life. IM population starts after the first full scan of the table, or at instance startup for tables marked PRIORITY CRITICAL.
The IM background-workers can be seen running on the instance as processes named ORA_WXXX_orasid:
The default number of IM background workers is half the number of CPU cores. While IM population is in progress, the database remains available for running queries, but you won't see the full benefit of reading your table data from the IMC store until the population is complete.
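For tables with the default priority, population only starts once a full scan touches the table. One way to trigger it by hand, sketched here for the "electron" table (the FULL hint name is standard Oracle syntax; the table name is from my dataset):

select /*+ FULL(e) */ count(*) from DATA12_8TEV."electron" e; --full scan triggers IM population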
You can take a peek at V$IM_SEGMENTS to see the status of your table population:
select segment_name,
       ROUND(SUM(BYTES)/1024/1024/1024,2) "Orig. Bytes GB",
       ROUND(SUM(INMEMORY_SIZE)/1024/1024/1024,2) "In-memory GB",
       ROUND(SUM(BYTES-BYTES_NOT_POPULATED)*100/SUM(BYTES),2) "% bytes in-memory",
       ROUND(SUM(BYTES-BYTES_NOT_POPULATED)/SUM(INMEMORY_SIZE),2) "compression ratio"
from V$IM_SEGMENTS
group by owner, segment_name
order by SUM(bytes) desc;
You will see "% bytes in-memory" grow as the table population progresses.
Another useful new view is V$INMEMORY_AREA, which shows how much space in the IM pools is used and the current population status:
select POOL,
       ROUND(ALLOC_BYTES/1024/1024/1024,2) as "ALLOC_BYTES_GB",
       ROUND(USED_BYTES/1024/1024/1024,2) as "USED_BYTES_GB",
       populate_status
from V$INMEMORY_AREA;
(showing example of pool-sizes for "inmemory_size=120G")
Example of compression rates
The compression rate for data stored in the In-Memory Column Store depends on the type of data you have in the table. This table shows the compression rates I got for some of my tables using "MEMCOMPRESS FOR QUERY" and "MEMCOMPRESS FOR CAPACITY HIGH" on my dataset:
The "electron" table contains a mixture of floats, doubles and integers used to describe electron particles recorded by the experiment. The "Event Filter" table contains booleans stating whether or not a set of conditions was passed (most of the booleans are "false"). The "Missing Energy" table contains only floats and doubles, all of which describe energy values that are unique for each measurement; here the compression rate is the lowest. As expected, the compression rate strongly depends on the data you are storing, but a clear benefit of the IM store is that you will be able to fit more data in memory than you could using the buffer cache.
Performance tests using the In-Database Physics Analysis benchmark
For my work in the CERN openlab collaboration I've looked at the possibility of using database technology for performing particle physics analysis. This is a typical analytics workload involving full table scans of tables containing many columns. The "electron" table in my test dataset has 340 columns and 50 million rows. I start with a simple test using a "select count(*)" with a predicate filter on 2 columns:
select count(*) from DATA12_8TEV."electron" where "pt">18000. and abs("eta")<2.5;
This query executes 40 times faster when reading data from the In-Memory Column Store than when reading the same data from the buffer cache. The improved performance illustrates the advantage of columnar storage: the query can directly access the 2 columns used in the predicate filter, rather than extracting the relevant columns from blocks in row-based storage.
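You can confirm that the optimizer is actually reading from the column store by inspecting the execution plan, where an in-memory scan appears as "TABLE ACCESS INMEMORY FULL". A minimal sketch using the standard DBMS_XPLAN package:

explain plan for
select count(*) from DATA12_8TEV."electron" where "pt">18000. and abs("eta")<2.5;

select * from table(dbms_xplan.display);
--look for "TABLE ACCESS INMEMORY FULL" in the plan output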
The count(*) query is a bit boring, so let's look at an actual physics analysis query. The following query selects proton-collision events where we detected high-momentum, oppositely charged electron pairs:
with "sel_electron" as (
  select "electron_i","RunNumber","EventNumber","E","px","py","pz","charge","pt","phi","eta"
  from DATA12_8TEV."electron"
  where ("author"=1 or "author"=3)
    and "cl_E"/cosh("tracketa") > 10000.
    and (abs("cl_eta")!=0 and abs("cl_eta")<2.47)
    and (abs("cl_eta")<1.37 or abs("cl_eta")>1.52)
    and BITAND("OQ",1446)=0
    and abs("trackz0pvunbiased") < 2.
    and "tightPP"=1
)
select "RunNumber","EventNumber",el_sel_n,
       electron0."electron_i" as mu_id0, electron1."electron_i" as mu_id1,
       electron0."charge" as mu_charge0, electron1."charge" as mu_charge1,
       electron0."pt"/1000. as mu_pt0, electron1."pt"/1000. as mu_pt1,
       electron0."eta" as el_eta0, electron1."eta" as el_eta1,
       (case when abs(electron0."phi"-electron1."phi")
This query executes around 7x faster using data stored in the IMC store rather than the buffer cache!
Displaying the "INV_MASS" variable returned by the query as a histogram gives us an interesting image:
That big peak around INV_MASS=90 is caused by electron-positron pairs produced by the decay of the Z0 boson, one of the elementary particles that mediate the weak interaction. Since you're reading the CERN database blog, I hope you also enjoy learning something about particle physics!