How to speed up external C functions via Java

Are you calling external C libraries from inside your Oracle SQL? Especially if those calls are made on a row-by-row basis, you might see a significant performance benefit from calling your external C libraries via Java!

Introduction: As part of my work as an Openlab fellow for the CERN IT-DB group, I've been looking at the possibility of performing in-database physics analysis. Currently, analysis of physics data from the LHC experiments is mainly done by writing C++ code. It's relatively easy to write the basic event selection criteria as select statements applying predicates via the where clause, but I need the option of calling some of the existing C++ code from inside the SQL.

Calling external code in Oracle SQL

To test this I used some C++ code from the ATLAS experiment which recalculates a specific variable for a large number of rows in my table. Calling a piece of C(++) code from SQL is not that difficult. First I compile the code as a standalone shared library and copy it to the $ORACLE_HOME/bin directory. I then need to create a new external library object in Oracle (inside the schema calling the code) using the following SQL command:
CREATE OR REPLACE LIBRARY "MV1_lib" AS '/ORA/dbs01/oracle/product/rdbms/bin/';
And finally I create a PL/SQL function that calls this library:
FUNCTION mv1Eval_fromExternal( w_IP3D double precision, w_SV1 double precision, w_JetFitterCombNN double precision, jet_pt double precision, jet_eta double precision ) RETURN double precision
AS EXTERNAL LIBRARY "MV1_lib"
NAME "mv1Eval"
LANGUAGE C
PARAMETERS (w_IP3D double, w_SV1 double, w_jetFitterCombNN double, jet_pt double, jet_eta double);
The C++ code is stored as MV1.cpp which contains the function "mv1Eval". This function calculates the likelihood that a "jet" of particles seen in our particle accelerator was caused by the production of a b-quark in the event.
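For reference, extproc binds the NAME given in the PL/SQL declaration to a routine exported with C linkage whose signature matches the PARAMETERS clause. A minimal sketch of that shape is below; the real mv1Eval body is ATLAS analysis code and is not reproduced here, so the return value is only a placeholder:

```cpp
// Sketch of the C-linkage entry point that extproc looks up in the shared
// library. The actual ATLAS b-tagging likelihood calculation is omitted
// and replaced by a placeholder return value.
extern "C" double mv1Eval(double w_IP3D, double w_SV1,
                          double w_JetFitterCombNN,
                          double jet_pt, double jet_eta) {
    // ... the real likelihood computation over the input weights goes here ...
    return 0.0;  // placeholder only
}
```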

A physics analysis query calling external code

I call this code in the following query, which returns the invariant mass of pairs of b-tagged jets detected in the LHC collision events:
with sel_jet as (select /*+ MATERIALIZE */ "jet_i","EventNo_RunNo","E","pt","phi","eta","fl_w_IP3D","fl_w_SV1","fl_w_JetFitterCOMBNN" from "jet" where "pt">25000. and abs("eta")<2.5),
sel_bjet as (select /*+ MATERIALIZE */ "jet_i","EventNo_RunNo","E","pt","phi","eta" from sel_jet where mv1Eval_fromExternal("fl_w_IP3D","fl_w_SV1","fl_w_JetFitterCOMBNN","pt","eta")>0.60173),
sel_jet_events1 as (select "EventNo_RunNo" from sel_bjet group by "EventNo_RunNo" HAVING MAX("pt")>45000. and COUNT(*) = 2)
select "EventNo_RunNo",
INV_MASS_JETS(jet0."E",jet1."E",jet0."pt",jet1."pt",jet0."phi",jet1."phi",jet0."eta",jet1."eta")/1000. as "DiJetMass"
from sel_jet_events1 INNER JOIN sel_jet jet0 USING ("EventNo_RunNo") INNER JOIN sel_jet jet1 USING ("EventNo_RunNo") where jet0."jet_i"<jet1."jet_i"

The above query in words: the first select pre-filters my "jet" table (27 million rows) with a full table scan, selecting jets with high transverse momentum (pt>25000.) in the central region of our detector (|eta|<2.5); the second select calls the "mv1Eval_fromExternal" function to select jets that pass the likelihood criterion for b-quarks (>0.60173); the third select requires exactly 2 b-jets per event (of which one with pt>45000.); and the final select returns the invariant mass for each pair of b-jets in the selected events.
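For illustration, the INV_MASS_JETS helper in the query presumably applies the standard two-body invariant-mass formula m^2 = (E1+E2)^2 - |p1+p2|^2, with px = pt*cos(phi), py = pt*sin(phi) and pz = pt*sinh(eta). The database function itself isn't shown in this post, so the C++ sketch below is only illustrative; the function name and argument order are assumed to mirror the SQL call:

```cpp
#include <cmath>

// Illustrative sketch of what INV_MASS_JETS might compute: the invariant
// mass of a two-jet system from each jet's energy E, transverse momentum
// pt, azimuthal angle phi and pseudorapidity eta. The momentum components
// are px = pt*cos(phi), py = pt*sin(phi), pz = pt*sinh(eta).
extern "C" double inv_mass_jets(double E1, double E2,
                                double pt1, double pt2,
                                double phi1, double phi2,
                                double eta1, double eta2) {
    double px = pt1 * std::cos(phi1) + pt2 * std::cos(phi2);
    double py = pt1 * std::sin(phi1) + pt2 * std::sin(phi2);
    double pz = pt1 * std::sinh(eta1) + pt2 * std::sinh(eta2);
    double E  = E1 + E2;
    // m^2 = E^2 - |p|^2 for the summed four-vector
    return std::sqrt(E * E - (px * px + py * py + pz * pz));
}
```

For two massless back-to-back jets of equal energy, this reduces to m = E1 + E2, which is a handy sanity check.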

The query took 511 seconds to complete, but where was this time spent? The SQL monitoring plan shows me the following:

163 seconds were spent on the initial full table scan, and 343 seconds were spent scanning the in-line view that calls the external function!

To test why the external function call took so long, I replaced the body of "mv1Eval" with a simple "return 1.", and still more than 300 seconds were spent calling the external function! As it turns out, the time spent was dominated by the overhead of the "extproc" agent passing data between the database process and the external procedure, rather than by the actual call of the code itself.

How to make it faster

It was suggested to me to use Java instead of C++, as Java provides a controlled environment executed within the same process and address space as the Oracle process. Rather than re-writing the C++ code in Java, I decided to try calling the code via JNI (the Java Native Interface). Here's how I did this:

PL/SQL function calling the Java procedure:
FUNCTION mv1Eval_java( w_IP3D IN NUMBER, w_SV1 IN NUMBER, w_JetFitterCombNN IN NUMBER, jet_pt IN NUMBER, jet_eta IN NUMBER ) return double precision
as language java
name 'MV1_interface.mv1Eval(double, double,double,double,double) return double';

Java code calling the C-interface as a native function:
public class MV1_interface {
    public native static double mv1Eval(double fl_w_IP3D, double fl_w_SV1, double fl_w_JetFitterCOMBNN, double pt, double eta);
}

Load the java-code into the database:
loadjava -u MLIMPER/ -v -resolve

Compile the Java code and generate the matching MV1_interface.h header (the MV1_interface.c implementation is written by hand in the next step) using the Java commands:
javac MV1_interface.java
javah MV1_interface


Write MV1_interface.c to call the mv1Eval function from MV1.cpp:
#include "MV1_interface.h"
#include "MV1.h"
JNIEXPORT jdouble JNICALL Java_MV1_1interface_mv1Eval (JNIEnv *env, jclass cls, jdouble w_IP3D, jdouble w_SV1, jdouble w_JetFitterCombNN, jdouble jet_pt, jdouble jet_eta){
double value = mv1Eval(w_IP3D, w_SV1, w_JetFitterCombNN, jet_pt, jet_eta);
return value;
}

Compile the code to create the shared library (here <library>.so stands for the shared-object filename):
g++ -fPIC -Wall -I${JAVA_HOME}/include -I${JAVA_HOME}/include/linux -shared -o <library>.so MV1.cpp MV1_interface.c

Copy the shared library to the location you defined in the CREATE LIBRARY statement:
cp <library>.so /ORA/dbs01/oracle/product/rdbms/bin

And finally you need to set the permissions (as sysdba!) to allow the user to load the library:
select seq, kind, grantee, name, enabled from dba_java_policy where name like '%java.lang.RuntimePermission%';
exec dbms_java.disable_permission(98);
-- the permission name below is assumed: loadLibrary.<library> should match the native library being loaded
exec dbms_java.grant_permission('MLIMPER','SYS:java.lang.RuntimePermission','loadLibrary.<library>','');
exec dbms_java.enable_permission(98);


When I use mv1Eval_java in the same query as before, I see that the query execution time is significantly reduced:

The monitoring plan shows that now only 80 seconds were spent during the full table scan calling the mv1Eval function. Excellent, that's more than 4 times faster!


