Oracle

Minimal Oracle - 1

The Oracle Database software is large: several gigabytes in the Oracle Home for the part that is deployed on the operating system, and additional megabytes in the SYSTEM tablespace for the part that is deployed as stored procedures (mainly the dbms_% packages). This is not a problem with traditional deployment methods, where you can have a .zip golden image of the Oracle Home and a database template to start a new DB. But this monolithic approach is not adapted to the way people want to deploy software today.

Oracle Index compression for range scan on file names

Do you have tables with a column storing filenames? Long filenames with the full path? If so, you have probably realized how large an index on this column can be. And when looking at the sorted values, you have seen its inefficiency: a big part of the full name is repeated, because files in the same (sub)directory share the same prefix. The 12cR2 Advanced Index Compression (COMPRESS ADVANCED LOW) does not help here, because it only compresses identical values, like the basic compression of tables. With unique filenames, we cannot expect any benefit.
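
As a minimal sketch (hypothetical FILES table, FILE_NAME column and connection string, not taken from the post), this is how you could check whether COMPRESS ADVANCED LOW changes the size of such an index on an already populated table:

sqlplus -s demo/demo@//localhost/PDB1 <<'SQL'
-- FILES(FILE_NAME) is a hypothetical table storing full path names
create index files_name_idx on files ( file_name );
select segment_name, round(bytes/1024/1024) as mb
from   user_segments where segment_name = 'FILES_NAME_IDX';
-- advanced low compression deduplicates identical key values only,
-- so with unique full path names the size is not expected to change
alter index files_name_idx rebuild compress advanced low;
select segment_name, round(bytes/1024/1024) as mb
from   user_segments where segment_name = 'FILES_NAME_IDX';
SQL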

Oracle LIKE predicate and cardinality estimations

There are not many ways to access table rows efficiently. Either you want a lot of them, because your predicate is not very selective, and you read the whole table as fast as you can: this is the Table Full Scan. Or you use a structure that gives you access to only the subset of rows you need. There are mostly two structures for that: sort and hash.
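
As an illustration (hypothetical FILES table and pattern, not from the post), you can compare the cardinality the optimizer estimates for a LIKE predicate with the actual row count:

sqlplus -s demo/demo@//localhost/PDB1 <<'SQL'
-- estimated Rows for a prefixed LIKE pattern
explain plan for
  select * from files where file_name like '/u01/app/oracle/%';
select * from table(dbms_xplan.display(format=>'basic +rows'));
-- actual number of matching rows, to compare with the estimation
select count(*) from files where file_name like '/u01/app/oracle/%';
SQL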

An .rpm to install Oracle Database 18c

It was announced at Oracle Open World 2017 and here it is just before the start of OOW18: an RPM to install the Oracle Database software.

Download

On the Oracle Database 18c download page there are two files for 18.3. One is a zip of the Oracle Home that we have to unzip before running the setup (named runInstaller, but different from the one we had in pre-18c releases). The other file is an RPM: oracle-database-ee-18c-1.0-1.x86_64.rpm
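
As a sketch of the documented steps (assuming an OL7/RHEL7 host and the default ORCLCDB service name, run as root), the RPM installation looks like this:

# install the prerequisites and then the Oracle Home itself from the RPMs
yum -y install oracle-database-preinstall-18c
yum -y localinstall oracle-database-ee-18c-1.0-1.x86_64.rpm
# optionally create and configure the default ORCLCDB database
/etc/init.d/oracledb_ORCLCDB-18c configure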

ODC Appreciation Day : Reduce CPU usage by running the business logic in the Oracle Database

Here is my #ThanksODC post. A long one... There's a point that should always be a major topic for database developer community discussions: where to run the procedural code. The access to data is in the database, for sure, and the language for it is SQL. But very often, the business logic of a transaction cannot be executed in one single SQL statement, for example because it is too complex and requires a procedural language.

Unindexed Foreign Keys in Oracle and PostgreSQL

In Oracle, we need an index on the foreign key column as soon as we intend to delete rows from the parent table, or a locking situation may block all transactions on the child table. PostgreSQL has a similar way to manage isolation, with MVCC, so do you think you also need to index the foreign keys there? Here is a test that confirms that PostgreSQL does not need to lock the tables, even without an index on the foreign key.
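
As a minimal sketch of the Oracle side (hypothetical table names and connection, not the exact test from the post), this is the situation where the missing index hurts:

sqlplus -s demo/demo@//localhost/PDB1 <<'SQL'
create table parent ( id number primary key );
create table child  ( id number primary key,
                      parent_id number references parent );
insert into parent values (1);
insert into child  values (1, 1);
commit;
-- without an index on CHILD.PARENT_ID, deleting a parent row takes a
-- table-level share lock on CHILD, which can block concurrent DML on it
delete from parent where id = 1;
rollback;
-- the usual fix:
create index child_parent_fk on child ( parent_id );
SQL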

Efficiently query DBA_EXTENTS for FILE_ID / BLOCK_ID

Did you ever try to query DBA_EXTENTS on a very large database with LMT tablespaces? I had to, in the past, in order to find which segment a corrupt block belonged to. The information about extent allocation is stored in the datafile headers, visible through X$KTFBUE, and queries on it can be very expensive. In addition to that, the optimizer tends to start with the segments and get to this X$KTFBUE for each of them. At that time, I had quickly created a view on the internal dictionary tables, forcing the plan to start with X$KTFBUE through a materialized CTE, to replace DBA_EXTENTS.
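
For reference, this is the kind of lookup that becomes expensive on a large database and that the custom view is meant to speed up (placeholder file and block numbers, run as a DBA user):

sqlplus -s / as sysdba <<'SQL'
-- which segment owns block 123456 of datafile 42? (placeholder values)
select owner, segment_name, segment_type
from   dba_extents
where  file_id = 42
and    123456 between block_id and block_id + blocks - 1;
SQL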

Oracle Cloud: upload large files through the Object Store REST API

In the previous post I used a simple oci-curl() function as a command line interface to the Oracle Cloud Infrastructure, without installing any client tool or language. It was easy for simple things such as starting and stopping services. But it can also be more powerful, because it is simply a wrapper that calls the OCI REST API, simplifying the sign-in and authentication but allowing you to run any GET, POST, PUT or DELETE method.
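
For example (hypothetical namespace, bucket, region and file name; the oci-curl() argument order - host, method, body file, request target - is assumed from the OCI request-signing sample, not from this post), a single-request upload to the Object Storage API looks like:

# simple PUT of a local file as an object; the post then moves on to the
# multipart upload calls needed for very large files
oci-curl objectstorage.us-ashburn-1.oraclecloud.com put ./dump.dmp \
  "/n/mynamespace/b/mybucket/o/dump.dmp"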

Oracle Cloud: start/stop automatically the Autonomous Databases

In the previous post I've set up the environment to be able to easily control the OCI services, without bothering with the sign-in headers and without installing anything. In this post I'll use the oci-curl() function to stop all my Autonomous Database services. In the previous post, I've set the environment variables for the private and public key, and the user, tenant and compartment OCIDs.
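
As a sketch for a single database (placeholder OCID and region, and the same assumption about the oci-curl() argument order as above), the stop action is a POST with an empty body:

# placeholder OCID; the real one comes from the list of Autonomous Databases
ADB_OCID=ocid1.autonomousdatabase.oc1..aaaa
oci-curl database.us-ashburn-1.oraclecloud.com post /dev/null \
  "/20160918/autonomousDatabases/$ADB_OCID/actions/stop"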

Oracle Cloud Infrastructure API Keys and OCID

As you may have read in the news, CERN is testing some Oracle Cloud services. When a large organisation is using the Cloud Credits, there's a need to control the service resources. This requires automation, and the GUI of the Cloud portal is not sufficient for that. We can control the Oracle Cloud Infrastructure through the REST API, the OCI CLI, and the OCI SDKs, and all those methods require an RSA key for sign-in and some OCIDs (Oracle Cloud Identifiers) to identify the user, the tenant, the compartment, the service,...
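
The key pair itself is generated with openssl, as in the OCI documentation (the ~/.oci paths are just a convention):

# generate the API signing key pair
mkdir -p ~/.oci
openssl genrsa -out ~/.oci/oci_api_key.pem 2048
chmod 600 ~/.oci/oci_api_key.pem
openssl rsa -pubout -in ~/.oci/oci_api_key.pem -out ~/.oci/oci_api_key_public.pem
# fingerprint to compare with the one shown after uploading the public key
openssl rsa -pubout -outform DER -in ~/.oci/oci_api_key.pem | openssl md5 -c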

Optimizer Statistics Gathering - pending and history

How do you manage it when you need to gather statistics on some tables in a critical environment? Some queries are too long because of stale statistics, but other queries on the same tables are fine. You cannot leave the initial problem unfixed. Adding hints or SQL Profiles for the identified queries is not the right solution when you have identified that stale statistics are the problem. But you want to minimize the risk of regression on the other queries.
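
A minimal sketch of the pending-statistics approach (hypothetical BIG_TABLE and connection; the DBMS_STATS calls are the documented ones):

sqlplus -s demo/demo@//localhost/PDB1 <<'SQL'
-- gather into pending statistics, which the optimizer ignores by default
exec dbms_stats.set_table_prefs(user,'BIG_TABLE','PUBLISH','FALSE')
exec dbms_stats.gather_table_stats(user,'BIG_TABLE')
-- test the critical queries with the pending statistics, in this session only
alter session set optimizer_use_pending_statistics = true;
-- when satisfied, publish them; statistics history allows going back if needed
exec dbms_stats.publish_pending_stats(user,'BIG_TABLE')
-- exec dbms_stats.restore_table_stats(user,'BIG_TABLE',systimestamp - interval '1' day)
SQL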

Hadoop performance troubleshooting with stack tracing, an introduction.

Topic: This post is about profiling and performance tuning of distributed workloads, in particular Hadoop applications. You will learn about a profiler application we have developed and how it has been successfully applied to tuning Sqoop to improve the throughput of data transfer from Oracle to Hadoop.


XFS on RHEL6 for Oracle - solving issue with direct I/O

Recently we were refreshing our recovery system infrastructure, moving automatic recoveries to new servers with a big bunch of disks directly attached to each of them. Everything went fine until we started to run recoveries: they were much slower than before, even though they were running on more powerful hardware. We started investigating and found some misconfigurations, but after correcting them the performance gain was still too small.

How to create your own Oracle database merge patch

A little bit scary title, isn't it? Please keep in mind that this is definitely neither a supported nor an advised method to solve your problems, and you should be really careful while doing it - hopefully not on a production environment. But it may sometimes happen that you end up in a situation where creating your own merge patch for the Oracle database is not as crazy an idea as it sounds :).
