Oracle

ODC Appreciation Day: Reduce CPU usage by running the business logic in the Oracle Database

Here is my #ThanksODC post. A long one... There is a point that should always be a major topic for database developer community discussions: where to run the procedural code. Data access happens in the database, for sure, and the language for it is SQL. But very often, the business logic of a transaction cannot be executed in a single SQL statement, for example because it is too complex and requires a procedural language.

Unindexed Foreign Keys in Oracle and PostgreSQL

In Oracle, we need an index on the foreign key column as soon as we intend to delete rows from the parent table, or a locking situation may block all transactions around the child table. PostgreSQL manages isolation in a similar way, with MVCC, so do you think you also need to index the foreign keys there? Here is a test confirming that PostgreSQL does not need to lock the table even without an index on the foreign key.
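
As a hedged illustration of the Oracle side (table and index names are hypothetical), this is the kind of schema and index the post is about:

    -- Without an index on CHILD(PARENT_ID), Oracle needs a table-level lock
    -- on CHILD while deleting parent rows, which can block other transactions.
    CREATE TABLE parent (id NUMBER PRIMARY KEY);
    CREATE TABLE child  (id NUMBER PRIMARY KEY,
                         parent_id NUMBER NOT NULL REFERENCES parent (id));

    -- Indexing the foreign key column avoids that table-level lock.
    CREATE INDEX child_parent_id_ix ON child (parent_id);

    DELETE FROM parent WHERE id = 1;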

Efficiently query DBA_EXTENTS for FILE_ID / BLOCK_ID

Did you ever try to query DBA_EXTENTS on a very large database with LMT tablespaces? I had to, in the past, in order to find which segment a corrupt block belonged to. The information about extent allocation is stored in the datafile headers, visible through X$KTFBUE, and queries on it can be very expensive. In addition to that, the optimizer tends to start with the segments and probe X$KTFBUE for each of them. At that time, I quickly created a view on the internal dictionary tables, forcing the query to start with X$KTFBUE through a materialized CTE, as a replacement for DBA_EXTENTS.
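
For context, this is the kind of lookup involved, with :file_id and :block_id as bind variables for the corrupt block (a minimal sketch; on a very large database it is exactly this query that becomes too expensive against DBA_EXTENTS):

    -- Which segment owns block :block_id of file :file_id?
    SELECT owner, segment_name, segment_type
      FROM dba_extents
     WHERE file_id = :file_id
       AND :block_id BETWEEN block_id AND block_id + blocks - 1;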

Oracle Cloud: upload large files through the Object Store REST API

In the previous post I used a simple oci-curl() function as a Command Line Interface to the Oracle Cloud Infrastructure, without installing any client tool or language. It was easy for simple things such as starting and stopping services. But it can also be more powerful, because it is simply a wrapper around the OCI REST API: it simplifies the sign-in and authentication while still allowing you to run any GET, POST, PUT and DELETE method.

Oracle Cloud: start/stop automatically the Autonomous Databases

In the previous post I set up everything needed to easily control the OCI services without bothering with the sign-in headers, and without installing anything. In this post I'll use the oci-curl() function to stop all my Autonomous Database services. In that previous post, I set the environment variables for the private and public keys, and for the user, tenant and compartment OCIDs.

Oracle Cloud Infrastructure API Keys and OCID

As you may have read in the news, CERN is testing some Oracle Cloud services. When a large organisation is using the Cloud Credits, there's a need to control the service resources. This requires automation, and the GUI of the Cloud portal is not sufficient for that. We can control the Oracle Cloud Infrastructure through the REST API, the OCI CLI, and the OCI SDKs, and all those methods require an RSA key for sign-in and some OCIDs (Oracle Cloud Identifiers) to identify the user, the tenant, the compartment, the service,...

Optimizer Statistics Gathering - pending and history

How do you manage gathering statistics on some tables in a critical environment? Some queries take too long because of stale statistics, but other queries on the same tables are fine. You cannot leave the initial problem unfixed. Adding hints or SQL Profiles for the identified queries is not the right solution when you have identified that stale statistics are the problem. But you want to reduce the risk of regression on the other queries as much as possible.
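
A minimal sketch of the pending-statistics approach the title refers to (SCOTT.EMP is a hypothetical table), using the DBMS_STATS calls involved:

    -- Gather without publishing: the optimizer keeps using the current statistics.
    EXEC DBMS_STATS.SET_TABLE_PREFS('SCOTT','EMP','PUBLISH','FALSE');
    EXEC DBMS_STATS.GATHER_TABLE_STATS('SCOTT','EMP');
    -- Try the new statistics in one session only.
    ALTER SESSION SET optimizer_use_pending_statistics = TRUE;
    -- Publish them when satisfied...
    EXEC DBMS_STATS.PUBLISH_PENDING_STATS('SCOTT','EMP');
    -- ...or go back to the statistics history in case of regression.
    EXEC DBMS_STATS.RESTORE_TABLE_STATS('SCOTT','EMP',SYSTIMESTAMP - INTERVAL '1' DAY);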

Hadoop performance troubleshooting with stack tracing, an introduction.

Topic: This post is about profiling and performance tuning of distributed workloads, and in particular Hadoop applications. You will learn about a profiler application we have developed and how it has successfully been applied to tuning Sqoop, improving the throughput of data transfers from Oracle to Hadoop.

XFS on RHEL6 for Oracle - solving an issue with direct I/O

Recently we were refreshing our recovery system infrastructure, moving automatic recoveries to new servers with a big bunch of disks directly attached to each of them. Everything went fine until we started to run recoveries - they were much slower than before, even though they were running on more powerful hardware. We started an investigation and found some misconfigurations, but after correcting them the performance gain was still too small.

How to create your own Oracle database merge patch

A slightly scary title, isn't it? Please keep in mind that this is definitely neither a supported nor an advised method of solving your problems, and you should be really careful while doing it - hopefully not on a production environment. But it may sometimes happen that you end up in a situation where creating your own merge patch for the Oracle database is not as crazy an idea as it sounds :).

Nuances of Oracle Managed Files (OMF) and RMAN

Oracle Managed Files (OMF) have many advantages, but the fact that such files can coexist in the same database with manually added (and named) ones can sometimes lead to confusion. The situation is made worse by the fact that there is no straightforward way (at least none I'm aware of... or rather was - please check the comment from Mikhail Velikikh) to tell whether a file is Oracle managed or not. The Oracle documentation seems to confirm this:

Potential of "import catalog" command

Since version 11.1 of the Oracle database, there is a very useful command available that allows DBAs to easily move RMAN recovery catalog schemas between databases. Its functionality is even broader, as it also allows one catalog schema to be merged into another - either the whole schema or just the metadata of chosen databases. The command I'm writing about is, of course, import catalog, which I had a chance to use recently to move our recovery catalog to a new database.

Datafile without backups - how to restore?

Have you ever had to restore a datafile without any backup available? It's easy, of course, if you have all the archived logs from the time the datafile was created. Please check it here: Re-Creating Data Files When Backups Are Unavailable. Moreover, RMAN is clever enough to create the empty datafile automatically during the restore phase and then recover it using the archived logs. So far, so good, but...
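
As a rough sketch of the manual method from that documentation link (file number and path are hypothetical), in SQL*Plus it boils down to:

    -- Re-create the datafile empty, then roll it forward from archived logs.
    ALTER DATABASE CREATE DATAFILE 5 AS '/u01/oradata/DB/users02.dbf';
    RECOVER DATAFILE 5;
    ALTER DATABASE DATAFILE 5 ONLINE;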
