Databases at CERN blog

A Summer at CERN: Evaluating OpenStack Trove as DBaaS Solution

I was one of the 23 students participating in the CERN openlab summer student programme this year. Like two of my fellow students in the database group, Sneha and Anti, have already done, I want to share some insights into the project I worked on and into my experience with the summer programme in general. The post is therefore divided into a general part and a technical part, which sums up what I did with OpenStack and its Trove component.

Importance of testing your backup strategy

Most of you surely know that the ability to restore data in case of failure is a primary skill for every DBA. You should always be able to restore and recover the data you are responsible for. This is an axiom. To be sure you can do it, you should test it on a regular basis. There is, of course, the possibility of using some Oracle features, like backup ... validate or restore ...
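As a hedged sketch only (it assumes an RMAN session already connected to the target database, and existing backups to check), the two features mentioned above can be exercised like this:

```sql
-- Check that existing backup pieces are readable and free of
-- corruption, without restoring anything:
BACKUP VALIDATE DATABASE ARCHIVELOG ALL;

-- Simulate a restore: RMAN reads the backups it would use to
-- restore the database and reports any problems it finds:
RESTORE DATABASE VALIDATE;
```

Neither command touches the live datafiles, so both can be scheduled regularly as a cheap sanity check of the backup strategy.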

Indexes in Oracle DB part 2

In my first post about indexes I promised that more on this topic would follow, and here it is... This series of articles is based on observations of how developers fail to correctly implement indexing in their Oracle-based applications, and it aims to provide guidelines on how indexes should be used. Today, let's focus on index scans. Understanding how they work might be very helpful in planning your indexing strategy!
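As a minimal, hypothetical illustration (the `employees` table and `emp_dept_idx` index are invented for this example), this is the kind of setup where an index range scan shows up in the execution plan:

```sql
-- Hypothetical table and index for the examples in this series:
CREATE TABLE employees (
  emp_id  NUMBER PRIMARY KEY,
  dept_id NUMBER,
  name    VARCHAR2(100)
);
CREATE INDEX emp_dept_idx ON employees (dept_id);

-- A selective predicate on the leading index column typically
-- produces an INDEX RANGE SCAN in the plan:
EXPLAIN PLAN FOR
  SELECT name FROM employees WHERE dept_id = 10;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```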

Backups in Data Guard environment

Physical standby databases seem to be ideal candidates for offloading backups from primary databases. Instead of "wasting" the standby's resources (unless you're already using Active Data Guard, for example), you could avoid affecting primary performance while backing up your database, especially if your storage is under heavy load even during normal (user- or application-generated) workload. So, if you're looking for good reasons to convince your boss/finance department/etc.

My experience testing the Oracle In-Memory Column Store

A new Oracle patch set has been released, and with it comes an important new feature: the In-Memory option. CERN has been involved in testing this feature since an early stage, so I'd like to take the occasion to share my experience with you!

What is it?

It is a new static pool in the System Global Area (SGA) that keeps a copy of the data in memory, stored in columnar format.
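A minimal sketch of how the column store is enabled, assuming a hypothetical `sales` table and a purely illustrative pool size:

```sql
-- Size the In-Memory column store (takes effect after a restart;
-- 4G is just an example value):
ALTER SYSTEM SET INMEMORY_SIZE = 4G SCOPE = SPFILE;

-- Mark a table for population into the column store:
ALTER TABLE sales INMEMORY PRIORITY HIGH;

-- Check what has actually been populated:
SELECT segment_name, populate_status, bytes_not_populated
FROM   v$im_segments;
```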

How to verify that the archived log deletion policy is correctly applied?

What is the best way to handle archived log deletion in environments with standby and downstream capture databases? One could use one's own scripts, for example to delete all backed-up archived logs older than n days. But a better way is to set an RMAN archived log deletion policy, because additional options can then be specified: archived logs can be deleted not only once they are backed up n times, but also once they are applied to or shipped to the other databases in the environment.
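A sketch of such a policy, assuming tape (SBT) backups and standby databases in the configuration, run in an RMAN session on the primary:

```sql
-- Keep archived logs until they have been applied on all standby
-- databases AND backed up once to tape:
CONFIGURE ARCHIVELOG DELETION POLICY
  TO APPLIED ON ALL STANDBY
  BACKED UP 1 TIMES TO DEVICE TYPE SBT;

-- Delete everything that now satisfies the policy:
DELETE ARCHIVELOG ALL;
```

With the policy in place, deletions driven by the flash recovery area or by `DELETE ARCHIVELOG` respect the same rules, so no custom age-based scripts are needed.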

Avoid committing database passwords to your version control system

Having the datasource password in my version control system is an issue that has chased me since the beginning of time. It is a classic: you keep postponing it during development until somebody from the security team comes to your office and asks "what the @#$ are these passwords doing in the svn/git???" To avoid this embarrassing situation you have different choices:
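One of those choices is to keep only a placeholder template in the repository and inject the real password at deployment time. A minimal sketch, assuming hypothetical file names (`db.properties`, `db.properties.template`) and a `DB_PASSWORD` environment variable set by the deployment, never by the repo:

```shell
# 1. Make sure the real credentials file can never be committed:
echo "db.properties" >> .gitignore

# 2. Commit only a template with a placeholder instead:
cat > db.properties.template <<'EOF'
db.url=jdbc:oracle:thin:@//dbhost:1521/service
db.user=app_user
db.password=CHANGE_ME
EOF

# 3. At deployment time, fill in the placeholder from an
#    environment variable that never enters svn/git
#    (falling back to a dummy value here, for illustration only):
DB_PASSWORD="${DB_PASSWORD:-example-password}"
sed "s/CHANGE_ME/${DB_PASSWORD}/" db.properties.template > db.properties
```

The generated `db.properties` exists only on the deployed machine, while the template in version control carries no secret.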
