Most of you surely know that the ability to restore data in case of failure is a primary skill for every DBA. You should always be able to restore and recover the data you are responsible for. This is an axiom. To be sure that you can do it, you should test it on a regular basis. There is, of course, the possibility of using some Oracle features, like backup ... validate or restore ...
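As a minimal sketch, the validation commands referred to look roughly like this in RMAN (no data is actually restored; the backups are only read and checked):

```sql
-- Check datafiles and archived logs for corruption without taking a backup
RMAN> BACKUP VALIDATE DATABASE ARCHIVELOG ALL;

-- Verify that existing backups are usable for a restore, without restoring
RMAN> RESTORE DATABASE VALIDATE;
```

These checks catch physical corruption and missing backup pieces, but they are not a substitute for periodically performing a full test restore on a separate host.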
This is a short blog post about the sessions being given by CERN people at the Oracle OpenWorld 2014 and JavaOne 2014 conferences:
Topic: This post is about using SystemTap for investigating and troubleshooting Oracle RDBMS. In particular you will learn how to probe Oracle processes and their userspace functions. These techniques aim to be useful, as well as fun to learn, for those keen on peeking under the hood of the technology and improving their effectiveness in troubleshooting and performance investigations.
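As an illustrative sketch of the technique, a SystemTap userspace probe on the oracle binary can look like this (the ORACLE_HOME path is hypothetical, and opiexe is used here only as an example of a well-known Oracle kernel function):

```systemtap
# Hedged sketch: fire whenever an Oracle server process enters opiexe,
# a top-level SQL execution function. Adjust the binary path to your
# ORACLE_HOME before running with: stap -v script.stp
probe process("/u01/app/oracle/product/12.1.0/dbhome_1/bin/oracle").function("opiexe") {
    printf("%s pid=%d entered opiexe\n", execname(), pid())
}
```

Userspace probing like this requires uprobes support in the kernel and debug-capable SystemTap; the full post covers how to find interesting functions to probe.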
In my first post about indexes I promised that more on this topic would follow, and here it is... This series of articles is based on observations of how developers fail to correctly implement indexing in their Oracle-based applications, and it aims to provide guidelines on how indexes should be used. Today let's focus on index scans. Understanding how they work can be very helpful in planning an indexing strategy!
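As a quick sketch of how to see which scan the optimizer picks (the table, column, and index names below are made up for illustration):

```sql
-- Hypothetical schema: an index on a selective column
CREATE INDEX emp_deptno_idx ON emp (deptno);

EXPLAIN PLAN FOR
  SELECT ename FROM emp WHERE deptno = 10;

-- The plan output typically shows an INDEX RANGE SCAN on emp_deptno_idx
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);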
This summer I was selected for an internship at CERN, and it was a summer beyond imagination! I am here to share my wonderful experience with you all and shed some light on its most significant part: my project, developing the web interface for a physics analysis database, SQLPlotter.
Physical standby databases seem to be ideal candidates for offloading backups from primary ones. Instead of "wasting" resources (unless you're already using Active Data Guard, for example), you could avoid affecting the primary's performance while backing up your database, especially if your storage is under heavy load even during normal (user- or application-generated) workload. So, if you're looking for good reasons to convince your boss/finance department/etc.
Reaching the end of my internship as an Openlab student at CERN, I think it's appropriate to write a summary of my project on our blog. I will try to give a good overview of the past weeks.
Topic: Counting the number of distinct values (NDV) for a table column has important applications in the database domain, ranging from query optimization to optimizing reports for large data warehouses. However, the legacy SQL method of using SELECT COUNT(DISTINCT <COL>) can be very slow. This is a well-known problem, and Oracle 12.1.0.2 provides a new function, APPROX_COUNT_DISTINCT, implemented with a new-generation algorithm that addresses this issue by providing fast and scalable cardinality estimates.
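As a side-by-side sketch (the table and column names are illustrative), the two approaches look like this:

```sql
-- Exact NDV: accurate, but can be slow and memory-hungry on large tables
SELECT COUNT(DISTINCT cust_id) FROM sales;

-- Approximate NDV: fast, scalable estimate
SELECT APPROX_COUNT_DISTINCT(cust_id) FROM sales;
```

The approximate version trades a small, bounded estimation error for a large reduction in runtime and resource usage, which is usually an acceptable trade-off for reporting and optimizer statistics.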
Oracle patch set 12.1.0.2 has been released, and with it comes an important new feature: the In-Memory option. CERN has been involved in testing this feature since an early stage, so I'd like to take the occasion to share my experience with you!
What is it?
It is a new static pool in the System Global Area that keeps a copy of the data in an In-Memory columnar format:
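As a minimal sketch of how the option is enabled (the pool size and table name are illustrative):

```sql
-- Reserve space in the SGA for the In-Memory column store
-- (static pool: an instance restart is required for the change to take effect)
ALTER SYSTEM SET inmemory_size = 2G SCOPE=SPFILE;

-- Mark a table for population into the column store
ALTER TABLE sales INMEMORY;
```

Row data on disk is untouched; the columnar copy lives only in this pool and is rebuilt on instance startup or first access, depending on the priority settings.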
What is the best way to handle archived log deletion in environments with standby and downstream capture databases? One could use one's own scripts to delete, for example, all backed-up archived logs older than n days. But a better way is to set the RMAN archived log deletion policy, because additional options can then be specified to delete only archived logs that have not only been backed up n times, but also applied to or shipped to the other databases in the environment.
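A sketch of such a policy in RMAN (the backup count is illustrative) combines both conditions, so a log becomes deletable only once every standby has applied it and enough backup copies exist:

```sql
-- Archived logs are eligible for deletion only after they have been
-- applied on all standby databases AND backed up twice to disk
RMAN> CONFIGURE ARCHIVELOG DELETION POLICY
        TO APPLIED ON ALL STANDBY BACKED UP 2 TIMES TO DISK;
```

With this in place, DELETE ARCHIVELOG and the fast recovery area's automatic space management both honor the policy, so no hand-written cleanup scripts are needed.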