TL;DR: Apache Spark 3.0 comes with many improvements, including new features for memory monitoring.
I've already mentioned on this blog the very useful Consolidated Database Replay feature, for example while testing the performance impact of unified auditing (http://db-blog.web.cern.ch/blog/szymon-skorupinski/2014-06-unified-auditing-performance) or while investigating problems with a hanging workload capture (http://db-b
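For illustration, here is a minimal PL/SQL sketch of how a consolidated replay could be wired together with DBMS_WORKLOAD_REPLAY; the directory object, schedule and capture subdirectory names below are hypothetical placeholders, and the wrc replay clients still have to be started separately:

DECLARE
  l_cap_id NUMBER;
BEGIN
  -- Point to the directory object holding the preprocessed captures
  DBMS_WORKLOAD_REPLAY.SET_REPLAY_DIRECTORY(replay_dir => 'CONS_REPLAY_DIR');
  -- Build a schedule combining captures from several source databases
  DBMS_WORKLOAD_REPLAY.BEGIN_REPLAY_SCHEDULE(schedule_name => 'SCHED1');
  l_cap_id := DBMS_WORKLOAD_REPLAY.ADD_CAPTURE(capture_dir_name => 'CAPTURE_DB1');
  l_cap_id := DBMS_WORKLOAD_REPLAY.ADD_CAPTURE(capture_dir_name => 'CAPTURE_DB2');
  DBMS_WORKLOAD_REPLAY.END_REPLAY_SCHEDULE;
  -- Initialize, prepare and start the consolidated replay
  DBMS_WORKLOAD_REPLAY.INITIALIZE_CONSOLIDATED_REPLAY(
    replay_name   => 'CONS_REPLAY1',
    schedule_name => 'SCHED1');
  DBMS_WORKLOAD_REPLAY.PREPARE_CONSOLIDATED_REPLAY;
  DBMS_WORKLOAD_REPLAY.START_CONSOLIDATED_REPLAY;
END;
/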
Topic: This post provides a short summary and pointers to previous work on Extended Stack Profiling for troubleshooting and performance investigations.
Recently we were refreshing our recovery system infrastructure, moving automatic recoveries to new servers, each with a big bunch of disks directly attached. Everything went fine until we started to run recoveries - they were much slower than before, even though they were running on more powerful hardware. We started investigating and found some misconfigurations, but after correcting them, the performance gain was still too small.
A little bit of a scary title, isn't it? Please keep in mind that this is definitely neither a supported nor an advised method of solving your problems, and you should be really careful while doing it - hopefully not on a production environment. But it may sometimes happen that you end up in a situation where creating your own merge patch for the Oracle database is not as crazy an idea as it sounds :).
Regular readers of our blog probably already know that for most of our databases we're using two storage layers to keep our backups - NAS volumes as the primary layer and tapes as the secondary one - please check "Datafile without backups - how to restore?" for more details. If you read another post "Importance of testing yours ba
Oracle Managed Files (OMF) have many advantages, but the fact that such files can coexist in the same database with manually added (and named) ones can sometimes lead to confusion. The situation is made worse by the fact that there is no straightforward way (at least none I'm aware of... or rather was aware of - please check the comment from Mikhail Velikikh) to tell whether a file is Oracle managed or not. The Oracle documentation seems to confirm this:
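As a rough illustration only (a heuristic, not an official API): OMF datafiles follow the o1_mf_%_.dbf naming convention, so pattern-matching the names in v$datafile gives at least a first guess:

-- Heuristic sketch: classify datafiles by whether their names
-- match the usual OMF pattern o1_mf_<tablespace>_<unique>_.dbf
SELECT name,
       CASE
         WHEN REGEXP_LIKE(name, 'o1_mf_.+_\.dbf$')
         THEN 'looks Oracle managed'
         ELSE 'looks user managed'
       END AS omf_guess
  FROM v$datafile;

Of course this remains a guess - a manually created file can be named to look like OMF and vice versa, which is exactly the kind of confusion described above.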
I've already described how important it is to test your backup strategy and restore/recovery procedures, but while doing so, you can of course encounter some problems not really related to recoverability as such. Recently, we hit such a problem on our recovery server, at the very beginning of an automatic restore (database name masked):
If you plan to introduce changes in your environment and want to estimate their impact, the Real Application Testing feature seems to be one of the best options. As we needed to check the influence of changes planned in our databases, I started to look for good candidates for capturing workloads. I wanted to capture only the workloads associated with a small number of schemas, but from several databases, to be able to properly simulate as many types of the production workloads existing in our databases as possible.
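A minimal PL/SQL sketch of such a schema-filtered capture with DBMS_WORKLOAD_CAPTURE (the filter, capture and directory object names below are hypothetical):

BEGIN
  -- Define filters for the schemas of interest
  DBMS_WORKLOAD_CAPTURE.ADD_FILTER(
    fname      => 'APP_USER1_ONLY',
    fattribute => 'USER',
    fvalue     => 'APP_USER1');
  DBMS_WORKLOAD_CAPTURE.ADD_FILTER(
    fname      => 'APP_USER2_ONLY',
    fattribute => 'USER',
    fvalue     => 'APP_USER2');
  -- With default_action => 'EXCLUDE', the filters above act as
  -- inclusion filters, so only these schemas end up in the capture
  DBMS_WORKLOAD_CAPTURE.START_CAPTURE(
    name           => 'SCHEMA_CAPTURE1',
    dir            => 'CAPTURE_DIR',
    default_action => 'EXCLUDE');
END;
/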