A Performance Dashboard for Apache Spark
Posted by Luca Canali on Tuesday, 12 February 2019
Topic: This post dives into the steps for deploying and using a performance dashboard for Apache Spark, us…
Usually, when you are developing a new feature or fixing an issue, you want to focus on your business logic. If your application delegates authentication to some SSO system, you usually mock its responses. However, for integration tests it is nice to be able to test your application against the full SSO cycle, especially if you have to use things like the SAML2 Web Profile.
Virtual Private Database (VPD) is an Enterprise Edition security feature. It restricts the scope of Data Manipulation Language statements to a subset of the table rows by transparently adding a where clause before executing them. It is also called Row-Level Security (RLS). Where a policy is enabled, it is like having the selected DML (SELECT, INSERT, UPDATE, DELETE) operate on a transient view. And the predicates for this view can be dynamic and even query tables that the user cannot see.
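As a rough illustration of what such a policy can look like (not taken from the post; the APP_OWNER schema, EMP table and DEPTNO column are made-up names), a predicate function is registered with DBMS_RLS and its return value is transparently appended as a where clause to the listed DML:

  -- Hypothetical policy function: returns the predicate added to DML on EMP.
  create or replace function app_owner.emp_dept_predicate(
    p_schema in varchar2, p_object in varchar2
  ) return varchar2 as
  begin
    -- Only rows of the department stored in the session identifier are visible.
    return 'deptno = sys_context(''userenv'',''client_identifier'')';
  end;
  /

  -- Register the policy: the predicate applies to the listed statement types.
  begin
    dbms_rls.add_policy(
      object_schema   => 'APP_OWNER',
      object_name     => 'EMP',
      policy_name     => 'EMP_DEPT_POLICY',
      function_schema => 'APP_OWNER',
      policy_function => 'EMP_DEPT_PREDICATE',
      statement_types => 'SELECT,INSERT,UPDATE,DELETE'
    );
  end;
  /

Once the policy is added, a plain select * from app_owner.emp returns only the rows matching the generated predicate, without the user seeing (or being able to bypass) the restriction.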
The Oracle Database software is large: several gigabytes in the Oracle Home for the part that is deployed on the operating system, and additional megabytes in the SYSTEM tablespace for the part that is deployed as stored procedures (mainly the dbms_% packages). This is not a problem with the traditional deployment methods, where you can have a .zip golden image of the Oracle Home and a database template to start a new DB. But this monolithic approach is not adapted to the way people want to deploy software today:
In this post, I will be talking about my openlab project and internship experience.
My project was about evaluating, comparing and testing Oracle Ksplice with Red Hat Kpatch.
Do you have tables with a column storing filenames? Long filenames with the full path? If so, then you have probably realized how large an index on this column can be. And when looking at the values sorted, you have seen its inefficiency: a big part of the full name is repeated because files in the same (sub)directory share the same prefix. The 12cR2 Advanced Index Compression (COMPRESS ADVANCED LOW) does not help here because it only compresses identical values, like the basic compression of tables. With unique filenames, we cannot expect any benefit.
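To make the situation concrete (the FILES table and FULL_PATH column below are invented for illustration, not taken from the post), this is the kind of index the excerpt talks about, and why LOW compression cannot shrink it:

  create table files (
    file_id   number generated always as identity primary key,
    full_path varchar2(4000) not null  -- e.g. '/data/app/logs/2019/02/12/server.log'
  );

  -- COMPRESS ADVANCED LOW only deduplicates identical key values within a block,
  -- so with unique full paths it brings no benefit: every key still carries its
  -- long, mostly-repeated directory prefix.
  create index files_path_idx on files ( full_path ) compress advanced low;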
There are not many ways to access table rows efficiently. Either you want a lot of them, because your predicate is not very selective, and you read the whole table as fast as you can: this is the Table Full Scan. Or you use a structure that gives you access to just the subset of rows you need. There are mostly two structures for that: sort and hash.
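As a sketch of those two structures (the ORDERS tables and columns are made up for illustration), a B*Tree index is the sorted one and a hash cluster is the hashed one:

  -- Sorted structure: a B*Tree index keeps the keys in order,
  -- which also makes range predicates efficient.
  create table orders ( order_id number primary key, customer_id number, amount number );
  create index orders_cust_idx on orders ( customer_id );

  -- Hash structure: rows are stored where hash(order_id) points to,
  -- giving direct access for equality predicates only.
  create cluster orders_hash ( order_id number ) size 512 hashkeys 100000;
  create table orders_hashed (
    order_id number primary key, customer_id number, amount number
  ) cluster orders_hash ( order_id );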
This post will walk through the different possibilities for securing, with SSL, the JMX connection between a remote server and a local monitoring client.
It was announced at Oracle Open World 2017, and here it is just before the start of OOW18: an RPM to install the Oracle Database software.
On the Oracle Database 18c download page there are two files for 18.3. One is a zip of the Oracle Home, which we have to unzip before running the setup (named runInstaller, but different from the one we had in pre-18c releases). The other file is an RPM: oracle-database-ee-18c-1.0-1.x86_64.rpm
Here is my #ThanksODC post. A long one... There's a point that should always be a major topic for database developer community discussions: where to run the procedural code. The access to data is in the database, for sure, and the language for it is SQL. But very often, the business logic of a transaction cannot be executed in one single SQL statement, for example because it is too complex and requires a procedural language.
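A tiny, made-up example of what "cannot be executed in one single SQL statement" means in practice (the ACCOUNTS table and TRANSFER_FUNDS procedure are invented for illustration), here written as a PL/SQL procedure running inside the database:

  create table accounts ( account_id number primary key, balance number not null );

  create or replace procedure transfer_funds(
    p_from   in accounts.account_id%type,
    p_to     in accounts.account_id%type,
    p_amount in number
  ) as
  begin
    -- A business rule plus two DML statements forming one transaction:
    -- this kind of logic does not fit into a single SQL statement.
    if p_amount <= 0 then
      raise_application_error(-20001, 'amount must be positive');
    end if;
    update accounts set balance = balance - p_amount where account_id = p_from;
    update accounts set balance = balance + p_amount where account_id = p_to;
  end;
  /

The same logic could just as well live in the application tier; the question raised by the post is precisely where this kind of code should run.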