Databases at CERN blog

Performance comparison of different file formats and storage engines in the Hadoop ecosystem

This post reports on performance tests of a few popular data formats and storage engines available in the Hadoop ecosystem: Apache Avro, Apache Parquet, Apache HBase and Apache Kudu. The exercise evaluates space efficiency, ingestion performance, analytic scans and random data lookups for a workload of interest to the CERN Hadoop service.
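
To give a flavour of what such a test involves, here is a minimal sketch of an ingestion-and-scan micro-benchmark for one of the formats, written against the Avro Java API. The two-column record, row count and file name are illustrative assumptions rather than the post's actual workload, and running it requires the avro and snappy-java libraries on the classpath.

```java
import java.io.File;

import org.apache.avro.Schema;
import org.apache.avro.file.CodecFactory;
import org.apache.avro.file.DataFileReader;
import org.apache.avro.file.DataFileWriter;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;

public class AvroMicroBench {

    // Illustrative two-column record; a real test schema would be wider.
    private static final String SCHEMA_JSON =
        "{\"type\":\"record\",\"name\":\"Rec\",\"fields\":["
      + "{\"name\":\"id\",\"type\":\"long\"},"
      + "{\"name\":\"payload\",\"type\":\"string\"}]}";

    public static void main(String[] args) throws Exception {
        Schema schema = new Schema.Parser().parse(SCHEMA_JSON);
        File file = new File("bench.avro");

        // Ingestion: append rows one by one, then report time and file size.
        long start = System.nanoTime();
        try (DataFileWriter<GenericRecord> writer =
                 new DataFileWriter<>(new GenericDatumWriter<GenericRecord>(schema))) {
            writer.setCodec(CodecFactory.snappyCodec()); // codec trades CPU for space
            writer.create(schema, file);
            for (long i = 0; i < 1_000_000L; i++) {
                GenericRecord rec = new GenericData.Record(schema);
                rec.put("id", i);
                rec.put("payload", "row-" + i);
                writer.append(rec);
            }
        }
        System.out.printf("write: %d ms, size: %d bytes%n",
                          (System.nanoTime() - start) / 1_000_000, file.length());

        // Analytic scan: iterate over every record and report throughput.
        start = System.nanoTime();
        long count = 0;
        try (DataFileReader<GenericRecord> reader =
                 new DataFileReader<>(file, new GenericDatumReader<GenericRecord>())) {
            while (reader.hasNext()) {
                reader.next();
                count++;
            }
        }
        System.out.printf("scan: %d records in %d ms%n",
                          count, (System.nanoTime() - start) / 1_000_000);
    }
}
```

Swapping the codec line (for example CodecFactory.deflateCodec or CodecFactory.nullCodec) is the main knob that trades ingestion CPU for storage space.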

Distributed Deep Learning with Apache Spark and Keras

In this series of blog posts we study the topic of Distributed Deep Learning, or more precisely, how to parallelize gradient descent using data-parallel methods. We start by laying out the theory and supplying some intuition for the techniques we applied. At the end, we conduct experiments to evaluate how different optimization schemes perform under identical conditions.
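
The series itself builds on Keras and Spark; the plain-Java toy below illustrates only the synchronous data-parallel scheme: each worker computes the gradient of the loss on its own shard of the data, the partial gradients are averaged, and the shared model is updated once per step. The linear model, learning rate and worker count are illustrative assumptions.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

/** Synchronous data-parallel gradient descent on a toy linear model y = w * x. */
public class DataParallelSGD {
    public static void main(String[] args) throws Exception {
        int workers = 4, n = 10_000;
        double[] x = new double[n], y = new double[n];
        for (int i = 0; i < n; i++) { x[i] = i / (double) n; y[i] = 3.0 * x[i]; } // true w = 3

        ExecutorService pool = Executors.newFixedThreadPool(workers);
        double w = 0.0, lr = 0.5;
        int shard = n / workers;

        for (int epoch = 0; epoch < 100; epoch++) {
            final double wSnapshot = w; // every worker starts from the same model
            List<Future<Double>> grads = new ArrayList<>();
            // Each worker computes the mean MSE gradient on its own data shard.
            for (int k = 0; k < workers; k++) {
                final int lo = k * shard, hi = lo + shard;
                grads.add(pool.submit(() -> {
                    double g = 0.0;
                    for (int i = lo; i < hi; i++) {
                        g += 2.0 * (wSnapshot * x[i] - y[i]) * x[i];
                    }
                    return g / (hi - lo);
                }));
            }
            // Synchronous step: average the partial gradients, update once.
            double g = 0.0;
            for (Future<Double> f : grads) g += f.get();
            w -= lr * (g / workers);
        }
        pool.shutdown();
        System.out.println("learned w = " + w); // approaches 3.0
    }
}
```

Asynchronous schemes drop this barrier and let each worker push its update as soon as it is ready, trading gradient staleness for hardware utilization, which is the kind of trade-off the experiments at the end of the series examine.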

Darwin and Hadoop join forces to improve a face recognition algorithm

In this blog entry we introduce evolutionary algorithms and an integration between the evolutionary computation tool ECJ and Apache Hadoop. The work aims to speed up the evaluation of candidate solutions by distributing the workload across a cluster of machines. Finally, we show how this integration has been used to improve a face recognition algorithm.
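
ECJ ships its own mechanisms for this, and the post couples it with Hadoop to spread the work over a cluster; the toy sketch below shows only the underlying pattern, a generational loop whose expensive fitness evaluations are farmed out in parallel, with a local thread pool standing in for the cluster. The fitness function and all names are hypothetical.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelEA {
    /** Stand-in for an expensive evaluation, e.g. scoring a face recognition model. */
    static double fitness(double x) {
        return -(x - 5.0) * (x - 5.0); // maximum at x = 5
    }

    public static void main(String[] args) throws Exception {
        Random rnd = new Random(42);
        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<Double> pop = new ArrayList<>();
        for (int i = 0; i < 20; i++) pop.add(rnd.nextDouble() * 10);

        for (int gen = 0; gen < 50; gen++) {
            // The expensive part: evaluate every individual concurrently.
            List<Future<Double>> scores = new ArrayList<>();
            for (double ind : pop) {
                Callable<Double> task = () -> fitness(ind); // one task per individual
                scores.add(pool.submit(task));
            }

            // Keep the best individual, refill the population with mutated copies.
            double best = pop.get(0), bestScore = Double.NEGATIVE_INFINITY;
            for (int i = 0; i < pop.size(); i++) {
                double s = scores.get(i).get();
                if (s > bestScore) { bestScore = s; best = pop.get(i); }
            }
            List<Double> next = new ArrayList<>();
            next.add(best); // elitism
            while (next.size() < pop.size()) {
                next.add(best + rnd.nextGaussian() * 0.5); // Gaussian mutation
            }
            pop = next;
        }
        pool.shutdown();
        System.out.println("best individual: " + pop.get(0)); // converges towards 5.0
    }
}
```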

Custom Flume sources for ingesting data from database tables and log files

On our way to building a central repository that stores consolidated audit and log data generated by our databases, we needed to develop several components to get there. In this post we present two custom sources for Apache Flume, developed to collect data from database tables and from (alert and listener) log files. Both sources are implemented in a generic way, without any project-specific dependencies, so they can be reused by other projects; the code is publicly accessible.
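
The actual sources are in the publicly accessible code mentioned above; as a rough sketch of the shape such a component takes, here is a minimal pollable Flume source. The class name, the table property and the placeholder fetch method are hypothetical, not the code from the post.

```java
import java.nio.charset.StandardCharsets;

import org.apache.flume.Context;
import org.apache.flume.Event;
import org.apache.flume.EventDeliveryException;
import org.apache.flume.PollableSource;
import org.apache.flume.conf.Configurable;
import org.apache.flume.event.EventBuilder;
import org.apache.flume.source.AbstractSource;

/**
 * Skeleton of a pollable Flume source. A real table-reading source would
 * remember the last row it shipped and query only newer rows in process().
 */
public class TableAuditSource extends AbstractSource
        implements Configurable, PollableSource {

    private String tableName;

    @Override
    public void configure(Context context) {
        // Read settings from the agent configuration file, e.g.
        // agent.sources.audit.table = AUDIT_TRAIL
        tableName = context.getString("table", "AUDIT_TRAIL");
    }

    @Override
    public Status process() throws EventDeliveryException {
        // Placeholder fetch: a real implementation would run an incremental
        // JDBC query against 'tableName' and keep track of its position.
        String row = fetchNextRow();
        if (row == null) {
            return Status.BACKOFF;   // nothing new: tell Flume to back off
        }
        Event event = EventBuilder.withBody(row, StandardCharsets.UTF_8);
        getChannelProcessor().processEvent(event); // hand the event to the channel
        return Status.READY;
    }

    private String fetchNextRow() {
        return null; // placeholder: replace with real incremental read logic
    }

    // Back-off tuning hooks required by the PollableSource interface
    // in recent Flume releases.
    public long getBackOffSleepIncrement() { return 1000L; }
    public long getMaxBackOffSleepInterval() { return 5000L; }
}
```

An agent would then reference such a source by its fully qualified class name in the type property of its configuration.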
