Title: Understanding and Improving the Efficiency of Failure Resiliency for Big Data Frameworks

Speaker: Florin Dinu (Rice University)

Abstract: Big data processing frameworks (MapReduce, Hadoop, Dryad) are hugely popular today. A strong selling point is their ability to provide failure resilience guarantees: they can run computations to completion despite occasional failures in the system. However, the efficiency of the failure resilience provided has been an overlooked point. The vision of this work is that big data frameworks should not only finish computations under failures but also minimize the impact of those failures on computation time.

The first part of the talk presents the first in-depth analysis of the efficiency of the failure resilience provided by the popular Hadoop framework at the level of a single job. The results show that compute node failures can lead to variable and unpredictable job running times; the causes behind these results are detailed in the talk. The second part of the talk focuses on providing failure resilience at the level of multi-job computations. It presents the design, implementation, and evaluation of RCMP, a MapReduce system based on the fundamental insight that using replication as the main failure resilience strategy often leads to significant and unnecessary increases in computation running time. In contrast, RCMP is designed to use job re-computation as a first-order failure resilience strategy. RCMP enables re-computations that perform the minimum amount of work, and it maximizes the efficiency of the re-computation work that still needs to be performed.

Short Bio: Florin Dinu is a final-year graduate student in the Systems Group at Rice University, Houston, TX. He is advised by Prof. T.S. Eugene Ng. Before joining Rice in 2007, he received a B.A. in Computer Science from Politehnica University Bucharest in 2006 and then worked as a junior researcher at the Fraunhofer FOKUS Institute in Berlin, Germany. His Ph.D. dissertation focuses on the efficiency of failure resilience in big data processing frameworks. He has also worked on the benefits of centralized network control, congestion inference, and improving data transfers for big data computations.