Apache: Big Data 2016
Less Is More: Doubling Storage Efficiency with HDFS Erasure Coding - Zhe Zhang, LinkedIn & Kai Zheng, Intel


Ever since its creation, HDFS has relied on data replication to shield against most failure scenarios. However, with the explosive growth in data volume, replication is becoming expensive: the default 3x replication scheme incurs a 200% storage overhead. Erasure coding (EC) uses far less storage space while providing the same level of fault tolerance. Under typical configurations, EC reduces the storage cost by ~50% compared with 3x replication.
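The overhead figures above follow from simple arithmetic; a minimal sketch, using RS(6,3) as an illustrative Reed-Solomon layout (the abstract does not name a specific scheme):

```python
# Storage overhead = extra storage beyond one logical copy of the data,
# as a percentage of the logical data size.

def replication_overhead(factor: int) -> float:
    """n-way replication stores (n - 1) extra copies."""
    return (factor - 1) * 100.0

def ec_overhead(data_units: int, parity_units: int) -> float:
    """A (data, parity) erasure-coding scheme adds parity/data extra storage."""
    return parity_units / data_units * 100.0

# Default HDFS 3x replication: two extra copies.
print(replication_overhead(3))  # 200.0

# Illustrative RS(6,3): 3 parity cells per 6 data cells, i.e. 1.5x total
# storage versus 3x for replication -- roughly the ~50% cost reduction cited.
print(ec_overhead(6, 3))        # 50.0
```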

In this talk we will introduce the design and implementation of HDFS-EC, along with recommended use cases, and present preliminary performance results. Equipped with the Intel ISA-L library, HDFS-EC largely eliminates the computational overhead of codec calculation. Under sequential I/O workloads, it achieves twice the throughput of 3x replication by performing striped I/O to multiple DataNodes in parallel.
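The parallelism claim rests on striping: a logical block is split into fixed-size cells written round-robin across the data units of a stripe. A hypothetical sketch (the cell size and six-unit stripe are illustrative assumptions, not figures from the talk):

```python
CELL_SIZE = 1024 * 1024  # assumed 1 MiB cells, for illustration only

def cell_to_data_unit(cell_index: int, num_data_units: int = 6) -> int:
    """Round-robin placement: which stripe data unit receives a given cell."""
    return cell_index % num_data_units

# Cells 0..5 land on units 0..5; cell 6 wraps back to unit 0.  A sequential
# writer therefore streams to all six DataNodes in parallel instead of
# pipelining one full block replica at a time.
print([cell_to_data_unit(i) for i in range(8)])  # [0, 1, 2, 3, 4, 5, 0, 1]
```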


Zhe Zhang

Zhe Zhang is a software engineer at LinkedIn working on Hadoop. He's an Apache Hadoop committer and an author of HDFS Erasure Coding. Before joining LinkedIn in February 2016, Zhe was an engineer on the Cloudera HDFS team. Prior to that he worked at the IBM T. J. Watson Research Center...

Kai Zheng

Kai is a senior software engineer at Intel who has worked in the big data and security fields for several years. He is a key initiator of Apache Kerby, an Apache Directory PMC member, and an Apache Hadoop committer.

Wednesday May 11, 2016 5:10pm - 6:00pm PDT
Georgia B