Deduplication has become highly effective at eliminating duplicate data, multiplying the effective capacity of disk-based backup systems and establishing them as a realistic replacement for tape. Despite these improvements, single-node raw capacity is still mostly limited to tens or, at best, a few hundred terabytes, forcing users to resort to complex and costly multi-node systems, which usually allow them to scale only to single-digit petabytes. As the opportunities for deduplication-efficiency optimizations become scarce, we are challenged with the task of designing deduplication systems that will effectively address the capacity, throughput, management, and energy requirements of the petascale age.
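To make the underlying mechanism concrete, the following is a minimal illustrative sketch (not the SRL prototype) of the core idea behind most deduplication systems: data is split into chunks, each chunk is fingerprinted with a cryptographic hash, and a fingerprint index ensures each unique chunk is stored only once. The fixed 4 KB chunk size and in-memory index are simplifying assumptions; production systems typically use content-defined chunking, and keeping the index efficient at scale is one of the central challenges discussed in this talk.

    import hashlib
    import os

    CHUNK_SIZE = 4096  # simplifying assumption; real systems often use content-defined chunking

    class DedupStore:
        """Toy deduplicating store: each unique chunk is physically stored exactly once."""

        def __init__(self):
            self.index = {}         # fingerprint -> chunk (in-memory; a real index must scale beyond RAM)
            self.logical_bytes = 0  # bytes written by clients
            self.stored_bytes = 0   # bytes physically stored

        def write(self, data: bytes):
            for i in range(0, len(data), CHUNK_SIZE):
                chunk = data[i:i + CHUNK_SIZE]
                fp = hashlib.sha256(chunk).hexdigest()
                self.logical_bytes += len(chunk)
                if fp not in self.index:  # previously unseen chunk: store it
                    self.index[fp] = chunk
                    self.stored_bytes += len(chunk)

        def dedup_ratio(self) -> float:
            return self.logical_bytes / max(self.stored_bytes, 1)

    # Repeated full backups of the same data deduplicate almost entirely.
    store = DedupStore()
    backup = os.urandom(1024 * 1024)  # 1 MB of (unique) data
    for _ in range(10):               # ten "full backups" of the same data
        store.write(backup)
    print(f"dedup ratio: {store.dedup_ratio():.1f}x")  # ~10x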
We present a high-performance deduplication prototype, designed at SRL from the ground up to optimize overall single-node performance by making the best possible use of a node's resources, and to achieve three important goals: scaling to large capacities, providing good deduplication efficiency, and delivering near-raw-disk throughput.
We will also discuss the requirements and challenges of designing a commercial, large-scale cloud deduplication system.
Petros Efstathopoulos is a Technical Director at Symantec Research Labs in Culver City, CA. He holds a Ph.D. degree in Computer Science from the University of California, Los Angeles (UCLA) and a B.Sc. degree in Electrical and Computer Engineering from the National Technical University of Athens, Greece (NTUA).
Since 2000, Dr. Efstathopoulos has worked on operating system kernel projects, mostly with the Linux kernel. During his Ph.D. he worked on the design and implementation of the Asbestos operating system, introducing decentralized information flow control to contain the effects of bugs and provide improved security. His research interests include operating systems, system/network security, information flow control, system management, virtualization, storage, and file systems.