Title: Optimizing big data systems with non-volatile memories : from graph computing to flash-based SSD arrays
Authors: Han, Lei
Advisors: Shao, Zili (COMP); Xiao, Bin (COMP)
Subjects: Computer storage devices
Issue Date: 2019
Publisher: The Hong Kong Polytechnic University
Abstract: Big data has been exerting an increasingly pervasive and profound influence on everyday life. For example, social networks such as Facebook and Twitter produce huge volumes of data and analyze it to learn relationships between users, which are usually modeled as large-scale graphs. However, as a huge collection of data accumulated over time for processing and management, big data remains extraordinarily complex and large for current computing infrastructures, leading to high processing costs and high storage resource consumption. Graph processing is an important part of big data analysis. Processing large-scale graphs on traditional platforms, including CPUs, GPUs and FPGAs, is inefficient due to the many random memory accesses involved. Moreover, the volume of high-variety information with diverse data characteristics has grown significantly. Persistently storing such data on SSD-based arrays for high-velocity access incurs high disk replacement rates due to the limited lifetime of SSDs. In addition, employing erasure codes for data protection in storage systems consumes substantial computational resources, further exacerbating the inefficiency of big data processing. In this thesis, we optimize big data systems with non-volatile memories from several aspects: improving the performance of large-scale graph processing, extending the lifetime of SSD arrays and flash chips, and improving the efficiency of erasure coding on SSD arrays. In the first part, we focus on optimizing the computational performance of big data with an emerging metal-oxide resistive random access memory (ReRAM). In the case of large-scale graph traversal, processing breadth-first search (BFS) on traditional platforms issues many random and irregular memory accesses, especially on CPU-based and GPU-based platforms.
This leads to a huge amount of data movement between memory and processors, so that processors stall waiting for memory and execute instructions slowly. Moreover, the off-chip main memory in traditional platforms is a major consumer of energy. To mitigate these limitations, we propose a novel ReRAM-based processing-in-memory architecture for BFS, called RPBFS. In RPBFS, the ReRAM-based memory banks are separated into graph banks and master banks. We design an efficient graph mapping scheme to distributively store a graph across multiple graph banks. To reduce data movement overhead, we design an efficient traversal scheme that confines a graph search to the related graph banks through collaboration with a master bank. Moreover, we propose an analytical performance model for RPBFS, which helps us identify bottlenecks and provides optimization opportunities for our design. The experimental results show that the proposed schemes significantly improve graph traversal performance and achieve large energy reductions compared with both CPU-based and GPU-based BFS implementations.
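To make the access pattern concrete, a minimal frontier-based BFS (illustrative only; the graph, names and interface here are hypothetical, not RPBFS's actual design) shows why each frontier expansion issues data-dependent, irregular reads into the adjacency structure:

```python
from collections import deque

def bfs_levels(adj, source):
    """Frontier-based BFS over an adjacency-list dict.
    Each neighbor lookup adj[v] is a data-dependent, irregular
    memory access -- the pattern RPBFS tries to localize in banks."""
    level = {source: 0}
    frontier = deque([source])
    while frontier:
        v = frontier.popleft()
        for u in adj[v]:  # random accesses: targets depend on the data
            if u not in level:
                level[u] = level[v] + 1
                frontier.append(u)
    return level

# Tiny example graph as adjacency lists
adj = {0: [1, 2], 1: [3], 2: [3], 3: [4], 4: []}
print(bfs_levels(adj, 0))  # → {0: 0, 1: 1, 2: 1, 3: 2, 4: 3}
```

On a CPU or GPU these neighbor reads scatter across memory with little locality, which is what motivates moving the traversal into the memory banks themselves.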
In the second part, we optimize storage efficiency for big data systems with NAND flash memory and ReRAM, achieving lower operational cost. Flash-based SSD arrays are increasingly being deployed in data centers. Compared with hard disk drive arrays, SSD arrays drastically enhance storage density and I/O performance, and reduce power and rack space. However, SSDs suffer from aging issues, since a flash block can endure only a limited number of program/erase (P/E) cycles. The ability of a storage system to sustain service over time is particularly relevant to operational cost: frequently replacing failed drives makes service unstable. To address this, we first propose FreeRAID, which applies approximate storage via the interplay of RAID and SSD controllers to improve the lifetime of SSD-based RAID arrays. Our basic idea is to reuse faulty blocks (which contain pages with uncorrectable errors) to store approximate data (which can tolerate more errors). FreeRAID integrates two key techniques: dual-space management, which efficiently allocates independent space for normal and approximate data, and adaptive-FTL, which dynamically switches FTL schemes for an SSD according to its lifespan stage. We conduct experiments and compare FreeRAID with conventional RAID and FTL schemes. The experimental results show that we can significantly increase the lifetime of SSD-based RAID arrays. Second, we extend the lifetime optimization to embedded storage systems. We propose Rebirth-FTL, a pure-software management scheme in the flash translation layer for lifetime optimization. Rebirth-FTL efficiently and effectively manages two spaces, approximate space and normal space, with approximation-aware address mapping, coordinated garbage collection and differential wear leveling. We also develop a scheme to pass approximation information from userland to kernel space in Linux, which collaborates with Rebirth-FTL to optimize the lifetime of flash memory.
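Both FreeRAID's dual-space management and Rebirth-FTL's two-space handling hinge on one routing decision: approximate writes may reuse retired faulty blocks, while exact data must land on healthy blocks. A minimal sketch of that decision (class name, interface and block IDs are hypothetical, not the thesis's actual data structures):

```python
class DualSpaceAllocator:
    """Sketch of dual-space block allocation: approximate data
    prefers reclaimed faulty blocks (tolerating their bit errors),
    normal data is restricted to healthy blocks."""

    def __init__(self, healthy_blocks, faulty_blocks):
        self.healthy = list(healthy_blocks)  # fully correctable blocks
        self.faulty = list(faulty_blocks)    # blocks with uncorrectable pages

    def allocate(self, approximate):
        # Approximate writes reuse faulty blocks first, extending
        # device lifetime; fall back to healthy blocks if none remain.
        if approximate and self.faulty:
            return self.faulty.pop()
        if self.healthy:
            return self.healthy.pop()
        raise RuntimeError("no free blocks")

alloc = DualSpaceAllocator(healthy_blocks=[0, 1], faulty_blocks=[7])
print(alloc.allocate(approximate=True))   # reuses faulty block 7
print(alloc.allocate(approximate=False))  # takes a healthy block
```

The real schemes layer address mapping, garbage collection and wear leveling on top of this split; the sketch only shows why the two spaces stay independent.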
A lifetime model is also presented for lifetime analysis. We implement Rebirth-FTL on an embedded development board and a simulator. Evaluations across a wide variety of workloads show that Rebirth-FTL significantly outperforms conventional FTLs in lifetime extension while satisfying the workloads' quality requirements. Third, erasure codes such as Cauchy Reed-Solomon codes have been gaining ever-increasing importance for fault tolerance in SSD-based RAID arrays. However, erasure coding on processor-based implementations, such as a dedicated RAID controller, relies on Galois field arithmetic to perform matrix-vector multiplication, increasing computational complexity and leading to a huge number of memory accesses. We propose Re-RAID, which uses ReRAM as the main memory in both RAID and SSD controllers. In Re-RAID, erasure coding can be processed in ReRAM memory to achieve high throughput. To minimize the overhead of recovering a single failure, we propose a confluent Cauchy-Vandermonde matrix as the generator matrix, which allows the ReRAM memory on SSDs to perform the reconstruction task for a single failure. Experimental results show that Re-RAID achieves a significant performance improvement in encoding and decoding compared with conventional processor-based implementations.
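To illustrate the per-symbol cost that motivates in-memory encoding, here is a minimal GF(2^8) multiply and matrix-vector encode (the generator matrix below is illustrative only, not the thesis's confluent Cauchy-Vandermonde construction; the reduction polynomial 0x11d is one common choice):

```python
def gf_mul(a, b, poly=0x11d):
    """Multiply in GF(2^8) with reduction polynomial
    x^8 + x^4 + x^3 + x^2 + 1 (0x11d). This shift-and-XOR loop is
    the arithmetic a processor performs for every matrix entry."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:      # reduce when degree reaches 8
            a ^= poly
        b >>= 1
    return r

def encode(gen, data):
    """Parity = generator matrix x data vector over GF(2^8).
    Each parity symbol costs k multiplies plus XOR accumulations,
    and every data symbol is re-read per row -- the memory-access
    burden Re-RAID moves into ReRAM."""
    parity = []
    for row in gen:
        acc = 0
        for coeff, sym in zip(row, data):
            acc ^= gf_mul(coeff, sym)
        parity.append(acc)
    return parity

# Illustrative 2x3 generator: one plain-XOR parity row, one weighted row.
gen = [[1, 1, 1], [1, 2, 4]]
data = [0x12, 0x34, 0x56]
print(encode(gen, data))  # → [112, 63]
```

For k data symbols and m parity symbols, encoding touches every data symbol m times; a generator matrix structured so a single-failure rebuild needs only one lightweight row is what keeps that common recovery case cheap.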
Description: xv, 134 pages : color illustrations
PolyU Library Call No.: [THS] LG51 .H577P COMP 2019 Han
URI: http://hdl.handle.net/10397/81012
Rights: All rights reserved.
Appears in Collections: Thesis
Files in This Item:
991022244149103411_link.htm (For PolyU Users, 168 B, HTML)
991022244149103411_pira.pdf (For All Users, Non-printable, 1.58 MB, Adobe PDF)
Citations as of Feb 19, 2020