Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/85953
DC Field | Value | Language
dc.contributor | Department of Computing | -
dc.creator | Xu, Wenjian | -
dc.identifier.uri | https://theses.lib.polyu.edu.hk/handle/200/9503 | -
dc.language.iso | English | -
dc.title | Towards efficient analytic query processing in main-memory column-stores | -
dc.type | Thesis | -
dcterms.abstract | Recently, there has been a resurgence of interest in main-memory analytic databases because of the large RAM capacity of modern servers and the increasing demand for real-time analytic platforms. In such databases, operations like scan, sort and join are at the heart of almost every query plan. However, current implementations of these operations have not fully leveraged the new features (e.g., SIMD, multi-core) provided by modern hardware. The goal of this dissertation is to design efficient algorithms for scan, sort and join by judiciously exploiting every bit of RAM and all the available parallelism in each processing unit. Scan is a crucial operation since it sits closest to the underlying data in the query plan. To accelerate scans, a state-of-the-art in-memory data layout chops data into multiple bytes and gains an early-stopping capability through comparisons on the high-order bytes. As column widths are usually not multiples of a byte, the last byte of such a layout is padded with 0s, wasting memory bandwidth and computation power. To fully leverage these resources, we propose to weave a secondary index into the vacant bits (i.e., the bits originally padded with 0s), forming our new storage layout. This storage layout enables skip-scan, a new fast scan that supports both data skipping and early stopping without any space overhead. | -
dcterms.abstract | With the advent of fast scans and denormalization techniques, sorting could become the new bottleneck. Queries with multiple attributes in clauses like GROUP BY, ORDER BY, and SQL:2003 PARTITION BY are common in real workloads. When executing such queries, state-of-the-art main-memory column-stores require one round of sorting per input column. To accelerate this kind of multi-column sorting, we propose a new technique called "code massaging", which manipulates the bits across the columns so that the overall sorting time can be reduced by eliminating some rounds of sorting and/or by improving the degree of SIMD data-level parallelism. Join remains a time-consuming operation when the denormalization overhead is too large to be practical. Hash joins have been studied, improved, and reexamined over decades. Their major optimization direction is to partition the input columns so that the working set fits into the caches, improving the locality of hash probing. As an alternative, we propose to utilize a secondary index to improve hash joins without physical partitioning. Specifically, in the build phase, hash values are scattered evenly into logical partitions of the hash table; in the probe phase, the secondary index is used as a hint to reorder the probing sequence, so that the locality of hash probing is increased. Finally, we benchmark the performance of the proposed techniques in our column-store research prototype. Extensive experiments on benchmarks and real data show that our methods offer significant performance improvements over their counterparts. In addition, our methods also show decent scalability on modern multi-core CPUs. | -
dcterms.accessRights | open access | -
dcterms.educationLevel | Ph.D. | -
dcterms.extent | xx, 150 pages : illustrations | -
dcterms.issued | 2018 | -
dcterms.LCSH | Hong Kong Polytechnic University -- Dissertations | -
dcterms.LCSH | Querying (Computer science) | -
dcterms.LCSH | Computer algorithms | -
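The abstract above describes a byte-sliced layout whose padding bits are reused for a secondary index to enable skip-scan. The following is a minimal sketch of that general idea, not the thesis implementation: it assumes an 11-bit dictionary-coded column split into two byte slices, with the five vacant bits of the second slice carrying hypothetical per-row index hint bits instead of zero padding, and a predicate scan that stops early on the high-order slice.

```cpp
// Minimal sketch (not the thesis implementation) of a byte-sliced column layout.
// An 11-bit code is split into a high-order slice (8 bits) and a low-order slice
// (3 bits); the 5 vacant bits of the second slice, normally zero padding, are
// assumed to hold hint bits of a hypothetical secondary index.
#include <cstdint>
#include <cstdio>
#include <vector>

constexpr int kColBits    = 11;             // column code width
constexpr int kTailBits   = kColBits - 8;   // 3 bits stored in slice 1
constexpr int kVacantBits = 8 - kTailBits;  // 5 bits that would otherwise be 0s

struct ByteSlicedColumn {
    std::vector<uint8_t> slice0;  // high-order byte of each code
    std::vector<uint8_t> slice1;  // low-order bits plus woven-in hint bits

    void append(uint16_t code, uint8_t hint) {
        slice0.push_back(static_cast<uint8_t>(code >> kTailBits));
        uint8_t tail = static_cast<uint8_t>(code & ((1u << kTailBits) - 1));
        // Weave the hint bits into the vacant low bits instead of zero padding.
        slice1.push_back(static_cast<uint8_t>(
            (tail << kVacantBits) | (hint & ((1u << kVacantBits) - 1))));
    }

    // Scan for "code < literal": most rows are decided by the high-order slice
    // alone (early stopping); only ties fall through to the second slice.
    std::vector<size_t> scanLess(uint16_t literal) const {
        uint8_t hi = static_cast<uint8_t>(literal >> kTailBits);
        uint8_t lo = static_cast<uint8_t>(literal & ((1u << kTailBits) - 1));
        std::vector<size_t> hits;
        for (size_t i = 0; i < slice0.size(); ++i) {
            if (slice0[i] < hi) { hits.push_back(i); continue; }  // decided early: qualifies
            if (slice0[i] > hi) continue;                         // decided early: disqualified
            if (static_cast<uint8_t>(slice1[i] >> kVacantBits) < lo) hits.push_back(i);
        }
        return hits;
    }
};

int main() {
    ByteSlicedColumn col;
    col.append(1200, /*hint=*/3);
    col.append( 700, /*hint=*/1);
    for (size_t idx : col.scanLess(1000)) std::printf("row %zu qualifies\n", idx);
}
```

The column widths, the hint semantics, and the scalar loop are all illustrative; the thesis targets SIMD execution and uses the woven-in index for data skipping as well, which this sketch does not attempt to reproduce.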
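The abstract also mentions "code massaging", which manipulates bits across columns to cut down the number of sorting rounds. A minimal sketch of the underlying intuition, under assumed column widths and names that are not from the thesis: two narrow dictionary-coded columns are packed into one machine-word key, so a single sort produces the order that would otherwise need one stable sorting round per column.

```cpp
// Minimal sketch (not the thesis algorithm): pack two 10-bit column codes into
// one 64-bit sort key so a single sort orders rows by (a, b), instead of one
// round of sorting per column. Column names and widths are illustrative.
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <vector>

struct Row { uint16_t a; uint16_t b; };  // both codes assumed to fit in 10 bits

int main() {
    std::vector<Row> rows = {{5, 9}, {5, 2}, {1, 7}};

    // Composite key: column a in the high bits, column b below it, and the row
    // id in the low 32 bits so the original tuple can be recovered after sorting.
    std::vector<uint64_t> keys;
    for (uint32_t rid = 0; rid < rows.size(); ++rid) {
        uint64_t packed = (static_cast<uint64_t>(rows[rid].a) << 10) | rows[rid].b;
        keys.push_back((packed << 32) | rid);
    }

    // One sort on the packed key replaces the per-column rounds of sorting.
    std::sort(keys.begin(), keys.end());

    for (uint64_t k : keys) {
        const Row& r = rows[static_cast<uint32_t>(k)];
        std::printf("a=%u b=%u\n", unsigned(r.a), unsigned(r.b));
    }
}
```

The sketch ignores the SIMD data-level parallelism aspect of code massaging and assumes the combined codes fit into a single word; the point is only that rearranging bits across columns can eliminate sorting rounds.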
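Finally, for the hash-join idea of scattering build keys into logical partitions and reordering probes by a hint, here is a minimal sketch under stated assumptions: the hash function, partition count, and the use of the target partition number as the "hint" are all illustrative stand-ins for the thesis's secondary index, and no overflow or payload handling is attempted.

```cpp
// Minimal sketch (not the thesis design) of probe-side reordering for a hash
// join: build keys go into logical partitions of one hash table, and probe keys
// are grouped by their target partition so consecutive probes touch the same
// region of the table, without physically partitioning the input.
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <utility>
#include <vector>

constexpr uint32_t kPartitions   = 4;     // logical partitions of the hash table
constexpr uint32_t kSlotsPerPart = 1024;  // illustrative sizing, no overflow handling

struct Slot { uint32_t key = 0; bool used = false; };

static uint32_t hashKey(uint32_t k) { return k * 2654435761u; }  // multiplicative hash

int main() {
    std::vector<Slot> table(kPartitions * kSlotsPerPart);

    // Build phase: place each key into its logical partition (linear probing inside it).
    std::vector<uint32_t> buildKeys = {42, 7, 19, 100, 63};
    for (uint32_t k : buildKeys) {
        uint32_t h    = hashKey(k);
        uint32_t part = h % kPartitions;
        uint32_t off  = (h / kPartitions) % kSlotsPerPart;
        while (table[part * kSlotsPerPart + off].used) off = (off + 1) % kSlotsPerPart;
        table[part * kSlotsPerPart + off] = {k, true};
    }

    // Probe phase: compute each probe key's partition hint, then sort the probes
    // by hint so probes into the same partition run back to back (better locality).
    std::vector<uint32_t> probeKeys = {19, 8, 42, 63, 5};
    std::vector<std::pair<uint32_t, uint32_t>> hinted;  // (partition hint, key)
    for (uint32_t k : probeKeys) hinted.emplace_back(hashKey(k) % kPartitions, k);
    std::sort(hinted.begin(), hinted.end());

    for (auto [part, k] : hinted) {
        uint32_t off = (hashKey(k) / kPartitions) % kSlotsPerPart;
        for (uint32_t step = 0; step < kSlotsPerPart; ++step) {
            const Slot& s = table[part * kSlotsPerPart + off];
            if (!s.used) break;  // empty slot: key not present on the build side
            if (s.key == k) { std::printf("match %u in partition %u\n", k, part); break; }
            off = (off + 1) % kSlotsPerPart;
        }
    }
}
```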
Appears in Collections: Thesis

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.