Publication: LLAMA: A Persistent, Mutable Representation for Graphs
Date
2015-01-16
Authors
Macko, Peter
Citation
Macko, Peter. 2015. LLAMA: A Persistent, Mutable Representation for Graphs. Doctoral dissertation, Harvard University, Graduate School of Arts & Sciences.
Abstract
Graph-structured data is large, ever-changing, and ubiquitous. These features demand that graph analytics applications both compute on and modify large graphs efficiently, and they make it valuable for analysts to focus on just the last few hours or days of data. We present LLAMA, a graph storage and analysis system that supports mutability and out-of-memory execution, and SLOTH, a sliding window extension of LLAMA that efficiently maintains a view of just the most recently added parts of a graph.
LLAMA performs comparably to immutable main-memory analysis systems for graphs that fit in memory and significantly outperforms existing out-of-memory analysis systems for graphs that exceed main memory. It bases its implementation on the compressed sparse row (CSR) representation, which is a read-only representation commonly used for graph analytics. We augment this representation to support mutability and persistence using a novel implementation of multi-versioned array snapshots, making it ideal for applications that receive a steady stream of new data, but need to perform whole-graph analysis on consistent views of the data. We leverage the multi-versioned nature of this representation to build SLOTH, a sliding window data structure that advances the window by creating a snapshot from new data and aging out old data by deleting the corresponding snapshots.
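To make the snapshot idea concrete, the following C++ fragment is a minimal sketch (hypothetical names throughout, not LLAMA's actual vertex-table and edge-table layout): each snapshot holds only the adjacency added since the previous one, reads walk the snapshot chain from newest to oldest, and a SLOTH-style window advances by appending a snapshot and retires data by deleting the oldest one.

```cpp
// Minimal sketch (hypothetical; not LLAMA's on-disk layout): per-snapshot
// adjacency deltas plus a bounded snapshot chain acting as a sliding window.
#include <cstdint>
#include <deque>
#include <iostream>
#include <unordered_map>
#include <vector>

using Adjacency = std::unordered_map<uint64_t, std::vector<uint64_t>>;

struct SnapshotChain {
    // Newest snapshot is at the back; the window keeps a bounded number.
    std::deque<Adjacency> snapshots;

    // Advance the window: add one snapshot of newly arrived edges and,
    // once the window is full, age out the oldest snapshot.
    void advance(Adjacency new_edges, size_t window_size) {
        snapshots.push_back(std::move(new_edges));
        while (snapshots.size() > window_size) snapshots.pop_front();
    }

    // Neighbors of `v` in the current window, gathered across snapshots,
    // so a whole-graph analysis sees one consistent view.
    std::vector<uint64_t> neighbors(uint64_t v) const {
        std::vector<uint64_t> out;
        for (auto it = snapshots.rbegin(); it != snapshots.rend(); ++it) {
            auto found = it->find(v);
            if (found != it->end())
                out.insert(out.end(), found->second.begin(), found->second.end());
        }
        return out;
    }
};

int main() {
    SnapshotChain window;
    window.advance({{0, {1, 2}}}, /*window_size=*/2);  // snapshot 0
    window.advance({{0, {3}}},    /*window_size=*/2);  // snapshot 1
    window.advance({{1, {3}}},    /*window_size=*/2);  // snapshot 2; snapshot 0 ages out

    for (uint64_t v : window.neighbors(0)) std::cout << v << ' ';  // prints: 3
    std::cout << '\n';
}
```

The sketch trades LLAMA's contiguous CSR arrays for a hash map purely for brevity; the point it illustrates is that keeping per-snapshot deltas lets analyses run over a consistent union of snapshots while ingestion continues, and lets a window advance or retire data simply by creating or deleting snapshots.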
We compare LLAMA to state-of-the-art systems on representative graph analysis workloads, showing that LLAMA scales well both out-of-memory and across parallel cores. Our evaluation shows that LLAMA's mutability introduces modest overheads of 3-18% relative to immutable CSR for in-memory execution and that it outperforms state-of-the-art out-of-memory systems in most cases, with a best-case improvement of a factor of 5 on breadth-first search. We evaluate SLOTH with various sliding window configurations and demonstrate that LLAMA is a good building block for sliding window computation.
Keywords
Computer Science
Terms of Use
This article is made available under the terms and conditions applicable to Other Posted Material (LAA), as set forth at Terms of Service