Phase Reconciliation for Contended In-Memory Transactions

Author: Narula, Neha; Cutler, Cody; Kohler, Eddie W; Morris, Robert

Note: Order does not necessarily reflect citation order of authors.

Citation: Narula, Neha, Cody Cutler, Eddie Kohler, and Robert Morris. 2014. "Phase Reconciliation for Contended In-Memory Transactions." In Proceedings of the 11th USENIX Symposium on Operating Systems Design and Implementation (OSDI '14), Broomfield, CO, October 6-8, 2014: 511-524.
Abstract: Multicore main-memory database performance can collapse when many transactions contend on the same data. Contending transactions are executed serially—either by locks or by optimistic concurrency control aborts—in order to ensure that they have serializable effects. This leaves many cores idle and performance poor. We introduce a new concurrency control technique, phase reconciliation, that solves this problem for many important workloads. Doppel, our phase reconciliation database, repeatedly cycles through joined, split, and reconciliation phases. Joined phases use traditional concurrency control and allow any transaction to execute. When workload contention causes unnecessary serial execution, Doppel switches to a split phase. There, updates to contended items modify per-core state, and thus proceed in parallel on different cores. Not all transactions can execute in a split phase; for example, all modifications to a contended item must commute. A reconciliation phase merges these per-core states into the global store, producing a complete database ready for joined phase transactions. A key aspect of this design is determining which items to split, and which operations to allow on split items.

Phase reconciliation helps most when there are many updates to a few popular database records. Its throughput is up to 38x higher than conventional concurrency control protocols on microbenchmarks, and up to 3x higher on a larger application, at the cost of increased latency for some transactions.
