
A Relativistic Enhancement to

Software Transactional Memory

Philip Howard, Jonathan Walpole

Two Methodologies

Relativistic Programming:

Optimize Read-Side Concurrency

Serialize Writes with traditional locking (of entire instance of type, e.g. entire tree)

Issue: Update performance degrades as write/read ratio increases

Transactional Memory:

Concurrent Writes (and Reads)

Takes advantage of disjoint access concurrency

Works for all read/write ratios, but does not take advantage of high read/write ratios

Two Methodologies, Take Two

Relativistic Programming:

R/W Joint access concurrency

Silent on W/W concurrency

Multiple Readers / One Writer scales even in the presence of high contention (e.g. a small RB tree)

Transactional Memory:

R/W Disjoint access concurrency

W/W Disjoint access concurrency

R/W performance may degrade as contention increases (e.g. data set becomes small)

Can we glue them together?

Outline

1. SwissTM

2. Relativistic Programming

3. Relativistic Transactional Memory

4. Modifications to SwissTM

5. Correctness Argument

6. Performance

7. Future Work: Privatization?

8. What is the Performance Cost?

9. What functionality do we give up?

10. References

SwissTM

• SwissTM is very similar to the STM of the Ennals paper

• Word Based

• Lock-based (from Ennals and Fraser)

• Not Obstruction Free

• 2 Phase Locking with Contention Management

• Invisible Reads: Read Transactions Cannot Block Writes

• Opaque: every transaction sees a consistent (all-or-nothing) state: "safe from crashes/loops"

• Weakly Atomic: Allows accesses outside TM

The purpose of SwissTM is to stretch STMs to mixed workloads, including larger transactions, on complex, non-uniform data structures with irregular access patterns.

"Strectching Transactional Memory"

Aleksandar Dragojevíc, Rachid Guerraoui, Michal Kapalka, 2009

SwissTM

Global transaction counter, commit-ts (contention issue?)

Every write transaction gets a transaction number at commit: myts = increment&set( commit-ts );

Version number of a word is the transaction counter (myts) of the transaction that last wrote to the word.

Every transaction records the global transaction counter at start-transaction: tx.valid-ts = commit-ts;

Read Transaction Commit is a noop (Already known to be consistent)

Each read_word() checks that the word's version is no newer than its tx.valid-ts

(Sandwich the read of the word's value between two reads of the word's version/lock -- see the sketch below)

There are read-locks -- but they are only acquired at commit time, by write transactions

Read transactions don't get read locks!
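
Roughly, the sandwich check might look like the following minimal C++ sketch. The word_t layout, LOCKED sentinel, and abort-by-exception are invented for illustration; a real STM such as SwissTM would typically try to revalidate/extend the transaction before aborting.

#include <atomic>
#include <cstdint>
#include <stdexcept>

struct word_t {
    std::atomic<std::uintptr_t> version_or_lock;  // version of the last committed writer, or a lock mark
    std::atomic<std::uintptr_t> value;            // the transactional word itself
};

struct tx_t {
    std::uintptr_t valid_ts;  // global commit-ts sampled at start-transaction
};

// Hypothetical sentinel meaning "a write transaction holds this word's lock".
constexpr std::uintptr_t LOCKED = static_cast<std::uintptr_t>(-1);

std::uintptr_t read_word(const tx_t& tx, const word_t& w) {
    for (;;) {
        // Sandwich the value read between two reads of the version/lock.
        std::uintptr_t v1  = w.version_or_lock.load(std::memory_order_acquire);
        std::uintptr_t val = w.value.load(std::memory_order_acquire);
        std::uintptr_t v2  = w.version_or_lock.load(std::memory_order_acquire);
        if (v1 == v2 && v1 != LOCKED) {
            if (v1 <= tx.valid_ts)
                return val;  // unchanged since the transaction started: consistent
            // Written after we started: abort (a real STM may first try to
            // extend/revalidate its read set instead).
            throw std::runtime_error("read validation failed");
        }
        // A writer is committing this word right now: retry the sandwich.
    }
}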

SwissTM Example: Readonly Tx

t.start();
t.read_word(&checking);
    <- commit of writer happens here
t.read_word(&savings);
if (checking + savings < 1000) {
    // balance has dropped below $1000
}
t.commit();

Meanwhile, a writer:

t.start();
t.read_word(&checking);
t.write_word(&checking, checking + m);
t.read_word(&savings);
t.write_word(&savings, savings - m);
t.commit();

(With the validation rule above, the second read_word() finds savings' version newer than the reader's valid-ts, so the reader aborts or revalidates rather than observing an inconsistent balance.)

Relativistic Programming Review

Writers must keep data in an always consistent state:

E.g. To update, copy the old version to private memory, update it, and atomically publish the new version (with memory barriers to ensure that the updates to the private copy happen before the publish) -- a sketch follows below.

Readers must use demarcated read-sections. Outside the read section, readers cannot hold a reference to the shared object.

Read sections are used by writers to define a grace period: wfr() blocks a writer (at least) until all read sections that were active when the write transaction started have ended.

Moving a node requires wfr()
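
As a reminder of that write-side discipline, here is a minimal C++ sketch of a relativistic node replacement in a linked list. The node layout, rp_replace name, and the wait_for_readers callback are illustrative, not the paper's API.

#include <atomic>

struct node {
    int key;
    int value;
    std::atomic<node*> next;
};

// Replace the node after 'prev' with an updated copy, relativistically.
// Assumes writers are already serialized (e.g. by a per-tree lock) and that
// wait_for_readers() blocks until all pre-existing read-sections have ended.
void rp_replace(node* prev, int new_value, void (*wait_for_readers)()) {
    node* old_node = prev->next.load(std::memory_order_relaxed);

    // 1. Copy the old version to private memory and update the private copy.
    node* copy = new node;
    copy->key   = old_node->key;
    copy->value = new_value;
    copy->next.store(old_node->next.load(std::memory_order_relaxed),
                     std::memory_order_relaxed);

    // 2. Atomically publish the new version.  The release store is the memory
    //    barrier that makes the private-copy updates visible before the publish.
    prev->next.store(copy, std::memory_order_release);

    // 3. Grace period: wait out readers that may still hold a reference to the
    //    old version, then it is safe to reclaim it.
    wait_for_readers();
    delete old_node;
}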

Relativistic Transactional Memory

1. Weakly Atomic (as opposed to strongly atomic)

2. Writes that get rolled back are never visible to readers

3. Writes become visible in program order

4. Allow delays to drain concurrent readers out of their read-sections

Which of these requirements requires modifications to SwissTM?

Which of these requirements would not be met by optimistic update in place STMs?

RP reads vs. SwissTM reads

1. RP reads are semi-visible, but (like SwissTM) they don't lock anything.

2. RP reads cannot fail.

3. RP reads are truly concurrent with writers (during the writer's commit), so they must be done "relativistically", honoring all of the relativistic writer's instructions.

What if our STM optimistically did updates in place and then rolled back any transaction that conflicted at its commit time?

Modifications to SwissTM

1. A separate "RP" re-do log stored in program order.

Commit code executes from this new log, rather than the original SwissTM log. All writes are recorded: i.e. when there is a rewrite, both the original write and the rewrite will be done, in their original program-order positions. (A hypothetical sketch of such a log follows this slide.)

2. Added new primitives that will be logged into new "RP" log:

1. wait_for_readers: Grace period to allow existing readers to complete their read-sections

2. rp-produce (a flag on a write): add a memory barrier before the write. Only needed on certain writes, i.e. where it was used in the original RP implementation.

3. rp-free: Register a callback to free memory
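
For concreteness, one hypothetical shape for such a program-order RP log and its commit-time replay. The entry types, names, and callbacks below are invented; SwissTM's real log structures will differ.

#include <atomic>
#include <cstdint>
#include <vector>

// One hypothetical RP-log entry per logged primitive, kept in program order.
enum class rp_op { write, publish_write, wait_for_readers, rp_free };

struct rp_log_entry {
    rp_op kind;
    std::atomic<std::uintptr_t>* addr = nullptr;  // target word, for writes
    std::uintptr_t value = 0;                     // value to store, for writes
    void* to_free = nullptr;                      // object to reclaim, for rp_free
};

// Replayed at commit time, after the transaction has acquired its write locks
// and is known to win; writes therefore become visible in program order.
void rp_commit(const std::vector<rp_log_entry>& log,
               void (*wait_for_readers)(),
               void (*defer_free)(void*)) {
    for (const rp_log_entry& e : log) {
        switch (e.kind) {
        case rp_op::write:
            e.addr->store(e.value, std::memory_order_relaxed);
            break;
        case rp_op::publish_write:
            // The rp-produce flag: barrier before the write, so earlier
            // initialization is visible before the new pointer is published.
            e.addr->store(e.value, std::memory_order_release);
            break;
        case rp_op::wait_for_readers:
            wait_for_readers();      // grace period for concurrent read-sections
            break;
        case rp_op::rp_free:
            defer_free(e.to_free);   // register the memory to be freed later
            break;
        }
    }
}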

Correctness of RP

RP is OK if updates are commutable -- i.e. if the order of operations does not affect the integrity of the ADT. (We will discuss this later...)

How does STM affect RP?

SwissRP is a modified RB tree implementation that is RP safe

SwissRP has all the RP tricks: always keeping the data structure "safe" for concurrent readers, memory barriers, grace periods for correctness and memory freeing safety.

So the only difference that readers "see" between RP and SwissRP is that all of the writes are delayed until commit time.

Integrity of the STM

1. Moving a transactional read to a relativistic read has no impact on transactional writes, because the transactional reads were already invisible to transactional writes.

2. All the original SwissTM meta-data is maintained -- integrity of conflict detection, contention management, etc. is maintained.

3. Changes to the "actual writes to memory" during commit should have no impact (by definition of a transaction) -- except to allow relativistic reads.

4. wait_for_readers() only waits for Relativistic Readers -- not for a different Write Transaction t'. Therefore no deadlock.

Performance: SwissTM vs. RP

What was the original Problem?

Does RP-STM solve it?

Update Performance

Read Performance

Future Enhancements: Privatization

How might we make Updates even faster?

How would we do this?

Is it worth it?

Privatization: Atomically take an object, or part of an object, private (to an individual thread, say), work on it for a while, and then atomically make it public again.
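
A minimal sketch of the idea in an RP-style setting (the list layout and function names are hypothetical): atomically detach part of the structure, wait out current readers, mutate it with plain uninstrumented accesses, then republish.

#include <atomic>

struct pnode {
    int value;
    pnode* next;
};

// Privatize the list hanging off 'head', work on it without any STM or RP
// instrumentation, then republish it.  Assumes the writer lock is held and
// wait_for_readers() provides an RP grace period.
void privatize_and_update(std::atomic<pnode*>& head,
                          void (*wait_for_readers)(),
                          void (*mutate_privately)(pnode*)) {
    // 1. Atomically take the whole sublist private.
    pnode* priv = head.exchange(nullptr, std::memory_order_acquire);

    // 2. Wait until no reader can still hold a reference into it.
    wait_for_readers();

    // 3. Mutate with plain, uninstrumented accesses -- this is where the
    //    hoped-for update speedup would come from.
    mutate_privately(priv);

    // 4. Atomically make it public again.
    head.store(priv, std::memory_order_release);
}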

What is the Performance Tradeoff of RP-STM?

Functionality

What functionality do we give up by moving from Transactional Reads to Relativistic Reads?

What is a value of transactions that we might have forgotten to mention on Monday?

Bank of America

Consider a write transaction that transfers money from one account to another.

Consider a read transaction that reports the total balance for all my accounts: read savings; read checking

BofA: If your total balance drops below $1000 at any time during the month, you will be charged $5 for debit card purchases during that month.
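
For concreteness (with made-up numbers): suppose checking = $600 and savings = $600, and the transfer moves m = $300 from savings to checking. A relativistic reader that reads checking before the writer's commit reaches it ($600) but reads savings after the withdrawal has landed ($300) sees a total of $900 and falsely concludes the balance dropped below $1000, even though the true total never left $1200.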

Can we add a wfr() to the transfer transaction to fix it?

Do all write transactions have to anticipate and code for an "arbitrary" relativistic reader? So we have a concept of RP-safe code, similar to thread-safe code?

Do kernels have these sorts of read transactions?

Can RP reads and transactional reads coexist?

Conclusions

• RP and SwissTM are compatible.

• RP and SwissTM complement each other well for read and update performance.

• Not all STMs are compatible with RP

• Demonstrates that RP has different Write Synchronization Alternatives.

References

Stretching Transactional Memory
Aleksandar Dragojević, Rachid Guerraoui, Michał Kapałka
PLDI 2009
(Also ppt slides by Gar Nissan)

Relativistic Red-Black Trees
Philip Howard, Jonathan Walpole
Concurrency and Computation: Practice and Experience (in submission), July 2011

fin

Backup charts: Read Scalability, Write Scalability, Varying Update Rate

Download