Why Are Memory Barriers Important?
A trip to main memory costs hundreds of clock cycles on commodity hardware. Processors use caching to reduce the cost of memory latency by orders of magnitude, and these caches re-order pending memory operations for the sake of performance. In other words, the reads and writes of a program are not necessarily performed in the order in which they are given to the processor. When data is immutable or confined to the scope of one thread, these optimizations are harmless. Combining them with symmetric multi-processing and shared mutable state, on the other hand, can be a nightmare: a program can behave non-deterministically when memory operations on shared mutable state are re-ordered, and one thread can write values that become visible to another thread in an order inconsistent with the order in which they were written. A properly placed memory barrier prevents this problem by forcing the processor to serialize pending memory operations.
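As a concrete, purely illustrative sketch (the class and field names below are hypothetical, not from the article), consider two threads sharing a pair of fields. Without barriers, nothing stops the two stores in writer() from becoming visible to reader() in the opposite order:

```java
class VisibilityHazard {
    int data = 0;
    boolean ready = false; // deliberately not volatile: no barriers are emitted

    // Executed by the writing thread.
    void writer() {
        data = 42;
        ready = true; // may become visible to the reader before the store to data
    }

    // Executed by the reading thread.
    void reader() {
        if (ready) {
            System.out.println(data); // may print 0 rather than 42
        }
    }
}
```

Declaring ready as volatile fixes this: the JVM then emits the barriers that keep the store to data ordered before the store to ready, which is exactly the mechanism the rest of this section examines.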
Memory Barriers As Protocols
Memory barriers are not directly exposed by the JVM; instead, the JVM inserts them into the instruction sequence to uphold the semantics of language-level concurrency primitives. We'll look at the source code and assembly instructions of some simple Java programs to see how. Let's begin a crash course in memory barriers with Dekker's algorithm, which uses three volatile variables to coordinate access to a shared resource between two threads; a sketch of the protocol follows.
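The original listing is not reproduced here, so this is a minimal sketch under assumed names (intentFirst, intentSecond, turn). It shows the protocol from the first thread's point of view; the second thread runs the mirror image, with the two intent flags swapped and turn tested against 1.

```java
class Dekker {
    private volatile boolean intentFirst = false;
    private volatile boolean intentSecond = false;
    private volatile int turn = 0; // 0: first thread's turn, 1: second thread's turn

    // Entry protocol for the first thread.
    void lockFirst() {
        intentFirst = true;           // signal intent to enter the critical section
        while (intentSecond) {        // conflict: the other thread signaled intent too
            if (turn != 0) {          // not our turn, so back off...
                intentFirst = false;
                while (turn != 0) { } // ...spin until the turn is handed to us...
                intentFirst = true;   // ...then signal intent again
            }
        }
    }

    // Exit protocol for the first thread.
    void unlockFirst() {
        turn = 1;                     // hand the turn to the second thread
        intentFirst = false;          // withdraw intent
    }
}
```

Every access to these three variables is a volatile read or write, and it is exactly at these points that the JVM must emit memory barriers for the protocol to work.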
Try not to focus on the finer details of this algorithm. Which parts are relevant? Each thread attempts to enter the critical section by first signaling its intent to do so. If a thread then observes a conflict (both threads have signaled intent), the conflict is resolved by turn taking. Only one thread can access the critical section at a given point in time.
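To see one of the barriers themselves, the generated machine code can be inspected, assuming HotSpot on x86 with -XX:+UnlockDiagnosticVMOptions -XX:+PrintAssembly and the hsdis disassembler plugin (the address below is illustrative). On x86, HotSpot follows a volatile store with a locked no-op add to the stack pointer, which serves as the StoreLoad barrier:

```
0x03f8340c: lock addl $0x0,(%esp)   ; StoreLoad barrier after a volatile write such as turn = 1
```

The other orderings a volatile access needs come for free from x86's relatively strong memory model, which is why this single instruction is all that shows up.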