concurrency - Memory effects of synchronization in Java

JSR-133 FAQ says:

But there is more to synchronization than mutual exclusion. Synchronization ensures that memory writes by a thread before or during a synchronized block are made visible in a predictable manner to other threads which synchronize on the same monitor. After we exit a synchronized block, we release the monitor, which has the effect of flushing the cache to main memory, so that writes made by this thread can be visible to other threads. Before we can enter a synchronized block, we acquire the monitor, which has the effect of invalidating the local processor cache so that variables will be reloaded from main memory. We will then be able to see all of the writes made visible by the previous release.

I also remember reading that on modern Sun VMs, uncontended synchronization is cheap. I am a little confused by this claim. Consider code like:

class Foo {
    private final Object aLock = new Object();
    int x = 1;
    int y = 1;
    // ...

    void increment() {
        synchronized (aLock) {
            x = x + 1;
        }
    }
}

Updates to x need the synchronization, but does acquiring the lock also evict the value of y from the cache? I can't imagine that being the case, because if it were true, techniques like lock striping might not help. Alternatively, can the JVM reliably analyze the code to ensure that y is not modified in another synchronized block using the same lock, and hence avoid dumping the cached value of y when entering the synchronized block?
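For reference, here is a minimal lock-striping sketch (the StripedCounters class and its fields are hypothetical, purely for illustration): striping only pays off if acquiring one stripe's lock does not invalidate everything else the thread has cached.

import java.util.concurrent.locks.ReentrantLock;

// Hypothetical example: each stripe guards its own slice of the counters,
// so threads touching different stripes never contend on the same lock.
class StripedCounters {
    private final int[] counts = new int[16];
    private final ReentrantLock[] stripes = new ReentrantLock[16];

    StripedCounters() {
        for (int i = 0; i < stripes.length; i++) {
            stripes[i] = new ReentrantLock();
        }
    }

    void increment(int key) {
        int stripe = key & (stripes.length - 1); // map the key to one stripe
        stripes[stripe].lock();
        try {
            counts[stripe]++;                    // only this stripe's data is guarded here
        } finally {
            stripes[stripe].unlock();
        }
    }
}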


1 Reply


The short answer is that the JSR-133 FAQ goes too far in its explanation. This isn't a serious issue, because the FAQ is a non-normative document which isn't part of the language or JVM standards; rather, it only explains one possible strategy that is sufficient for implementing the memory model, but isn't in general necessary. On top of that, the comment about "cache flushing" is basically out of place, since essentially no architecture implements the Java memory model by doing any kind of "cache flushing" (and many architectures don't even have such instructions).

The Java memory model is formally defined in terms of things like visibility, atomicity, and happens-before relationships, which specify exactly which writes each thread must see, which actions must occur before which other actions, and other relationships, using a precisely (mathematically) defined model. Behavior which isn't formally defined could be random, or well-defined in practice on some hardware and JVM implementation - but of course you should never rely on this, as it might change in the future, and you could never really be sure that it was well-defined in the first place unless you wrote the JVM and were well aware of the hardware semantics.
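As a minimal sketch of what those formal rules actually guarantee (the Publisher class below is hypothetical, not from the answer): a write made by one thread before it releases a monitor is visible to any thread whose later acquire of the same monitor comes after that release.

// Hypothetical illustration of a happens-before edge created by a monitor
// release in one thread and a later acquire of the same monitor in another.
class Publisher {
    private final Object monitor = new Object();
    private int data;        // plain field: no volatile, no atomics
    private boolean ready;

    void publish(int value) {
        data = value;            // ordered before the acquire below (program order)
        synchronized (monitor) {
            ready = true;        // ordered before the monitor release
        }
    }

    Integer tryRead() {
        synchronized (monitor) { // acquire of the same monitor
            // If this acquire comes after publish()'s release, the happens-before
            // chain guarantees that both 'ready' and 'data' are visible here.
            return ready ? data : null;
        }
    }
}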

So the text that you quoted is not formally describing what Java guarantees, but rather describing how some hypothetical architecture with very weak memory ordering and visibility guarantees could satisfy the Java memory model's requirements using cache flushing. Any concrete discussion of cache flushing, main memory and so on is clearly not generally applicable to Java, as these concepts don't exist in the abstract language and memory model spec.

In practice, the guarantees offered by the memory model are much weaker than a full flush - having every atomic, concurrency-related or lock operation flush the entire cache would be prohibitively expensive, and it is almost never done. Rather, special atomic CPU operations are used, sometimes in combination with memory barrier instructions, to ensure the required visibility and ordering. So the apparent inconsistency between cheap uncontended synchronization and "fully flushing the cache" is resolved by noting that the first is true and the second is not: no full flush is required by the Java memory model, and no flush occurs in practice.
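To make the "much weaker than a full flush" point concrete, here is a small sketch (the TwoLocks class and its names are made up for illustration): the visibility guarantee is pairwise, tied to a release and a later acquire of the same monitor, rather than a global flush that publishes everything to everyone.

// Hypothetical sketch: visibility is only guaranteed between a release and a
// later acquire of the *same* monitor, not by any whole-cache flush.
class TwoLocks {
    private final Object lockA = new Object();
    private final Object lockB = new Object();
    private int shared;

    void writer() {
        synchronized (lockA) {
            shared = 42;       // ordered before the release of lockA
        }
    }

    int readerSameLock() {
        synchronized (lockA) { // pairs with writer()'s release of lockA
            return shared;     // sees 42 if this acquire comes after that release
        }
    }

    int readerOtherLock() {
        synchronized (lockB) { // different monitor: no happens-before edge to writer()
            return shared;     // may legally observe a stale value
        }
    }
}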

If the formal memory model is a bit too heavy to digest (you wouldn't be alone), you can also dive deeper into this topic by taking a look at Doug Lea's JSR-133 Cookbook, which is in fact linked from the JSR-133 FAQ but comes at the issue from a concrete hardware perspective, since it is intended for compiler writers. There, he spells out exactly which barriers are needed for particular operations, including synchronization - and those barriers can fairly easily be mapped to real hardware. Much of that mapping is discussed right in the cookbook.
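As a rough, simplified sketch of the kind of mapping the cookbook describes (the class below is hypothetical, and the exact barrier requirements depend on the target architecture - on x86, for instance, most of these barriers are no-ops folded into the locked instruction used for the monitor itself):

// Conceptual sketch only: comments mark roughly where cookbook-style barriers
// sit around a synchronized block; note that no whole-cache flush appears.
class BarrierSketch {
    private final Object aLock = new Object();
    private int x;

    void increment() {
        synchronized (aLock) { // MonitorEnter: acquire - roughly LoadLoad + LoadStore
                               // barriers between the enter and the accesses below
            x = x + 1;         // plain loads/stores inside the critical region
                               // roughly LoadStore + StoreStore barriers before the exit
        }                      // MonitorExit: release
    }
}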

