Over the years, there have been many attempts to contrast parallelism and concurrency as they relate to computation. Two fairly recent examples (from a quick Google search) are:
- Guy Steele Jr., "How to Think About Parallel Programming: Not!" (video), from 0:29:45 to 0:30:47
- David Liebke, "From Concurrency to Parallelism: an illustrated guide to multi-core parallelism in Clojure" (PDF), from the first Clojure Conj, slide 2
Is there justification for this stance? The Oxford American Dictionary says parallel (computing) means "involving the simultaneous performance of operations", and concurrent means "existing, happening, or done at the same time". With such similar definitions, it's no wonder these terms get confused. But while I can understand attempting to clarify their differences, they are not mutually exclusive: if operations are parallel (simultaneous), they are necessarily also concurrent (happening at the same time).


But now to get serious. Since concurrency is a precondition for parallelism, parallel programming is necessarily concurrent programming. And in common usage, although atomic actions (e.g. transactions) can be considered abstractly to take no time, in reality they do take time and can thereby be parallel. Moreover, regardless of how one implements parallelism, virtually all parallel activities share some resources in a mutually exclusive fashion, so using resource sharing as a basis for differentiating concurrency from parallelism is not as definitive as it might at first seem. (Parallel activities that share no resources are rare enough that they merit their own terminology: "embarrassingly parallel".) In fact, it's worse than that: occurrences at different places (e.g. on different processors) are, relativistically speaking, operating relative to different clocks, so simultaneity is not even clearly and objectively defined.
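To make the resource-sharing point concrete, here is a minimal Python sketch (my own illustration, not drawn from either talk) contrasting an embarrassingly parallel computation, whose subtasks share nothing, with tasks that must serialize around a shared resource. The names `square`, `increment`, and `counter` are hypothetical examples:

```python
# Contrast: independent subtasks vs. mutually exclusive access to shared state.
from concurrent.futures import ThreadPoolExecutor
import threading

def square(n):
    # No shared state: each call is independent ("embarrassingly parallel").
    return n * n

counter = 0
lock = threading.Lock()

def increment():
    # Shared state: the lock makes access mutually exclusive, so these calls
    # are concurrent, but the critical section itself is never parallel.
    global counter
    with lock:
        counter += 1

with ThreadPoolExecutor(max_workers=4) as pool:
    squares = list(pool.map(square, range(10)))
    futures = [pool.submit(increment) for _ in range(100)]
# Exiting the with-block shuts the executor down and waits for all tasks.

print(squares[:4], counter)  # [0, 1, 4, 9] 100
```

Even here, whether the `square` calls actually run simultaneously depends on the platform (e.g. CPython's interpreter lock), which is rather the point: the program expresses concurrency; the platform determines parallelism.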
But even assuming that one comes up with objective definitions of parallelism and simultaneity, etc., to delineate some concurrent programs as parallel ones, what features should a parallel programming methodology have over a concurrent programming methodology? And moreover, what would the advantages and disadvantages of building parallel programs be, as opposed to concurrent ones? I'll address that in my next blog entry.
2 comments:
It seems to me that an indicator showing degree of parallelism while executing a concurrent program could be quite useful to the designer. Am I wrong, Dave?
As long as the programmer is responsible for embedding/scheduling the application on the platform, and especially if there is one intended target platform (or only a few), then yes, it's undoubtedly useful. It's the same mindset as programming in assembly language to make best use of the arithmetic units or to prevent pipeline bubbles using predictive branching on a particular chip. There will always be a market for such "bare metal" computing. But when the number of potential platforms goes up and/or the target platform goes out of the programmer's control (as is unavoidable in "the cloud" or if the application is to last any length of time), it is an exercise in futility. Maximizing total concurrency becomes the goal, to be potentially automatically squelched for platforms allowing less (see "variable granularity").
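The "maximize total concurrency, let the platform squelch it" idea can be sketched in a few lines of Python (my illustration; `work` and the task list are hypothetical): the program submits every independent task it has, and the degree of actual parallelism comes from the machine rather than from the program's logic.

```python
# Express maximal concurrency; let the platform set the degree of parallelism.
import os
from concurrent.futures import ThreadPoolExecutor

def work(n):
    # Placeholder workload; any independent computation would do.
    return sum(i * i for i in range(n))

tasks = [1000] * 32               # 32 independent units of work
workers = os.cpu_count() or 1     # the machine, not the code, picks the width
with ThreadPoolExecutor(max_workers=workers) as pool:
    results = list(pool.map(work, tasks))

print(len(results))  # 32
```

The same source runs unchanged on a 2-core laptop or a 64-core server; only the achieved parallelism differs, which is the portability the comment above is arguing for.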