Some time ago, I commented on a video by Carl Hewitt regarding the Actor model (primarily theoretical aspects), and I may still comment on that further... but this post is regarding another video on actors, from a recent Halmstad Colloquium, by Hewitt's former (many years ago) student, Gul Agha, who wrote a book on the subject in 1986, and who has since been a professor at UIUC. (I've had the pleasure of benefiting from Gul's gracious hospitality on a few occasions over the years.) This particular talk regards more practical matters relating to the Actor model, with the benefit of significant hindsight. Dr. Agha seems honest here about how some implementations have fudged the rules, and how those who haven't fudged them have paid some performance penalties.
First, in three slides, Agha summarizes some of Actors' important characteristics, and explains their merits. In the first (at 5:18), Actors are described as concurrent (with one another), communicating with asynchronous message passing and no shared state. In the next slide (at 8:40), he outlines basic principles, specifically: encapsulation (of state by an actor), fairness (a.k.a. liveness, a guarantee that each sent message will eventually be received), location transparency (the independence of an actor's behavior from its physical location), and mobility (apparently related to the location actually being flexible during execution). In the third slide (at 18:10), Agha brings this all down to how an Actor is typically implemented: as "encapsulated state + behavior ('methods') + thread of control + mailbox".
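To make that summary concrete (this is just my own minimal sketch in Python, not code from the talk or from any particular actor library), here is an actor as encapsulated state plus behavior plus a thread of control plus a mailbox; the Counter class and its message names are purely illustrative:

```python
import threading
import queue
import time

class Actor:
    """Minimal illustration: encapsulated state, a mailbox, and a private
    thread of control that handles one message at a time."""
    def __init__(self):
        self._mailbox = queue.Queue()   # unbounded, as the model requires
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def send(self, message):
        """Asynchronous send: the caller never waits for the message to be handled."""
        self._mailbox.put(message)

    def _run(self):
        while True:
            message = self._mailbox.get()   # fairness: every queued message
            self.receive(message)           # is eventually dequeued and handled

    def receive(self, message):
        raise NotImplementedError

class Counter(Actor):
    """Illustrative actor: its count is encapsulated state, reachable only via messages."""
    def __init__(self):
        self._count = 0
        super().__init__()

    def receive(self, message):
        if message == "increment":
            self._count += 1
        elif message == "print":
            print(self._count)

c = Counter()
c.send("increment")
c.send("increment")
c.send("print")      # eventually prints 2
time.sleep(0.5)      # let the daemon thread drain the mailbox before exiting
```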
Some of the merits are argued to be decreased complexity (with message interleavings being the only potential source of nondeterminism), tolerance of communication latency (the possibility of "hiding" message latency with other computation during transport), and portability (with actor addresses being independent of location or platform). State encapsulation within an actor is likened to data hiding, a mainstay of software engineering (and OO programming). The model is also said to facilitate mobility (which I assume means during execution).
Some of those benefits are actually not so clear to me. For example, regarding the fairness constraint facilitating correctness: The Actor model permits an unlimited number of messages to be directed to an actor, but then also requires that every one of them eventually be handled (by that actor). This implies that, regardless of how quickly an actor can handle each message, it may not be fast enough to satisfy the model's requirements. That's a problem: Even beyond the requirement cited by Agha of unbounded queuing, the queue could actually grow without bound (unless the parallelism can grow without bound). I won't venture into the case when an actor doesn't terminate, and how/if this is formally different from an actor just refusing to handle more messages and violating fairness.
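To put rough numbers on that (a back-of-the-envelope illustration of my own, with made-up rates, not anything from the talk): if messages arrive faster than a single actor can handle them, the mailbox backlog grows linearly and forever:

```python
# Hypothetical rates, just to illustrate the arithmetic: if an actor is sent
# 1000 messages/second but can only handle 600/second, fairness still demands
# it accept them all, so its mailbox backlog grows without bound.
arrival_rate = 1000      # messages per second sent to the actor
service_rate = 600       # messages per second the actor can handle
for seconds in (1, 60, 3600):
    backlog = (arrival_rate - service_rate) * seconds
    print(f"after {seconds:>5}s the mailbox holds ~{backlog} messages")
# after     1s the mailbox holds ~400 messages
# after    60s the mailbox holds ~24000 messages
# after  3600s the mailbox holds ~1440000 messages
```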
Also, while location-independent addressing (of actors) may aid mobility, mobility can actually be thwarted by the fact that an actor can maintain an unlimited and unobvious amount of persistent internal state, even while the actor is not active (i.e. while awaiting another message). It means there are essentially two kinds of state to be moved around, in completely different ways and under different circumstances: The state within a message, and that within an actor. And moving intra-actor state, especially to another kind of processor architecture, can be difficult at best.
Even some of the cited merits have corresponding drawbacks. For example, while data hiding is an important software engineering principle (in sequential programming), part of the state hidden within an actor is the set of actors with which that actor may communicate. So the entire communication and control topology of the program is hidden within actors -- and that is actually completely antithetical to engineering principles of structured systems. And since there is no limitation on the size or persistence of the state within an actor, the only way to characterize that hidden state (including the state representing the actors with which it may communicate) is in terms of the sequence of all of the messages that the actor has ever received throughout its history. This makes the propagation and transformation of state during an execution very difficult to track without knowing a lot about (and keeping track of) what's happening inside of each actor.
Dr. Agha also mentions the significant overhead associated with the required "by value" messages (with payload), an overhead that other models can avoid for communication between co-located procedures, in cases where they could just share access to a common area ("by reference"). Agha suggests a potential solution where static analysis could theoretically determine when an actual message transfer (copy) could be avoided, implementing sharing under the covers. Such static analysis is much easier said than done, and is something that has been proposed for decades with other models (e.g. Linda) without ever being implemented very successfully (absent hardware support).
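To illustrate what that overhead and that optimization amount to (a contrived Python sketch of my own, not Agha's proposal): a strict by-value send must copy the payload even when sender and receiver share an address space, while the by-reference handoff is only safe if the analysis can prove the sender never touches the data again.

```python
import copy

large_payload = {"samples": list(range(1_000_000))}

def send_by_value(mailbox, message):
    # Strict actor semantics: the receiver gets its own copy, so neither
    # side can observe the other's later mutations. The copy is the cost.
    mailbox.append(copy.deepcopy(message))

def send_by_reference(mailbox, message):
    # The optimization the static analysis would have to justify: hand over
    # the existing object, safe only if the sender provably won't mutate it again.
    mailbox.append(message)

mailbox = []
send_by_value(mailbox, large_payload)      # expensive: ~1M elements copied
send_by_reference(mailbox, large_payload)  # cheap, but correctness now rests on analysis
```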
These issues are hardly new. When I was devising the F-Nets computational model around 1990 (see chapter 10 of the Scalable Planning book for a rewrite of my 1991 dissertation), it already addressed many of these drawbacks, seen in most message-passing programs, with the rest (and others) addressed shortly thereafter by its more complete language, ScalPL (Scalable Planning Language). (Both are described in the same book and in many posts in this blog.) And ScalPL retains the advantages of Actors cited above, but in addition:
- ScalPL more cleanly handles data hiding by offering two kinds of "plans" (i.e. program fragments), called "tactics" and "strategies". Tactics, which are essentially functions or methods implemented with traditional sequential programming languages, have short-lived activations/evaluations ("actons"), and don't hide or maintain any state between activations; strategies are graphically built in ScalPL itself out of other plans (tactics and strategies), and are generally long-lived and do maintain/hide state internally, in "resources". Since strategies have clean interfaces, with just a few simple additional constructs, they also serve as classes and their instances (objects).
- The stateless and functional nature of tactics in ScalPL, together with the clear graphical representation (within strategies) of how they interact with resources and therefore with each other, makes the flow of both state and control clear, to both the human and automated analyst, allowing optimizations for efficient data sharing and/or migration without delving into the innards of the tactic implementations. These features make ScalPL potentially the closest representation to structured programming available in the concurrent/parallel world. It is essentially an executable software engineering diagram.
- Those same ScalPL resources also serve to preserve state (as necessary) for tactics between activations, with negligible overhead. Since ScalPL strategies are composed only of these resources and other subplans (tactics and strategies) which access them, everything is ultimately built from just resources and tactics, leading to true mobility: Everything that might need to be moved is either a tactic (which is stateless, and therefore basically constant data, between activations), a resource (which is designed for mobility, acting as a message when necessary), or a strategy (which is ultimately composed of the previous two).
- Fairness in ScalPL takes the form "if a tactic is continuously activateable, then it will eventually activate". Unlike Actors, since resources -- i.e. the connections between tactics -- are not (in essence) queues, data results of one component generally can't pile up unboundedly to be forced upon non-ready downstream tactics. Instead, upstream components must wait (i.e. won't be activateable) for downstream results to be consumed, unless circumstances permit queuing (explicitly or implicitly) of prior results. (A rough sketch of this back-pressure appears just after this list.)
- In the video, Dr. Agha mentions the merits of sharing a thread among multiple actors, as an optimization, rather than devoting one thread to each actor. In ScalPL, since tactics (actons) are atomic and maintain no state between activations, they are short-lived "threadlets", and an implementor is not even tempted to devote a long-lived thread to each one. (A sketch of such thread sharing follows the back-pressure sketch below.)
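As a rough illustration of the back-pressure point in the fairness bullet above (a plain-Python analogy of my own, not ScalPL itself), a bounded resource makes an overly eager producer wait instead of letting results pile up unboundedly; the producer/consumer names are purely illustrative:

```python
import threading
import queue
import time

# A capacity-1 "resource": the producer cannot pile up results faster
# than the consumer takes them.
resource = queue.Queue(maxsize=1)

def producer():
    for i in range(5):
        resource.put(i)            # blocks until the consumer has emptied the slot
        print(f"produced {i}")

def consumer():
    for _ in range(5):
        time.sleep(0.2)            # deliberately slower than the producer
        i = resource.get()
        print(f"consumed {i}")

threads = [threading.Thread(target=producer), threading.Thread(target=consumer)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# The interleaved output shows the producer proceeding only as fast as the
# consumer frees the resource; nothing queues up unboundedly.
```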
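And, in the same hedged spirit, the thread-sharing point: short-lived, stateless activations can all be serviced by a small shared pool of worker threads, rather than a long-lived thread apiece (the activation body here is just a placeholder):

```python
from concurrent.futures import ThreadPoolExecutor

def activation(task_id):
    # A short-lived, stateless "threadlet": it reads its inputs, computes,
    # writes its outputs, and leaves nothing behind for a next activation.
    return task_id * task_id

# Four worker threads service hundreds of activations; no entity owns a thread.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(activation, range(200)))
print(results[:5])   # [0, 1, 4, 9, 16]
```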
I will be happy to receive information from, or engage in discussion with, those who find my observations here unfounded. I find the Actor model an important step in the progress of our knowledge of parallelism, in part because it can be used to describe so many traditional message-passing programs which possess its basic attributes. (The same cannot be said for ScalPL: It does not pretend to explain existing programs developed without knowledge of the model.) That said, I do hope that today's increased hardware technology and available parallelism are matched by advances in programming technologies.