Thursday, December 18, 2014

What do concurrent traces have to say about determinism?

Let's talk about execution traces of concurrent programs, and which ones we consider "the same."

In concurrent programming models, the language's constructs let you specify computation as indexed by particular independent execution units ("thread", "channel", "actor" -- at this level of detail, we can use these terms interchangeably), and specify how those units communicate or interact. This means you have one artifact, the static program, that gives rise to a set of traces, or possible executions -- records of which parts of the program were evaluated in what order.

We might say that if this set of program traces has only one element, then the program is sequential. But the accuracy of this statement depends on how exactly we represent a program trace! If we are forced to give a total ordering on all computation steps -- unifying a trace with an "interleaving of instructions" -- then yes, the only class of programs for which every trace is "the same" is those that are deterministic and sequential.

But note that this representation precludes deterministic parallel programs, a notable use case for concurrent programming (see Lindsey Kuper's excellent post on the relationship between parallelism and concurrency). You could have two cores simultaneously computing independent things, each of which are totally deterministic -- but you cannot give a total ordering on the steps of execution. You could conceivably do so with respect to a global clock, but such an approach seems uninformative for reasoning about the program you wrote irrespective of the hardware it's running on.

Instead, we can represent an execution trace as a partial ordering of evaluation steps. Or, in other words, we can treat two totally-ordered traces as the same whenever their ordering choices only differ for independent computational components.

This equivalence relation is what literature on CLF refers to as "concurrent equality":

let x1 = e1 in let x2 = e2 in t
is concurrently equivalent to
let x2 = e2 in let x1 = e1 in t

iff e2 doesn't use x1 and e1 doesn't use x2.

Explicating a full trace in terms of its concurrent structure reveals the aforementioned partial order, and means we can visualize the trace as a DAG.
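To make the commuting-let criterion concrete, here's a toy Python model (my own representation, not CLF's actual term syntax): a trace is a list of bindings tagged with the free variables of their right-hand sides, and two traces are concurrently equal when one can be reached from the other by swapping adjacent independent bindings.

```python
# A toy model of concurrent equality (invented representation, not CLF):
# a trace is a list of let-bindings (var, free_vars_of_expr), and two
# adjacent bindings commute exactly when neither expression mentions the
# variable the other binds.

def independent(b1, b2):
    (x1, fv1), (x2, fv2) = b1, b2
    return x1 not in fv2 and x2 not in fv1

def concurrently_equal(t1, t2):
    """Search for t2 among traces reachable from t1 by swapping
    adjacent independent bindings."""
    seen = {tuple(t1)}
    frontier = [list(t1)]
    while frontier:
        t = frontier.pop()
        if t == t2:
            return True
        for i in range(len(t) - 1):
            if independent(t[i], t[i + 1]):
                s = t[:i] + [t[i + 1], t[i]] + t[i + 2:]
                if tuple(s) not in seen:
                    seen.add(tuple(s))
                    frontier.append(s)
    return False

# let x1 = e1 in let x2 = e2 in t  vs. the swapped order, where neither
# expression mentions the other's variable:
t_a = [("x1", frozenset()), ("x2", frozenset())]
t_b = [("x2", frozenset()), ("x1", frozenset())]
print(concurrently_equal(t_a, t_b))   # independent bindings: True

# If e2 uses x1, the swap is not allowed:
t_c = [("x1", frozenset()), ("x2", frozenset({"x1"}))]
t_d = [("x2", frozenset({"x1"})), ("x1", frozenset())]
print(concurrently_equal(t_c, t_d))   # dependent bindings: False
```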

For example, here's a CLF program to sort a list represented as a mapping from locations to elements, by swapping any two adjacent elements that aren't in order, bubble-sort-style:

When we run it on the described 3-element input, a possible trace of the output has the following structure:

There's another possible trace, one that swaps the second and third array positions first, rather than the first and second. Or, viewing the swaps pictorially, there are two orderings for swaps:

We can identify these traces as distinct because of the different dependencies they introduce; each ordering between two swaps is necessary and non-arbitrary. Note: the program is (observably) deterministic; both executions take the same input to the same output.

On the other hand, here are the possible traces for sorting the four-element array [4,3,2,1]:

Again, there are two of them. But that number depends on the syntax I've written them down in. If I required only one swap per line -- one "at a time" -- then I'd have to make an arbitrary choice about which of two independent swaps to do first (e.g. (4,3) or (2,1)?).

In this sense, the 4-element sort traces have concurrent structure where the 3-element traces do not. Remember that both programs are deterministic, and both have multiple possible interleavings of computation units ("scheduler nondeterminism"?). But the difference between these programs is, I think, not well articulated by existing concurrency terminology. Or perhaps it's appropriate to use the term parallelism in this setting? In one set of traces, there is nondeterminism but no parallelism; in the latter set there is both; in Tamara there is parallelism but no (essential) nondeterminism. It still feels like the wrong word, though, because there's nothing inherent about "running computation on different cores simultaneously"; it's just a lack of dependency between actions.
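One way to make this difference quantitative: represent the swap dependencies as a DAG and count its linear extensions, i.e. the number of total orderings that a one-swap-per-line syntax would force you to choose among. A sketch, using a toy dependency relation of my own invention rather than the exact swap DAGs pictured above:

```python
from functools import lru_cache

def linear_extensions(nodes, deps):
    """Count the total orderings consistent with a dependency partial
    order; deps[n] is the set of nodes that must come before n."""
    nodes = frozenset(nodes)

    @lru_cache(maxsize=None)
    def count(remaining):
        if not remaining:
            return 1
        done = nodes - remaining
        total = 0
        for n in remaining:
            if deps.get(n, set()) <= done:   # all prerequisites already placed
                total += count(remaining - {n})
        return total

    return count(nodes)

# Invented toy structure: in round one of bubble-sorting [4,3,2,1], the
# swap of (4,3) and the swap of (2,1) are independent, and both must
# precede some later swap that depends on their results.
deps = {"s_later": {"s_43", "s_21"}}
print(linear_extensions({"s_43", "s_21", "s_later"}, deps))  # 2 interleavings
```

A fully sequential (chain-shaped) DAG has exactly one linear extension, which matches the 3-element sort's lack of concurrent structure.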

Anyway - what I find so compelling about CLF's notion of concurrency is that it isn't something we talk about as a property of a behavior, it's something we can talk about as a property of a "static artifact" -- the script to a play, or an investigative report of people's independent/interactive behavior. Something to be analyzed after-the-fact of that behavior.

It's my hope that someday we can stop thinking of concurrency as something only related to low-level system processes and instead use it to form compelling mental and programming models of multi-agent behavior.


Edit: Section 7.2 in this ebook about multiagent systems contains a strikingly similar discussion to the one I've laid out here.

Friday, December 5, 2014

Design a PL for Knitting at Disney Research Pittsburgh!

My friend Jim just told me about an exciting-sounding internship opportunity for Ph.D. students in programming languages! Check it out:

Disney Research Pittsburgh seeks a PhD student with experience in Programming Languages or Compilers for a summer internship. The student will be expected to apply programming language methods to the analysis and control of knitting machines, with potential topics including:
  •   developing a semantics for machine knitting
  •   formally describing a notion of machine knittability
  •   inverse knitting
  •   knit scheduling
  •   knitting pattern optimization
  •   knitting failure analysis and robustness estimation
  •   translation from hand knitting to machine knitting
As part of the internship, the student will write up their work for submission to an appropriate top-tier venue; applicants with prior publications in such venues will be strongly favored.
Interested students should contact Jim McCann to arrange a further conversation. Please include or link to a description of your current research area and previous publications.

Sunday, November 23, 2014

Reactions to "Purely Functional Retrogames"

I just read this blog series on "purely functional retrogames" from six years ago, on the basis of a recommendation from my Twitter friend/occasional conference-co-attendee Scott Vokes.

Things I thought were interesting points/good takeaways:

  • It's better to nest state-keeping records deeply, rather than making wide/flat ones; never have more than a few fields to refer to, and you can pass around chunks at a time to different subroutines.
  • Being forced to think out data-dependencies can be a good thing:

  • [In a purely functional style], functions can't access any data that isn't passed in. That means you need to think about what data is needed for a particular function, and "thread" that data through your program so a function can access it. It sounds horrible when written down, but it's easy in practice. 
    In fact, just working out the data dependencies in a simple game is an eye-opening exercise. It usually turns out that there are far fewer dependencies than you might imagine.
    Obviously, I like this point, since it's largely what my research is about! Programming a game in linear logic honestly looks more like imperative programming than functional (since you manipulate state directly and locally), but with proof-theoretic tools, we can easily extract and visualize those key data dependencies.
  • The outputs of the "process world state" step are effectively a changelog, or instructions for which changes to make, not a changed world itself! He didn't elaborate much on this point, so it's less clear to me why this approach simplifies threading of the world data, but maybe I'll give it a try next time I write a game functionally.
  • From the followup: bugs/difficulties in functional code have a lot in common with bugs in imperative code. Yes, I can definitely corroborate that statement with personal experience, though I'd argue they're a lot easier to debug in a functional setting because you can limit your scope of analysis to the literal scope of the error (rather than worrying about invisible interactions with global state).
Some nitpicks:

  • From Part 1: "I know that ML has references and that processes in Erlang can be used to mimic objects, but if you go down that road you might as well be using C." Whoa now, what? And lose inductive datatypes and pattern matching? No thanks. Honestly, I'd be pretty interested in a detailed comparison between writing a game in purely functional ML vs "ref cells allowed" ML.
  • From the followup: "not being able to reuse names" - OK, so Erlang (as of 2008) doesn't support shadowing I guess? That's not inherent to FP; seems like a minor quibble. Same goes for the dictionary type lament, I think.
  • In Part 4 he said the followup would address "what about FRP?" and never did. I'm really curious about this, as someone who finds "direct-style" functional programming of games to be a lot more comprehensible than FRP.
Part of what's interesting to me here is how much both functional programming and game development have changed in the last six years. He said something about there being "more Atari hackers than Haskell devs" at the time of writing; I'd doubt that's true anymore. And, while there have been advances on two "ends of a spectrum" in tools for hobbyist game development -- I'm mainly talking about Twine and Unity; interactive fiction and 3D world sims -- there really hasn't been that much attention paid (AFAIK*) to enabling the game design tropes so common in "retrogames" -- top-down worlds, discrete movement, non-"realistic" physics. Maybe we see those things as outdated now, and maybe that's fair, but I'd argue it's strange that we're not yet seeing "first-person shooter" or "puzzle platformer" as equally outdated and overplayed.

I wish more people paid attention as this series does to the fact that "simple arcade games" are still pretty dang hard to program in a reasonable, scalable, intuitive way, and that nailing down good tools for that domain can still open a pretty wide space of novel, experimental design.


* with the notable exception of PuzzleScript.

Wednesday, November 19, 2014

Focalization in Multi-Agent Interactive Simulations

[This is a post about game/interactive narrative design theory. If you're here for the logic and type theory, there won't be much of that in this post.]

Many of the games that interest me, including my own experiments, play with the idea of focalization in a narratology sense,* which is essentially a generalization/reformulation of the idea of "perspective" in literature (which should call to mind the terms "first/second/third person," for example). In particular, in settings where multiple agents (loosely construed as "entities in a game with somehow separable trajectories in state space") can act and interact, independently or interdependently, there is a huge space of design decisions to be made.

In this post I'm going to describe a design space for multi-agent games as context for reflecting on two projects I worked on recently, the IFComp entry Origins and the PROCJAM entry Quiescent Theater.

Focalization in Games

Quick run-down of narrative focalization (summarized from the link above, which is most of what I know about it): it refers to the variety of choices for how much the narrator of a story (and thus the reader) knows compared to the character(s). In this way, it's kind of an epistemological construct, more than "first/second/third" person's concentration on syntactic presentation. There are three points on the spectrum that Genette identifies:

  • Zero: The "omniscient" focalization where the narrator knows more than the characters.
  • Internal: The narrator knows exactly what one of the characters knows.
  • External: The narrator knows less than any of the characters; they can only describe behavioral facts.
Where literature makes more interesting use of this than cinema is that it seems to have quite a lot to do with interiority -- otherwise "zero" and "external" focalization would be identified. They're the same "camera angle," so to speak. (That is, this is a difference between cinema and literature unless you're Chuck Palahniuk, who thinks that writers should always write with external focalization.)

Where games make this more interesting than literature is that there are sort of two axes along which to consider focalization: acting and sensing. Along the sensing axis, you get basically the same range as in film or literature (depending on your medium for content delivery) -- this gives us the analog of "first person" for FPSes and "third person" for top-down strategy games. For acting (I'm deliberately avoiding the terms "controlling" and "choosing" because I see those as narratives prescribed *on top of* player inputs affecting agent actions), the default design choice made in most games is internal focalization: the player wants to know "which one is me" when there are several avatars on screen; they are asked to in some sense embody a particular in-game agent. "You play as [a certain character]" can describe these games.

But many games do use external focalization (and to a lesser extent zero). For example, most battle-simulation strategy games, such as Chess, allow the player's input to affect multiple agents(/pieces/units). Digital examples include Starcraft, which is more interesting than Chess in the sense that you can assign subsets of your units to work on multiple parts of your mission simultaneously. And that's where the computational angle appeals to me -- we have constructs and abstractions for modeling complex simultaneous action and interaction. It's called concurrency!

(As a note, my investigations into focalization were initially prompted by a talk that Robert Yang gave at Different Games 2013, which you can read a blog post about here.)

One could mix-and-match different choices of focalization on the acting and sensing axes. I could make one of those good ol' 4-quadrant squares for you, but as with most ontologies, this whole thing is a big simplification that may not prove especially useful. Suffice to say, there are interesting ways to combine the choices of "what your player can affect" and "what your player can see!"


Vincent Zeng's and my IFComp submission, Origins, was (from my perspective) an attempt to play with focalization by showing two concurrent streams of action, wherein it appears to the player that they are controlling just one character's trajectory, but in fact the trajectories are synchronized. Some of them result in the characters converging in a final scene; some of them involve more subtle interactions-at-a-distance (as a hint - pay attention to what happens with the dog). The idea is that the default player stance of "I am this one character" should be challenged; they're really more like a drama manager coordinating two actors.

The two choices at the beginning of the game are sort of meant to be a switch to flip between "zero" and "internal" focalization. In "myopic" mode, the character you click on initially will be the only one whose "internality" (set of choices) you see, and you will be restricted to making choices from their perspective. In "omniscient" mode, you see both characters' dilemmas and can make the decision by clicking in either column.

One piece of inspiration for this piece was the game The Swapper, a platformer where you can create clones of the player character that all move in lock-step. The trick then is to choose carefully which locations you place your clones in, so that their surrounding context makes their actions meaningfully different from yours. Additionally, you can swap "consciousness" (control of the swapper gun) with any of your clones. In light of this example, the myopic mode of Origins is the one without the swapping ability. That said, it's not as meaningful of an ability in Origins, because you can't create new clones or do anything else special with "control" of the character, other than see a little more of their interior state.**

Quiescent Theater

On the opposite end of the scripted-to-simulationist spectrum of storytelling, Rob Simmons and I created Quiescent Theater, a program that procedurally generates Twine games based on action-based world models written in the linear logic programming language Celf. The actions of all the agents are pre-scripted, so player inputs cannot affect them. What the player can affect is their traversal through the space of simultaneous scenes, which must be focalized through a character. Any time the character they are "following" interacts with other characters, though, they can swap focalization to one of those other characters.

This mechanic means that any given traversal through a Quiescent Theater production kind of defies classification by Genette's original three types of focalization: it is sort of "interior", in the sense that the player senses/perceives exactly what the character they are following senses/perceives, but sort of "exterior" in the sense that no interiority is ever revealed -- in the scenes with multiple characters, in fact, it is important to keep an "exterior" perspective so that the text is agnostic to which of the characters the player followed to get there.

After finishing this project, I found the experience of traversing most of the stories unsatisfying in ways that I expect could be improved by allowing more exteriority; more insight into what actions are taking place alongside or simultaneous with those of the character I'm following. For instance, in the live drama Tamara that helped inspire this experiment, a gunshot or shouting in one room (scene) could be overheard by an adjacent one, and that might affect which character you choose to follow next.

A Design Space for Multi-Agent Games

An important question to ask about any game's design is "What effects do player actions have?"

In the space of games where there is more than one agent -- which is really most games, since I'm not excluding things that lack a standard notion of "agency" here -- we can refine this question to "What effects do player actions have on each agent?"

Here are some possibilities:
  • The action affects the same, single agent. This is the most common choice in digital games; you play "as" a single character for the entire game. (Even within this space, there are of course interesting questions to ask about the behavior of non-player characters.)
  • The action affects every agent in the "same" or analogous way, e.g. telling a group of Starcraft troops to storm the enemy base, or telling the clones in The Swapper to move. (This was the choice we made in Origins.)
  • A single action affects a single agent or group of agents, but different actions affect different agents/groups, or the player can instantiate an action with an agent/group of agents of their choosing.
Within the first and last bullet points, there's another range of possibility in the answers to the question, "While the player-affected characters are acting, what are the other agents doing?" Some possibilities include:
  • Idling or carrying on with some non-interfering autonomous action, waiting for their turn (see: the Sims iPhone game; Chess; Theatrics)
  • Carrying on with autonomous action that will potentially interfere with the player character, thus affecting the player's choices and creating tension with respect to scheduling decisions (see: Lemmings; Quiescent Theater)
  • Performing some action as an immediate reaction to whatever the player-affected character does. This direction is perhaps a generalization of the second bullet point above; agents have a script to decide what any single action will do to them, but that script need not be identical for every character. For example, with rich character AIs, their reactions may depend on individual goals, traits, and social state (see e.g. Versu).

Concluding Thoughts (on programming languages)

Because, of course, I can't help but swing this discussion back around to tools, I have to say I'm hopeful about logic programming as a powerful vector in multi-agent simulation. Part of me wonders if the reason we've seen single-character-focused, first-person games dominate the design space is because object-oriented languages encourage it: for every entity in the game, you have to give it a set of methods describing its behavior. If you want an entity to interact with another entity, well, you have to decide which entity owns that behavior (which one is the actor, so to speak, and which is the acted-upon). That makes it a sensible design decision to introduce asymmetry in the architecture of non-player vs. player characters: the player will always be the actor and the NPCs the acted-upon.

From a logic programming perspective, everyone's state lives together in a big soup, including pieces of state that relate multiple entities, like is_mad_at(A,B). This means that the mechanics of the game can be written in a way that's completely agnostic to which character decisions are determined by a human interactor.
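As a toy illustration of that point (all predicates and names here are invented for this sketch, not drawn from any real system): facts live in one set, and a rule like insult applies symmetrically, with no regard for which of its participants is controlled by a human.

```python
# A toy illustration of state-as-one-soup (all predicates invented for
# this sketch): facts relate entities symmetrically, and rules fire
# without caring which agent is the "player".

facts = {("at", "romeo", "plaza"), ("at", "tybalt", "plaza")}

def insult(fs, a, b):
    """If a and b are co-located, b becomes mad at a. No entity 'owns'
    this rule; a and b are interchangeable roles."""
    locs = {who: loc for (p, who, loc) in fs if p == "at"}
    if locs.get(a) is not None and locs.get(a) == locs.get(b):
        return fs | {("is_mad_at", b, a)}
    return fs

print(("is_mad_at", "tybalt", "romeo") in insult(facts, "romeo", "tybalt"))  # True
```

The same rule works with the roles flipped, which is exactly the agnosticism the paragraph above describes.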

In the comments I'd be curious to hear from you about your favorite multi-agent game mechanics, or thoughts about tool & programming language design for multi-agent simulations.


*Not in the logic sense, the one I was initially more familiar with. Incidentally, my self-stipulated win condition for my thesis includes working out a way to include both senses of the word in a single chapter.

** V and I were initially just undecided about which mode to publish as telling a better story -- which meant we said "hey, let's publish both and let the player decide!", which I think was in retrospect a poor choice. They are, in practice, only subtly different variations on the same branching story.

Tuesday, November 4, 2014

"Tamara" as a concurrent trace

My last post described the idea of concurrent storytelling, encapsulating narrative structures where simultaneous action can lead to multiple experiences of a story, either through nondeterminism or through differing sequential traversals of the story graph.

Since then, I have obtained a copy of the Tamara script (which is written like a really bizarre choose-your-own-adventure book) and begun transcribing its scene structure as a deterministic Celf program such that the output trace models the scene dependency graph.

To do this, I model each of the 10 characters as a predicate over a location and a scene identifier, e.g.

tamara : scene -> location -> type.

The scene IDs roughly correspond to the letter-and-numbered scenes in the script, except that they are slightly finer grained to accommodate the different perspectives induced by character entrances and exits, although those entrances/exits are contained within a single script-scene.

In the simple case, a rule describing a scene's structure would take its required characters (indexed by the corresponding scene number and location) as premises and spit out those same characters as conclusions, but now indexed by their new scenes and locations. For example:

b8_emilia_carlotta : emilia b8 leda * carlotta b8 leda 
-o {emilia c12 diningroom * carlotta c12 atrium}.

This rule describes the script scene B8, a dialogue between Emilia and Carlotta, after which Emilia exits to the dining room and Carlotta exits to the atrium (both of which are part of scene C12). I coded up sections A through C this way.

The resulting output is this big sequence of let-bindings, which I drew out on paper to visualize the dependencies:

To figure out whether my idea for compiling this structure into a Twine passage graph made sense, I did this example by hand. The basic translation: each variable Xi in the CLF trace (which represents a character in a certain state) turns into a passage named Xi, whose contents are purely [[links]] to scenes that the character transitions to at that state. Scenes are also passages, named e.g. b8, which contain the text of the scene as well as <<display>>ed variable passages representing the characters who are part of the scene. In cute mnemonic form: characters "link" the viewer to a scene; scenes "display" characters to the viewer.

For example, the scene passage corresponding to the CLF trace line

let {[X46, X47]} = b8_emilia_carlotta [X43, X45]

would have the title "b8_emilia_carlotta", and contain the text

<<display "x46">>
<<display "x47">>

while the *variable* passages X43 and X45 would contain the text

[[Stay with Emilia.|b8_leda]]
[[Follow Carlotta.|b8_leda]]


The resulting Twine passage graph then has isomorphic structure:

...and you can play it, choosing who to follow as you would in the original performance. Though please note that I've elided a huge amount of text, including introductory context, so it may not be too meaningful for someone unfamiliar with the play. Doing this compilation by hand is hugely labor-intensive, so think of this as a teaser that you may choose to ignore until I've written the compiler and produced the story in full.
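For the curious, here is roughly what I imagine the mechanical version of that by-hand translation looking like, as a Python sketch. The trace representation, the second rule (c12_emilia), and the variable x50 are all invented for illustration, and initial variables like X43/X45 would need separate seeding that I've omitted.

```python
# A toy sketch of the by-hand translation described above (invented data
# representation, not the actual compiler): each trace step
#     let {[outs]} = rule [ins]
# becomes a scene passage named after the rule, which <<display>>s its
# output variables; each variable passage holds a [[link]] into the scene
# that consumes it.

def compile_trace(steps):
    """steps: ordered list of (rule_name, input_vars, output_vars)."""
    passages = {}
    consumers = {}                       # variable -> scene that consumes it
    for rule, ins, outs in steps:
        for v in ins:
            consumers[v] = rule
    for rule, ins, outs in steps:
        # Scene passage: display the character-variables it produces.
        passages[rule] = "\n".join(f'<<display "{v}">>' for v in outs)
        # Variable passages: link to the consuming scene, if any.
        for v in outs:
            if v in consumers:
                passages[v] = f"[[Follow.|{consumers[v]}]]"
    return passages

trace = [("b8_emilia_carlotta", ["x43", "x45"], ["x46", "x47"]),
         ("c12_emilia",         ["x46"],        ["x50"])]
p = compile_trace(trace)
print(p["b8_emilia_carlotta"])
print(p["x46"])                          # [[Follow.|c12_emilia]]
```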

Sunday, October 26, 2014

Concurrent Storytelling

This post outlines one way of understanding what my thesis is about: a tool for concurrent storytelling. For this post, I'll speak mainly in terms of narrative structure & system design rather than PL & logic. My goal is that an audience more oriented in interactive fiction than in programming languages can follow along. Please feel free to send me feedback about the accessibility of this writing from that perspective!

Previously, I've described my thesis programming language as a tool for designing game mechanics. But the term "game mechanics" is fairly broad, and in my proposal, I gave examples ranging from 2d puzzle games to parser interactive fiction. While I'm still excited about designing a core language that can express that entire range of mechanics, I've also narrowed my focus somewhat to examples that are especially illuminating to design this way.

In my INT 7 paper presented in June, I wrote about what I identified as the first solid successful experiment: narrative generation through a social simulation mechanic. In other words: describe general rules for character interaction; throw a bunch of characters into the pot; stir, and observe the (highly nondeterministic) sequence of actions that arises. Tweak the action rules until this process is likely to create something recognizably "story-like."

What I just referred to as a "sequence of actions" is actually better understood as a DAG of actions: a directed edge between actions A1 and A2 only exists if A2 depends on some resource(s) created by A1. "Resources" in the narrative setup include things like the presence/availability of characters required for a scene, but also pieces of state like "Romeo has a crush on Juliet" or "Tybalt carries a weapon."
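The edge criterion above can be stated in a few lines of code. A hedged sketch (the action and resource names are invented for illustration, not taken from the actual paper's domain):

```python
# A sketch of the "DAG of actions" idea: draw an edge A1 -> A2 whenever
# A2 consumes a resource that A1 produced. Names invented for illustration.

def causal_dag(actions):
    """actions: ordered list of (name, consumes, produces)."""
    producer = {}                 # resource -> most recent producing action
    edges = set()
    for name, consumes, produces in actions:
        for r in consumes:
            if r in producer:
                edges.add((producer[r], name))
        for r in produces:
            producer[r] = name
    return edges

story = [
    ("meet",  [],                      ["crush(romeo,juliet)"]),
    ("arm",   [],                      ["weapon(tybalt)"]),
    ("duel",  ["weapon(tybalt)"],      ["dead(mercutio)"]),
    ("elope", ["crush(romeo,juliet)"], ["married(romeo,juliet)"]),
]
print(causal_dag(story))
# "duel" depends on "arm"; "elope" on "meet"; the two threads are independent.
```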

Such a DAG is represented in CLF's notion of trace terms, and can be visualized with my coauthor's tool:

(This example interprets the story as a general directed graph (potentially with cycles), not a DAG, because it collapses the specific action instantiations into a single node. But the relevant structure of causal dependencies between actions can still be seen.)

What was interesting about this work to me was the way the concurrent structure of the trace term directly corresponded to the concurrent structure of the story. Even in a non-interactive artifact, there is still an underlying intensional model of independent action among characters. This structure would be interesting to observe even for completely deterministic stories (i.e. those where characters always do the same thing and don't make choices). There is something like a choreography of character action that makes independent actions give rise to the global story.

To make this observation more concrete, consider the experimental theater piece Tamara. The play is set in the mansion of the Italian poet & historical figure Gabriele D'Annunzio, and the cast incorporates other artists and aristocrats as well as the servants of the household. The show is also performed in a mansion-shaped space, containing many and varied rooms. At the start of the show, audience members are instructed to pick a cast member to follow, and the cast disperses to different areas of the mansion to perform independent, concurrent scenes. When a character exits a room, each audience member may either follow them to the next scene they enter, or, if the room has any remaining characters, may stay and watch them perform the rest of the scene.

I have not managed to get my hands on a copy of the script (if anyone has any pointers for how to find or buy it, please let me know!), but if I were to reconstruct it myself, I would draw something looking very much like a CLF trace: each node is a scene; edges in are character entrances; edges out are character departures. (To be more precise, the scene node would need to be duplicated for every non-simultaneous entrance or departure.) The mechanics of actually synchronizing such an object so that the timing of scenes fits together is a separate issue, but one that could be analyzed atop the DAG abstraction.

So the next research question I have is: can one take the CLF trace corresponding to the Tamara script (or other "concurrent story") and have a computer "perform" it in a way that would give the player a (structurally) similar experience to an attendee of a live performance? In other words, permit the same interactions/choices afforded to the participants in Tamara?

My hypothesis is that the straightforward/naïve compilation of a CLF trace into Twine -- where the program DAG is literally interpreted as a passage graph -- would give such a reading. If the tuple of resources emitted by a rule application correspond to choices out of a passage, and those resources correspond to characters, then each potential choice will correspond to "following" (or remaining with) a character.

I'm hoping to test this hypothesis soon by writing such a compiler as part of #PROCJAM 2014!

Meanwhile, I have submitted a kind of related "concurrent storytelling" experiment (in collaboration with Vincent Zeng) to the 2014 IFComp.

Monday, June 30, 2014

Relating Event Calculus and Linear Logic, Part 2

Before I begin the technical content of this post, just a quick update: I gave my talk on "Generative Story Worlds as Linear Logic Programs" at INT in Milwaukee on the 17-18th, and I kept a travelogue, which includes notes from all the talks! The travelogue also details my time at AdaCamp, an "unconference" event on feminism & open technology, which I would strongly recommend to any women interested in those topics. I had a great time meeting people at both of these events!


Where I last left off in my discussion of the event calculus, I was discussing the idea of "encoding" vs. "embedding." In some discussions, these terms map onto the alternate terms "deep embedding" and "shallow embedding." If I understand them correctly, Alexiev's program gives a deep embedding of the event calculus: the core constructs of the calculus, fluents and events, are modeled as terms in the underlying logic program.

I was curious what it would mean to give a (shallow) embedding of the event calculus in linear logic, or indeed if this were even a possible task. On one level, EC and LL have entirely different purposes: the event calculus enables introspection on event specifications to determine intervals, and linear logic enables deduction that models reachability between state configurations.

On another level, EC and LL "do the same thing," i.e. model the same type of domain. They model evolving state, and they enable reasoning about propositions that hold in some configurations but not in others.

So the shallowest possible embedding seems like it would be one that populates a signature S for which S; D |- A iff D @ T |-_{EC} A @ T' for T < T'. In other words, the preorder given by logical entailment in linear logic should map to the preorder given by time in the event calculus.

To achieve this mapping, we would model
- fluents (time-indexed propositions) as linear propositions (or types)
- events as linear logic rules

An example domain (the same Yale Shooting Problem as from the previous post) is given below.

%% first take: fluents as types; events as rules.
%% "terminates E Ts" & "initiates E Is" |--> ruleE : Ts -o {Is}.

% fluents as types.
loaded : type.
empty : type.
dead : type.
alive : type.

% events as rules.
load : empty -o {loaded}.
unload : loaded -o {empty}.
shoot : loaded * alive -o {empty * dead}.
wait : {1}.

#trace * {empty * alive}.

This encoding does in some sense "simulate" the domain, but what it fails to account for is control over "when things happen." The programmer of the domain has no means of dictating which rules proof search will select in which order, so unfortunately we cannot give an adequate encoding in this scheme.

We could try to fix it by explicitly modeling a happens E predicate and using it as a guard on our rules, but without explicitly modeling time as well, we still cannot control the order in which events take place.
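Concretely, such a guard-only attempt might look like this (a hypothetical sketch in the style of the first take, not code from my experiments):

% events guarded by an (un-timed) happens token.
event : type.
load : event.
shoot : event.

happens : event -> type.

happens/load : happens load * empty -o {loaded}.
happens/shoot : happens shoot * loaded * alive -o {empty * dead}.

If we seed the context with both (happens load) and (happens shoot), proof search may still consume them in whichever order the state permits, which is exactly the problem.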

Thus, take 2, in which events are data and fluents are boolean-indexed types. This version of the code uses a clock that gets incremented after each event "happens".

time : type.
z : time.
s : time -> time.
infty : time.

event : type.
load : event.
unload : event.
shoot : event.
wait : event.

happens : event -> time -> type.
clock : time -> type.

bool : type.
tt : bool.
ff : bool.
loaded : bool -> type.
dead : bool -> type.

happens/load : clock T * happens load T * loaded ff  
                   -o {loaded tt * clock (s T)}.
happens/unload : clock T * happens unload T * loaded tt  
                   -o {loaded ff * clock (s T)}.
happens/shoot : clock T * happens shoot T * loaded tt * dead ff  
                    -o {clock (s T) * loaded ff * dead tt}.
happens/wait : clock T * happens wait T -o {clock (s T)}.

init : type.
- : init -o {loaded ff * dead ff}.

#trace * {init * clock z
    * happens wait z * happens load (s z)
    * happens wait (s (s z)) * happens shoot (s (s (s z)))}.

Results: this correctly models and simulates the domain, including control over event occurrences. But it fails to capture a basic property of the event calculus: filling in the "gaps" between events logically, by way of inertia. In this setup, we have to manually specify what happens at each and every timestep -- note the explicit wait events in the trace above.

In the final take, I give a program that maintains events in a queue, which is centrally managed by a single rule:

% sorts (time as in the previous take)
time : type.
z : time.
s : time -> time.

event : type.
fluent : type.

happens : event -> time -> type.
holds : fluent -> type.
clock : time -> type.

elist : type.
nil : elist.
cons : event -> time -> elist -> elist.

queue : elist -> type.

% domain things
load : event.
unload : event.
wait : event.
shoot : event.

loaded : fluent.
dead : fluent.

% clock management
tick : clock T -o {clock (s T)}.
stop : queue nil * clock _ -o {1}.

% key rule: "happens" management
happen : clock T * queue (cons E T' Es) * leq T' T
             -o {queue Es * happens E T}.
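The happen rule appeals to a less-than-or-equal relation on times that the listing never defines. A standard inductive definition in the same style would be something like the following (my sketch, not part of the original program; in practice the leq premise would likely need to be persistent rather than linear):

% leq T T' holds when T is at most T'.
leq : time -> time -> type.
leq/z : leq z T.
leq/s : leq T T' -> leq (s T) (s T').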

% action cause/effects
happens/load : happens load T -o {holds loaded * clock T}.
happens/unload : happens unload T * holds loaded -o {clock T}.
happens/shoot : happens shoot T * holds loaded -o {holds dead * clock T}.
happens/wait : happens wait T -o {clock T}.

init : type.
- : init -o {clock z
    * queue (cons load z (cons wait (s (s z)) (cons shoot (s (s (s z))) nil)))}.

#trace * {init}.

This program more or less gives me what I was initially after: a forward-chaining program that simulates a sequence of events specified in event calculus terms, filling in the gaps logically. It's still not *quite* an adequate encoding, because we have no control over when the tick rule fires -- it might fire at any time, which is why the happen rule checks "less than or equal," catching the cases where the clock has incremented past the next event's scheduled time. But events still occur in the correct order, thanks to the queue and to the synchronization technique of revoking the clock token until the event's effects have been incurred.

After carrying out this iterative design experiment, I conclude that the distinction I should have made previously was not between encoding and embedding, but rather between program analysis and program execution. And perhaps that's what I've learned about the event calculus in the process: it isn't really for writing specifications we can run, just ones we can analyze. My proofs-as-programs training led me to faulty expectations.