
Mutually Exclusive Rules

Can you assume that the rule guards (conditions) are always mutually exclusive with one-at-a-time semantics?

The short answer is that the guards of BSV rules are almost always NOT all mutually exclusive.  Thinking of our clocked semantics, at each clock cycle many rules might be available for execution (that is, their explicit and implicit guards might be satisfied).  Some of these rules are independent of each other, so the scheduler can allow all of them to fire, since none will interfere with any other.  Some rules, however, are mutually incompatible; so if more than one is available, the scheduler will have to arbitrate between them.
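As a toy illustration of the clocked view, here is a small Python sketch (not BSV, and not the compiler's actual algorithm: "conflict" is crudely modeled as two rules touching the same piece of state, whereas the real definition is in terms of method calls). Two independent rules fire in the same cycle, while a conflicting, less urgent rule is pre-empted:

```python
# Toy model of one clock cycle: every enabled rule fires unless it
# conflicts with an already-chosen, more urgent rule.
state = {"a": 0, "b": 0}

# Each rule: (name, guard, action, state it touches)
rules = [
    ("incr_a", lambda s: True, lambda s: s.__setitem__("a", s["a"] + 1), {"a"}),
    ("incr_b", lambda s: True, lambda s: s.__setitem__("b", s["b"] + 1), {"b"}),
    ("bump_a", lambda s: True, lambda s: s.__setitem__("a", s["a"] + 10), {"a"}),
]

def clock_cycle(state, rules):
    chosen, touched = [], set()
    for name, guard, action, footprint in rules:   # urgency order
        if guard(state) and not (footprint & touched):
            chosen.append((name, action))
            touched |= footprint
    for _, action in chosen:                       # all fire "at once"
        action(state)
    return [name for name, _ in chosen]

fired = clock_cycle(state, rules)
```

Here `incr_a` and `incr_b` are independent and both fire; `bump_a` is available but conflicts with the more urgent `incr_a`, so it is pre-empted this cycle.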

But that is all about clocked semantics.  What if you are concerned with one-at-a-time semantics?

In term rewriting semantics (TRS), each step is a separate ball game.  Each rule is either available or it isn't, and in general many rules are available at each step.  Whether they are independent of each other or whether they conflict is irrelevant, because you're only going to choose one of them anyway.  TRS says that a nondeterministic choice is made among the available rules, and just one of them is selected for execution; that is then the end of that step, and the whole thing repeats.
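The one-rule-at-a-time discipline can be sketched in a few lines of Python (an illustrative toy, not real BSV semantics; the rule set and names are mine):

```python
import random

# Toy TRS executor: at each step, exactly ONE enabled rule is chosen
# nondeterministically and fired; then the step is over.
def trs_step(state, rules, rng):
    enabled = [(name, action) for name, guard, action in rules if guard(state)]
    if not enabled:
        return None                      # nothing available: execution is stuck
    name, action = rng.choice(enabled)   # the nondeterministic choice
    action(state)
    return name

state = {"x": 0}
rules = [
    ("up",   lambda s: s["x"] < 3, lambda s: s.__setitem__("x", s["x"] + 1)),
    ("down", lambda s: s["x"] > 0, lambda s: s.__setitem__("x", s["x"] - 1)),
]
rng = random.Random(0)
trace = [trs_step(state, rules, rng) for _ in range(10)]
```

Whatever sequence of choices the oracle makes, the guards keep the invariant 0 <= x <= 3; that kind of choice-independent property is exactly what the next section is about.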

One theoretical ideal says that a design is considered functionally correct (though maybe with inadequate performance) if it gives the specified result under ANY nondeterministic choice.  There are of course variants of this:
1.  Correct answer under any choice (as above) -- sometimes called demonic nondeterminism.
2.  Correct answer under some choice -- sometimes called angelic nondeterminism.
3.  Correct answer under any weakly fair choice -- in which any continuously available rule (one that stays available at every step from some point on) will always eventually be selected.
4.  Correct answer under any strongly fair choice -- a stronger requirement, in which any continually (rather than continuously) available rule -- one available infinitely often, even if not at every step -- will always eventually be selected.
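For finite, bounded executions the first two notions can be checked exhaustively (fairness properly concerns infinite executions, so it is not captured by this toy). A small Python sketch, with example rules and names of my own invention:

```python
from itertools import product

# Run a fixed schedule (a sequence of rule indices) from an initial state.
def run(schedule, rules, init):
    s = dict(init)
    for i in schedule:
        name, guard, action = rules[i]
        if guard(s):
            action(s)
    return s

rules = [
    ("add_one", lambda s: True, lambda s: s.__setitem__("v", s["v"] + 1)),
    ("double",  lambda s: True, lambda s: s.__setitem__("v", s["v"] * 2)),
]
init = {"v": 1}

# All length-2 schedules over the two rules.
finals = [run(sched, rules, init)["v"] for sched in product(range(2), repeat=2)]

demonic = all(v >= 1 for v in finals)   # property holds under EVERY choice
angelic = any(v == 4 for v in finals)   # property holds under SOME choice
```

Here "v >= 1" survives any schedule (demonic), while "v == 4" is reachable only under the schedule that adds first and then doubles (angelic).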

But in fact we do deviate from this ideal.  This is partly because we use our knowledge of the actual scheduling to avoid unnecessary logic, and partly because performance cannot in practice be separated from functional correctness.  Sometimes, for example, it is necessary to ensure that data will be accepted in every clock cycle, for otherwise it is lost.  So let me tell you what our scheduler actually does; and I'll try to tell the story from the TRS viewpoint.

1.  From time to time the scheduler scans all the rules in a predetermined order (the "urgency" order), to see which are available.  Each of them gets marked YES, NO or MAYBE.  A rule gets a YES if its conditions (explicit and implicit) are satisfied, and if it hasn't been pre-empted by a rule earlier in the ordering with which it "conflicts".  The definition of conflict is pragmatic -- to assist efficient implementation in electronics.  A rule gets a MAYBE if its availability depends on a rule already selected but not yet executed.

2.  When this scan has finished (or, equivalently because of our conflict definition, earlier, as the results of the first scan become available), the rules are scanned again in another predetermined order (the "execution" or "earliness" order).  The YES rules are executed.  Actually, from the formal point of view it's probably best to think of this as the atomic selection and execution of the rule -- that is, you do the test again.  But our pragmatic definitions are such that if the rule got a YES on the first scan we know it will still be available in the second scan, so we don't actually need to do the test again.  (The converse isn't true -- a rule which got a NO might actually have become available by the time it's reached in the second scan.  But the NOs are not considered further -- this slight lack of optimality saves a considerable amount of logic.)  The MAYBEs are tested again, and executed if available.
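The two scans can be sketched in Python (a deliberate simplification: MAYBEs are omitted, the conflict relation is given explicitly as a set of pairs, and all names are mine rather than the compiler's):

```python
# Scan 1 marks rules YES/NO in urgency order; scan 2 fires the YES rules
# in earliness order, with no re-test needed.
def schedule_and_execute(state, rules, urgency, earliness, conflicts):
    mark, chosen = {}, set()
    for name in urgency:                      # scan 1: selection
        guard, _action = rules[name]
        ok = guard(state) and not any((name, c) in conflicts for c in chosen)
        mark[name] = "YES" if ok else "NO"
        if ok:
            chosen.add(name)
    fired = []
    for name in earliness:                    # scan 2: execution
        if mark[name] == "YES":
            rules[name][1](state)             # a YES is known still available
            fired.append(name)
    return fired

state = {"x": 0}
rules = {
    "bump":  (lambda s: True, lambda s: s.__setitem__("x", s["x"] + 1)),
    "clear": (lambda s: True, lambda s: s.__setitem__("x", 0)),
}
conflicts = {("bump", "clear"), ("clear", "bump")}   # they fight over x
fired = schedule_and_execute(state, rules,
                             ["bump", "clear"],      # urgency order
                             ["clear", "bump"],      # earliness order
                             conflicts)
```

Both rules are available, but `clear` is pre-empted in scan 1 by the more urgent `bump`, so only `bump` fires in scan 2.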

Note that there's an equivalent formulation of all this that avoids the MAYBE tag.  If for a given rule you can't produce a YES/NO answer during the first scan you hold the first scan up and get on with the second scan until enough information is available to continue with the first.  (This situation arises when one rule loads a wire which needs to be read to determine the availability of a second rule.)  In this way the two crucial moments in a rule's life during these scans (its selection and its execution) can be shown as nodes on a single dependency graph (we call it the "Jacob Graph").

3.  When both scans have finished, a number of implicit rules are executed, which do things like unloading the wires, committing the writes to ConfigRegs, and other esoteric things of that kind.

Then steps 1, 2 and 3 repeat.  You can consider steps 2 and possibly 3 as implementing the TRS -- step 1 can be thought of as a kind of oracle influencing the non-deterministic choices in step 2.
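The wire-mediated situation and the step-3 unloading can be sketched together (again an illustrative toy of my own, not the compiler's implementation: "wires" are a transient dict emptied at the end of the cycle, while register state persists):

```python
# Toy model of one whole clock cycle.  The consumer's guard reads a wire
# loaded by the producer, so its availability is only known after the
# producer has executed -- the MAYBE situation.  Step 3 unloads the wires.
def one_cycle(regs, rules):
    wires = {}                                   # wires start empty each cycle
    for guard, action in rules:                  # steps 1-2, collapsed
        if guard(regs, wires):
            action(regs, wires)
    wires.clear()                                # step 3: implicit unloading
    return regs

regs = {"r": 0}
rules = [
    # Producer loads a wire; consumer reads it later in the same cycle.
    (lambda rg, w: True,        lambda rg, w: w.__setitem__("data", 7)),
    (lambda rg, w: "data" in w, lambda rg, w: rg.__setitem__("r", w["data"])),
]
one_cycle(regs, rules)
```

The rule list's order stands in for the earliness order: the producer must be reached before the consumer's guard can be decided, which is the dependency the Jacob Graph records.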

An electronic implementation will model all three steps in a single clock cycle.  Things are a bit more complex when there are multiple clock domains -- but I suggest we leave that for the time being!