
Rules Of Rules

We tend to simplify how rules work by saying they have 3 basic attributes:

  1. rules are atomic
  2. rules fire at most once per cycle
  3. rules don't conflict with other rules

In reality, rule semantics are subtler than this simple picture, because the compiler tries to fire as many rules as possible in a cycle while staying functionally correct. So in more detail we have:

  1. rules are atomic.. rules fire completely or not at all, and you can imagine that nothing else happens *during* their execution.
  2. explicit and implicit conditions may prevent rules from firing (as described in training)
  3. every rule fires exactly 0 or 1 times every cycle (at this point in our product's history anyway ;)
  4. rules that conflict in some way *may* fire together in the same cycle, but only if the compiler can schedule them in a valid order to do so -- that is, where the overall effect is as if they had happened one at a time as in (1) above (more on this later, clearly)
  5. rules determine if they are going to fire or not before they actually do so. They are considered in their order of "urgency" (by a "greedy algorithm"): they "will fire" if they "can fire" and are not prevented by a conflict with a rule which has been selected already. Actually it's OK to think of this phase as being completed (except for wires.. again later).. before any rules are actually executed. This is what "urgency" is about.
  6. After determining which rules are going to fire, the simulator can then schedule their execution. (In hardware, of course, it's all done by combinational logic which has the same effect.) Rules do not need to execute in the same order as they were considered for deciding whether they "will fire"... For example.. rule1 can have a higher urgency than rule2, but it is possible that rule2 executes its logic before rule1. Urgency is used to determine which rules "will fire"... Earliness defines the order they fire in..
  7. All reads from a register must be scheduled before any writes to the same register: that is to say, any rule which reads from a register must be scheduled "earlier" than any other rule which writes to it.
  8. Constants may be "read" at any time; a register *might* have a write but no read (as in some simple examples)...
  9. The compiler creates a sequence of steps, where each step is essentially a rule firing. Its inputs are valid at the beginning of the cycle, its outputs are valid at the end of the cycle. Data is not allowed to be driven "backwards" in the schedule: that is, no action may influence any action that happened "earlier" in the cycle. This would go against causality, and constitutes a "feedback" path that the compiler will not allow.
  10. If the compiler is not told otherwise, methods have higher urgency than rules, and will execute earlier than rules, unless there's some reason to the contrary. There is a compiler switch to flip this around and make rules have higher urgency..
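As a concrete sketch of points (4) and (5), here is a hypothetical module (all names invented for illustration): both rules below call enq on the same one-port FIFO, so they conflict and can never fire together, and descending_urgency tells the compiler which one wins in cycles where both "can fire".

   import FIFO::*;

   module mkUrgencyExample (Empty);
      FIFO#(Bit#(8)) f   <- mkFIFO;
      Reg#(Bool)     sel <- mkReg(False);

      // Both rules call f.enq, so they conflict: at most one fires per cycle.
      (* descending_urgency = "r1, r2" *)
      rule r1 (sel);        // explicit condition; implicit condition: f not full
         f.enq(1);
      endrule

      rule r2;              // only "will fire" in cycles where r1 does not
         f.enq(2);
         sel <= !sel;
      endrule
   endmodule

Here urgency alone decides the winner; earliness never comes into play because the two rules are never scheduled in the same cycle.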

This all is deeply based in the TRS (term rewriting systems) notion. The effect of these scheduling constraints is that it's OK to think of registers as behaving like hardware registers which are clocked once each time a rule is fired -- this lets you think of the rules as happening one-at-a-time, which greatly simplifies your analysis of the design. The goal is to be able to ensure that a design works in the face of future clocking changes, different pragmatic decisions about when we allow rules to fire together, and even in ESE, where rules fire in a system with no clocks at all..

So a bit more. Wires are relatively simple if you follow my idea of using registers first, then changing registers to wires. Otherwise, wires are a bit special.. They can break the separation between urgency evaluation and earliness execution: i.e., if a wire leaves one rule and goes into another rule, then we can't decide if that second rule can/will fire until we *execute* the first rule. So this affects how rules may or may not fire, as you can imagine...
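To see why, here is a minimal sketch (names invented): consume cannot decide whether it will fire until produce has actually executed and driven the wire, because an unwritten Wire adds an implicit condition to its readers.

   module mkWireExample (Empty);
      Wire#(Bit#(8)) w <- mkWire;
      Reg#(Bit#(8))  r <- mkReg(0);

      rule produce;      // must execute earlier in the cycle than consume
         w <= r + 1;
      endrule

      rule consume;      // implicit condition: w was written this cycle
         r <= w;
      endrule
   endmodule

So the "will fire" decision for consume is entangled with the execution of produce, which is exactly the sequencing wires break.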

Also, conflicts are not detected on action methods as strictly as I described. Wires only allow one write per cycle.. Most other methods may allow more than one call per cycle (depending on the internal schedule developed in the called module). Essentially the compiler verifies "atomicity" of rules in determining if they can fire in sequence or not.. If two rules can execute in any order and still get the same result, then they don't conflict with each other. If two rules get different results depending on which executes first, then they conflict. The compiler obviously needs to simplify this at places because it can't see dynamic data values (umm, necessarily, and perhaps not yet?).

If execution order matters and there is no obvious information about what the order should be (i.e. an obvious schedule or descending_urgency, etc), then the compiler arbitrarily picks one rule to take precedence over the other.. In small examples this can be hard to follow. In practical examples, there tend to be other things going on in rules which will force a particular schedule, so that the register reads/writes fall out of that...

Also, two rules writing to the same register in a cycle rarely happens, but when it does, the effect after two cycles is the same as if one rule ran one cycle and the other ran the next cycle, from the point of view of both rules having executed.. This is a byproduct of TRS, again, so in cases where this unexpectedly happens (and there are cases where we do this on purpose for speed), one works around it by either ensuring the rules don't fire together or forcing an order as needed.
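One such workaround, sketched with invented names: give the two writers mutually exclusive explicit conditions, so they can never fire in the same cycle and the double-write case never arises.

   module mkNoDoubleWrite (Empty);
      Reg#(Bit#(8)) r     <- mkReg(0);
      Reg#(Bool)    phase <- mkReg(False);

      rule writeA (phase);        // fires only on "A" phases
         r     <= 1;
         phase <= False;
      endrule

      rule writeB (!phase);       // fires only on "B" phases
         r     <= 2;
         phase <= True;
      endrule
   endmodule

The conditions are exclusive by construction, so the compiler never has to pick an order between the two writes.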

So in practice, all this describes more exactly what I guess I have come to know as "the compiler does the right thing". But it helps to think a little about the software sequential angle of scheduling this stuff as a functional language might.

The "swap conundrum"

Aka, it's trivial in verilog, so why is it harder in bsv ? Well, the answer lies in the future :) Registers and cycle times are a somewhat artificial boundary applied to bsv code so that we can generate real hardware. But in the future, bsv may be creating rules that fire more than once in a cycle, untimed logic, multicycle logic, etc, etc. So we need to be careful that how we think about a register fits into the way rules and TRS are supported..

First consider that registers are like all modules in that inputs need to be valid at the beginning of the cycle (or really at the beginning of the step - meaning when this rule fires), and outputs are loaded at the next clock edge (though they are really valid at the end of the step).. The classic swap case fails in BSV because ruleA needs to read reg y at the start of step 1 and write x at the end of step 1. That means ruleB then attempts to read x at the beginning of step 2 and write y at the end of step 2, but that write of y would need to be driven back into ruleA in step 1.. That's a step back in time, as it were, which is not allowed. This generates a scheduling error...

   rule ruleA;  x <= y;  endrule
   rule ruleB;  y <= x;  endrule

Or think of it this way. Imagine the compiler trying to schedule tick1 and tick2 as below:

   ruleA (tick1)
      x._write( y._read() )
         y read
         x write
   ruleB (tick2)
      y._write( x._read() )
         x read
         y write

Since register x must read before write, this schedule doesn't work.  And the reverse case has the same problem with y.

How to fix this? The best way is to put both writes in the same rule. Clearly real hardware doesn't have a problem reading and writing registers, but since BSV wants to be able to schedule regardless of clocks (which matters more as BSV may start to fire rules more than once in a cycle in future implementations), this adds a bit of a quirk.. So this works:

   rule swap;  x <= y;  y <= x;  endrule

Schedule wise, step 1 reads x and y at the beginning and writes x and y at the end...

On top of all this, at the end of the cycle, all rules are potentially re-executed, and of course registers get reloaded with outputs of other registers.. For that matter, all hardware is just state driving combinational logic, then potentially loading back into the same state (pipelines are a simplified case of this), as we show in some of the training slides...

Interesting Pragmas (Attributes) for rules..

Please check the reference manual for complete details...

  1. descending_urgency - define the order in which the rules are considered when deciding which will fire (if two of these rules conflict in a cycle, the one listed earlier wins)..
  2. execution_order - define the order in which the rules fire (their "earliness"), once it has been decided via "urgency" that these rules will fire.
  3. preempts - define a list of rules, and if a rule higher in the list fires, then the following rules do not fire.
  4. mutually_exclusive - tell the compiler that two rules will never fire in the same cycle.. The compiler will not generate hardware to enforce this, but will generate a simulation assertion to verify this never happens in simulation.
  5. fire_when_enabled - an assertion that tells the compiler to ensure that this rule doesn't conflict with any other rules.. I.E. fire this rule when it's enabled (since no other rule will conflict and prevent it from firing).. If the compiler detects conflicting rules, this will cause a compiler error to be reported.
  6. no_implicit_conditions - an assertion that tells the compiler to ensure that there are no hidden implicit conditions on a particular rule.. The combination of this and fire_when_enabled causes the compiler to verify that this rule will fire every cycle...
   (* fire_when_enabled, no_implicit_conditions *)
   rule runEveryCycle ( runCounter );
      counter <= counter + 1; // fires every cycle that runCounter is true
   endrule
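For comparison, here is a sketch of mutually_exclusive (register names invented): the two conditions are exclusive by construction, so the pragma lets the compiler skip the conflict-resolution hardware, while a simulation assertion double-checks the claim.

   (* mutually_exclusive = "onEven, onOdd" *)
   rule onEven (counter % 2 == 0);
      evens <= evens + 1;
   endrule

   rule onOdd (counter % 2 == 1);
      odds <= odds + 1;
   endrule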