
Rules Of Wires

Wires are used to pass data between rules within the same clock cycle. Wires in Bluespec are a bit different from what one normally thinks of in a hardware design: the various wire types have differing features, and some have side effects.

First of all, it is important to keep in mind that the compiler is scheduling rules in steps within a clock cycle and rules only fire once per cycle (I recommend reading Rules of Rules before proceeding to make sure you understand rules and scheduling).

All wire types in Bluespec are built using RWire.  But before we discuss the differences between RWire and its derived types (DWire, Wire, PulseWire, BypassWire), we need to understand the basic features of all RWire (and hence, all wires).
  1. Wires truly become wires in hardware; they do not save "state" between cycles.
  2. A wire's schedule requires that it be written before it is read (as opposed to a register, which is read before it is written).
  3. A wire cannot be written more than once in a cycle.
There are two ways to explain wires.  The first is perhaps simpler and more hardware specific.  The second is more precise, since wire behavior ultimately depends on the scheduler in the compiler.

Simplified view of wires

If you think of actual hardware, when a clock edge occurs, data may change in registers; the change propagates through gates and eventually drives a hardware wire to some value (different or not).  From a timing point of view, we consider this wire "in transition" or "not valid" until its hold time is met.  To some degree this is true of wires in Bluespec also: all wire types effectively "reset" at the beginning of a cycle.  A wire does not have a value usable by another rule (or other combinational logic) until it has been driven by a rule in that cycle, and functionally its value does not hold past the positive edge of the clock.

So if ruleA writes a wire and ruleB reads that wire, it makes sense that ruleA *must* execute before ruleB executes.   This is fundamentally just saying that a wire needs to be driven before you can read the value on it.  I think the average hardware engineer imagines that there can be no other possibility.

From the scheduler's point of view

From the page Rules of Rules, we learned that a clock edge is a fairly artificial boundary to TRS Systems and to Bluespec.  All rules within a clock cycle fire either 0 or 1 times.  And all rules must be scheduled into individual steps such that the scheduling order of that block is respected (i.e. for wires the write is scheduled before the read).  This effectively means that any rule that reads a wire must be scheduled after the rule that writes it.

Now this can be used to get around certain peculiarities in Bluespec, but one does need to be careful.  Let us consider the "swap" conundrum discussed in the Rules of Rules section.   We can now resolve the compiler's scheduling problem with a wire like so:

(* synthesize *)
module mkTest(  );
   Reg#(int)  a  <- mkReg(0);
   Reg#(int)  b  <- mkReg(1);
   Wire#(int) aw <- mkWire;

   rule driveA;   aw <= a; endrule
   rule swapA;    a <= b;  endrule
   rule swapB;    b <= aw; endrule
endmodule

Compiling this with the -show-schedule switch shows us the new working schedule:

=== Generated schedule for mkTest ===

Rule schedule
Rule: swapB
Predicate: aw.whas
Blocking rules: (none)
Rule: swapA
Predicate: True
Blocking rules: (none)
Rule: driveA
Predicate: True
Blocking rules: (none)
Logical execution order: driveA, swapA, swapB


And we see from the reported logical execution order that the wire is driven first; after that, it effectively doesn't matter in which order swapA and swapB occur.

As the very last step of a clock cycle, the wire's value is reset to invalid, and hence it cannot be read in the next cycle unless some rule has written to it again.   This adds a new scheduling wrinkle.  In determining urgency (i.e. whether a rule will fire or not), the scheduler must know whether a rule that writes to a wire is going to fire before it can allow the rule reading that wire to fire.  In the example above, if there were other implicit conditions involved in determining whether rule "driveA" fires, those conditions may get carried forward to determine whether rule "swapB" fires or not.  In purely register-based designs, urgency can be determined before execution order.  When wires are present, urgency may depend on execution order (the scheduler must know whether an earlier rule has fired before a later rule can fire).   Additionally, urgency may be determined in a different order than execution order.   An example of this can be found in our small examples section at  http://www.bluespec.com/wiki/SmallExamples/ex_05_c/Tb.bsv

Wire types

Details of the various wires are available in the reference manual, but they are worth some added discussion here.   I generally recommend that people start thinking about wires with two specific types of wires (which are not the basic RWire type).


Wire

Wire has the same interface methods as a Reg.  It has _write and _read, so it looks like a register read and write (we discuss why this is very useful later).  A Wire can be defined with any data type.   But a Wire has an implicit condition: it is not ready if it has not been written.   This means that a rule that reads a Wire cannot fire if the wire has not been written by an earlier rule in the same cycle.

This is one way to determine whether a rule has fired or not: ruleB can use the data from a wire, and if ruleB fires, then the wire was ready.  If the wire was not written, you also cannot read its value (as you shouldn't be able to).
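To make the implicit condition concrete, here is a minimal sketch (the module and rule names are invented for illustration).  ruleB below is blocked, by its implicit condition alone, in any cycle where ruleA does not drive the wire:

(* synthesize *)
module mkWireCond( );
   Reg#(Bool) even <- mkReg(True);
   Wire#(int) w    <- mkWire;

   rule toggle;  even <= !even;  endrule

   // Drives the wire only on alternating cycles.
   rule ruleA (even);
      w <= 42;
   endrule

   // Implicit condition: w must have been written this cycle.
   rule ruleB;
      $display("w was driven with %0d", w);
   endrule
endmodule

No explicit condition is written on ruleB; the scheduler derives it from the Wire's ready signal.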


DWire

DWire is similar to Wire, in that it also has the same interface methods as a register.  But a DWire is always_ready: it doesn't prevent rules from firing.  It can be given a "default value" to be read when no other rule writes it.   Its schedule is still such that the write method must occur before the read method, but if there is absolutely no write to the wire at all, the scheduler can assume it still has a value (the default value).  The scheduler still fires any rule reading from this wire after any rules that write it.  But if those rules don't fire, you still get the default value.

This is often used to signal exceptional conditions between rules.  A simple example is to imagine this scenario

Wire#(Bool)  didFire <- mkDWire(False);

rule cnt1 ( < some condition > );
    didFire <= True;
endrule

rule collect;
    if (didFire)
      $display("rule cnt1 did fire!");
endrule

In this case, rule collect will fire whether cnt1 fires or not, but it can look at the Boolean flag didFire.   It is false by default, unless cnt1 fired and set it to true.  Voilà: rule firing status.   This is also handy for exception flags, etc.

If the data type is Bool, you could also use PulseWire, but I prefer to use DWire for both Bool flags and other data flags, rather than have two types.
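For comparison, the same firing flag written with a PulseWire might look like the sketch below (rule names mirror the example above; send and the plain Bool read are the standard PulseWire methods):

PulseWire  fired <- mkPulseWire;

rule cnt1 ( < some condition > );
    fired.send;
endrule

rule collect;
    if (fired)                       // reads as a Bool, always ready
      $display("rule cnt1 did fire!");
endrule

A PulseWire carries no data payload; send marks the pulse and the read returns whether it was sent this cycle.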


BypassWire

Less used, a BypassWire is the same as a Wire, but the compiler enforces that it is always_enabled, meaning that it *must* be written every cycle.  This is possibly the closest thing to a good old-fashioned Verilog wire.  If you do not write the wire from some rule in a way the compiler can logically prove will always fire, then you will get a compiler error.
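As a sketch of the always_enabled requirement (module and signal names are invented here), the driving rule must have no condition, explicit or implicit, so the compiler can prove the wire is written every cycle:

(* synthesize *)
module mkBypassTest( );
   Reg#(int)  r  <- mkReg(0);
   Wire#(int) bw <- mkBypassWire;

   // Unconditional rule: the compiler can prove bw is always written.
   rule drive;
      bw <= r + 1;
   endrule

   rule consume;
      r <= bw;
   endrule
endmodule

Guarding rule drive with any condition would make the write conditional and produce a compiler error.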


RWire

Finally, all wire types are built on the base wire type RWire.  You are welcome to use this wire, of course, and in some instances it makes perfect sense.  It has the following features:
  1. It is always_ready.
  2. It has a Maybe type built into it by default.
  3. It has a unique read method (wget).
  4. It has a unique write method (wset).
An example usage:

RWire#(int)  data <- mkRWire;

rule rA;
    data.wset( 10 );
endrule

rule rB (data.wget matches tagged Valid .val);
    $display("Data driven and its value is %x", val);
endrule

wget is used to read the data, but it returns a Maybe#() type.  If the wire is not driven, its value is Invalid; if it is driven, its value is Valid with the data.  The advantage of this is flexibility: there is no implicit ready condition, you can test whether the value is valid (i.e. whether the wire was driven), and the data itself is guarded by the Valid tag (meaning you can't physically read it unless it was driven).

RWires and Atomicity
Users often have certain expectations of atomicity which may not be warranted when something is broken into multiple rules communicating with RWires, and this can result in behavior contrary to those expectations.


Suppose you have a FIFO with an enq method, and all the work is done within the enq method.

Now suppose you break that method into setting an RWire, and have an internal method that actually does the enq.

At this point, the overall enq action spans two rules, each of which is atomic, but which the user may erroneously expect to be atomic together.

But now, for example, a 'clear' method can sneak in between those two rules, because scheduling might allow it.

Now consider this behavior (logical sequence of rules):
   - external rule that calls the enq method (which sets the RWire)
   - external rule that calls the clear method
   - internal rule that does the work of the enq method

Externally, the user sees 'enq' followed by 'clear', and might expect the FIFO to be empty.  But, with the rule sequence above, the FIFO may not be empty.  Technically correct per rule semantics, but surprising behavior for the user.

This is the danger that one always has to be careful about, when using RWires; you have to think extra hard about whether your semantic expectations of atomicity actually translate into atomicity per rule semantics.
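As a hedged sketch of that hazard (the module and names below are invented for illustration, not a library FIFO), the enq method only records the request in an RWire, and an internal rule performs the real enqueue later in the schedule, so a clear can slip between the two:

import FIFO::*;

module mkSplitEnqFifo( FIFO#(int) );
   FIFO#(int)  store <- mkFIFO;
   RWire#(int) enqW  <- mkRWire;

   // Internal rule that actually performs the enqueue.
   rule doEnq (enqW.wget matches tagged Valid .v);
      store.enq(v);
   endrule

   method Action enq( int x );
      enqW.wset(x);                 // only records the request
   endmethod

   method deq   = store.deq;
   method first = store.first;
   method clear = store.clear;     // may schedule between wset and doEnq
endmodule

From the caller's point of view enq looks atomic, but the actual state change happens in rule doEnq, after any method that scheduled between them.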

Wires vs Registers

One feature of Wire and DWire is that they have the same interface as Reg.  In fact, it is the same interface!  While it may appear a bit confusing, it may also be clearer to use only the Reg#() type and then instantiate the module as mkRegU, mkWire, mkDWire, etc.    Additionally, we can also consider that a register may later want to be turned into a wire and vice versa.  Consider the following simple example:

interface T11;
    method Action _write( int a );
    method int    _read();
endinterface

(* synthesize *)
module mkT11( T11 );
   Reg#(int) regA <- mkRegU;
   Reg#(int) regB <- mkRegU;

   rule pipe;
      regB <= regA;
   endrule

   method Action _write( int a );
      regA <= a;
   endmethod

   method int _read();
      return regB;
   endmethod
endmodule

This simply implements two registers: the _write method writes to regA, rule pipe copies the value of regA into regB, and _read returns regB.    Suppose we later decide we don't need both register stages; we can simply change regB to instantiate mkWire, and now we have only one register.  Compile, and the compiler generates the correct logic.  This is a trivial case, but we want to let the compiler do its job.  And since the interface methods are the same for Reg and Wire/DWire, you could change regB to either of these:

Wire#(int) regB <- mkWire;
Reg#(int)  regB <- mkWire;

In this case the wire has an implicit condition, but rule pipe fires every cycle anyway.  Changing both registers to wires means the block is now purely combinational.
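For instance, the same T11 block with both state elements changed to wires might look like this sketch (mkT11Comb is an invented name); the module is now purely combinational from _write through to _read:

(* synthesize *)
module mkT11Comb( T11 );
   Wire#(int) regA <- mkWire;
   Wire#(int) regB <- mkWire;

   rule pipe;
      regB <= regA;
   endrule

   method Action _write( int a );
      regA <= a;
   endmethod

   method int _read();
      return regB;
   endmethod
endmodule

Only the two instantiations changed; the rule and the methods are untouched, which is the point of sharing the interface.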

Finally, the compiler will rigorously check that the new wired code schedules and works, but we still want to be careful not to affect external logic where possible.  In pipelined blocks such as these, we want to interlock reading and writing from the pipeline.  For instance, if the input and output methods were via FIFO enq and deq/first, then we could change the depth of the pipeline inside the block arbitrarily without breaking functionality outside the block (this is discussed elsewhere in greater depth).  With this interlocking, changing registers to wires doesn't break functionality, and timing convergence is certainly simplified.  Using a wire on a critical path means that you could easily change that wire to a register later for timing reasons, without breaking functionality.
