
Results Generation - Environment or Test Generator?

There are a variety of ways to do results checking in a testbench.  They tend to fall into three categories.

  1. Tests generate expected results.  The test writer is assumed to have a detailed understanding of what his or her test is trying to accomplish and is given methods to flag an error should something unexpected occur.
  2. Test generators generate expected results based on knowing the effect each command in a given test sequence has on the Device Under Test (DUT).  The generator can produce a large number of sequences from general templates created by its developer.  Results checking for directed tests must still be done using method #1 above.
  3. Monitors and other testbench infrastructure in the verification environment generate expected results based on observing the inputs to and outputs from the DUT.  Stimulus can be provided by a test generator or directed test.
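Option 3 can be made concrete with a small sketch. The following Python scoreboard is a hypothetical illustration (the DUT is assumed to be a simple FIFO, so every observed input transaction should reappear, in order, on the output); the class and method names are mine, not from any particular verification library. The point is that the checking depends only on observed traffic, not on who generated it.

```python
from collections import deque

class Scoreboard:
    """Predicts expected results purely from observed DUT traffic (option 3).

    Hypothetical example: the DUT is assumed to be a simple FIFO, so every
    transaction seen on the input should reappear, in order, on the output.
    """

    def __init__(self):
        self.expected = deque()   # predictions awaiting a matching output
        self.errors = []

    def observe_input(self, txn):
        # Input monitor callback: predict that the DUT must emit this later.
        self.expected.append(txn)

    def observe_output(self, txn):
        # Output monitor callback: compare against the oldest prediction.
        if not self.expected:
            self.errors.append(f"unexpected output: {txn!r}")
        else:
            want = self.expected.popleft()
            if want != txn:
                self.errors.append(f"mismatch: expected {want!r}, got {txn!r}")

    def check_end_of_test(self):
        # At end of test, nothing should still be in flight.
        if self.expected:
            self.errors.append(f"{len(self.expected)} transactions never emitted")
        return not self.errors
```

Because the scoreboard never asks the stimulus source what it intended, the same checking works whether the traffic comes from a constrained-random generator or a hand-written directed test.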

Almost every time I've worked on a new testbench, I've ended up in a discussion with someone about this topic.  Why?  Because unlike some who prefer options 1 or 2, I am a big fan of option 3 (testbench infrastructure generates expected results). 

If you're serious about writing robust and reusable testbenches you'll know that option 1 shouldn't be considered a choice for general testing.  It should be reserved instead as a way to handle testing of special cases and other assorted odds and ends that show up as you're trying to fill those final gaps in your functional coverage.  But what about option 2?  Since a test generator knows what it is trying to accomplish it seems natural to expect it to be the source of generated results.  In fact, it's possible to successfully verify complex chips using this method (unlike the strategy where test writers generate their own checking, which breaks down quickly as verification requirements become more complicated). 

Why would anyone go to the added trouble of letting the environment predict the expected results when the generator already knows the answer?  Simple.  What happens to your test environment if I remove the test generator?  Ah, you might argue, but my test generator is always present!  Oh really?  You're quite certain?  Then let me ask you this: is your test generator always present because it can easily be adapted to drive the exact same types of traffic at the full chip as it did in your module level test environment, or do you simply not reuse your test environments because the checking lives in the generator?

"What?" you say... "Who needs module level test environments that can be run at the full chip?"  Only verification engineers and designers who don't want to spend hours debugging problems that could have been caught immediately by the checkers present in your module level environment.  Other than that, no one important.

So what are the other pros and cons of implementing checking in your environment?  Here are the ones I find most interesting:

Pros:
  1. Environments can be reused in many levels of simulation (already mentioned).
  2. Environments that know how to predict expected results can be packaged up with the RTL and reused (internally or externally) as an IP block.
  3. Directed tests can take advantage of the built-in checking, which in many cases makes them easier to write.
  4. Test generators are easier to write and debug if you don't have to create tap points to dump out expected results.  In many cases they can be as simple as fire-and-forget (think of the test environment for an Ethernet controller or a switch).

Cons:
  1. It takes more time to make the test environment self-sufficient.
  2. It may be exceedingly difficult (in some cases, impossible) to know what to expect without either knowing the intent of the generator or monitoring the internal state of the DUT.
  3. Because the test generator may be fire-and-forget, development work is needed to determine when a test is really over.  Specman has built-in support for this scenario (in other languages you may need to write it yourself): various parts of the environment (monitors, scoreboards, transaction generators, etc.) can object to a test being complete, and the test is only over when no objections remain.
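The objection scheme described in the last con can be sketched in a few lines. This is an illustrative Python model, not Specman's actual API (UVM later standardized a very similar mechanism as `uvm_objection`); the class and method names here are my own. Each component raises an objection while it still has outstanding work and drops it when done, and the test may only finish once no objections remain.

```python
from collections import Counter

class ObjectionPool:
    """Illustrative model of an end-of-test objection mechanism.

    Components raise an objection while they have work outstanding
    (e.g. a scoreboard with transactions still in flight) and drop it
    when that work completes.  The test ends only when no component
    holds any objections.
    """

    def __init__(self):
        self._objections = Counter()  # per-component outstanding count

    def raise_objection(self, who):
        self._objections[who] += 1

    def drop_objection(self, who):
        if self._objections[who] <= 0:
            raise RuntimeError(f"{who} dropped an objection it never raised")
        self._objections[who] -= 1

    def test_done(self):
        # unary + on a Counter keeps only positive counts, so this is
        # True exactly when every component has dropped all objections
        return not +self._objections
```

A fire-and-forget generator would raise an objection before sending stimulus and drop it once the last transaction has been driven, while the scoreboard holds its own objection until every prediction has been matched.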

I believe the pros outweigh the cons in most cases, which is why I tend to write my checking into my environment.  Your mileage may vary.