
Large-Scale Hardware Simulation: Modeling and Verification Strategies

I received a link to the paper "Large-Scale Hardware Simulation: Modeling and Verification Strategies" from a colleague last month, but just got around to reading it today.  It's an interesting analysis of verification strategies written by Douglas Clark from DEC back in 1990.  In the paper, Doug discusses the importance of using bug rate instead of checking boxes off of a test plan as the determining factor for when verification is complete.  He also discusses the importance of random verification, the benefits of simulating the real design instead of a model, and the value of doing everything possible to trade engineering time for CPU cycles. 

The thing that strikes me the most about this paper is that it is still relevant, even after 16 years.  The other thing I find interesting is that some people feel directed testing is fine for the small designs of today.  If you think about it, though, the large designs of 1990 were potentially even smaller than the small designs of today.  Anyways, take a look at the paper and let me know what you think!
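The directed-versus-random point is easy to see in miniature.  Here's a sketch in Python (standing in for a real testbench language - the DUT, reference model, and seeded bug are all made up for illustration): a hand-picked directed test plan passes and checks all its boxes, while random stimulus compared against a reference model flushes out a corner-case bug.

```python
import random

def spec_add(a, b):
    """Reference model: 8-bit saturating add."""
    return min(a + b, 255)

def dut_add(a, b):
    """'Design' under test, with a seeded bug: it wraps instead of saturating."""
    return (a + b) & 0xFF

def run_directed():
    # Hand-picked test-plan cases; none exercises the overflow corner.
    for a, b in [(0, 0), (1, 2), (10, 20), (100, 100)]:
        assert dut_add(a, b) == spec_add(a, b)
    return "test plan complete"  # every box checked, bug never seen

def run_random(n=1000, seed=1):
    # Trade CPU cycles for engineering time: spray random stimulus
    # and let the reference model do the checking.
    rng = random.Random(seed)
    return sum(dut_add(a, b) != spec_add(a, b)
               for a, b in ((rng.randrange(256), rng.randrange(256))
                            for _ in range(n)))
```

run_directed() finishes cleanly, while run_random() reports a few hundred mismatches - which is Doug's point: the bug rate, not the test plan, tells you whether the design is done.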

The AOP vs. OOP Saga Continues

A couple of weeks ago I posted a link to an article from Mentor describing how OOP techniques make it unnecessary to use AOP, and supposedly do an even better job than AOP in many cases.  The topic has now picked up over on the Verification Guild, where Adam Rose from Mentor, Janick Bergeron from Synopsys, and a cast of others have been responding to the age-old question: "What's the difference between AOP and OOP?"  Check out the responses from my co-consultant in crime David Robinson and myself.

If you'd like to dig deeper, I'd also recommend a couple of interesting articles written by people using AOP as part of the AspectJ programming environment:

Ramnivas Laddad: AOP myths and realities

In the article, Ramnivas discusses in great detail fifteen myths about AOP development, some of which directly address points made in the discussion thread:

  • Myth 1: AOP is good only for tracing and logging
  • Myth 2: AOP doesn't solve any new problems
  • Myth 3: Well-designed interfaces obviate AOP
  • Myth 4: Design patterns obviate AOP
  • Myth 5: Dynamic proxies obviate AOP
  • Myth 6: Application frameworks obviate AOP
  • Myth 7: Annotations obviate AOP
  • Myth 8: Aspects obscure program flow
  • Myth 9: Debugging with aspects is hard
  • Myth 10: Aspects can break as classes evolve
  • Myth 11: Aspects can't be unit tested
  • Myth 12: AOP implementations don't require a new language
  • Myth 13: AOP is just too complex
  • Myth 14: AOP promotes sloppy design
  • Myth 15: AOP adoption is all or nothing
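For anyone who hasn't used an AOP language, the core mechanism - adding behavior to existing classes from the outside, without editing or subclassing them - can be mimicked in miniature with a wrapper in plain Python.  This is a hypothetical helper of my own, not something from either article, and e's extend or AspectJ's advice are far more capable:

```python
import functools

def advice_after(cls, method_name, hook):
    """Mimic AOP-style 'after' advice: wrap an existing method so extra
    behavior runs after it, without editing or subclassing the class."""
    original = getattr(cls, method_name)
    @functools.wraps(original)
    def wrapper(self, *args, **kwargs):
        result = original(self, *args, **kwargs)
        hook(self, *args, **kwargs)  # the cross-cutting concern
        return result
    setattr(cls, method_name, wrapper)

class Driver:
    """Plain class, written with no knowledge of the logging concern."""
    def send(self, packet):
        return f"sent {packet}"

sent_log = []
advice_after(Driver, "send", lambda self, packet: sent_log.append(packet))

Driver().send("pkt0")  # sent_log now holds ["pkt0"]
```

The logging concern is woven in from outside the class.  The OOP alternative is to subclass Driver, override send, and then make sure every piece of code actually instantiates the subclass - the factory-style workaround that keeps coming up in the Verification Guild thread.
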

I'd also like to point out another article by Nicholas Lesiecki, a software engineer at Google:

Enhance design patterns with AspectJ, Part 1
AOP makes patterns lighter, more flexible, and easier to (re)use

It took years for the software development community to understand how to use OOP effectively.  It will probably take years more for AOP techniques to fully take hold.  That doesn't mean the approach doesn't add value - in my experience, it adds tremendous value.  It also doesn't mean that OOP techniques are obsolete.  If the tools, languages, and libraries most commonly used for hardware verification development weren't under the control of the Big Three EDA companies, we might actually be able to get past this perpetually silly AOP vs. OOP argument and focus on applying the right solutions where appropriate.  That, after all, is how we succeed at our primary goal: taping out reliable products as quickly as possible.