While I was preparing for my trip to Asia for the Verification Now seminar series, a colleague forwarded me a link to a recent article entitled “OVM vs. VMM: What’s Next?” on the System-Level Design Community over at Chip Design Magazine. In the article, Ed Sperling writes about the “battle for dominance between the Verification Methodology [Manual] (VMM) and the Open Verification Methodology ([OVM]).” The article focuses on the fact that Synopsys on one side, and Cadence and Mentor on the other, are each pushing their own verification methodology library. It also discusses possible modes of interoperability between the OVM and VMM that are currently under review by the VIP TSC.
Karen Bartleson from Synopsys quickly responded to some inaccuracies in the article on her blog (I agree with her observations) and went on to say that she (and users) would like to see a single “Accellera-sanctioned standard.”
I’d like to take a moment to comment on some of the observations from Ed’s and Karen’s articles. First, a word on the different methods proposed to bridge the gap between the VMM and the OVM (known within the committee as the “short term” solution). Ed’s sources mentioned three possible approaches:
- Bridge the environment. Create a compiler or other binary compatibility layer to allow the VMM and OVM to work together.
- Match the data types. Send results from the VMM and OVM to a common scoreboard, where data-type matching and comparison will occur.
- Wrap the code. Wrap components from one methodology library in another methodology library.
Let’s take them one by one.
First of all, to the best of my understanding, the committee has not recently discussed, and is not currently discussing, what Ed describes as Option 1. Presumably it would work by making objects from one methodology accessible, under the hood, as objects of the other. This seems tricky, and perhaps overkill for the problem at hand; after all, both libraries are written in the same underlying language.
The second option discussed by Ed, matching data types, is really a simplification of the larger problem: how do you pass data between the two methodologies? For example, those of you who attended one of the Verification Now seminars will have seen examples from my Layered Stimulus presentation comparing ovm_sequence_item with vmm_data. What happens if you have such a class defined in one methodology that you’d like to use in the other? Should you have to create your own version in the new methodology? Should you be limited to comparing data in a scoreboard? Or is there a way to reuse the object as coded in its native methodology (including coverage and random constraints) while creating an OVM- or VMM-specific view?
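To make that last idea concrete, here is a minimal, hypothetical sketch of such a “view”: a VMM transaction class reused unchanged inside an OVM environment by wrapping it in an ovm_sequence_item. The names my_packet and my_packet_ovm_view are invented for illustration; only vmm_data, vmm_log, ovm_sequence_item, and vmm_data’s psdisplay() come from the actual libraries.

```systemverilog
// Existing VMM transaction (hypothetical); its native random
// constraints and any coverage stay exactly as written.
class my_packet extends vmm_data;
  static vmm_log log = new("my_packet", "class");
  rand bit [7:0] addr;
  rand bit [7:0] data;

  function new();
    super.new(this.log);  // vmm_data requires a vmm_log handle
  endfunction
endclass

// OVM-side "view" that delegates to the wrapped VMM object.
class my_packet_ovm_view extends ovm_sequence_item;
  my_packet core;  // the native VMM object being reused

  function new(string name = "my_packet_ovm_view");
    super.new(name);
    core = new();
  endfunction

  // Reuse the VMM object's own formatting when OVM prints it.
  virtual function string convert2string();
    return core.psdisplay();
  endfunction
endclass
```

The point of the sketch is that the VMM class is never rewritten: the OVM view simply holds a handle to it and delegates, so constraints and coverage keep running in their native methodology.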
Option 2 also doesn’t address some basic underlying issues such as messaging or simulation-phase synchronization. For example, when you print a log message from the VMM, will you be able to control it from the OVM portion of a testbench, and vice versa? On the simulation-phase side, how do you deal with OVM testbench components residing in a VMM testbench, where component configuration, running, and cleanup happen in slightly different ways from their VMM counterparts?
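A small sketch of the messaging mismatch may help. The class and message text below are invented; vmm_log, the `vmm_note macro, ovm_report_info, and OVM_LOW are the real VMM/OVM facilities, and the key observation is that each side’s filtering knobs are invisible to the other.

```systemverilog
class mixed_example;
  // VMM side: messages flow through a vmm_log instance and are
  // filtered by VMM's own verbosity and promotion settings.
  vmm_log log = new("my_xactor", "inst0");

  task vmm_side();
    `vmm_note(log, "sent packet");
  endtask

  // OVM side: the equivalent message goes through the OVM report
  // server, which has its own, independent verbosity and actions.
  // Turning down OVM verbosity does nothing to the `vmm_note above.
  task ovm_side();
    ovm_report_info("PKT", "sent packet", OVM_LOW);
  endtask
endclass
```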
The final option mentioned is the creation of a wrapper. The idea here is that, for all intents and purposes, a verification component would look as though it came from the native library rather than the underlying one (e.g. a VMM-based verification component wrapped in an OVM shell and used in an OVM environment). This is a difficult problem to solve, and doesn’t appear to be something the Accellera VIP TSC will fully address in the short term.
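One reason the wrapper approach is hard is that the two libraries run components through different lifecycles. A hypothetical OVM shell around a VMM transactor might map the phases roughly as follows; my_vmm_driver is an invented vmm_xactor subclass, while build(), run(), and start_xactor()/stop_xactor() are the real OVM and VMM hooks.

```systemverilog
// Hypothetical OVM wrapper around an existing VMM transactor.
class vmm_driver_wrapper extends ovm_component;
  my_vmm_driver drv;  // invented name for a vmm_xactor subclass

  function new(string name, ovm_component parent);
    super.new(name, parent);
  endfunction

  // OVM construction phase: build the underlying VMM transactor.
  virtual function void build();
    super.build();
    drv = new("drv", "inst0");  // follows vmm_xactor's (name, instance)
  endfunction

  // OVM run phase: hand control to the VMM start/stop protocol.
  // Deciding when (and whether) to call drv.stop_xactor() during
  // OVM's end-of-test handling is exactly the kind of phase-mapping
  // question the committee would have to settle.
  virtual task run();
    drv.start_xactor();
  endtask
endclass
```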
Finally, let’s explore the idea of converging on a single verification methodology, as proposed by Karen. Based on feedback I heard as I traveled around the world over the last couple of weeks, I think there is a strong desire to see the industry move in that direction. Two questions come to mind, though. First, and most obviously, should the methodology be based more closely on the VMM, on the OVM, or on an amalgamation of the two? Each vendor wants the Accellera VIP TSC to align more closely with verification features that play to its core strengths, so there will be a lot of posturing and positioning by the EDA vendors (and even some users) before all is said and done. Second, whether there should be a common verification library at all is another matter.
Here’s the thing. There are many competing libraries in the software world, many different operating systems, and many different ways to write a document electronically. I think shooting for a common verification methodology library is a worthwhile goal, but at some point someone is going to find that the “standard” doesn’t meet their needs and will develop something new and, hopefully, even better (for them) than the “standard.” So let’s move toward a common approach while leaving open the possibility that some methodologies (like some languages) may be better suited than others to solving certain types of problems.