Verification Metrics Panel Discussion

Earlier in the week I wrote about the presentations that were given at the DV Club luncheon here in Austin.  Dave Williamson from ARM, Sanjay Gupta from IBM, and Shahram Salamian from Intel gave interesting talks on the types of verification metrics they've used on recent projects and issues they've faced when communicating those metrics to management.  Afterward, the speakers took questions from the audience:

  • How do you gauge the quality of checkers (other than from experience)?

    According to Shahram, Intel doesn't currently have metrics in place that can automatically determine the quality of checkers.  They do make a point of analyzing bugs as they are found and using that information to look for holes in the checkers (since checkers and existing tests that failed to catch a given bug are likely to be missing related bugs as well).

    Jim made the comment that you shouldn't trust someone who says they've covered specific testcases just because they have a checker in place.  His feeling was that you always need a test that explicitly exercises the functionality being checked by the checker.  My personal feeling is that it takes a combination of things - a checker, a test that exercises the functionality, and functional coverage metrics that have been shown to correctly identify whether a feature has been hit - to be certain everything has been done correctly (there's a toy sketch of this triad after the list below).

    The final comment from Sanjay was that it's important to write the appropriate level of functional coverage metrics and do coverage reviews. 

    It would have been interesting to hear any of the panelists talk about techniques such as extreme programming or agile project planning, but alas, they didn't.
  • How can one measure the productivity of a team?

    Shahram mentioned that Intel uses metrics like the number of lines of verification code written and the ratio of verification to design engineers.  I tend to agree with looking at the ratio (2:1 verification to design being optimal) but I'm not convinced lines of code are really related to an individual programmer's productivity (though perhaps useful in aggregate).  There's a back-of-the-envelope sketch of both metrics after the list below.

    Sanjay's focus was on reducing the number of people required on the project by automating as much as possible.  I do think process automation is a great goal - and one that needs to be balanced against the time available and the skills of the team members.

    Jim mentioned that he felt in many cases, productivity metrics were implicitly collected as a result of seeing whether a team was able to meet schedule dates provided by management.  I'd be very interested to hear comments from readers about this one...
  • Some of the functional coverage charts presented showed occasional dips.  What caused them?  How do you know how much functional coverage you need?

    The ARM team learned about coverage while working on their most recent project.  They kept adding metrics throughout, but found that the final year of the three-year project was the one most focused on coverage.

    The Intel team is always finding bugs outside the functional coverage space.  There is a constant effort to analyze bugs (as mentioned earlier) and figure out what else hasn't been covered. 

    IBM's challenge was that functional coverage points weren't always hit in the bench where they were planned (block, cluster, full chip, etc.).  Often points expected to be covered in one bench turned out to be much easier to cover in another (see the merging sketch after this list).
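
To make the checker/test/coverage triad from the first question a little more concrete, here's a toy sketch.  It's in Python rather than SystemVerilog or e just to keep it self-contained, and the DUT, the feature, and the coverage bins are all made up - the point is only that the checker, the directed test, and the coverage model give you three independent pieces of evidence.

```python
# Toy sketch of the checker / directed test / functional coverage triad.
# Everything here (the DUT, the bins, the test) is hypothetical.

class Dut:
    """Stand-in 'design': computes the parity (number of 1 bits mod 2) of a byte."""
    def parity(self, value: int) -> int:
        return bin(value & 0xFF).count("1") % 2

class Coverage:
    """Functional coverage model: one bin per interesting stimulus class."""
    def __init__(self):
        self.bins = {"zero": 0, "all_ones": 0, "single_bit": 0, "other": 0}

    def sample(self, value: int):
        if value == 0x00:
            self.bins["zero"] += 1
        elif value == 0xFF:
            self.bins["all_ones"] += 1
        elif bin(value).count("1") == 1:
            self.bins["single_bit"] += 1
        else:
            self.bins["other"] += 1

    def holes(self):
        return [name for name, hits in self.bins.items() if hits == 0]

def checker(dut: Dut, value: int):
    """Checker: compares the DUT against an independent reference model."""
    expected = sum((value >> i) & 1 for i in range(8)) % 2
    assert dut.parity(value) == expected, f"parity mismatch for {value:#x}"

def directed_test(dut: Dut, cov: Coverage):
    """Test that explicitly exercises the functionality the checker guards."""
    for stimulus in (0x00, 0xFF, 0x01, 0x80, 0x5A):
        checker(dut, stimulus)
        cov.sample(stimulus)

if __name__ == "__main__":
    dut, cov = Dut(), Coverage()
    directed_test(dut, cov)
    # Closure takes all three: the checker passed, the test demonstrably ran
    # the feature, and the coverage model confirms which bins were hit.
    print("coverage holes:", cov.holes())
```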
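
For the productivity question, here's a back-of-the-envelope sketch of the two metrics Shahram mentioned.  The directory names, file suffixes, and headcounts are all hypothetical; a real flow would pull them from the actual repository and staffing plan.

```python
# Hypothetical productivity metrics: verification lines of code and the
# verification-to-design engineer ratio.  Paths and headcounts are made up.
from pathlib import Path

def count_lines(root: str, suffixes=(".sv", ".svh", ".e")) -> int:
    """Count non-blank lines under a source tree (returns 0 if it doesn't exist)."""
    total = 0
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix in suffixes:
            text = path.read_text(errors="ignore")
            total += sum(1 for line in text.splitlines() if line.strip())
    return total

verif_loc = count_lines("verif")                  # assumed testbench directory
design_loc = count_lines("rtl", (".v", ".sv"))    # assumed RTL directory
verif_engineers, design_engineers = 10, 5         # made-up headcounts

print(f"verification LOC: {verif_loc}, design LOC: {design_loc}")
print(f"verif:design engineer ratio = {verif_engineers / design_engineers:.1f}:1")
```

Of course, nothing in a raw line count says whether the lines are any good - which is exactly why I'd only trust it in aggregate.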
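
As for the cross-bench issue IBM described, the mechanics boil down to merging coverage results from each bench and comparing them against where each point was planned to be hit.  Here's a minimal merging sketch with made-up point names and bench levels:

```python
# Hypothetical cross-bench coverage merge: compare where each coverage point
# was planned to be hit against where it was actually hit.

# Made-up verification plan: coverage point -> bench where it was planned.
plan = {
    "fifo_overflow":   "block",
    "cache_eviction":  "cluster",
    "dma_burst_abort": "full_chip",
}

# Made-up merged results: bench level -> points actually hit there.
results = {
    "block":     {"fifo_overflow"},
    "cluster":   {"fifo_overflow", "dma_burst_abort"},
    "full_chip": set(),
}

for point, planned_bench in plan.items():
    hit_in = [bench for bench, hits in results.items() if point in hits]
    if not hit_in:
        print(f"HOLE:  {point} never hit (planned at {planned_bench})")
    elif planned_bench not in hit_in:
        print(f"MOVED: {point} planned at {planned_bench}, hit at {', '.join(hit_in)}")
```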

I'd like to close my summary of the panel discussion with an interesting quote from the book "Quality Software Project Management" by Futrell, Shafer, and Shafer.

"... the act of taking measurements will, in and of itself, affect the quality of a product, usually in a favorable way."

Indeed.
