Conservative vs. Liberal Programming Practices

Listening to the discussion about UVM extensibility today on the Accellera VIP-TSC call, I was reminded of a great post from Steve Yegge of Google.

It's *very* long, but a good read. In summary, Steve proposes that there are two competing world views when it comes to programming:

"Conservative" programming views

  1. Software should aim to be bug free before it launches.
  2. Programmers should be protected from errors. 
  3. Programmers have difficulty learning new syntax. 
  4. Production code must be safety-checked by a compiler. 
  5. Data stores must adhere to a well-defined, published schema. 
  6. Public interfaces should be rigorously modeled. 
  7. Production systems should never have dangerous or risky back-doors. 
  8. If there is ANY doubt as to the safety of a component, it cannot be allowed in production.
  9. Fast is better than slow. 

"Liberal" programming views

  1. Bugs are not a big deal. 
  2. Programmers are only newbies for a little while. 
  3. Programmers figure stuff out amazingly fast when their jobs depend on it. 
  4. Succinctness is power. 
  5. Rigid schemas limit flexibility and slow down development. 
  6. Public interfaces should above all else be simple, backward-compatible, and future-compatible.
  7. System flexibility can mean the difference between you getting the customer (or contract) vs. your competitor nabbing it instead. 
  8. Companies should take risks, embrace progress, and fiercely resist ossification. 
  9. Premature optimization is the root of all evil. 

Steve's point is that everyone falls somewhere on the spectrum between conservative and liberal (in the programming sense), whether they realize it or not.

On the Accellera VIP-TSC, and in our everyday verification work, we often have debates in which one side or the other stakes out a technical position as if it were based on a fundamental law of nature. It can be useful to admit to yourself that there are different views of acceptable programming practice, each with its own pros and cons. Understanding this can improve your interactions with your team members, and it can allow a team to make conscious decisions about preferred styles more efficiently as it moves through a development process.


Characteristics of Quality VIP

Many companies I work with end up using some form of verification IP (VIP) to help speed up and improve the quality of their verification effort. And most of these companies soon discover that the term "quality VIP" is an oxymoron. Engineers have grand expectations when trying to convince their management to invest in VIP. They want VIP that (in order of importance)...

  1. Is free of major bugs
  2. Is easy to integrate into an existing environment, and comes with initial vendor support for such integration
  3. Comes with clear documentation and working code examples showing typical and advanced use models
  4. Has an interface compatible with the testbench language and methodology library they're using (SV/UVM/OVM/VMM, e/eRM, C/SystemC, etc, etc, etc)
  5. Makes it easy to write tests and debug simulations
  6. Comes with a pre-packaged set of tests, checkers, and assertions for the interface in question
  7. Does not significantly slow down simulations
  8. In some cases, can be synthesized for use with emulators
  9. And, in a perfect world, provides full access to source code to make it possible to debug issues that inevitably come up

None of the above specifically requires that the core of any VIP be written in any particular language. So vendors that focus on the fact that the core of their VIP is written in, for example, SystemVerilog are really, in my view, missing the point. A specific VIP implementation may help vendors reach the goals above, but if the goals are not met, especially items 1-6, the implementation choices made are irrelevant.

I've worked with several customers who have used VIP on their projects. And in almost every case, there were issues with the VIP in question that caused significant project delays (on the order of 1-6 months). These issues spanned multiple vendors and multiple VIP implementation styles.

So the next time you're speaking with your vendor, don't be fooled by talk about VIP implementation styles. Ask them how they meet the requirements listed above. 

Are there other characteristics you look for when evaluating VIP? Let me know in the comments below.

And check out the Austin SNUG 2012 papers from Asif Jafri on Low Power Verification and Jonathan Bromley on de-mystifying SystemVerilog clocking block usage in the Resources -> Papers and Presentations section of the Verilab website.


The $10,000 ASIC

I’ve been to DAC each year since 2007. The first time I went to DAC, everything was new and exciting. After a few years, I realized a lot was the same as the year before. To try to get some of the excitement back, I decided to set a goal for myself for the conference this year: I would pick a question and spend the conference discussing it with as many people as possible. The scenario I chose was:

Imagine you had a team of engineers with expertise in chip architecture, design, and verification available, but a non-recurring tool and IP budget of only $10,000. Under this scenario, is it currently possible to design and create a prototype of the chip using modern techniques? If not today, what would it take to make this possible 10 years from now? And if it were possible to design and create a prototype for $10,000 in tool and IP costs, would that make it easier for you to design chips?

Now, by “prototype” I meant an FPGA prototype that would be suitable for proving to an investor (internal or external) that it would be worth the effort to take the next step and spend a few million dollars going through the process of fabricating the device. And I’m not referring to a “student-grade” design. Think more along the lines of the types of complex ASICs startups are developing today. I asked if reducing tool costs to such a small number would make things easier because it is entirely possible that tool costs are not anywhere close to the gating factor when a company decides to invest in a new chip development effort. For example, if tool costs were only 10–20% of the total, and engineering time was the main gating factor, it might make only a marginal difference if that 10–20% were reduced to practically zero.

Continue reading "The $10,000 ASIC" »


Delivering Accurate Project Schedules

I was cleaning out my inbox this afternoon and came across a mail thread I'd exchanged with Gary Smith just after DVCon 2011 on project planning. Gary had mailed me a couple of slides, the contents of which he graciously gave me permission to share. The first described Gordon Bell's Law of Project Management:

A = Actual time to reach first milestone
S = Scheduled time to reach first milestone 

Bell Factor = A / S

Then, multiply the program schedule by the "Bell Factor." This gives you the actual time it will take to complete the design.

If the Bell Factor is 1.0, the schedule has too much fat.

If the Bell Factor is 1.2, you have a well-run program.

If the Bell Factor is over 1.8, (fire everyone and [*]) start over.

[*] added by Gary.
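
To make the arithmetic concrete, with numbers I've invented for illustration:

S = 4 weeks (scheduled time to first milestone)
A = 6 weeks (actual time to first milestone)

Bell Factor = A / S = 6 / 4 = 1.5

A 12-month program schedule then projects to 12 x 1.5 = 18 months.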

The second described Jen-Hsun Huang's "Three Miracle Law":

0 miracle design: You will drop behind your competition.

1 miracle design: You will keep up with your competition.

2 miracle design: You will leap ahead of your competition.

3 miracle design: Your design will fail, your start-up will fail, and you will be out of a job.

Both of these quotes are especially relevant when I work with companies to enhance their project planning capabilities. One of the biggest roadblocks to accurate project planning, in my view, is the belief that it is possible to give an exact date when a specific task will be completed. In my experience, the only way to predict exactly how long something will take is to have done it before. And if you've done it before, you are likely operating with a "0 miracle design". And clearly, we don't want to drop behind our competition.

So is there a better way? Agile planning gives us another option - planning poker, combined with continuous data collection of how much time it actually took to complete estimated tasks. The idea is to separate the estimation of the relative magnitude of tasks from the calculations that determine task end dates. During a planning poker session, engineers use unitless cards to estimate magnitude. The group then determines how many estimated tasks they would like to try to accomplish during a given 2-3 week period. And at the end of the period, the project manager keeps track of how many tasks (and thus points) were completed. Over time, it becomes possible to estimate how long the team as a whole will take to complete a certain number of points' worth of tasks. It is worth pointing out, though, that it is still not possible to estimate a single task exactly; this is a technique that works at the team level.
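
As a quick illustration, with completely made-up numbers:

Iteration 1: 30 points completed
Iteration 2: 26 points completed
Iteration 3: 28 points completed

Velocity = (30 + 26 + 28) / 3 = 28 points per iteration

A 200-point backlog then projects to roughly 200 / 28, or about 7 iterations - somewhere around 14-21 weeks at 2-3 weeks per iteration.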

I consistently find that planning meetings are much less stressful and produce more accurate results when planning poker is used instead of something such as ideal person-days. Why? Because it is impossible to game the system. If everyone thinks they are being clever by picking low numbers in order to please management, that will simply change the number of points completed during each planning period. So as long as the team remains consistent in the point values it assigns to certain types of tasks, the time it will take to finish a group of tasks will become clear.

 


UVM Drivers and Monitors

The UVM User Guide recommends that an agent be composed of a driver, monitor, and sequencer (UVM 1.1a User Guide, pg 35):

[Figure: block diagram of a UVM agent containing a sequencer, driver, and monitor, reproduced from the UVM 1.1a User Guide]

But I am frequently amazed to find how many verification engineers insist that creating a monitor is often not useful. These engineers prefer to perform checking based directly on the stimulus generated in a test, sequence, or driver. Why should we waste time creating a monitor, they argue, when we have all the info we need right here in the driver?!

For the record, you should, under almost every circumstance, create a monitor that can be run in a completely passive fashion when creating an agent for a DUT interface. This is because as your verification effort progresses from block to full chip, you will often want to reuse checkers. And in the full chip, a checker must frequently be passive - it is observing what is going on inside the DUT without having any direct control. If you construct your testbench using checkers based on stimulus, you will eventually have to rewrite those checkers if you hope to use them in a full chip environment. 
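
To make the structure concrete, here is a minimal sketch of the kind of agent the UVM User Guide describes. The my_driver, my_sequencer, and my_monitor classes are hypothetical placeholders for your own protocol-specific components; the point is simply that the monitor is always built, while the driver and sequencer are built only when the agent is active.

    import uvm_pkg::*;
    `include "uvm_macros.svh"

    class my_agent extends uvm_agent;
      `uvm_component_utils(my_agent)

      my_monitor   m_monitor;    // always present, so the agent can run passively
      my_driver    m_driver;     // built only when the agent is active
      my_sequencer m_sequencer;  // built only when the agent is active

      function new(string name, uvm_component parent);
        super.new(name, parent);
      endfunction

      function void build_phase(uvm_phase phase);
        super.build_phase(phase);
        m_monitor = my_monitor::type_id::create("m_monitor", this);
        if (is_active == UVM_ACTIVE) begin
          m_driver    = my_driver::type_id::create("m_driver", this);
          m_sequencer = my_sequencer::type_id::create("m_sequencer", this);
        end
      endfunction

      function void connect_phase(uvm_phase phase);
        if (is_active == UVM_ACTIVE)
          m_driver.seq_item_port.connect(m_sequencer.seq_item_export);
      endfunction
    endclass

At full chip, the same agent is simply configured with is_active set to UVM_PASSIVE, and the monitor (along with any checkers fed by its analysis port) keeps working without a driver in the loop.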

You were planning to reuse your checkers... right?

 

-----

Want more info on writing testbench stimulus? Check out this paper on scenarios and sequences from DVCon 2010.

 


Are You Ready For a Change?

I recently read an article in the Austin American Statesman that got me thinking. Back in 2007, a trucker, Louis Martinez, was fired from his job for "refusing to drive a truck carrying a load of steel shelving that was stacked higher than allowed and was improperly secured with broken straps." According to the Statesman:

It was the fifth time the company, Safeshred Inc., had asked him to drive an improperly loaded or permitted truck, the Supreme Court acknowledged. After pointing out the safety concerns, Martinez agreed to drive the truck but soon returned after feeling the load shift, the court said, adding that he was fired after refusing an order to return to the road.

Interestingly:

Had Martinez chosen to drive the truck and been hurt, he could have sued Safeshred and sought punitive damages based on "the employer's malicious intent in ordering the illegal act," the ruling stated.

But by refusing to drive, Martinez never performed the illegal, and potentially dangerous, act he was ordered to perform. "Thus, allowing punitive damages based on the unrealized consequences of the illegal directive would amount to impermissibly punishing the employer for harm the plaintiff never actually endures," Lehrmann wrote.

So, to recap: employee refuses to do something dangerous, gets fired, and would have been better off if he had just done the dangerous thing and sued for damages after the predictable bad thing happened. Sound familiar? 

Continue reading "Are You Ready For a Change?" »


UVM and the Death of SystemVerilog

Earlier this year, the Accellera VIP-TSC approved version 1.0 of the UVM. Supported by all of the major EDA vendors, the UVM has been billed as the next generation in verification methodology goodness. Better than the VMM. Better than the OVM. A chance for the verification community to shed some of the baggage carried over from years of backward-compatibility requirements and methodology fits and starts. Another purported benefit is that testbenches written with SystemVerilog/UVM can be more easily ported to simulators from different vendors. There is also a developing market in UVM verification IP to allow testbenches, in theory, to be quickly constructed from commercially available components.

All of this sounds great, right? Vendors standardizing on languages and methodologies and competing on tools. It's how the world should be. Except there are a few small problems that vendors are unlikely to tell you about before you start your next project.

First and foremost is a problem that is glaringly obvious to anyone who's tried learning SystemVerilog and the UVM (or one of the other VMs over the years): it's difficult and time-consuming to learn SystemVerilog and any of the VMs... especially if you have never used a verification language before. Folks with limited software backgrounds (read: most design and verification engineers) find seemingly simple concepts like inheritance and factories to be mind-boggling, even if they won't admit it. And folks with deep software backgrounds will find SystemVerilog an absolute pit of despair compared with modern languages such as Python and Ruby, and the UVM complex in a way that is clearly meant to patch over serious deficiencies in the underlying language. Plus, any testbench that has to deal with multi-language issues is clearly out of luck in the simplicity and ease-of-use department.
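
For anyone wondering what the fuss is about, here is a rough sketch of the kind of factory usage that trips up newcomers. The base_txn, error_txn, and error_test names are invented for illustration; the idea is that a test can ask the factory to substitute a derived transaction type everywhere the base type is created, without editing the sequences or driver that use it.

    import uvm_pkg::*;
    `include "uvm_macros.svh"

    // A base transaction, registered with the UVM factory.
    class base_txn extends uvm_sequence_item;
      `uvm_object_utils(base_txn)
      function new(string name = "base_txn");
        super.new(name);
      endfunction
    endclass

    // A derived transaction that might, say, add error injection.
    class error_txn extends base_txn;
      `uvm_object_utils(error_txn)
      function new(string name = "error_txn");
        super.new(name);
      endfunction
    endclass

    // A test that overrides base_txn with error_txn via the factory.
    class error_test extends uvm_test;
      `uvm_component_utils(error_test)
      function new(string name, uvm_component parent);
        super.new(name, parent);
      endfunction
      function void build_phase(uvm_phase phase);
        super.build_phase(phase);
        base_txn::type_id::set_type_override(error_txn::get_type());
      endfunction
    endclass

Simple enough once you've seen it, but the indirection (type_id, static overrides, create() instead of new()) is exactly the sort of thing that leaves newcomers scratching their heads.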

Now that the UVM has arrived and the methodology bickering between the major vendors has mostly (well, somewhat) ceased, the complexity of the UVM and the earlier VMs on which it is based can be viewed more clearly and with less controversy. And the results are not good. After years of experience working with many different clients, it seems the only way out of our current dilemma is to start looking at other languages and development frameworks. For that to happen, major semiconductor companies may need to start funding this type of development again, since it is abundantly clear the EDA vendors are incapable of this level of innovative thinking. Or, more kindly, perhaps they feel there is no money in innovation. Either way, major advances in design and verification productivity need to get here sooner rather than later.


SystemVerilog 2012, DVCon 2010, and Travel

Now that the SystemVerilog 2009 standard has been released, the P1800 working group is getting ready to start work on the next version of the SystemVerilog standard.  As part of that effort, they are soliciting feedback in preparation for an open meeting on February 26 (the Friday after DVCon) where new features will be discussed (see also Brad Pierce's blog article on the topic).  Since you'll already be at DVCon (you ARE going to DVCon, right?), it should be easy to take the time to attend the IEEE meeting the next day.  Having said that, I'm going to miss the meeting as I'll be on my way to Shanghai to kick off a week of workshops in Asia, but one of my colleagues from Verilab may be attending.  

Speaking of DVCon, the early registration deadline is fast approaching.  Full conference registration is $485 before January 29.  After that, registration goes up to $565. The conference should be outstanding this year, despite the fact that I will be more actively involved than ever before ;-).  If you're planning to attend, please stop by and say hello at one of the following events:

  • Monday Tutorial: Advanced Verification Techniques Using VMM - I'll be presenting for 45 minutes on the new phasing and factory features in the VMM 1.2.
  • Wednesday Industry Leader's Panel: "What Keeps You Up At Night?" (2010-02-17) - I'm moderating this panel - check out my recent post What Keeps You Up at Night for details
  • Thursday session 12.1: Stimulating Scenarios in the OVM and VMM - My favorite methodology topic, now a DVCon paper written with my coauthor Scott Roland.
  • Thursday Panel: Ever-Onward! Minimizing Verification Time and Effort - Somehow I managed to sneak onto this panel with this distinguished group of panelists. Come by and heckle me for a bit while the votes for best paper are calculated.

As a side note, I'll be traveling extensively in the next couple of months.  If you're located in any of the following cities, send me a note and perhaps we can catch up while I'm in town:

  • Boston (this week!)
  • Mountain View (next week)
  • Irvine (next week)
  • San Jose (for DVCon)
  • Shanghai (first week of March)
  • Taipei/Hsinchu
  • Tokyo

Finally, if any of you are reading this and thinking, "why isn't JL writing about something more interesting like, say, the UVM?", please send me a note letting me know. I've had blog writer's block/burnout for the last few months and could use the encouragement ;-).


The Relevance of Formal Methods

In a fascinating (to me) twist of fate, I will be moderating a panel on the “next big thing” in formal methods at FMCAD 2009 in Austin, Texas. The panel, entitled “What will be the next breakthrough solutions in formal?”, is being held from 11:50-14:00 on Wednesday, November 18, and is made up of four distinguished panelists:

  • Harry Foster, Mentor Graphics
  • Ziyad Hanna, Jasper Design Automation
  • Kevin Harer, Synopsys
  • Axel Scherer, Cadence

Those of you who have been reading Cool Verification for a while will note that I have never really discussed the topic of formal methods on this blog. Truth is, I’ve yet to come across a situation on a project where I needed to use formal to get the job done. My theory is that there are a couple of issues preventing widespread adoption of formal tools in a standard verification flow. First, the impression I get is that using formal well requires special expertise to write appropriate properties, partition the design, and interpret the results of the tools. Second, the fact that you need special tools at all. In other words, needing a separate tool beyond Questasim, VCS, or IUS (not to mention a separate license) makes it difficult for someone to try out formal techniques within the tool flow of a typical project.
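
To give a flavor of what “writing appropriate properties” involves, here is a small, hypothetical SystemVerilog assertion of the sort a formal tool would attempt to prove. The req/gnt handshake and the 8-cycle bound are invented for illustration; real properties are rarely this tidy, which is part of the expertise problem.

    // Hypothetical handshake rule: once req rises it must stay high until
    // gnt arrives, and gnt must arrive within 1 to 8 cycles.
    module req_gnt_checker (input logic clk, rst_n, req, gnt);

      property p_req_held_until_gnt;
        @(posedge clk) disable iff (!rst_n)
          $rose(req) |-> req throughout (##[1:8] gnt);
      endproperty

      a_req_held_until_gnt: assert property (p_req_held_until_gnt)
        else $error("req dropped or gnt did not arrive within 8 cycles");

    endmodule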

During the panel I plan to ask panelists about just such issues, plus several other questions proposed by thoughtful engineers like you on the Google Moderator site set up for this purpose. In fact, I’d like to request your assistance. If you have a question you’d like to see asked of one of the panelists, please submit it (or vote on your favorites) via the Google Moderator site or by mailing me directly at jl at coolverification dot com.

I’m also quite interested to hear any stories readers may have with respect to the adoption (or lack thereof) of formal tools in your respective companies and current projects.  There are several types of formal tools out there… Any types of tools you’ve had great success with in particular? Or great failures? Does the choice of formal technique heavily depend on the application domain in question? What do you think the EDA vendors need to address in tool functionality/usability in order for you to consider adopting formal more broadly?

And of course, if you’re planning on being in Austin for FMCAD please let me know!


Do You Know Your Project's 'Truck Factor'?

Yesterday I presented another full-day workshop on verification planning - this time in Denver. During these workshops I cover two major topics. First, we discuss a framework to help you understand your design: we break a design into "efficient" subsections, and then search for "operations" that make up each of these subsections. Second, once you have an understanding of your design, we discuss the concept of a Failure Mode and Effects Analysis (FMEA). An FMEA is a cross-functional meeting used to brainstorm possible faults and failures. We use that information to come up with a verification plan.

Continue reading "Do You Know Your Project's 'Truck Factor'?" »