Moshe Gavrielov, General Manager and Executive Vice President of Cadence's Verification Division, provided the keynote this morning at DVCon, entitled "Taking An Enterprise-Wide Approach to Next-Generation System-Level Development". The overall theme of the talk was that verifying today's complex systems is an entirely different problem from verifying the smaller designs of the past, and that solving this new problem requires a more comprehensive solution.
Asking questions such as:
- "This could never happen in electronics… or could it?"
- "Systems are immune to these problems… or are they?"
- "Automobiles are fully tested… aren't they?"
and giving examples of several recent high-profile design failures from companies such as Boeing, Airbus, Intel, Transmeta, and Segway, Gavrielov attempted to drive home the need to manage the complexity and risk of system development.
What are typical risks? According to Gavrielov, a survey of product failure contributors showed that 70% of failures were caused by logical/functional bugs. It doesn't help that gate counts and lines of embedded software are going up as well. In fact, "project costs are getting much, much larger", he said.
Productivity, quality, and predictability risks are also ever present in large design projects. The solution? Enterprise System Level (ESL) design and verification. An ESL approach should allow teams to "manage system complexity and risk", Gavrielov said, and will enable a "predictable and rapid path to system level quality".
Gavrielov believes that high-speed simulation engines (my assumption is that this broadly encompasses languages such as e, SystemVerilog, and SystemC, as well as specific point tools) are an "important, but not sufficient" characteristic of a robust solution. In addition, an approach is needed for planning and management so that project information can be shared between engineers and managers. In other words, it's the process, not just the underlying tools, that makes for a successful project. Another key, according to Gavrielov, is the creation of an executable verification plan.
Those of you familiar with the Cadence verification flow will recognize the use of the phrase "executable plan". vManager, a Cadence tool that has been around since 2004, is used by many verification teams to capture information about the current state of a verification effort and to present views of that information to engineers and management. By mapping functional coverage data directly onto the verification plan, users can see how far they've progressed in the verification effort.
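To make the idea concrete: the core of an "executable plan" is a hierarchy of plan sections, each mapped to coverage points, with progress rolled up from the leaves. The sketch below is a generic illustration of that concept only — the class and field names are my own invention, not vManager's actual data model or API.

```python
from dataclasses import dataclass, field

@dataclass
class PlanSection:
    """One section of a verification plan, mapped to coverage points.

    Each coverage point is a (name, hits, goal) tuple; subsections
    roll their progress up into the parent. (Hypothetical model,
    not Cadence's actual schema.)
    """
    name: str
    coverage_points: list = field(default_factory=list)
    children: list = field(default_factory=list)

    def progress(self) -> float:
        """Average fraction of coverage goals met, including subsections."""
        scores = [min(hits / goal, 1.0)
                  for _, hits, goal in self.coverage_points]
        scores += [child.progress() for child in self.children]
        return sum(scores) / len(scores) if scores else 0.0

# A tiny example plan for a bus interface:
plan = PlanSection("bus_if", children=[
    PlanSection("reads", [("burst_read", 40, 40), ("single_read", 10, 40)]),
    PlanSection("writes", [("posted_write", 0, 10)]),
])
print(f"{plan.name}: {plan.progress():.0%} complete")
```

The point of the structure is the one Gavrielov made in the talk: because coverage data feeds the plan directly, the same object answers both the engineer's question ("which points are unhit?") and the manager's question ("how far along are we?").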
At the end of the talk, John Cooley asked whether Gavrielov was trying to sell something. Though Gavrielov denied it, in my opinion the answer was most certainly yes. While the general concepts could be applied by anyone doing design and verification, the specific terminology used throughout the speech aligned perfectly with Cadence's Incisive Enterprise System-Level (ESL) Verification Solution and the Incisive Plan-to-Closure Methodology. A quick glance through the DVCon Events Page on the Cadence web site shows how the topic of Gavrielov's keynote fit perfectly with the types of products being demoed by Cadence during other portions of the conference. When you look at the page, note that there is not a single reference to SystemVerilog, Specman, or the e language.
Another member of the audience asked Gavrielov about the additional layers of infrastructure he was adding to what should be considered a robust solution to the verification problem. "This additional infrastructure needs to be sold to management. If you fail, it's easy to motivate them to invest. But what do you tell them if you haven't failed yet? Should everyone just hurry up and fail?", said the audience member. Gavrielov responded that managers who used to be involved in project work but have moved on may be oblivious to the problem, and that engineers flogging their managers won't help the issue. His feeling is that soon it will be clear that it's not possible to verify a large, complex project in any other way. In fact, "projects that plan for respins are doomed for failure", said Gavrielov. In other words, the complexity of large design projects is such that you need to plan to do things right the first time, and shouldn't rely on the safety net of a respin to cover for a bad front-end methodology.
For another take on the keynote, please check out Richard Goering's EETimes article entitled "DVCon keynote: Verification takes a broader view". I had the opportunity to meet Richard earlier today, and all I can say is that after spending a couple of days trying to write interesting material about DVCon, I'm impressed as hell at how quickly he was able to get his high-quality article posted (just an hour or two after the keynote). Of course, I'm sure it helps to have many, many years of experience in the news business. But, in my defense, Richard doesn't have a cool photo from the conference in his story!