It's late... very late... and I'm working on slides for one of my DAC presentations, "Zero to Sequences in 30 Minutes". My slides contain source code, and the code is not easy to read. I thought, "Wouldn't it be great if I could colorize the code?" Luckily, Sean over at IntelligentDV wrote a blog article on exactly this topic last year. However, did I mention it's very late? So the idea of screwing around with a new editor and installing syntax files didn't really appeal to me. However, after reading that article plus some follow-up Google searching, I realized if I could get vim to save its syntax-highlighted output as HTML I could copy that directly into PowerPoint. But how?
This blog post by "Automatthias" describes one solution. From Vim, run the following command in a syntax-highlighted buffer:
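(The command itself didn't survive here, but for reference, the standard way to do this is Vim's built-in HTML converter — exposed as `:TOhtml` in recent versions, or invoked directly as a script in older writeups like the one linked above:)

```vim
" Convert the current syntax-highlighted buffer to HTML.
" Modern Vim exposes this as a built-in command:
:TOhtml
" Older posts invoke the same bundled script directly:
:runtime! syntax/2html.vim
" Either way, a new buffer named <file>.html opens; write it out:
:w
```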
This saves the current buffer with a .html extension. Open that file in your favorite browser and you can copy the colorized text directly into PowerPoint! The only catch: make sure you select "Keep Source Formatting". Otherwise, the colors will be lost.
By the way, as many of you are aware, Denali is holding a competition to determine "EDA's Next Top Blogger". There are several excellent bloggers in contention, including yours truly. I've had the pleasure of meeting and working with some of them, including Karen Bartleson, Harry Gries, and John Busco (who I believe is the only one of the entire group of contestants who's been blogging longer than me), and let me tell you, I'm honored to be in such good company! If you've enjoyed reading Cool Verification over the last 4 years (yes, it's been that long!) please head over to the Denali Night Live site and cast your vote!
Those of you who attended SNUG in San Jose know that Cliff Cummings of Sunburst Design was a co-author on our award-winning paper on multi-stream scenarios in the VMM (honorable mention - technical committee award). Cliff (who won Best Paper at SNUG this year) has been a great friend and partner to me and my colleagues here at Verilab. As part of our continuing collaborative efforts, I’m planning on sitting in on Cliff’s SystemVerilog course next week in Beaverton, OR.
Sunburst Design is offering Cool Verification promotional pricing of $2,200 for the 4-day Advanced SystemVerilog for Design & Verification class or $1,650 for the 3-day Advanced SystemVerilog for Verification class.
To get this pricing, you must register at the web site:
Also, as a special thanks from me for signing up on such short notice, I’m offering to present a new seminar I’m working on as a private webinar to the first two registrants using the promotional link above[*]. Here’s the abstract of that material (subject to change):
Choosing a SystemVerilog base class library can be a difficult task, as it is not always clear which features are critical to enhancing productivity. EDA vendors heavily market their solutions but are not able to provide an unbiased viewpoint on the differences between their solutions and others. In this one-hour presentation, JL Gray from Verilab will review the major features of the VMM and OVM and describe which features should be given the utmost consideration during the selection process. He will then delve deeper into key topics.
Cliff is also offering special pricing for displaced engineers seeking this or other Sunburst Design training. For details please visit the web site:
If any of you are in the Portland area or can make it up there on short notice, I’d enjoy the opportunity to spend the week with you!
[*] Restrictions apply. Contact me for details.
Ok, I know what you’re all thinking… “JL, you haven’t even finished writing up DVCon yet and now you’re talking about SNUG?!” Yes, I know. Let’s just say an annoying stream of illnesses has taken its toll on the Gray family (and many other folks, as best as I can tell) over the last few weeks. But things seem almost back to normal now, and I didn’t want to miss out on an opportunity to let everyone know about my official debut as a conference paper presenter at SNUG San Jose next week.
My Verilab colleagues Jason Sprott and Sumit Dhamanwala, along with Cliff Cummings from Sunburst Design and yours truly, authored a paper entitled “Using the New Features in VMM 1.1 for Multi-Stream Scenarios”. I’ll be presenting the paper during session MA4: Verification with VMM I on Monday at 11am. Those of you who attended one of the Verification Now 2008 seminars back in the fall will recognize the topic. I discussed the yet-to-be-announced Multi-Stream Scenario additions to the VMM in one of my presentations.
Unlike my Verification Now presentation which compared stimulus in the OVM to the VMM, the SNUG presentation will delve into the topic of Multi-Stream Scenarios in the VMM in more detail. Specifically, I will review the following topics:
- Recap: Single Stream Scenarios
- Complex Stimulus with Multi-Stream Scenarios
- Multi-Stream Scenario Registries (Channel, MSS, and MSSG)
- Single Stream vs. Multi-Stream Scenarios
- Resource Sharing: Grab/Ungrab
- Multi-channel grab
One of the things I hope to touch on is the importance of using the registries when building multi-stream scenarios instead of directly instantiating sub-scenarios, channels, or scenarios from other multi-stream scenario generators (using the generator registry). Those features were added to the MSS solution to allow integrators and test writers to modify the behavior of specific scenarios and scenario generators without having to modify the underlying scenarios themselves.
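As a rough sketch of the registry idea (note: the class and method names below — `vmm_ms_scenario`, `vmm_ms_scenario_gen`, `register_channel()`, `get_channel()`, and the hypothetical `eth_chan`/`usb_chan` handles — follow my reading of the VMM 1.1 MSS API; check the VMM 1.1 documentation for exact signatures before relying on them):

```systemverilog
// Sketch only -- names assume the VMM 1.1 MSS API; verify against
// the official VMM 1.1 documentation.
class eth_usb_scenario extends vmm_ms_scenario;
   virtual task execute(ref int n);
      // Channels are looked up by name through the registry rather
      // than held as direct handles, so an integrator or test writer
      // can remap them without editing this scenario.
      vmm_channel eth_chan = this.get_channel("eth_chan");
      vmm_channel usb_chan = this.get_channel("usb_chan");
      // ... generate transactions and push them onto both channels ...
      n += 2; // report how many (sub-)scenarios were executed
   endtask
endclass

// In the environment, the integrator binds names to instances
// (env.eth_chan / env.usb_chan are hypothetical handles):
//   gen.register_channel("eth_chan", env.eth_chan);
//   gen.register_channel("usb_chan", env.usb_chan);
//   gen.register_ms_scenario("eth_usb", eth_usb_scn);
```

Because the binding happens by name in the environment, a test can substitute a different channel or scenario at that one registration point instead of subclassing or editing the scenario itself.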
Another goal is simply to promote the topic of reusable, multi-stream stimulus itself. The VMM has historically supported a flat testbench structure. New features such as MSS, when combined with the vmm_subenv, should lead to more reusable and maintainable testbenches.
I’m thrilled to have an opportunity to present at SNUG this year and to meet readers of this blog and my Twitter feed. I will be twittering SNUG using the hash tag #snug, unless someone gives me a good reason to use a different tag. Of course, that raises the question – who will twitter my presentation while I’m presenting? How about this… I will buy the person who Twitters the most insightful comments and/or questions during my presentation a delicious beverage of his/her choice. Any takers?
Last week I wrote a post about the Mentor OVM/VMM compatibility layer, and subsequently posted a response to that post from Tom Fitzpatrick. Now, it’s my turn to respond. Tom made several points that I want to address.
I'd like to share with everyone a response to my last blog post on the Mentor OVM/VMM interoperability solution from Tom Fitzpatrick over at Mentor. By the way, there are some good comments on the original post, as well as a pointer to the documentation, which I somehow couldn't find earlier in the week. Tom has made some excellent points below that I will address in a subsequent post. For now, enjoy!
Update 1/16/2009: Found the Mentor docs for the interop solution (thanks to Tom Fitzpatrick from Mentor) - fixed appropriate comments below.
Last month Mentor and Cadence each released separate versions of compatibility layers to allow VMM code to interoperate with OVM code on their respective simulators. I’ve spent some time over the last couple of days reviewing the Mentor solution and wanted to share my initial thoughts.
The Mentor solution uses a customized version of VMM 1.0.1. Unfortunately for Mentor and Cadence, Synopsys released a new version of the VMM, 1.1, a couple of weeks later which contains changes in stimulus generation capabilities (among other things) that might impact the way the compatibility solution is architected. I will be interested to see how Mentor modifies their solution to take the changes into account.
In addition to a modified version of the VMM, Mentor uses a library of compatibility widgets to deal with the following aspects of OVM/VMM interoperability (with examples of associated new classes):
Two interesting announcements in the last few weeks. First, Mentor and Cadence both announced compatibility libraries to enable VMM testbench components to interoperate within an OVM environment. Then, today, Synopsys announced updates to the VMM, including the addition of multi-stream scenarios, transaction iterators, and a Performance Analyzer package to assist in the gathering of statistical functional coverage metrics.
While I was preparing for my trip to Asia for the Verification Now seminar series, a colleague forwarded me a link to a recent article entitled “OVM vs. VMM: What’s Next?” on the System-Level Design Community over at Chip Design Magazine. In the article, Ed Sperling writes about the “battle for dominance between the Verification Methodology [Manual] (VMM) and the Open Verification Methodology ([OVM]).” The main focus of the article is the fact that Synopsys on one side and Cadence/Mentor on the other are each pushing their own verification methodology libraries. The article also discusses possible modes of interoperability between the OVM and VMM which are currently under review by the VIP TSC.
Karen Bartleson from Synopsys quickly responded to some inaccuracies in the article on her blog (I agree with her observations) and went on to say that she (and users) would like to see a single “Accellera-sanctioned standard.”
I’d like to take a moment to comment on some of the observations from Ed and Karen’s articles. First, a word on the different methods proposed to bridge the gap between the VMM and the OVM (also known within the committee as the “short term” solution). Ed’s sources mentioned three possible approaches:
- Bridge the environment. Create a compiler or other binary compatibility layer to allow the VMM and OVM to work together.
- Match the data types. Send results from the VMM and OVM to a common scoreboard, where data-type matching and comparison will occur.
- Wrap the code. Wrap components from one methodology library in another methodology library.
Let’s take them one by one.
Last Tuesday I was in Santa Clara presenting the first of the 2008 Verification Now seminars (the next one is tomorrow, October 21, in Austin). About 50 attendees showed up to hear presentations on Requirements Based Verification, Layered Stimulus Generation in the VMM and OVM, and to see demonstrations from Certess, Denali, and SpringSoft.
The trip was a bit of a marathon – I flew out to Santa Clara Monday afternoon, arrived at 5pm, and then headed over to the Denali office to work on recording a portion of the material in webinar format to assist with the translation efforts for the seminar in Japan. Apparently we’re going to have a translator in Japan live-translating my presentation. Participants will be able to wear headphones and listen to the translator if they don’t like listening to me directly, UN style! I was at the Denali office until around 8:30pm or so, and then headed back to the hotel to get some rest. Unfortunately, some pre-seminar jitters kept me awake later than I’d hoped (worrying that I wouldn’t hear the alarm and would be late to my own seminar ;-).