Meet AMIQ Consulting Panelists at CDNLive Munich

We are participating in CDNLive Munich on May 19-21, 2014. Two of our consultants will be presenting in the Canvas Conversations section on May 20, 2014 at 12:00pm. Aurelian Munteanu will talk about using regression automation to speed up regression analysis in “Be Fast, Stay Informed!”, and Daniel Ciupitu will talk about using smart generation to speed up coverage closure in “Coverage Aware Generation”.

We have posted a summary of their presentations below. We are keen to meet those of you participating in the conference and hear your insights and feedback. If you are not at the event, we will be happy to hear your thoughts on the topics through your comments.

Be Fast, Stay Informed!

The ASIC industry requires fast product development for reduced time to market, without any impact on quality. Verification consumes most of the development time – more than 60%. Pre-silicon verification is usually done in multiple iterations, each iteration including at least one regression. A regression consumes HW resources (e.g. CPU time), SW resources (e.g. licenses) and a lot of human effort. How can we optimize the regression process?

We have successfully implemented automated flows that integrate the capabilities of existing tools. For example, we enhanced eManager to report useful information about regression progress and status by e-mail, saving hours of regression analysis. We also save, when the regression starts, enough information to easily reproduce a failure later.
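To give a flavor of such a flow, here is a minimal Python sketch of the status e-mail step. It assumes a hypothetical one-test-per-line results file and a local SMTP relay; the file format, addresses and host are placeholders, and the actual eManager integration is not shown.

import smtplib
from email.message import EmailMessage

def mail_regression_status(results_file, recipients):
    # Parse a hypothetical results file with one test per line:
    #   <test_name> <PASS|FAIL> <seed>
    passed, failed = [], []
    with open(results_file) as f:
        for line in f:
            name, status, seed = line.split()
            (passed if status == "PASS" else failed).append((name, seed))

    msg = EmailMessage()
    msg["Subject"] = "Regression status: %d passed, %d failed" % (
        len(passed), len(failed))
    msg["From"] = "regression@example.com"  # placeholder sender
    msg["To"] = ", ".join(recipients)

    lines = ["Failing tests (name, seed):"]
    if failed:
        lines += ["  %s  seed=%s" % (name, seed) for name, seed in failed]
    else:
        lines.append("  none")
    msg.set_content("\n".join(lines))

    # Assumes a local SMTP relay; adjust the host for your site.
    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(msg)

Reporting the seed along with each failing test is what makes the second part of the flow work: a failure can be reproduced directly from the e-mail, without digging through the regression area.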

You can see more details here.

Coverage Aware Generation

When defining coverage items we need to make a trade-off between the number of coverage buckets, their size and the simulation time required to get them covered. This implies merging a large number of values into relevant intervals without losing quality. Critical values such as minimum or maximum are coded as individual buckets. For some coverage items we cannot anticipate the sensitive values, or there are too many of them. In such cases we typically define the coverage buckets as power-of-two intervals, in order to make sure that every bit of the covered field gets tested.
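As a sketch of the interval structure (in Python rather than in a coverage language, purely for illustration), a 16-bit field maps to 17 power-of-two buckets:

def p2_bucket(value):
    # Bucket 0 holds the single value 0; bucket i (i >= 1) holds the
    # interval [2**(i-1), 2**i - 1], so each bit position of the
    # covered field gets its own bucket.
    return value.bit_length()

# 16-bit field -> 17 buckets: {0}, {1}, [2,3], [4,7], ..., [32768,65535]
for v in (0, 1, 2, 3, 4, 255, 256, 65535):
    print("%5d -> bucket %d" % (v, p2_bucket(v)))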

This triggers another problem: the coverage intervals are uneven in size, and covering the smaller buckets takes a long time. In large projects it can take weeks of regression time to fill all of them.

From our experience, the solution is to generate items while being aware of how they will be covered. If the coverage item has power-of-two intervals, the generated values should not follow a flat distribution. A distribution that guarantees more hits for the smaller intervals and fewer hits for the larger ones makes more sense: all buckets get the same chance to be hit, even though one bucket may contain only one value while other buckets may contain thousands.
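A minimal Python sketch of such a distribution, assuming the 16-bit power-of-two buckets from above: pick a bucket uniformly first, then a value uniformly inside it.

import random

WIDTH = 16  # width of the covered field, for illustration

def gen_coverage_aware():
    # Pick one of the WIDTH + 1 power-of-two buckets uniformly, then
    # a value uniformly inside it, so the single-value bucket {0} is
    # as likely to be hit as the 32768-value top bucket.
    i = random.randint(0, WIDTH)
    if i == 0:
        return 0
    return random.randint(2 ** (i - 1), 2 ** i - 1)

For comparison, a flat distribution over the 16-bit range hits the bucket {0} on average once every 65536 draws, while the bucket-first scheme above hits every bucket on average once every 17 draws. In an e or SystemVerilog testbench the same shape would be expressed as a weighted distribution constraint; the Python here only illustrates the distribution.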

You can see more details here.
