Highlights of DVCon EU 2022

This post presents some of the highlights we jotted down on paper while attending DVCon Europe 2022 (6-7 December, Munich).

This was the first face-to-face edition after the pandemic and it showed: you could feel the energy, you could feel that people were happy to meet in person again.

Keynote: Challenges in SoC Verification for 5G and Beyond (Axel Jahnke, Nokia)

Axel zoomed into the verification challenges and possible solutions from the perspective of a 5G ASIC project. Besides the usual suspects (e.g. complexity, scale, critical dependencies, interoperability between EDA vendors etc.), Axel also addressed the human resources challenge: it is very hard to find [good] verification engineers (VEs). Here are his main points (in italics) on the topic, each followed by my personal view/experience:

  • Cultural bias: verification is not seen as a creative process or as an important part of the ASIC creation process. For many new graduates or accomplished engineers, verification equals testing equals dumb work. This bias has been around since the dawn of pre-silicon/functional verification, for a number of reasons. And, I think, it will continue to exist until companies [and academia] communicate the importance and challenges of verification better, that is: celebrate the verification sign-off, praise verification engineers' efforts to assure a functional design, celebrate the successful bug hunt, outsource the design work instead of the verification, push the verification status numbers up to the company's board of directors etc.
  • RTL designers are not interested in converting to verification engineers or taking over verification tasks. My experience tells me there are three reasons: the cultural bias (see above), the required skill set and the technology. Verification engineers are closer to SW development than to RTL design implementation, since they need more of a SW engineer's skill set, which requires some extra effort from design engineers to acquire. Besides that, there is also a "natural skill set": some people are better equipped to build things, while others are better equipped to find ways to break them. Regarding technology friendliness: whatever the EDA vendors sell in their presentations, the technology itself (i.e. SystemVerilog, UVM) is not RTL-designer friendly enough. I wonder what happened to the "every designer will be a verification engineer" selling point for SystemVerilog? What happened to all the effort put into the language to make it look like Verilog so that RTL designers jump on the verification train?
  • The verification technology (SystemVerilog, UVM etc.) needs quite a long induction program. SW engineers usually turn down the possibility of starting a verification career. And that is explainable, considering the cultural bias and the fact that SystemVerilog/UVM seems primitive to many CS graduates. And you cannot blame them: a SW programmer usually has experience with multiple [modern] programming languages, and if one compares SystemVerilog/UVM to those languages, one can see the limitations and the "80s" feature set.
  • Universities don't include verification in their curricula (e.g. courses, labs, BSc., MSc. or PhD programs). From my own experience I can say there is a cultural bias in academia as well; professors & co. see verification as a necessary burden, but one simple enough not to deserve a place in the curricula. People in academia are more interested in answering the question "what have you implemented so far?" rather than "what works today and how do you prove it works?". Maybe things will change a bit if professors also start asking for a verification report instead of a checklist of accomplished tasks.

All in all, Axel gave a full, honest and accurate picture of the verification challenges of real-life projects.

Tutorials

The first day was dedicated to tutorials and EDA tool marketing.

An End-to-End Approach to Design and Verification of Battery Management Systems: from Requirements to Virtual Field Testing

This tutorial is the result of the collaboration between NXP Semiconductors, MathWorks and Speedgoat.
I particularly enjoyed the walk through the verification process, from simulation on a desktop PC to using emulators, virtual batteries, power sources and, finally, the real batteries; it's an excellent case study of a multi-platform verification strategy.

The first part of the presentation focused on the utility and need of tracing requirements from the start of a project. The presenters showed an interface that can map each requirement to its implementation status (IMPLEMENTED, MISSING) and verification status (PASSED, FAILED, NOT EXECUTED). The second part focused on test harnesses, which are similar to SW unit tests (i.e. isolation of the component under test). They also presented the Model-Based Design Toolbox, which encapsulates all the necessary tools for verification and validation (e.g. HW access tools, debug tools, configuration tools, build tools, demo tools). Furthermore, this toolbox is encapsulated in the MathWorks ecosystem (i.e. MATLAB, Simulink).
Next, the presentation addressed virtual field testing, focusing on verification using Software-in-the-Loop (SIL) and Processor-in-the-Loop (PIL) techniques:

  • Model-in-the-Loop (MIL). In this step the model is simulated on a PC; its outputs become the reference results for the later steps.
  • Software-in-the-Loop (SIL). In this step everything described at the MIL step still happens, but on top of that an object file is obtained by compiling the code generated from the initial model. This object file is then executed locally and the results are compared with those from the MIL step.
  • Processor-in-the-Loop (PIL). Similar to the SIL step, the only difference being that the object file runs on the microcontroller, not on the PC.
  • Hardware-in-the-Loop (HIL). Similar to the SIL step, but using a real-time system (a minimal sketch of this back-to-back comparison follows below).

The architecture described above is used to verify battery management systems and seamlessly ties together all parts of the verification process (requirements, implementation, closure).
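
The MIL-to-SIL comparison boils down to running the same stimulus through the reference model and through the generated code, then diffing the outputs. Below is a minimal Python sketch of that idea; the battery model, function names and tolerance are my own illustration, not the MathWorks flow, which would use Simulink Test and code generated by Embedded Coder.

    # Minimal sketch of a MIL-vs-SIL back-to-back comparison (illustrative only).

    def reference_model(soc, current_a, dt_s):
        """MIL reference: a trivial battery state-of-charge integrator."""
        capacity_as = 3600.0  # 1 Ah battery in ampere-seconds (assumed value)
        return min(1.0, max(0.0, soc - current_a * dt_s / capacity_as))

    def generated_code(soc, current_a, dt_s):
        """Stand-in for the compiled object file exercised in the SIL step.
        In a real flow this would call the code generated from the model."""
        capacity_as = 3600.0
        return min(1.0, max(0.0, soc - current_a * dt_s / capacity_as))

    def back_to_back(stimulus, dt_s=0.1, tol=1e-9):
        """Run the same stimulus through both implementations, diff the outputs."""
        soc_mil = soc_sil = 0.8  # same initial state of charge for both runs
        for step, current_a in enumerate(stimulus):
            soc_mil = reference_model(soc_mil, current_a, dt_s)
            soc_sil = generated_code(soc_sil, current_a, dt_s)
            assert abs(soc_mil - soc_sil) <= tol, f"mismatch at step {step}"
        print(f"PASSED: {len(stimulus)} steps, MIL and SIL outputs match")

    back_to_back([1.0] * 100 + [-0.5] * 50)  # discharge, then charge profile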

The presenters promised to bring the actual testing kit for their next presentation. If they do, I would like to attend again.

The Open-Source DRAM Simulator DRAMSys4.0 (Dr. Matthias Jung, Fraunhofer IESE)

Dr. Jung provided excellent visualizations and analogies to lay out the background on DRAM functionality, making it easy to follow even for those with little prior knowledge.
One of the standout features of DRAMSys4.0 is its impressive speedup over RTL simulation, which can range from 4x to 10000x depending on the level of memory bus utilization. For a simulation with 0.1-1% utilization a typical speedup of 400x can be achieved, while at ~100% utilization a speedup of 4x can be expected. This speedup comes from TLM-level simulation, which reduces the number of simulated events compared to clock-cycle-level simulation. One caveat is that the DRAM controller needs to be simulated together with the DRAM chip.
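
The event-reduction argument is easy to see with a back-of-the-envelope model: a cycle-accurate simulator schedules an event on every clock edge, busy or not, while a loosely-timed TLM model schedules one event per transaction and skips over idle time. The toy Python bookkeeping below (my own sketch, not DRAMSys code, which is a SystemC/TLM-2.0 C++ simulator) reproduces the quoted order of magnitude: 1% bus utilization with 4-cycle bursts yields a 400x event reduction.

    # Toy illustration of why TLM beats cycle-level simulation: one event per
    # transaction instead of one per clock cycle. (Not DRAMSys code.)
    SIM_CYCLES = 1_000_000   # simulated clock cycles
    BURST_CYCLES = 4         # cycles consumed by one DRAM burst transfer
    UTILIZATION = 0.01       # fraction of cycles the memory bus is busy

    transactions = int(SIM_CYCLES * UTILIZATION / BURST_CYCLES)

    cycle_level_events = SIM_CYCLES   # one event per clock edge, busy or idle
    tlm_events = transactions         # one event per transaction; idle time is
                                      # skipped with a single timed wait

    print(f"cycle-level events: {cycle_level_events}")
    print(f"TLM events:         {tlm_events}")
    print(f"event reduction:    {cycle_level_events / tlm_events:.0f}x")
    # At ~100% utilization the two counts converge, which is why the speedup
    # drops toward the low single digits.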
The presentation also introduced a new DSL (which translates JEDEC specifications into Petri nets) that greatly simplifies the process of writing a simulator for new iterations of DRAM standards. This was particularly impressive, as the standards can often be hundreds of pages long, and writing code for a new standard is labor-intensive and error-prone.
In addition to architectural exploration, DRAMSys4.0 also supports temperature and power analysis, as demonstrated in the recorded demo. The demo showed the use of DRAMSys to analyze the temperature of a DRAM chip during the bootup sequence of an Android system. The chip was stacked on top of the CPU, resulting in a temperature gradient across the memory dies.
The authors of DRAMSys4.0 have also created other tools and scripts, such as a waveform visualizer and a database for simulation results. These tools, along with the ability to plot charts in real time, make DRAMSys a powerful tool for exploring and debugging DRAM architectures. To analyze a simulation, the user needs to provide input traces for the DRAM chip, and may need to configure different aspects of the simulation, such as the architecture and the data collection scripts.
Overall, the presentation was of very high quality, clear and well explained. The open-source tools discussed were impressive, and the talk was easy to follow even with its high density of content.

Verification of Inferencing Algorithm Accelerators (Russell Klein, Petri Solanti, Siemens)

This is not yet another ML/AI paper. The tutorial gives an excellent high-level overview of the end-to-end implementation and verification flow of ML algorithms. Anyone attending it can easily grasp:

  • The step-by-step process of implementing an ML accelerator, from the Python/TensorFlow/Caffe2 implementation, through a quantized C++ HLS model, down to the final RTL implementation
  • The optimizations one can fine-tune to achieve the desired Area/Performance/Power/Accuracy (APPA) constraints and the effects quantization has on the overall APPA shape
  • The step-by-step verification process of an ML accelerator, from the Python sources to RTL sign-off

The presented process is generic and straightforward, and it can easily be applied to other mathematical algorithms that follow a similar implementation flow. From this point of view I think this tutorial is an excellent ramp-up into RTL implementation of ML algorithms, for juniors and seniors alike.
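
To make the quantization trade-off mentioned above concrete, here is a hedged Python sketch of post-training int8 quantization of a weight vector and the error it introduces; the data and scale factor are my own toy example, not the presenters' flow.

    # Toy post-training quantization: map float32 weights to int8 and measure
    # the error the narrower representation introduces (illustrative only).
    import numpy as np

    rng = np.random.default_rng(0)
    weights = rng.normal(0.0, 0.5, size=1000).astype(np.float32)

    # Symmetric linear quantization to int8: pick a scale so the largest
    # magnitude maps to 127, then round every weight to the nearest step.
    scale = np.abs(weights).max() / 127.0
    q_weights = np.clip(np.round(weights / scale), -128, 127).astype(np.int8)

    # Dequantize and compare: the residual is what costs Accuracy, while the
    # 4x smaller storage and integer arithmetic buy Area/Performance/Power.
    deq = q_weights.astype(np.float32) * scale
    print(f"max abs error:  {np.abs(weights - deq).max():.6f}")
    print(f"mean abs error: {np.abs(weights - deq).mean():.6f}")
    print(f"storage: {weights.nbytes} B float32 -> {q_weights.nbytes} B int8")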

Verification of Virtual Platform Models – What do we Mean with Good Enough?

Jakob Engblom (Intel) and Ola Dahl (Ericsson) authored a very interesting tutorial that addresses the pressing issue of verifying virtual platform models in the context of complex development processes and the need for collaboration between teams. The presentation does an excellent job of highlighting the typical communication challenges that arise in these scenarios, and offers suggestions for improvement.

Although the title suggests this applies only to Virtual Platform models, the presentation's content is applicable to any situation in which three teams need to interact, where two teams provide components and the third team does the integration. For example, the speaker discusses the common scenario of an RTL implementation being verified against a TLM implementation in a UVM-SV testbench.

The talk poses a very interesting and quite pressing dilemma in the industry at this point in time: development processes are complex, which necessitates collaboration between teams. This implies writing specifications from which different teams derive their implementations. The road from specification to implementation can thus fork into bugs of two categories: doing the thing right (DTR) and doing the right thing (DRT). Said otherwise, there are bugs where the code is implemented correctly but the intent was wrong (DRT), and bugs where the intent was correct but the implementation in code does not reflect that intent (DTR).

In my opinion, the discussion around collaboration, documentation, and responsibilities among teams is valuable and much needed in today’s complex industry. Although the presentation does not provide a comprehensive solution to these challenges, it offers useful insights and prompts further discussion on the topic.

The speaker was well prepared and the presentation was engaging, leading to a lengthy Q&A session that further emphasized the importance of this topic. Overall, I found the presentation to be very informative and thought-provoking.

Efficient Loosely-Timed SystemC TLM-2.0 Modeling: A Hands-On Tutorial

Nils Bosbach's (RWTH Aachen University) and Lukas Jünger's (MachineWare GmbH) presentation stands out for being a tutorial in the real sense of the word, and should be a template for all tutorials moving forward.
They start with a short 20-minute introduction to Virtual Platforms and VCML, their own open-source library for modeling virtual components. In the second part they provide a code skeleton, which models a system with an ARM CPU, an interrupt controller and a UART IP. Some code is missing and replaced with TODOs, which let us work with their library to define the behavior of the UART IP's data register (implement the read/write functions) and to connect the UART IP to the rest of the blocks in the system and to the terminal.
For those who had laptops to work on, this was a student-like experience: learning how to use the API through simple exercises. For those who didn't have a laptop at hand, Nils went through all the exercises in front of the "class", intentionally skipping various steps to also demonstrate the debug capabilities of the library.
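
VCML models are written in C++/SystemC, so the exercise itself is solved in C++; still, to give a feel for what the TODOs ask, here is a Python analogue of the pattern: a UART data register whose read/write callbacks move characters between the bus and a terminal. This only mirrors the shape of the exercise and is not the VCML API.

    # Python analogue of the tutorial exercise: a UART data register whose
    # read/write callbacks move characters between bus and terminal.
    # (Not the VCML API; VCML models are C++/SystemC.)
    from collections import deque

    class UartDataRegister:
        def __init__(self):
            self.rx_fifo = deque()   # characters received from the terminal

        def write(self, value: int) -> None:
            """Bus write to the data register: transmit one character."""
            print(chr(value & 0xFF), end="", flush=True)

        def read(self) -> int:
            """Bus read from the data register: pop one received character."""
            return self.rx_fifo.popleft() if self.rx_fifo else 0

    reg = UartDataRegister()
    for ch in "Hello, DVCon!\n":
        reg.write(ord(ch))           # CPU writes show up on the terminal
    reg.rx_fifo.extend(ord(c) for c in "ok")
    print([chr(reg.read()) for _ in range(3)])   # ['o', 'k', '\x00']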

You may find more info on the VCML GitHub repo.

Papers

The second day was dedicated to technical papers and posters.

Programmable Analysis of RISC-V Processor Simulations using WAL (Lucas Klemmer MSc., Johannes Kepler University, Linz)

This is a nice introduction to the WAL application, which we already recommended in August. In the first part of the presentation Lucas gave an overview of the WAL implementation. In the second part he presented a demo and the usage model of this library.

The WAL application joins the growing club of open-source, Python-based EDA tools. I recently discovered PyUVM, and there is even a book on RTL verification using PyUVM and cocotb.
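
For readers curious what this Python-based club looks like in practice, a minimal cocotb test is just a decorated coroutine. The DUT ports below (clk, a, b, with b a registered copy of a) are my own assumptions for illustration.

    # Minimal cocotb smoke test: drive an input, wait for clock edges, check
    # an output, all from Python. Signal names are illustrative assumptions.
    import cocotb
    from cocotb.clock import Clock
    from cocotb.triggers import RisingEdge

    @cocotb.test()
    async def passthrough_smoke_test(dut):
        cocotb.start_soon(Clock(dut.clk, 10, units="ns").start())
        dut.a.value = 0x2A
        await RisingEdge(dut.clk)    # value captured on this edge
        await RisingEdge(dut.clk)    # registered output is now stable
        assert dut.b.value == 0x2A, f"expected 0x2A, got {dut.b.value}"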

How creativity kills reuse – A modern take on UVM/SV TB architectures (Andrei Vintila, Sergiu Duda, AMIQ Consulting)

Our colleagues Andrei and Sergiu propose an abstraction layer for external testbench control together with an architecture and a set of rules to use +args to control runtime instantiation and object creation. This approach eliminates the need to recompile the DUT and the testbench every time you want to change a parameter of a sequence.

But I won't spoil the surprise; check the blog post that will be published by the end of December.
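
Without spoiling their solution: the core idea, selecting what gets constructed at run time from a command-line argument instead of recompiling, can be sketched in a few lines. The Python below is a conceptual analogue only; the paper's actual architecture is SystemVerilog/UVM, built on +args and the UVM factory.

    # Conceptual analogue of +arg-driven object creation: a registry maps a
    # run-time string to a class, so changing behavior needs no recompile.
    # (The paper's real implementation is SystemVerilog/UVM, not this.)
    import sys

    SEQUENCE_REGISTRY = {}

    def register(name):
        def wrap(cls):
            SEQUENCE_REGISTRY[name] = cls
            return cls
        return wrap

    @register("smoke")
    class SmokeSequence:
        def run(self): print("running smoke sequence")

    @register("stress")
    class StressSequence:
        def run(self): print("running stress sequence")

    # e.g. `python tb.py +SEQ=stress` picks the sequence at run time
    arg = next((a for a in sys.argv[1:] if a.startswith("+SEQ=")), "+SEQ=smoke")
    SEQUENCE_REGISTRY[arg.split("=", 1)[1]]().run()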

SAWD: SystemVerilog Assertions Waveform-Based Development Tool (Ahmed Alsawi, Qualcomm Ireland)

Ahmed Alsawi presents a simulator-agnostic tool meant to reduce the steps needed to debug concurrent assertions. SAWD complements the simulator by pointing out which operator failed in the SVA under evaluation.

A big part of the presentation focuses on how the tool is implemented. SAWD transforms SVAs into "time-aware" ASTs using a dedicated lexer and parser built on the Lark framework. After the SVAs are parsed, SAWD evaluates them in the context of a user-provided waveform (e.g. in VCD format) and generates a report and diagrams that highlight the SVA failure points. SAWD provides a PyQt5-based GUI that allows one to debug an SVA without rerunning the simulation or opening the simulator. It can also test an updated assertion on the same waves to see if it still fails. At the moment SAWD supports relatively complex sequences, including parallel sequences, but it does not yet support local variables.
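
As a taste of the approach, parsing even a toy SVA-style sequence with Lark takes only a small grammar. The grammar below is my own miniature example, far simpler than SAWD's actual SVA front end.

    # Toy Lark grammar for an SVA-like sequence with the ##N delay operator.
    # (My own miniature example; SAWD's real grammar covers far more of the
    # SystemVerilog assertion syntax.)
    from lark import Lark

    GRAMMAR = r"""
        sequence: expr ("##" DELAY expr)*
        ?expr:    SIGNAL | negation
        negation: "!" expr
        DELAY:    /[0-9]+/
        SIGNAL:   /[a-zA-Z_][a-zA-Z0-9_]*/
        %import common.WS
        %ignore WS
    """

    parser = Lark(GRAMMAR, start="sequence")
    tree = parser.parse("req ##2 gnt ##1 !busy")
    print(tree.pretty())   # the AST an evaluator would walk over the waves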

This is a nice tool for SVA failure debugging that I would use in my everyday debug flow. Unfortunately, the author doesn't seem optimistic about releasing it as open source.

Acknowledgements

As always, many thanks to this year’s organizers and the Technical Program Committee.

This article is a collaborative effort achieved with the help of my colleagues: Ioana, Bogdan, Mihai, Daniel, Robert, Iuliana, Marius, Bogdan, Andra, Andrei and Dragos.

Until the next DVCon – Comment, Share and Subscribe to keep the community alive!
