This post presents some of the highlights of the technical program that AMIQ consultants enjoyed at DVCon Europe 2023 (November 14-16, on site).
Overall we were very satisfied with the expo, presentations, panels, tutorials and, of course, the hospitality. We noticed many good changes:
- DVCon grew in size: there were more participants than in the 2022 edition and more company booths
- Tutorials traction increased: there were more participants attending the tutorials
- Tutorials quality increased: most of them focused on delivering know-how rather than marketing stew
- A breath of fresh air: there were more young people compared with previous years
- AI/ML is either a hot topic or a wannabe cool kid, I don’t know anymore
- Virtual Platforms / SystemC got more attention than in previous years
- Python applications for Design and Verification are trying to get into the mainstream (which reflects the Wilson Research Group 2022 Functional Verification Study)
- Higher quality papers than the previous year, and better prepared presenters
- Next fun level achieved: we had a super dinner
Mind the executive-brief style of this conference highlights post.
Keynote: Energy-efficient High Performance Compute, at the heart of Europe
Philippe Notton, CEO and founder, introduced us to objectives, activity and the ecosystem of SiPearl.
Here are the main takeaways from this presentation:
- SiPearl started with the financial support of the EU; it is a promising star in the sky of the EU’s semiconductor industry and an embodiment of the EU’s technological sovereignty dream
- EU is allocating a budget to subsidize development in the supercomputer industry
- The EU ranks 2nd in the global top 10 list of supercomputers
- SiPearl is building the world’s first energy-efficient HPC-dedicated microprocessor called RHEA
- RHEA is built on top of the ARM architecture/IP, which comes with a number of advantages (big ecosystem, extensive OS and SW support, etc.); at the same time, experimentation with RISC-V is in progress
- Foundry: TSMC is the choice until an independent EU foundry becomes available
- RHEA can interact with any third-party accelerator: GPU, artificial intelligence or quantum
- EU is still dependent on CAD solutions provided by US companies, but that might change in the future
- SiPearl is hiring
T3.2 Model-Based Approach for Developing Optimal HW/SW Architectures for AI systems (Siemens)
This tutorial is a nice complement to 2022’s Verification of Inference Algorithm Accelerators paper (also by Siemens).
The presentation targeted a system with multiple AI systems inside, which is a typical architecture for automotive applications. Some key aspects of the tutorial:
- The most power-efficient way to implement AI is at the ASIC/FPGA level
- High complexity due to the AI computation architecture: it can be distributed or centralized
- Detailing the process of implementing a neural network in RTL
- the whole process starts from a Python model of the neural network
- run a C++ algorithm that translates the Python model into a system architecture & a logical architecture
- refine the logical architecture to fine-tune it for performance
- build a VP of the refined architecture
- perform architectural exploration over the VP, to determine which functions need to be accelerated
- perform HLS to get the final design
- Aside from the complex process needed for the RTL implementation, validation and verification (i.e. more or less equivalence checking) need to be performed between stages to ensure that the final RTL is verified, as well as accurate enough for the intended application
Overall, this tutorial showed the challenges of implementing and verifying a neural network in RTL. The fact that this process is so convoluted reflects the fact that AI is still new in our industry and there is further progress to be made on both methodology and available tools.
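To make the starting point of the flow concrete, here is a minimal sketch of the kind of Python reference model the process begins with: a single fully connected layer with a ReLU activation. This is purely illustrative; the function names and the toy network are mine, not from the tutorial.

```python
# Illustrative sketch only: the kind of Python neural-network reference
# model the HW/SW flow starts from (a single dense layer + ReLU).
def relu(x):
    # element-wise rectified linear unit
    return [max(0.0, v) for v in x]

def dense(x, weights, bias):
    # weights: one row of coefficients per output neuron
    return [sum(w * xi for w, xi in zip(row, x)) + b
            for row, b in zip(weights, bias)]

def model(x, weights, bias):
    # the reference against which the RTL is later checked for accuracy
    return relu(dense(x, weights, bias))

# identity weights: the first input passes through, the negative one is clipped
out = model([1.0, -2.0], [[1.0, 0.0], [0.0, 1.0]], [0.0, 0.0])
# out == [1.0, 0.0]
```

Everything downstream (system architecture, VP, HLS) is validated against such a model, which is why equivalence checking between stages matters so much.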
T1.3 Scalable agile processor verification using SystemC, UVM & friends
A very well structured presentation that introduces one to verification using SystemC and additional libraries. It covered an introduction to UVM and drew parallels to the available constructs in the SystemC world. While some functionality of SV is covered by SystemC and most functionality of UVM is covered by UVM-SystemC, the necessity of constrained-random and coverage-driven verification brings forth the need for additional libraries essential to metric-driven verification:
- UVM-SystemC – provides UVM features to SystemC verification environments
- CRAVE – provides constraint random generation
- FC4SC – Functional Coverage for SystemC (C++11 header only library with no dependencies)
- PyUCIS – a library to handle coverage data saved by FC4SC in UCIS format
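To give a feel for what FC4SC adds, here is the covergroup/coverpoint concept it provides for SystemC, modeled in plain Python (FC4SC itself is a C++11 library; the class and bin names below are made up for illustration):

```python
# Illustrative sketch of the functional-coverage concept FC4SC provides:
# a coverpoint with named bins, sampled values, and a coverage percentage.
class Coverpoint:
    def __init__(self, bins):
        # bins: bin name -> predicate deciding whether a value hits the bin
        self.match = bins
        self.hits = {name: 0 for name in bins}

    def sample(self, value):
        for name, pred in self.match.items():
            if pred(value):
                self.hits[name] += 1

    def coverage(self):
        # percentage of bins hit at least once
        hit = sum(1 for c in self.hits.values() if c > 0)
        return 100.0 * hit / len(self.hits)

cp = Coverpoint({"low": lambda v: v < 8, "high": lambda v: v >= 8})
for v in (1, 3, 5):
    cp.sample(v)
# only the "low" bin was hit, so coverage is 50%
```

In the real flow, such coverage data is then exported in UCIS format and post-processed with PyUCIS.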
After the introduction of all available tools and libraries, the presenters walked through a case study based on RISC-V, diving into the details of verification and tool topologies.
T4.3 Open-Source Virtual Platforms for Industry and Research
A presentation that combined updates from the SystemC working group with a presentation of open-source models by third-party peers. It focused mainly on the methods, tools and libraries available to develop and use virtual platforms.
Here are the most important takeaways and presented examples:
- Python starts playing an ever more important role: the SystemC WG presented PySysC, a Python library for SystemC
- Abstraction plays an important role in developing useful models towards a set goal
- Abstraction levels from lowest to highest: analog, gate-level, RTL, AT models, LT models, Virtualization, Analytical models
- Abstraction is a control knob that trades accuracy for performance
- Virtual platforms are developed based on specification, same as RTL
- Tools used: SystemC, VCML (Virtual Components Modeling Library)
- SystemC uses the concept of time, supports parallelism and provides standardized interfaces for connections with other platforms. While SystemC lacks models for frequently needed parts like register models, standard communication protocols or TLM logging and tracing, VCML provides all these missing features
- VCML has an extensive unit test suite, so the models are pre-verified. It supports major OSes, is available under the Apache license, offers VPs for ARM and OpenRISC, and commercial support is available through MachineWare
- Some of the highlights of VCML that were presented are the following:
- All available building blocks are open source (models, tracing, configurability, debug support, support for using devices together with the models through OS functionality)
- Debug available through GDB or Lauterbach’s Trace32
- Examples were presented for the stack of class inheritance used for a CPU and a peripheral
- Example of debugging options using Trace32
- VCML has predefined models for peripherals like memory, ethernet, SPI, CAN, UART, various sensors
- A free OpenRISC implementation is available: OR1KMVP on GitHub
- An ARMv8 virtual platform is available: AVP64 on GitHub
Furthermore, the presentation wrapped up with a full example of a system integration, with connections and a Linux boot simulation.
P4.1 Verilator + UVM-SystemC: a match made in heaven
Luca Sasselli (QT Technologies) presented a 100% open-source simulation and verification flow that uses Verilator together with UVM-SystemC. The presentation also showcased applying the proposed flow to a RISC-V-based project.
Besides the benefits, the authors were not shy to enumerate the limitations of this approach:
- Connections to internal signals (C++ variables) are subject to race conditions, but there are ways to tackle this problem
- Verilator is cycle-accurate, so there is no intra-cycle info
- UVM-SystemC is still in beta
- Coverage is limited to functional coverage (FC4SC), no code coverage
- CRAVE randomization stability is questionable
I give a big hand to Luca for the engaging, high quality, presentation.
P1.1 Verification of an AXI cache controller using multi-thread approach based on OOP design patterns
Francesco Rua (ST Microelectronics) described in great detail the testbench architecture and implementation for an AXI cache controller.
The verification goals of the showcased project are data consistency and throughput, nothing special so far. But here comes a very tough constraint for the testbench: in order to be portable from one project to the next, it must be insensitive to DUT changes, that is, scalable and following a black-box approach.
To satisfy this constraint, Francesco deployed a number of architectural and SW design patterns:
- point-to-point scoreboards and a functional model for data consistency checking
- multi-thread approach: each request spawns multiple threads that run concurrently, while each thread executes its job step by step
- State pattern: each state machine is modeled as a state object. Many state machines in many threads produce high amounts of information that need to be shared among testbench components
- Observer pattern: communication between threads is based on notifications (i.e. publisher-subscriber)
- Decorator pattern: there were limitations generated by the fact that publishers could send notifications before subscribers subscribed to them. This was solved with the Decorator pattern, by wrapping the publishers and delaying notifications
All of the constructs employed in the project were presented one by one, followed by the action diagram of all these processes.
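The Observer plus Decorator combination described above is language-agnostic, so here is a minimal Python sketch of the idea (the testbench itself is in SystemVerilog; all class names below are mine, not from the paper):

```python
# Minimal sketch of Observer + Decorator: a decorator wraps the publisher
# and buffers notifications that arrive before any subscriber registers.
class Publisher:
    def __init__(self):
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def notify(self, event):
        for cb in self.subscribers:
            cb(event)

class BufferedPublisher:
    """Decorator: delays notifications until the first subscriber appears."""
    def __init__(self, inner):
        self.inner = inner
        self.pending = []

    def subscribe(self, callback):
        self.inner.subscribe(callback)
        # flush events that were published before anyone subscribed
        while self.pending:
            self.inner.notify(self.pending.pop(0))

    def notify(self, event):
        if self.inner.subscribers:
            self.inner.notify(event)
        else:
            self.pending.append(event)

pub = BufferedPublisher(Publisher())
pub.notify("early-event")        # no subscriber yet: buffered, not lost
seen = []
pub.subscribe(seen.append)       # subscribing flushes the buffer
pub.notify("late-event")
# seen == ["early-event", "late-event"]
```

The nice property is that neither the publisher nor the subscribers have to know about the buffering; the decorator adds it transparently.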
A big hand goes to Francesco for showcasing how OOP design patterns can be used in a testbench and for the super-high-quality presentation, with very well made diagrams. I am pretty sure it required a ton of work.
P4.1 Large-scale Gate-level Optimization Leveraging Property Checking
Lucas Klemmer (Johannes Kepler University Linz) is back (see his 2022 contribution here).
This time he presented a method for post-synthesis super-optimization of RISC-V designs. Basically, the proposed method customizes the netlist considering the SW application: it eliminates the gates that serve the unused instructions. The process uses assertions and a formal engine to find all gates that are not used, resulting in up to a 50% reduction for the given example.
This presentation brought up memories of a project that could use such an approach. The SoC was using a programmable microcontroller to manage the bring-up of a group of RX/TX antennas. The procedure was very basic and the microcontroller was obviously underused. One could reduce the microcontroller instance to the minimum instruction set used by the power-up automaton.
A nice addition to the paper would be to devise 1) a method to validate the optimized instance against the SW, 2) the generation of an instruction filter (based on the SW) that handles the unsupported instructions, in case such instructions were “injected” by noise, and 3) a way to integrate the optimized netlist at the SoC level.
Thank you, Lucas, for your contribution to the body of knowledge of the Design and Verification domain.
P4.4 Effective Design Verification – Constrained Random with Python and Cocotb
Suruchi Kumari and Deepak Narayan Gadde (Infineon) built a case for using the Python language in verification projects, highlighting the benefits of a modern, object-oriented, lightweight language.
The authors presented a verification flow using co-simulation of RTL and cocotb/Python. The flow has been tested on 3 different DUTs and the overall conclusion is that it speeds up TB development and reduces costs, at the cost of a longer simulation time. The provided examples included coverage collection and randomization, the pillars of metric-driven verification. They also introduced the audience to the Coverage Analysis View, an inspection tool for the collected coverage, which is a really nice touch.
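For readers unfamiliar with the constrained-random side of such flows, here is the underlying idea sketched in plain Python with simple rejection sampling. This is not the authors' code or any specific library API, just an illustration of what "randomize under a constraint" means:

```python
import random

# Illustrative sketch of constrained-random stimulus generation:
# draw random values and keep only those satisfying the constraint.
def randomize(rng, constraint, lo=0, hi=255, max_tries=10_000):
    """Return a value in [lo, hi] that satisfies the constraint predicate."""
    for _ in range(max_tries):
        value = rng.randint(lo, hi)
        if constraint(value):
            return value
    raise RuntimeError("constraint too tight: no value found")

rng = random.Random(42)  # seeded for reproducible regressions
# example constraint: word-aligned addresses below 128
addr = randomize(rng, lambda v: v % 4 == 0 and v < 128)
assert addr % 4 == 0 and addr < 128
```

Real constraint solvers are far smarter than rejection sampling, but the contract is the same: the testbench states the constraint, the generator delivers a legal random value.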
Regarding the simulation performance impact: an engineer experienced with cocotb explained that the performance problem can be fixed by generating the clocks in SV, a future experiment for the team.
I recommend reading more about cocotb here.
P4.4 Testbench Linting – Open-Source Way
An overall energetic and vibrant presentation about a SystemVerilog linter implementation in Python.
The ruleset today provides:
- 11 rules showcased for naming, labeling and style
- 11 rules to avoid specific constructs that can be problematic for verification/performance/certain simulators
- 4 rules for UVM guidelines
It might not seem like a lot, but it is a very good start, given that it can be extended in the future.
The presentation also provided the use case of quantitative analysis of SystemVerilog constructs (sequences, assume, cover, property) used in a testbench.
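To illustrate what a lint rule in this style boils down to, here is a hypothetical naming rule sketched in Python. The rule, message and signal names are my own invention, not PySlint's actual ruleset:

```python
import re

# Hypothetical sketch of a naming-style lint rule: signal names should
# be lower_snake_case. Rule text and names are illustrative only.
LOWER_SNAKE = re.compile(r"^[a-z][a-z0-9_]*$")

def check_signal_names(names):
    """Return (name, message) tuples for every violating signal name."""
    return [(n, "signal names must be lower_snake_case")
            for n in names if not LOWER_SNAKE.match(n)]

violations = check_signal_names(["clk", "RstN", "data_in"])
# only "RstN" violates the rule
```

Real SystemVerilog linting of course works on a parsed syntax tree (which is what PySlang provides) rather than raw strings, but the rule-as-predicate structure is the same.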
There are two GitHub repos worth mentioning:
- PySlint GitHub repo
- PySlang GitHub repo that provides components for lexing, parsing, type checking, and elaborating SystemVerilog code
I give a big hand to Srinivasan Venkataramanan, Deepa Palaniappan (AsFigo Technologies) and Satinder Paul Singh (Cogknit EU) for the excellent presentation and technical content.
P1.3 The Three Body Problem
The authors (Peter Birch/VyperCore and Ben Marshall/PQShield) presented Blockwork, an open-source build framework for the 21st century indeed:
- uses Docker to isolate the build environment from the host machine and to normalize the disk layout
- separates the execution (Python) from the configuration (YAML)
- guarantees reproducibility of results: for identical inputs it will produce identical results (Docker helps with this)
- guarantees traceability: one knows at all times, for any output, which were the inputs and which transformations were applied to them
- guarantees modularity: it abstracts the flow execution graph from the implementation
- guarantees scalability: one can define build flows of any complexity
- can be integrated with existing job runners
- it is EDA vendor agnostic
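The configuration/execution split and the traceability guarantee can be sketched in a few lines of Python. This is not Blockwork's actual API, just an illustration of the design idea, with made-up step and file names:

```python
# Illustrative sketch (not Blockwork's API): declarative configuration
# separated from a Python execution engine that records traceability.
CONFIG = {                     # in Blockwork this would live in YAML
    "steps": [
        {"name": "lint",  "inputs": ["rtl/top.sv"]},
        {"name": "build", "inputs": ["rtl/top.sv", "tb/test.sv"]},
    ],
}

def run_flow(config):
    """Execute steps in order; record which inputs produced each output."""
    trace = {}
    for step in config["steps"]:
        # a real framework would launch the tool inside a container here,
        # guaranteeing identical results for identical inputs
        trace[step["name"]] = list(step["inputs"])
    return trace

trace = run_flow(CONFIG)
# trace["build"] == ["rtl/top.sv", "tb/test.sv"]
```

Because the engine only ever reads the declarative config, the same flow definition can drive any EDA vendor's tools, which is exactly the vendor-agnostic point above.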
P5.2 Accelerate Functional Coverage Closure Using Machine-Learning-Based Test Selection
Jakub Pluciński & Co (Nokia) presented their experiments with using various AI/ML algorithms to reduce the coverage closure effort.
It was by far the best documented paper targeting usage of AI/ML in the metric driven verification domain.
Their idea was to apply a ranking procedure on the set of possible scenarios even before a regression is run, so that redundancy is reduced upfront, before verification execution. They consider that each test has a signature in the form of a set of input parameters. The signatures of the various tests are fed into an autoencoder that aims to select the stimuli with as little redundancy as possible.
The authors showcased the full flow, end-to-end: the architecture of the solution, the integration in the verification flow and the research results for various supervised and unsupervised learning algorithms and how they compare to each other.
The research highlighted that the appropriate algorithms for the task use unsupervised learning methods. The supervised learning methods “saturate” early in the process, labelling tests as “non-interesting” after the first run. The successful unsupervised learning method selected tests based on their relative contribution between consecutive iterations.
A big up to Jakub & Co for the research and presentation: they gave us hope that AI/ML can have a place in the verification domain.
SystemC Evolution Day
This year too we had a full day dedicated to SystemC.
- ~70 people participated this year
- SystemC 3.0.0 RC will soon be available for public review, with C++17 as the base language
- FC4SC and the CRAVE library are now publicly released
- A new working group was announced: the FSS WG (Federated Simulation Standardization), with the aim of bringing multiple industries together to discuss how to bridge the various simulation standards already available (avionics, automotive, mechatronics, semiconductors etc.)
- This topic was approached with skepticism from the audience, with the reasoning that “we don’t need new standards” which require modifications to SystemC.
- The community shows concern over the lack of human resources available to drive development in the SystemC world
- There were intense debates about opening the SystemC library to the public and switching to an agile flow to allow more contributions from the community, instead of keeping updates within Accellera members.
- Removal of some less-used supported platforms from SystemC library
- Addition of support for RISC-V and ARM in SystemC library
- Replacement of the automake flow with CMake
- There is a lack of a quality WoW (way of working) between SystemC VP developers and SW developers; the SystemC library does not yet provide sufficient tools to bridge between HW prototyping and SW development: introspection, debug capabilities, configuration, interfacing with toolchains etc. This drives SW developers away from SystemC and into exploring alternatives like QEMU.
- I think SystemC should be open for public contributions and the Accellera SystemC WG should take the arbiter and enabler role (setting the guidelines, the objectives, the quality standards etc.). The SystemC WG has limited resources to provide enhancements and bug fixes at the rate required by the industry.
“All AI All the Time” Poses New Challenges for Traditional Verification
- Moderator: Paul Dempsey, Editor in Chief, Tech Design Forum
- Jean-Marie Brunet, Vice President and General Manager, Hardware-Assisted Verification, Siemens
- Michaela Blott, Senior Fellow, AMD
- Daniel Schostak, Arm Architect and Fellow, Verification, Arm
- Lu Dai, Senior Director of Engineering, Qualcomm
Starter questions were:
- How is AI/ML best suited for Functional Verification?
- Where are we right now? (in terms of AI in Verification Engineering)
The main takeaways from this panel:
- many engineers mislabel post processing scripts as AI
- we must acknowledge that these are still early days for the matter (“AI is like a teenager”)
- AI can process large amounts of data and logs (ML => AI), so it can assist engineers with everyday tasks in a smart way
- Iterative work automation is one possible application
- Currently used to successfully analyze workloads (prediction, RTL switching etc.)
- There is a risk of unexpected behavior: you can never tell what the model has actually “learned”
- Consider the analogy with image processing: the resolution of a digital image cannot reach analog levels, but it can surpass a threshold that makes it indistinguishable to the human eye. At this point in time AI may be close to such a threshold, but that does not make it actually intelligent.
- Quality of data is an important factor. All engineers are aware of the Garbage-In-Garbage-Out principle: you can have good models fitted on poor data, which in turn will skew the results
- Handling of copyright for generated code is delicate, there is a risk of IP infringement
- Overreliance: who will make the final call for sign-off? Should the verification engineer sign off just because the AI approves of the design?
- Data sharing is delicate. Shared data can be poisoned, so sabotage may become a real strategy; for now, the safest strategy is not to use shared data.
Some personal notes:
- As long as the LLM / AI is trained using existing code (“monkey see, monkey do”), there will be issues with the copyright of generated code. Some companies might not have enough code to train a model. The next level would be for the LLM/AI to learn the principles of coding and use those to implement the required programs, in which case the generated code could be considered “original work”. Probably this falls more in the AGI domain, rather than LLM/AI
- If verification engineers rely too much on the AI (at least at the level it is today), they will be less involved in the project, since they will be more inclined to delegate full responsibility to the AI. The fact that an AI is driving the verification process, or is evaluating the quality level, might give verification engineers a false sense of reliability, which in turn opens the possibility of bugs slipping through. To counter such a bias, future verification methodologies will require a careful split of responsibilities between the AI and the engineer, such that engineers can identify and counter potential missteps of the AI… at least until a human-level AGI becomes available
The winners of the 2023 edition are:
- Best engineering paper: The Three Body Problem: There’s more to building Silicon than what EDA tools currently help, Authors: Peter Birch/VyperCore, Ben Marshall/PQShield
- New! Best research paper: Clock Tree Design Considerations in The Presence of Asymmetric Transistor Aging, Authors: Prof. Dr. Freddy Gabbay PhD (Ruppin Academic Center, Israel), Firas Ramadan and Majd Ganaiem (Technion – Israel Institute of Technology)
I give a big up to all the people who made this event happen this year.
I would also like to thank the AMIQ engineers (AndreiV, Dragos, Andra, AndreiB, Tiberiu) for their contributions to this post.
Until the next DVCon – Comment, Share and Subscribe to keep the community alive!