umarcor

umarcor

Member Since 3 years ago

UPV/EHU, Bilbo, Bizkaia, Euskadi, Spain, Europe

80 followers
70 following
256 stars
90 repos

3592 contributions in the last year

Pinned
⚡ Open Source Verification Bundle for VHDL and SystemVerilog
⚡ GUI editor for hardware description designs
⚡ Execute Minimal Working Examples (MWEs) defined in the body of Markdown files or GitHub issues.
⚡ Co-simulation and behavioural verification with VHDL, C/C++ and Python/m
⚡ Read-only mirror of the official repo at git://sigrok.org/pulseview. Pull requests welcome. Please file bugreports at sigrok.org/bugzilla.
⚡ Specification of the Wishbone SoC Interconnect Architecture
Activity
Dec
6
1 hour ago
pull request

umarcor merge to ghdl/ghdl

umarcor
umarcor

Update .editorconfig with settings for Ada files

Description Update .editorconfig with settings for Ada files. The indent style and indent size values are the ones currently used by the GHDL project.

(This PR aims to prevent editors with an .editorconfig plugin from forcing the default tab indentation.)
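As a rough illustration (the concrete values below are hypothetical; the real ones are whatever the GHDL sources already use), such a section could look like:

```ini
# Hypothetical sketch of an .editorconfig section for Ada sources.
# Indent size 3 is a common GNAT-style convention, not confirmed from the PR.
[*.{ads,adb}]
indent_style = space
indent_size = 3
```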

created branch

umarcor in umarcor/verilator create branch revert-const

created 15 minutes ago
push

umarcor push umarcor/verilator

umarcor
umarcor

V3File: m_pid is a constant and cannot be assigned a new value

commit sha: 491d2b3301913a381ea5c4e8588701415dcf18db

pushed 20 minutes ago
issue

umarcor issue comment verilator/verilator

umarcor
umarcor

CI: add Windows/MSYS2 package build/test

While trying to update the Verilator package in MSYS2 (which is ~8 months behind), I found that compilation would fail.

In order to keep better track of this type of regression, this PR adds MSYS2 testing to CI: https://github.com/verilator/verilator/actions/runs/411990309. A PKGBUILD recipe is added, which is similar to the upstream one but builds the surrounding sources instead of downloading a tarball. Then, one CI job builds a package and uploads it as an artifact. In a second job, the artifact is installed and tested, just as any user would do in a clean environment.

GitHub Action msys2/setup-msys2 is used for setting up MSYS2 and for caching installed packages.

Currently, Linux jobs are failing because I need to add myself to the contributors list properly. However, this PR does not modify those at all.


The current build error on MSYS2 is the following:

../V3Os.cpp:353:31: error: 'WEXITSTATUS' was not declared in this scope; did you mean 'PNTSTATUS'?
  353 |         const int exit_code = WEXITSTATUS(ret);
      |                               ^~~~~~~~~~~
      |                               PNTSTATUS
make[2]: *** [../Makefile_obj:293: V3Os.o] Error 1
make[2]: *** Waiting for unfinished jobs....
make[2]: Leaving directory '/d/a/verilator/verilator/msys2/src/build-x86_64-w64-mingw32/src/obj_opt'
make[1]: *** [Makefile:60: ../bin/verilator_bin] Error 2
make[1]: Leaving directory '/d/a/verilator/verilator/msys2/src/build-x86_64-w64-mingw32/src'
make: *** [Makefile:222: verilator_exe] Error 2
umarcor
umarcor

I tried updating the Verilator package in MSYS2 to 4.216, but I'm getting the following error:

../V3File.cpp: In member function 'void VInFilterImp::start(const string&)':
../V3File.cpp:482:19: error: assignment of read-only member 'VInFilterImp::m_pid'
  482 |             m_pid = 0;  // Disabled
      |             ~~~~~~^~~
make[2]: *** [../Makefile_obj:300: V3File.o] Error 1

https://github.com/verilator/verilator/runs/4425043216?check_suite_focus=true

According to https://github.com/verilator/verilator/blob/ac05a779ae19278d1baf87cab328d62b7cf2417c/src/V3File.cpp#L335-L339 and https://github.com/verilator/verilator/blob/ac05a779ae19278d1baf87cab328d62b7cf2417c/src/V3File.cpp#L507, m_pid is indeed constant.

push

umarcor push umarcor/verilator

umarcor
umarcor

Localize variables from other modules when possible

V3Localize can now localize variable references that reference variables located in scopes different from the referencing function. This also means V3Descope has now moved after V3Localize.

umarcor
umarcor

Internals: Add AstUserNAllocator utility classes.

These utility classes can be used to hang advanced data structures off AstNode user*u() pointers, and they take care of memory management for the client. Use via the call operator().

umarcor
umarcor

Localize variables used in multiple functions

Teach V3Localize how to localize variables that are used in multiple functions, if in all functions where they are used, they are always written in whole before being consumed. This allows a lot more variables to be localized (+20k variables on OpenTitan, when building without --trace), and can yield a significant performance improvement (OpenTitan simulates ~8.5% faster when built single-threaded and without --trace).

umarcor
umarcor

Simplify AND(CONST,OR(,)) with redundant terms

V3Expand generates a lot of OR nodes that are under a clearing mask and have redundant terms, e.g.: 0xff & (a << 8 | b >> 24). The 'a << 8' term in there is redundant, as its bottom bits are all zero where the mask is non-zero. V3Const now removes these redundant terms.

umarcor
umarcor

Simplify redundant masking of AstShiftR/AstShiftL

AND(CONST,SHIFTR(_,C)) appears often after V3Expand, with C a large enough dense mask (i.e.: of the form (1 << n) - 1) to make the masking redundant. E.g.: 0xff & ((uint32_t)a >> 24). V3Const now replaces these ANDs with the SHIFTR node.

Similarly, we also simplify the same with SHIFTL, e.g.: 0xff000000 & ((uint32_t)a << 24)

umarcor
umarcor

Internals: Add more const. No functional change.

umarcor
umarcor

Allow configure override of AR program (#2999).

umarcor
umarcor

Internals: Use AstUserAllocator in V3Order

umarcor
umarcor

In XML, show pinIndex information (#2877).

umarcor
umarcor

Internals: Remove m_classPrefix from AstNodeVarRef/AstNodeCCall

This is now redundant and can be reconstituted in V3EmitC without being explicitly stored.

umarcor
umarcor

Remove no-op VL_CELL. No functional change intended.

umarcor
umarcor

Configure time unit/time precision in the Sym constructor.

This used to be done in the constructor of the top module, but there is no reason to do it there. Internals are cleaner with this in the Sym constructor. No functional change intended.

umarcor
umarcor

Fix incorrect result by bit tree opt (#3023) (#3030)

  • Add a test to reproduce #3023. Also applied verilog-mode formatting.

  • use unique_ptr. No functional change is intended.

  • Introduce restorer that reverts changes during iterate() if failed.

umarcor
umarcor

Internals: Add const. No functional change.

umarcor
umarcor

V3Hash: Add missing include.

Fixes #3029

umarcor
umarcor

Remove deprecated --inhibit-sim (#3035)

umarcor
umarcor

Internals: Add const. No functional change.

umarcor
umarcor

Add extern "C" to function declarations in VPI tests.

These are necessary to link the executables. So far we have been saved by one of the generated headers forward declaring these functions with extern "C", but changing that header would break these tests.

umarcor
umarcor

Add V3EmitCBase.cpp to hold implementations

No need to keep complex functions in the V3EmitCBase.h header (which is included in a lot of compilation units). No functional change intended.

commit sha: 2070238f1ef58f2b793a21f27ac2adde425d4457

pushed 44 minutes ago
push

umarcor push umarcor/verilator

(same commit list as in the previous push)

commit sha: ac05a779ae19278d1baf87cab328d62b7cf2417c

pushed 45 minutes ago
push

umarcor push umarcor/verilator

(same commit list as in the previous push)

commit sha: 2a955e8e9396fb60ec00d9941c3489179fc81ea1

pushed 45 minutes ago
started 1 hour ago
Dec
5
1 day ago
issue

umarcor issue comment stnolting/neorv32

umarcor
umarcor

Add TRNG for UP5KDemo

This PR is just an "idea"... It enables the TRNG module for the UP5KDemo processor template.

For the record: the TRNG uses latches (!) and combinatorial loops (!). I know that both constructs are a bit "delicate" when it comes to FPGAs... So maybe it is a good idea to add the TRNG to check that GHDL-Yosys can handle such constructs.

@umarcor what do you think? Btw, do you know if there's an option to get a "hardware utilization by entity" report from yosys?

The idea comes from a thread in the German mikrocontroller.net forum where someone has problems with latches and GHDL/yosys. Here is the google translated thread: https://www-mikrocontroller-net.translate.goog/topic/528149?_x_tr_sl=de&_x_tr_tl=en&_x_tr_hl=de

edit I have tested the bitstream. The TRNG works! ✔️

umarcor
umarcor

what do you think? Btw, do you know if there's an option to get a "hardware utilization by entity" report from yosys?

I would ask Tristan explicitly about the latches being supported. Maybe the behaviour of the TRNG is the expected one, but the actual hardware is not exactly what you think. I don't know how to get a hardware utilization report by entity. However, you might synthesise that module/component alone (as a top entity itself) and see the results. Furthermore, you can generate a diagram to see what is being synthesised. See ghdl/ghdl#1783.

push

umarcor push umarcor/umarcor

umarcor
umarcor

gource/CoCoTb: add cocotb/cocotb-bus

commit sha: 6e954fbe030d42e4cd082dda66737472e09bb917

pushed 22 hours ago
open pull request

umarcor wants to merge pyTooling/Actions

umarcor
umarcor

Add a job template to publish unit test results

Add a new job template that publishes unit test results in JUnit XML format via the GitHub Action dorny/test-reporter, displaying an auto-generated markdown report in the list of jobs executed in a pipeline.


This PR fixes #8.
Depends on #11

umarcor
umarcor

Apart from adding it to the example pipeline, it needs to be added to the README.

pull request

umarcor merge to pyTooling/Actions

umarcor
umarcor

Add a job template to publish unit test results


delete

umarcor in umarcor/umarcor delete branch gource

deleted 1 day ago
Dec
4
2 days ago
issue

umarcor issue comment pyTooling/Actions

umarcor
umarcor

add Action 'tip'

This PR merges eine/tip into this repository.

By the way, tip/README.md is enhanced to add info about the Context, the Composite Action version, and advanced/complex use cases.

umarcor
umarcor

@Paebbels, I think the only missing point to discuss here is the name of the Action.

pull request

umarcor merge to pyTooling/Actions

umarcor
umarcor

Add a job template to publish unit test results


open pull request

umarcor wants to merge pyTooling/Actions

umarcor
umarcor

Add a job template to publish unit test results


umarcor
umarcor

For performance reasons, it would be desirable to specify which artifact to download. That can be problematic, indeed, because users might want to upload multiple artifacts but not all of them. Therefore, for now, we can keep this as-is, and filter it when calling the Action below.

open pull request

umarcor wants to merge pyTooling/Actions

umarcor
umarcor

Add a job template to publish unit test results


umarcor
umarcor

The path should be an input with a default value (artifacts/**/*.xml). See https://github.com/pyTooling/Actions/blob/main/.github/workflows/StaticTypeCheck.yml#L5-L28. So:

inputs:
  path:
    description: 'Path(s) to unit test results to be reported.'
    required: false
    default: 'artifacts/**/*.xml'
    type: string
open pull request

umarcor wants to merge pyTooling/Actions

umarcor
umarcor

Add a job template to publish unit test results


umarcor
umarcor

I personally don't find that implementation particularly exciting. I believe it is good enough with dorny/test-reporter. In practice, I would expect most users of the Pipeline to replace this job with their specific reporting/hook needs. Therefore, this is mostly for us and for people who want something that just works.

pull request

umarcor merge to pyTooling/Actions

umarcor
umarcor

Add a job template to publish unit test results


issue

umarcor issue comment pyTooling/Actions

umarcor
umarcor

Add Apache License, 2.0

This adds the Apache License, 2.0 headers and license texts.

Fixes #9.

umarcor
umarcor

The beauty of the Apache License is that it brings all contributions under the copyright of the copyright owner. So if copyright is owned by XXX, any contribution from users A, B, C becomes copyrighted by XXX. By defining XXX as a very wide audience, all members of XXX must be asked in case the project / license needs a transformation, correction, etc.

My point is that the beauty of such a feature is arguable. Bringing all the contributions under a single person is desirable in case all contributors agree with the copyright holder potentially changing the license in the future. They are not only providing the contributions under the terms of the current license, but also granting permission to change it to an incompatible license. Conversely, using a collective as the copyright holder makes the grant reciprocal. The contributor provides the code under the terms of the current license, and at the same time is given the opportunity to oppose license changes in the future (i.e. is made a protector of the openness of the project). There are recent examples of the risks of open source projects being taken over; see e.g. Freenode or Audacity.

Another use case might be if we later founded a society (according to dict.cc - ES: asociación) for e.g. pyTooling and/or EDA²; we might want to transition such rights to a legal body. I don't want to ask potentially dozens or hundreds of people for permission. That's why I would like to keep the range small.

While I understand that case, a hypothetical organisation/company/entity can always take this repo, preserve the license, and release new content with a different copyright, as long as whatever is added is compatible from a license point of view. We might even use this in a closed source project, without publishing the enhancements. Those cases are not problematic. The problem arises if someone wants to change the license itself, not the holder.

If Apache 3.0 were released, and it was compatible with Apache 2.0 (not more restrictive), I believe updating it would not need permission from inactive authors.

Overall, I don't feel I/we need to keep tight control on the licensing of this repo. This (pyTooling/Actions) is not something we want to put lots of effort into, but a utility we need for doing the actual interesting work. That is different in other repos of this org, in EDAA or HDL. In those cases, there is a very strong commitment from some developers, which I understand deserves the right to "freely" handle the project as a whole.

So with using authors I would feel better than with contributors.

I agree. In practical terms, I'd say "the author list retrieved through git".


I checked different variants without horizontal lines, but I think it's not as readable as with some section lines. We could use a more lightweight style based on ---- instead of ====.

I think the main point is that you expect all the header blocks to have continuous comment symbols. That is, a single header block. That's why additional separators are required. Conversely, I expect each header to be different, if they contain different data (shebang, authors, license, etc.). To me, the first line of code is the first non-empty line not starting with a comment symbol, not just the first non-empty line (which is a separator, per se).

In the example below, I really don't find the one on the right more readable. In fact, I would use single empty rows, instead of double; however, I'm good with using two, as required above Python functions/classes. The fact that white space has meaning is in the DNA of Python.

[screenshot comparing the two header styles]


I consider the README a major part of the documentation. Therefore I would like to have this covered from the beginning under CC-BY 4.0. I just added the license file at the usual place and linked it in the README.

Since CC does not require the license body to be provided (it can be just linked), what about adding a paragraph at the end of the README and adding the rst source when we add the Sphinx site? Otherwise, let's keep it.

push

umarcor push pyTooling/Actions

umarcor
umarcor

Added a PullRequest template.

umarcor
umarcor

add a Pull Request template (#14)

commit sha: c4e1cce63b448f0307047e9cdffcdaf7ba2555b5

pushed 1 day ago
pull request

umarcor pull request pyTooling/Actions

umarcor
umarcor

Added a PullRequest template.

This adds a simple pull request template.

pull request

umarcor merge to pyTooling/Actions

umarcor
umarcor

Added a PullRequest template.

This adds a simple pull request template.

issue

umarcor issue comment pyTooling/Actions

umarcor
umarcor

Rename job template 'Params' to 'Parameters'

We use everywhere explicit and full names, except for the job Params.

I suggest to rename it to Parameters.

I think a user can still call the job Params in his workflow, but the template would have a full name.


/cc @umarcor

umarcor
umarcor

That is correct. This is a breaking change. However, since we are in v0, shall we do it "silently" and fix the repos that are using it?

open pull request

umarcor wants to merge pyTooling/Actions

umarcor
umarcor

add Action 'tip'


umarcor
umarcor

With faster I mean installation speed. However, currently that is also execution speed, because the Container Action is built in each run (that's how GitHub's Container Actions work).

I don't think lxml is used, but I'm unsure.

pull request

umarcor merge to pyTooling/Actions

umarcor
umarcor

add Action 'tip'


issue

umarcor issue comment VUnit/vunit

umarcor
umarcor

Rethinking the runner configuration interface: integration of the OSVVM methodology into VUnit

The most popular open source VHDL verification projects are cocotb, OSVVM and UVVM, along with VUnit. As discussed in larsasplund.github.io/github-facts, there are some philosophical differences between them: OSVVM and UVVM are defined as "VHDL verification methodologies", cocotb is for (Python) co-routine co-simulation, and VUnit is a Python-aided framework of HDL utilities. Naturally, there are some overlapping capabilities because all of them provide basic features such as logging and building/simulation. Therefore, methodologies can be seen as bundles of utilities (frameworks), and some users might refer to using the VUnit runner as a methodology. Nonetheless, it is in the DNA of VUnit to be non-intrusive and allow users to pick features one by one, including reusing the methodologies they are used to.

Currently, it is possible to use OSVVM utilities/libraries in a VUnit environment. Although there are still some corner cases to fix (#754, #767, #768), it is usable already. In fact, some of VUnit's features do depend on OSVVM's core. However, it is currently not possible to use the OSVVM methodology as-is within VUnit. The OSVVM methodology uses top-level entities without generics or ports, and the entrypoints are VHDL configurations. Meanwhile, VUnit needs a top-level generic of type string in order to pass data from Python to VHDL.

Most simulators do support calling a configuration instead of an entity as the primary simulation unit. It should, therefore, be trivial to support OSVVM's configurations as entrypoints in VUnit. I am unsure whether VUnit's parsing and dependency scanning features support configurations; but that should not be the main challenge anyway.

The main challenge we need to address is that passing generics to VHDL configurations is not supported in the language. If that were possible, the runner string might be forwarded to the entity within the configuration. For the next revision of the standard, we might propose enhancements in this regard, since revisiting the limitations of configurations is one of the expected areas of work. Nevertheless, that would take several months or years until made available in simulators.

Yesterday, I had a meeting with @JimLewis and he let me know that he's been thinking about implementing some mechanism for passing data between the TCL scripts (.pro files) and the VHDL testbenches. We talked about .ini, .yml and .json, and I suggested to use the latter because there is a JSON reader library available already: Paebbels/JSON-for-VHDL. In fact, JSON-for-VHDL is submoduled in VUnit, in order to pass very complex generics to the testbench.

I believe this is a good opportunity to document the syntax of VUnit's runner generic, make a public API from it, write a data model, and provide multiple reader/writer implementations. @JimLewis said he has not put much thought into the data model yet, but he would be willing to include integration with VUnit in the scope when he works on it. Maybe there is no need for him to write a VHDL solution from scratch and it can be based on JSON-for-VHDL + VUnit's runner package.

Enhance VUnit's simulator interfaces to support writing runner (CLI) arguments to a file or to an envvar

Currently, VUnit's runner expects to receive a string, which Python passes as a top-level generic. Actually, there is no limitation in VHDL preventing an alternative method. The generic might be a path, and users might read the file in the testbench before passing it to the functions from the runner package. By the same token, the generic might point to a JSON file, and users might convert that to the string syntax expected by the runner. Hence, the main challenge is that VUnit's Python simulator interfaces do not support writing runner parameters to a file.

Well, that is not 100% correct: when GHDL is used, option ghdl_e prevents running the simulation and instead writes all the CLI arguments in a JSON file (#606): https://github.com/VUnit/vunit/blob/7879504ba6a97be82137199e8819f770e4017681/vunit/sim_if/ghdl.py#L303-L322 That is used for building a design once (and generating an executable binary if the simulator is based on a compile & link model) and then executing it multiple times for co-simulation purposes: https://github.com/VUnit/cosim/tree/master/examples/buffer. Therefore, we might want to generalise this to all the simulator interfaces and make it optional to specify the name/location of the JSON file.

Similarly, we might want to support passing runner arguments through an environment variable. In the context of integrating VUnit and cocotb, one of the requirements is specifying environment variables per test/testbench. That's because cocotb bootstraps an independent Python instance, and the UUT is the design; so data needs to be passed through the filesystem or envvars. In fact, this is a requested feature: #708. If we used the same mechanism for the runner, cocotb might re-implement VUnit's runner package in Python (which is "just" 1K lines of VHDL code for 100% compatibility). I believe that would allow plugging in cocotb's regression management. The remaining functionality would be for VUnit to "discover" the test cases before executing the simulations.

So, we might have an enumeration option to decide whether to pass the runner string as a top-level generic, as an environment variable, or as a file. Taking VHDL 2019 features into account, OSVVM and VUnit might end up using the envvar approach indeed.
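As a sketch only (the field names below are invented for illustration, not VUnit's actual runner_cfg syntax), the data passed through any of these channels could be a small JSON document:

```json
{
  "active_runner": "python",
  "enabled_test_cases": ["test_reset", "test_write"],
  "output_path": "vunit_out/test_output/tb_example"
}
```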

runner_cfg

Enhance JSON-for-VHDL

The current implementation of JSON-for-VHDL is functional and synthesisable, but not optimal for simulation. Some years ago, @Paebbels and @LarsAsplund discussed writing an alternative implementation for simulation only, which would have fewer constraints and better performance. They also talked about using VHDL 2019 features. I don't remember whether using those was a requirement for the optimised simulation-only implementation, or whether it could be done with VHDL 2008.

If we are to use JSON for sharing configuration parameters between VUnit's Python or OSVVM's TCL and VHDL, I guess we would make JSON-for-VHDL a priority dependency in the ecosystem.

Co-simulation

The run package might be enhanced to get its data from a foreign environment. By encapsulating the interaction with the "runner configuration model" in a protected type, we might provide various implementations. For instance, the VHDL testbench might query a remote service to learn which tests to run, and where to push the results.
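The idea of hiding the data source behind one interface can be sketched in Python (standing in for a VHDL protected type): one abstract "runner configuration source" with interchangeable backends. The string syntax and the JSON schema below are assumptions for illustration only:

```python
import json
from abc import ABC, abstractmethod

class RunnerConfigSource(ABC):
    """Models the 'runner configuration model' behind a single interface,
    mirroring what a VHDL protected type could encapsulate."""

    @abstractmethod
    def tests_to_run(self) -> list: ...

class StringSource(RunnerConfigSource):
    """Backed by the classic runner string passed as a generic (hypothetical syntax)."""
    def __init__(self, runner_cfg: str):
        self._tests = [t.strip() for t in runner_cfg.split(",") if t.strip()]
    def tests_to_run(self):
        return self._tests

class JsonFileSource(RunnerConfigSource):
    """Backed by a JSON file on disk, e.g. written by the Python side."""
    def __init__(self, path):
        with open(path) as f:
            self._data = json.load(f)
    def tests_to_run(self):
        return self._data["tests"]

print(StringSource("test_a, test_b").tests_to_run())  # → ['test_a', 'test_b']
```

A third implementation could query a remote service, as suggested above, without the testbench code noticing the difference.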

Terminology: configurations

OSVVM uses VHDL configurations for composing the test harness and the test cases. At the same time, VUnit uses the term "configuration" to refer to a set of values for the top-level generics; in Python, users can generate multiple sets of parameters for each testbench/testcase. The conflict is not severe yet, because VHDL configurations are not used much in the VUnit ecosystem. However, if we are to improve the integration of OSVVM and VUnit, we might want to reconsider the naming.
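For readers unfamiliar with the VUnit sense of "configuration", generating multiple parameter sets per testbench boils down to a cartesian product of generic values. The generic names and ranges below are hypothetical; in a real run script each dict would feed something like testbench.add_config():

```python
from itertools import product

# Hypothetical generic ranges; in VUnit these would feed testbench.add_config().
widths = [8, 16]
depths = [32, 64]

configs = [
    {"name": f"w{w}_d{d}", "generics": {"data_width": w, "fifo_depth": d}}
    for w, d in product(widths, depths)
]

for cfg in configs:
    print(cfg["name"])  # w8_d32, w8_d64, w16_d32, w16_d64
```

Each entry is a VUnit "configuration" in this sense: a named set of top-level generic values, unrelated to a VHDL configuration declaration.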

/cc @Paebbels @JimLewis @ktbarrett @jwprice100

umarcor

We are all coming to the same page now! 🎉 I am particularly happy because I did not expect to sort this out today 😄

In OSVVM, there is one test case per file. It has a name. We don't have any means to run or report on a portion of a test. That is a cool feature of VUnit. I generally don't think that small. If I am testing AxiStream byte enables, I am going to write a test that tests all permutations of the byte enables and have to always run all of them - for better or worse.

That makes sense. I assumed you might have support for multiple tests per file because both VUnit and cocotb allow it. However, this is where the methodologies diverge. In VUnit and cocotb, the generation of parameter sets is done outside of VHDL, while OSVVM handles it inside. Conceptually, the inputs used for generating VUnit configurations in Python are the inputs that a single OSVVM test needs, because it does the generation internally. That equivalence is actually cool.

However, you do have the concept of testsuites in OSVVM, which means you do have two levels of hierarchy for organising/calling the tests. Is that done through the location/name of the .pro files? Or is it written in the .pro files explicitly (say, set testcasename whatever; run atestcase)? I'm trying to understand whether more than 2 levels might be required in the future. In VUnit, there are three levels, Library.Testbench.Test, even though the XUnit report flattens the first two.

What I see a need to communicate is what mode the test is running in. Debug - and hence, enable some extra flags for logging; vs regression - and hence, only log to files (in particular, turn off OSVVM mirror mode if it is on); vs artifact collection - a final simulation run, hence, turn on reporting for artifact collection that is not normally needed (OSVVM's final level). It would be handy to communicate results directory information - rather than the static ./results that we currently use - or communicate the validated results directory (in the event we want to do file comparisons - static paths don't work so well here).

Nice! This is the information we need: the attributes (Debug, flags, artifact collection) and the directory information. VUnit does already provide tb_path and output_path: http://vunit.github.io/run/user_guide.html?highlight=tb_path#special-paths. One of those might be reused as the results path in OSVVM, or an additional path might be passed as an attribute (specific to OSVVM).

Once we understand all of this information, we might write a prototype using JSON-for-VHDL on the VHDL side. That is not the ideal solution, but it is usable already, and we should be able to wrap it in a protected type easily. On the other side (TCL or Python), I think that VUnit supports attributes for testbenches already; I'd need to look into it further.

Curiously, many simulators allow you to run with multiple top level structures. This may be a curious way to accomplish what I need to do. Load a regular simulation with a second top-level that simply changes settings. There could be multiple versions of the second top-level that could accomplish the different classes of settings that I need to make.

That is very interesting. I have never seen multiple top-level units used for simulation. I think that GHDL supports EntityName [ArchitectureName] or ConfigurationName only. I do know that @tmeissner tried having multiple top units for synthesis, in order to pass them through Yosys -> SymbiYosys for formal verification purposes. However, I assumed that was just a shortcut for having multiple runs; that is, the multiple top units were/are completely unrelated. As far as I understand, your comment implies that multiple top units would be used "at the same time"?

While the multiple-tops sounds interesting, I think the path that VUnit has been using may prove to be more flexible.

We did have some issues with VUnit's approach, because passing complex generics through the CLI is not without pitfalls. Fortunately, the usage of JSON-for-VHDL along with basic encoding (in order to ensure a "simple" character set) proved to be quite robust. I am particularly happy with how that collaboration turned out, even though we should pay attention to JSON-for-VHDL's technical debt (when VHDL 2019 is supported by vendors).

I am not against setting top-level generics if all simulators can do it and they can reach beyond the configuration and set it for the test case entity that is specified by the configuration. If all simulators do that in a similar fashion, I think maybe we should codify it in the standard. Not the simulator part, but explicitly say that the language requires the simulators to provide a mechanism to do it - it would not be a change for existing vendors, but as an expectation to anyone new - or to put pressure on certain vendors.

I think this is sensible. Most simulators can handle strings and naturals at least; negatives and reals might be more challenging. However, as far as I understand, accepting CLI arguments was already added in VHDL 2019. Therefore, it might not make much sense to now specify that top-level generics can be overridden from the CLI. Instead, users can have constants in the entity/architecture which retrieve the values from the CLI. Maybe we should push for that VHDL 2019 feature to be implemented, and then have a "standard" argparse or getopts in VHDL. From the vendors' point of view, all they need to do is support strings, which they do already. Instead of interpreting the strings and resolving them to types (which they need to do at the moment), that would be deferred to the user, or to the argparse/getopts library.
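What such an "argparse in the simulator's hands" would do can be sketched in Python: take one flat argument string and resolve it to typed values on the user side, so the tool only ever deals with strings. The `+name=value` convention is borrowed from Verilog plusargs purely as an illustration; a VHDL getopts library would define its own syntax:

```python
import shlex

def parse_plusargs(arg_string):
    """Parse a simulator-style '+name=value' argument string into typed values.

    The '+name=value' convention is borrowed from Verilog plusargs as an
    illustration; a VHDL 'argparse' library would define its own syntax.
    Integers are recognised; everything else is kept as a string.
    """
    result = {}
    for token in shlex.split(arg_string):
        if not token.startswith("+") or "=" not in token:
            continue
        name, value = token[1:].split("=", 1)
        try:
            result[name] = int(value)
        except ValueError:
            result[name] = value
    return result

print(parse_plusargs('+width=16 +mode=debug'))  # → {'width': 16, 'mode': 'debug'}
```

The point is the division of labour: the simulator forwards strings untouched, and the type resolution happens in user (or library) code, which is what the VHDL 2019 CLI feature would enable.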


umarcor issue comment VUnit/vunit


Rethinking the runner configuration interface: integration of the OSVVM methodology into VUnit

The most popular open source VHDL verification projects are cocotb, OSVVM and UVVM, along with VUnit. As discussed in larsasplund.github.io/github-facts, there are some philosophical differences between them: OSVVM and UVVM are defined as "VHDL verification methodologies", cocotb is for (Python) co-routine co-simulation, and VUnit is a Python-aided framework of HDL utilities. Naturally, there are some overlapping capabilities because all of them provide basic features such as logging and building/simulation. Therefore, methodologies can be seen as bundles of utilities (frameworks), and some users might refer to using the VUnit runner as a methodology. Nonetheless, it is in the DNA of VUnit to be non-intrusive and allow users to pick features one by one, including reusing the methodologies they are used to.

Currently, it is possible to use OSVVM utilities/libraries in a VUnit environment. Although there are still some corner cases to fix (#754, #767, #768), it is usable already. In fact, some of VUnit's features do depend on OSVVM's core. However, it is currently not possible to use the OSVVM methodology as-is within VUnit. The OSVVM methodology uses top-level entities without generics or ports, and the entrypoints are VHDL configurations. Meanwhile, VUnit needs a top-level generic of type string in order to pass data from Python to VHDL.

Most simulators do support calling a configuration, instead of an entity, as the primary simulation unit. It should, therefore, be trivial to support OSVVM's configurations as entrypoints in VUnit. I am unsure whether VUnit's parser and dependency-scanning features support configurations; but that should not be the main challenge anyway.

The main challenge we need to address is that passing generics to VHDL configurations is not supported in the language. If it were, the runner string might be forwarded to the entity within the configuration. For the next revision of the standard, we might propose enhancements in this regard, since revisiting the limitations of configurations is one of the expected areas of work. Nevertheless, that would take several months or years until made available in simulators.

Yesterday, I had a meeting with @JimLewis and he let me know that he has been thinking about implementing some mechanism for passing data between the TCL scripts (.pro files) and the VHDL testbenches. We talked about .ini, .yml and .json, and I suggested using the latter because there is a JSON reader library available already: Paebbels/JSON-for-VHDL. In fact, JSON-for-VHDL is included as a submodule in VUnit, in order to pass very complex generics to the testbench.
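The "complex generics" trick relies on encoding JSON so that it survives any simulator CLI. A minimal Python sketch of that idea, using hex (base16) as the transport encoding; the function names are illustrative, and the exact decoding entry point on the VHDL side is JSON-for-VHDL's concern:

```python
import json

def encode_generic(obj) -> str:
    """Encode an arbitrary Python structure as hex-encoded JSON.

    Hex (base16) restricts the generic's value to [0-9a-f], which survives
    every simulator CLI without quoting or escaping problems.
    """
    return json.dumps(obj).encode("utf-8").hex()

def decode_generic(text: str):
    """Inverse transform, as a VHDL-side decoder would perform."""
    return json.loads(bytes.fromhex(text).decode("utf-8"))

cfg = {"image": "tests/in.png", "levels": [10, 20]}
encoded = encode_generic(cfg)
assert decode_generic(encoded) == cfg
print(len(encoded))
```

The cost is a roughly 2x size blow-up, which is why this matters mostly for CLI robustness rather than efficiency.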

I believe this is a good opportunity to document the syntax of VUnit's runner generic, make a public API of it, write a data model and provide multiple reader/writer implementations. @JimLewis said he has not put much thought into the data model yet, but he would be willing to include integration with VUnit in the scope when he works on it. Maybe there is no need for him to write a VHDL solution from scratch, and it can be based on JSON-for-VHDL plus VUnit's runner package.

Enhance VUnit's simulator interfaces to support writing runner (CLI) arguments to a file or to an envvar


umarcor

@JimLewis the discovery strategy is radically different for OSVVM and cocotb; hence, discussing both at the same time can be slightly misleading, although necessary.

From cocotb's and Kaleb's point of view, testbenches are Python scripts/modules, and VUnit provides a Python-based infrastructure. By default, VUnit and cocotb run in completely different instances of Python; however, because both use the same language, there are more opportunities for "discovery". VUnit can import cocotb testbenches without executing them, and use Python's inspection features. It gets all the semantic information about what a module is, which functions it has, etc.
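A sketch of that inspection-based discovery: import a module (here a stand-in built in memory) and list its test-like functions without running them. Matching on the `test_` name prefix is a simplification; real cocotb tests are marked by a decorator, and a real discoverer would check for that marker instead:

```python
import inspect
import types

# Build a stand-in module the way an imported testbench would look.
tb = types.ModuleType("tb_example")
exec(
    "def test_read():\n    pass\n"
    "def test_write():\n    pass\n"
    "def helper():\n    pass\n",
    tb.__dict__,
)

def discover_tests(module):
    """Return test-like function names without executing them, via inspection."""
    return sorted(
        name
        for name, obj in inspect.getmembers(module, inspect.isfunction)
        if name.startswith("test_")
    )

print(discover_tests(tb))  # → ['test_read', 'test_write']
```

This is what lets a Python-side VUnit enumerate cocotb test cases before any simulation starts.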

In OSVVM's case, we first need to solve the problem of making .pro scripts usable from Python, i.e., making OSVVM usable from any Python script/project, not only VUnit. That is something @Paebbels is working on in pyEDAA.ProjectModel. ProjectModel will be able to interpret .pro files and extract the same imperative information that the TCL plumbing has. Then, VUnit will not need to parse OSVVM's VHDL sources to find out which pieces compose a testbench or a test; it will just need to ensure that the testbenches/tests defined in the ProjectModel do exist. It might also check whether any other files exist which are not defined in the ProjectModel (depending on the usage of wildcards). If YAML files need to be partially pre-generated before starting the simulations, that is precisely the knowledge we need for the Runner.


umarcor issue comment VUnit/vunit



umarcor

TCL can access environment variables. Is there something I am missing here? Setting them is nothing more than: set ::env(name-to-set) value-to-set

@JimLewis the problem is not setting environment variables. That is easy regardless of the language (TCL, Python, bash, C...). The problem is accessing envvars from VHDL. As we discussed, that requires VHDL 2019 or direct co-simulation (which is not standardised yet).

Does VUnit have a library that gives VHDL the ability to read environment variables? I would be open to reusing it - if it can address everything we need.

It does not. I could provide it through ghdl/ghdl-cosim (for GHDL), or through VUnit/cosim (for GHDL, and hopefully ModelSim/QuestaSim). However, that would be "killing flies with cannon shots". Providing a library for reading envvars through co-simulation (as a workaround for the lack of VHDL 2019 support) would be a "tiny" project in itself, such as JSON-for-VHDL; and it would not solve our problem: agreeing on the format/syntax of the runner interface/objects and which attributes it needs.

In VUnit, the runner tells the testbench which tests to run and which parameters (top-level generic values) to use for each test, along with the location of the testbench source file. Then, the testbench produces an output that VUnit can use to know which specific tests passed and which failed. So, what are the equivalent requirements in OSVVM? You mentioned that you wanted to pass some parameters from TCL to VHDL. Which are those parameters? For the second part, I know the answer: you produce a YAML file which VUnit could read/use.
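To make the "second part" concrete, here is a sketch of what reading back a results report would look like on the Python side. The one-line-per-test format below is a deliberate simplification for illustration; OSVVM's real reports are YAML documents with considerably more structure:

```python
def parse_results(text):
    """Parse a minimal 'test_name: PASSED|FAILED' report into two lists.

    The line format here is a simplification for illustration; OSVVM's real
    reports are YAML documents with considerably more structure.
    """
    passed, failed = [], []
    for line in text.splitlines():
        line = line.strip()
        if not line or ":" not in line:
            continue
        name, status = (part.strip() for part in line.split(":", 1))
        (passed if status == "PASSED" else failed).append(name)
    return passed, failed

report = """
tb_axi_read: PASSED
tb_axi_write: FAILED
"""
print(parse_results(report))  # → (['tb_axi_read'], ['tb_axi_write'])
```

With a PyYAML-based loader in place of this toy parser, the same flow would let VUnit map OSVVM's per-test results back into its own reporting.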
