Support for Self Checking #141


Open
wants to merge 1 commit into base: dev

Conversation

MuhammadHammad001
Contributor

Contributors: @MuhammadHammad001, @allenjbaum, @UmerShahidengr

Support for Self Checking (Work in Progress)

Supported

  • Overall structure for the self-checking mode.
  • Updated test and signature format for self-checking mode.

Not Supported (In Progress)

  • Complete mechanism for printing to the console in case of a signature mismatch.
  • Comprehensive testing for all the extensions, including priv and unpriv.

Overall, the workflow will look like this:
[Diagram: SELF_CHECK workflow (draw.io)]

@MarcKarasek

The above requires that the DUT has some type of console output. This cannot be assumed in all cases.
Currently we provide a mechanism for the DUT, via model_test.h, to dump information "somewhere". Where that is, is entirely up to the DUT, and it cannot be assumed to be implemented at all.

Since we already have a mechanism in place to dump the current signature block for a given test out to the PC, where riscof compares it with the sail signature for the same test, I propose to use this as the pass/fail for a test.

In the SIGUPD macro we can clear out the signature area (instead of writing to it), changing each word from 0xdeadbeef to 0x00000000. This signature then gets output to the host on test completion. Riscof can then check the file: if every value is 0x00000000 the test passes; if it finds even one 0xdeadbeef, the test fails.
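The riscof-side half of this proposal can be sketched in a few lines. This is only an illustration, not riscof's actual API: the function name `signature_passes` and the assumed file format (one 32-bit hex word per line, pre-filled with 0xdeadbeef and cleared to zero by the signature-update macro) are hypothetical.

```python
# Hypothetical pass/fail check for the proposal above.
# Assumption: the DUT dumps the signature as one 32-bit hex word per line,
# and the signature-update macro clears each slot from the 0xdeadbeef fill
# pattern to 0x00000000 as the test runs.

def signature_passes(lines):
    """Return True only if every signature word was cleared to zero."""
    for line in lines:
        word = line.strip()
        if not word:
            continue  # skip blank lines in the dump
        if int(word, 16) != 0:  # any surviving fill word means a failed check
            return False
    return True

# Example: one slot still holds the fill pattern, so the test fails.
print(signature_passes(["00000000", "deadbeef", "00000000"]))  # False
print(signature_passes(["00000000", "00000000"]))              # True
```

The appeal of this scheme is that the host never needs a reference signature: a single scan of the dump decides pass/fail, and any nonzero word also pinpoints which check failed.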

I also propose adding a new macro to model_test.h to allow someone to do whatever they want on a failed test case. It would be invoked from the SIGUPD macro.

All we should care about at the end of any test is whether it passed or not; anything else is noise. We provide the hooks for a DUT to halt on test failure, dump out whatever it wants, etc., but that is not up to us.

I have this written up, with a working POC for the I tests. I need to modify it based on feedback from the last ACT mtg.
It involves changes that are outside the ACT repo itself (linker scripts, etc.), which is another discussion.

From the last meeting we had agreed that this self-checking work will go in a new branch (4.0???), as it will require some major changes and we do not want to have code for both living in the same branch.

[MuhammadHammad001] will you be at next week's ACT mtg? Or maybe we should set up a separate meeting between Harvey Mudd, myself, and 10X???

@allenjbaum

allenjbaum commented Jul 9, 2025 via email

@allenjbaum

allenjbaum commented Jul 9, 2025 via email

@MarcKarasek

MarcKarasek commented Jul 10, 2025

Why do an ecall? The only thing ACT cares about is whether the test passed or failed; anything else is beyond the scope of certification.
Adding in hooks so that a DUT can debug why it is failing is a good thing, but this should IMO be left up to the DUT via the hooks RVI specifies in model_test.h. If at the top level the test has already determined (in the SIGUPD macro) that it has failed, I see no reason to call ecall to figure out that the test has failed. We are way over-engineering this, IMO. All we need to do is mark the test as failed and move on to the next one.

This goes back to my previous comments. ACT is in the Certification business NOT the Verification business or the debug business.

I thought we had agreed that the previous behavior (comparing against a sail model signature) was going to live in the new 3.x branch that will be created, and that going forward we will NOT have to carry any baggage from it for the self-checking tests.

I for one want a clean break between these two methods, not a mishmash of compiling it this way for one and that way for the other.
That will only lead to more confusion in how to use the ACT.

IMO, we need to make this as brain-dead as possible to run at the top level: Pass/Fail and nothing else, with hooks for someone to add debug/halt/whatever they want to do.

At the end of the day Pass/Fail is all RVI should care about.

@allenjbaum

allenjbaum commented Jul 10, 2025 via email

@MarcKarasek

Yes, I want the ability to debug via a macro call from the SIGUPD macro.
What I do not want to do is dictate in any form what that debug will look like; I am proposing to leave this entirely up to the DUT. If they want to implement an ecall mechanism in this macro, great. If they want to do something else, also great. RVI provides a hook to do whatever you want.

I am not saying ditch the old methodology. I thought we had agreed that we would make one last release, then branch the tree as "version3.x" or something similar. That would be the place to go if you want the old methodology. Going forward, dev (or some other branch) would hold the self-checking code.
This way we do not have to worry about maintaining two build options in one repo. Bug fixes can go into the version3.x branch as needed, and going forward this frees us up to make whatever changes we see fit (modify SIGUPD to take another parameter, for example) without having to worry about breaking stuff.

If we provide the hooks to enable the use of ACT for debug, without dictating what those hooks should do, IMO that will be enough.
Again, from a purely certification point of view, RVI cares whether the DUT passes or fails the tests in question. And if it fails a specific test, are we going to allow waivers? (A CSC question, which I am going to bring up next week.)

@MarcKarasek

As much as possible, outside of certification, I want to provide hooks (debug, etc.) for companies to use ACT for debug and any other use cases they might come up with.

RVI wants to certify a design, not verify it.

@allenjbaum

allenjbaum commented Jul 11, 2025 via email

3 participants