Support for Self Checking #141
Conversation
The above requires that the DUT has some type of console output. This cannot be assumed in all cases. Currently we provide a mechanism for the DUT via model_test.h to dump information "somewhere". Where this is, is completely up to the DUT and cannot be assumed to be implemented at all.

Since we have a mechanism in place to dump the current signature block for a given test out to the PC, and riscof compares this with the Sail signature for the same test, I propose to use this as the pass/fail for a test. In the SIGUPD macro we can clear out the signature area (instead of writing to it), changing entries from 0xdeadbeef to 0x0000. This signature then gets output to the host on test completion. Riscof can then check this file for all-0x0000 values; if it finds even one 0xdeadbeef, the test fails.

I also propose adding a new macro to model_test.h to allow someone to do whatever they want on a failed test case. This would be called from the SIGUPD macro. All we should care about at the end of any test is whether it passed or not; anything else is noise. We provide the hooks for a DUT to halt on test failure, dump out whatever it wants, etc., but that is not up to us.

I have this written up and a working POC for the I tests. I need to modify it based on feedback from the last ACT meeting. It involves changes that are outside of the ACT repo itself (linker scripts, etc.), which is another discussion.

From the last meeting we had agreed that this work for self-checking will be in a new branch (4.0???), as it will require some major changes and we do not want to try to have code for both live in the same branch.

@MuhammadHammad001, will you be at next week's ACT meeting? Or maybe we should set up a separate meeting between Harvey Mudd, myself, and 10X???
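A minimal sketch of how this could look, assuming RV64, a linker script that pre-fills the signature region with 0xdeadbeef, and an expected value already sitting in a register at each check point; the names RVTEST_SIGCLR and RVMODEL_TESTCASE_FAIL are placeholders for illustration, not anything settled in this thread:

```c
/* Default failure hook is a no-op: the surviving 0xdeadbeef word is
 * itself the failure marker that riscof looks for after the dump.  */
#ifndef RVMODEL_TESTCASE_FAIL
#define RVMODEL_TESTCASE_FAIL
#endif

/* Hypothetical self-checking signature update (RV64 assumed). The
 * signature region starts out as 0xdeadbeef words; a passing check
 * clears its slot to zero, so the host-side check reduces to
 * scanning the dumped signature for any 0xdeadbeef that survives.  */
#define RVTEST_SIGCLR(_BASE, _ACTUAL, _EXPECTED)                  \
    bne  _ACTUAL, _EXPECTED, 1f /* mismatch: skip the clear  */  ;\
    sd   x0, 0(_BASE)           /* pass: 0xdeadbeef -> 0     */  ;\
    j    2f                                                      ;\
1:  RVMODEL_TESTCASE_FAIL       /* DUT-defined failure hook  */  ;\
2:  addi _BASE, _BASE, 8        /* advance to the next slot  */
```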
I don't like this approach at all. Tests should compare expected results, and on a mismatch, do an ecall, which the ecall handler can determine is a specific type of ecall (a mismatch ecall), which will call an RVTEST_MISMATCH, RVTEST_DEBUG, RVTEST_FAIL (or whatever you want to name it) mismatch routine.

That mismatch routine will have an RVMODEL_FAIL which can do anything your model wants to do (e.g. add a failure count to some memory location, write binary data to the testbench, halt the test, or both, or even clear the DEADBEEF indicator for that particular test item).

Note that without the RVTEST_SELFTEST variable set, it just reverts to the current store, extract-signature, and compare flow.
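A rough sketch of that flow; the a7 tag value, the register usage, and the handler dispatch below are my own illustration, not an agreed interface:

```c
/* Hypothetical mismatch path: tag the ecall so the common trap
 * handler can tell it apart from ordinary ecalls.                */
#define MISMATCH_ECALL_CODE 0xBAD   /* illustrative service code */

#define RVTEST_MISMATCH(_ACTUAL, _EXPECTED)                       \
    mv    a0, _ACTUAL           /* context for the handler   */  ;\
    mv    a1, _EXPECTED                                          ;\
    li    a7, MISMATCH_ECALL_CODE /* mark as mismatch ecall  */  ;\
    ecall

/* In the common trap handler (pseudocode):
 *   if (cause is ecall && a7 == MISMATCH_ECALL_CODE)
 *       RVMODEL_FAIL;   // model-defined; see the variants below
 *   else
 *       handle as a normal ecall;
 * With RVTEST_SELFTEST not set, none of this is compiled in and
 * the test falls back to plain store / extract-signature / compare. */
```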
I would implement a couple of predefined RVMODEL_FAIL macros (sketched below):
- One uses an RVMODEL_IO_WRITE_STR macro to output a failure string that includes all the relevant values of CSRs (RVAL, EPC) and registers (rs1, rs2, rd).
- Another would do the same thing but output binary data instead (using a new RVMODEL_IO_WRITE_BIN macro).
- Another would simply write the result into the expected-result signature area, so that the existing signature extract and compare with the golden signature still works.

Or, as above, you can write whatever works for you.
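Spelled out, those three predefined hooks might look as follows; RVMODEL_IO_WRITE_BIN is the proposed new macro from the list above, and the names and stub bodies here are illustrative rather than existing ACT code:

```c
/* 1. Human-readable: write a failure string carrying the relevant
 *    CSR values and the operand registers rs1/rs2/rd, built on the
 *    existing RVMODEL_IO_WRITE_STR mechanism.                      */
#define RVMODEL_FAIL_STR   /* ... RVMODEL_IO_WRITE_STR-based body ... */

/* 2. Same payload as a binary record, via the proposed new
 *    RVMODEL_IO_WRITE_BIN macro.                                   */
#define RVMODEL_FAIL_BIN   /* ... RVMODEL_IO_WRITE_BIN-based body ... */

/* 3. Compatibility: store the (wrong) actual result into the
 *    expected slot of the signature area, so the legacy
 *    extract-and-compare against the golden signature still flags
 *    the mismatch.                                                 */
#define RVMODEL_FAIL_SIG(_BASE, _ACTUAL)                          \
    sd   _ACTUAL, 0(_BASE)
```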
---------
We looked at what it would take to convert SIGUPD to add the load-register parameter, and it looks like it would take someone about a day.

Almost all test macros that call SIGUPD have a tmp register, so those macros would be updated to use _TMP or SWREG or whatever name is used for the temp register.

Almost all tests that call SIGUPD directly are tests with very regular patterns, where either there is a register that is unused, and you fill that one in, or the registers are used in a very regular rolling pattern, so you can replace it with the Rd used in the (previous-N) instruction (as long as it isn't X0). The crypto tests seem to be the bulk of those.

The remaining tests are only the store/branch/jump tests, I believe. Those will need a bit more work; e.g., the SIGUPD equivalent will need to have a width and an adjusted offset.
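To make the conversion concrete, here is roughly where the new parameter goes; the real SIGUPD in the ACT repo differs in detail, so both macros below are sketches under that caveat:

```c
/* Roughly today's shape: store the computed value into the
 * signature area and advance the running offset.                  */
#define RVTEST_SIGUPD(_BASE, _R)                                  \
    sd   _R, offset(_BASE)                                       ;\
    .set offset, offset + REGWIDTH

/* Self-checking shape: _LD is the added load register. It fetches
 * the expected value (placed in the signature area at build time)
 * so the check happens in place; finding a free _LD at every call
 * site is the day or so of conversion work estimated above.       */
#define RVTEST_SIGUPD_SELF(_BASE, _R, _LD)                        \
    ld   _LD, offset(_BASE)     /* expected value            */  ;\
    beq  _R, _LD, 1f            /* match: continue           */  ;\
    RVTEST_MISMATCH(_R, _LD)    /* mismatch: ecall, as above */  ;\
1:  .set offset, offset + REGWIDTH
```

For the rolling-pattern tests, the extra argument would simply be the rd written a few instructions earlier, e.g. RVTEST_SIGUPD_SELF(x1, x31, x5) with x5 otherwise free at that point (and never x0).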
Putting all those changes in a separate branch is fine - but they will be merged into main (or replaced by Wally tests), which will have the option of reverting SIGUPD macros to their previous behavior.
And oops, I sent this from the wrong deprecated account. Respond to the new one, ***@***.***, if you reply, and drop any explicit mention of the ***@***.*** account.
Why do an ecall? The only thing ACT cares about is did the test pass or fail; anything else is beyond the scope of certification.

Adding in hooks so that a DUT can debug why it is failing is a good thing, but this should IMO be left up to the DUT via hooks RVI specifies in model_test.h. If at the top level the test has already determined (in the SIGUPD macro) that the test has failed, I see no reason to call ecall to figure out that the test has failed. We are way over-engineering this, IMO.

This goes back to my previous comments. ACT is in the certification business, NOT the verification business or the debug business.

I thought we had agreed that the previous behavior (comparing to a Sail model signature) was going to be in the new 3.x branch that will be created, and going forward we will NOT have to carry any baggage from it for the self-checking tests.

I for one want a clean break between these two methods. Not a mishmash of compile it this way for one and this way for another. This will only lead to more confusion in how to use ACT.

IMO, we need to make this as brain-dead as possible to run at the top level: pass/fail, nothing else, with hooks for someone to add debug/halt/whatever they want to do.

At the end of the day, pass/fail is all RVI should care about.
ECALL provides enough supporting information to enable debugging. Anything else doesn't. You were the one that insisted that debug be possible (as I recall, anyway). The cost here is tiny, so we'd be foolish to ignore it. The cost of retaining the old methodology is also trivial once the new methodology works. Customers get to decide how they want to use this for their own purposes.

From a certification perspective, we can insist on just the new self-checking methodology - but that doesn't mean that we must eliminate the old way, especially considering the cost (or lack of it).

But "At the end of the day Pass/Fail is all RVI should care about" is spurious: what certification customers care about also matters, perhaps more than what RVI cares about -- if we want customers.
Yes, I want the ability to debug via a macro call from the SIGUPD macro. I am not saying ditch the old methodology. I thought we had agreed that we would make one last release, then branch the tree as "version3.x" or something similar. This would be the place to go if you want the old methodology. Going forward, dev (or some other branch) would be the self-checking code.

If we provide the hooks to enable the use of ACT for debug without dictating what those hooks should do, IMO this will be enough.
As much as possible outside of certification, I want to provide hooks (debug, etc.) for companies to use ACT for debug and any other use case they might come up with. RVI wants to certify a design, not verify it.
I really don't care which approach certification takes, but the SW should be configurable to either, and that's what we should open source. From the point of view of the final results, it doesn't matter - if there are no failures, the test result will be identical.
Contributors: @MuhammadHammad001, @allenjbaum, @UmerShahidengr
Support for Self Checking (Work in Progress)
Supported
Not Supported (In Progress)
Overall, the workflow will look like this:
