we need a regression test framework which actually works. runtests has to go. #36
Comments
Comment by zultron So, Michael's Machinetalk merge included some nosetests. PR #332 isn't ready for merge yet, but it intends to fix some minor problems and then, hopefully, get the tests running in the buildbot. See #105, just closed, for griping about the old system.
Comment by mhaberler ...and we need a way to automatically regression-test the supplied configurations against API changes. Currently there is no such thing, which creates a lot of cleanup work after the fallout. For most configs it would be sufficient to start them without actually starting a UI and see whether they 'come up' (i.e. do not fail). The only configs which would not work that way are those which rely on a UI creating pins, like the ones using gladevcp and the POSTGUI_HALFILE= feature, but those could be excluded automatically. Any takers?
Comment by machinekoder Configs that depend on specific hardware cannot be tested this way. One could, however, add a test flag to the Python-based HAL configs to exclude hardware-specific functions.
Comment by mhaberler @Strahlex that is a pretty good idea! What I can imagine: extend the exit handler in the cython hal module to check for uniqueness of names; something similar can be added to halcmd to check on exit. Re hardware: regression-testing the sim configs is already a start.
Comment by machinekoder We could also add proper per-component runtests, or simplify/unify the process of creating one. That shouldn't be too hard using the HAL Cython bindings. It would require a way of triggering HAL functions manually from HAL Cython with custom arguments (the thread period, for example).
Comment by machinekoder Just updated machinekit/machinekit#391. Qt has a highly sophisticated sanity-checking system; maybe we could use parts of it to prevent common problems. Their regression tests are also triggered by and connected to PRs. They use Gerrit as their review/merge tool.
Comment by machinekoder Additionally, the testing framework should be language-independent so we can use it for reporting Python tests as well as C or shell tests. Not sure if something like this exists.
Comment by machinekoder Btw, another way to test configs that require specific hardware, without needing the hardware, is to use stub HAL components that have the same pin names and types as the real drivers but only stubbed-out or simulated functionality; see the sketch below.
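A minimal sketch of such a stub component, assuming the standard HAL C API (hal_init, hal_malloc, hal_pin_float_new, ...); the component name and pin names here are made up for illustration and would have to mirror the real driver being stubbed out:

/* stub_driver.c - hypothetical stub for a hardware driver */
#include "rtapi.h"          /* RTAPI realtime OS API */
#include "rtapi_app.h"      /* rtapi_app_main()/rtapi_app_exit() */
#include "hal.h"            /* HAL public API */

MODULE_LICENSE("GPL");

/* pin storage must live in HAL shared memory, hence a struct + hal_malloc() */
typedef struct {
    hal_float_t *position_cmd;   /* IN: commanded position - ignored by the stub */
    hal_bit_t   *enable;         /* IN: enable flag - ignored by the stub */
} stub_t;

static stub_t *inst;
static int comp_id;

int rtapi_app_main(void)
{
    int r;

    comp_id = hal_init("stub_driver");          /* hypothetical component name */
    if (comp_id < 0)
        return comp_id;

    inst = hal_malloc(sizeof(stub_t));
    if (inst == NULL) {
        hal_exit(comp_id);
        return -1;
    }

    /* pin names mirror those of the real hardware driver being replaced */
    r = hal_pin_float_new("stub-driver.0.position-cmd", HAL_IN,
                          &inst->position_cmd, comp_id);
    if (r == 0)
        r = hal_pin_bit_new("stub-driver.0.enable", HAL_IN,
                            &inst->enable, comp_id);
    if (r < 0) {
        hal_exit(comp_id);
        return r;
    }

    hal_ready(comp_id);
    return 0;
}

void rtapi_app_exit(void)
{
    hal_exit(comp_id);
}

Loaded in place of the real driver, such a stub lets a hardware-dependent config come up far enough for its net/pin wiring to be checked.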
Comment by mhaberler I think we need to tackle this per-language, as I don't see a regression-test package which covers all of them. Current thinking:
All of those are Debian-packaged, so my current proposal would be to organize per-language under src/regressions:
configure.ac will detect the prerequisite packages and build as needed. A 'make check' target runs all of the above, if built. A hello-world example for using check is here: mhaberler/machinekit@ec14144. Example output of a failed check test:
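The commit above and its failure output are not reproduced here; purely as an illustration, a minimal hypothetical sketch of a check-based test could look like this (file, test, and suite names are made up, and the build line assumes the Debian check package with its pkg-config file):

/* hello_check.c - hypothetical minimal check test
 * build: gcc hello_check.c $(pkg-config --cflags --libs check) -o hello_check */
#include <check.h>
#include <stdlib.h>

START_TEST(test_addition)
{
    ck_assert_int_eq(2 + 2, 4);          /* a failing assert is reported by the runner */
}
END_TEST

static Suite *hello_suite(void)
{
    Suite *s = suite_create("hello");
    TCase *tc = tcase_create("core");
    tcase_add_test(tc, test_addition);
    suite_add_tcase(s, tc);
    return s;
}

int main(void)
{
    SRunner *sr = srunner_create(hello_suite());
    srunner_run_all(sr, CK_NORMAL);
    int failed = srunner_ntests_failed(sr);
    srunner_free(sr);
    /* non-zero exit status is what lets a 'make check' target flag the failure */
    return (failed == 0) ? EXIT_SUCCESS : EXIT_FAILURE;
}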
Comment by zultron On 08/25/2015 04:08 AM, Michael Haberler wrote:
When I refactored the nosetests as part of the 'master daemon' project,
Comment by mhaberler I have been using check with success; it is simple to understand and does the job. Unlike the Cython wrappers, it can be used for timing-critical code and multithreaded tests (of which I have quite a few now). For C++ I found a less-filling alternative to gtest, catch - just a single header. I'll give that a spin.
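As an illustration of the timing/multithreading point (a hypothetical sketch, not one of the actual tests): check runs each test in a forked child process by default, and tcase_set_timeout() turns a hung test into a reported failure instead of a stuck test run.

/* threads_check.c - hypothetical sketch of a multithreaded check test
 * build: gcc threads_check.c $(pkg-config --cflags --libs check) -lpthread -o threads_check */
#include <check.h>
#include <pthread.h>

#define NTHREADS 4
#define NITER    100000

static void *worker(void *arg)
{
    int *counter = arg;
    for (int i = 0; i < NITER; i++)
        __sync_fetch_and_add(counter, 1);   /* atomic increment */
    return NULL;
}

START_TEST(test_parallel_increment)
{
    int counter = 0;
    pthread_t t[NTHREADS];
    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&t[i], NULL, worker, &counter);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(t[i], NULL);
    ck_assert_int_eq(counter, NTHREADS * NITER);
}
END_TEST

int main(void)
{
    Suite *s = suite_create("threads");
    TCase *tc = tcase_create("core");
    tcase_set_timeout(tc, 5.0);             /* kill the forked test if it hangs */
    tcase_add_test(tc, test_parallel_increment);
    suite_add_tcase(s, tc);

    SRunner *sr = srunner_create(s);
    srunner_run_all(sr, CK_NORMAL);
    int failed = srunner_ntests_failed(sr);
    srunner_free(sr);
    return failed == 0 ? 0 : 1;
}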
Comment by yishinli Hi Michael, What's your most up-to-date branch for regression tests? -Yishin |
Comment by mhaberler Hi Yishin, I have nothing mergeable at the moment, but I have been working with Mick for several months on the SMP-safe HAL branch, and that contains several tests using check - have a look under src/regressions/check in https://github.com/mhaberler/machinekit/tree/disentangle. I have imported the catch header, but have not used it yet for any actual regression tests. Assuming you are looking for C, my recommendation would be to install the check package. For Python-based tests, see nosetests/*py and tests/nosetests - you need the python-nose package for that to work.
Comment by yishinli Trying to run "make check", I got an error message:
It seems like I need to install Concurrency-Kit, right? -Yishin
Comment by mhaberler Please do not base any work on this branch, and do not assume it will be merged as is - it is not ready, so there's no point in going for a build, and I do not want that branch to start creeping out early and setting expectations. Please see the docs at http://check.sourceforge.net/. Just take regressions/check_tests as an example to build your own, and take the suite files as examples to read and learn from: take regressions/check_tests/check_tests.c and the Submakefile, plus the change in src/Makefile, and create your own test program from them; delete all the *-suite.h includes and the srunner_add_suite() calls which refer to the tests in the *-suite includes. A sketch of the resulting skeleton is shown below.
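A sketch of that skeleton, with hypothetical file and function names standing in for the real check_tests.c and *-suite.h files in the branch:

/* my_tests.c - hypothetical trimmed-down variant of check_tests.c */
#include <check.h>

/* in the real tree this prototype would come from a *-suite.h include,
 * implemented in its own .c file next to the test runner */
Suite *my_component_suite(void);

int main(void)
{
    /* the runner is created with the first suite ... */
    SRunner *sr = srunner_create(my_component_suite());

    /* ... and each additional suite gets one srunner_add_suite() call;
     * these calls (and their includes) are what you delete or replace
     * when deriving your own test program */
    /* srunner_add_suite(sr, another_suite()); */

    srunner_run_all(sr, CK_NORMAL);
    int failed = srunner_ntests_failed(sr);
    srunner_free(sr);
    return failed == 0 ? 0 : 1;
}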
Comment by yishinli After installing ConcurrencyKit, I could compile the "disentangle" branch.
However, while running the "check_test", I got the following error message:
This error came from hal_setup(), where (hal_data == NULL). How can I dig into why hal_init() is not initializing correctly?

void hal_setup(void)
{
    // printf("number of cores=%d\n", cores);
    comp_id = hal_init("testme");        // returns a component id, negative on failure
    if (hal_data == NULL) {              // HAL shared memory segment not attached
        fprintf(stderr, "ERROR: could not connect to HAL\n");
        exit(1);
    }
    hal_ready(comp_id);
}
Comment by mhaberler Well, you need to do a realtime start before running this test. hal_data being NULL means the HAL shared memory segment cannot be attached to, and that segment is created by rtapi_app.
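A slightly more defensive variant of that setup fixture (a sketch, not the code from the branch): checking hal_init()'s return value directly avoids relying on the internal hal_data pointer, and the error message can hint at the missing realtime environment. hal_teardown() and the fixture registration shown in the trailing comment are hypothetical names.

#include <stdio.h>
#include <stdlib.h>
#include <check.h>     /* for tcase_add_checked_fixture() when wired into a suite */
#include "hal.h"

static int comp_id;

static void hal_setup(void)
{
    comp_id = hal_init("testme");
    if (comp_id < 0) {            /* negative return: could not attach to HAL */
        fprintf(stderr, "ERROR: hal_init() failed (%d) - "
                        "is the realtime environment running? try 'realtime start'\n",
                comp_id);
        exit(1);
    }
    hal_ready(comp_id);
}

static void hal_teardown(void)
{
    hal_exit(comp_id);
}

/* registered on the test case so every test gets a fresh HAL component: */
/*   tcase_add_checked_fixture(tc, hal_setup, hal_teardown); */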
Comment by yishinli Thanks, it works!
In our testing environment, zeroconf_service_withdraw() may occasionally be called; a SIGTERM is triggered to stop haltalk. I'm thinking of writing a unit test case to trigger zeroconf_service_withdraw().
Comment by mhaberler @yishinli - re haltalk: are you suggesting haltalk crashes? I do not understand what could cause "zeroconf_service_withdraw() may occasionally be called" - it is only called after event-loop termination. I did notice that haltalk sometimes fails to terminate; is that what you are referring to?
Comment by yishinli It seems like a haltalk crash, but there's no segmentation-fault message. The only clue I have is that a SIGTERM is received by haltalk. It happens randomly, anywhere from 20 minutes to more than 2 hours in. I am trying to gather more meaningful log messages with DEBUG=5. Any suggestions?
Comment by mhaberler @yishinli - if this is on master, please open a new issue; this is simply the wrong issue. That said:
Issue by mhaberler
Sun Jun 15 06:57:00 2014
Originally opened as machinekit/machinekit#220
I'm losing patience with this utterly naive approach to 'testing' which we inherited - it is the single biggest time waster. This has to go.
The current suite of 'tests' is so full of race conditions, and uses such naive programs and scripts, that in many cases the failure of a runtest is basically meaningless - it usually just shows that the author did not consider the timing of parallel activities and how they can be properly interlocked.
Milestone proposal: deprecate runtests.
For each 'failure' found, rewrite the test with the new cython bindings once we have them, make sure the test actually says something, and disable the old one.