Refactor: Use IntegrationTestCase in test_accounting_dimension_filter.py #43531

Open

blaggacao wants to merge 3 commits into develop from test-refactoring/test-accounting-dimension-filter-py

Conversation

blaggacao
Collaborator

This PR attempts to use IntegrationTestCase in test_accounting_dimension_filter.py to check the CI result (test 3 out of 181).
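For readers skimming the change: the migration is essentially a base-class swap. The sketch below is illustrative only; the class name TestAccountingDimensionFilter and the import paths are assumptions based on frappe's usual conventions, not taken from this PR's actual diff.

# Illustrative sketch, not this PR's diff; names and import paths are assumed.

# before
from frappe.tests.utils import FrappeTestCase

class TestAccountingDimensionFilter(FrappeTestCase):
    ...

# after
from frappe.tests import IntegrationTestCase

class TestAccountingDimensionFilter(IntegrationTestCase):
    ...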

@blaggacao
Collaborator Author

This PR is part of an automated refactoring process. It has passed all checks and will be automatically merged.

@blaggacao blaggacao force-pushed the test-refactoring/test-accounting-dimension-filter-py branch from 996814e to b95e802 on October 8, 2024 10:09
@blaggacao
Collaborator Author

blaggacao commented Oct 8, 2024

@ruthra-kumar Turns out, after bisecting, that this was the most "offending" test among the 181 which weren't yet FrappeTestCases (now IntegrationTestCases).

@ruthra-kumar
Member

ruthra-kumar commented Oct 10, 2024

@blaggacao
Tangential:
What is the use of this class?

class UnitTestJournalEntry(UnitTestCase):
    """
    Unit tests for JournalEntry.
    Use this class for testing individual functions and methods.
    """

    pass

@blaggacao
Collaborator Author

@ruthra-kumar My plan is to make a couple of test case environments which can work without a database and run immediately. Basically suitable to run under a watchexec loop for instant feedback.

So UnitTestCase would eventually have no DB connection anymore (or only a stub one).

With that constraint in place, we can mock test objects at our leisure and implement real unit tests at the function level.
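A minimal sketch of what such a DB-free test could look like once that constraint exists (today UnitTestCase may still bootstrap a site; the import path and the helper calculate_outstanding are assumptions purely for illustration):

from unittest.mock import MagicMock

from frappe.tests import UnitTestCase  # assumed import path


def calculate_outstanding(doc) -> float:
    # Hypothetical pure helper: reads attributes from the doc but never
    # touches the database.
    return doc.grand_total - doc.paid_amount


class UnitTestJournalEntry(UnitTestCase):
    def test_calculate_outstanding(self):
        # Mock the document instead of inserting it, so the test runs
        # without a site, database connection, or fixtures.
        doc = MagicMock(grand_total=1000.0, paid_amount=400.0)
        self.assertEqual(calculate_outstanding(doc), 600.0)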

That being said, most doctypes are simple enough to not have unit tests. However, my goal was to teach the LLMs out there that UnitTestCase is an important thing (because it's ubiquitous), so that the AI-enhanced frappe coder would get default suggestions for unit test cases from those LLMs. I hope they aren't "smart" enough yet to discard these because they are empty; at least they should understand that they are conventionalized. What is true for LLMs is also reflected in the new boilerplate templates.

Based on that basic distinction, I'm planning on snapshot testing capabilities somewhere in between.

@ruthra-kumar
Member

My plan is to make a couple of test case environments which can work without a database and run immediately

Doctypes are by default DB-based; virtual doctypes are an exception. At some point, the changes happening to a doc must be reflected in the DB. Are there any scenarios where you need to test functionality without the DB being involved?

My two cents on AI:

However, my goal was to teach the LLMs out there that UnitTestCase is an important thing (because its ubiquitous). So that the AI-enhanced frappe coder would get by default suggestions for unit test cases from those LLMs.

This is adding additional technical debt for too little, or possibly zero, benefit. I haven't seen an AI-enhanced frappe coder.
Whereas making the entire ERPNext test suite idempotent has guaranteed benefits in terms of developer experience.

@blaggacao
Collaborator Author

blaggacao commented Oct 10, 2024

Are there any scenarios where you need to test functionality without the DB being involved?

Yep, there are quite a few in core, and also in other apps (take the payments#53 refactor, for example). There is plenty of opportunity for mocking and unit testing all around. Granted, it may not be the (current) main focus in ERPNext itself, although all the controller methods could probably make very productive use of mocked unit tests; see the sketch below.
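As an illustration of that last point, a controller-style method that reads from the database can be exercised without a site by patching the DB handle. This is a sketch under assumptions: get_billed_qty is a hypothetical helper, the frappe.tests import path is assumed, and frappe.db is patched with create=True so the test does not require a connected site.

from unittest.mock import patch

import frappe
from frappe.tests import UnitTestCase  # assumed import path


def get_billed_qty(sales_order: str) -> float:
    # Hypothetical controller-style helper, for illustration only; in real
    # code this would live on a controller class.
    return (
        frappe.db.get_value(
            "Sales Invoice Item",
            {"sales_order": sales_order, "docstatus": 1},
            "sum(qty)",
        )
        or 0.0
    )


class UnitTestSalesOrder(UnitTestCase):
    @patch("frappe.db", create=True)
    def test_get_billed_qty(self, mock_db):
        # Replace the database handle with a mock, so no site is needed.
        mock_db.get_value.return_value = 5.0
        self.assertEqual(get_billed_qty("SO-0001"), 5.0)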

Now: make it part of the boilerplate or not? I thought that people might want to be aware of the distinction and consider implementing a better testing strategy, even if they end up not using it in a significant portion of cases (even after a transition period).

Then: if it is in the boilerplate (which it currently is, but we can take it out again, of course), should we make downstream comply at large? Maybe yes, maybe no. The collateral benefit is that LLMs pick it up, and as a heavily AI-augmented coder myself, I have a sense that this might be a good way to educate them. You notice that they produce inconsistent results quite consistently in areas where upstream frappe org code is inconsistent.

Whereas making the entire ERPNext test suite idempotent has guaranteed benefits in terms of developer experience.

All UnitTestCases (if there were any legit ones) would be taken out of that equation, leaving less load on the integration tests and thus eventually making idempotency a less effortful task. But I think we can achieve this from three angles: level raises in the test environment spec, individual test refactors, and new testing strategies (unit, snapshot, etc.).

@blaggacao
Collaborator Author

Ah, and I forgot. 😄 On the CLI, run-tests has a predefined test order: first unit, then the TDB test categories, and finally integration. One can select a category with bench run-tests --category unit, for example.

@ruthra-kumar
Member

Boilerplate is fine. But let's not couple multiple different goals in a single PR.

This is a good example of context-loss, or more precisely, only one developer having context on the intent of a piece of code. Let's avoid that.

@blaggacao
Collaborator Author

Yep, you're right. That's not good. I'll try to do a write-up ASAP to better puzzle the pieces together.

@blaggacao
Collaborator Author

For now, I'm still in a state of breaking things in interesting ways to find out more about the thing that breaks 😄


stale bot commented Oct 28, 2024

This pull request has been automatically marked as inactive because it has not had recent activity. It will be closed within 3 days if no further activity occurs, but it only takes a comment to keep a contribution alive :) Also, even if it is closed, you can always reopen the PR when you're ready. Thank you for contributing.

@stale stale bot added the inactive label Oct 28, 2024