
Support for snapshot(fix=True) #109

Open
tiangolo opened this issue Sep 4, 2024 · 2 comments

Comments

tiangolo commented Sep 4, 2024

Thank you for inline-snapshot! It's great. 🙇 🍰

Request

I came to inline-snapshot from Samuel Colvin's insert_assert. One thing I miss and would like to have is not requiring the CLI to be run in order to generate the snapshots.

I would like to be able to click the icon in my editor (VS Code) to run the test and get the snapshot with that.

This is a request somewhat similar to #62 and #57

Example

I would like to be able to write:

def test_stuff():
    assert "foo" == snapshot(create=True)

...and after that single run (equivalent to running pytest without inline-snapshot parameters), I would get:

def test_stuff():
    assert "foo" == snapshot("foo")

And then:

def test_stuff():
    assert "foo" == snapshot("bar", fix=True)

would result in:

def test_stuff():
    assert "foo" == snapshot("foo")

Alternatives

I can think of a couple of alternatives. It could be that snapshot(fix=True) would cover both fix and create. It could also be snapshot(mode="create") and snapshot(mode="fix") to avoid invalid states like snapshot(fix=True, create=True).
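To make the "invalid states" point concrete, here is a minimal hypothetical sketch (not the real inline-snapshot API) of how a single `mode` parameter could rule out contradictory flag combinations:

```python
from typing import Any, Optional

# Hypothetical sketch only: the real snapshot() rewrites source files.
# This stub just shows how one `mode` parameter avoids invalid states
# like fix=True together with create=True.
_VALID_MODES = {None, "create", "fix"}


def snapshot(value: Any = None, *, mode: Optional[str] = None) -> Any:
    if mode not in _VALID_MODES:
        raise ValueError(f"invalid mode: {mode!r} (expected 'create' or 'fix')")
    # A real implementation would, on mode="create"/"fix", record the
    # observed value and rewrite the call site in the test file.
    return value
```

With two independent booleans, `snapshot(fix=True, create=True)` would be accepted by the signature but be meaningless; a single enum-like parameter makes that state unrepresentable.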

Tests Always Passing

The result of one of these code-generating runs could be a "failure", so that regenerating the test data doesn't automatically make the test "pass". And since the rewritten code removes the flag, the data would not be regenerated again on the next run.

So, I would have to set the function parameter manually whenever I want to update a snapshot, and as it would be removed right after, no committed code would carry those flags, preventing the tests from always "fixing"/passing automatically.

In Short

In short, I want to be able to run it through the UI, which means running pytest without parameters underneath, instead of having to call it through the CLI, figure out which file I want to test, find its path among many other files, or run the tests for everything just to update a single test.

15r10nk (Owner) commented Sep 4, 2024

Thank you for your feedback and your sponsoring ❤️

Enabling inline-snapshot by default currently means (because of #57) disabling pytest's assert rewriting.

> I would like to be able to click the icon in my editor (VS Code) to run the test and get the snapshot with that.

You can enable create and fix by default in the configuration.

Currently this gives you no control over which snapshots will be updated, which brings the risk of changing snapshots you may not want to change (if you run your whole test suite). But it might be useful if you only run specific tests in your UI.
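For reference, that configuration lives in pyproject.toml. If I remember the current option names correctly (the key name here is from memory, so please verify it against the inline-snapshot docs), it looks roughly like:

```toml
# Assumed inline-snapshot configuration: apply "fix" and "create"
# by default on every pytest run (key name unverified).
[tool.inline-snapshot]
default-flags = ["fix", "create"]
```

With this in place, a plain `pytest` run (as launched by the VS Code test UI) would behave like `pytest --inline-snapshot=fix,create`.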

Another way is to add some extra pytest args in the VS Code config .vscode/settings.json:

{
    "python.testing.pytestArgs": [
        "tests", "--inline-snapshot=fix,create"
    ]
}

It would be nice if we could configure a third option besides "Run Test" and "Debug Test", but I think this is currently not possible.

The problem with this approach is that you approve the current behavior every time you run your tests. A bug will not cause your tests to fail; it will change them and make sure the bug stays in the code 😄 💥. This is the reason why I'm not promoting this approach.

snapshot(fix=True)

I don't like the idea of creating extra syntax for this. It feels like a workaround for "I want to click on the snapshot and rerun the test to fix/create the value".
I plan to write an LSP client for inline-snapshot in the future (hopefully not too far in the future 😄). This looks like a good feature for that client, but I don't know how easy it would be to run the pytest tests inside VS Code.

Do you really have the use case where you only want to fix one of multiple snapshots in the same test?
I think rerunning a test and fixing all the snapshots in it might be a better option.

Another workflow might be: you run your tests, inline-snapshot records what it can fix, and an LSP client lets you fix the snapshots in your source code after the test has failed.
You would not only see the red dot in front of your tests, but also an annotation showing which snapshot can be fixed, and a code action to approve the change.

tiangolo (Author) commented Sep 4, 2024

Thanks for the quick feedback!

> Do you really have the use case that you only want to fix one of multiple snapshots in the same test?

Yeah. My main workflow for inline-snapshot is, for example, in FastAPI tests: when testing the OpenAPI schema, it's a large dict/JSON with values generated from the standard.

For example, here's a short one: https://github.com/fastapi/fastapi/blob/master/tests/test_tutorial/test_first_steps/test_tutorial001.py#L22-L42

But then, there are some that are really long.

When those don't pass, it's not easy to find what's breaking. So being able to update the snapshot for each specific one of those, and then see the differences between the old and new OpenAPI with git, can help a lot.

LSP client

That sounds very cool! I'm not even sure what it means and how it would work, but that would probably be much better than what I'm thinking. 😅


I think for now I can set and unset the config before each run; that's working. 🤓 The downside is that if I forget to change the config back after a snapshot-generating run, the snapshot would be regenerated again, and there's no way to "enable only once". But that will work for now.

Feel free to close this one now! ☕
