fix(windows): don't send key twice #280
Conversation
Windows CI fails with
Can PRs not access the url variable?
Hmm. It seems like it's a permission thingy relating to (dis)allowing secrets access on PRs from external repositories. See https://securitylab.github.com/research/github-actions-preventing-pwn-requests/. It is possible to allow PRs access to secrets by changing the workflow trigger. Granted, PRs from first-time contributors require explicit approval to run workflows, but there's nothing preventing a more cunning attacker from bypassing this by submitting an innocuous PR before the actual attack. We could change it so that all workflows need approval, but obviously that's hardly an ideal solution. You got any ideas?
It can be argued that since the only use for secrets in this repository is npcap's OEM installer download link, the consequences of a pwn are not as significant as those of, let's say, a private key. Considering the consequences for the attacker (likely losing their account), it is unlikely that anyone would find this a worthy target. Obviously this is a questionable attitude towards security in general. Plus, doing this would mean that we would no longer be able to use GitHub secrets for something more sensitive if such a need ever arises (e.g. a deploy key). So I don't like this option either.
I didn't find any info about this, but this PR seems to use
I see, it's a self-hosted runner.
For reference, this is how they do it. Not great, not terrible™️. If we cannot come up with something better, I suppose this is what we'll have to go with.
Also, for the record, the issue addressed by this PR seems to impact macOS too, but only occasionally. Examples:
I think that's reasonable, but in theory the situation could be better, since many (if not all) of the tests don't actually use npcap. Maybe we can add some cfgs to not build it and stub its functions when targeting tests.
This PR doesn't address the test flakiness issue. This PR addresses a problem that occurs on Windows: if you press a key, it triggers twice, because a release event also gets sent. I don't have Windows to test on, so it would be great if you could verify that this is indeed a problem and that this PR fixes it. I'm sending this PR without even having triggered the bug first, because I saw this issue filed many times against the crossterm repo.
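The fix described above amounts to handling only key-press events and ignoring the release events Windows emits. A minimal self-contained sketch, using a local enum that mirrors crossterm's `KeyEventKind` (Press/Repeat/Release) so it compiles without the crate:

```rust
// Minimal mirror of crossterm's KeyEventKind, defined locally so this
// sketch is self-contained. In real code you would match on the kind
// field of the crossterm KeyEvent instead.
#[allow(dead_code)]
#[derive(Debug, PartialEq)]
enum KeyEventKind {
    Press,
    Repeat,
    Release,
}

// On Windows, each keystroke arrives twice: once as Press and once as
// Release. Acting only on Press avoids triggering the handler twice.
fn should_handle(kind: &KeyEventKind) -> bool {
    matches!(kind, KeyEventKind::Press)
}

fn main() {
    assert!(should_handle(&KeyEventKind::Press));
    assert!(!should_handle(&KeyEventKind::Release));
    println!("release events ignored");
}
```

The design choice is just an early-return filter in the key-event handler; non-Windows platforms typically only ever deliver Press, so the filter is a no-op there.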
Hmm. Good idea. I'll give it a shot in a separate PR. |
I see. Sorry about that. So what do you think about these strange transient failures on macOS?
Unfortunately I don't have an explanation right now. My idea is, if we can't actually fix it,
Or maybe switch to nextest, which does support this: https://nexte.st/book/retries.html?highlight=flaky#retries-and-flaky-tests
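For reference, the retry behavior from the nextest docs linked above can be configured per profile. A minimal sketch, assuming a `.config/nextest.toml` at the repo root:

```toml
# .config/nextest.toml — retry failing tests up to twice before
# reporting them as failed, which papers over transient flakiness.
[profile.default]
retries = 2
```

Nextest also reports tests that passed only on retry as "flaky" in its output, which would at least make these transient macOS failures visible instead of red.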
So I tried your idea, but as a wise man once said: "trying is the first step towards failure. You tried your best and failed miserably. The lesson is, never try." Jokes aside, it seems like the entire test binary will fail if stuff from
Anyway, my conclusion is that it's way too much effort for too little gain. I'll just disable Windows tests for PRs and call it a day.
Okay, if you rebase now I think the tests should pass. |
* Cache npcap SDK on Windows
* Call build function correctly
* Log when local cache of SDK is found
* Fix clippy warnings
* Log to STDERR
I tried to debug the flaky tests today, but that went nowhere; hopefully you have better ideas.
see crossterm-rs/crossterm#797 (comment)
Windows release events need to be explicitly ignored.
I don't have Windows to test on, but I saw this issue filed against crossterm many times, so I'm assuming bandwhich has the same problem.
crossterm-rs/crossterm#778 would improve things
Note: I don't think this will affect tests; maybe we should start disabling the flaky ones.