endpoints.ts unit tests unstable #44
Reference: woem.men/forkey#44
💡 Summary
The unit tests defined in `backend/test/e2e/endpoints.ts` are unstable and sometimes fail, only to succeed when the job is rerun.
🥰 Expected Behavior
The unit tests should succeed, especially when the code being changed does not affect them. They should be stable, meaning that the same unit tests run on the same code produce the exact same results.
🤬 Actual Behavior
The unit tests are erratic: they sometimes fail and sometimes succeed, even when run on the same branch without any code modifications.
📝 Steps to Reproduce
No response
💻 Frontend Environment
🛰 Backend Environment (for server admin)
Do you want to address this bug yourself?
Left Steps to Reproduce empty in case we find a pattern (maybe caused by one of the action runners acting up?).
job failure logs
job success logs
```ts
assert.strictEqual(newBob.followingCount, 0);
assert.strictEqual(res.status, 400);
assert.strictEqual(res.status, 400);
```
These are the failing lines. For the last two, the test explicitly checks only that the request fails (status 400), but it succeeds instead. It could be that the request before it fails, and therefore the one after it succeeds; that earlier request would need to be checked too.
For the first one: would the later check on Alice also fail?
Right now the only explanation I can think of is that the API is still serving old data, either because it uses cached data or because, for some odd reason, it fetches the follower count before the unfollow happens (because it is async?). It would be interesting to introduce a delay and see whether the test then fails less often (or never). One request executing before the other could also explain the second and third failure.
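One way to test the stale-data theory without a fixed sleep is to poll the assertion until it passes or a deadline expires. This is only a sketch; the `waitFor` helper, timeout, and interval are assumptions, not existing test utilities of this project:

```typescript
// Hypothetical helper: retry an async check until it passes or a timeout is
// reached. If the flake is caused by the API briefly serving stale data,
// wrapping the assertion like this should make the test pass reliably; if it
// still fails, staleness is probably not the cause.
async function waitFor(
	check: () => Promise<void>,
	timeoutMs = 5000,
	intervalMs = 100,
): Promise<void> {
	const deadline = Date.now() + timeoutMs;
	for (;;) {
		try {
			await check();
			return; // check passed
		} catch (err) {
			if (Date.now() >= deadline) throw err; // give up, rethrow last failure
			await new Promise(resolve => setTimeout(resolve, intervalMs));
		}
	}
}

// Usage sketch (`show` stands in for whatever helper fetches the user):
// await waitFor(async () => {
// 	const newBob = await show(bob.id);
// 	assert.strictEqual(newBob.followingCount, 0);
// });
```

Compared to a plain delay, this keeps the test fast in the common case and only waits when the data really is late.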
@ashten do you mind sharing another failure log (even better if two)? That way I could double-check whether other tests fail, or only these three.
The test failures also need to be more descriptive. In this example, if the API fails or passes for an unexpected reason, it would be useful if the earlier responses were logged as well.
This also appears to be an upstream issue. It happens there too, although not at this rate: https://github.com/MisskeyIO/misskey/actions/runs/12816504556/job/35737592822#step:10:1103