Implement Record/Replay functionality for btest #185
Conversation
I'm going to wait one day for feedback and then merge it.
Two potential bikeshedding points:

1. Do we expect different outputs for different architectures? If so, a target can end up with a recorded "semi-broken" behavior that looks green in the test runner, and you wouldn't notice without a deeper inspection ("vibe check") of that particular test's output. If a test is expected to be broken on a particular architecture (especially broken at runtime rather than at build time), it should be clearly marked that this architecture is "special" for this test.
2. If we go with "different test output per target", perhaps put the outputs in different files, so that diffs fixing codegen don't conflict with each other?
For me, it seems like the JSON currently contains

```json
"fasm-x86_64-windows": {
    "status": "RunSuccess",
    "stdout": "8070071\r\n"
}
```

although for some reason it works correctly for me (wine version: …). Maybe we could introduce a way to say that a particular test should only be run on some architectures, instead of asserting failure?
Pretty-printing the JSON makes the diff problem much less severe, sure. But there's still the conceptual problem of "by default we expect every architecture to have a different output". My idea for a system like this would be to have …
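To make the record/replay idea being discussed more concrete, here is a minimal sketch of what recording a per-target outcome and replaying it against a fresh run could look like. All names (`record`, `replay`, the snapshot layout) are hypothetical illustrations, not btest's actual implementation:

```python
import json
from pathlib import Path

# Hypothetical sketch of record/replay testing: each test has a JSON
# snapshot mapping a target triple to its recorded outcome.

def record(snapshot_path: Path, target: str, status: str, stdout: str) -> None:
    """Record the observed outcome for one target into the snapshot file."""
    data = json.loads(snapshot_path.read_text()) if snapshot_path.exists() else {}
    data[target] = {"status": status, "stdout": stdout}
    # Pretty-print so diffs touch only the changed target's lines.
    snapshot_path.write_text(json.dumps(data, indent=4, sort_keys=True))

def replay(snapshot_path: Path, target: str, status: str, stdout: str) -> bool:
    """Compare a fresh run against the recorded outcome for this target."""
    data = json.loads(snapshot_path.read_text())
    return data.get(target) == {"status": status, "stdout": stdout}
```

Pretty-printing with stable key order is what keeps per-target diffs small when many targets share one snapshot file.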
LGTM, other than tests/ being bloated with the JSON files.
We think it would be a good idea to put them in a subfolder.
@Miezekatze64 Thank you for your feedback! I changed the concept in 4b79ae0. Now build and runtime failures are not something we expect, and you can also disable tests on certain targets right in the JSON. Let me know what you think.
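As a concrete illustration of the scheme described above (expected outputs recorded per test, with per-target disabling), the JSON might look roughly like this. The field names here are assumptions for illustration, not the actual format introduced in 4b79ae0:

```json
{
    "arithmetic": {
        "expected_stdout": "8070071\n",
        "disabled_on": ["fasm-x86_64-windows"]
    }
}
```

Keeping the disable list inside the same JSON means a target can be opted out of a test without touching the test runner's code.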
Created a PR for this.
Looks good now. |
@nullnominal merged them all into a single file at b48c7e2. Let me know what you think.
yeah it's good
Thank you everybody for the feedback! Sorry that it took so long. This functionality is very important to me, and I can't continue merging other PRs without it. Thank you again!