Adding Visual Test Report in Github Actions #7653
base: dev-2.0
Conversation
Nice work @Vaivaswat2244! I'm a little less familiar with GitHub Actions myself. @ksen0 and @limzykenneth, does this look OK to you?
Hello @davepagurek @limzykenneth @ksen0, just following up on this PR. I'm wondering if there's any additional feedback, or if we should move forward with the implementation as is? If you'd prefer to explore alternative approaches first, or if we are waiting for the pr05 project, I'd be happy to convert this to a draft PR in the meantime. Otherwise, I'm ready to address any feedback to get this merged.
Thanks for your patience @Vaivaswat2244! I will have time next week (when the GSoC reviews are done :) to review this, though if someone else reviews it before then, their feedback is welcome. Sorry for the wait.
Hey @ksen0, can this be reviewed now?
Hi @Vaivaswat2244, thanks for all the patience on this and for the reminder. It's really nicely done! I've added notes on 3 points below; let me know what you think about all that, and if you have any other ideas (or any questions). The report looks great, especially the filter for "show only failed."

(1) There are false positives for FAIL - these tests passed but are missing images - could you investigate why this happens, please? *(screenshot)*

(2) Very, very minor nice-to-haves - for consistency, use lowercase step names ("upload visual test report"). If it's easy to add, include a link to the PR or the # in the report HTML. Feel free to ignore this feedback.

(3) Major - execution conditions: if there is a failing case, then: *(screenshot)*

In the example above I'm using:

```yaml
name: Testing
on:
  push:
    branches:
      - main
      - dev-2.0
  pull_request:
    branches:
      - '*'
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - name: Use Node.js 20.x
        uses: actions/setup-node@v1
        with:
          node-version: 20.x
      - name: Get node modules
        run: npm ci
        env:
          CI: true
      - name: build and test
        id: test
        run: npm test
        continue-on-error: true
        env:
          CI: true
      - name: Generate Visual Test Report
        if: always()
        run: node visual-report.js
        env:
          CI: true
      - name: Upload Visual Test Report
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: visual-test-report
          path: test/unit/visual/visual-report.html
          retention-days: 14
      - name: report test coverage
        if: steps.test.outcome == 'success'
        run: bash <(curl -s https://codecov.io/bash) -f coverage/coverage-final.json
        env:
          CI: true
      - name: fail job if tests failed
        if: steps.test.outcome != 'success'
        run: exit 1
```

I was able to test this by adding an artificial problem into the test suite:

```js
visualTest("with the default font", function (p5, screenshot) {
  p5.createCanvas(50, 50);
  p5.textSize(20);
  p5.textAlign(p5.LEFT, p5.TOP);
  p5.text("wrong text", 0, 0); // should be 'text' so it should definitely fail
  screenshot();
});
```

This failure does show up in the report, but without the actual image: *(screenshot)*

Looks like

(4) [Edited to add:] Additional improvements - could be a separate PR, or can be included in this one - could include a diff image, mirroring what *(screenshot)*

If problem (3) is addressed, happy to merge!
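For point (4), a per-pixel diff could look roughly like the sketch below. This is a minimal illustration, not the PR's implementation: `diffPixels` is a hypothetical name, and the choice to dim matching pixels and mark mismatches in red is an assumption; it only assumes two RGBA pixel buffers of equal size, as produced by a canvas.

```javascript
// Hypothetical sketch of a per-pixel diff for the visual report.
// Given two RGBA buffers of the same width x height, produce a third
// buffer that dims matching pixels and paints mismatches solid red.
function diffPixels(expected, actual, width, height) {
  const diff = new Uint8ClampedArray(expected.length);
  let mismatched = 0;
  for (let i = 0; i < width * height; i++) {
    const o = i * 4; // byte offset of pixel i (RGBA)
    const same =
      expected[o] === actual[o] &&
      expected[o + 1] === actual[o + 1] &&
      expected[o + 2] === actual[o + 2] &&
      expected[o + 3] === actual[o + 3];
    if (same) {
      // light grey so differences stand out
      diff[o] = diff[o + 1] = diff[o + 2] = 230;
      diff[o + 3] = 255;
    } else {
      mismatched++;
      diff[o] = 255;
      diff[o + 1] = 0;
      diff[o + 2] = 0;
      diff[o + 3] = 255;
    }
  }
  return { diff, mismatched };
}
```

The returned `mismatched` count could also feed a tolerance threshold before a test is reported as failing.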
Got it! I reviewed the code, and yes, currently the report is not generated in case of test failure. This was a logical misunderstanding on my side, which I'll fix. Regarding the false positives I'm not sure; I'll test the new logic, and most probably that will fix points (1) and (3). I'll also update the script to add the diff image. Thanks for the detailed review on the PR!
hey everyone, |
Added a few thoughts, thanks for working on it!
```
__screenshots__/
actual-screenshots/
```
Minor, didn't notice it before: for consistency with the existing `__screenshots__`, could you rename `actual-screenshots` to something like `__screenshots_actual`, please?
Well, the `__screenshots__` here are not from the visual tests, as they are not in the .gitignore file and are present in the repo. Though I will change the name of the folder to `screenshots_actual` if you wish.
No you're right it's not comparable, no worries, please disregard
```diff
@@ -452,6 +458,7 @@ export function visualTest(
      }
    } else {
      writeImageFile(expectedFilenames[i], toBase64(actual[i]));
      writeImageFile(actualFilename, toBase64(actual[i]));
```
Also minor: this looks like it might be an accidental duplicate since it's also on line 449
This was written because earlier the command appeared in both the if and else conditions. Now I've moved `writeImageFile(actualFilename, toBase64(actual[i]));` outside the conditions.
```diff
@@ -441,9 +441,15 @@ export function visualTest(
        : [];

      for (let i = 0; i < actual.length; i++) {
        const flatName = name.replace(/\//g, '-');
        const actualFilename = `../actual-screenshots/${flatName}-${i.toString().padStart(3, '0')}.png`;
        if (expected[i]) {
          const result = await checkMatch(actual[i], expected[i], myp5);
```
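The flattening in this diff can be isolated as a small helper for clarity. This is a sketch: `actualScreenshotPath` is a hypothetical name, but the replace-and-pad pattern is taken directly from the two `const` lines above; nested test names like `typography/text` become flat, zero-padded file names.

```javascript
// Hypothetical helper mirroring the two const lines in the diff above:
// slashes in the test name are flattened to hyphens, and the screenshot
// index is zero-padded to three digits.
function actualScreenshotPath(name, i) {
  const flatName = name.replace(/\//g, '-');
  return `../actual-screenshots/${flatName}-${i.toString().padStart(3, '0')}.png`;
}
```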
Just an idea: could the missing actual screenshots potentially be explained by some surprising behavior here? I.e., if you move `writeImageFile(actualFilename, toBase64(actual[i]));` to above line 446, does the issue persist?
You mean moving it out of the if condition, am I right? If so, I tried that as well; still the same issue.
Resolves #7618
This PR adds support for generating an HTML report to compare visual test results more effectively. The goal is to improve accessibility and readability of visual regression test outputs.
Changes:
Screenshots of the change:
This is a screenshot of the generated HTML page. It shows the expected and actual visuals for each test.
@davepagurek @ksen0 Can you have a look at this?
You can go to the Actions run, under Summary, and download the report from the artifacts section.
PR Checklist

- [x] `npm run lint` passes