
Adding Visual Test Report in GitHub Actions #7653


Open
wants to merge 4 commits into dev-2.0

Conversation


@Vaivaswat2244 commented Mar 21, 2025

Resolves #7618

This PR adds support for generating an HTML report to compare visual test results more effectively. The goal is to improve accessibility and readability of visual regression test outputs.

Changes:

  • Added a script to generate an HTML report summarizing visual test results.
  • The report shows the expected and actual screenshots side by side.
  • Updated the workflow to run the added script after the tests complete.
  • Edited visualTest.js to save the recorded screenshots to an actual-screenshots folder.
  • The generated HTML report is then uploaded as an artifact. (A minimal sketch of what such a generator script can look like is shown below.)
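
For readers unfamiliar with this kind of script, here is a minimal sketch of what an HTML report generator along these lines could look like. It is illustrative only: the folder names follow this thread, the file layout is simplified (the real __screenshots__ folder is nested per test), and the actual visual-report.js in this PR may differ.

    // visual-report-sketch.js (hypothetical name): pair reference images with
    // the images written during the test run and emit a simple HTML table.
    const fs = require('fs');
    const path = require('path');

    const EXPECTED_DIR = 'test/unit/visual/__screenshots__';    // committed references
    const ACTUAL_DIR = 'test/unit/visual/actual-screenshots';   // written by the test run

    const listPngs = dir =>
      fs.existsSync(dir) ? fs.readdirSync(dir).filter(f => f.endsWith('.png')) : [];

    const actualFiles = new Set(listPngs(ACTUAL_DIR));

    // One table row per reference image; a missing actual image marks the row failed.
    const rows = listPngs(EXPECTED_DIR).map(name => {
      const ok = actualFiles.has(name);
      return `<tr class="${ok ? 'pass' : 'fail'}">
        <td>${name}</td>
        <td><img width="100" src="${path.posix.join('__screenshots__', name)}"></td>
        <td>${ok ? `<img width="100" src="${path.posix.join('actual-screenshots', name)}">` : 'missing'}</td>
      </tr>`;
    }).join('\n');

    fs.writeFileSync(
      'test/unit/visual/visual-report.html',
      `<!DOCTYPE html><html><body><table>
      <tr><th>Test</th><th>Expected</th><th>Actual</th></tr>${rows}</table></body></html>`
    );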

Screenshots of the change:
This is a screenshot of the generated HTML page, showing the expected and actual visuals for each test.

[image: the generated HTML report]
@davepagurek @ksen0 Can you have a look at this?
You can go to the run's Summary page under Actions and download the report from the Artifacts section.

PR Checklist

@davepagurek
Contributor

Nice work @Vaivaswat2244! I'm a little less familiar with GitHub Actions myself. @ksen0 and @limzykenneth, does this look OK to you?

@Vaivaswat2244
Author

Hello @davepagurek @limzykenneth @ksen0

Just following up on this PR: I'm wondering if there's any additional feedback, or if we should move forward with the implementation as is. If you'd prefer to explore alternative approaches first, or if we're waiting for the pr05 project, I'd be happy to convert this to a draft PR in the meantime. Otherwise, I'm ready to address any feedback to get this merged.

@ksen0
Member

ksen0 commented Apr 24, 2025

Thanks for your patience @Vaivaswat2244! I will have time next week (when the GSoC reviews are done :) to review this, though if someone else reviews it before then, their feedback is welcome. Sorry for the wait.

@Vaivaswat2244
Author

Hey @ksen0, can this be reviewed now?

@ksen0
Member

ksen0 commented May 16, 2025

Hi @Vaivaswat2244, thanks for all the patience on this and for the reminder. It's really nicely done! I've added notes on 3 points below; let me know what you think, and whether you have any other ideas (or any questions).

The report looks great, especially the filter for "show only failed."

(1) There are false positives for FAIL: these tests passed but are missing images. Could you please investigate why this happens?

[image: passing tests marked FAIL due to missing images]

(2) Very, very minor nice-to-haves: for consistency, use lowercase step names ("upload visual test report"). If it's easy to add, include a link to the PR or the PR # in the report HTML. Feel free to ignore this feedback.

(3) Major, execution conditions: if there is a failing case, then npm test fails and the report is not generated, preventing it from being used when it is needed most. This is the desired outcome when tests fail:

[image: desired workflow outcome when tests fail]

In the example above I'm using:

name: Testing

on:
  push:
    branches:
      - main
      - dev-2.0
  pull_request:
    branches:
      - '*'

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v1
      - name: Use Node.js 20.x
        uses: actions/setup-node@v1
        with:
          node-version: 20.x
      - name: Get node modules
        run: npm ci
        env:
          CI: true
      - name: build and test
        id: test
        run: npm test
        # don't stop the job on test failure; the outcome is checked later
        continue-on-error: true
        env:
          CI: true
      - name: Generate Visual Test Report
        # always() runs this step even when earlier steps failed
        if: always()
        run: node visual-report.js
        env:
          CI: true
      - name: Upload Visual Test Report
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: visual-test-report
          path: test/unit/visual/visual-report.html
          retention-days: 14
      - name: report test coverage
        if: steps.test.outcome == 'success'
        run: bash <(curl -s https://codecov.io/bash) -f coverage/coverage-final.json
        env:
          CI: true
      - name: fail job if tests failed
        # re-fail the job so the run is still marked red after the report uploads
        if: steps.test.outcome != 'success'
        run: exit 1

I was able to test this by adding an artificial problem into the test suite:

    visualTest("with the default font", function (p5, screenshot) {
      p5.createCanvas(50, 50);
      p5.textSize(20);
      p5.textAlign(p5.LEFT, p5.TOP);
      p5.text("wrong text", 0, 0); // should be 'text', so this should definitely fail
      screenshot();
    });

This failure does show up in the report, but without actual image:

[image: the failing test in the report, missing its actual image]

Looks like writeImageFile(actualFilename, toBase64(actual[i])); only runs on success, so in addition to updating ci-test.yml, please check the logic of visualTest.js. The goal is for the report to help users see the visual test failures. (A sketch of the kind of restructuring this implies is below.)
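
For illustration, the restructuring could look roughly like the following. This is a sketch, not the PR's actual code: the variable names are taken from the diff hunks later in this thread, and checkMatch's return shape is assumed.

    for (let i = 0; i < actual.length; i++) {
      const flatName = name.replace(/\//g, '-');
      const actualFilename =
        `../actual-screenshots/${flatName}-${i.toString().padStart(3, '0')}.png`;

      // Write the actual screenshot unconditionally, so the report can show
      // it whether the comparison passes or fails.
      writeImageFile(actualFilename, toBase64(actual[i]));

      if (expected[i]) {
        const result = await checkMatch(actual[i], expected[i], myp5);
        if (!result.ok) { // assumed return shape { ok, diff }
          throw new Error(`Screenshot ${i} does not match the expected image`);
        }
      } else {
        // No reference image yet: record the actual as the new expected.
        writeImageFile(expectedFilenames[i], toBase64(actual[i]));
      }
    }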

(4) [Edited to add:] Additional improvements, which could be a separate PR or included in this one: the report could include a diff image, mirroring what visualTest.js provides:

[image: diff image as produced by visualTest.js]
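
As an aside, one common way to produce such a diff image in Node is the pixelmatch library together with pngjs. Whether this PR's visualTest.js computes its diff the same way is an assumption, so treat this as an illustrative sketch only.

    // Hypothetical helper: write a highlighted diff PNG for the report's third column.
    const pixelmatch = require('pixelmatch');
    const { PNG } = require('pngjs');
    const fs = require('fs');

    function writeDiffImage(expectedPath, actualPath, diffPath) {
      const expected = PNG.sync.read(fs.readFileSync(expectedPath));
      const actual = PNG.sync.read(fs.readFileSync(actualPath));
      const { width, height } = expected; // assumes both images are the same size
      const diff = new PNG({ width, height });

      // Fills diff.data with highlighted differences and returns the
      // number of mismatched pixels.
      const mismatched = pixelmatch(
        expected.data, actual.data, diff.data, width, height,
        { threshold: 0.1 }
      );

      fs.writeFileSync(diffPath, PNG.sync.write(diff));
      return mismatched;
    }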

If problem (3) is addressed, happy to merge!

@Vaivaswat2244
Author

Got it! I reviewed the code, and yes, currently the report is not generated when tests fail. That was a logical misunderstanding on my side, which I'll fix. Regarding the false positives, I'm not sure yet; I'll test the new logic, and that will most likely fix points (1) and (3). I'll also update the script to add the diff image. Thanks for the detailed review on the PR!

@Vaivaswat2244
Author

Vaivaswat2244 commented May 20, 2025

Hey everyone,
I've tried to fix the issues mentioned above. I've checked the logic of visualTest.js, and it now writes the actual screenshots regardless of the test outcome.
I've also updated the ci-test.yml file as @ksen0 recommended.
However, I'm stuck on the false-FAIL issue: the "Drawing triangles" test passes everywhere, yet its actual screenshot is still not written to the actual-screenshots folder, even though the changed logic works for the intentionally failing test. Any help would be appreciated @ksen0 @davepagurek. (A quick check along the lines of the snippet below is one way I've been narrowing this down.)
I'm continuing to work on that. In the meantime, I've committed fixes for (3) and (4) for review.
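
For anyone following along, a small local check like this (hypothetical, not part of the PR) can confirm whether the screenshot is never written at all or written under an unexpected name. The paths follow this thread; adjust as needed.

    // After a local test run, list what landed in the output folder and
    // grep for the test in question.
    const fs = require('fs');
    const path = require('path');

    const dir = path.join('test', 'unit', 'visual', 'actual-screenshots');
    const written = fs.existsSync(dir) ? fs.readdirSync(dir) : [];

    console.log(`${written.length} actual screenshots written`);
    console.log(written.filter(f => f.toLowerCase().includes('triangles')));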

Member

@ksen0 left a comment


Added a few thoughts, thanks for working on it!

__screenshots__/
actual-screenshots/
Member


Minor, didn't notice it before: for consistency with the existing __screenshots__, could you rename actual-screenshots to something like __screenshots_actual, please?

Author


Well, the __screenshots__ here are not from the visual tests: they are not in the .gitignore file and are committed to the repo. Though I will change the name of the folder to screenshots_actual if you wish.

Member


No, you're right, it's not comparable. No worries, please disregard.

@@ -452,6 +458,7 @@ export function visualTest(
          }
        } else {
          writeImageFile(expectedFilenames[i], toBase64(actual[i]));
          writeImageFile(actualFilename, toBase64(actual[i]));
Member


Also minor: this looks like it might be an accidental duplicate since it's also on line 449

Author


This was written because earlier the call appeared in both the if and else branches. Now I've moved writeImageFile(actualFilename, toBase64(actual[i])); outside the conditions.

@@ -441,9 +441,15 @@ export function visualTest(
        : [];

      for (let i = 0; i < actual.length; i++) {
        const flatName = name.replace(/\//g, '-');
        const actualFilename = `../actual-screenshots/${flatName}-${i.toString().padStart(3, '0')}.png`;
        if (expected[i]) {
          const result = await checkMatch(actual[i], expected[i], myp5);
Member


Just an idea: could the missing actual screenshots be explained by some surprising behavior here? I.e., if you move writeImageFile(actualFilename, toBase64(actual[i])); to above line 446, does the issue persist?

Author


You mean moving it out of the if condition, right?
If so, I tried that as well. Still the same issue.
