
[components] Scrapeless - fix actions #17377


Conversation

joy-chanboop
Contributor

@joy-chanboop joy-chanboop commented Jul 1, 2025

WHY

  • Issue:
    The run method in universal-scraping-api.mjs was declared async and destructured rest parameters from this, but the inputProps rest object came back empty.
    As a result, accessing inputProps.url (and similar fields) returned undefined.
  • Solution:
    Added an explicit await so the resolved props are available before the run logic proceeds (see the sketch below).
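
A minimal sketch of the change, assuming the standard Pipedream action shape (the exact file contents may differ):

// Before: inputProps could come back empty
// const { scrapeless, apiServer, ...inputProps } = this;

// After: awaiting `this` lets any asynchronous initialization settle
// before the dynamically generated props are read
async run({ $ }) {
  const {
    scrapeless, apiServer, ...inputProps
  } = await this;
  // inputProps.url and the other form values are now resolved here
},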

Summary by CodeRabbit

  • Chores
    • Updated the versions of the Scrapeless Crawler, Scraping API, and Universal Scraping API actions.
    • Improved internal handling of asynchronous initialization for these actions.
    • Incremented the Scrapeless package version.


vercel bot commented Jul 1, 2025

The latest updates on your projects.

1 Skipped Deployment

Name: pipedream-docs-redirect-do-not-edit
Status: ⬜️ Ignored
Updated (UTC): Jul 1, 2025 3:18am


vercel bot commented Jul 1, 2025

@joy-chanboop is attempting to deploy a commit to the Pipedreamers Team on Vercel.

A member of the Team first needs to authorize it.

Contributor

coderabbitai bot commented Jul 1, 2025

Walkthrough

The changes update the version numbers of three action modules and modify their run methods to destructure properties from await this instead of this. This adjustment ensures that any asynchronous initialization of this completes before accessing its properties. Additionally, the package version for @pipedream/scrapeless was incremented from 0.2.0 to 0.2.1. No other logic or control flow was changed.

Changes

Files and change summary:

  • components/scrapeless/actions/crawler/crawler.mjs: version updated to 0.0.3; run method now destructures from await this
  • components/scrapeless/actions/scraping-api/scraping-api.mjs: version updated to 0.0.2; run method now destructures from await this
  • components/scrapeless/actions/universal-scraping-api/universal-scraping-api.mjs: version updated to 0.0.2; run method now destructures from await this
  • components/scrapeless/package.json: package version updated from 0.2.0 to 0.2.1

Poem

Three little actions, versions anew,
Awaiting themselves, as rabbits now do.
They hop through the code, with patience and care,
Ensuring their props are ready to share.
With a twitch of the nose and a flick of the ear,
Asynchronous bunnies bring updates here!
🐇✨

Warning

There were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it's a critical failure.

🔧 ESLint

If the error stems from missing dependencies, add them to the package.json file. For unrecoverable errors (e.g., due to private dependencies), disable the tool in the CodeRabbit configuration.

ESLint 8.57.1 failed with the same error on all three action files:

  • components/scrapeless/actions/scraping-api/scraping-api.mjs
  • components/scrapeless/actions/crawler/crawler.mjs
  • components/scrapeless/actions/universal-scraping-api/universal-scraping-api.mjs

Error [ERR_MODULE_NOT_FOUND]: Cannot find package 'jsonc-eslint-parser' imported from /eslint.config.mjs
    at Object.getPackageJSONURL (node:internal/modules/package_json_reader:255:9)
    at packageResolve (node:internal/modules/esm/resolve:767:81)
    at moduleResolve (node:internal/modules/esm/resolve:853:18)
    at defaultResolve (node:internal/modules/esm/resolve:983:11)
    at ModuleLoader.defaultResolve (node:internal/modules/esm/loader:801:12)
    at #cachedDefaultResolve (node:internal/modules/esm/loader:725:25)
    at ModuleLoader.resolve (node:internal/modules/esm/loader:708:38)
    at ModuleLoader.getModuleJobForImport (node:internal/modules/esm/loader:309:38)
    at #link (node:internal/modules/esm/module_job:202:49)


📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3ca44b5 and 3544fe8.

⛔ Files ignored due to path filters (1)
  • pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (4)
  • components/scrapeless/actions/crawler/crawler.mjs (2 hunks)
  • components/scrapeless/actions/scraping-api/scraping-api.mjs (2 hunks)
  • components/scrapeless/actions/universal-scraping-api/universal-scraping-api.mjs (2 hunks)
  • components/scrapeless/package.json (1 hunks)
✅ Files skipped from review due to trivial changes (1)
  • components/scrapeless/package.json
🚧 Files skipped from review as they are similar to previous changes (3)
  • components/scrapeless/actions/scraping-api/scraping-api.mjs
  • components/scrapeless/actions/crawler/crawler.mjs
  • components/scrapeless/actions/universal-scraping-api/universal-scraping-api.mjs
⏰ Context from checks skipped due to timeout of 90000ms (4)
  • GitHub Check: Publish TypeScript components
  • GitHub Check: Lint Code Base
  • GitHub Check: Verify TypeScript components
  • GitHub Check: pnpm publish

@adolfo-pd adolfo-pd added the User submitted label Jul 1, 2025
@pipedream-component-development
Collaborator

Thank you so much for submitting this! We've added it to our backlog to review, and our team has been notified.

@pipedream-component-development
Collaborator

Thanks for submitting this PR! When we review PRs, we follow the Pipedream component guidelines. If you're not familiar, here's a quick checklist:

@joy-chanboop joy-chanboop changed the title fix(scrapeless): fix actions [components] Scrapeless - fix actions Jul 1, 2025
@joy-chanboop joy-chanboop force-pushed the hotfix/scrapeless-actions branch from 3ca44b5 to 3544fe8 on July 1, 2025 03:18
@joy-chanboop
Contributor Author

Hi @jcortes,

I noticed that in the previously deployed Scrapeless service, the scraping-api action couldn’t retrieve the expected props — specifically, inputProps was empty. I’m not entirely sure why using async together with additionalProps caused the run function not to receive the current props values.

By explicitly adding an await, the parameters are now correctly resolved, and inputProps contains the form data as expected.

Could you please help review this fix? Let me know if you have insights on why the previous combination caused inputProps to be empty.

Thanks a lot for your time!

Collaborator

@jcortes jcortes left a comment


@joy-chanboop Can you please try this version and see if it works on your side?

Collaborator


Hi @joy-chanboop I've just tried this modification and it worked just fine, although I ran out of credits with the account that Leo shared: "[Scrapeless]: insufficient balance, please recharge first".

import scrapeless from "../../scrapeless.app.mjs";

export default {
  key: "scrapeless-crawler",
  name: "Crawler",
  description: "Crawl any website at scale and say goodbye to blocks. [See the documentation](https://apidocs.scrapeless.com/api-17509010).",
  version: "0.0.9",
  type: "action",
  props: {
    scrapeless,
    apiServer: {
      type: "string",
      label: "Please select a API server",
      description: "Please select a API server to use",
      default: "crawl",
      options: [
        {
          label: "Crawl",
          value: "crawl",
        },
        {
          label: "Scrape",
          value: "scrape",
        },
      ],
      reloadProps: true,
    },
  },
  additionalProps() {
    const { apiServer } = this;

    const props = {};

    if (apiServer === "crawl" || apiServer === "scrape") {
      props.url = {
        type: "string",
        label: "URL to Crawl",
        description: "If you want to crawl in batches, please refer to the SDK of the document",
      };
    }

    if (apiServer === "crawl") {
      props.limitCrawlPages = {
        type: "integer",
        label: "Number Of Subpages",
        default: 5,
        description: "Max number of results to return",
      };
    }

    return props;
  },
  async run({ $ }) {
    const {
      scrapeless,
      apiServer,
      url,
      limitCrawlPages,
    } = this;

    console.log("url", url);
    console.log("limitCrawlPages", limitCrawlPages);
    console.log("apiServer", apiServer);

    const browserOptions = {
      "proxy_country": "ANY",
      "session_name": "Crawl",
      "session_recording": true,
      "session_ttl": 900,
    };

    let response;

    if (apiServer === "crawl") {
      response =
        await scrapeless._scrapelessClient().scrapingCrawl.crawl.crawlUrl(url, {
          limit: limitCrawlPages,
          browserOptions,
        });
    }

    if (apiServer === "scrape") {
      response =
        await scrapeless._scrapelessClient().scrapingCrawl.scrape.scrapeUrl(url, {
          browserOptions,
        });
    }

    if (response?.status === "completed" && response?.data) {
      $.export("$summary", `Successfully retrieved crawling results for ${url}`);
      return response;
    } else {
      throw new Error(response?.error || "Failed to retrieve crawling results");
    }
  },
};

Contributor Author

@joy-chanboop joy-chanboop Jul 2, 2025


(Quoting @jcortes's comment and crawler.mjs snippet above.)

Hi @jcortes ,

First, regarding the error you encountered — it’s actually due to your Scrapeless API KEY having no remaining balance. Could you please provide your email address? We’ll send you a dedicated test API KEY so you can continue testing without interruptions.

Also, I’m not sure if this was influenced by running in Pipedream’s production environment, but I found that previously, when executing the scraping-api action, inputProps didn’t contain the form field values returned by the additionalProps function. After reviewing the code, I realized that by adding an explicit await, I was able to correctly retrieve the props values.

Let me know if you have any thoughts on this, or if there’s more you’d like me to check.

Thanks a lot for your time!

You can use the following online environment code for testing.

import scrapeless from "../../scrapeless.app.mjs";
import { log } from "../../common/utils.mjs";
export default {
  key: "scrapeless-scraping-api",
  name: "Scraping API",
  description: "Endpoints for fresh, structured data from 100+ popular sites. [See the documentation](https://apidocs.scrapeless.com/api-12919045).",
  version: "0.0.1",
  type: "action",
  props: {
    scrapeless,
    apiServer: {
      type: "string",
      label: "Please select a API server",
      default: "googleSearch",
      description: "Please select a API server to use",
      options: [
        {
          label: "Google Search",
          value: "googleSearch",
        },
      ],
      reloadProps: true,
    },
  },
  async run({ $ }) {
    const {
      scrapeless, apiServer, ...inputProps
    } = this;

    const MAX_RETRIES = 3;
    // 10 seconds
    const DELAY = 1000 * 10;
    const { run } = $.context;

    let submitData;
    let job;

    // pre check if the job is already in the context
    if (run?.context?.job) {
      job = run.context.job;
    }

    if (apiServer === "googleSearch") {
      submitData = {
        actor: "scraper.google.search",
        input: {
          q: inputProps.q,
          hl: inputProps.hl,
          gl: inputProps.gl,
        },
      };
    }

    if (!submitData) {
      throw new Error("No actor found");
    }
    // 1. Create a new scraping job
    if (!job) {
      job = await scrapeless._scrapelessClient().deepserp.createTask({
        actor: submitData.actor,
        input: submitData.input,
      });

      if (job.status === 200) {
        $.export("$summary", "Successfully retrieved scraping results");
        return job.data;
      }

      log("task in progress");
    }

    // 2. Wait for the job to complete
    if (run.runs === 1) {
      $.flow.rerun(DELAY, {
        job,
      }, MAX_RETRIES);
    } else if (run.runs > MAX_RETRIES ) {
      throw new Error("Max retries reached");
    } else if (job && job?.data?.taskId) {
      const result = await scrapeless._scrapelessClient().deepserp.getTaskResult(job.data.taskId);
      if (result.status === 200) {
        $.export("$summary", "Successfully retrieved scraping results");
        return result.data;
      } else {
        $.flow.rerun(DELAY, {
          job,
        }, MAX_RETRIES);
      }
    } else {
      throw new Error("No job found");
    }

  },
  additionalProps() {
    const { apiServer } = this;

    const props = {};

    if (apiServer === "googleSearch") {
      props.q = {
        type: "string",
        label: "Search Query",
        description: "Parameter defines the query you want to search. You can use anything that you would use in a regular Google search. e.g. inurl:, site:, intitle:. We also support advanced search query parameters such as as_dt and as_eq.",
        default: "coffee",
      };

      props.hl = {
        type: "string",
        label: "Language",
        description: "Parameter defines the language to use for the Google search. It's a two-letter language code. (e.g., en for English, es for Spanish, or fr for French).",
        default: "en",
      };

      props.gl = {
        type: "string",
        label: "Country",
        description: "Parameter defines the country to use for the Google search. It's a two-letter country code. (e.g., us for the United States, uk for United Kingdom, or fr for France).",
        default: "us",
      };
    }

    return props;
  },
};

Collaborator


Hi @joy-chanboop this is my email [email protected]. The way you are deferring the values with await looks odd to me, because the additionalProps method doesn't have an async signature, so the await shouldn't be needed in this case. However, in my test I can see the logs with the props values whenever I run the action, so I'm wondering whether you are able to see them too.
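
For context, a minimal sketch of plain JavaScript semantics (not Pipedream internals): awaiting a non-thenable value resolves to that same value, so await this should only change behavior if this, or the object backing it, is thenable or populated asynchronously.

// Minimal sketch, plain Node.js ESM semantics only; not Pipedream internals.
// Awaiting a non-thenable object resolves to the same object unchanged:
const plain = { url: "https://example.com" };
console.log((await plain) === plain); // true

// A difference would only appear if the value were thenable, e.g. a
// hypothetical object whose properties resolve asynchronously:
const thenable = {
  then(resolve) {
    setTimeout(() => resolve({ url: "https://example.com" }), 10);
  },
};
console.log(await thenable); // { url: "https://example.com" }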

Contributor Author


Hi @jcortes ,

I’ve just sent the testing API KEY to your email, please check your inbox. Let me know if you didn’t receive it.

Additionally, I found that only the scraping-api.mjs action requires adding await to properly retrieve the props values, which is quite odd, because other actions like crawler.mjs work fine without await and still correctly receive the props. This inconsistency is puzzling to me as well.

Could you help by running a quick test on the current scraping-api.mjs action in the master branch of the pipedream repo? In my local testing, without adding await, it consistently fails to retrieve the props, which led me to apply this fix.

Thanks a lot for taking a look!

@jcortes jcortes moved this from In Review to Changes Required in Component (Source and Action) Backlog Jul 1, 2025
@joy-chanboop joy-chanboop requested a review from jcortes July 2, 2025 01:20
Collaborator

@jcortes jcortes left a comment


@joy-chanboop joy-chanboop requested a review from jcortes July 3, 2025 01:14
Collaborator

@jcortes jcortes left a comment


Hi @joy-chanboop thanks for the API key. As I was playing around with the SDK, I ran into two different issues. The first is that when the SDK runs offline it tries to create a storage directory on the host, which in this case is the Pipedream infrastructure, at a default path it doesn't have permission to write to, so I had to set this env var:

process.env.SCRAPELESS_IS_ONLINE = "true";

On the other hand, I also noticed some warnings related to logging, so I had to set the other env var I found in the SDK code:

process.env.SCRAPELESS_LOG_ROOT_DIR = "/tmp";

I had to put both of these env vars inside the _scrapelessClient function in scrapeless.app.mjs and make it async:

    async _scrapelessClient() {
      process.env.SCRAPELESS_IS_ONLINE = "true";
      process.env.SCRAPELESS_LOG_ROOT_DIR = "/tmp";

      const { Scrapeless } = await import("@scrapeless-ai/sdk");

      const { api_key } = this.$auth;
      if (!api_key) {
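        // ConfigurationError is assumed to be imported from "@pipedream/platform" at the top of scrapeless.app.mjs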
        throw new ConfigurationError("API key is required");
      }

      return new Scrapeless({
        apiKey: api_key,
        baseUrl: this._baseUrl(),
      });
    },

So the import worked just fine. I also refactored the crawler.mjs component a bit so you can test it on your side:

import scrapeless from "../../scrapeless.app.mjs";

export default {
  key: "scrapeless-crawler",
  name: "Crawler",
  description: "Crawl any website at scale and say goodbye to blocks. [See the documentation](https://apidocs.scrapeless.com/api-17509010).",
  version: "0.0.1",
  type: "action",
  props: {
    scrapeless,
    apiServer: {
      type: "string",
      label: "Please select a API server",
      description: "Please select a API server to use",
      default: "crawl",
      options: [
        {
          label: "Crawl",
          value: "crawl",
        },
        {
          label: "Scrape",
          value: "scrape",
        },
      ],
      reloadProps: true,
    },
  },
  additionalProps() {
    const props = {
      url: {
        type: "string",
        label: "URL to Crawl",
        description: "If you want to crawl in batches, please refer to the SDK of the document",
      },
    };

    if (this.apiServer === "crawl") {
      return {
        ...props,
        limitCrawlPages: {
          type: "integer",
          label: "Number Of Subpages",
          default: 5,
          description: "Max number of results to return",
        },
      };
    }

    return props;
  },
  async run({ $ }) {
    const {
      scrapeless,
      apiServer,
      ...inputProps
    } = this;

    const browserOptions = {
      "proxy_country": "ANY",
      "session_name": "Crawl",
      "session_recording": true,
      "session_ttl": 900,
    };

    let response;

    const client = await scrapeless._scrapelessClient();

    if (apiServer === "crawl") {
      response =
        await client.scrapingCrawl.crawl.crawlUrl(inputProps.url, {
          limit: inputProps.limitCrawlPages,
          browserOptions,
        });
    }

    if (apiServer === "scrape") {
      response =
        await client.scrapingCrawl.scrape.scrapeUrl(inputProps.url, {
          browserOptions,
        });
    }

    if (response?.status === "completed" && response?.data) {
      $.export("$summary", `Successfully retrieved crawling results for \`${inputProps.url}\``);
      return response.data;
    } else {
      throw new Error(response?.error || "Failed to retrieve crawling results");
    }
  },
};

So let me know if that works!

@joy-chanboop
Contributor Author

Hi @jcortes,
I’ve incorporated your suggestions and also made some additional updates on my side.
To ensure a clean submission process, I’ll close this PR and open a new one that includes these action adjustments.
Please help review the new PR once it’s ready. Thanks a lot for your support!
#17493

@github-project-automation github-project-automation bot moved this from Changes Required to Done in Component (Source and Action) Backlog Jul 7, 2025