
HuntingAbuseAPI analyzer #2885


Open · wants to merge 3 commits into develop from gsoc_25_enhancements


Conversation

@spoiicy (Member) commented Jun 4, 2025

Closes #2778

Description

This PR adds an analyzer that checks, via the Hunting Abuse API, whether the provided observable is present in the false positive list.

Type of change

Please delete options that are not relevant.

  • Bug fix (non-breaking change which fixes an issue).
  • New feature (non-breaking change which adds functionality).
  • Breaking change (fix or feature that would cause existing functionality to not work as expected).

Checklist

  • I have read and understood the rules about how to Contribute to this project
  • The pull request is for the branch develop
  • A new plugin (analyzer, connector, visualizer, playbook, pivot or ingestor) was added or changed, in which case:
    • I strictly followed the documentation "How to create a Plugin"
    • Usage file was updated. A link to the PR to the docs repo has been added as a comment here.
    • Advanced-Usage was updated (in case the plugin provides additional optional configuration). A link to the PR to the docs repo has been added as a comment here.
    • I have dumped the configuration from Django Admin using the dumpplugin command and added it in the project as a data migration. ("How to share a plugin with the community")
    • If a File analyzer was added and it supports a mimetype which is not already supported, you added a sample of that type inside the archive test_files.zip and you added the default tests for that mimetype in test_classes.py.
    • If you created a new analyzer and it is free (does not require any API key), please add it in the FREE_TO_USE_ANALYZERS playbook by following this guide.
    • Check if it could make sense to add that analyzer/connector to other freely available playbooks.
    • I have provided the resulting raw JSON of a finished analysis and a screenshot of the results.
    • If the plugin interacts with an external service, I have created an attribute called precisely url that contains this information. This is required for Health Checks (HEAD HTTP requests).
    • If the plugin requires mocked testing, _monkeypatch() was used in its class to apply the necessary decorators.
    • I have added that raw JSON sample to the MockUpResponse of the _monkeypatch() method. This serves us to provide a valid sample for testing.
    • I have created the corresponding DataModel for the new analyzer following the documentation
  • I have inserted the copyright banner at the start of the file: # This file is a part of IntelOwl https://github.com/intelowlproject/IntelOwl # See the file 'LICENSE' for copying permission.
  • Please avoid adding new libraries as requirements whenever possible. Use new libraries only if strictly needed to solve the issue you are working on. In case of doubt, ask a maintainer for permission to use a specific library.
  • If external libraries/packages with restrictive licenses were added, they were added in the Legal Notice section.
  • Linters (Black, Flake8, Isort) gave 0 errors. If you have correctly installed pre-commit, it performs these checks and adjustments on your behalf.
  • I have added tests for the feature/bug I solved (see tests folder). All the tests (new and old ones) gave 0 errors.
  • If the GUI has been modified:
    • I have provided a screenshot of the result in the PR.
    • I have created new frontend tests for the new component or updated existing ones.
  • After you had submitted the PR, if DeepSource, Django Doctors or other third-party linters have triggered any alerts during the CI checks, I have solved those alerts.

Important Rules

  • If you fail to complete the Checklist properly, your PR won't be reviewed by the maintainers.
  • Every time you make changes to the PR and think the work is done, you should explicitly ask for a review using GitHub's reviewing system detailed here.

@spoiicy spoiicy changed the title hunting_abuse_api analyzer HuntingAbuseAPI analyzer Jun 4, 2025
@spoiicy spoiicy marked this pull request as ready for review June 4, 2025 14:22
@spoiicy (Member, Author) commented Jun 4, 2025

Hi @fgibertoni, hope you are doing well. The PR is ready to be reviewed.

@spoiicy spoiicy linked an issue Jun 5, 2025 that may be closed by this pull request
@spoiicy spoiicy requested review from fgibertoni and removed request for fgibertoni June 6, 2025 15:24
@spoiicy (Member, Author) commented Jun 6, 2025

FP found Response:
[screenshot]

FP not found Response:
[screenshot]

@fgibertoni (Contributor) left a comment:

Thanks for your work!
I think it would be useful to add a Data Model too.
You can check out the doc for guidelines on how to write one (evaluation, reliability and maybe additional details can be mapped) or just check other analyzers like GreyNoise Intel or Crowdsec.
Since it's a relatively new feature, feel free to ask for additional guidance if you're having trouble 😄
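As a rough illustration of that mapping idea (the field names and values below are assumptions for this sketch, not the actual IntelOwl DataModel API; the doc linked above and the GreyNoise/Crowdsec analyzers are the authoritative references), the report of this analyzer could be translated along these lines:

def map_report_to_data_model_fields(report: dict) -> dict:
    # Being on the abuse.ch false-positive list strongly suggests the observable is benign,
    # so the evaluation could be marked as trusted with a fairly high reliability.
    if report.get("fp_status") is True:
        return {
            "evaluation": "trusted",  # assumed evaluation value
            "reliability": 8,  # assumed reliability score
            "additional_info": report.get("details"),
        }
    # Absence from the FP list says nothing on its own, so keep a neutral result.
    return {"evaluation": "clean", "reliability": 3}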

logger = logging.getLogger(__name__)


class HuntingAbuseAPI(ObservableAnalyzer):
Review comment (Contributor):

For abuse.ch services we introduced a mixin called AbuseCHMixin that handles the required API key: you just need to add the authentication_header property to the other headers and add the mixin to the list of superclasses. You can find examples of this in the ThreatFox or URLHaus analyzers.
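A minimal sketch of that shape, assuming the import path of the mixin, the endpoint URL and the request payload (none of these are taken from this PR):

import requests

from api_app.analyzers_manager.classes import ObservableAnalyzer
from api_app.mixins import AbuseCHMixin  # assumed import path for the shared abuse.ch mixin


class HuntingAbuseAPI(AbuseCHMixin, ObservableAnalyzer):
    url: str = "https://hunting-api.abuse.ch/api/v1/"  # assumed endpoint, exposed via the url attribute for health checks

    def run(self):
        # authentication_header is provided by AbuseCHMixin and carries the API key
        headers = {"Accept": "application/json", **self.authentication_header}
        response = requests.post(self.url, json={"query": "get_fplist"}, headers=headers)
        response.raise_for_status()
        return response.json()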

logger.info(f"Fetching fp_status for {self.observable_name}")
if value_dict["entry_value"] == self.observable_name:
return {"fp_status": "true", "details": value_dict}
return {"fp_status": "False"}
Review comment (Contributor):

Suggested change:
- return {"fp_status": "False"}
+ return {"fp_status": False}

So it gets automatically converted into a JSON-compatible format.
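For illustration, the difference after serialization:

import json

print(json.dumps({"fp_status": "False"}))  # {"fp_status": "False"}  -> still a string
print(json.dumps({"fp_status": False}))    # {"fp_status": false}    -> a real JSON boolean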

        for _key, value_dict in fp_list.items():
            logger.info(f"Fetching fp_status for {self.observable_name}")
            if value_dict["entry_value"] == self.observable_name:
                return {"fp_status": "true", "details": value_dict}
Review comment (Contributor):

Suggested change:
- return {"fp_status": "true", "details": value_dict}
+ return {"fp_status": True, "details": value_dict}

So it gets automatically converted into a JSON-compatible format.

@AnshSinghal (Contributor) commented:

Hi @spoiicy I had previously explored this issue a bit and just wanted to share a quick suggestion. Would it make sense to use a TTL-based cache for the false positives list? That way, we can store it once and refresh it periodically, instead of fetching the full list on every search. This could help with performance and reduce API load.
Totally appreciate your work on this — feel free to ignore if you’ve already considered it or have a better approach!
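A rough sketch of that idea using Django's cache framework (the cache key, the one-day TTL and the fetch callable are illustrative choices, not part of this PR):

from django.core.cache import cache

FP_LIST_CACHE_KEY = "abusech_hunting_fp_list"  # hypothetical cache key
FP_LIST_TTL = 24 * 60 * 60  # refresh roughly once a day


def get_fp_list(fetch_fp_list):
    """Return the cached false-positive list, calling the API only when the entry has expired."""
    fp_list = cache.get(FP_LIST_CACHE_KEY)
    if fp_list is None:
        fp_list = fetch_fp_list()  # full download from the Hunting Abuse API
        cache.set(FP_LIST_CACHE_KEY, fp_list, timeout=FP_LIST_TTL)
    return fp_list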

@spoiicy (Member, Author) commented Jun 9, 2025

@fgibertoni As per @AnshSinghal's suggestion, it makes sense to store the results for some amount of time and refresh them after, say, 1 day, so that the server doesn't have to query the API on every call.

I am also aware that we previously used a file-based approach for such tasks, and that @cristinaascari is now working on a solution that moves this to a DB-based approach and refactors the old code.

So how should I proceed with this? Let me know what you think.

@fgibertoni (Contributor) commented:

Yes, I agree with you and @AnshSinghal. At the moment @cristinaascari is blocked by other work on that task, so it may take longer than expected.
I think we can proceed with the "standard" approach and then slightly refactor the code to adapt to her changes.

@spoiicy spoiicy force-pushed the gsoc_25_enhancements branch from 7cebbd4 to 08a3184 on June 19, 2025 at 11:10
Development

Successfully merging this pull request may close these issues.

[Analyzer] Hunting Abuse.ch
3 participants