✨ feat(opensearch): opensearch description for the search page #787

Merged
neon-mmd merged 9 commits into rolling from FEAT/622_opensearch-description-for-the-search-page
Apr 24, 2026

Conversation

@neon-mmd
Owner

@neon-mmd neon-mmd commented Apr 23, 2026

What does this PR do?

This PR provides the opensearch description file for the search engine.

Why is this change important?

This change is essential as it improves the user experience by enabling web browsers to automatically detect the search engine as a valid search provider.

Author's checklist

  • Provide the opensearch description file for the search engine.
  • Bump the app version to v1.29.0

Related issues

Closes #622

Summary by CodeRabbit

  • New Features

    • Added OpenSearch support so browsers can discover and add the site as a search provider; the site now advertises and serves an OpenSearch description.
  • Chores

    • Package version bumped from 1.28.0 to 1.29.0.

@coderabbitai
Contributor

coderabbitai Bot commented Apr 23, 2026

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 31cf1317-8617-4f24-9901-60d6bb240135

📥 Commits

Reviewing files that changed from the base of the PR and between cec665f and b02e1e8.

📒 Files selected for processing (1)
  • src/routes/mod.rs
🚧 Files skipped from review as they are similar to previous changes (1)
  • src/routes/mod.rs

📝 Walkthrough

Walkthrough

Adds an OpenSearch description XML served at /websurfx.xml, registers a new Actix route to serve it, inserts a <link rel="search"> in the page header, and bumps the package version to 1.29.0.

Changes

  • Version Management (Cargo.toml): Bumped package version from 1.28.0 to 1.29.0.
  • OpenSearch Descriptor (public/websurfx.xml): New OpenSearch description XML added (engine name, description, favicon, example query, template /search?q={searchTerms}).
  • Routing (src/routes/mod.rs, src/lib.rs): Added an opensearch_description handler and registered it in the Actix app to serve the XML with application/opensearchdescription+xml.
  • Header Integration (src/templates/partials/header.rs): Added <link rel="search"> metadata pointing to /websurfx.xml for OpenSearch discovery.
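
For context, a minimal OpenSearch description of the kind added in public/websurfx.xml might look like the following sketch. The element values are illustrative assumptions; only the /search?q={searchTerms} template, the favicon, and the MIME type are confirmed by the summary above, and the absolute URLs and 32x32 Image dimensions reflect the reviewer's later suggestions, with example.org standing in for the deployed host:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<OpenSearchDescription xmlns="http://a9.com/-/spec/opensearch/1.1/">
  <!-- Name and description shown by the browser; values here are placeholders -->
  <ShortName>Websurfx</ShortName>
  <Description>A meta search engine</Description>
  <InputEncoding>UTF-8</InputEncoding>
  <!-- Reviewers noted the icon and template URLs should be absolute -->
  <Image width="32" height="32" type="image/x-icon">https://example.org/favicon.ico</Image>
  <Url type="text/html" template="https://example.org/search?q={searchTerms}"/>
</OpenSearchDescription>
```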

Sequence Diagram(s)

sequenceDiagram
    participant Browser as Browser (User)
    participant Server as Actix Server
    participant FS as Theme / public files

    Browser->>Server: GET / (page)
    Server->>FS: Render header (includes link rel="search" href="/websurfx.xml")
    Server->>Browser: 200 OK (HTML with search link)

    Browser->>Server: GET /websurfx.xml (browser discovery or user action)
    Server->>FS: Read `public/websurfx.xml`
    Server-->>Browser: 200 OK (Content-Type: application/opensearchdescription+xml, XML body)
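
The discovery step in the diagram above relies on the header link added in src/templates/partials/header.rs. A link of this kind is conventionally written as follows; the href and MIME type come from the change summary, while the title attribute is an assumption:

```html
<link rel="search"
      type="application/opensearchdescription+xml"
      title="Websurfx"
      href="/websurfx.xml">
```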

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~12 minutes

Possibly related PRs

Suggested reviewers

  • jfvillablanca
  • spencerjibz
  • alamin655
  • arctic-byte

Poem

🐰 A hop, a link, a tiny tune,
I hid a XML under the moon.
"Search me," I whisper in the head,
So browsers add me when they're fed —
Hopping results from dawn to noon.

🚥 Pre-merge checks | ✅ 5
✅ Passed checks (5 passed)
  • Description Check: ✅ Passed. Check skipped since CodeRabbit's high-level summary is enabled.
  • Title Check: ✅ Passed. The title accurately describes the main change: adding OpenSearch description support for the search page, which is the primary objective of this PR.
  • Linked Issues Check: ✅ Passed. All coding requirements from issue #622 are met: OpenSearch XML file added, endpoint created, link added to page headers for browser discovery.
  • Out of Scope Changes Check: ✅ Passed. All changes are directly related to implementing OpenSearch support; the version bump is a standard release practice accompanying feature additions.
  • Docstring Coverage: ✅ Passed. Docstring coverage is 100.00%, which is sufficient; the required threshold is 80.00%.


@neon-mmd neon-mmd moved this from Todo to Under Review in Task list for v2.0.0 release cycle Apr 23, 2026
Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (2)
src/routes/mod.rs (2)

60-64: Consider caching the file contents instead of reading from disk per request.

Like robots_data, this reads the file from disk on every request. For an endpoint that returns a static document that essentially never changes at runtime, consider reading once at startup (or lazily via OnceCell) and serving the cached String/Bytes. This is an optional refactor and applies equally to the existing robots_data handler.
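
A minimal sketch of the suggested refactor, using the standard library's OnceLock (the project may prefer tokio::sync::OnceCell for async initialization, as suggested later in the review); the path parameter here is a simplified, hypothetical stand-in for the file_path(FileType::Theme) lookup:

```rust
use std::fs;
use std::sync::OnceLock;

// Descriptor body cached for the lifetime of the process.
static OPENSEARCH_XML: OnceLock<String> = OnceLock::new();

/// Serves the OpenSearch descriptor from an in-memory cache, touching the
/// filesystem only on the first call. Subsequent calls return the same
/// cached `&'static str` regardless of the path argument.
fn opensearch_description_body(path: &str) -> &'static str {
    OPENSEARCH_XML.get_or_init(|| {
        // A real handler should propagate the I/O error rather than
        // falling back to an empty body as this sketch does.
        fs::read_to_string(path).unwrap_or_default()
    })
}
```

The route handler would then return the cached string in its response body instead of awaiting read_to_string on every request.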

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/routes/mod.rs` around lines 60 - 64, The handler currently reads the
static theme XML on every request using read_to_string(format!(
"{}/websurfx.xml", file_path(FileType::Theme).await?)).await? — change this to
cache the file contents (e.g., a module-level OnceCell<String> or Bytes) and
load it once (either at startup or lazily on first request) and then return the
cached value in the route handler (mirror the approach used by robots_data);
ensure the cache key is tied to FileType::Theme/read_to_string semantics and
that any I/O errors are handled when populating the OnceCell so the handler
simply serves the stored String/Bytes thereafter.

65-65: Avoid .unwrap() — crate forbids clippy::panic.

Cargo.toml declares panic = "forbid" under [lints.clippy]. While .unwrap() on a statically valid MIME string will not panic in practice, it is a latent panic site and inconsistent with the project's lint policy. Prefer building the ContentType with a compile-time-checked constructor, or propagate the error.

♻️ Proposed fix
-    let content_type = ContentType("application/opensearchdescription+xml".parse().unwrap());
-    Ok(HttpResponse::Ok()
-        .insert_header(content_type)
-        .body(page_content))
+    Ok(HttpResponse::Ok()
+        .insert_header((
+            actix_web::http::header::CONTENT_TYPE,
+            "application/opensearchdescription+xml",
+        ))
+        .body(page_content))
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/routes/mod.rs` at line 65, The code uses .parse().unwrap() to build a
ContentType (variable content_type), which violates the project's panic-forbid
rule; replace the parse+unwrap with a non-panicking constructor such as creating
a HeaderValue via
HeaderValue::from_static("application/opensearchdescription+xml") and then
constructing ContentType(HeaderValue::from_static(...)) or otherwise propagate
the parse error (e.g., use HeaderValue::from_str and handle the Result) so no
unwrap is used; update the usage of ContentType accordingly (references:
ContentType, HeaderValue, content_type).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@public/websurfx.xml`:
- Around line 8-10: The OpenSearch description uses relative paths for the Image
and Url template/self references which must be absolute; update the
opensearch_description handler to build absolute URLs (using the incoming Host
header or the configured public base URL) for the <Image> element value and for
both <Url template="..."> attributes (results and self) so they produce fully
qualified URLs, and add width="32" and height="32" attributes to the Image to
reflect the favicon size; locate and modify the code that renders these XML
nodes (the Image element, the Url elements, and the opensearch_description
response generator) to interpolate the computed base URL instead of hardcoded
relative paths.

In `@src/routes/mod.rs`:
- Line 55: The doc comment above the OpenSearch description route in
src/routes/mod.rs is copy/pasted from the robots.txt handler and incorrectly
describes the "route of robots.txt page"; update that comment to accurately
describe the OpenSearch description endpoint (the handler that returns the
OpenSearch description XML, not robots_data). Locate the doc comment immediately
above the OpenSearch route handler (look for the function/handler name like
opensearch_description or opensearch_description_route) and replace the
incorrect text with a short description that it serves the OpenSearch
description XML for the websurfx meta search engine.


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 266152b6-5ec1-4790-9665-e4ecd7de3482

📥 Commits

Reviewing files that changed from the base of the PR and between 37432b5 and cb8a3c4.

⛔ Files ignored due to path filters (1)
  • Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (5)
  • Cargo.toml
  • public/websurfx.xml
  • src/lib.rs
  • src/routes/mod.rs
  • src/templates/partials/header.rs

neon-mmd and others added 2 commits April 23, 2026 23:55
…ving route (#622)

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Contributor

@coderabbitai coderabbitai Bot left a comment


🧹 Nitpick comments (1)
src/routes/mod.rs (1)

55-71: LGTM!

Handler mirrors the existing robots_data pattern, the MIME application/opensearchdescription+xml matches the OpenSearch 1.1 spec, and the doc comment is now accurate. Route registration in src/lib.rs and the <link rel="search"> in the header partial complete the auto-discovery wiring.

One optional nit: each request re-reads websurfx.xml from disk. Since this file is effectively static per deployment, you could cache it on first read (e.g., via OnceCell/tokio::sync::OnceCell) to avoid repeated I/O — but this matches the style of robots_data, so feel free to defer for consistency.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/routes/mod.rs` around lines 55 - 71, The handler opensearch_description
currently reads "{}/websurfx.xml" from disk on every request; change it to load
and cache the file on first access (e.g., use OnceCell or tokio::sync::OnceCell
holding a String) so subsequent calls return the cached content instead of
re-reading the file; keep the existing function signature and response behavior
(ContentType "application/opensearchdescription+xml" and HttpResponse::Ok with
.body(page_content)) and initialize the cell on first await of
file_path(FileType::Theme) and read_to_string, storing the resulting String in
the OnceCell for future requests.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 96363c04-fd3e-4c89-8a71-6f41d69b158f

📥 Commits

Reviewing files that changed from the base of the PR and between 922389b and cec665f.

📒 Files selected for processing (1)
  • src/routes/mod.rs

@neon-mmd neon-mmd merged commit fb9d163 into rolling Apr 24, 2026
11 checks passed
@neon-mmd neon-mmd deleted the FEAT/622_opensearch-description-for-the-search-page branch April 24, 2026 09:59
@github-project-automation github-project-automation Bot moved this from Under Review to Done in Task list for v2.0.0 release cycle Apr 24, 2026

Development

Successfully merging this pull request may close these issues.

✨ OpenSearch description for the search page

1 participant