Conversation
No actionable comments were generated in the recent review. 🎉
ℹ️ Recent review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID:
📒 Files selected for processing (1)
🚧 Files skipped from review as they are similar to previous changes (1)
📝 Walkthrough
Adds an OpenSearch description XML served at /websurfx.xml.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Browser as Browser (User)
    participant Server as Actix Server
    participant FS as Theme / public files
    Browser->>Server: GET / (page)
    Server->>FS: Render header (includes link rel="search" href="/websurfx.xml")
    Server->>Browser: 200 OK (HTML with search link)
    Browser->>Server: GET /websurfx.xml (browser discovery or user action)
    Server->>FS: Read `public/websurfx.xml`
    Server-->>Browser: 200 OK (Content-Type: application/opensearchdescription+xml, XML body)
```
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~12 minutes
🚥 Pre-merge checks: ✅ Passed checks (5 passed)
Actionable comments posted: 2
🧹 Nitpick comments (2)
src/routes/mod.rs (2)
60-64: Consider caching the file contents instead of reading from disk per request.
Like `robots_data`, this reads the file from disk on every request. For an endpoint that returns a static document that essentially never changes at runtime, consider reading once at startup (or lazily via `OnceCell`) and serving the cached `String`/`Bytes`. This is an optional refactor and applies equally to the existing `robots_data` handler.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/routes/mod.rs` around lines 60 - 64, The handler currently reads the static theme XML on every request using read_to_string(format!( "{}/websurfx.xml", file_path(FileType::Theme).await?)).await? — change this to cache the file contents (e.g., a module-level OnceCell<String> or Bytes) and load it once (either at startup or lazily on first request) and then return the cached value in the route handler (mirror the approach used by robots_data); ensure the cache key is tied to FileType::Theme/read_to_string semantics and that any I/O errors are handled when populating the OnceCell so the handler simply serves the stored String/Bytes thereafter.
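For reference, a minimal sketch of the lazy-caching idea using `tokio::sync::OnceCell`. It hard-codes the `public/websurfx.xml` path and a simplified `io::Result` signature instead of the project's `file_path(FileType::Theme)` helper and error type, so the names here are illustrative rather than the PR's actual code:

```rust
use actix_web::{http::header, HttpResponse};
use tokio::sync::OnceCell;

// Illustrative cache cell; the real handler resolves the path via
// file_path(FileType::Theme) and uses the crate's error type.
static OPENSEARCH_XML: OnceCell<String> = OnceCell::const_new();

async fn opensearch_description() -> std::io::Result<HttpResponse> {
    // Read the static description once; later requests reuse the cached String.
    let body = OPENSEARCH_XML
        .get_or_try_init(|| async { tokio::fs::read_to_string("public/websurfx.xml").await })
        .await?;

    Ok(HttpResponse::Ok()
        .insert_header((header::CONTENT_TYPE, "application/opensearchdescription+xml"))
        .body(body.clone()))
}
```

The trade-off, as the comment notes, is consistency with `robots_data`, which also re-reads on every request.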
65-65: Avoid `.unwrap()`; the crate forbids `clippy::panic`.
`Cargo.toml` declares `panic = "forbid"` under `[lints.clippy]`. While `.unwrap()` on a statically valid MIME string will not panic in practice, it is a latent panic site and inconsistent with the project's lint policy. Prefer building the `ContentType` with a compile-time-checked constructor, or propagate the error.
♻️ Proposed fix

```diff
-    let content_type = ContentType("application/opensearchdescription+xml".parse().unwrap());
-    Ok(HttpResponse::Ok()
-        .insert_header(content_type)
-        .body(page_content))
+    Ok(HttpResponse::Ok()
+        .insert_header((
+            actix_web::http::header::CONTENT_TYPE,
+            "application/opensearchdescription+xml",
+        ))
+        .body(page_content))
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/routes/mod.rs` at line 65, The code uses .parse().unwrap() to build a ContentType (variable content_type), which violates the project's panic-forbid rule; replace the parse+unwrap with a non-panicking constructor such as creating a HeaderValue via HeaderValue::from_static("application/opensearchdescription+xml") and then constructing ContentType(HeaderValue::from_static(...)) or otherwise propagate the parse error (e.g., use HeaderValue::from_str and handle the Result) so no unwrap is used; update the usage of ContentType accordingly (references: ContentType, HeaderValue, content_type).
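A hedged sketch of the error-propagation variant mentioned above: it keeps `ContentType` but maps a MIME parse failure to a 500 response instead of unwrapping. The function name and the use of `ErrorInternalServerError` are illustrative, not the PR's code:

```rust
use actix_web::{error::ErrorInternalServerError, http::header::ContentType, HttpResponse};

// Parse the MIME type and propagate a failure as an actix_web::Error,
// leaving no .unwrap() / latent panic site in the handler.
fn opensearch_ok(page_content: String) -> actix_web::Result<HttpResponse> {
    let mime = "application/opensearchdescription+xml"
        .parse()
        .map_err(ErrorInternalServerError)?;
    Ok(HttpResponse::Ok()
        .insert_header(ContentType(mime))
        .body(page_content))
}
```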
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@public/websurfx.xml`:
- Around line 8-10: The OpenSearch description uses relative paths for the Image
and Url template/self references which must be absolute; update the
opensearch_description handler to build absolute URLs (using the incoming Host
header or the configured public base URL) for the <Image> element value and for
both <Url template="..."> attributes (results and self) so they produce fully
qualified URLs, and add width="32" and height="32" attributes to the Image to
reflect the favicon size; locate and modify the code that renders these XML
nodes (the Image element, the Url elements, and the opensearch_description
response generator) to interpolate the computed base URL instead of hardcoded
relative paths.
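A sketch of the shape this could take, assuming the base URL is derived from the request's connection info. The favicon path, description text, and `/search?q=` template are assumptions and may not match the shipped `public/websurfx.xml`:

```rust
use actix_web::{http::header, HttpRequest, HttpResponse};

// Hypothetical helper: renders the OpenSearch description with absolute URLs
// built from the request's scheme and host, and a 32x32 Image element.
fn opensearch_xml(req: &HttpRequest) -> String {
    let info = req.connection_info();
    let base = format!("{}://{}", info.scheme(), info.host());
    format!(
        r#"<?xml version="1.0" encoding="UTF-8"?>
<OpenSearchDescription xmlns="http://a9.com/-/spec/opensearch/1.1/">
  <ShortName>Websurfx</ShortName>
  <Description>Websurfx meta search</Description>
  <InputEncoding>UTF-8</InputEncoding>
  <Image width="32" height="32" type="image/x-icon">{base}/images/favicon.ico</Image>
  <Url type="text/html" template="{base}/search?q={{searchTerms}}"/>
  <Url type="application/opensearchdescription+xml" rel="self" template="{base}/websurfx.xml"/>
</OpenSearchDescription>
"#,
        base = base
    )
}

fn opensearch_response(req: HttpRequest) -> HttpResponse {
    HttpResponse::Ok()
        .insert_header((header::CONTENT_TYPE, "application/opensearchdescription+xml"))
        .body(opensearch_xml(&req))
}
```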
In `@src/routes/mod.rs`:
- Line 55: The doc comment above the OpenSearch description route in
src/routes/mod.rs is copy/pasted from the robots.txt handler and incorrectly
describes the "route of robots.txt page"; update that comment to accurately
describe the OpenSearch description endpoint (the handler that returns the
OpenSearch description XML, not robots_data). Locate the doc comment immediately
above the OpenSearch route handler (look for the function/handler name like
opensearch_description or opensearch_description_route) and replace the
incorrect text with a short description that it serves the OpenSearch
description XML for the websurfx meta search engine.
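As a sketch, the corrected doc comment might read roughly like this (signature and body elided; wording is illustrative):

```rust
/// Handles the route of the OpenSearch description file (`/websurfx.xml`),
/// which lets web browsers auto-discover websurfx as a search engine.
///
/// # Errors
///
/// Returns an error if the `websurfx.xml` theme file cannot be read from disk.
pub async fn opensearch_description() { /* handler body as in the PR */ }
```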
---
Nitpick comments:
In `@src/routes/mod.rs`:
- Around line 60-64: The handler currently reads the static theme XML on every
request using read_to_string(format!( "{}/websurfx.xml",
file_path(FileType::Theme).await?)).await? — change this to cache the file
contents (e.g., a module-level OnceCell<String> or Bytes) and load it once
(either at startup or lazily on first request) and then return the cached value
in the route handler (mirror the approach used by robots_data); ensure the cache
key is tied to FileType::Theme/read_to_string semantics and that any I/O errors
are handled when populating the OnceCell so the handler simply serves the stored
String/Bytes thereafter.
- Line 65: The code uses .parse().unwrap() to build a ContentType (variable
content_type), which violates the project's panic-forbid rule; replace the
parse+unwrap with a non-panicking constructor such as creating a HeaderValue via
HeaderValue::from_static("application/opensearchdescription+xml") and then
constructing ContentType(HeaderValue::from_static(...)) or otherwise propagate
the parse error (e.g., use HeaderValue::from_str and handle the Result) so no
unwrap is used; update the usage of ContentType accordingly (references:
ContentType, HeaderValue, content_type).
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 266152b6-5ec1-4790-9665-e4ecd7de3482
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (5)
Cargo.toml, public/websurfx.xml, src/lib.rs, src/routes/mod.rs, src/templates/partials/header.rs
…ving route (#622) Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
🧹 Nitpick comments (1)
src/routes/mod.rs (1)
55-71: LGTM! Handler mirrors the existing `robots_data` pattern, the MIME `application/opensearchdescription+xml` matches the OpenSearch 1.1 spec, and the doc comment is now accurate. Route registration in `src/lib.rs` and the `<link rel="search">` in the header partial complete the auto-discovery wiring.
One optional nit: each request re-reads `websurfx.xml` from disk. Since this file is effectively static per deployment, you could cache it on first read (e.g., via `OnceCell`/`tokio::sync::OnceCell`) to avoid repeated I/O, but this matches the style of `robots_data`, so feel free to defer for consistency.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/routes/mod.rs` around lines 55 - 71, The handler opensearch_description currently reads "{}/websurfx.xml" from disk on every request; change it to load and cache the file on first access (e.g., use OnceCell or tokio::sync::OnceCell holding a String) so subsequent calls return the cached content instead of re-reading the file; keep the existing function signature and response behavior (ContentType "application/opensearchdescription+xml" and HttpResponse::Ok with .body(page_content)) and initialize the cell on first await of file_path(FileType::Theme) and read_to_string, storing the resulting String in the OnceCell for future requests.
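For context, the auto-discovery link mentioned above could be emitted from the header partial roughly like this, assuming the project's maud-based templates; the title and href here are illustrative:

```rust
use maud::{html, Markup};

// Sketch of the <link rel="search"> tag that points browsers at the
// OpenSearch description served from /websurfx.xml.
pub fn opensearch_link() -> Markup {
    html! {
        link rel="search"
            type="application/opensearchdescription+xml"
            title="Websurfx"
            href="/websurfx.xml";
    }
}
```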
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@src/routes/mod.rs`:
- Around line 55-71: The handler opensearch_description currently reads
"{}/websurfx.xml" from disk on every request; change it to load and cache the
file on first access (e.g., use OnceCell or tokio::sync::OnceCell holding a
String) so subsequent calls return the cached content instead of re-reading the
file; keep the existing function signature and response behavior (ContentType
"application/opensearchdescription+xml" and HttpResponse::Ok with
.body(page_content)) and initialize the cell on first await of
file_path(FileType::Theme) and read_to_string, storing the resulting String in
the OnceCell for future requests.
What does this PR do?
This PR provides the OpenSearch description file for the search engine.
Why is this change important?
This change is important because it improves the user experience: web browsers can automatically detect websurfx and offer it as a search engine entry.
Author's checklist
v1.29.0
Related issues
Closes #622
Summary by CodeRabbit
New Features
Chores