feat: add resource utilization middleware to monitor CPU and memory u… #1352
Conversation
Walkthrough

A new middleware for resource utilization checking was added to the HTTP handlers. The middleware monitors CPU and memory usage before processing requests, returning a 503 error if thresholds are exceeded. The middleware is registered in the HTTP server pipeline, and the necessary module declarations and imports were updated. Additionally, command-line options for configuring CPU and memory thresholds were introduced, and the resource monitor runs concurrently during server operation with proper startup and shutdown handling.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant ActixServer
    participant ResourceCheckMiddleware
    participant NextHandler
    Client->>ActixServer: Sends HTTP request
    ActixServer->>ResourceCheckMiddleware: Pass request to middleware
    ResourceCheckMiddleware->>ResourceCheckMiddleware: Check CPU & memory usage flag
    alt Usage within threshold
        ResourceCheckMiddleware->>NextHandler: Forward request
        NextHandler-->>Client: Respond
    else Usage exceeds threshold
        ResourceCheckMiddleware-->>Client: Return 503 Service Unavailable
    end
```
```mermaid
sequenceDiagram
    participant Server
    participant ResourceMonitorTask
    participant ShutdownSignal
    Server->>ResourceMonitorTask: spawn_resource_monitor(shutdown_rx)
    ResourceMonitorTask->>ResourceMonitorTask: Periodically check CPU & memory usage
    ResourceMonitorTask->>ResourceMonitorTask: Update global flag based on thresholds
    Server->>ShutdownSignal: Send shutdown signal on server stop
    ShutdownSignal->>ResourceMonitorTask: Terminate monitoring task
```
Actionable comments posted: 1
🧹 Nitpick comments (3)
src/handlers/http/resource_check.rs (2)
29-30: Hard-coded thresholds → move to configuration

CPU_UTILIZATION_THRESHOLD and MEMORY_UTILIZATION_THRESHOLD are compile-time constants. Making them environment- or config-driven (e.g. PARSEABLE.options) allows tuning without redeploying.
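A minimal sketch of the idea, assuming hypothetical environment-variable names (later commits in this PR expose these as CLI options instead, which subsumes this approach):

```rust
// Sketch only: the P_* env-var names are hypothetical, not part of the PR.
// The defaults stand in for the current compile-time constants.
fn threshold_from_env(var: &str, default: f32) -> f32 {
    std::env::var(var)
        .ok()
        .and_then(|v| v.parse::<f32>().ok())
        .filter(|v| (0.0..=100.0).contains(v))
        .unwrap_or(default)
}

fn utilization_thresholds() -> (f32, f32) {
    (
        threshold_from_env("P_CPU_UTILIZATION_THRESHOLD", 80.0),
        threshold_from_env("P_MEMORY_UTILIZATION_THRESHOLD", 80.0),
    )
}
```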
57-58: Include Retry-After header for back-pressure friendliness

When returning 503 Service Unavailable, consider adding Retry-After to hint clients when to attempt again:

```diff
-return Err(ErrorServiceUnavailable(error_msg));
+let mut err = ErrorServiceUnavailable(error_msg);
+err.response_mut().headers_mut().insert(
+    http::header::RETRY_AFTER,
+    http::HeaderValue::from_static("30"), // seconds
+);
+return Err(err);
```

Small change, big UX improvement.
src/handlers/http/modal/mod.rs (1)

48-48: Import grouping

resource_check is added to the gigantic super::{…} list. No functional issue, but grouping related imports (middleware modules together) would aid readability.
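For illustration, a grouping along these lines (only resource_check is known from this PR; the other names are placeholders for the rest of the list):

```rust
// Middleware modules grouped together; the remaining items of the
// original super::{…} list are elided.
use super::{
    health_check,   // assumed home of check_shutdown_middleware
    resource_check, // added by this PR
    // ...constants, handlers, etc.
};
```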
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- src/handlers/http/mod.rs (1 hunks)
- src/handlers/http/modal/mod.rs (2 hunks)
- src/handlers/http/resource_check.rs (1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
src/handlers/http/modal/mod.rs (1)
src/handlers/http/resource_check.rs (1)
check_resource_utilization_middleware (34-76)
⏰ Context from checks skipped due to timeout of 90000ms (9)
- GitHub Check: Build Default aarch64-apple-darwin
- GitHub Check: Quest Smoke and Load Tests for Standalone deployments
- GitHub Check: coverage
- GitHub Check: Quest Smoke and Load Tests for Distributed deployments
- GitHub Check: Build Default x86_64-pc-windows-msvc
- GitHub Check: Build Default x86_64-unknown-linux-gnu
- GitHub Check: Build Default aarch64-unknown-linux-gnu
- GitHub Check: Build Kafka aarch64-apple-darwin
- GitHub Check: Build Kafka x86_64-unknown-linux-gnu
🔇 Additional comments (3)
src/handlers/http/resource_check.rs (1)
61-63: refresh_cpu_usage needs a delay between calls

sysinfo computes CPU load as a delta between two successive calls. When you perform refresh_cpu_usage() and immediately read global_cpu_usage() without any pause, the value is often ~0 %. If the static System suggestion above is not adopted, you'll need at minimum:

```rust
sys.refresh_cpu();
tokio::time::sleep(Duration::from_millis(200)).await;
sys.refresh_cpu();
let cpu_usage = sys.global_cpu_usage();
```

Failing to do so will under-report utilisation and the middleware might never trigger.
src/handlers/http/mod.rs (1)
39-40: Module export looks good

Publicly exposing resource_check integrates the new middleware cleanly; no further action required.

src/handlers/http/modal/mod.rs (1)

115-118: To verify the relative cost and early-exit behavior of each middleware, let's locate their implementations:

```shell
#!/bin/bash
# Find the shutdown-check middleware implementation
rg -n "fn check_shutdown_middleware" .
# Find the resource-utilization middleware implementation
rg -n "fn check_resource_utilization_middleware" .
```
Actionable comments posted: 2
♻️ Duplicate comments (1)
src/handlers/http/modal/ingest_server.rs (1)
129-132: Same duplication issue as in server.rs

See earlier comment – ensure the resource monitor is started exactly once per process.
🧹 Nitpick comments (5)
src/option.rs (1)

178-188: Unify percentage-validation helpers and numeric types

validate_percentage duplicates most of validate_disk_usage but returns f32 while the earlier helper uses f64. Consider consolidating both into a single generic routine and sticking to one float type (prefer f64 throughout for higher precision and consistency).
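A sketch of the consolidated helper this comment has in mind; the name validate_percentage comes from the PR, while the f64 signature is the review's suggestion:

```rust
// One shared validator for every percentage-style CLI option (0–100),
// usable directly as a clap value_parser.
pub fn validate_percentage(s: &str) -> Result<f64, String> {
    match s.parse::<f64>() {
        Ok(v) if (0.0..=100.0).contains(&v) => Ok(v),
        _ => Err(format!("'{s}' is not a percentage between 0 and 100")),
    }
}
```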
src/handlers/http/modal/server.rs (2)

142-145: Guard against spawning multiple resource-monitor tasks

spawn_resource_monitor() is invoked here and again in ingest_server.rs. If both server roles coexist in the same binary this will start two background loops polling the same SYS_INFO, doubling load and causing racy log noise. Add a OnceCell/Lazy flag inside resource_check so the monitor is spawned only once.
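A sketch of the spawn-once guard using std's OnceLock (a OnceCell/Lazy from the once_cell crate works the same way); the oneshot receiver matches the signature shown in the sequence diagram, and the sampling loop itself is elided:

```rust
use std::sync::OnceLock;
use tokio::sync::oneshot;

static MONITOR_STARTED: OnceLock<()> = OnceLock::new();

pub fn spawn_resource_monitor(shutdown_rx: oneshot::Receiver<()>) {
    // set() fails if the cell is already initialized, so a second caller
    // (e.g. ingest_server.rs after server.rs) becomes a no-op.
    if MONITOR_STARTED.set(()).is_err() {
        return;
    }
    tokio::spawn(async move {
        // ...periodic CPU/memory sampling loop elided...
        let _ = shutdown_rx.await;
    });
}
```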
158-159: Silently dropping the monitor task handle

The oneshot signal is sent, but the spawned task's JoinHandle is not awaited. On orderly shutdown this may leak warnings if the task panics. Store the handle and await it after sending the shutdown signal, similar to startup_sync_handle.
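A sketch of the suggested shutdown sequence, with hypothetical names standing in for the PR's monitor task:

```rust
use tokio::sync::oneshot;

// Stand-in for the PR's monitoring loop.
async fn resource_monitor(shutdown_rx: oneshot::Receiver<()>) {
    let _ = shutdown_rx.await;
}

#[tokio::main]
async fn main() {
    let (shutdown_tx, shutdown_rx) = oneshot::channel::<()>();
    // Keep the JoinHandle instead of discarding it at spawn time.
    let monitor_handle = tokio::spawn(resource_monitor(shutdown_rx));

    // ...server runs...

    let _ = shutdown_tx.send(()); // signal the monitor to stop
    // Awaiting surfaces a panic here instead of dropping it silently.
    if let Err(e) = monitor_handle.await {
        eprintln!("resource monitor task panicked: {e}");
    }
}
```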
src/cli.rs (1)

320-338: Align threshold types with existing options

CPU and memory thresholds use f32 whereas --max-disk-usage-percent uses f64. For a consistent public API (and to avoid accidental precision loss) consider switching these to f64 as well.
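For illustration, the f64 variant with clap's derive API (the flag and env names here are guesses, not necessarily the PR's exact spellings):

```rust
// Hypothetical excerpt of the options struct in src/cli.rs.
#[arg(
    long = "cpu-utilization-threshold",
    env = "P_CPU_UTILIZATION_THRESHOLD", // assumed env name
    default_value = "80.0",
    value_parser = validate_percentage,  // shared f64 validator from src/option.rs
)]
pub cpu_utilization_threshold: f64,
```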
src/handlers/http/resource_check.rs (1)

33-34: Use AtomicBool for cheap lock-free reads

RESOURCE_CHECK_ENABLED is a single boolean read on every request. An AtomicBool with Ordering::Relaxed would avoid the async RwLock overhead and allocation per request.
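A sketch of the lock-free version:

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Replaces the async RwLock<bool>; `true` means requests may proceed.
static RESOURCE_CHECK_ENABLED: AtomicBool = AtomicBool::new(true);

// Monitor task: publish the latest verdict.
pub fn set_resources_ok(ok: bool) {
    RESOURCE_CHECK_ENABLED.store(ok, Ordering::Relaxed);
}

// Middleware: one cheap, non-async read per request.
pub fn resources_ok() -> bool {
    RESOURCE_CHECK_ENABLED.load(Ordering::Relaxed)
}
```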
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
- src/cli.rs (1 hunks)
- src/handlers/http/modal/ingest_server.rs (2 hunks)
- src/handlers/http/modal/server.rs (3 hunks)
- src/handlers/http/resource_check.rs (1 hunks)
- src/option.rs (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (10)
- GitHub Check: Build Default x86_64-unknown-linux-gnu
- GitHub Check: Build Default x86_64-pc-windows-msvc
- GitHub Check: Quest Smoke and Load Tests for Distributed deployments
- GitHub Check: Build Default x86_64-apple-darwin
- GitHub Check: Quest Smoke and Load Tests for Standalone deployments
- GitHub Check: Build Default aarch64-apple-darwin
- GitHub Check: Build Default aarch64-unknown-linux-gnu
- GitHub Check: Build Kafka x86_64-unknown-linux-gnu
- GitHub Check: Build Kafka aarch64-apple-darwin
- GitHub Check: coverage
Actionable comments posted: 0
🧹 Nitpick comments (5)
src/handlers/http/resource_check.rs (5)
41-45: Thresholds captured once – runtime config changes are ignored

cpu_threshold and memory_threshold are read before the loop and never refreshed. If these values are expected to be hot-reloaded (e.g. via SIGHUP or admin API), the monitor will keep using the stale numbers. Consider moving the reads inside the tick branch:

```diff
-    let cpu_threshold = PARSEABLE.options.cpu_utilization_threshold;
-    let memory_threshold = PARSEABLE.options.memory_utilization_threshold;
+    // fetched on every iteration to pick up dynamic config changes
```
51-58: spawn_blocking each tick is avoidable overhead

Sampling three numeric fields from sysinfo is cheap and already guarded by a Mutex. Spawning a blocking task every 30 s creates overhead and needless thread context switches.

```diff
-let (used_memory, total_memory, cpu_usage) = tokio::task::spawn_blocking(|| {
-    let sys = SYS_INFO.lock().unwrap();
-    ...
-}).await.unwrap();
+let (used_memory, total_memory, cpu_usage) = {
+    let sys = SYS_INFO.lock().unwrap();
+    (sys.used_memory() as f32,
+     sys.total_memory() as f32,
+     sys.global_cpu_usage())
+};
```

If you still worry about blocking, wrap the initial /proc scan in a background task and keep subsequent reads cheap.
70-73: Promote heavy log line to debug! (or throttle)

Emitting a full resource report at info on every 30-second tick can flood production logs. Either:

- change to debug!, or
- log only on state changes (already handled below), or
- throttle with tracing::info!(rate = ...).

Minor, but keeps logs actionable.
70-73: Prefer f64 for GB conversion to avoid precision loss

used_memory/total_memory are cast to f32, then divided by 1024³. Large byte counts can exceed f32's 24-bit mantissa and lose precision (especially on >128 GiB hosts).

```diff
-used_memory / 1024.0 / 1024.0 / 1024.0,
-total_memory / 1024.0 / 1024.0 / 1024.0);
+(used_memory as f64) / 1024f64.powi(3),
+(total_memory as f64) / 1024f64.powi(3));
```
89-90: Acquire write lock once to read & update flag

Two separate lock calls (read then write) introduce an avoidable race window. Capture the previous value while holding the write lock:

```diff
-let previous_state = *RESOURCE_CHECK_ENABLED.read().await;
-*RESOURCE_CHECK_ENABLED.write().await = resource_ok;
+let mut guard = RESOURCE_CHECK_ENABLED.write().await;
+let previous_state = *guard;
+*guard = resource_ok;
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
src/handlers/http/resource_check.rs
(1 hunks)
🧰 Additional context used
🧠 Learnings (1)
src/handlers/http/resource_check.rs (1)
Learnt from: vkhinvasara
PR: parseablehq/parseable#1352
File: src/handlers/http/resource_check.rs:70-74
Timestamp: 2025-06-18T08:43:30.100Z
Learning: The sysinfo crate (version 0.30+) returns memory values from `used_memory()` and `total_memory()` in bytes, not kilobytes. Converting bytes to GiB requires dividing by 1024³ (1,073,741,824). In older versions (pre-0.30), the methods returned kilobytes.
⏰ Context from checks skipped due to timeout of 90000ms (10)
- GitHub Check: coverage
- GitHub Check: Build Default aarch64-apple-darwin
- GitHub Check: Build Default x86_64-pc-windows-msvc
- GitHub Check: Build Kafka aarch64-apple-darwin
- GitHub Check: Build Default x86_64-apple-darwin
- GitHub Check: Build Default aarch64-unknown-linux-gnu
- GitHub Check: Build Kafka x86_64-unknown-linux-gnu
- GitHub Check: Build Default x86_64-unknown-linux-gnu
- GitHub Check: Quest Smoke and Load Tests for Distributed deployments
- GitHub Check: Quest Smoke and Load Tests for Standalone deployments
🔇 Additional comments (1)
src/handlers/http/resource_check.rs (1)
33-34:

```shell
#!/bin/bash
set -e

echo "=== Cargo.toml entries for Tokio ==="
rg -n '^\s*tokio\s*=' --glob 'Cargo.toml' || echo "No direct Tokio entry in Cargo.toml"

echo
echo "=== Cargo.lock entries for Tokio ==="
rg -n '^tokio ' --glob 'Cargo.lock' || echo "No Tokio entries found in Cargo.lock"
```
Resource Utilization Thresholds: send a 503 response when CPU/memory thresholds are exceeded.
Description
This PR enhances our ingestion pipeline by implementing an overload protection mechanism during peak usage. Specifically, when CPU or memory utilization crosses predefined thresholds, the system will proactively reject further ingestion requests with an HTTP 503 (Service Unavailable) status. This prevents uncontrolled resource consumption and cascading failures.
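As a rough sketch of the mechanism (not the PR's exact code), a function middleware of this shape can be registered via actix-web's middleware::from_fn, assuming a global flag kept current by the background resource monitor:

```rust
use actix_web::{
    body::MessageBody,
    dev::{ServiceRequest, ServiceResponse},
    error::ErrorServiceUnavailable,
    middleware::Next,
    Error,
};
use std::sync::atomic::{AtomicBool, Ordering};

// Updated periodically by the resource-monitor task.
static RESOURCE_OK: AtomicBool = AtomicBool::new(true);

// Reject early with 503 while the node is over its CPU/memory thresholds.
pub async fn check_resource_utilization_middleware(
    req: ServiceRequest,
    next: Next<impl MessageBody>,
) -> Result<ServiceResponse<impl MessageBody>, Error> {
    if !RESOURCE_OK.load(Ordering::Relaxed) {
        return Err(ErrorServiceUnavailable(
            "server over capacity, please retry later",
        ));
    }
    next.call(req).await
}
```

Registration would then be a single `.wrap(middleware::from_fn(check_resource_utilization_middleware))` on the app builder.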
File Changes

- resource_check.rs: a middleware that checks resource utilization before every ingestion request.
- modal/mod.rs: wraps the above middleware.
- http/mod.rs: exposes said middleware.