Commit 368995b

fix: reverting back flushNow lock
1 parent: 26aa67c

1 file changed: +3 -2 lines changed

docs/01-common-patterns/batching-ops.md

Lines changed: 3 additions & 2 deletions
@@ -63,8 +63,6 @@ func (b *Batcher[T]) Add(item T) {
 }
 
 func (b *Batcher[T]) flushNow() {
-	b.mu.Lock()
-	defer b.mu.Unlock()
 	if len(b.buffer) == 0 {
 		return
 	}
@@ -73,6 +71,9 @@ func (b *Batcher[T]) flushNow() {
 }
 ```
 
+!!! warning
+    This batcher implementation expects that you will never call `Batcher.Add(...)` from your `flush()` function. We have this limitation because Go mutexes are [**not** recursive](https://stackoverflow.com/questions/14670979/recursive-locking-in-go).
+
 This batcher works with any data type, making it a flexible solution for aggregating logs, metrics, database writes, or other grouped operations. Internally, the buffer acts as a queue that accumulates items until a flush threshold is reached. The use of `sync.Mutex` ensures that `Add()` and `flushNow()` are safe for concurrent access, which is necessary in most real-world systems where multiple goroutines may write to the batcher.
 
 From a performance standpoint, it's true that a lock-free implementation (using atomic operations or concurrent ring buffers) could reduce contention and improve throughput under heavy load. However, such designs are more complex, harder to maintain, and generally not justified unless you're pushing extremely high concurrency or low-latency boundaries. For most practical workloads, the simplicity and safety of a `sync.Mutex`-based design offer a great balance between performance and maintainability.
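
For context, a minimal sketch of how the surrounding `Batcher[T]` type might look after this change. The `limit` field, `NewBatcher` constructor, and `flush` callback signature here are illustrative assumptions rather than the exact code from the docs page; what it demonstrates is the locking layout the commit restores, where only `Add` takes the mutex and `flushNow` assumes it is already held.

```go
package batcher

import "sync"

// Batcher accumulates items of any type and hands them to a flush
// callback once a size threshold is reached. Illustrative sketch only.
type Batcher[T any] struct {
	mu     sync.Mutex
	buffer []T
	limit  int
	flush  func(items []T) // must not call Add: b.mu is held and Go mutexes are not recursive
}

// NewBatcher returns a batcher that flushes every `limit` items.
func NewBatcher[T any](limit int, flush func([]T)) *Batcher[T] {
	return &Batcher[T]{limit: limit, flush: flush}
}

// Add appends an item and flushes when the threshold is reached.
// The mutex is acquired here, so flushNow runs with the lock already held.
func (b *Batcher[T]) Add(item T) {
	b.mu.Lock()
	defer b.mu.Unlock()

	b.buffer = append(b.buffer, item)
	if len(b.buffer) >= b.limit {
		b.flushNow()
	}
}

// flushNow assumes the caller already holds b.mu; re-acquiring the lock
// here (as the pre-revert version did) would deadlock when called from Add.
func (b *Batcher[T]) flushNow() {
	if len(b.buffer) == 0 {
		return
	}
	items := b.buffer
	b.buffer = nil
	b.flush(items)
}
```

With this layout, calling `b.Add` from inside the flush callback would try to re-acquire `b.mu` while `Add` still holds it and deadlock immediately, which is exactly the situation the added `!!! warning` admonition guards against.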
