Modify server tokenization logic to distinguish between the prompt used for generation and the key used for cache lookup.

- Add `_tokenize_for_cache_key`, which applies the chat template without the generation suffix. KV cache lookups now match on message content only, fixing prompt-chaining issues where the suffix incorrectly altered the cache key.
- Update the batch and non-batch generation flows to fetch the cache using the clean key, then append the generation suffix to the remaining tokens.
- Fix `ArraysCache.nbytes` to check for `None` entries before summing bytes, preventing errors during size calculation.
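The split between the lookup key and the generation prompt can be sketched as below. The `ToyTokenizer`, its `<|assistant|>` suffix, and the two helper functions are illustrative stand-ins, not the PR's actual implementation; the real `_tokenize_for_cache_key` works against the server's tokenizer API.

```python
class ToyTokenizer:
    """Minimal stand-in for a chat-template tokenizer (illustration only)."""
    SUFFIX = "<|assistant|>"  # hypothetical generation suffix

    def apply_chat_template(self, messages, add_generation_prompt):
        text = "".join(f"<|{m['role']}|>{m['content']}<|end|>" for m in messages)
        return text + self.SUFFIX if add_generation_prompt else text

    def encode(self, text):
        # Crude per-character "tokenization", enough to show prefix behavior.
        return list(text)


def tokenize_for_cache_key(messages, tokenizer):
    # Cache key: message content only, no generation suffix. Two requests
    # that share the same messages produce the same key even if one of
    # them previously generated a response.
    return tokenizer.encode(
        tokenizer.apply_chat_template(messages, add_generation_prompt=False)
    )


def tokenize_for_generation(messages, tokenizer):
    # Full prompt: includes the suffix that cues the model to respond.
    # It extends the cache key, so the cached prefix is still reusable.
    return tokenizer.encode(
        tokenizer.apply_chat_template(messages, add_generation_prompt=True)
    )
```

The point is that the key tokens are always a strict prefix of the generation tokens, so the cache lookup ignores the suffix while generation still receives it.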
Previously, extracting a cache entry removed it from the LRU, preventing multiple requests from reusing the same cached prefix. Update `LRUPromptCache._extract` to accept a `keep_original` flag. When enabled for shorter prefix matches, the method returns a deep copy of the cache without deleting the original entry, so the cached prompt remains available for subsequent requests and supports prompt chaining with hybrid models. Add `test_cache_persistence` to verify that cached prefixes persist and are reused across multiple requests.
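The `keep_original` behavior can be sketched roughly as follows. This is a simplified model of the cache, not the PR's code: the entry storage, `insert` helper, and dict-based cache objects are assumptions for illustration.

```python
import copy
from collections import OrderedDict


class LRUPromptCache:
    """Simplified sketch of an LRU prompt cache keyed by token prefixes."""

    def __init__(self):
        self._entries = OrderedDict()  # token-prefix tuple -> cache object

    def insert(self, tokens, cache):
        self._entries[tuple(tokens)] = cache
        self._entries.move_to_end(tuple(tokens))  # mark as most recently used

    def _extract(self, tokens, keep_original=False):
        key = tuple(tokens)
        entry = self._entries.get(key)
        if entry is None:
            return None
        if keep_original:
            # Return a deep copy so the original entry stays in the cache
            # and remains available to later requests sharing this prefix.
            return copy.deepcopy(entry)
        # Default behavior: remove the entry on extraction.
        return self._entries.pop(key)
```

With `keep_original=True`, mutating the returned copy cannot corrupt the stored entry, which is what makes the prefix reusable across requests.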
Update `LRUPromptCache` to store cache entries at message boundaries (e.g., after system or user messages) in addition to the full prompt sequence. This allows the cache to be shared when conversations branch, improving efficiency for multi-turn dialogs.

- Modify `insert_cache` to accept an optional `boundary_positions` list.
- Add an `_insert_boundary_cache` helper to store references to shared cache objects at specific token indices.
- Add `_find_cache_boundaries` in `ResponseGenerator` to detect message delimiters such as `<|im_end|>` across different tokenizers.
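Boundary detection can be sketched as a scan for delimiter token ids. The function below is an illustrative approximation of `_find_cache_boundaries`, assuming the delimiter ids (e.g., the id for `<|im_end|>`) have already been resolved for the active tokenizer.

```python
def find_cache_boundaries(tokens, delimiter_ids):
    """Return the position just after each message delimiter token.

    `tokens` is the prompt's token-id sequence; `delimiter_ids` is the set
    of ids that end a message (tokenizer-dependent, e.g. <|im_end|>).
    A cache entry stored at each returned position covers one or more
    complete messages, so branching conversations can share it.
    """
    return [i + 1 for i, t in enumerate(tokens) if t in delimiter_ids]
```

Storing entries at these positions means two conversations that diverge after the same system and user messages can both hit the cache up to the shared boundary.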
Modify the LRU cache strategy to return references instead of copies, reducing memory overhead for hybrid models and prompt chaining.

- Remove the `deepcopy` in `_extract` so cache objects can be mutated in place.
- Update `fetch_nearest_cache` to also return the matched token position, enabling cache migration.
- Extend `insert_cache` with an `old_position` argument to move cache entries rather than duplicating them.
- Update `nbytes` dynamically when overwriting existing cache entries.
- Add debug logging for cache operations.
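The fetch-then-migrate flow can be sketched as below. The class and its internal dict are hypothetical simplifications of the reference-returning strategy; only the `fetch_nearest_cache` / `insert_cache(old_position=...)` shapes follow the description above.

```python
class PromptCacheRefs:
    """Sketch of a cache that hands out references, not copies."""

    def __init__(self):
        self._entries = {}  # token-prefix tuple -> cache object (shared ref)

    def fetch_nearest_cache(self, tokens):
        """Return (cache, matched_position) for the longest stored prefix
        of `tokens`. The cache object is a reference, so the caller may
        extend it in place while generating."""
        best_key, best_len = None, 0
        for key in self._entries:
            n = len(key)
            if n > best_len and tuple(tokens[:n]) == key:
                best_key, best_len = key, n
        if best_key is None:
            return None, 0
        return self._entries[best_key], best_len

    def insert_cache(self, tokens, cache, old_position=None):
        """Store `cache` under the full prefix. If `old_position` is given,
        the entry at that shorter prefix is moved (its key deleted) rather
        than leaving a duplicate reference behind."""
        if old_position is not None:
            self._entries.pop(tuple(tokens[:old_position]), None)
        self._entries[tuple(tokens)] = cache
```

Because the same object is re-keyed rather than copied, a long conversation carries one cache forward instead of accumulating duplicates, which is where the memory savings come from.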
Add support for creating periodic snapshots of the prompt cache to facilitate branching conversation histories.

- Introduce an `is_snapshot` attribute on `CacheEntry` to distinguish mutable cache entries from immutable snapshots.
- Add a `checkpoint_interval` parameter (default 8192) to `__init__` to specify snapshot frequency.
- Implement `_find_checkpoint_positions` to place snapshots at logical message boundaries near each interval.
- Modify the lookup logic to extract copies from snapshots (preventing shared-state mutation) while preserving in-place updates for linear extensions.
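Checkpoint placement can be sketched as snapping each interval multiple back to the nearest message boundary. This is an illustrative guess at the shape of `_find_checkpoint_positions`; the boundary list is assumed to come from the message-delimiter detection described earlier.

```python
def find_checkpoint_positions(boundaries, prompt_len, interval=8192):
    """Pick snapshot positions near each multiple of `interval`.

    `boundaries` are token positions at message ends (sorted ascending).
    For each interval target up to `prompt_len`, choose the last boundary
    at or before the target, so snapshots land on whole messages rather
    than mid-message.
    """
    positions = []
    target = interval
    while target <= prompt_len:
        candidates = [b for b in boundaries if b <= target]
        if candidates:
            pos = max(candidates)
            if pos not in positions:  # two targets may snap to one boundary
                positions.append(pos)
        target += interval
    return positions
```

Snapping to boundaries keeps snapshots meaningful for branching: a branch that shares the first N messages can start from an immutable snapshot, while the linear continuation keeps mutating its own entry in place.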
Summary
This pull request adds new features, refactoring, and fixes to `LRUPromptCache`, particularly for branching conversation histories and prompt chaining.