Conversation

@bylaws (Collaborator) commented Dec 29, 2025

A significant proportion of the time in the prior approach was spent in the unbuffered write calls issued once per code page/entrypoint, taking upwards of 15 seconds to serialize a 700 MB P3R cache. Switching to an mmap-based approach reduces the CPU time there to less than a second, leaving I/O to dominate.

A two-pass approach is used:

  1. Calculate the required size for the cache file, then issue a callback to the frontend to map a cache file of that size.
  2. Write the cache data into the buffer provided by that callback (see the sketch below).
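
Roughly, the writer ends up looking like this: a minimal sketch with illustrative names (`MapCacheFileCallback`, `CacheWriter`), not the exact code in this PR.

```cpp
// Two-pass cache serialization, minimally sketched. The frontend maps a
// file of the exact required size; the same Write() expression drives both
// the sizing pass and the writing pass.
#include <cstddef>
#include <cstring>
#include <functional>
#include <span>

// Frontend-provided callback: maps a cache file of the requested size and
// returns a writable view of it (empty if mapping fails, e.g. out of disk).
using MapCacheFileCallback = std::function<std::span<std::byte>(size_t)>;

template<bool CalculateSize>
struct CacheWriter {
  std::byte* Cursor{}; // Pass 2: current position within the mapped file.
  size_t Size{};       // Pass 1: accumulated size requirement.

  void Write(const void* Data, size_t Length) {
    if constexpr (CalculateSize) {
      Size += Length; // Pass 1: only count.
    } else {
      std::memcpy(Cursor, Data, Length); // Pass 2: copy into the mapping.
      Cursor += Length;
    }
  }
};
```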

CC: @neobrain (needs Linux testing)

@bylaws marked this pull request as draft on December 29, 2025 at 18:12
@neobrain (Member) commented:

Sorry, but `Serialize.template operator()<false>()` is an instant red flag for me :)

But yeah, good find: the core issue here is the number of small-size writes that became necessary after the recent changes to the `LookupCache` data structures. I would much prefer simple buffering (`std::vector`) here, since it doesn't involve nested template lambda expressions or a funky mmapping callback. `mmap` versus `write` itself plays a negligible role, so I expect equivalent performance.
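
For concreteness, I'd imagine something roughly like this (the `BufferedWriter` name and POSIX `write` usage are purely illustrative; error handling elided):

```cpp
// Simple std::vector buffering: accumulate the small records in memory,
// then flush them with one large write() instead of thousands of tiny ones.
#include <cstddef>
#include <unistd.h>
#include <vector>

struct BufferedWriter {
  int Fd;
  std::vector<std::byte> Buffer;

  // Replaces the per-record unbuffered write(): just append to the vector.
  void Append(const void* Data, size_t Length) {
    auto* Bytes = static_cast<const std::byte*>(Data);
    Buffer.insert(Buffer.end(), Bytes, Bytes + Length);
  }

  // Single syscall per flush; call once per write loop (or at the end).
  void Flush() {
    size_t Offset = 0;
    while (Offset < Buffer.size()) {
      ssize_t Written = write(Fd, Buffer.data() + Offset, Buffer.size() - Offset);
      if (Written <= 0) {
        break; // Real code would retry on EINTR and surface errors.
      }
      Offset += static_cast<size_t>(Written);
    }
    Buffer.clear();
  }
};
```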

@bylaws (Collaborator, Author) commented Jan 12, 2026

> Sorry, but `Serialize.template operator()<false>()` is an instant red flag for me :)
>
> But yeah, good find: the core issue here is the number of small-size writes that became necessary after the recent changes to the `LookupCache` data structures. I would much prefer simple buffering (`std::vector`) here, since it doesn't involve nested template lambda expressions or a funky mmapping callback. `mmap` versus `write` itself plays a negligible role, so I expect equivalent performance.

Hmm, I was wanting to avoid doubling the memory usage for no reason, especially when mmap is basically free. We could buffer some of the writes but not the code buffer write, but mmap felt like a nicer catch-all, future-proof solution. I could split the lambda out into a separate function? The nested lambda isn't inherent to the approach; the main reason for the templates is that I thought it was kind of cool to reuse the same expression to calculate the size as to write the data out. A templated static function could do exactly the same, as sketched below.
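
E.g. roughly like this (the `Entry` layout is made up purely for illustration):

```cpp
// Hypothetical sketch: the nested lambda lifted into a templated static
// function, so a single body serves both the sizing and the writing pass.
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <span>

struct Entry { // Illustrative only; not the real cache entry layout.
  std::uint64_t GuestRIP;
  std::span<const std::byte> Code;
};

// With CalculateSize, Out may be null and only the returned size matters;
// otherwise the entry is written at Out and the byte count is returned.
template<bool CalculateSize>
static size_t SerializeEntry(const Entry& E, std::byte* Out) {
  size_t Offset = 0;
  auto Emit = [&](const void* Data, size_t Length) {
    if constexpr (!CalculateSize) {
      std::memcpy(Out + Offset, Data, Length);
    }
    Offset += Length;
  };
  Emit(&E.GuestRIP, sizeof(E.GuestRIP));
  Emit(E.Code.data(), E.Code.size());
  return Offset;
}
```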

With that, I suppose the only remaining concern would be the mapping callback, which I feel isn't that obtuse? Up to you, but when it allows a technically better solution that's easier to extend for future cache changes, I lean towards it. (It also means running out of disk space is handled safely, which is kind of nice.)

@neobrain (Member) commented Jan 14, 2026

> Hmm, I was wanting to avoid doubling the memory usage for no reason, especially when mmap is basically free. We could buffer some of the writes but not the code buffer write, but mmap felt like a nicer catch-all, future-proof solution.

Yes, we don't need to reduce it to a single write, just eliminate the egregious high-frequency writes for the BlockList. That could be either one medium-size allocation per write loop or a more generic "WriteBuffered" utility that performs sector-sized buffering (see the sketch below). The former adds negligible memory overhead, whereas for the latter it depends on the typical size of inner code page data, which you can probably estimate better than I can.
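
Something along these lines, say (the 4 KiB sector size and the names here are assumptions, not a concrete proposal):

```cpp
// Generic "WriteBuffered" utility: coalesce many small writes into
// sector-sized write() calls through a fixed staging buffer.
#include <algorithm>
#include <array>
#include <cstddef>
#include <cstring>
#include <unistd.h>

class WriteBuffered {
  static constexpr size_t SectorSize = 4096; // Assumed sector size.
  int Fd;
  std::array<std::byte, SectorSize> Sector{};
  size_t Used = 0;

public:
  explicit WriteBuffered(int Fd_) : Fd{Fd_} {}

  void Write(const void* Data, size_t Length) {
    auto* Bytes = static_cast<const std::byte*>(Data);
    while (Length != 0) {
      // Copy as much as fits into the staging buffer.
      size_t Chunk = std::min(Length, SectorSize - Used);
      std::memcpy(Sector.data() + Used, Bytes, Chunk);
      Used += Chunk;
      Bytes += Chunk;
      Length -= Chunk;
      if (Used == SectorSize) {
        Flush(); // Emit exactly one sector-sized write().
      }
    }
  }

  void Flush() {
    if (Used != 0) {
      (void)write(Fd, Sector.data(), Used); // Error handling elided.
      Used = 0;
    }
  }
};
```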

> I could split the lambda out into a separate function? The nested lambda isn't inherent to the approach; the main reason for the templates is that I thought it was kind of cool to reuse the same expression to calculate the size as to write the data out. A templated static function could do exactly the same.

Indeed, the nested template lambda was just one particularly inappropriate element here, given how simple the underlying problem is. Still, as you mentioned, without this quasi-metaprogramming we'd need to duplicate the logic to some extent. And even then it's still writing data through magic mmapped memory, which, at the least, most people would stumble over.

None of that is intractable for a reader to figure out (or maybe I'm lacking the imagination to see how simple it could be without the lambdas), but given that simple buffering addresses 99% of the performance concern here, I don't find the approach compelling from a maintainability perspective.
