Add Dockerfile and File-based Caching (Self-Hosted) #4711
+313
−3
This PR adds a Dockerfile and a simple file-based caching mechanism. I've been battling slowness and timeouts for a long time while using this project in my GitHub profile, and this change has made a huge impact.
In my testing, response times dropped from around 4 s per request to under 100 ms (after the first request is cached, obviously).
This is currently deployed and used in my own profile on a self-hosted Coolify instance.
For the Dockerfile I used node:lts-alpine (inspired by another PR that was opened in the past). It includes a health check and exposes a configurable port via the PORT environment variable (defaults to 9000).
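The exact Dockerfile isn't reproduced in this description, but a minimal sketch of what's described above could look like the following. The `express.js` entry point, the `npm ci` install step, and the wget-based health-check path are assumptions for illustration, not necessarily what this PR ships:

```dockerfile
# Minimal sketch of the Dockerfile described above (details are assumptions).
FROM node:lts-alpine

WORKDIR /app

# Install production dependencies first so this layer stays cacheable.
COPY package*.json ./
RUN npm ci --omit=dev

COPY . .

# Port is configurable at runtime; 9000 is the default mentioned in this PR.
ENV PORT=9000
EXPOSE 9000

# Mark the container unhealthy if the server stops responding.
# The probed path is an assumption; any endpoint the server serves would do.
HEALTHCHECK --interval=5m --timeout=5s --retries=3 \
  CMD wget -qO- "http://localhost:${PORT}/api?username=anuraghazra" > /dev/null || exit 1

# Entry point name is an assumption (the repo's self-hosted server script).
CMD ["node", "express.js"]
```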
The file-based caching system is defined in a new module (src/common/fileCache.js) that persists API responses to disk, significantly reducing GitHub API calls. The cache uses a 24-hour TTL by default, generates unique keys via MD5 hashes of the request parameters, and handles errors gracefully so that caching failures never break actual requests.

Caching is integrated into all data fetchers (sketched below):

- stats.js
- top-languages.js
- repo.js
- gist.js
- wakatime.js

This is extremely beneficial in a self-hosted Docker scenario. I am not sure it would make any difference on Vercel, since storage may not be persistent there, but it should not break that kind of deployment or have any other impact.
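The module itself isn't reproduced here, but a minimal sketch of the approach outlined above might look like this. The `readCache`/`writeCache` names, the CACHE_DIR location, and the mtime-based expiry are illustrative assumptions; the MD5 keying, the 24-hour default TTL, and the "never let caching failures break a request" behaviour come from the description above:

```js
// src/common/fileCache.js -- minimal sketch of the described approach.
import crypto from "crypto";
import fs from "fs/promises";
import path from "path";

const CACHE_DIR = process.env.CACHE_DIR || path.join(process.cwd(), ".cache");
const DEFAULT_TTL_MS = 24 * 60 * 60 * 1000; // 24-hour TTL by default.

// Build a unique, filesystem-safe key from the request parameters.
const cacheKey = (params) =>
  crypto.createHash("md5").update(JSON.stringify(params)).digest("hex");

const readCache = async (params, ttl = DEFAULT_TTL_MS) => {
  try {
    const file = path.join(CACHE_DIR, `${cacheKey(params)}.json`);
    const { mtimeMs } = await fs.stat(file);
    if (Date.now() - mtimeMs > ttl) return null; // Entry expired.
    return JSON.parse(await fs.readFile(file, "utf8"));
  } catch {
    return null; // Cache miss or any I/O error: fall through to the real fetch.
  }
};

const writeCache = async (params, data) => {
  try {
    await fs.mkdir(CACHE_DIR, { recursive: true });
    await fs.writeFile(
      path.join(CACHE_DIR, `${cacheKey(params)}.json`),
      JSON.stringify(data),
    );
  } catch {
    // Caching failures must never break the actual request, so swallow errors.
  }
};

export { readCache, writeCache };
```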
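A fetcher can then wrap its existing GitHub API call roughly like so. `fetchStats` and `fetchStatsFromGitHub` are simplified placeholders here, not the project's real signatures:

```js
// Simplified, hypothetical integration inside a fetcher (e.g. the stats fetcher).
import { readCache, writeCache } from "../common/fileCache.js";

const fetchStats = async (username) => {
  const params = { type: "stats", username };

  // Serve a fresh cached copy from disk when one exists.
  const cached = await readCache(params);
  if (cached) return cached;

  // Otherwise hit the GitHub API as before, then persist the result.
  // fetchStatsFromGitHub stands in for the fetcher's existing API call.
  const stats = await fetchStatsFromGitHub(username);
  await writeCache(params, stats);
  return stats;
};
```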