Replies: 1 comment
For starters, it would be prudent to look at the utilization of the system on various levels (CPU, memory, disk I/O) while the history is being generated. Also try running the experiment with renamed-file handling turned off. I am not sure whether the Docker environment could play a role either. For the record, I played a bit with JGit tuning (#4729); however, the results were inconclusive.
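A minimal sketch of how that utilization could be watched during the history-cache phase; the container name "opengrok" and the five-second sampling interval are assumptions:

# Container-level view, assuming the indexer runs in a container named "opengrok":
docker stats opengrok

# Host-level view, sampled every 5 seconds (pidstat and iostat ship with sysstat):
pidstat -u -d 5    # per-process CPU and disk usage
iostat -xz 5       # per-device I/O utilization and queue sizes

For the renamed-file experiment, re-running the same indexer command with only --renamedHistory off changed and comparing the resulting history-cache times would isolate that cost.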
When I first indexed (customized) AOSP code with opengrok-indexer, the run took several hours longer with 64 CPUs than with 16 CPUs.
I tried changing the disk specifications, but the time to create the history cache remained almost the same.
I also ran the indexer with --historyThreads and --historyFileThreads set to 16, and the execution time was almost the same as with 16 vCPUs, so I believe the number of CPUs is the deciding factor (a condensed sketch with these thread counts pinned follows after the indexer options below).
I'd like to reduce the time it takes to create the history cache.
Could you please give me some advice on tuning methods and the optimal number of CPUs?
Usage environment, with times taken from the "Done history cache for all repositories" log message:
opengrok/docker:1.13.28
Indexer options:
opengrok-indexer \
    -a /opengrok/lib/opengrok.jar \
    -J=-Djava.util.logging.config.file=/opengrok/config/logging.properties.index \
    -J=-XX:MaxRAMPercentage=50 \
    -- -v \
    -s /opengrok/src \
    -d /opengrok/data \
    -W /opengrok/etc/configuration.xml \
    -U "$URI" \
    --renamedHistory on \
    --depth 50 \
    --nestingMaximum 50 \
    -c /usr/local/bin/ctags \
    -m "256" \
    -H \
    -P \
    -S \
    -G
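For reference, a condensed sketch of the same invocation with the history thread counts pinned and renamed-file handling switched off for an A/B comparison; the value 16 simply mirrors the experiment above and is illustrative, not a recommendation:

opengrok-indexer \
    -a /opengrok/lib/opengrok.jar \
    -J=-XX:MaxRAMPercentage=50 \
    -- -v \
    -s /opengrok/src \
    -d /opengrok/data \
    --historyThreads 16 \
    --historyFileThreads 16 \
    --renamedHistory off \
    -H -P -S -G
# (remaining options as in the full invocation above)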