Commit 967da6c

[GR-59202] Rename ML profile inference models in Native Image.
PullRequest: graal/20269
2 parents cf13e25 + 8d58b8c

File tree

1 file changed (+5 −6 lines)


docs/reference-manual/native-image/OptimizationsAndPerformance.md

Lines changed: 5 additions & 6 deletions
```diff
@@ -41,21 +41,20 @@ Find more information on this topic in [Basic Usage of Profile-Guided Optimizati
 ### ML-Powered Profile Inference for Enhanced Performance
 
 Native Image supports machine learning-driven static profiling, as a built-in capability.
-By default, GraalVM runs at the `-O2` optimization level, which uses the simple and fast XGBoost ML model for profile inference.
+By default, GraalVM runs at the `-O2` optimization level, which uses the simple and fast **Graal Static Profiler (GraalSP)** for profile inference.
 This model is optimized for a wide range of applications.
 
-As of GraalVM for JDK 24, the new Graph Neural Network (GNN) ML model can be used for profile inference, offering even better performance.
+As of GraalVM for JDK 24, the new **Graal Neural Network (GraalNN)** static profiler can be used for ML-powered profile inference, offering even better performance.
 Enable it by passing the `-O3` option to Native Image.
 
 > Note: Not available in GraalVM Community Edition.
 
-Note that if Profile-Guided Optimization (PGO) is enabled, ML inference is automatically disabled, as PGO utilizes high-quality profile data that makes additional ML inference unnecessary.
-Thus, passing the `--pgo` option will disable the ML inference feature.
+Note that if the user provides a [PGO profile](#profile-guided-optimization-for-improved-throughput) using the `--pgo` option, additional ML inference is unnecessary and therefore disabled automatically.
 
 Key Points:
 
-* **XGBoost ML model** (simple model) is used with `-O2` by default.
-* **GNN ML model** (advanced model) is used with `-O3` by default.
+* **GraalSP** (simple model) is used with `-O2` by default.
+* **GraalNN** (advanced model) is used with `-O3` by default.
 
 
 ### Optimizing for Specific Machines
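```

The options discussed in the renamed docs above can be exercised on the `native-image` command line roughly as follows. This is a hedged sketch: the jar name `myapp.jar`, the image name `myapp`, and the profile file `default.iprof` are hypothetical placeholders, and `-O3` (GraalNN) is not available in GraalVM Community Edition.

```shell
# Default build at -O2: profile inference uses the simple GraalSP static profiler.
native-image -O2 -jar myapp.jar myapp

# Opt into the GraalNN static profiler for better peak performance.
native-image -O3 -jar myapp.jar myapp

# Supplying a recorded PGO profile via --pgo disables ML profile inference,
# since real profile data supersedes the inferred profiles.
native-image --pgo=default.iprof -jar myapp.jar myapp
```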

0 commit comments