docs/reference-manual/native-image/OptimizationsAndPerformance.md

### ML-Powered Profile Inference for Enhanced Performance

Native Image supports machine learning-driven static profiling, as a built-in capability.
By default, GraalVM runs at the `-O2` optimization level, which uses the simple and fast **Graal Static Profiler (GraalSP)** for profile inference.
This model is optimized for a wide range of applications.

As of GraalVM for JDK 24, the new **Graal Neural Network (GraalNN)** static profiler can be used for ML-powered profile inference, offering even better performance.
Enable it by passing the `-O3` option to Native Image.
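For example, the optimization level, and with it the static profiler, is selected at build time (a minimal sketch; `MyApp` stands in for a hypothetical application main class):

```shell
# Default build at -O2: uses the GraalSP static profiler.
native-image -O2 -cp . MyApp

# Build at -O3: enables the GraalNN static profiler
# (GraalVM for JDK 24 or later; not available in Community Edition).
native-image -O3 -cp . MyApp
```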

> Note: Not available in GraalVM Community Edition.

Note that if the user provides a [PGO profile](#profile-guided-optimization-for-improved-throughput) using the `--pgo` option, additional ML inference is unnecessary and therefore disabled automatically.

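To illustrate, a typical PGO workflow (a sketch; `MyApp` and the workload run are hypothetical) first collects a profile with an instrumented build and then consumes it, at which point ML profile inference is switched off automatically:

```shell
# Step 1: build an instrumented binary that records a profile at run time.
native-image --pgo-instrument -cp . MyApp

# Step 2: run a representative workload; the profile is written on exit.
./myapp

# Step 3: build the optimized binary from the collected profile.
# ML profile inference is disabled automatically in this build.
native-image --pgo=default.iprof -cp . MyApp
```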
Key Points:

* **GraalSP** (simple model) is used with `-O2` by default.
* **GraalNN** (advanced model) is used with `-O3` by default.