'num_threads' has no effect on benchmark_model target #3360

@felix-johnny

Description

Hi!
I am switching from TFLite to LiteRT, and this issue relates to that migration. If you see any incorrect usage or have any tips, feel free to suggest them!

OS: macOS 15.5

Build Command: bazelisk build -c opt --config macos //litert/tools:benchmark_model --define xnn_enable_profiler=true --define tflite_with_xnnpack=true --define xnn_enable_arm_i8mm=true --define tflite_with_xnnpack_qs8=true --define tflite_with_xnnpack_qu8=true --repo_env=HERMETIC_PYTHON_VERSION=3.12

Execution 1: ./bazel-bin/litert/tools/benchmark_model --use_profiler=true --require_full_delegation=false --enable_op_profiling=true --num_threads=4 --graph=my_model.tflite

Execution 2: same as Execution 1, but with --num_threads=1

The inference times are identical in both runs, whereas with TFLite they differ as expected when the thread count changes.
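
For comparison, here is a minimal sketch of the behaviour I expect, using the standard TensorFlow Lite Python Interpreter API rather than the LiteRT benchmark_model tool (the model path and run count are just placeholders): varying num_threads should change the average inference latency.

```python
# Minimal sketch: compare average latency at different thread counts with the
# TFLite Python Interpreter (assumption: this mirrors what --num_threads should do
# in the benchmark tool).
import time
import tensorflow as tf

def average_latency_ms(num_threads, runs=50):
    interpreter = tf.lite.Interpreter(model_path="my_model.tflite",
                                      num_threads=num_threads)
    interpreter.allocate_tensors()
    interpreter.invoke()  # warm-up run
    start = time.perf_counter()
    for _ in range(runs):
        interpreter.invoke()
    return (time.perf_counter() - start) / runs * 1e3

for n in (1, 4):
    print(f"num_threads={n}: {average_latency_ms(n):.2f} ms per inference")
```

With TFLite this prints clearly different latencies for 1 vs. 4 threads on my machine, which is the behaviour I expected from the LiteRT benchmark_model --num_threads flag as well.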
