
Commit ced68e1

Add performance benchmark scripts for 4 use cases. (#1052)
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
1 parent bf5c391 commit ced68e1

12 files changed: +892 -0 lines changed

Lines changed: 77 additions & 0 deletions
@@ -0,0 +1,77 @@
# CodeGen Benchmarking

This folder contains a collection of scripts for inference benchmarking built on [GenAIEval](https://github.com/opea-project/GenAIEval/blob/main/evals/benchmark/README.md), a comprehensive benchmarking tool that provides throughput analysis for assessing inference performance.

By following this guide, you can run benchmarks on your deployment and share the results with the OPEA community.

## Purpose

We aim to run these benchmarks and share them with the OPEA community for three primary reasons:

- To offer insights on inference throughput in real-world scenarios, helping you choose the best service or deployment for your needs.
- To establish a baseline for validating optimization solutions across different implementations, providing clear guidance on which methods are most effective for your use case.
- To inspire the community to build upon our benchmarks, allowing us to better quantify new solutions in conjunction with current leading LLMs, serving frameworks, etc.

## Metrics

The benchmark reports the following metrics:

- Number of Concurrent Requests
- End-to-End Latency: P50, P90, P99 (in milliseconds)
- End-to-End First Token Latency: P50, P90, P99 (in milliseconds)
- Average Next Token Latency (in milliseconds)
- Average Token Latency (in milliseconds)
- Requests Per Second (RPS)
- Output Tokens Per Second
- Input Tokens Per Second

Results are displayed in the terminal and saved as a CSV file named `1_testspec.yaml`.

## Getting Started

We recommend using Kubernetes to deploy the CodeGen service, as it offers benefits such as load balancing and improved scalability. However, you can also deploy the service using Docker if that better suits your needs.

### Prerequisites
- Install Kubernetes by following [this guide](https://github.com/opea-project/docs/blob/main/guide/installation/k8s_install/k8s_install_kubespray.md).
- Ensure every node has direct internet access.
- Set up kubectl on the master node with access to the Kubernetes cluster.
- Install Python 3.8+ on the master node for running GenAIEval.
- Ensure all nodes have a local /mnt/models folder, which will be mounted by the pods.
- Ensure that the container's ulimit can accommodate the expected number of concurrent requests.

```bash
# Modify the containerd ulimit:
sudo systemctl edit containerd
# Add the following two lines:
[Service]
LimitNOFILE=65536:1048576

sudo systemctl daemon-reload; sudo systemctl restart containerd
```

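As a quick sanity check on the prerequisites above, you can prepare the model folder on each node and verify that a running pod inherits the raised file-descriptor limit (a minimal sketch; `<pod-name>` is a placeholder and the check assumes the container image ships a POSIX shell):

```bash
# Create the model cache folder (run this on every node).
sudo mkdir -p /mnt/models

# After restarting containerd, verify the soft open-file limit inside a pod.
kubectl exec <pod-name> -- sh -c 'ulimit -n'   # typically 65536 with the override above
```
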
### Test Steps

Please deploy the CodeGen service before benchmarking.

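Before starting the test, it is worth confirming that the CodeGen pods and services are up. The service names below are illustrative and should match your deployment (they correspond to the names referenced in `benchmark.yaml`):

```bash
# Confirm the deployment is healthy before benchmarking.
kubectl get pods
kubectl get svc codegen-backend-svc llm-svc llm-dependency-svc
```
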
##### Run Benchmark Test

Before running the benchmark, configure the number of test queries and the test output directory:

```bash
export USER_QUERIES="[128, 128, 128, 128]"
export TEST_OUTPUT_DIR="/tmp/benchmark_output"
```

Then run the benchmark:

```bash
bash benchmark.sh -n <node_count>
```

The argument `-n` refers to the number of test nodes.

##### Data Collection

All test results will be saved in the folder `/tmp/benchmark_output`, as configured by the environment variable `TEST_OUTPUT_DIR` in the previous steps.
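To browse the collected results (the exact directory layout depends on the GenAIEval version):

```bash
# List everything the benchmark wrote, including the CSV result files.
ls -R "$TEST_OUTPUT_DIR"
```
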
Lines changed: 99 additions & 0 deletions
@@ -0,0 +1,99 @@
#!/bin/bash

# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

# Default settings; override with -d/-n/-i/-p.
deployment_type="k8s"
node_number=1
service_port=7778
query_per_node=128

benchmark_tool_path="$(pwd)/GenAIEval"

usage() {
    echo "Usage: $0 [-d deployment_type] [-n node_number] [-i service_ip] [-p service_port]"
    echo "  -d deployment_type  deployment type, select between k8s and docker (default: ${deployment_type})"
    echo "  -n node_number      test node number, required only for k8s deployment_type (default: ${node_number})"
    echo "  -i service_ip       service ip, required only for docker deployment_type"
    echo "  -p service_port     service port, required only for docker deployment_type (default: ${service_port})"
    exit 1
}

# Parse command-line options.
while getopts ":d:n:i:p:" opt; do
    case ${opt} in
        d )
            deployment_type=$OPTARG
            ;;
        n )
            node_number=$OPTARG
            ;;
        i )
            service_ip=$OPTARG
            ;;
        p )
            service_port=$OPTARG
            ;;
        \? )
            echo "Invalid option: -$OPTARG" 1>&2
            usage
            ;;
        : )
            echo "Invalid option: -$OPTARG requires an argument" 1>&2
            usage
            ;;
    esac
done

if [[ "$deployment_type" == "docker" && -z "$service_ip" ]]; then
    echo "Error: service_ip is required for docker deployment_type" 1>&2
    usage
fi

if [[ "$deployment_type" == "k8s" && ( -n "$service_ip" || -n "$service_port" ) ]]; then
    echo "Warning: service_ip and service_port are ignored for k8s deployment_type" 1>&2
fi

function main() {
    if [[ ! -d ${benchmark_tool_path} ]]; then
        echo "Benchmark tool not found, setting up..."
        setup_env
    fi
    run_benchmark
}

function setup_env() {
    # Clone GenAIEval and install its dependencies into a dedicated virtualenv.
    git clone https://github.com/opea-project/GenAIEval.git
    pushd ${benchmark_tool_path}
    python3 -m venv stress_venv
    source stress_venv/bin/activate
    pip install -r requirements.txt
    popd
}

function run_benchmark() {
    source ${benchmark_tool_path}/stress_venv/bin/activate
    export DEPLOYMENT_TYPE=${deployment_type}
    export SERVICE_IP=${service_ip:-"None"}
    export SERVICE_PORT=${service_port:-"None"}
    if [[ -z $USER_QUERIES ]]; then
        user_query=$((query_per_node*node_number))
        export USER_QUERIES="[${user_query}, ${user_query}, ${user_query}, ${user_query}]"
        echo "USER_QUERIES not configured, setting to: ${USER_QUERIES}."
    fi
    # Use the first entry of USER_QUERIES as the warm-up request count.
    export WARMUP=$(echo $USER_QUERIES | sed -e 's/[][]//g' -e 's/,.*//')
    if [[ -z $WARMUP ]]; then export WARMUP=0; fi
    if [[ -z $TEST_OUTPUT_DIR ]]; then
        if [[ $DEPLOYMENT_TYPE == "k8s" ]]; then
            export TEST_OUTPUT_DIR="${benchmark_tool_path}/evals/benchmark/benchmark_output/node_${node_number}"
        else
            export TEST_OUTPUT_DIR="${benchmark_tool_path}/evals/benchmark/benchmark_output/docker"
        fi
        echo "TEST_OUTPUT_DIR not configured, setting to: ${TEST_OUTPUT_DIR}."
    fi

    # Render the benchmark spec with the exported variables and launch GenAIEval.
    envsubst < ./benchmark.yaml > ${benchmark_tool_path}/evals/benchmark/benchmark.yaml
    cd ${benchmark_tool_path}/evals/benchmark
    python benchmark.py
}

main
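
For reference, here are two example invocations of the script above (the IP, port, and node count are placeholder values):

```bash
# Kubernetes deployment, benchmarking across 2 test nodes.
bash benchmark.sh -n 2

# Docker deployment; the service IP is required, the port defaults to 7778.
bash benchmark.sh -d docker -i 192.168.1.10 -p 7778
```
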
Lines changed: 47 additions & 0 deletions
@@ -0,0 +1,47 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

test_suite_config: # Overall configuration settings for the test suite
  examples: ["codegen"] # The specific test cases being tested, e.g., chatqna, codegen, codetrans, faqgen, audioqna, visualqna
  deployment_type: "k8s" # Default is "k8s", can also be "docker"
  service_ip: None # Leave as None for k8s, specify for Docker
  service_port: None # Leave as None for k8s, specify for Docker
  warm_ups: 0 # Number of test requests for warm-up
  run_time: 60m # The max total run time for the test suite
  seed: # The seed for all RNGs
  user_queries: [1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048] # Number of test requests at each concurrency level
  query_timeout: 120 # Number of seconds to wait for a simulated user to complete any executing task before exiting. 120 sec by default.
  random_prompt: false # Use random prompts if true, fixed prompts if false
  collect_service_metric: false # Collect service metrics if true, do not collect service metrics if false
  data_visualization: false # Generate data visualization if true, do not generate data visualization if false
  llm_model: "Qwen/CodeQwen1.5-7B-Chat" # The LLM model used for the test
  test_output_dir: "/tmp/benchmark_output" # The directory to store the test output
  load_shape: # Tenant concurrency pattern
    name: constant # poisson or constant (locust default load shape)
    params: # Load-shape-specific parameters
      constant: # Constant load shape specific parameters, activate only if load_shape.name is constant
        concurrent_level: 4 # If user_queries is specified, concurrent_level is the target number of requests per user; otherwise, it is the number of simulated users
        # arrival_rate: 1.0 # Request arrival rate. If set, concurrent_level is overridden and a constant load is generated based on the arrival rate
      poisson: # Poisson load shape specific parameters, activate only if load_shape.name is poisson
        arrival_rate: 1.0 # Request arrival rate
  namespace: "" # Specify a user-defined namespace; otherwise, the default namespace is used

test_cases:
  codegen:
    llm:
      run_test: true
      service_name: "llm-dependency-svc" # Replace with your service name
      parameters:
        model_name: "Qwen/CodeQwen1.5-7B-Chat"
        max_new_tokens: 128
        temperature: 0.01
        top_k: 10
        top_p: 0.95
        repetition_penalty: 1.03
        streaming: true
    llmserve:
      run_test: true
      service_name: "llm-svc" # Replace with your service name
    e2e:
      run_test: true
      service_name: "codegen-backend-svc" # Replace with your service name
Lines changed: 77 additions & 0 deletions
@@ -0,0 +1,77 @@
# CodeTrans Benchmarking

This folder contains a collection of scripts for inference benchmarking built on [GenAIEval](https://github.com/opea-project/GenAIEval/blob/main/evals/benchmark/README.md), a comprehensive benchmarking tool that provides throughput analysis for assessing inference performance.

By following this guide, you can run benchmarks on your deployment and share the results with the OPEA community.

## Purpose

We aim to run these benchmarks and share them with the OPEA community for three primary reasons:

- To offer insights on inference throughput in real-world scenarios, helping you choose the best service or deployment for your needs.
- To establish a baseline for validating optimization solutions across different implementations, providing clear guidance on which methods are most effective for your use case.
- To inspire the community to build upon our benchmarks, allowing us to better quantify new solutions in conjunction with current leading LLMs, serving frameworks, etc.

## Metrics

The benchmark reports the following metrics:

- Number of Concurrent Requests
- End-to-End Latency: P50, P90, P99 (in milliseconds)
- End-to-End First Token Latency: P50, P90, P99 (in milliseconds)
- Average Next Token Latency (in milliseconds)
- Average Token Latency (in milliseconds)
- Requests Per Second (RPS)
- Output Tokens Per Second
- Input Tokens Per Second

Results are displayed in the terminal and saved as a CSV file named `1_testspec.yaml`.

## Getting Started

We recommend using Kubernetes to deploy the CodeTrans service, as it offers benefits such as load balancing and improved scalability. However, you can also deploy the service using Docker if that better suits your needs.

### Prerequisites
- Install Kubernetes by following [this guide](https://github.com/opea-project/docs/blob/main/guide/installation/k8s_install/k8s_install_kubespray.md).
- Ensure every node has direct internet access.
- Set up kubectl on the master node with access to the Kubernetes cluster.
- Install Python 3.8+ on the master node for running GenAIEval.
- Ensure all nodes have a local /mnt/models folder, which will be mounted by the pods.
- Ensure that the container's ulimit can accommodate the expected number of concurrent requests.

```bash
# Modify the containerd ulimit:
sudo systemctl edit containerd
# Add the following two lines:
[Service]
LimitNOFILE=65536:1048576

sudo systemctl daemon-reload; sudo systemctl restart containerd
```

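To confirm that containerd picked up the override after the restart, you can query its effective limits with standard systemd tooling:

```bash
# Show the file-descriptor limits containerd is currently running with.
systemctl show containerd | grep -i nofile
```
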
### Test Steps

Please deploy the CodeTrans service before benchmarking.

##### Run Benchmark Test

Before running the benchmark, configure the number of test queries and the test output directory:

```bash
export USER_QUERIES="[1, 1, 1, 1]"
export TEST_OUTPUT_DIR="/tmp/benchmark_output"
```

Then run the benchmark:

```bash
bash benchmark.sh -n <node_count>
```

The argument `-n` refers to the number of test nodes.

##### Data Collection

All test results will be saved in the folder `/tmp/benchmark_output`, as configured by the environment variable `TEST_OUTPUT_DIR` in the previous steps.
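Once a run completes, the metrics CSV described in the Metrics section should appear under the output directory; for example (exact filenames may vary by GenAIEval version):

```bash
find "$TEST_OUTPUT_DIR" -name "*testspec*"
```
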
