Commit 427293c

Support for AMD EPYC via Docker Containers (#2083)
Signed-off-by: Jereshea J M <[email protected]> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
1 parent c66a2f0 commit 427293c

89 files changed: +14304 -0 lines changed
Lines changed: 271 additions & 0 deletions
# Deploying AudioQnA on AMD EPYC™ Processors

This document provides a step-by-step guide for deploying the AudioQnA application on a single node, leveraging the [GenAIComps](https://github.com/opea-project/GenAIComps.git) microservices optimized for AMD EPYC™ processors. The process covers pulling Docker images, deploying containers with Docker Compose, and validating the running services, including the LLM microservice.

Note: The default LLM is `meta-llama/Meta-Llama-3-8B-Instruct`. Before deploying the application, make sure you have either requested and been granted access to it on [Hugging Face](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) or downloaded the model locally from [ModelScope](https://www.modelscope.cn/models).

## Table of Contents

1. [AudioQnA Quick Start Deployment](#audioqna-quick-start-deployment)
2. [AudioQnA Docker Compose Files](#audioqna-docker-compose-files)
3. [Validate Microservices](#validate-microservices)
4. [Conclusion](#conclusion)
## AudioQnA Quick Start Deployment

This section describes how to quickly deploy and test the AudioQnA service manually on an AMD EPYC™ processor. The basic steps are:

1. [Access the Code](#access-the-code)
2. [Install Docker](#install-docker)
3. [Determine your host's external IP address](#determine-your-host-external-ip-address)
4. [Configure the Deployment Environment](#configure-the-deployment-environment)
5. [Deploy the Services Using Docker Compose](#deploy-the-services-using-docker-compose)
6. [Check the Deployment Status](#check-the-deployment-status)
7. [Validate the Pipeline](#validate-the-pipeline)
8. [Cleanup the Deployment](#cleanup-the-deployment)
### Access the Code

Clone the GenAIExamples repository and navigate to the AudioQnA AMD EPYC™ platform Docker Compose files and supporting scripts:

```bash
git clone https://github.com/opea-project/GenAIExamples.git
cd GenAIExamples/AudioQnA/docker_compose/amd/cpu/epyc
```
### Install Docker

Ensure Docker is installed on your system. If Docker is not already installed, use the provided script to set it up:

```bash
source ./install_docker.sh
```

This script installs Docker and its dependencies. After running it, verify the installation by checking the Docker version:

```bash
docker --version
```

If Docker is already installed, this step can be skipped.
### Determine your host's external IP address

Run the following command in your terminal to list network interfaces:

```bash
ifconfig
```

Look for the `inet` address associated with your active network interface (e.g., `enp99s0`). For example:

```
enp99s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.101.16.119  netmask 255.255.255.0  broadcast 10.101.16.255
```

In this example, the `host_ip` would be `10.101.16.119`.

```bash
# Replace with your host's external IP address
export host_ip="your_external_ip_address"
```
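If you prefer not to read the address off `ifconfig` output by hand, the value can be derived automatically. The sketch below is an illustrative helper, not part of the repository's scripts; it assumes a Linux host where `hostname -I` prints the machine's addresses separated by spaces, and the `first_ipv4` name is ours:

```shell
#!/bin/sh
# Hypothetical helper: take a space-separated address list (e.g. the output
# of `hostname -I`) and print the first entry.
first_ipv4() {
  printf '%s\n' "$1" | awk '{print $1}'
}

host_ip=$(first_ipv4 "$(hostname -I)")
export host_ip
echo "host_ip=${host_ip}"
```

Verify the printed address matches the interface you intend to expose before continuing; on multi-homed hosts the first address may not be the right one.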
### Configure the Deployment Environment

By default, models are cached in the `./data` directory. To store them elsewhere, set the `model_cache` variable:

```bash
# Optional
export model_cache=/home/documentation/data_audioqna/data # Path to save cached models
```

To set up the environment variables for deploying the AudioQnA services, set the parameters specific to your deployment environment and source the `set_env.sh` script in this directory:
```bash
export HF_TOKEN="Your_HuggingFace_API_Token"
export http_proxy="Your_HTTP_Proxy" # http proxy if any
export https_proxy="Your_HTTPs_Proxy" # https proxy if any
export no_proxy=localhost,127.0.0.1,$host_ip,whisper-service,speecht5-service,vllm-service,tgi-service,audioqna-epyc-backend-server,audioqna-epyc-ui-server # additional no proxies if needed
export NGINX_PORT=${your_nginx_port} # your usable port for nginx, 80 for example
source ./set_env.sh
```
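A quick sanity check before deploying can catch an unset variable early rather than as a confusing container failure later. The helper below is an illustrative sketch; the `require_set` name and the variable list are our assumptions based on the steps above, not part of `set_env.sh`:

```shell
#!/bin/sh
# Illustrative check: fail fast if any required variable is unset or empty.
require_set() {
  for v in "$@"; do
    eval "val=\${$v}"
    if [ -z "$val" ]; then
      echo "Missing required variable: $v" >&2
      return 1
    fi
  done
  return 0
}

# Example usage after sourcing set_env.sh:
# require_set host_ip HF_TOKEN && echo "environment looks complete"
```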
### Deploy the Services Using Docker Compose

To deploy the AudioQnA services, execute the `docker compose up` command with the appropriate arguments. For a default deployment, execute the command below, which uses the `compose.yaml` file:

```bash
docker compose -f compose.yaml up -d
```

> **Note**: Developers should build the Docker images from source when:
>
> - Developing off the git main branch (as the container's ports in the repo may differ from those in the published Docker image).
> - Unable to download the Docker image.
> - Using a specific version of the Docker image.
Please refer to the table below to build different microservices from source:

| Microservice | Deployment Guide |
| ------------ | --------------------------------------------------------------------------------------------------------------------------------- |
| vLLM | [vLLM build guide](https://github.com/opea-project/GenAIComps/tree/main/comps/third_parties/vllm#build-docker) |
| LLM | [LLM build guide](https://github.com/opea-project/GenAIComps/tree/main/comps/llms) |
| WHISPER | [Whisper build guide](https://github.com/opea-project/GenAIComps/tree/main/comps/asr/src#211-whisper-server-image) |
| SPEECHT5 | [SpeechT5 build guide](https://github.com/opea-project/GenAIComps/tree/main/comps/tts/src#211-speecht5-server-image) |
| GPT-SOVITS | [GPT-SOVITS build guide](https://github.com/opea-project/GenAIComps/tree/main/comps/third_parties/gpt-sovits/src#build-the-image) |
| MegaService | [MegaService build guide](../../../../README_miscellaneous.md#build-megaservice-docker-image) |
| UI | [Basic UI build guide](../../../../README_miscellaneous.md#build-ui-docker-image) |
### Check the Deployment Status

After running `docker compose`, check whether all the containers launched via Docker Compose have started:

```bash
docker ps -a
```

For the default deployment, the following 5 containers should be running:

```
1c67e44c39d2   opea/audioqna-ui:latest   "docker-entrypoint.s…"   About a minute ago   Up About a minute             0.0.0.0:5173->5173/tcp, :::5173->5173/tcp   audioqna-epyc-ui-server
833a42677247   opea/audioqna:latest      "python audioqna.py"     About a minute ago   Up About a minute             0.0.0.0:3008->8888/tcp, :::3008->8888/tcp   audioqna-epyc-backend-server
5dc4eb9bf499   opea/speecht5:latest      "python speecht5_ser…"   About a minute ago   Up About a minute             0.0.0.0:7055->7055/tcp, :::7055->7055/tcp   speecht5-service
814e6efb1166   opea/vllm:latest          "python3 -m vllm.ent…"   About a minute ago   Up About a minute (healthy)   0.0.0.0:3006->80/tcp, :::3006->80/tcp       vllm-service
46f7a00f4612   opea/whisper:latest       "python whisper_serv…"   About a minute ago   Up About a minute             0.0.0.0:7066->7066/tcp, :::7066->7066/tcp   whisper-service
```

If any issues are encountered during deployment, refer to the [Troubleshooting](../../../../README_miscellaneous.md#troubleshooting) section.
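Because the LLM container takes a while to download and warm up the model, repeated manual `docker ps` checks can be replaced by a small polling loop. This is a hedged sketch: it assumes the container defines a Docker healthcheck (as the compose file in this directory does for `vllm-service`), and the `wait_for_status` name is ours:

```shell
#!/bin/sh
# Generic poller: run a status command until it prints the desired value,
# or give up after a number of attempts.
wait_for_status() {
  cmd=$1; want=$2; tries=${3:-60}; delay=${4:-5}
  i=0
  while [ "$i" -lt "$tries" ]; do
    [ "$($cmd)" = "$want" ] && return 0
    i=$((i + 1))
    sleep "$delay"
  done
  return 1
}

# Example usage (assumes Docker is available and the stack is up):
# wait_for_status "docker inspect --format {{.State.Health.Status}} vllm-service" healthy 120 5 \
#   && echo "vllm-service is healthy"
```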
### Validate the Pipeline

Once the AudioQnA services are running, test the pipeline using the following command:

```bash
# Test the AudioQnA megaservice by recording a .wav file, encoding the file into base64 format, and then sending the base64 string to the megaservice endpoint.
# The megaservice will return a spoken response as a base64 string. To listen to the response, decode the base64 string and save it as a .wav file.
wget https://github.com/intel/intel-extension-for-transformers/raw/refs/heads/main/intel_extension_for_transformers/neural_chat/assets/audio/sample_2.wav
base64_audio=$(base64 -w 0 sample_2.wav)

# if you are using speecht5 as the tts service, voice can be "default" or "male"
# if you are using gpt-sovits for the tts service, you can set the reference audio following https://github.com/opea-project/GenAIComps/blob/main/comps/third_parties/gpt-sovits/src/README.md

curl http://${host_ip}:3008/v1/audioqna \
  -X POST \
  -H "Content-Type: application/json" \
  -d "{\"audio\": \"${base64_audio}\", \"max_tokens\": 64, \"voice\": \"default\"}" \
  | sed 's/^"//;s/"$//' | base64 -d > output.wav
```

**Note**: Access the AudioQnA UI in a web browser at `http://${host_ip}:5173`. Please confirm that port `5173` is open in the firewall. To validate each microservice used in the pipeline, refer to the [Validate Microservices](#validate-microservices) section.
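The base64 round trip above can be exercised in isolation, which is useful when debugging a malformed request body. This toy sketch uses a throwaway file rather than real audio (the `/tmp/sample_payload.bin` path is ours) and assumes GNU `base64`, as the guide's `-w 0` flag does:

```shell
#!/bin/sh
# Toy round trip: encode bytes to a single-line base64 string (as done for
# sample_2.wav above), embed it in the JSON request body, then decode it back.
printf 'hello audio' > /tmp/sample_payload.bin
base64_audio=$(base64 -w 0 /tmp/sample_payload.bin)
body="{\"audio\": \"${base64_audio}\", \"max_tokens\": 64, \"voice\": \"default\"}"
echo "$body"

# Decoding reverses the encoding step exactly:
decoded=$(printf '%s' "$base64_audio" | base64 -d)
echo "$decoded"   # prints: hello audio
```

If the real request fails, printing `$body` this way makes it easy to confirm the JSON is well formed before blaming the service.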
### Cleanup the Deployment

To stop the containers associated with the deployment, execute the following command:

```bash
docker compose -f compose.yaml down
```
## AudioQnA Docker Compose Files

When deploying an AudioQnA pipeline on an AMD EPYC™ platform, users can select from various large language model serving frameworks, or opt for the single-language English TTS option. The table below highlights the available configurations included in the application. These configurations serve as templates and can be extended to incorporate additional components from [GenAIComps](https://github.com/opea-project/GenAIComps.git).

| File                                   | Description                                                                                |
| -------------------------------------- | ------------------------------------------------------------------------------------------ |
| [compose.yaml](./compose.yaml)         | Default compose file using vLLM as the LLM serving framework                                |
| [compose_tgi.yaml](./compose_tgi.yaml) | The LLM serving framework is TGI. All other configurations remain the same as the default  |
## Validate Microservices

1. Whisper Service

   ```bash
   wget https://github.com/intel/intel-extension-for-transformers/raw/main/intel_extension_for_transformers/neural_chat/assets/audio/sample.wav
   curl http://${host_ip}:${WHISPER_SERVER_PORT}/v1/audio/transcriptions \
     -H "Content-Type: multipart/form-data" \
     -F file="@./sample.wav" \
     -F model="openai/whisper-small"
   ```
2. LLM backend Service

   During the initial startup, the service requires additional time to download, load, and warm up the model. Once this process is complete, the service will be ready, and the container (either `vllm-service` or `tgi-service`) will display a `healthy` status when viewed using `docker ps`. Prior to this, the status will appear as `health: starting`.

   Alternatively, try the commands below to check whether the LLM serving is ready.

   ```bash
   # vLLM service
   docker logs vllm-service 2>&1 | grep complete
   # If the service is ready, you will see a response like the one below.
   INFO: Application startup complete.
   ```

   ```bash
   # TGI service
   docker logs tgi-service | grep Connected
   # If the service is ready, you will see a response like the one below.
   2024-09-03T02:47:53.402023Z INFO text_generation_router::server: router/src/server.rs:2311: Connected
   ```

   Then try the `cURL` command below to validate the service.

   ```bash
   # either vLLM or TGI service
   curl http://${host_ip}:${LLM_SERVER_PORT}/v1/chat/completions \
     -X POST \
     -d '{"model": "meta-llama/Meta-Llama-3-8B-Instruct", "messages": [{"role": "user", "content": "What is Deep Learning?"}], "max_tokens":17}' \
     -H 'Content-Type: application/json'
   ```
3. TTS Service

   ```bash
   # speecht5 service
   curl http://${host_ip}:${SPEECHT5_SERVER_PORT}/v1/audio/speech -XPOST -d '{"input": "Who are you?"}' -H 'Content-Type: application/json' --output speech.mp3

   # gpt-sovits service (optional)
   curl http://${host_ip}:${GPT_SOVITS_SERVER_PORT}/v1/audio/speech -XPOST -d '{"input": "Who are you?"}' -H 'Content-Type: application/json' --output speech.mp3
   ```
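When the TTS service misbehaves it often returns a JSON error body, which still gets written to the output file by `--output`. The sketch below is a toy sanity check, not part of the repository: it assumes the response should be binary audio whose first bytes carry a common magic header (RIFF for WAV, ID3 for MP3), and the `check_audio` name is ours:

```shell
#!/bin/sh
# Toy check: does a response file start with a common audio magic header,
# or does it look like a text/JSON error body?
check_audio() {
  magic=$(head -c 4 "$1")
  case "$magic" in
    RIFF*|ID3*) echo "looks like audio" ;;
    *) echo "not audio" ;;
  esac
}

# Example: check_audio speech.mp3
```

Some valid MP3 streams begin with a raw frame sync rather than an ID3 tag, so treat a "not audio" result as a prompt to inspect the file, not proof of failure.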
### Profile Microservices

To further analyze microservice performance, users can follow the instructions below to profile the microservices.

#### 1. vLLM backend Service

Users can follow the previous section to test the vLLM microservice or the AudioQnA MegaService. By default, vLLM profiling is not enabled. Users can start and stop profiling with the following commands.
##### Start vLLM profiling

```bash
curl http://${host_ip}:${LLM_SERVER_PORT}/start_profile \
  -X POST \
  -d '{"model": "meta-llama/Meta-Llama-3-8B-Instruct"}' \
  -H 'Content-Type: application/json'
```

After vLLM profiling is started, users can ask questions and get responses from the vLLM microservice:

```bash
curl http://${host_ip}:${LLM_SERVER_PORT}/v1/chat/completions \
  -X POST \
  -d '{"model": "meta-llama/Meta-Llama-3-8B-Instruct", "messages": [{"role": "user", "content": "What is Deep Learning?"}], "max_tokens":17}' \
  -H 'Content-Type: application/json'
```
##### Stop vLLM profiling

With the following command, users can stop vLLM profiling and generate a `*.pt.trace.json.gz` file as the profiling result under the `/mnt` folder in the `vllm-service` Docker instance.

```bash
curl http://${host_ip}:${LLM_SERVER_PORT}/stop_profile \
  -X POST \
  -d '{"model": "meta-llama/Meta-Llama-3-8B-Instruct"}' \
  -H 'Content-Type: application/json'
```

After vLLM profiling is stopped, users can use the command below to copy the `*.pt.trace.json.gz` file from the `/mnt` folder:

```bash
docker cp vllm-service:/mnt/ .
```
##### Check profiling result

Open a web browser, navigate to `chrome://tracing` or `https://ui.perfetto.dev`, and then load the `json.gz` file.

## Conclusion

This guide should enable developers to deploy the default configuration or any of the other compose files for different configurations. It also highlights the configurable parameters that can be set before deployment.
Lines changed: 96 additions & 0 deletions
# Copyright (C) 2025 Advanced Micro Devices, Inc.
# Copyright (C) 2025 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

services:
  whisper-service:
    image: ${REGISTRY:-opea}/whisper:${TAG:-latest}
    container_name: whisper-service
    ports:
      - ${WHISPER_SERVER_PORT:-7066}:7066
    ipc: host
    environment:
      no_proxy: ${no_proxy}
      http_proxy: ${http_proxy}
      https_proxy: ${https_proxy}
    restart: unless-stopped
  speecht5-service:
    image: ${REGISTRY:-opea}/speecht5:${TAG:-latest}
    container_name: speecht5-service
    ports:
      - ${SPEECHT5_SERVER_PORT:-7055}:7055
    ipc: host
    environment:
      no_proxy: ${no_proxy}
      http_proxy: ${http_proxy}
      https_proxy: ${https_proxy}
    restart: unless-stopped
  vllm-service:
    image: ${REGISTRY:-opea}/vllm:${TAG:-latest}
    container_name: vllm-service
    ports:
      - ${LLM_SERVER_PORT:-3006}:80
    volumes:
      - "${MODEL_CACHE:-./data}:/root/.cache/huggingface/hub"
    shm_size: 128g
    environment:
      no_proxy: ${no_proxy}
      http_proxy: ${http_proxy}
      https_proxy: ${https_proxy}
      HF_TOKEN: ${HF_TOKEN}
      LLM_MODEL_ID: ${LLM_MODEL_ID}
      MODEL_CACHE: ${MODEL_CACHE}
      VLLM_TORCH_PROFILER_DIR: "/mnt"
      VLLM_CPU_KVCACHE_SPACE: 40
      LLM_SERVER_PORT: ${LLM_SERVER_PORT}
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -f http://$host_ip:${LLM_SERVER_PORT}/health || exit 1",
        ]
      interval: 10s
      timeout: 10s
      retries: 100
    command: --model ${LLM_MODEL_ID} --host 0.0.0.0 --port 80
  audioqna-epyc-backend-server:
    image: ${REGISTRY:-opea}/audioqna:${TAG:-latest}
    container_name: audioqna-epyc-backend-server
    depends_on:
      - whisper-service
      - vllm-service
      - speecht5-service
    ports:
      - "3008:8888"
    environment:
      - no_proxy=${no_proxy}
      - https_proxy=${https_proxy}
      - http_proxy=${http_proxy}
      - MEGA_SERVICE_HOST_IP=${MEGA_SERVICE_HOST_IP}
      - WHISPER_SERVER_HOST_IP=${WHISPER_SERVER_HOST_IP}
      - WHISPER_SERVER_PORT=${WHISPER_SERVER_PORT}
      - LLM_SERVER_HOST_IP=${LLM_SERVER_HOST_IP}
      - LLM_SERVER_PORT=${LLM_SERVER_PORT}
      - LLM_MODEL_ID=${LLM_MODEL_ID}
      - SPEECHT5_SERVER_HOST_IP=${SPEECHT5_SERVER_HOST_IP}
      - SPEECHT5_SERVER_PORT=${SPEECHT5_SERVER_PORT}
    ipc: host
    restart: always
  audioqna-epyc-ui-server:
    image: ${REGISTRY:-opea}/audioqna-ui:${TAG:-latest}
    container_name: audioqna-epyc-ui-server
    depends_on:
      - audioqna-epyc-backend-server
    ports:
      - "5173:5173"
    environment:
      - no_proxy=${no_proxy}
      - https_proxy=${https_proxy}
      - http_proxy=${http_proxy}
      - CHAT_URL=${BACKEND_SERVICE_ENDPOINT}
    ipc: host
    restart: always

networks:
  default:
    driver: bridge
