Commit 9417e3b

Fix workexec agent docker build issues and enable LLM Remote Endpoint (#2103)
Signed-off-by: Tsai, Louie <[email protected]>
1 parent 33119a7 commit 9417e3b

File tree

4 files changed, +24 -4 lines changed


WorkflowExecAgent/docker_compose/intel/cpu/xeon/README.md

Lines changed: 18 additions & 0 deletions

@@ -60,6 +60,24 @@ export temperature=0
 export max_new_tokens=1000
 ```
 
+<details>
+<summary> Using Remote LLM Endpoints </summary>
+When models are deployed on a remote server, a base URL and an API key are required to access them. To set up a remote server and acquire the base URL and API key, refer to <a href="https://www.intel.com/content/www/us/en/developer/topic-technology/artificial-intelligence/enterprise-inference.html"> Intel® AI for Enterprise Inference </a> offerings.
+
+Set the following environment variables.
+
+- `llm_endpoint_url` is the HTTPS endpoint of the remote server hosting the model of choice (e.g. https://api.inference.denvrdata.com). **Note:** If not using LiteLLM, the second part of the model card needs to be appended to the URL, e.g. `/Llama-3.3-70B-Instruct` from `meta-llama/Llama-3.3-70B-Instruct`.
+- `llm_endpoint_api_key` is the access token or key used to access the model(s) on the server.
+- `LLM_MODEL_ID` is the model card, which may need to be overridden depending on what it is set to in `set_env.sh`.
+
+```bash
+export llm_endpoint_url=<https-endpoint-of-remote-server>
+export llm_endpoint_api_key=<your-api-key>
+export LLM_MODEL_ID=<model-card>
+```
+
+</details>
+
 ### Deploy the Services Using Docker Compose
 
 For an out-of-the-box experience, this guide uses an example workflow serving API service. There are 3 services needed for the setup: the agent microservice, an LLM inference service, and the workflow serving API.
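The README addition above presumes a remote server reachable with a base URL and API key. Assuming that server exposes an OpenAI-compatible API, the exported variables can be smoke-tested directly before wiring up the agent; the route and payload below are illustrative and not part of the commit:

```bash
# Minimal sketch: exercise the remote endpoint with the variables exported above.
# Assumes an OpenAI-compatible /v1/chat/completions route and Bearer-token auth.
curl -sS "${llm_endpoint_url}/v1/chat/completions" \
  -H "Authorization: Bearer ${llm_endpoint_api_key}" \
  -H "Content-Type: application/json" \
  -d "{\"model\": \"${LLM_MODEL_ID}\", \"messages\": [{\"role\": \"user\", \"content\": \"ping\"}], \"max_tokens\": 8}"
```

A failing response here usually points at a wrong base URL (see the LiteLLM note above) or an invalid API key, which is cheaper to catch before running Docker Compose.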

WorkflowExecAgent/docker_compose/intel/cpu/xeon/compose_vllm.yaml

Lines changed: 1 addition & 0 deletions

@@ -17,6 +17,7 @@ services:
       recursion_limit: ${recursion_limit}
       llm_engine: ${llm_engine}
       llm_endpoint_url: ${llm_endpoint_url}
+      api_key: ${llm_endpoint_api_key}
       model: ${model}
       temperature: ${temperature}
       max_new_tokens: ${max_new_tokens}
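The compose change threads `llm_endpoint_api_key` through to the agent container as `api_key`. As a sketch (assuming Docker Compose v2 is available), the substitution can be checked before bringing the stack up:

```bash
# Render the compose file with the current environment and confirm the
# api_key entry resolves; an empty value is expected for local vLLM runs.
export llm_endpoint_api_key=<your-api-key>
docker compose -f compose_vllm.yaml config | grep "api_key"
```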

WorkflowExecAgent/tests/2_start_vllm_service.sh

Lines changed: 4 additions & 4 deletions

@@ -38,7 +38,7 @@ function build_vllm_docker_image() {
 function start_vllm_service() {
     echo "start vllm service"
     export VLLM_SKIP_WARMUP=true
-    docker run -d -p ${vllm_port}:${vllm_port} --rm --network=host --name test-comps-vllm-service -v ~/.cache/huggingface:/root/.cache/huggingface -v ${WORKPATH}/tests/tool_chat_template_mistral_custom.jinja:/root/tool_chat_template_mistral_custom.jinja -e HF_TOKEN=$HF_TOKEN -e http_proxy=$http_proxy -e https_proxy=$https_proxy -it vllm-cpu-env --model ${model} --port ${vllm_port} --chat-template /root/tool_chat_template_mistral_custom.jinja --enable-auto-tool-choice --tool-call-parser mistral
+    docker run -d -p ${vllm_port}:${vllm_port} --rm --network=host --name test-comps-vllm-service -v ~/.cache/huggingface:/root/.cache/huggingface -v ${WORKPATH}/tests/tool_chat_template_mistral_custom.jinja:/root/tool_chat_template_mistral_custom.jinja -e HF_TOKEN=$HF_TOKEN -e http_proxy=$http_proxy -e https_proxy=$https_proxy -it public.ecr.aws/q9t5s3a7/vllm-cpu-release-repo:v0.10.0 --model ${model} --port ${vllm_port} --chat-template /root/tool_chat_template_mistral_custom.jinja --enable-auto-tool-choice --tool-call-parser mistral
     echo ${LOG_PATH}/vllm-service.log
     sleep 10s
     echo "Waiting vllm ready"
@@ -64,9 +64,9 @@ function start_vllm_service() {
 }
 
 function main() {
-    echo "==================== Build vllm docker image ===================="
-    build_vllm_docker_image
-    echo "==================== Build vllm docker image completed ===================="
+    # echo "==================== Build vllm docker image ===================="
+    # build_vllm_docker_image
+    # echo "==================== Build vllm docker image completed ===================="
 
     echo "==================== Start vllm docker service ===================="
     start_vllm_service
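With the local image build commented out, the test now pulls the prebuilt `public.ecr.aws/q9t5s3a7/vllm-cpu-release-repo:v0.10.0` image at `docker run` time. When debugging the test locally, a readiness probe along these lines (a sketch that assumes vLLM's OpenAI-compatible `/v1/models` route) is more reliable than a fixed sleep:

```bash
# Poll the container started above until its OpenAI-compatible API answers.
# test-comps-vllm-service and ${vllm_port} come from 2_start_vllm_service.sh.
until curl -sf "http://localhost:${vllm_port}/v1/models" > /dev/null; do
    echo "Waiting for vllm to become ready..."
    sleep 5
done
docker logs test-comps-vllm-service 2>&1 | tail -n 5
```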

WorkflowExecAgent/tests/3_launch_and_validate_agent.sh

Lines changed: 1 addition & 0 deletions

@@ -16,6 +16,7 @@ export HF_TOKEN=${HF_TOKEN}
 export llm_engine=vllm
 export ip_address=$(hostname -I | awk '{print $1}')
 export llm_endpoint_url=http://${ip_address}:${vllm_port}
+export api_key=""
 export model=mistralai/Mistral-7B-Instruct-v0.3
 export recursion_limit=25
 export temperature=0
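The empty `api_key` keeps the test pointed at the local vLLM container, which is started without an API key. To exercise the remote-endpoint path this commit enables, the same exports would instead carry the remote values; the placeholders below are illustrative only:

```bash
# Remote-endpoint variant of the exports above (placeholder values, not part of the commit).
export llm_endpoint_url=<https-endpoint-of-remote-server>
export api_key=<your-api-key>
export model=<model-card-served-by-the-remote-endpoint>
```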
