# Build and deploy FaqGen Application on AMD GPU (ROCm)

## Build images

### Build the LLM Docker Image

```bash
### Cloning repo
git clone https://github.com/opea-project/GenAIComps.git
cd GenAIComps

### Build Docker image
docker build -t opea/llm-tgi:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/llms/text-generation/tgi/Dockerfile .
```
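
After the build completes, you can confirm that the image is available locally (the tag matches the one used in the build command above):

```bash
docker images | grep opea/llm-tgi
```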

## 🚀 Start Microservices and MegaService

### Required Models

The default model is "meta-llama/Meta-Llama-3-8B-Instruct". Change `FAQGEN_LLM_MODEL_ID` in the environment variables below if you want to use another model.

For gated models, you also need to provide a [HuggingFace token](https://huggingface.co/docs/hub/security-tokens) in the `FAQGEN_HUGGINGFACEHUB_API_TOKEN` environment variable.
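
For example, to switch to another model and supply a token for gated models (the model ID shown here is only illustrative):

```bash
# Any TGI-compatible model ID from the HuggingFace Hub can be used here.
export FAQGEN_LLM_MODEL_ID="mistralai/Mistral-7B-Instruct-v0.2"
# Needed for gated models such as the default Llama-3 model; replace with your own token.
export FAQGEN_HUGGINGFACEHUB_API_TOKEN="your_hf_api_token"
```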

### Setup Environment Variables

Since the `compose.yaml` consumes several environment variables, you need to set them up in advance as shown below.

```bash
export FAQGEN_LLM_MODEL_ID="meta-llama/Meta-Llama-3-8B-Instruct"
export HOST_IP=${your_external_ip}
export FAQGEN_TGI_SERVICE_PORT=8008
export FAQGEN_LLM_SERVER_PORT=9000
export FAQGEN_HUGGINGFACEHUB_API_TOKEN=${your_hf_api_token}
export FAQGEN_BACKEND_SERVER_PORT=8888
export FAGGEN_UI_PORT=5173
```

Note: Replace `host_ip` with your external IP address; do not use `localhost`.
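
If you are not sure which IP address to use, one common way to look it up on Linux is shown below (a sketch; pick the address of the interface you actually expose):

```bash
# Print the first IP address reported for this host.
hostname -I | awk '{print $1}'
```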

Note: To limit access to a subset of GPUs, pass each device individually using one or more `--device /dev/dri/renderD<node>` arguments, where `<node>` is the card index, starting from 128. See the [ROCm Docker documentation](https://rocm.docs.amd.com/projects/install-on-linux/en/latest/how-to/docker.html#docker-restrict-gpus) for details.

Example of device isolation for 1 GPU:

```
  - /dev/dri/card0:/dev/dri/card0
  - /dev/dri/renderD128:/dev/dri/renderD128
```

Example of device isolation for 2 GPUs:

```
  - /dev/dri/card0:/dev/dri/card0
  - /dev/dri/renderD128:/dev/dri/renderD128
  - /dev/dri/card1:/dev/dri/card1
  - /dev/dri/renderD129:/dev/dri/renderD129
```

More information about accessing and restricting AMD GPUs in containers is available in the [ROCm documentation](https://rocm.docs.amd.com/projects/install-on-linux/en/latest/how-to/docker.html#docker-restrict-gpus).
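
To see which `card` and `renderD` nodes exist on the host before editing `compose.yaml`, list the DRI devices; if the ROCm tools are installed, `rocm-smi` also shows the detected GPUs:

```bash
# DRI device nodes exposed by the kernel (cardN / renderD12x)
ls -l /dev/dri/
# GPU overview from the ROCm tools (if installed)
rocm-smi
```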

### Start Microservice Docker Containers

```bash
cd GenAIExamples/FaqGen/docker_compose/amd/gpu/rocm/
docker compose up -d
```
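
Before running the validation steps below, you can check that all containers came up and follow the logs until the TGI service has finished loading the model:

```bash
# List the services started by this compose file and their status
docker compose ps
# Follow the logs while the model downloads and loads
docker compose logs -f --tail=100
```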

### Validate Microservices

1. TGI Service

   ```bash
   curl http://${host_ip}:8008/generate \
     -X POST \
     -d '{"inputs":"What is Deep Learning?","parameters":{"max_new_tokens":17, "do_sample": true}}' \
     -H 'Content-Type: application/json'
   ```

2. LLM Microservice

   ```bash
   curl http://${host_ip}:9000/v1/faqgen \
     -X POST \
     -d '{"query":"Text Embeddings Inference (TEI) is a toolkit for deploying and serving open source text embeddings and sequence classification models. TEI enables high-performance extraction for the most popular models, including FlagEmbedding, Ember, GTE and E5."}' \
     -H 'Content-Type: application/json'
   ```

3. MegaService

   ```bash
   curl http://${host_ip}:8888/v1/faqgen \
     -H "Content-Type: multipart/form-data" \
     -F "messages=Text Embeddings Inference (TEI) is a toolkit for deploying and serving open source text embeddings and sequence classification models. TEI enables high-performance extraction for the most popular models, including FlagEmbedding, Ember, GTE and E5." \
     -F "max_tokens=32" \
     -F "stream=false"
   ```

Once all of the above microservices respond as expected, the full FaqGen pipeline is working and ready to use.

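If the first validation call fails because the model is still loading, you can wait for the TGI service to report healthy before retrying; this sketch uses TGI's standard `/health` endpoint and the port configured above:

```bash
# Poll TGI until it reports healthy (i.e., the model has finished loading)
until curl -sf http://${host_ip}:8008/health > /dev/null; do
  echo "Waiting for TGI to finish loading the model..."
  sleep 10
done
echo "TGI is ready"
```
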
## 🚀 Launch the UI

Open this URL `http://{host_ip}:5173` in your browser to access the frontend.
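
As a quick sanity check that the UI container is serving (the port assumes the default `FAGGEN_UI_PORT=5173` from the environment setup above):

```bash
# Should print an HTTP status code such as 200 when the UI is up
curl -s -o /dev/null -w "%{http_code}\n" http://${host_ip}:5173
```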

![project-screenshot]()

## 🚀 Launch the React UI (Optional)

To access the React-based FaqGen frontend, modify the UI service in the `compose.yaml` file: replace the `faqgen-rocm-ui-server` service with the `faqgen-rocm-react-ui-server` service as per the config below:

```yaml
  faqgen-rocm-react-ui-server:
    image: opea/faqgen-react-ui:latest
    container_name: faqgen-rocm-react-ui-server
    environment:
      - no_proxy=${no_proxy}
      - https_proxy=${https_proxy}
      - http_proxy=${http_proxy}
    ports:
      - 5174:80
    depends_on:
      - faqgen-rocm-backend-server
    ipc: host
    restart: always
```

Open this URL `http://{host_ip}:5174` in your browser to access the React-based frontend.

- Create FAQs from Text input
  ![project-screenshot]()

- Create FAQs from Text Files
  ![project-screenshot]()