Conversation

@sgurunat
Contributor

Description
This PR adds support for selecting from multiple models in ProductivitySuite ChatQnA, along with some minor UI enhancements. It also adds Docker Compose files and instructions for running ProductivitySuite on an Intel Gaudi server with remote TGI/TEI services.

Type of change
New feature (non-breaking change which adds new functionality)
Others (enhancement, documentation, validation, etc.)
New Features:

Add chatqna_wrapper.py along with an updated Dockerfile.wrapper; the wrapper is required to support multiple models in ChatQnA
ProductivitySuite: Add Docker Compose files for the Intel Gaudi server, targeting remote TGI/TEI services, with instructions
ProductivitySuite UI: Add multiple-model support; different models can be chosen from a dropdown
Enhancements:

ProductivitySuite UI: Rename ChatQnA, CodeGen, and DocSum to Digital Assistant, Code Generator, and Content Summarizer, respectively
ProductivitySuite UI: Give DocSum a vertical scroll bar when content exceeds the window height
ProductivitySuite UI: Remove the <|eot_id|> string from the ChatQnA, DocSum, and FaqGen responses
ProductivitySuite UI: Make the contextWrapper and contextTitle widths adjust to different screen sizes
ProductivitySuite UI: Always show the system prompt input field for editing in the ChatQnA prompt section
ProductivitySuite UI: Rename max_new_tokens to max_tokens
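The multi-model wrapper itself is not shown in this thread. As a rough, hypothetical sketch of the core idea (the names MODEL_ENDPOINTS and route_request are illustrative, not from the PR), the wrapper routes each request to whichever backend serves the model named in the payload:

```python
# Hypothetical sketch of multi-model routing; endpoint names and the
# routing function are illustrative and not taken from the PR itself.

MODEL_ENDPOINTS = {
    "meta-llama/Meta-Llama-3-8B-Instruct": "http://tgi-llama:80",
    "mistralai/Mistral-7B-Instruct-v0.3": "http://tgi-mistral:80",
}

def route_request(payload: dict) -> str:
    """Return the backend endpoint for the model named in the payload."""
    model = payload.get("model")
    if model not in MODEL_ENDPOINTS:
        raise ValueError(f"Unknown model: {model}")
    return MODEL_ENDPOINTS[model]
```

The UI dropdown would then only need to set the `model` field of the request; the wrapper picks the matching TGI endpoint.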

@jaswanth8888 added this to the v1.1 milestone on Nov 7, 2024
@lvliang-intel
Collaborator

@sgurunat,
Please fix this path check issue.
[screenshot]

@sgurunat
Contributor Author

@lvliang-intel - Fixed it

@chensuyue
Collaborator

git clone https://github.com/opea-project/GenAIComps.git && cd GenAIComps && git checkout "${opea_branch:-"main"}" && cd ../

echo "Build all the images with --no-cache, check docker_image_build.log for details..."
docker compose -f build_vllm.yaml build --no-cache > ${LOG_PATH}/docker_image_build.log

build_vllm.yaml -> build.yaml, and build only the images required by this test.

Contributor Author

Oh yeah, I missed changing it. Updated it now, thanks.
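Following the reviewer's suggestion, the corrected invocation would presumably be (build.yaml per the comment above; ${LOG_PATH} as in the original script):

```shell
docker compose -f build.yaml build --no-cache > ${LOG_PATH}/docker_image_build.log
```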

docker compose -f build_vllm.yaml build --no-cache > ${LOG_PATH}/docker_image_build.log

docker pull ghcr.io/huggingface/tei-gaudi:latest
docker pull opea/vllm-hpu:latest
Collaborator

You don't need to pull these; they will be built in the CI test.

Contributor Author

OK, commented it out.

@chensuyue
Collaborator

This PR should be closed, right?

@lvliang-intel
Collaborator

This PR is superseded by #1144 and #1149.
