In the README file for the llama2 deployment example:
https://github.com/oracle-samples/oci-data-science-ai-samples/blob/main/model-deployment/containers/llama2
there is an instruction to create a container repository named `text-generation-interface-odsc`, while the Makefile (`model-deployment/containers/llama2/Makefile`, line 14 at commit 5c4fb77) pushes the image to a different repository:

```makefile
TGI_INFERENCE_IMAGE:=${CONTAINER_REGISTRY}/${TENANCY}/text-generation-interface:0.9.3-v
```

With default OCI settings, this repository is created automatically in the root compartment. Not a huge deal, but quite confusing for beginners: when you create the model deployment, the scope of available repositories is limited to the compartment, so you cannot find your image.
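A minimal way to make the two consistent (a sketch, assuming the README's `text-generation-interface-odsc` name is the intended one) would be to point the Makefile variable at that repository:

```makefile
# Hypothetical fix: push to the repository name the README asks you to create,
# so the image lands in that pre-created repo instead of one auto-created
# in the root compartment.
TGI_INFERENCE_IMAGE:=${CONTAINER_REGISTRY}/${TENANCY}/text-generation-interface-odsc:0.9.3-v
```

Alternatively, creating the repository ahead of time in the target compartment (for example with `oci artifacts container repository create --compartment-id <compartment-ocid> --display-name text-generation-interface-odsc`) would avoid the silent auto-creation at the root level.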