Run Ollama AI models locally with remote access via Cloudflare tunnels.
- `ollama`: AI model server with GPU support (port 11434)
- `ollama-tunnel`: Secure Cloudflare tunnel for remote access
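Both services are defined in the project's `docker-compose.yml`. As a rough sketch of how such a file is typically laid out (the repo's actual file is authoritative; the GPU reservation shown assumes the NVIDIA Container Toolkit is installed on the host):

```yaml
# Sketch only: the real docker-compose.yml in this repo may differ.
services:
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"            # Ollama HTTP API
    volumes:
      - ollama:/root/.ollama     # persist pulled models across restarts
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia     # requires NVIDIA Container Toolkit
              count: all
              capabilities: [gpu]

  ollama-tunnel:
    image: cloudflare/cloudflared
    command: tunnel --config /etc/cloudflared/config.yml run
    volumes:
      - ./cloudflared:/etc/cloudflared:ro
    depends_on:
      - ollama

volumes:
  ollama:
```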
- Edit `cloudflared/config.yml` with your tunnel UUID and hostname (a sketch follows this list)
- Run `docker compose up -d`
- Access locally at `http://localhost:11434` or remotely via Cloudflare
- Check tunnel status: `docker compose logs ollama-tunnel`
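A minimal sketch of what `cloudflared/config.yml` is expected to contain, assuming a named tunnel whose credentials JSON sits alongside the config; the UUID, hostname, and credentials path below are placeholders to replace with your own values:

```yaml
# Replace TUNNEL_UUID and the hostname with your own values.
tunnel: TUNNEL_UUID
credentials-file: /etc/cloudflared/TUNNEL_UUID.json

ingress:
  - hostname: ollama.example.com     # your public hostname
    service: http://ollama:11434     # the ollama service on the compose network
  - service: http_status:404         # catch-all rule (required by cloudflared)
```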
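To confirm everything is up, one way is to hit the local API and watch the tunnel logs:

```bash
# Local check: the Ollama API should respond on port 11434
curl http://localhost:11434/api/tags

# Tunnel check: follow cloudflared output and watch for the
# tunnel connections being registered
docker compose logs -f ollama-tunnel
```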
```bash
# Start containers then run:
bash ollama/models.sh
```
- LLMs: `llama3.2:1b`, `gemma3:1b`, `deepseek-r1:1.5b`
- Embeddings: `nomic-embed-text`, `mxbai-embed-large`
- Vision: `granite3.2-vision:2b`
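Once one of these models has been pulled, it can be exercised through Ollama's HTTP API. For example, a one-off generation request against `llama3.2:1b` (assuming it is installed):

```bash
# Send a single non-streaming prompt to the local Ollama API
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2:1b",
  "prompt": "Summarize what a Cloudflare tunnel does in one sentence.",
  "stream": false
}'
```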
- Edit `ollama/models.sh` to add/remove models (see the sketch below)
- Uncomment models in the script to enable them
- View installed models by uncommenting `ollama list` in the script
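The script is essentially a list of pull commands, one per model, with unused ones commented out. A rough sketch of the pattern, assuming the script runs `ollama pull` inside the `ollama` container (the actual script in this repo may invoke it differently):

```bash
#!/usr/bin/env bash
# Sketch of ollama/models.sh: uncomment a line to pull that model.
set -euo pipefail

docker compose exec ollama ollama pull llama3.2:1b
docker compose exec ollama ollama pull gemma3:1b
docker compose exec ollama ollama pull deepseek-r1:1.5b
docker compose exec ollama ollama pull nomic-embed-text
docker compose exec ollama ollama pull mxbai-embed-large
# docker compose exec ollama ollama pull granite3.2-vision:2b

# Uncomment to list the models currently installed:
# docker compose exec ollama ollama list
```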
This project is licensed under the MIT License.