Example implementations of various LLM providers using PostHog's AI SDKs. This repository demonstrates how to integrate multiple AI providers (Anthropic, OpenAI, Google Gemini) with PostHog for analytics tracking.
- Python 3.8 or higher
- pip package manager
- Node.js 16 or higher
- npm package manager
1. Configure environment variables:

   ```bash
   cp .env.example .env
   ```

   Edit `.env` and add your API keys:

   - `ANTHROPIC_API_KEY`: Your Anthropic API key
   - `GEMINI_API_KEY`: Your Google Gemini API key
   - `OPENAI_API_KEY`: Your OpenAI API key
   - `POSTHOG_API_KEY`: Your PostHog API key
   - `POSTHOG_HOST`: PostHog host (defaults to https://app.posthog.com)
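The keys above can be loaded with a library like python-dotenv, but a minimal stdlib-only loader illustrates what the scripts need to do (the `load_env` helper here is a hypothetical sketch, not part of the repo):

```python
import os

def load_env(path=".env"):
    """Minimal .env loader (hypothetical helper; the repo may use
    python-dotenv or shell sourcing instead)."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blanks, comments, and malformed lines
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # Don't clobber variables already set in the shell
            os.environ.setdefault(key.strip(), value.strip())

if os.path.exists(".env"):
    load_env()

# POSTHOG_HOST falls back to the default noted above when unset
posthog_host = os.environ.get("POSTHOG_HOST", "https://app.posthog.com")
```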
2. Run the application:

   For Python:

   ```bash
   cd python
   ./run.sh
   ```

   For Node.js:

   ```bash
   cd node
   ./run.sh
   ```

   The `run.sh` script will automatically:
- Set up a virtual environment (Python) or install dependencies (Node)
- Install all required packages
- Start the interactive CLI
- Chat Mode: Interactive conversation with the selected provider
- Tool Call Test: Automatically tests weather tool calling
- Message Test: Simple greeting test
- Image Test: Tests image description capabilities
- Embeddings Test: Tests embedding generation (OpenAI only)
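The menu above maps naturally onto a small dispatch table; this is a hypothetical sketch of the pattern (handler functions are placeholders, not the repo's actual CLI code):

```python
def run_mode(choice: str) -> str:
    """Dispatch a menu selection to its test mode. Mode names mirror the
    list above; numbering and wording in the real CLI may differ."""
    modes = {
        "1": ("Chat Mode", lambda: "starting interactive chat"),
        "2": ("Tool Call Test", lambda: "testing weather tool calling"),
        "3": ("Message Test", lambda: "sending a simple greeting"),
        "4": ("Image Test", lambda: "describing an image"),
        "5": ("Embeddings Test", lambda: "generating embeddings (OpenAI only)"),
    }
    name, handler = modes.get(choice, ("Unknown", lambda: "invalid choice"))
    return f"{name}: {handler()}"
```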
Claude's extended thinking feature allows the model to show its internal reasoning process before responding. This can improve response quality for complex problems.
How to use:
When you select an Anthropic provider (options 1 or 2), you'll be prompted:
```
🧠 Extended Thinking Configuration
==================================================
Extended thinking shows Claude's reasoning process.
This can improve response quality for complex problems.
==================================================
Enable extended thinking? (y/n) [default: n]: y
Thinking budget tokens (1024-32000) [default: 10000]: 15000
✅ Initialized Anthropic (Thinking: enabled, budget: 15000)
```
How it works:
- The CLI will ask if you want to enable thinking each time you select an Anthropic provider
- You can customize the thinking budget (min: 1024, recommended: 10000-15000)
- Claude will show its reasoning process prefixed with "💭 Thinking:"
- Larger budgets can improve response quality for complex problems
- The model may not use the entire allocated budget
- Works with both regular and streaming Anthropic providers
- `max_tokens` is automatically adjusted to accommodate both thinking and response
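The budget/`max_tokens` relationship can be sketched as a pure helper. Parameter shapes follow Anthropic's extended-thinking API; the 4096-token response reserve here is an assumption, not the repo's actual value:

```python
def build_request_params(thinking_enabled: bool,
                         budget_tokens: int = 10000,
                         response_tokens: int = 4096) -> dict:
    """Sketch: when thinking is enabled, max_tokens must cover both the
    thinking budget and the visible response."""
    params = {"max_tokens": response_tokens}
    if thinking_enabled:
        budget = max(1024, budget_tokens)  # 1024 is the minimum budget
        params["thinking"] = {"type": "enabled", "budget_tokens": budget}
        params["max_tokens"] = budget + response_tokens
    return params
```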
Example output:
```
👤 You: Are there an infinite number of prime numbers such that n mod 4 == 3?
💭 Thinking: Let me think about this systematically. I need to consider
the distribution of primes and their properties modulo 4...
🤖 Bot: Yes, there are infinitely many prime numbers of the form 4k + 3...
```
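Output like the transcript above comes from splitting the response into thinking and text content blocks. A hedged sketch of that rendering step (block shapes follow Anthropic's response format; the function name is made up):

```python
def render_message(content_blocks: list) -> str:
    """Render a message's content blocks, prefixing thinking blocks with
    the 💭 marker and text blocks with the 🤖 marker, as in the example above."""
    lines = []
    for block in content_blocks:
        if block.get("type") == "thinking":
            lines.append("💭 Thinking: " + block.get("thinking", ""))
        elif block.get("type") == "text":
            lines.append("🤖 Bot: " + block.get("text", ""))
    return "\n".join(lines)
```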
An interactive tool for creating complex nested LLM trace data for testing PostHog analytics. Features pre-built templates (simple chat, RAG pipeline, multi-agent) and a custom trace builder for creating arbitrarily complex structures.
```bash
cd python/trace-generator
./run.sh
```
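One way to picture the nested traces the generator produces: a span tree where each child records its parent's id. This is a hypothetical shape for illustration; the actual PostHog trace event properties differ:

```python
import uuid

def make_span(name, parent_id=None):
    """Hypothetical span record for generated trace data (a sketch,
    not PostHog's real event schema)."""
    return {"id": str(uuid.uuid4()), "name": name,
            "parent_id": parent_id, "children": []}

def add_child(parent, name):
    child = make_span(name, parent_id=parent["id"])
    parent["children"].append(child)
    return child

def depth(span):
    """Nesting depth of a trace tree, e.g. to verify generated complexity."""
    return 1 + max((depth(c) for c in span["children"]), default=0)

# Build a RAG-pipeline-shaped trace: trace -> retrieval -> embedding call
trace = make_span("rag_pipeline")
retrieval = add_child(trace, "retrieval")
add_child(retrieval, "embedding_call")
```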
If you're developing the PostHog SDKs locally, you can use local paths instead of published packages:
1. Set environment variables in your `.env`:

   ```
   # For local PostHog SDK development
   POSTHOG_PYTHON_PATH=/../posthog-python
   POSTHOG_JS_PATH=/../posthog-js
   ```

2. Run the application normally with `./run.sh`
The scripts will automatically detect and use your local SDK versions.
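On the Python side, the detection could work roughly like this (a sketch using the `POSTHOG_PYTHON_PATH` variable from the `.env` above; the scripts' actual logic may differ):

```python
import os

def resolve_sdk_requirement() -> str:
    """Prefer a local posthog-python checkout when POSTHOG_PYTHON_PATH
    points at an existing directory; otherwise use the published package."""
    local = os.environ.get("POSTHOG_PYTHON_PATH")
    if local and os.path.isdir(local):
        return f"-e {local}"  # pip editable install of the local checkout
    return "posthog"          # published PyPI package
```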
MIT License - see LICENSE file for details
Contributions are welcome! Please feel free to submit a Pull Request.