Blazing-fast boilerplate for building AI-powered web apps on the free tiers of Vercel + Supabase.
- Next.js 15 (App Router) with React 19
- Tailwind CSS 4 & Tailwind UI ready
- pnpm workspace
- Supabase Postgres + pgvector for RAG & embeddings
- Vercel AI SDK 4.x (upgrade-ready)
- Auth, RLS, Prisma, and pgvector out-of-the-box
- ESLint 9 flat-config, Prettier, Husky, commitlint
- GitHub Actions CI pipeline
- Click Use this template → Create a new repo on GitHub.
- Clone your new repo locally.
- Continue with the quick-start below.
```bash
pnpm dlx degit <your-org>/ai-starter my-app
cd my-app
git init && pnpm install
```
```bash
# 1) Install dependencies
pnpm install

# 2) Copy env template and add your own keys
cp .env.example .env.local
# → fill in OPENAI_API_KEY, DATABASE_URL, etc.

# 3) Run the dev server
pnpm dev
```

Visit http://localhost:3000 to see the starter running.
Run these once per project. They’re left out of automated scripts so each developer can point the starter at their own Supabase instance.
- Initialise Prisma for Postgres:

  ```bash
  npx prisma init --datasource-provider postgresql
  ```

- Add your Supabase connection string to `.env.local`:

  ```bash
  DATABASE_URL="postgresql://postgres:<PW>@db.<PROJECT>.supabase.co:6543/postgres"
  ```

- Enable `pgvector` inside Supabase (SQL Editor):

  ```sql
  create extension if not exists vector;
  ```

- Add the first model in `prisma/schema.prisma`:

  ```prisma
  model Document {
    id        Int    @id @default(autoincrement())
    content   String
    // Prisma has no native vector type, so pgvector columns go through Unsupported
    embedding Unsupported("vector(1536)")
  }
  ```

- Generate & apply the migration:

  ```bash
  pnpm prisma migrate dev --name init
  ```

- (Optional) Inspect locally:

  ```bash
  pnpm prisma studio
  ```
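Because Prisma cannot express pgvector's distance operators, similarity search typically drops down to raw SQL via `$queryRaw`. A minimal sketch of that pattern, assuming the `Document` model above (the helper name `toVectorLiteral` and the five-row query are illustrative, not part of the starter):

```typescript
// Format a number[] as a pgvector literal, e.g. "[0.1,0.2,0.3]",
// so it can be cast to ::vector inside a raw query.
function toVectorLiteral(vec: number[]): string {
  return `[${vec.join(',')}]`;
}

// Sketch of a cosine-distance search ("<=>" is pgvector's cosine operator):
//
// const literal = toVectorLiteral(queryEmbedding);
// const matches = await prisma.$queryRaw`
//   SELECT id, content
//   FROM "Document"
//   ORDER BY embedding <=> ${literal}::vector
//   LIMIT 5`;

console.log(toVectorLiteral([0.25, -1, 3])); // → [0.25,-1,3]
```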
See `.env.example` for the full list. Minimum set:
| Key | Description |
| --- | --- |
| `OPENAI_API_KEY` | OpenAI or compatible model key |
| `NEXT_PUBLIC_SUPABASE_URL` | Supabase project URL |
| `NEXT_PUBLIC_SUPABASE_ANON_KEY` | Supabase anon key |
| `DATABASE_URL` | Supabase Postgres connection string |
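A filled-in `.env.local` might look like this (all values below are placeholders, not real credentials):

```bash
OPENAI_API_KEY="sk-..."
NEXT_PUBLIC_SUPABASE_URL="https://<PROJECT>.supabase.co"
NEXT_PUBLIC_SUPABASE_ANON_KEY="eyJ..."
DATABASE_URL="postgresql://postgres:<PW>@db.<PROJECT>.supabase.co:6543/postgres"
```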
| Task | Command |
| --- | --- |
| Lint & format | `pnpm lint` · `pnpm format` |
| Prisma migrations | `pnpm prisma migrate dev` |
| Update deps | `pnpm up -r` |
| Start local Supabase | `supabase start` |
- Push to GitHub.
- Import the repo on Vercel.
- Add the same environment variables under Project → Settings → Environment Variables.
- Trigger a build — production URL is ready in ~30 sec.
- RAG Schema: tweak `prisma/schema.prisma` (e.g., change the embedding dimension from 1536 to match your model).
- AI Model: change the model ID in `lib/ai.ts`.
Developers often want to hack on UI & data plumbing without racking up usage bills. Two zero‑cost strategies ship with this starter:
| Mode | How to enable | Behaviour |
| --- | --- | --- |
| Mock | Add `AI_MODE=mock` to `.env.local` | `lib/ai.ts` and `lib/embeddings.ts` return stub data; no network calls at all. |
| Local Model | Install Ollama, run `ollama run llama3`, then set `AI_MODE=ollama` | Uses Ollama's local API for both chat (`llama3`) and embeddings (`nomic-embed-text`). No cloud traffic, no credits. |
```ts
// lib/ai.ts (excerpt)
import { generateText, type CoreMessage } from 'ai';
import { openai, createOpenAI } from '@ai-sdk/openai';
import { mockResponse } from './mocks';

// Ollama exposes an OpenAI-compatible endpoint, so the OpenAI provider
// can be pointed at it instead of a bespoke model object.
const ollama = createOpenAI({
  baseURL: 'http://localhost:11434/v1',
  apiKey: 'ollama', // required by the client, ignored by Ollama
});

export async function chat(messages: CoreMessage[]) {
  if (process.env.AI_MODE === 'mock') {
    console.warn('[AI-Mock] Returning stub response');
    return { text: mockResponse(messages) };
  }
  if (process.env.AI_MODE === 'ollama') {
    return generateText({ model: ollama('llama3'), messages });
  }
  return generateText({ model: openai('gpt-4o'), messages });
}
```
```ts
// lib/embeddings.ts – one place for vector generation
import { embed } from 'ai';
import { openai } from '@ai-sdk/openai';

export async function getEmbedding(text: string): Promise<number[]> {
  if (process.env.AI_MODE === 'mock') {
    // Deterministic fake vector derived from the input string,
    // same length as the pgvector column (1536)
    return Array.from(
      { length: 1536 },
      (_, i) => (text.charCodeAt(i % text.length) % 100) / 100,
    );
  }
  if (process.env.AI_MODE === 'ollama') {
    const res = await fetch('http://localhost:11434/api/embeddings', {
      method: 'POST',
      body: JSON.stringify({ model: 'nomic-embed-text', prompt: text }),
    });
    return (await res.json()).embedding;
  }
  // Cloud default → OpenAI embeddings endpoint
  const { embedding } = await embed({
    model: openai.embedding('text-embedding-3-small'),
    value: text,
  });
  return embedding;
}
```
Why hash‑based fake vectors? They keep the pgvector column populated and deterministic, so similarity queries remain testable (same input → same vector) without hitting any API.
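The determinism claim is easy to check in isolation. This standalone sketch re-implements the mock embedding from `lib/embeddings.ts` and verifies that repeated calls agree:

```typescript
// Same derivation as the mock branch in lib/embeddings.ts:
// cycle over the string's char codes to fill a 1536-dim vector.
function mockEmbedding(text: string, dim = 1536): number[] {
  return Array.from(
    { length: dim },
    (_, i) => (text.charCodeAt(i % text.length) % 100) / 100,
  );
}

const a = mockEmbedding('hello world');
const b = mockEmbedding('hello world');
// Same input always yields the same vector, element for element
console.log(a.length, a.every((v, i) => v === b[i])); // → 1536 true
```

Because the vector depends only on the input text, rows written in mock mode can be re-queried across dev sessions and CI runs with stable similarity rankings.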
Tip: Commit `AI_MODE` lines only to `.env.example`, never to production envs; each developer chooses mock, local, or cloud mode locally.
Pull requests are welcome! Please run `pnpm lint` before committing.
MIT
Built with ❤️ & 🧠 by Drew Thompson