🚀 SDK & CLI for building VS Code Chat extensions with custom LLM providers.
- One-click CLI Generation - Interactively create extension projects
- Multi-provider Presets - Built-in configurations for GLM, DeepSeek, Qwen, MiniMax, OpenAI, Anthropic
- Dynamic Model Fetching - Automatically fetch available models from API
- Ready-to-use Templates - Copy templates to quickly create custom Copilot extensions
- Lightweight SDK - Core SDK has no VS Code dependencies, works in any Node.js environment
```
all-in-copilot/
├── packages/
│   └── sdk/                      # Core SDK
│       └── src/
│           ├── core/             # Types, model fetcher
│           └── vscode/           # VS Code provider helpers
│
├── templates/                    # Extension templates
│   ├── base-template/            # Base template for custom providers
│   ├── glm-template/             # GLM (智谱AI) example
│   ├── minimax-template/         # MiniMax example
│   ├── kimi-template/            # Kimi K2 (Moonshot) example
│   ├── mimo-template/            # Xiaomi MiMo example
│   └── aliyun-coding-template/   # Aliyun Model Studio Coding Plan example
│
└── cli/                          # Project generator CLI
    └── src/
        └── index.ts
```

```sh
# Install CLI globally
npm install -g @all-in-copilot/cli

# Create a new project interactively
all-in-copilot

# Or with the short command
aic create my-copilot
```

```sh
# Copy a template
cp -r templates/glm-template my-copilot
cd my-copilot

# Edit configuration
vim src/config.ts

# Install dependencies
npm install

# Compile, then test with F5 in VS Code
npm run compile
```

```
all-in-copilot               # Interactive mode
all-in-copilot create NAME   # Create a project with prompts
all-in-copilot list          # List available presets
all-in-copilot help          # Show help
```

| Preset | Provider | API Format |
|---|---|---|
| glm | GLM (智谱AI) | OpenAI |
| minimax | MiniMax | Anthropic |
| kimi | Kimi (Moonshot) | Anthropic |
| mimo | Xiaomi MiMo | Anthropic |
| deepseek | DeepSeek | OpenAI |
| qwen | Qwen | OpenAI |
| openai | OpenAI | OpenAI |
| anthropic | Anthropic | Anthropic |
| custom | Custom | OpenAI |
Edit `src/config.ts` to customize your provider:
```ts
export const PROVIDER_CONFIG: ProviderConfig = {
  id: "provider-id",                 // Provider ID
  name: "Provider Name",             // Display name
  baseUrl: "https://api.example.com/v1/chat/completions", // API endpoint
  apiKeySecret: "extension-name.apiKey", // Secret storage key for the API key
  family: "provider-family",         // Model family
  apiMode: "openai",                 // 'openai' | 'anthropic' | 'gemini' | 'ollama'
  supportsTools: true,               // Tool calling support
  supportsVision: false,             // Image input support
  defaultMaxOutputTokens: 4096,
  defaultContextLength: 32768,
  dynamicModels: true,               // Fetch models dynamically
  modelsCacheTTL: 5 * 60 * 1000,     // Model cache TTL (ms)
};
```
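The `apiMode` field selects which wire format the extension speaks to the provider. As a rough illustration of what that dispatch implies (the names here are hypothetical, not the SDK's actual converters): Anthropic-style APIs lift the system prompt into a top-level field, while OpenAI-style APIs keep system messages inline.

```typescript
// Hypothetical sketch of apiMode-based request shaping.
// buildRequestBody and ChatMessage are illustrative names, not SDK exports.
type ApiMode = "openai" | "anthropic" | "gemini" | "ollama";

interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

function buildRequestBody(
  mode: ApiMode,
  model: string,
  messages: ChatMessage[],
  maxTokens: number,
): object {
  switch (mode) {
    case "anthropic": {
      // Anthropic-style: system prompt is a top-level field,
      // only user/assistant turns go in `messages`.
      const system = messages
        .filter((m) => m.role === "system")
        .map((m) => m.content)
        .join("\n");
      const rest = messages.filter((m) => m.role !== "system");
      return { model, system, messages: rest, max_tokens: maxTokens };
    }
    case "openai":
    default:
      // OpenAI-style: system messages stay inline in `messages`.
      return { model, messages, max_tokens: maxTokens };
  }
}
```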
```ts
export const FALLBACK_MODELS: ModelConfig[] = [
  {
    id: "model-1",
    name: "Model 1",
    maxInputTokens: 30000,
    maxOutputTokens: 4096,
    supportsTools: true,
    supportsVision: false,
  },
];

// Optional: Filter which models to display
export function filterModels(models: ModelConfig[]): ModelConfig[] {
  return models.filter((m) => m.id.includes("chat"));
}
```

```ts
import {
  convertToOpenAI,
  convertToolsToOpenAI,
  processOpenAIStream,
  fetchModelsFromAPI,
  estimateTokens,
  OpenAIProvider,
} from "@all-in-copilot/sdk";
```
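Among these exports, `estimateTokens` implies tokenizer-free token estimation. A common heuristic for English text is roughly four characters per token; this sketch assumes that ratio and is not necessarily the SDK's actual formula:

```typescript
// Heuristic token estimate: ~4 characters per token for English text.
// Illustrative sketch only, not the SDK's real estimateTokens implementation.
function estimateTokensSketch(text: string): number {
  return Math.ceil(text.length / 4);
}
```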
```ts
// Dynamic model fetching
const providerConfig: ProviderConfig = {
  id: "my-provider",
  name: "My Provider",
  baseUrl: "https://api.example.com/v1",
  // ... other config
};

const models = await fetchModelsFromAPI(providerConfig, { apiKey: "your-api-key" });
console.log("Available models:", models);

// Use an OpenAI-compatible provider
const provider = new OpenAIProvider(providerConfig, { apiKey: "your-api-key" });
for (const model of models) {
  provider.registerModel(model);
}

const response = await provider.complete({
  model: models[0].id,
  messages: [{ role: "user", content: "Hello!" }],
});
```

```sh
cd packages/sdk
npm run build
```

```sh
cd templates/minimax-template
npm run compile
```

```sh
# Terminal 1
cd packages/sdk && npm run watch

# Terminal 2
cd templates/minimax-template && npm run watch
```

Add the following secrets in your repository settings:
| Secret | Description |
|---|---|
| `VSCE_PAT` | VS Code Marketplace Personal Access Token |
| `OVSX_PAT` | Open VSX Registry Token (optional) |
| Workflow | Trigger | Description |
|---|---|---|
| CI | Push/PR to main | Build the SDK, CLI, and all templates; run tests |
| Release | Tag push (`v*`) / Manual | Publish a stable or pre-release version to the marketplace |
| Pre-release | Push to `pre-release/*` / Manual | Publish a pre-release version |
| Publish Single | Manual only | Manually publish a single extension |
Stable Release:

```sh
# 1. Update the version
cd templates/glm-template
npm version patch   # or minor / major

# 2. Commit and create a tag
git add .
git commit -m "chore: bump glm-template to v0.2.0"
git tag glm-template-v0.2.0
git push origin main --tags
```

Pre-release:

```sh
# Method 1: Use a tag
git tag glm-template-v0.2.0-beta.1
git push origin --tags

# Method 2: Use a branch
git checkout -b pre-release/glm-template
git push origin pre-release/glm-template

# Method 3: Trigger manually via GitHub Actions
```

| Pattern | Triggers | Example |
|---|---|---|
| `v*` | Release all templates | `v1.0.0` |
| `<template>-v*` | Release a specific template | `glm-template-v0.2.0` |
| `*-pre*`, `*-alpha*`, `*-beta*`, `*-rc*` | Pre-release | `v1.0.0-beta.1` |
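The pre-release patterns in the table reduce to a substring check on the tag name. A sketch of the implied classification (illustrative, not the workflow's actual logic):

```typescript
// Classify a git tag per the patterns above: tags containing -pre, -alpha,
// -beta, or -rc are pre-releases; other v* / <template>-v* tags are stable.
function isPreReleaseTag(tag: string): boolean {
  return /-(pre|alpha|beta|rc)/.test(tag);
}
```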
| Template | Base URL | API Mode | Dynamic Models |
|---|---|---|---|
| GLM | https://open.bigmodel.cn/api/paas/v4/chat/completions | OpenAI | ✅ |
| MiniMax | https://api.minimaxi.com/anthropic/v1/messages | Anthropic | ❌ |
| Kimi | https://api.moonshot.cn/anthropic | Anthropic | ❌ |
| MiMo | https://api.xiaomimimo.com/anthropic/v1/messages | Anthropic | ❌ |
| Aliyun Coding Plan | https://coding.dashscope.aliyuncs.com/apps/anthropic/v1/messages | Anthropic | ❌ |
| Base | Custom | OpenAI/Anthropic | ✅ |
```
Provider API (/models)
        ↓
   fetch + cache
        ↓
  filterModels()
        ↓
VS Code Model List
```

- The extension calls the `/models` endpoint on the provider's API
- The response is parsed and converted to `ModelConfig[]`
- Models are cached for `modelsCacheTTL` milliseconds
- If the fetch fails, the extension falls back to `FALLBACK_MODELS`
- The optional `filterModels()` function filters the displayed models
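This flow can be sketched in code (synchronous for clarity; `loadModels`, `fetchFromApi`, and the cache shape are illustrative names, not the SDK's internals):

```typescript
// Sketch of the fetch → cache → fallback → filter flow described above.
interface Model { id: string; name: string; }
interface CacheEntry { models: Model[]; fetchedAt: number; }

let cache: CacheEntry | null = null;

function loadModels(
  fetchFromApi: () => Model[],                 // stands in for the /models HTTP call
  fallback: Model[],                           // FALLBACK_MODELS
  ttlMs: number,                               // modelsCacheTTL
  filter: (m: Model[]) => Model[] = (m) => m,  // optional filterModels()
): Model[] {
  const now = Date.now();
  if (cache && now - cache.fetchedAt < ttlMs) {
    return filter(cache.models);               // fresh cache hit, no network call
  }
  try {
    const models = fetchFromApi();
    cache = { models, fetchedAt: now };        // cache result for ttlMs
    return filter(models);
  } catch {
    return filter(fallback);                   // fetch failed → fallback models
  }
}
```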
This project is inspired by and references the following excellent open source projects:
| Project | Description |
|---|---|
| vscode-copilot-chat | Microsoft's official GitHub Copilot Chat extension - the authoritative reference for VS Code Chat API usage |
| vscode | Visual Studio Code source code - for understanding VS Code extension APIs |
| Project | Description |
|---|---|
| oai-compatible-copilot | OpenAI-compatible Copilot - early exploration of custom model integration |
| huggingface-vscode-chat | HuggingFace's VS Code Chat extension |
| addi | AI-powered development assistant with MCP integration |
| ChatGLM-vscode-chat | ChatGLM VS Code extension - GLM model integration reference |
Special thanks to the following AI providers for their excellent APIs:
- 智谱AI (Zhipu AI) - GLM series models
- Moonshot AI - Kimi K2 models
- MiniMax - M2 series models
- Xiaomi - MiMo models
- OpenAI - GPT models
- Anthropic - Claude models
- Google - Gemini models
Contributions are welcome! Please feel free to submit a Pull Request.
MIT