- Ensure the bug was not already reported by searching on GitHub under Issues.
- If you're unable to find an open issue addressing the problem, open a new one. Include a title and clear description, relevant information, and a code sample demonstrating the issue.
- Verify it's a RubyLLM bug, not your application code, before opening an issue.
- Open a new GitHub pull request with the patch.
- Ensure the PR description clearly describes the problem and solution. Include the relevant issue number if applicable.
- Run `overcommit --install` before committing - it handles code style and tests automatically.
- First check if this belongs in RubyLLM or your application. RubyLLM already provides chat, agents, tools, embeddings, and the building blocks for workflows and RAG. Before proposing a new feature, ask: can this be built with what RubyLLM already offers? If so, it belongs in your application code, not in the gem.
- Features we'll reject:
  - Anything you can build in a few lines of application code using existing RubyLLM features
  - Opinionated abstractions over patterns that are already straightforward (e.g., specific RAG pipeline frameworks, workflow orchestration DSLs)
  - Integrations with specific external services (vector databases, search engines, etc.) — these work great as separate gems
  - Testing frameworks
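To make the first point concrete, here is a minimal sketch of what "a few lines of application code" can look like for a toy retrieval step. It assumes you already have embeddings as plain arrays of floats (e.g., from an embedding call); the `cosine` and `top_docs` helpers are illustrative names, not part of RubyLLM's API.

```ruby
# Toy retrieval step for a RAG-style app, written entirely in application
# code. Embeddings are plain float arrays; helper names are illustrative.

def cosine(a, b)
  dot = a.zip(b).sum { |x, y| x * y }
  dot / (Math.sqrt(a.sum { |x| x * x }) * Math.sqrt(b.sum { |x| x * x }))
end

def top_docs(query_vec, docs, k: 2)
  # Enumerable#max_by(n) returns the n docs with the highest similarity,
  # in descending order.
  docs.max_by(k) { |d| cosine(query_vec, d[:embedding]) }
end

docs = [
  { text: "Ruby blocks",    embedding: [1.0, 0.0] },
  { text: "Rails routing",  embedding: [0.9, 0.1] },
  { text: "Python asyncio", embedding: [0.0, 1.0] },
]

top_docs([1.0, 0.0], docs, k: 2).map { |d| d[:text] }
# => ["Ruby blocks", "Rails routing"]
```

Something this small does not need to live in the gem - it composes with whatever embedding provider you already use.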
- You must open an issue first and wait for maintainer feedback before writing code. PRs for new features without an approved issue will be closed without review.
- Keep PRs focused and reasonably sized. Large features should be discussed in the issue and potentially broken into smaller, reviewable PRs. Dropping thousands of lines of code without prior discussion is not helpful.
- If you use AI tools, you must understand every single line of code you submit. AI-generated code often requires more review time from maintainers, which may delay your PR.
- Core providers have a high acceptance bar.
- For smaller or emerging providers, the preferred path is a community gem rather than RubyLLM core.
```bash
gh repo fork crmne/ruby_llm --clone && cd ruby_llm
bundle install
overcommit --install              # Required - sets up git hooks
gh issue develop 123 --checkout   # or create your own branch
# make changes, add tests
overcommit --run
gh pr create --web

# Re-recording VCR cassettes (requires API keys):
rake vcr:record[openai,anthropic] # Specific providers
rake vcr:record[all]              # Everything
```

Always check cassettes for leaked API keys before committing.
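If you want to automate the cassette check, a small script like the following works. The cassette directory (`spec/fixtures/vcr_cassettes`) and the key patterns are assumptions - adjust them to match the project's actual layout and the providers you recorded against.

```ruby
# Scan VCR cassette files for strings that look like leaked API keys.
# Directory path and key patterns are assumptions - adjust as needed.
require "find"

KEY_PATTERNS = [
  /sk-[A-Za-z0-9]{20,}/,      # OpenAI-style keys (assumed shape)
  /sk-ant-[A-Za-z0-9-]{20,}/  # Anthropic-style keys (assumed shape)
].freeze

def leaked_keys(dir)
  hits = []
  Find.find(dir) do |path|
    next unless path.end_with?(".yml")
    File.foreach(path).with_index(1) do |line, lineno|
      KEY_PATTERNS.each do |pat|
        hits << "#{path}:#{lineno}" if line.match?(pat)
      end
    end
  end
  hits
end

dir = "spec/fixtures/vcr_cassettes"
puts leaked_keys(dir) if Dir.exist?(dir)
```

Run it after re-recording; any output means a cassette needs scrubbing before you commit.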
- Never edit `models.json`, `aliases.json`, or `available-models.md` - they're auto-generated by `rake models`
- Write tests for any new functionality
- Keep it simple - if it needs extensive documentation, reconsider the approach
- Model data comes from models.dev. If you spot issues in the upstream registry, please report them via their site or repo.
This is my gift to the Ruby community.
Gifts don't come with SLAs. I respond when I can.
If RubyLLM helps you, consider sponsoring.
Sponsorship is just a way to say thanks - it doesn't buy priority support or feature requests.
Go ship AI apps!
— Carmine