Yes, TruLens supports evaluation and instrumentation for LangChain-based models that implement the BaseChatModel interface, including custom/internal LLMs. You can use both the LangChain integration (TruChain) and the Langchain provider to run feedback functions without relying on OpenAI.

The Langchain provider in TruLens accepts any LangChain LLM or ChatModel (i.e., anything implementing BaseLLM or BaseChatModel), so your custom class is supported. The provider wraps your model and uses its predict or predict_messages methods to generate completions for feedback evaluation; there is no requirement for the underlying model to be OpenAI-based. This is confirmed in the provider implementation.

Answer selected by sfc-gh-jreini