Summary
TruLens supports blocking guardrails (evaluate before returning a response) for both LangChain and LlamaIndex, but the existing skills don't cover guardrail configuration. Users who want to use TruLens feedback functions as runtime safety checks have no guided workflow.
What
Create skills/guardrails/SKILL.md that walks users through:
- Choosing which feedback functions to use as guardrails (safety metrics like harmfulness/toxicity are obvious choices, but context_relevance also works well as a hallucination gate)
- Configuring threshold and action (block, warn, fallback response)
- Framework-specific setup for LangChain (WithFeedbackFilterDocuments) and LlamaIndex guardrail patterns
- Testing guardrails with adversarial inputs
- Monitoring guardrail trigger rates in the dashboard
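To make the threshold/action configuration concrete, here is a minimal framework-agnostic sketch of a blocking guardrail. This is not the TruLens API: the `Guardrail` class, its `action` values, and the `toy_safety` feedback function are all hypothetical stand-ins for a real TruLens feedback function scored on [0, 1].

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch, not the TruLens API: a feedback function scored in
# [0, 1] (higher = safer) gates the response before it reaches the user.
@dataclass
class Guardrail:
    feedback: Callable[[str], float]
    threshold: float = 0.5
    action: str = "block"            # "block", "warn", or "fallback"
    fallback: str = "I can't answer that."

    def apply(self, response: str) -> str:
        score = self.feedback(response)
        if score >= self.threshold:
            return response           # passes the gate unchanged
        if self.action == "warn":
            return f"[low feedback score {score:.2f}] {response}"
        if self.action == "fallback":
            return self.fallback      # swap in a canned safe response
        raise ValueError(f"response blocked (score {score:.2f} < {self.threshold})")

# Toy stand-in for a real harmfulness/toxicity feedback function.
def toy_safety(text: str) -> float:
    return 0.0 if "attack" in text else 1.0

guard = Guardrail(feedback=toy_safety, threshold=0.5, action="fallback")
print(guard.apply("Here is how to bake bread."))  # safe: passes through
print(guard.apply("Here is how to attack..."))    # unsafe: fallback returned
```

The skill would replace `toy_safety` with an actual TruLens feedback function and wire the gate into the app's response path; the `action` field captures the block/warn/fallback choice discussed above.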
Reference
- src/apps/langchain/trulens/apps/langchain/guardrails.py and src/apps/llamaindex/trulens/apps/llamaindex/guardrails.py
- examples/quickstart/blocking_guardrails.ipynb
- skills/ directory
Difficulty
Easy-Medium