v0.14.0
🧪 New Experiments
Human in the Loop Confirmation for Agents
This new feature lets you require human confirmation for an Agent's tool calls. In short, when you build an Agent, you can define which tools require confirmation from the user, and you can choose how that confirmation should be requested. For example, you can specify that one tool always requires confirmation, whereas another tool requires confirmation only the first time the Agent uses it:
```python
# `cons` is a rich Console instance used by RichConsoleUI
from rich.console import Console

cons = Console()

agent = Agent(
    chat_generator=OpenAIChatGenerator(model="gpt-4.1"),
    tools=[balance_tool, addition_tool, phone_tool],
    system_prompt="You are a helpful financial assistant. Use the provided tool to get bank balances when needed.",
    confirmation_strategies={
        balance_tool.name: BlockingConfirmationStrategy(
            confirmation_policy=AlwaysAskPolicy(), confirmation_ui=RichConsoleUI(console=cons)
        ),
        addition_tool.name: BlockingConfirmationStrategy(
            confirmation_policy=NeverAskPolicy(), confirmation_ui=SimpleConsoleUI()
        ),
        phone_tool.name: BlockingConfirmationStrategy(
            confirmation_policy=AskOncePolicy(), confirmation_ui=SimpleConsoleUI()
        ),
    },
)
```

Here are code examples, including one that shows how to combine the feature with breakpoints:
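The three policies differ only in when the user is prompted. Their semantics can be sketched in a few lines of plain Python (an illustrative mock, not the actual haystack-experimental implementation; all names here are hypothetical):

```python
# Hypothetical sketch of the three confirmation policies' semantics.
class AlwaysAsk:
    def should_ask(self, tool_name, asked):
        return True  # prompt on every call

class NeverAsk:
    def should_ask(self, tool_name, asked):
        return False  # auto-approve every call

class AskOnce:
    def should_ask(self, tool_name, asked):
        # Prompt only the first time this tool is used.
        return tool_name not in asked

def run_tool(tool_name, policy, asked, confirm=input):
    if policy.should_ask(tool_name, asked):
        asked.add(tool_name)
        if confirm(f"Run {tool_name}? [y/n] ").lower() != "y":
            return "rejected"
    return "executed"

asked = set()
auto_yes = lambda prompt: "y"
print(run_tool("phone_tool", AskOnce(), asked, confirm=auto_yes))  # prompts first time
print(run_tool("phone_tool", AskOnce(), asked, confirm=auto_yes))  # no prompt afterwards
```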
Adjusting Header Levels in Markdown Files
This new MarkdownHeaderLevelInferrer component is useful for processing Markdown files. It infers and rewrites header levels in Markdown text to normalize the hierarchy. For example, when you use docling to parse files, it outputs all headers at level 2 (##). The new MarkdownHeaderLevelInferrer component can adjust the levels:
- First header → Always becomes level 1 (#)
- Subsequent headers → Level increases if no content between headers, stays same if content exists
- Maximum level → Capped at 6 (######)
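The rules above can be sketched as a small standalone function (a hypothetical re-implementation for illustration, not the component's actual code):

```python
import re

def infer_header_levels(text: str) -> str:
    # First header becomes level 1; a header directly following another
    # header (no content between) goes one level deeper; a header after
    # content stays at the current level; levels are capped at 6.
    out = []
    level = 0
    content_since_header = True
    for line in text.split("\n"):
        if re.match(r"^#{1,6} ", line):
            if level == 0:
                level = 1
            elif not content_since_header:
                level = min(level + 1, 6)
            out.append("#" * level + " " + line.lstrip("#").lstrip())
            content_since_header = False
        else:
            if line.strip():
                content_since_header = True
            out.append(line)
    return "\n".join(out)

text = "## Title\n## Subheader\nSection\n## Subheader\nMore Content"
print(infer_header_levels(text))
# → # Title\n## Subheader\nSection\n## Subheader\nMore Content
```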
```python
from haystack import Document
from haystack_experimental.components.preprocessors import MarkdownHeaderLevelInferrer

# Create a document with uniform header levels
text = "## Title\n## Subheader\nSection\n## Subheader\nMore Content"
doc = Document(content=text)

# Initialize the inferrer and process the document
inferrer = MarkdownHeaderLevelInferrer()
result = inferrer.run([doc])

# The headers are now normalized with proper hierarchy
print(result["documents"][0].content)
# > # Title\n## Subheader\nSection\n## Subheader\nMore Content
```

Summarizing Long Texts
This new Summarizer component summarizes text using an LLM. The component can even do so recursively for very long texts. It's inspired by code from an OpenAI blog post.
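The recursive idea can be sketched independently of the component (an illustrative mock with a stub in place of the LLM call; the function names and chunking scheme are assumptions, not the component's actual logic):

```python
# Hypothetical sketch of recursive summarization: split the text into
# chunks, summarize each, and recurse on the concatenated summaries
# until the text fits within the target length.
def recursive_summarize(text, summarize, max_len=200, chunk_len=100):
    if len(text) <= max_len:
        return summarize(text)
    chunks = [text[i:i + chunk_len] for i in range(0, len(text), chunk_len)]
    partial = " ".join(summarize(c) for c in chunks)
    if len(partial) >= len(text):  # guard against non-shrinking input
        return summarize(partial)
    return recursive_summarize(partial, summarize, max_len, chunk_len)

# Stub "LLM" that truncates; a real call would use a chat generator.
summary = recursive_summarize("word " * 200, lambda t: t[:50].strip())
print(len(summary))
```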
```python
from haystack_experimental.components.summarizers.summarizer import Summarizer
from haystack.components.generators.chat import OpenAIChatGenerator
from haystack import Document

text = ("Machine learning is a subset of artificial intelligence that provides systems "
        "the ability to automatically learn and improve from experience without being "
        "explicitly programmed. The process of learning begins with observations or data. "
        "Supervised learning algorithms build a mathematical model of sample data, known as "
        "training data, in order to make predictions or decisions. Unsupervised learning "
        "algorithms take a set of data that contains only inputs and find structure in the data. "
        "Reinforcement learning is an area of machine learning where an agent learns to behave "
        "in an environment by performing actions and seeing the results. Deep learning uses "
        "artificial neural networks to model complex patterns in data. Neural networks consist "
        "of layers of connected nodes, each performing a simple computation.")
doc = Document(content=text)

chat_generator = OpenAIChatGenerator(model="gpt-4")
summarizer = Summarizer(chat_generator=chat_generator)
summarizer.run(documents=[doc])
```

Full Changelog: v0.13.0...v0.14.0