Releases: deepset-ai/haystack-experimental
v0.14.3
- fix: Use deepcopy with exceptions instead of deepcopy (#397)
- fix: Fix typing issue for Agent (#398)
- feat: Update QueryExpander, MultiQueryEmbeddingRetriever and MultiQueryTextRetriever to auto warm up at runtime (#395)
- fix: Expose MarkdownHeaderLevelInferrer in preprocessors module and fix example (#384)
v0.14.2
v0.14.1
v0.14.0
🧪 New Experiments
Human-in-the-Loop Confirmation for Agents
This new feature lets you require human confirmation for an Agent's tool calls. In short, when you build an Agent, you can define which tools require confirmation from the user and choose how that confirmation should be requested. For example, you can specify that one tool always requires confirmation, whereas another tool requires confirmation only the first time the Agent uses it:
```python
from rich.console import Console

from haystack.components.generators.chat import OpenAIChatGenerator
from haystack_experimental.components.agents.agent import Agent

# The confirmation strategies, policies, and console UIs below ship with the
# human-in-the-loop module of haystack-experimental; the exact import path may
# vary by release. balance_tool, addition_tool, and phone_tool are Tool
# objects defined elsewhere.
cons = Console()

agent = Agent(
    chat_generator=OpenAIChatGenerator(model="gpt-4.1"),
    tools=[balance_tool, addition_tool, phone_tool],
    system_prompt="You are a helpful financial assistant. Use the provided tool to get bank balances when needed.",
    confirmation_strategies={
        balance_tool.name: BlockingConfirmationStrategy(
            confirmation_policy=AlwaysAskPolicy(), confirmation_ui=RichConsoleUI(console=cons)
        ),
        addition_tool.name: BlockingConfirmationStrategy(
            confirmation_policy=NeverAskPolicy(), confirmation_ui=SimpleConsoleUI()
        ),
        phone_tool.name: BlockingConfirmationStrategy(
            confirmation_policy=AskOncePolicy(), confirmation_ui=SimpleConsoleUI()
        ),
    },
)
```

Code examples are available, including one that shows how to combine this feature with breakpoints.
Adjusting Header Levels in Markdown Files
The new MarkdownHeaderLevelInferrer component is useful when processing Markdown files. It infers and rewrites header levels in Markdown text to normalize the hierarchy. For example, when you use docling to parse files, it emits all headers at level 2 (##). MarkdownHeaderLevelInferrer can adjust the levels:
- First header → always becomes level 1 (#)
- Subsequent headers → level increases if no content between headers, stays the same if content exists
- Maximum level → capped at 6 (######)
```python
from haystack import Document
from haystack_experimental.components.preprocessors import MarkdownHeaderLevelInferrer

# Create a document with uniform header levels
text = "## Title\n## Subheader\nSection\n## Subheader\nMore Content"
doc = Document(content=text)

# Initialize the inferrer and process the document
inferrer = MarkdownHeaderLevelInferrer()
result = inferrer.run([doc])

# The headers are now normalized with proper hierarchy
print(result["documents"][0].content)
# Output: # Title\n## Subheader\nSection\n## Subheader\nMore Content
```

Summarizing Long Texts
This new LLMSummarizer component summarizes text using an LLM. The component can even do so recursively for very long texts. It's inspired by code from an OpenAI blog post.
```python
from haystack import Document
from haystack.components.generators.chat import OpenAIChatGenerator
from haystack_experimental.components.summarizers.summarizer import Summarizer

text = ("Machine learning is a subset of artificial intelligence that provides systems "
"the ability to automatically learn and improve from experience without being "
"explicitly programmed. The process of learning begins with observations or data. "
"Supervised learning algorithms build a mathematical model of sample data, known as "
"training data, in order to make predictions or decisions. Unsupervised learning "
"algorithms take a set of data that contains only inputs and find structure in the data. "
"Reinforcement learning is an area of machine learning where an agent learns to behave "
"in an environment by performing actions and seeing the results. Deep learning uses "
"artificial neural networks to model complex patterns in data. Neural networks consist "
"of layers of connected nodes, each performing a simple computation.")
doc = Document(content=text)
chat_generator = OpenAIChatGenerator(model="gpt-4")
summarizer = Summarizer(chat_generator=chat_generator)
summarizer.run(documents=[doc])
```

Full Changelog: v0.13.0...v0.14.0
v0.13.0
🧪 New Experiments
Semantic Chunking based on Sentence Embeddings
We added a new EmbeddingBasedDocumentSplitter component that splits longer texts at points where consecutive sentences are no longer semantically related, so the resulting Documents are more semantically coherent. The component is initialized with a document embedder. PR #353
```python
from haystack import Document
from haystack.components.embedders import SentenceTransformersDocumentEmbedder
from haystack_experimental.components.preprocessors import EmbeddingBasedDocumentSplitter
doc = Document(
content="This is a first sentence. This is a second sentence. This is a third sentence. "
"Completely different topic. The same completely different topic."
)
embedder = SentenceTransformersDocumentEmbedder()
splitter = EmbeddingBasedDocumentSplitter(
document_embedder=embedder,
sentences_per_group=2,
percentile=0.95,
min_length=50,
max_length=1000
)
splitter.warm_up()
result = splitter.run(documents=[doc])
```

Hallucination Risk Assessment for LLM Answers
The OpenAIChatGenerator can now estimate the risk of hallucination in generated answers. You can configure a risk threshold, and the OpenAIChatGenerator will refuse to return an answer if the hallucination risk exceeds that threshold. Refer to the research paper and repo for technical details on how the risk is calculated. PR #359
🚀 Try out the component here!
```python
from haystack.dataclasses import ChatMessage
from haystack_experimental.components.generators.chat.openai import HallucinationScoreConfig, OpenAIChatGenerator
llm = OpenAIChatGenerator(model="gpt-4o")
rag_result = llm.run(
messages=[
ChatMessage.from_user(
text="Task: Answer strictly based on the evidence provided below.\n"
"Question: Who won the Nobel Prize in Physics in 2019?\n"
"Evidence:\n"
"- Nobel Prize press release (2019): James Peebles (1/2); Michel Mayor & Didier Queloz (1/2).\n"
"Constraints: If evidence is insufficient or conflicting, refuse."
)
],
hallucination_score_config=HallucinationScoreConfig(skeleton_policy="evidence_erase"),
)
print(f"Decision: {rag_result['replies'][0].meta['hallucination_decision']}")
print(f"Risk bound: {rag_result['replies'][0].meta['hallucination_risk']:.3f}")
print(f"Rationale: {rag_result['replies'][0].meta['hallucination_rationale']}")
print(f"Answer:\n{rag_result['replies'][0].text}")Multi-Query Retrieval for Query Expansion
Two newly introduced components, MultiQueryKeywordRetriever and MultiQueryEmbeddingRetriever, enable concurrent processing of multiple queries. They work best in combination with the QueryExpander component, which, given a single user query, generates multiple queries. You can learn more about query expansion in this Jupyter notebook. A sketch of how the pieces fit together follows. PR #358
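The snippet below is a minimal sketch of chaining the two in a pipeline. The retriever's import path and its interface (wrapping a standard BM25 retriever at init and accepting the expanded queries at run time) are our assumptions for illustration, not confirmed by these notes:

```python
from haystack import Document, Pipeline
from haystack.components.generators.chat import OpenAIChatGenerator
from haystack.components.retrievers.in_memory import InMemoryBM25Retriever
from haystack.document_stores.in_memory import InMemoryDocumentStore
from haystack_experimental.components.query import QueryExpander
# Hypothetical import path; check the release for the actual module.
from haystack_experimental.components.retrievers import MultiQueryKeywordRetriever

document_store = InMemoryDocumentStore()
document_store.write_documents([
    Document(content="Solar power is a leading renewable energy source."),
    Document(content="Wind turbines convert moving air into electricity."),
])

pipeline = Pipeline()
pipeline.add_component(
    "expander",
    QueryExpander(chat_generator=OpenAIChatGenerator(model="gpt-4.1-mini"), n_expansions=3),
)
# Assumption: the multi-query retriever wraps a regular retriever and exposes
# a "queries" input that consumes the expander's output.
pipeline.add_component(
    "retriever",
    MultiQueryKeywordRetriever(retriever=InMemoryBM25Retriever(document_store=document_store)),
)
pipeline.connect("expander.queries", "retriever.queries")

result = pipeline.run(data={"expander": {"query": "green energy sources"}})
print(result["retriever"]["documents"])
```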
✅ Adopted Experiments
Full Changelog: v0.12.0...v0.13.0
v0.12.0
🧪 New Experiments
🚧 Agent Breakpoints
We've introduced Agent Breakpoints, a feature that allows you to pause and inspect specific stages within the Agent component's execution.
You can use this feature to:
- Place breakpoints directly on the chat_generator to debug interactions.
- Add breakpoints to the tools used by the agent to inspect tool behavior during execution.
🔧 Example Usage for Agent within Pipeline
```python
from haystack.components.generators.chat import OpenAIChatGenerator
from haystack.dataclasses import ChatMessage
from haystack.tools.tool import Tool
from haystack_experimental.components.agents.agent import Agent
from haystack_experimental.dataclasses.breakpoints import AgentBreakpoint, Breakpoint, ToolBreakpoint

# Tool Function
def calculate(expression: str) -> dict:
try:
result = eval(expression, {"__builtins__": {}})
return {"result": result}
except Exception as e:
return {"error": str(e)}
# Tool Definition
calculator_tool = Tool(
name="calculator",
description="Evaluate basic math expressions.",
parameters={
"type": "object",
"properties": {
"expression": {"type": "string", "description": "Math expression to evaluate"}
},
"required": ["expression"]
},
function=calculate,
outputs_to_state={"calc_result": {"source": "result"}}
)
# Agent Setup
agent = Agent(
chat_generator=OpenAIChatGenerator(),
tools=[calculator_tool],
exit_conditions=["calculator"],
state_schema={
"calc_result": {"type": int},
}
)
# Directory where the saved state is written when the breakpoint triggers
debug_path = "debug_states/"
# Breakpoint on the chat_generator of the Agent
chat_generator_breakpoint = Breakpoint("chat_generator", visit_count=0)
agent_breakpoint = AgentBreakpoint(break_point=chat_generator_breakpoint, agent_name='database_agent')
# Run the Agent
agent.warm_up()
response = agent.run(messages=[ChatMessage.from_user("What is 7 * (4 + 2)?")], break_point=agent_breakpoint, debug_path=debug_path)
# Breakpoint on the tools of the Agent
tool_breakpoint = ToolBreakpoint(component_name="tool_invoker", visit_count=0, tool_name="calculator")
agent_breakpoint = AgentBreakpoint(break_point=tool_breakpoint, agent_name='database_agent')
# Run the Agent
agent.warm_up()
response = agent.run(messages=[ChatMessage.from_user("What is 7 * (4 + 2)?")], break_point=agent_breakpoint, debug_path=debug_path)
```

📦 Breakpoints Dataclass
We've added a dedicated Breakpoint dataclass interface to standardize the way breakpoints are declared and managed.
- Use `Breakpoint` to target generic components.
- Use `AgentBreakpoint` for setting breakpoints on the agent.
- Use `ToolBreakpoint` to set breakpoints on specific tools used by the agent.
Related PRs
- feat: adding agents back to the experimental repo (#326)
Other Updates
v0.11.0
🧪 New Experiments
Query Expander component
We are introducing a component that generates a list of semantically similar queries to improve retrieval recall in RAG systems.
```python
from haystack.components.generators.chat.openai import OpenAIChatGenerator
from haystack_experimental.components.query import QueryExpander
expander = QueryExpander(
chat_generator=OpenAIChatGenerator(model="gpt-4.1-mini"),
n_expansions=3
)
result = expander.run(query="green energy sources")
print(result["queries"])
# Output: ['alternative query 1', 'alternative query 2', 'alternative query 3', 'green energy sources']
# Note: Up to 3 additional queries + 1 original query (if include_original_query=True)
# To control total number of queries:
expander = QueryExpander(n_expansions=2, include_original_query=True) # Up to 3 total
# or
expander = QueryExpander(n_expansions=3, include_original_query=False)  # Exactly 3 total
```

- feat: add QueryExpander component by @mpangrazzi in #331
🔀 New Document Routers
We're introducing two new Routers: DocumentTypeRouter and DocumentLengthRouter.
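As a rough illustration, here is a minimal sketch of DocumentLengthRouter. The import path, the threshold parameter, and the output names (short_documents / long_documents) are assumptions for illustration; check the component docs for the actual interface:

```python
from haystack import Document
# Assumed import path inside haystack-experimental at the time of this release.
from haystack_experimental.components.routers import DocumentLengthRouter

# Assumption: documents whose content length is at or below the threshold are
# routed to "short_documents"; the rest go to "long_documents".
# DocumentTypeRouter works analogously, routing on a document's MIME type.
router = DocumentLengthRouter(threshold=10)

docs = [
    Document(content="Short."),
    Document(content="This document is long enough to take the other branch."),
]
result = router.run(documents=docs)
print(len(result["short_documents"]), len(result["long_documents"]))
```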
🖼️ New Multimodal Features
We introduced several new multimodal features, mostly focused on indexing and retrieval.
A notebook will be published soon to show practical usage examples.
- multimodal support in `AmazonBedrockChatGenerator`
- new image Converters
- `SentenceTransformersDocumentImageEmbedder`: a component to compute embeddings for image-based documents
- `LLMDocumentContentExtractor`: a component to extract textual content from image-based documents using a vision-enabled LLM
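To make the embedder concrete, here is a hedged sketch. The import path, the model name, and the assumption that the component reads each image from the document's file_path metadata are ours; see the linked PRs for the actual interface:

```python
from haystack import Document
# Assumed import path; see PR #319 for the actual module.
from haystack_experimental.components.embedders import SentenceTransformersDocumentImageEmbedder

# Assumption: the embedder loads the image referenced by each document's
# "file_path" meta field and embeds it with a CLIP-style model.
embedder = SentenceTransformersDocumentImageEmbedder(model="sentence-transformers/clip-ViT-B-32")
embedder.warm_up()

docs = [Document(content="A capybara", meta={"file_path": "capybara.jpg"})]
result = embedder.run(documents=docs)
print(len(result["documents"][0].embedding))
```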
Related PRs
- refactor: adopt pypdfium2 for PDF to image conversion by @anakin87 in #308
- feat: multimodal support in `AmazonBedrockChatGenerator` by @anakin87 in #307
- test: Fix mypy typing by @sjrl in #309
- feat: Add `DocumentToImageContent` component to help enable RAG with image Documents by @sjrl in #311
- chore: fix format for `DocumentToImageContent` by @anakin87 in #318
- chore: ignore type errors in Bedrock monkey patches by @anakin87 in #322
- feat: add `SentenceTransformersDocumentImageEmbedder` by @anakin87 in #319
- feat: Add `DocumentTypeRouter` by @sjrl in #321
- refactor: refactor multimodal components and utility functions by @anakin87 in #324
- fix: Fix storage of file path in ImageContent by @sjrl in #325
- refactor: Refactor converters to follow embedders directory structure by @sjrl in #333
- feat: Add `normalize_embeddings` to `SentenceTransformersDocumentImageEmbedder` to match signature of other embedders by @sjrl in #335
- feat: add `DocumentLengthRouter` component by @anakin87 in #334
- feat: Add ImageFileToDocument converter by @sjrl in #336
- feat: Add `LLMDocumentContentExtractor` to enable Vision-based LLMs to describe/convert an image into text by @sjrl in #338
- docs: add usage examples to docstrings of multimodal components by @anakin87 in #340
Other Updates
- refactor: synchronising/merging all pipeline related code with haystack main repository by @davidsbatista in #312
- chore: align Haystack experimental Hatch scripts by @anakin87 in #315
- chore: align experimental type checking with Haystack by @anakin87 in #320
- refactor: Refactor experimental Pipeline to use inheritance by @sjrl in #323
- fix: refactor code and update `init_params` in `debug_state` by @Amnah199 in #317
- chore: fix `ruff` linting error by @Amnah199 in #329
- fix: Fix logger message for pipeline breakpoints by @sjrl in #327
- fix: Fix validate_input becoming public method by @sjrl in #337
- Refactor serialization of breakpoints by @Amnah199 in #332
New Contributors
- @mpangrazzi made their first contribution in #331
Full Changelog: v0.10.0...v0.11.0
v0.10.0
🧪 New Experiments
🖼️ Multimodal Text Generation
We are adding support for passing images in user messages and other multimodal features.
```python
from haystack_experimental.dataclasses import ImageContent, ChatMessage
from haystack_experimental.components.generators.chat import OpenAIChatGenerator
image_url = "https://cdn.britannica.com/79/191679-050-C7114D2B/Adult-capybara.jpg"
image_content = ImageContent.from_url(image_url)
message = ChatMessage.from_user(
content_parts=["Describe the image in short.", image_content]
)
llm = OpenAIChatGenerator(model="gpt-4o-mini")
print(llm.run([message])["replies"][0].text)
```

For the list of implemented features, see #302.
For more usage examples, check out the example: 📓 Introduction to Multimodal Text Generation.
Related PRs
- feat: `ImageContent` dataclass by @anakin87 in #286
- feat: Add `ImageFileToImageContent` and `PDFToImageContent` converters by @sjrl in #290
- feat: multimodal support in `OpenAIChatGenerator` by @anakin87 in #292
- chore: improve Image Converters pydoc config by @anakin87 in #295
- feat: add convenience class methods to `ImageContent` by @anakin87 in #294
- chore: move `ImageContent` to a separate module by @anakin87 in #296
- feat: add Jinja2 ChatMessage extension by @anakin87 in #297
- feat: `ImageContent` visualization by @anakin87 in #300
- feat: extend `ChatPromptBuilder` to support string templates by @anakin87 in #299
- chore: update README with multimodal experiment by @anakin87 in #303
- fix: move IPython import by @anakin87 in #304
- feat: `ImageContent` validation by @anakin87 in #305
🐛 Bug Fixes
- fix: Update `__init__.py` to use double underscore by @sjrl in #288
- fix: preserve initialization parameters in debug state when run params are not supplied by @Amnah199 in #293
✅ Adopted Experiments
- chore: update/clean up experimental by @anakin87 in #285
- chore: Remove SuperComponent and pre-made super components. Update Readme by @sjrl in #287
- chore: remove dependencies needed for `MultiFileConverter` by @anakin87 in #298
Other Updates
- Update issue template for adding new experiments by @bilgeyucel in #283
- docs: add missing pydocs by @dfokina in #291
Full Changelog: v0.9.0...v0.10.0
v0.9.0
🔧 Updates to Experiments
Adding breakpoints to components in a Pipeline
It's now possible to set a breakpoint at any component in any pipeline, forcing the pipeline execution to stop before that component runs and generating a JSON file with the complete state of the pipeline at that point.
Usage Examples
```python
# Setting breakpoints
pipeline.run(
data={"input": "value"},
breakpoints={("component_name", 0)}, # Break at the first visit
debug_path="debug_states/"
)
```

This generates a JSON file with the complete pipeline state captured before the breakpoint component runs.
```python
# Resuming from a saved state
state = Pipeline.load_state("debug_states/component_state.json")
pipeline.run(
data={"input": "value"},
resume_state=state
)
```

🧑‍🍳 See an example notebook here
💬 Share your feedback in this discussion
✅ Adopted Experiments
- chore: Remove `Agent` after Haystack 2.12 release (#263) @julian-risch
- chore: Remove `AutoMergingRetriever` after Haystack 2.12 release (#265) @davidsbatista
Other Updates
- Proposal for changing internal working of Agent (#245) @sjrl
- refactor: Streamline super components input and output mapping logic (#243) @sjrl
- refactor: Small updates to Agent. Make pipeline internal, add check for warm_up (#244) @sjrl
- feat: Updates to insertion of values into `State` (#239) @sjrl
- feat: Add `unclassified` to output of MultiFileConverter (#240) @julian-risch
- feat: Enhance tool error logs and some refactoring (#235) @sjrl
Full Changelog: v0.8.0...v0.9.0
v0.8.0
🔧 Updates to Experiments
Stream ChatGenerator responses with Agent
The Agent component now allows setting a streaming callback at init and run time. This way, an Agent's response can be streamed in chunks, enabling faster feedback for developers and end users. #233
```python
agent = Agent(chat_generator=chat_generator, tools=[weather_tool])
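
# A streaming callback is any callable that receives a StreamingChunk as it
# arrives. This minimal callback (our illustration, not from the release
# notes) prints each chunk's text immediately:
def streaming_callback(chunk):
    print(chunk.content, end="", flush=True)
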
response = agent.run([ChatMessage.from_user("Hello")], streaming_callback=streaming_callback)
```

🐛 Bug Fixes
- We fixed a bug that prevented ComponentTool from working with Jinja2-based components (PromptBuilder, ChatPromptBuilder, ConditionalRouter, OutputAdapter). #234
- The `Agent` component now deserializes Tools with the right class and uses `deserialize_tools_inplace`. #213 #222
✅ Adopted Experiments
- chore: remove `LLMMetadataExtractor` by @davidsbatista in #227
- chore: Remove some missed utility functions from previous experiments by @sjrl in #232
- chore: removing async version of `InMemoryDocumentStore`, `DocumentWriter`, `OpenAIChatGenerator`, InMemory Retrievers by @davidsbatista in #220
- chore: remove pipeline experiments by @mathislucka in #214
🛑 Discontinued Experiments
- chore: remove evaluation harness experiment by @julian-risch in #231
Full Changelog: v0.7.0...v0.8.0