
Commit 9e28872

svilupp and web-flow authored
updated docs
Co-authored-by: svilupp <[email protected]>
1 parent d7e105f commit 9e28872

File tree

6 files changed, +518 -17 lines changed


README.md

Lines changed: 73 additions & 17 deletions
@@ -103,7 +103,7 @@ For more practical examples, see the `examples/` folder and the [Advanced Exampl
 - [Using MistralAI API and other OpenAI-compatible APIs](#using-mistralai-api-and-other-openai-compatible-apis)
 - [Using OpenAI Responses API](#using-openai-responses-api)
 - [Using Anthropic Models](#using-anthropic-models)
-- [Advanced Observability with Logfire.jl](#advanced-observability-with-logfirejl)
+- [Companion Packages: Logfire.jl & TextPrompts.jl](#companion-packages-logfirejl--textpromptsjl)
 - [More Examples](#more-examples)
 - [Package Interface](#package-interface)
 - [Frequently Asked Questions](#frequently-asked-questions)
@@ -658,10 +658,19 @@ msg = aigenerate(
 ```


-### Advanced Observability with Logfire.jl
+### Companion Packages: Logfire.jl & TextPrompts.jl
+
+PromptingTools.jl integrates with two companion packages that provide observability and prompt management capabilities. Both packages are part of a **cross-language ecosystem** - the same tools and prompt files work with Python and TypeScript, enabling teams to share infrastructure across different codebases.
+
+#### Observability with Logfire.jl

 [Logfire.jl](https://github.com/svilupp/Logfire.jl) provides OpenTelemetry-based observability for your LLM applications. It automatically traces all your AI calls with detailed information about tokens, costs, messages, and latency.

+| Language | Package | Installation |
+|----------|---------|--------------|
+| Julia | [Logfire.jl](https://github.com/svilupp/Logfire.jl) | `Pkg.add("Logfire")` |
+| Python | [logfire](https://docs.pydantic.dev/logfire/) | `pip install logfire` |
+
 **Quick Setup:**

 ```julia
@@ -690,32 +699,79 @@ aigenerate("What is 2 + 2?"; model = "gpt4om")
 - Latency measurements and cache/streaming flags
 - Tool/function calls and structured extraction results

-**Instrument Individual Models:**
+**Alternative Backends:**
+
+You don't have to use Logfire cloud - send traces to any OpenTelemetry-compatible backend (Jaeger, Langfuse, etc.). That said, I strongly recommend [Pydantic Logfire](https://pydantic.dev/logfire) - their free tier provides hundreds of thousands of traced conversations per month.
+
+See the [Logfire.jl documentation](https://svilupp.github.io/Logfire.jl/dev) and [`examples/observability_with_logfire.jl`](examples/observability_with_logfire.jl) for more details.
+
+#### Prompt Management with TextPrompts.jl

-You don't have to instrument all models. Wrap only specific models for selective tracing:
+[TextPrompts.jl](https://github.com/svilupp/textprompts/tree/main/packages/TextPrompts.jl) enables managing prompts as text files with optional TOML metadata. This approach allows version-controlled, collaborative prompt engineering separate from your code.
+
+| Language | Package | Installation |
+|----------|---------|--------------|
+| Julia | [TextPrompts.jl](https://github.com/svilupp/textprompts/tree/main/packages/TextPrompts.jl) | `Pkg.add("TextPrompts")` |
+| Python | [textprompts](https://github.com/svilupp/textprompts/tree/main/packages/textprompts) | `pip install textprompts` |
+| TypeScript | [@anthropic/textprompts](https://github.com/svilupp/textprompts/tree/main/packages/textprompts-ts) | `npm install @anthropic/textprompts` |
+
+**Quick Setup:**

 ```julia
-Logfire.instrument_promptingtools_model!("my-local-llm")
+using Pkg
+Pkg.add("TextPrompts")
+
+using TextPrompts, PromptingTools
+
+# Load prompt from file and format with placeholders
+prompt = load_prompt("prompts/system.txt")
+system_msg = prompt(; role = "Julia expert") |> SystemMessage
+
+# Use with PromptingTools
+response = aigenerate([system_msg, UserMessage("How do macros work?")]; model = "gpt4om")
 ```

-**Alternative Backends:**
+**Prompt File Format:**
+
+```
+---
+title = "Expert System Prompt"
+version = "1.0"
+description = "A system prompt for expert assistance"
+---
+You are a {role}. Be helpful and accurate.
+```
+
+**Benefits:**
+- **Version control**: Track prompt changes in git with full history
+- **Validation**: Catch placeholder typos before LLM calls execute
+- **Metadata**: Track version, author, description per prompt
+- **Separation of concerns**: Keep prompt engineering separate from code
+- **Cross-language**: Same prompt files work in Python, TypeScript, and Julia
+
+See [`examples/working_with_textprompts.jl`](examples/working_with_textprompts.jl) for more details.

-You don't have to use Logfire cloud - send traces to any OpenTelemetry-compatible backend:
+#### Recommended Workflow
+
+Combine both packages for a complete LLM development workflow:
+
+1. **Store prompts** in version-controlled text files with TextPrompts.jl
+2. **Trace all calls** with Logfire.jl for full observability
+3. **Analyze performance** in the Logfire dashboard
+4. **Iterate and improve** prompts based on real-world data

 ```julia
-# Local development with Jaeger
-ENV["OTEL_EXPORTER_OTLP_ENDPOINT"] = "http://localhost:4318"
-Logfire.configure(service_name = "my-app", send_to_logfire = :always)
+using TextPrompts, PromptingTools, Logfire

-# Or use Langfuse
-ENV["OTEL_EXPORTER_OTLP_ENDPOINT"] = "https://cloud.langfuse.com/api/public/otel"
-ENV["OTEL_EXPORTER_OTLP_HEADERS"] = "Authorization=Basic <base64-credentials>"
-Logfire.configure(service_name = "my-app", send_to_logfire = :always)
-```
+Logfire.configure(service_name = "my-app")
+Logfire.instrument_promptingtools!()

-That said, I strongly recommend [Pydantic Logfire](https://pydantic.dev/logfire) - their free tier provides hundreds of thousands of traced conversations per month, which is more than enough for most use cases.
+# Load versioned prompts and trace the LLM call
+system = load_prompt("prompts/system.txt")(; role = "Expert") |> SystemMessage
+response = aigenerate([system, UserMessage("Analyze this data")]; model = "gpt4om")
+```

-See the [Logfire.jl documentation](https://svilupp.github.io/Logfire.jl/dev) and [`examples/observability_with_logfire.jl`](examples/observability_with_logfire.jl) for more details.
+See the [announcement post](https://discourse.julialang.org/t/announcing-logfire-jl-textprompts-jl-observability-and-prompt-management-for-julia-genai/134268) for more information.

 ### More Examples

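The new "Alternative Backends" paragraph mentions Jaeger and Langfuse without code; a minimal sketch for a local Jaeger instance, reusing the environment variable and the `Logfire.configure` / `Logfire.instrument_promptingtools!` calls shown in this diff (the endpoint value is only an example):

```julia
using PromptingTools, Logfire

# Point the OTLP exporter at a local Jaeger collector instead of Logfire cloud
ENV["OTEL_EXPORTER_OTLP_ENDPOINT"] = "http://localhost:4318"

Logfire.configure(service_name = "my-app", send_to_logfire = :always)
Logfire.instrument_promptingtools!()

# This call is now traced to the local backend
aigenerate("What is 2 + 2?"; model = "gpt4om")
```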
docs/src/extra_tools/observability_logfire.md

Lines changed: 43 additions & 0 deletions
@@ -2,6 +2,27 @@

 [Logfire.jl](https://github.com/svilupp/Logfire.jl) provides OpenTelemetry-based observability for your LLM applications built with PromptingTools.jl. It automatically traces all your AI calls with detailed information about tokens, costs, messages, and latency.

+## Why Logfire.jl?
+
+| Benefit | Description |
+|---------|-------------|
+| **Automatic Tracing** | All `ai*` function calls are traced with zero code changes |
+| **Full Visibility** | Token usage, costs, latency, messages, and errors captured |
+| **Flexible Backends** | Use Logfire cloud, Jaeger, Langfuse, or any OTLP-compatible backend |
+| **Cross-Language** | Same observability infrastructure works with Python and TypeScript |
+| **Generous Free Tier** | Hundreds of thousands of traced conversations/month on Logfire cloud |
+
+## Cross-Language Ecosystem
+
+Logfire is available across multiple languages, enabling teams to share observability infrastructure:
+
+| Language | Package | Installation |
+|----------|---------|--------------|
+| **Julia** | [Logfire.jl](https://github.com/svilupp/Logfire.jl) | `Pkg.add("Logfire")` |
+| **Python** | [logfire](https://docs.pydantic.dev/logfire/) | `pip install logfire` |
+
+All traces flow to the same [Pydantic Logfire](https://pydantic.dev/logfire) dashboard, giving you unified visibility across your entire stack!
+
 ## Installation

 Logfire.jl is a separate package that provides a PromptingTools extension. Install it along with DotEnv for loading secrets:
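The install snippet itself is not shown in this hunk; a minimal sketch consistent with that sentence (package names taken from the text and tables above):

```julia
using Pkg

# Logfire.jl provides the PromptingTools extension; DotEnv loads API keys and
# Logfire tokens from a local .env file
Pkg.add(["Logfire", "DotEnv"])
```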
@@ -185,8 +206,30 @@ While you can use any OTLP-compatible backend, we strongly recommend [Pydantic L

 See the full example at [`examples/observability_with_logfire.jl`](https://github.com/svilupp/PromptingTools.jl/blob/main/examples/observability_with_logfire.jl).

+## Combining with TextPrompts.jl
+
+For a complete LLM workflow, combine Logfire.jl with [TextPrompts.jl](prompt_management_textprompts.md) for prompt management:
+
+```julia
+using TextPrompts, PromptingTools, Logfire
+
+Logfire.configure(service_name = "my-app")
+Logfire.instrument_promptingtools!()
+
+# Load prompts from version-controlled files
+system = load_prompt("prompts/system.txt")(; role = "Expert") |> SystemMessage
+user = load_prompt("prompts/task.txt")(; task = "analyze data") |> UserMessage
+
+# Traces include full conversation with formatted prompts
+response = aigenerate([system, user]; model = "gpt4om")
+```
+
+This enables a powerful workflow: version prompts in git, trace calls in Logfire, and continuously improve based on real-world performance.
+
 ## Further Reading

 - [Logfire.jl Documentation](https://svilupp.github.io/Logfire.jl/dev)
 - [Logfire.jl GitHub](https://github.com/svilupp/Logfire.jl)
 - [Pydantic Logfire](https://pydantic.dev/logfire)
+- [TextPrompts.jl - Prompt Management](prompt_management_textprompts.md)
+- [Discourse Announcement](https://discourse.julialang.org/t/announcing-logfire-jl-textprompts-jl-observability-and-prompt-management-for-julia-genai/134268)
docs/src/extra_tools/prompt_management_textprompts.md

Lines changed: 215 additions & 0 deletions
@@ -0,0 +1,215 @@
# Prompt Management with TextPrompts.jl

[TextPrompts.jl](https://github.com/svilupp/textprompts/tree/main/packages/TextPrompts.jl) allows you to manage prompts as text files with optional TOML metadata. This enables version-controlled, collaborative prompt engineering that separates prompt content from code.

## Why TextPrompts.jl?

| Benefit | Description |
|---------|-------------|
| **Version Control** | Track prompt changes in git with full history and diffs |
| **Collaboration** | Team members can edit prompts without touching code |
| **Validation** | Catch placeholder typos before LLM calls execute |
| **Metadata** | Track version, author, description for each prompt |
| **Separation of Concerns** | Keep prompt engineering separate from application logic |
| **Cross-Language** | Same prompt files work in Python, TypeScript, and Julia |

## Cross-Language Ecosystem

TextPrompts is available across multiple languages, enabling teams to share prompts across different codebases:

| Language | Package | Installation |
|----------|---------|--------------|
| **Julia** | [TextPrompts.jl](https://github.com/svilupp/textprompts/tree/main/packages/TextPrompts.jl) | `Pkg.add("TextPrompts")` |
| **Python** | [textprompts](https://github.com/svilupp/textprompts/tree/main/packages/textprompts) | `pip install textprompts` |
| **TypeScript** | [@anthropic/textprompts](https://github.com/svilupp/textprompts/tree/main/packages/textprompts-ts) | `npm install @anthropic/textprompts` |

The same prompt files with TOML frontmatter work identically across all three languages!

## Installation

TextPrompts.jl is a separate package. Install it to enable prompt file management:

```julia
using Pkg
Pkg.add("TextPrompts")
```

## Quick Start

```julia
using TextPrompts
using PromptingTools

# Load a prompt template from file
prompt = load_prompt("prompts/system.txt")

# Format with placeholders and convert to PromptingTools message
system_msg = prompt(; role = "Julia expert", task = "explain macros") |> SystemMessage

# Use in aigenerate
response = aigenerate([system_msg, UserMessage("How do macros work?")]; model = "gpt4om")
```

## Prompt File Format

Prompts can include optional TOML frontmatter for metadata:

```
---
title = "Expert System Prompt"
version = "1.0"
author = "Team Name"
description = "A system prompt for expert assistance"
---
You are a {role}. Be {style} and helpful.
```

If no frontmatter is provided, the file is treated as plain text with the filename used as the title.
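For example, assuming a plain file `prompts/greeting.txt` that contains only `Hello, {name}!` (a hypothetical file with no frontmatter), the metadata falls back to the filename:

```julia
using TextPrompts

prompt = load_prompt("prompts/greeting.txt")

println(prompt.meta.title)    # falls back to the filename (no TOML frontmatter present)
println(prompt.placeholders)  # [:name]
println(prompt(; name = "World"))  # prints the formatted text "Hello, World!"
```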
### Metadata Modes

TextPrompts supports three metadata handling modes:

| Mode | Behavior |
|------|----------|
| `:allow` (default) | Parse metadata if present, otherwise use filename as title |
| `:strict` | Require title, description, and version fields |
| `:ignore` | Treat entire file as content; use filename as title |

```julia
# Strict mode - requires all metadata fields
prompt = load_prompt("system.txt"; meta = :strict)

# Ignore mode - treat as plain text
prompt = load_prompt("system.txt"; meta = :ignore)
```

## Core API

### Loading Prompts

```julia
# Load a single prompt file
prompt = load_prompt("prompts/system.txt")

# Load all prompts from a directory
all_prompts = load_prompts("prompts/"; recursive = true)

# Create prompt from string (no file needed)
inline_prompt = from_string("""
---
title = "Inline Example"
---
Hello, {name}!
""")
```

### Accessing Prompt Data

```julia
prompt = load_prompt("greeting.txt")

# Access raw content
println(prompt.content)

# Access metadata
println(prompt.meta.title)
println(prompt.meta.version)
println(prompt.meta.description)

# See available placeholders
println(prompt.placeholders)  # e.g., [:name, :day]
```

### Formatting with Placeholders

```julia
prompt = load_prompt("greeting.txt")

# Format by calling as a function
formatted = prompt(; name = "World", day = "Monday")

# Partial formatting (skip validation for missing placeholders)
partial = prompt(; name = "World", skip_validation = true)

# Alternative: use TextPrompts.format explicitly
formatted2 = TextPrompts.format(prompt; name = "World", day = "Monday")
```
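By default, formatting validates that every placeholder is supplied; the sketch below assumes that a missing placeholder raises an error (the exact exception type is not documented here):

```julia
using TextPrompts

prompt = from_string("Hello, {name}! Today is {day}.")

# `day` is missing, so strict formatting is expected to fail before any LLM call
try
    prompt(; name = "World")
catch err
    @warn "Prompt formatting failed" err
end

# Opt out explicitly when partial formatting is intended
partial = prompt(; name = "World", skip_validation = true)
```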
### Integration with PromptingTools

The pipe operator `|>` creates a seamless workflow:

```julia
using TextPrompts
using PromptingTools
using PromptingTools: SystemMessage, UserMessage

# One-liner pattern for creating message arrays
messages = [
    load_prompt("system.txt")(; role = "Expert") |> SystemMessage,
    load_prompt("task.txt")(; task = "explain closures") |> UserMessage
]

response = aigenerate(messages; model = "gpt4om")
```

### Dynamic Prompt Selection

```julia
# Load all templates and index by title
templates = load_prompts("prompts/")
by_title = Dict(p.meta.title => p for p in templates)

# Select based on runtime conditions
template_name = determine_template()  # your logic
prompt = by_title[template_name]
msg = prompt(; task = "some task") |> UserMessage
```

### Saving Prompts

```julia
# Save a prompt string to file
save_prompt("output.txt", "Hello, {name}!")

# Save a prompt object
prompt = from_string("Hello, {name}!")
save_prompt("output.txt", prompt)
```

## Combining with Logfire.jl

TextPrompts.jl works seamlessly with [Logfire.jl observability](observability_logfire.md):

```julia
using TextPrompts, PromptingTools, Logfire

Logfire.configure(service_name = "my-app")
Logfire.instrument_promptingtools!()

# Prompts from files are automatically traced
msg = load_prompt("task.txt")(; task = "analyze data") |> UserMessage
response = aigenerate(msg; model = "gpt4om")
# Traces include full conversation with formatted prompts
```

## Recommended Workflow

1. **Store prompts in a `prompts/` directory** in your project
2. **Use TOML frontmatter** for metadata (version, description, author)
3. **Version control with git** to track changes and collaborate
4. **Load dynamically** based on use case or user input
5. **Combine with Logfire.jl** for full observability of your LLM calls

This workflow enables continuous prompt improvement: version prompts in git, trace calls in Logfire, and iterate based on real-world performance.

## Example

See the full example at [`examples/working_with_textprompts.jl`](https://github.com/svilupp/PromptingTools.jl/blob/main/examples/working_with_textprompts.jl).

## Further Reading

- [TextPrompts.jl GitHub](https://github.com/svilupp/textprompts/tree/main/packages/TextPrompts.jl)
- [Discourse Announcement](https://discourse.julialang.org/t/announcing-logfire-jl-textprompts-jl-observability-and-prompt-management-for-julia-genai/134268)
- [Logfire.jl Observability](observability_logfire.md)
