Pro Tip: Use a custom system prompt with codex to make it zero in on your projects #7296
-
@guidedways Thanks for the example. I have actually defined custom prompts within … In your example I see that you are utilizing custom prompts in … So I've been passing instructions via …

Appreciate your thoughts on this!
-
Instead of …

Update: the example above has been updated.
-
Thanks @guidedways, I've dug a bit deeper and would like to share some findings. This is likely documented somewhere, but for newcomers stumbling into this discussion I hope it sheds light on how the various prompts are expressed to the model API under the hood.

Once Codex has completed its startup routine, it kicks the conversation off with a POST to the Responses API. The request layers several kinds of instructions:

First, there are the default "instructions" (not sure if "system prompt" is the right terminology here, as there may be another level above this). The default instructions ship with Codex itself (see codex-rs/core for all of the per-model system prompts).

Second, there are "developer instructions" -- conversation messages with a role of "developer" -- which can be supplied via --config developer_instructions, as shown in the payload below.

Third, there is your AGENTS.md file -- also expressed as a conversation message with a role of "user".

Lastly, there is the user's prompt -- again a conversation message with a role of "user".

Here is the general shape of the payload, for those wanting to see how all of this comes together:
```
{
  model: "gpt-5.1-codex-max",
  instructions: "Corresponding DEFAULT system prompt for the model (see codex-rs/core for all system prompts) OR your system prompt override from --config experimental_instructions_file=<path to system instructions override file>",
  input: [
    {
      type: "message",
      role: "developer",
      content: [
        {
          type: "input_text",
          text: "<developer instructions from --config developer_instructions>"
        }
      ]
    },
    {
      type: "message",
      role: "user",
      content: [
        {
          type: "input_text",
          text: "<content from your AGENTS.md>"
        }
      ]
    },
    {
      type: "message",
      role: "user",
      content: [
        {
          type: "input_text",
          text: "<environment_context> codex config settings here... </environment_context>"
        }
      ]
    },
    {
      type: "message",
      role: "user",
      content: [
        {
          type: "input_text",
          text: "ACTUAL USER PROMPT HERE"
        }
      ]
    }
  ],
  tools: [
    ...
  ],
  ...
}
```
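To try those two override layers yourself, here is a minimal sketch of an invocation, assuming the `experimental_instructions_file` and `developer_instructions` config keys shown in the payload above; the file path and instruction text are placeholders, not recommendations:

```sh
# Sketch: supply a system prompt override plus developer instructions via --config.
# Assumes the config keys shown in the payload above; the path and text are placeholders.
codex \
  --config experimental_instructions_file="$HOME/.codex/system-prompts/swift.md" \
  --config developer_instructions="Prefer small, focused diffs and call out risky changes." \
  "Refactor the networking layer to use async/await"
```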
-
@guidedways, thanks for posting this. I appreciate you taking the time to share these insights and suggestions with the Codex community!
-
This is amazing! I was looking for exactly this solution! Thank you @guidedways!
-
System prompts are important because they are the very first instructions the model receives. Writing one for a general-purpose CLI agent is tough - it needs to be generic yet useful enough to work across a wide variety of development setups.

I've found, however, that using a custom system prompt tailored to your particular setup squeezes even more out of SOTA models such as Codex (which is already an incredibly powerful model - with minimal instructions it does wonders).

To help you set up a custom system prompt, I've compiled a short tutorial:
Quick Setup
In my case, I've set up various aliases in `.zshrc`, such as `codex-swift` for Obj-C / Swift projects, `codex-bug` for meticulous bug hunting, and so on. I leave more of the project-specific stuff to `AGENTS.md` within the repo I work in, while I keep my system prompt generic but suited to my personal needs and specific to a particular tech stack (with a well-defined role, etc.). I won't be sharing the prompts, as I'd encourage you to create your own.

Add to your `~/.zshrc` (or `~/.bashrc`):
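As a concrete illustration, the aliases might look something like the sketch below. It assumes the `--config experimental_instructions_file=<path>` override mentioned elsewhere in this thread; the prompt file paths are hypothetical placeholders for your own files:

```sh
# Sketch: per-stack Codex aliases, each pointing at its own system prompt override.
# Assumes the experimental_instructions_file config key; paths are hypothetical.
alias codex-swift='codex --config experimental_instructions_file="$HOME/.codex/system-prompts/swift.md"'
alias codex-bug='codex --config experimental_instructions_file="$HOME/.codex/system-prompts/bug-hunt.md"'
```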
Create Your Instructions
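What goes in that file is up to you (role, tech stack, conventions). Purely as a hedged illustration -- not a recommended prompt, and the path is hypothetical -- you could bootstrap one like this:

```sh
# Sketch: create a system prompt override file (hypothetical path and placeholder contents).
mkdir -p "$HOME/.codex/system-prompts"
cat > "$HOME/.codex/system-prompts/swift.md" <<'EOF'
You are a senior Obj-C / Swift engineer working in my local repositories.
Prefer small, well-scoped changes and briefly explain trade-offs.
EOF
```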
Usage
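From a project directory, you then invoke the stack-specific alias instead of plain `codex`. A sketch, assuming the `codex-swift` alias from the setup step above; the project path and prompt are placeholders:

```sh
# Sketch: run Codex with the Swift-tuned system prompt; AGENTS.md in the repo still applies.
cd ~/Projects/MyApp
codex-swift "Track down why the share sheet crashes on iPad"
```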