0.5 Released #427
Replies: 3 comments 2 replies
I love this platform you've created. Every time there is a major model release or a feature that Codex CLI adds, I go to the Actions tab and wait for a push to main :) Keep up the good work. Quick question: are you still targeting iTerm2?
Thank you for all the work you've put into this project. Do you have a way set up for recurring support (Patreon, GitHub Sponsors, etc.)? I'm probably not the only one who'd be happy to support your development.
Do you happen to have a Discord or something where we can chat about this stuff? Your work realized a lot of the things I had tried to do, or even just envisioned, in the past. It would be neat to have a small community where we could talk about this kind of thing without the structure of GitHub issues/discussions. Just a thought. Thank you, and keep up the good work. It's really cool, the things we can do now. One thing I would find interesting is for Every Code to be able to use Anthropic more directly, especially with the new cheaper Opus.
Over the weekend we shipped version 0.5. A surprising number of people have been calling the project Every Code, so to make life easier for everyone, we’ve renamed it. I still call it Code for short, and probably always will.
This release bundles a hefty set of bug fixes and quality-of-life improvements. Resume and undo now behave far more reliably. The new Codex compaction backend is used everywhere and is a huge upgrade over the old session hand-over setup.
We’ve expanded the core configuration in /settings so you have deeper control over model behaviour, and you can now add your own agents using any CLI.
New models are in: Codex-Max, Gemini 3 Pro, and Opus 4.5. They're all excellent, and they use tokens very efficiently. Auto Drive now runs them as a combined front line, and the way they complement each other is genuinely impressive.
Lately I’ve been running Auto Drive with Codex-Max in XHigh as my main driver. It’s been so consistently strong that I’ll likely make it the default soon. The XHigh reasoning mode really shines in the orchestrator environment. Success rates are noticeably higher, and the new early-planning stage is doing great work seeding Auto Drive with the right style from the start.
As always, feedback and feature ideas are massively appreciated. Thanks again for all the issues and PRs - you all keep pushing this thing forward.