
fix: modifies openai request logic for reasoning models (#4221) #4294

Status: Open · wants to merge 1 commit into base: main

Conversation

@myaple (Contributor) commented Aug 22, 2025

This is a first take at fixing the issue described in #4221, where the litellm provider builds requests with the same logic as the OpenAI format, which is heavily biased toward OpenAI models. I've added a goose-level configuration option, GOOSE_REASONING_EFFORT, that forces goose to use the provided reasoning effort; if it is unset, goose falls back to the previous model-name parsing. The name-based parsing now uses more specific patterns, matching o{1..4} explicitly instead of any model name starting with "o".
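The fallback described above could be sketched roughly as follows. This is an illustrative sketch, not goose's actual code: the function name `reasoning_effort` and the default effort value are assumptions.

```rust
use std::env;

/// Sketch of the resolution order described above: an explicit
/// GOOSE_REASONING_EFFORT override wins; otherwise fall back to
/// name-based detection. Names here are illustrative only.
fn reasoning_effort(model: &str) -> Option<String> {
    // Explicit user configuration takes precedence.
    if let Ok(effort) = env::var("GOOSE_REASONING_EFFORT") {
        return Some(effort);
    }
    // Name-based fallback: match o1..o4 specifically (exact name or
    // "oN-" prefix), not every model whose name starts with "o".
    let is_o_series = ["o1", "o2", "o3", "o4"]
        .iter()
        .any(|p| model == *p || model.starts_with(&format!("{p}-")));
    if is_o_series {
        Some("medium".to_string()) // assumed default effort
    } else {
        None
    }
}

fn main() {
    println!("{:?}", reasoning_effort("o3-mini"));
    println!("{:?}", reasoning_effort("gpt-4o"));
}
```

Note that the prefix check requires the `-` separator, so `gpt-4o` does not match even though it contains "o".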

I also split the logic for "is this a reasoning model" from "is this an o-series model", since the two have different implications for the built request. I'm open to a better way to distinguish reasoning models beyond OpenAI-specific ones, as I'm not sure of the best approach: a user config parameter, enumerating the most common reasoning models and matching against them, or something else.
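The split might look like the following sketch. The function names and the `deepseek-r` example are assumptions for illustration, not goose's actual API; the point is that the two predicates are separate so each can affect the request independently.

```rust
/// Is this an OpenAI o-series model (o1..o4)? Matches the exact name
/// or an "oN-" prefix, so "gpt-4o" is excluded.
fn is_o_series(model: &str) -> bool {
    ["o1", "o2", "o3", "o4"]
        .iter()
        .any(|p| model == *p || model.starts_with(&format!("{p}-")))
}

/// Is this a reasoning model? Broader than o-series: hypothetically
/// it could also cover other known reasoning families (here,
/// "deepseek-r*" is an assumed example), or a user-declared flag.
fn is_reasoning(model: &str) -> bool {
    is_o_series(model) || model.starts_with("deepseek-r")
}

fn main() {
    // An o-series model is both; a non-OpenAI reasoning model is
    // reasoning but not o-series, so the request is built differently.
    println!("{}", is_o_series("o1-preview"));
    println!("{}", is_reasoning("deepseek-r1"));
}
```

Keeping the checks separate means o-series-specific request tweaks apply only where they belong, while reasoning-level settings can apply to any model that qualifies as reasoning.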

Signed-off-by: Matt Yaple <matt@yaple.dev>