
Conversation


@UntidyPhishBass UntidyPhishBass commented Jul 6, 2025

User description

🔧 fix: normalize openai/ model prefix in aembedding and acompletion

Summary:
Fixes inconsistent and conditional handling of the openai/ prefix in model names. Previously, aembedding applied the prefix only when custom_api_key was not set, which could produce malformed model names like openai/openai/... and break usage tracking.

Changes:

  • Normalize model name in both aembedding and acompletion:
    model = f"openai/{model.removeprefix('openai/')}"
    
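The one-line fix above can be sketched in isolation. This is a minimal illustration, not the PR's code: `normalize_model` is a hypothetical name (the PR inlines the expression), but it shows why strip-then-prepend is idempotent and cannot produce a doubled prefix.

```python
def normalize_model(model: str) -> str:
    # removeprefix is a no-op when the prefix is absent (Python 3.9+),
    # so applying this twice yields the same result as applying it once
    return f"openai/{model.removeprefix('openai/')}"

# Bare name and already-prefixed name both normalize to one prefix
assert normalize_model("gpt-4o") == "openai/gpt-4o"
assert normalize_model("openai/gpt-4o") == "openai/gpt-4o"
# Idempotent: re-normalizing never adds a second "openai/"
assert normalize_model(normalize_model("gpt-4o")) == "openai/gpt-4o"
```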

PR Type

Bug fix


Description

  • Fix inconsistent openai/ prefix handling in model names

  • Normalize model prefix in both acompletion and aembedding functions

  • Prevent malformed model names like openai/openai/...

  • Remove conditional prefix application in aembedding


Changes diagram

flowchart LR
  A["Model Name Input"] --> B["removeprefix('openai/')"]
  B --> C["Add 'openai/' prefix"]
  C --> D["Normalized Model Name"]

Changes walkthrough 📝

Relevant files
Bug fix
litellm.py
Normalize openai model prefix handling                                     

src/agents-api/agents_api/clients/litellm.py

  • Normalize model prefix in acompletion function using removeprefix
  • Remove conditional prefix logic from aembedding function
  • Apply consistent prefix normalization to both functions
  • +2/-3     

    Contributor

    PR Reviewer Guide 🔍

    Here are some key observations to aid the review process:

    ⏱️ Estimated effort to review: 2 🔵🔵⚪⚪⚪
    🧪 No relevant tests
    🔒 No security concerns identified
    ⚡ Recommended focus areas for review

    Logic Consistency

    The acompletion function applies model normalization after retrieving the secret, while aembedding applies it at the beginning. This inconsistency could lead to different behavior when custom API keys are involved, and the normalized model name might affect secret retrieval logic.

            model = f"openai/{model.removeprefix('openai/')}"  # This is needed for litellm
    
        supported_params: list[str] = (
            get_supported_openai_params(model) or []
        )  # Supported params returns Optional[list[str]]
        supported_params += ["user", "mock_response", "stream_options"]
        settings = {k: v for k, v in kwargs.items() if k in supported_params}
    
        # NOTE: This is a fix for Mistral API, which expects a different message format
        if model[7:].startswith("mistral"):
            messages = [
                {"role": message["role"], "content": message["content"]} for message in messages
            ]
    
        for message in messages:
            if "tool_calls" in message and message["tool_calls"] == []:
                message.pop("tool_calls")
    
        model_response = await _acompletion(
            model=model,
            messages=messages,
            **settings,
            base_url=None if custom_api_key else litellm_url,
            api_key=custom_api_key or litellm_master_key,
        )
    
        response = patch_litellm_response(model_response)
    
        # Track usage in database if we have a user ID (which should be the developer ID)
        user = settings.get("user")
        if user and isinstance(response, ModelResponse):
            try:
                model = response.model
                await track_usage(
                    developer_id=UUID(user),
                    model=model,
                    messages=messages,
                    response=response,
                    custom_api_used=custom_api_key is not None,
                    metadata={"tags": kwargs.get("tags", [])},
                )
            except Exception as e:
                # Log error but don't fail the request if usage tracking fails
                print(f"Error tracking usage: {e}")
    
        return response
    
    
    @wraps(_aembedding)
    @beartype
    async def aembedding(
        *,
        inputs: str | list[str],
        model: str = embedding_model_id,
        embed_instruction: str | None = None,
        dimensions: int = embedding_dimensions,
        join_inputs: bool = False,
        custom_api_key: str | None = None,
        **settings,
    ) -> list[list[float]]:
    
        model = f"openai/{model.removeprefix('openai/')}"  # This is needed for litellm

    Contributor

    PR Code Suggestions ✨

    Explore these optional code suggestions:

Category: General
Suggestion: Improve Python version compatibility

    The removeprefix method was introduced in Python 3.9. For better compatibility
    with older Python versions, consider using string slicing or startswith check
    instead.

    src/agents-api/agents_api/clients/litellm.py [79]

    -model = f"openai/{model.removeprefix('openai/')}"  # This is needed for litellm
    +model = f"openai/{model[7:] if model.startswith('openai/') else model}"  # This is needed for litellm
    Suggestion importance[1-10]: 8


    Why: The suggestion correctly identifies that str.removeprefix() is only available in Python 3.9+ and provides a backward-compatible alternative, preventing potential runtime errors in older environments.

Impact: Medium
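The compatibility suggestion above is easy to check directly. The sketch below (hypothetical helper names, not from the PR) asserts that the slice-based fallback agrees with str.removeprefix for the "openai/" prefix, whose length is 7, across the relevant cases.

```python
def with_removeprefix(model: str) -> str:
    # Requires Python 3.9+ for str.removeprefix
    return f"openai/{model.removeprefix('openai/')}"

def with_slicing(model: str) -> str:
    # Backward-compatible equivalent: len("openai/") == 7
    return f"openai/{model[7:] if model.startswith('openai/') else model}"

# Both forms agree on bare, prefixed, doubly-prefixed, and empty names
for name in ["gpt-4o", "openai/gpt-4o", "openai/openai/gpt-4o", ""]:
    assert with_removeprefix(name) == with_slicing(name)
```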

    Contributor

    @ellipsis-dev ellipsis-dev bot left a comment


    Caution

    Changes requested ❌

    Reviewed everything up to 8bd93ae in 58 seconds. Click for details.
    • Reviewed 24 lines of code in 1 files
    • Skipped 0 files when reviewing.
    • Skipped posting 1 draft comments. View those below.
    1. src/agents-api/agents_api/clients/litellm.py:140
    • Draft comment:
      Normalization is applied unconditionally in aembedding now. Consider aligning the handling logic with acompletion (perhaps via a shared utility) to prevent potential inconsistencies in future changes.
    • Reason this comment was not posted:
      Marked as duplicate.

    Workflow ID: wflow_KPakXrkFTTnu106G


    @@ -76,7 +76,7 @@ async def acompletion(
        )

        custom_api_key = secret and secret.value
    -   model = f"openai/{model}"  # This is needed for litellm
    +   model = f"openai/{model.removeprefix('openai/')}"  # This is needed for litellm
    Contributor


    Consider extracting the model normalization (using removeprefix) into a helper function to avoid duplication and ensure consistent behavior across functions. Also, ensure that your minimum Python version supports str.removeprefix (Python 3.9+).
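One way to realize this extraction, sketched with a hypothetical helper name and docstring (the PR itself keeps the expression inlined in both functions):

```python
def ensure_openai_prefix(model: str) -> str:
    """Return the model name with exactly one 'openai/' prefix.

    Shared by acompletion and aembedding so the two normalization
    paths cannot drift apart again. Requires Python 3.9+.
    """
    return f"openai/{model.removeprefix('openai/')}"  # needed for litellm

# Hypothetical call sites:
#   acompletion, after secret retrieval:  model = ensure_openai_prefix(model)
#   aembedding, at the top of the body:   model = ensure_openai_prefix(model)
assert ensure_openai_prefix("text-embedding-3-large") == "openai/text-embedding-3-large"
assert ensure_openai_prefix("openai/text-embedding-3-large") == "openai/text-embedding-3-large"
```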
