Description
Describe the bug
Using the Prompty module with GPT 5o models, which no longer support the `max_tokens` parameter; the replacement parameter is `max_completion_tokens`. Specifying it in a prompty file throws:

```
Unhandled exception. (Line: 12, Col: 5, Idx: 290) - (Line: 12, Col: 26, Idx: 311): Property 'max_completion_tokens' not found on type 'Microsoft.SemanticKernel.Prompty.Core.PromptyModelParameters'.
```
To Reproduce
Steps to reproduce the behavior:
1. Specify `max_completion_tokens` as a model parameter in a prompty file, e.g.:

```yaml
configuration:
  type: azure_openai
  azure_endpoint: ${env:AZURE_OPENAI_ENDPOINT}
  azure_deployment: ${env:AZURE_OPENAI_DEPLOYMENT}
parameters:
  temperature: 1
```
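For completeness, a parameters block that triggers the error might look like the sketch below. The `1000` value and the placement of `max_completion_tokens` under `parameters` are illustrative assumptions, not taken from the original report:

```yaml
# Hypothetical repro fragment: adding max_completion_tokens (arbitrary
# example value) to the parameters block triggers the parsing error,
# because PromptyModelParameters does not define that property.
configuration:
  type: azure_openai
  azure_endpoint: ${env:AZURE_OPENAI_ENDPOINT}
  azure_deployment: ${env:AZURE_OPENAI_DEPLOYMENT}
parameters:
  temperature: 1
  max_completion_tokens: 1000
```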
2. Create a function from the prompty file.

```csharp
var kernel = Kernel.CreateBuilder()
    .AddAzureOpenAIChatCompletion(deployment, endpoint, key)
    .Build();
var function = kernel.CreateFunctionFromPromptyFile("../basic.prompty");
```

3. Invoke the function.

```csharp
var result = await function.InvokeAsync(kernel, kernelArguments);
```
Expected behavior
The function can be invoked with GPT 5o series models.
Platform
- Language: C#
- Source: Top-level package

```
dotnet list package
                                    Requested      Resolved
Microsoft.SemanticKernel            1.22.0         1.22.0
Microsoft.SemanticKernel.Prompty    1.23.0-alpha   1.23.0-alpha
```

- AI model: OpenAI:GPT-5o-mini (2024-07-18)
- IDE: VS Code
- OS: Windows, Mac