Description
The current YAML-based deployment (triggered with the `hayhooks pipeline deploy` command) is not maintained, since it had a lot of type serialization issues.
On the Deepset Platform, the YAML syntax of serialized pipelines is extended with `inputs` and `outputs` fields.
The idea here is to give new life to YAML-based deployment and:
- Require the presence of `inputs` and `outputs` fields on the YAML-only deployment endpoint
- Create dynamic Pydantic models for the request / response according to the related components' input / output types
- Decide whether to support additional metadata tags (for the first iteration we can still use the filename as the pipeline name)
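The dynamic request model could be built with Pydantic's `create_model`. A minimal sketch (assuming Pydantic v2; the model name, helper name, and the use of `Any` for field types are illustrative, not the actual hayhooks implementation):

```python
from typing import Any

from pydantic import BaseModel, create_model


def build_request_model(inputs: dict[str, list[str]]) -> type[BaseModel]:
    """Build a request model with one field per declared pipeline input.

    `inputs` maps a public field name to the component sockets it feeds,
    e.g. {"urls": ["fetcher.urls"], "query": ["prompt.query"]}.
    Types are left as Any here; a real implementation would resolve them
    from the components' input socket types.
    """
    fields = {name: (Any, ...) for name in inputs}  # `...` marks the field required
    return create_model("PipelineRunRequest", **fields)


RequestModel = build_request_model({"urls": ["fetcher.urls"], "query": ["prompt.query"]})
req = RequestModel(urls=["https://haystack.deepset.ai"], query="What is Haystack?")
```

A response model could be derived the same way from the `outputs` mapping.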
This way, if one doesn't need the customisation offered by a `PipelineWrapper`, it will be possible to simply serialize the pipeline as YAML and add `inputs` / `outputs` tags. A pipeline endpoint will be created according to them.
We may need to rename the `hayhooks pipeline deploy` command to e.g. `hayhooks pipeline deploy-yaml` (since we already have `hayhooks pipeline deploy-files`).
Sample `chat_with_website.yaml` pipeline with added `inputs` / `outputs` fields:
```yaml
components:
  converter:
    type: haystack.components.converters.html.HTMLToDocument
    init_parameters:
      extraction_kwargs: null
  fetcher:
    init_parameters:
      raise_on_failure: true
      retry_attempts: 2
      timeout: 3
      user_agents:
      - haystack/LinkContentFetcher/2.0.0b8
    type: haystack.components.fetchers.link_content.LinkContentFetcher
  llm:
    init_parameters:
      api_base_url: null
      api_key:
        env_vars:
        - OPENAI_API_KEY
        strict: true
        type: env_var
      generation_kwargs: {}
      model: gpt-4o-mini
      streaming_callback: null
      system_prompt: null
    type: haystack.components.generators.openai.OpenAIGenerator
  prompt:
    init_parameters:
      template: |
        "According to the contents of this website:
        {% for document in documents %}
        {{document.content}}
        {% endfor %}
        Answer the given question: {{query}}
        Answer:
        "
    type: haystack.components.builders.prompt_builder.PromptBuilder
connections:
- receiver: converter.sources
  sender: fetcher.streams
- receiver: prompt.documents
  sender: converter.documents
- receiver: llm.prompt
  sender: prompt.prompt
metadata: {}
inputs:
  urls:
  - fetcher.urls
  query:
  - prompt.query
outputs:
  replies: llm.replies
```
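At request time, the endpoint would have to translate the flat request body into the `{component: {socket: value}}` shape that `Pipeline.run()` expects. A rough sketch of that mapping (pure Python; the function name is assumed, not an existing hayhooks API):

```python
def map_request_to_run_args(inputs_spec: dict[str, list[str]], body: dict) -> dict:
    """Expand a flat request body into Pipeline.run()'s per-component inputs.

    `inputs_spec` comes from the YAML `inputs` section, e.g.
    {"urls": ["fetcher.urls"], "query": ["prompt.query"]}.
    One request field may feed several component sockets.
    """
    run_args: dict[str, dict] = {}
    for field, targets in inputs_spec.items():
        for target in targets:
            component, socket = target.split(".", 1)
            run_args.setdefault(component, {})[socket] = body[field]
    return run_args


args = map_request_to_run_args(
    {"urls": ["fetcher.urls"], "query": ["prompt.query"]},
    {"urls": ["https://haystack.deepset.ai"], "query": "What is Haystack?"},
)
# args == {"fetcher": {"urls": [...]}, "prompt": {"query": "What is Haystack?"}}
```

The `outputs` section would be applied in reverse, e.g. extracting `result["llm"]["replies"]` into a `replies` response field.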