
Conversation

try-agaaain

Link to #234

I'm working on replacing the .env configuration method with YAML and have completed a preliminary implementation. (I'll switch to OmegaConf later)

The generated configuration file looks like this:

config.yaml
AdminConfig:
  ADMIN_TOKEN: xxxx
  ENABLE_LOGIN: 'False'
  USER_TOKEN: '4321'
HugeGraphConfig:
  EDGE_LIMIT_PRE_LABEL: 8
  GRAPH_NAME: hugegraph
  GRAPH_PWD: xxx
  GRAPH_SPACE: null
  GRAPH_URL: 127.0.0.1:8080
  GRAPH_USER: admin
  LIMIT_PROPERTY: 'False'
  MAX_GRAPH_ITEMS: 30
  MAX_GRAPH_PATH: 10
  TOPK_PER_KEYWORD: 1
  TOPK_RETURN_RESULTS: 20
  VECTOR_DIS_THRESHOLD: 0.9
LLMConfig:
  CHAT_LLM_TYPE: openai
  COHERE_BASE_URL: https://api.cohere.com/v1/rerank
  EMBEDDING_TYPE: openai
  EXTRACT_LLM_TYPE: openai
  LITELLM_CHAT_API_BASE: null
  LITELLM_CHAT_API_KEY: null
  LITELLM_CHAT_LANGUAGE_MODEL: gemini-2.0-flash
  LITELLM_CHAT_TOKENS: 8192
  LITELLM_EMBEDDING_API_BASE: null
  LITELLM_EMBEDDING_API_KEY: null
  LITELLM_EMBEDDING_MODEL: openai/text-embedding-3-small
  LITELLM_EXTRACT_API_BASE: null
  LITELLM_EXTRACT_API_KEY: null
  LITELLM_EXTRACT_LANGUAGE_MODEL: gemini-2.0-flash
  LITELLM_EXTRACT_TOKENS: 256
  LITELLM_TEXT2GQL_API_BASE: null
  LITELLM_TEXT2GQL_API_KEY: null
  LITELLM_TEXT2GQL_LANGUAGE_MODEL: gemini-2.0-flash
  LITELLM_TEXT2GQL_TOKENS: 4096
  OLLAMA_CHAT_HOST: 127.0.0.1
  OLLAMA_CHAT_LANGUAGE_MODEL: null
  OLLAMA_CHAT_PORT: 11434
  OLLAMA_EMBEDDING_HOST: 127.0.0.1
  OLLAMA_EMBEDDING_MODEL: null
  OLLAMA_EMBEDDING_PORT: 11434
  OLLAMA_EXTRACT_HOST: 127.0.0.1
  OLLAMA_EXTRACT_LANGUAGE_MODEL: null
  OLLAMA_EXTRACT_PORT: 11434
  OLLAMA_TEXT2GQL_HOST: 127.0.0.1
  OLLAMA_TEXT2GQL_LANGUAGE_MODEL: null
  OLLAMA_TEXT2GQL_PORT: 11434
  OPENAI_CHAT_API_BASE: https://generativelanguage.googleapis.com/v1beta/openai
  OPENAI_CHAT_API_KEY: null
  OPENAI_CHAT_LANGUAGE_MODEL: gemini-2.0-flash
  OPENAI_CHAT_TOKENS: 8192
  OPENAI_EMBEDDING_API_BASE: https://generativelanguage.googleapis.com/v1beta/openai
  OPENAI_EMBEDDING_API_KEY: null
  OPENAI_EMBEDDING_MODEL: text-embedding-004
  OPENAI_EXTRACT_API_BASE: https://generativelanguage.googleapis.com/v1beta/openai
  OPENAI_EXTRACT_API_KEY: null
  OPENAI_EXTRACT_LANGUAGE_MODEL: gemini-2.0-flash
  OPENAI_EXTRACT_TOKENS: 8192
  OPENAI_TEXT2GQL_API_BASE: https://generativelanguage.googleapis.com/v1beta/openai
  OPENAI_TEXT2GQL_API_KEY: null
  OPENAI_TEXT2GQL_LANGUAGE_MODEL: gemini-2.0-flash
  OPENAI_TEXT2GQL_TOKENS: 8192
  QIANFAN_CHAT_ACCESS_TOKEN: null
  QIANFAN_CHAT_API_KEY: null
  QIANFAN_CHAT_LANGUAGE_MODEL: ERNIE-Speed-128K
  QIANFAN_CHAT_SECRET_KEY: null
  QIANFAN_CHAT_URL: https://aip.baidubce.com/rpc/2.0/ai_custom/v1/wenxinworkshop/chat/
  QIANFAN_EMBEDDING_API_KEY: null
  QIANFAN_EMBEDDING_MODEL: embedding-v1
  QIANFAN_EMBEDDING_SECRET_KEY: null
  QIANFAN_EMBED_URL: https://aip.baidubce.com/rpc/2.0/ai_custom/v1/wenxinworkshop/embeddings/
  QIANFAN_EXTRACT_ACCESS_TOKEN: null
  QIANFAN_EXTRACT_API_KEY: null
  QIANFAN_EXTRACT_LANGUAGE_MODEL: ERNIE-Speed-128K
  QIANFAN_EXTRACT_SECRET_KEY: null
  QIANFAN_TEXT2GQL_ACCESS_TOKEN: null
  QIANFAN_TEXT2GQL_API_KEY: null
  QIANFAN_TEXT2GQL_LANGUAGE_MODEL: ERNIE-Speed-128K
  QIANFAN_TEXT2GQL_SECRET_KEY: null
  QIANFAN_URL_PREFIX: https://aip.baidubce.com/rpc/2.0/ai_custom/v1/wenxinworkshop
  RERANKER_API_KEY: null
  RERANKER_MODEL: null
  RERANKER_TYPE: null
  TEXT2GQL_LLM_TYPE: openai

The LLMConfig section has a large number of configuration items. Are there any suggestions on how to better organize and manage these configurations?
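One way to tame such a flat section is to group keys by provider and task, which also maps well onto OmegaConf's nested dotted access. A minimal stdlib sketch of deriving that nesting from the current flat names (the sample keys and the chosen grouping are illustrative, not from this PR):

```python
# Sketch: deriving a nested provider/task layout from the flat LLMConfig keys.
# The sample keys and the chosen grouping are illustrative, not from this PR.
flat = {
    "OPENAI_CHAT_TOKENS": 8192,
    "OPENAI_CHAT_LANGUAGE_MODEL": "gemini-2.0-flash",
    "OLLAMA_CHAT_PORT": 11434,
    "QIANFAN_CHAT_LANGUAGE_MODEL": "ERNIE-Speed-128K",
}

nested: dict = {}
for key, value in flat.items():
    # OPENAI_CHAT_TOKENS -> provider=openai, task=chat, setting=tokens
    provider, task, setting = key.split("_", 2)
    nested.setdefault(provider.lower(), {}) \
          .setdefault(task.lower(), {})[setting.lower()] = value

print(nested["openai"]["chat"]["tokens"])  # 8192
```

With that shape, OmegaConf-style access becomes `cfg.openai.chat.tokens` instead of a long flat name, and each provider block could even live in its own YAML file.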

dosubot bot added the size:L label (This PR changes 100-499 lines, ignoring generated files) Jun 17, 2025
github-actions bot added the llm label Jun 17, 2025
dosubot bot added the enhancement label (New feature or request) Jun 17, 2025
imbajin requested a review from Copilot June 18, 2025 11:01

Copilot AI left a comment


Pull Request Overview

This PR replaces the legacy .env configuration management with YAML-based configuration to streamline and centralize configuration handling. Key changes include the replacement of update_env calls with update_configs, updates to the BaseConfig class to support YAML generation and synchronization, and dependency updates across requirements and build files.

  • Replaced .env-related functions with YAML-based functions in configuration blocks.
  • Refactored BaseConfig to generate, update, and check YAML configurations.
  • Updated project dependencies and .gitignore to support YAML.
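The generate/update/check flow described above can be condensed into a small sketch (the function and variable names here are illustrative, not from the PR; only the two-step sync idea is taken from the review context):

```python
# Sketch of a two-way YAML <-> object sync, as the PR description suggests.
# Function and key names are illustrative, not taken from this PR.
def sync(yaml_config: dict, section: str, obj_config: dict) -> dict:
    yaml_section = yaml_config.setdefault(section, {})
    # Step 1: values already present in the YAML file override object defaults.
    for key, value in yaml_section.items():
        if key in obj_config:
            obj_config[key] = value
    # Step 2: add missing config items from the object back to the YAML.
    for key, value in obj_config.items():
        yaml_section.setdefault(key, value)
    return yaml_config

cfg = sync({"LLMConfig": {"OPENAI_CHAT_TOKENS": 4096}},
           "LLMConfig",
           {"OPENAI_CHAT_TOKENS": 8192, "CHAT_LLM_TYPE": "openai"})
print(cfg["LLMConfig"]["OPENAI_CHAT_TOKENS"])  # 4096 (file value wins)
```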

Reviewed Changes

Copilot reviewed 5 out of 5 changed files in this pull request and generated 2 comments.

Show a summary per file:

  • hugegraph-llm/src/hugegraph_llm/demo/rag_demo/configs_block.py: Replaced update_env with update_configs in configuration functions.
  • hugegraph-llm/src/hugegraph_llm/config/models/base_config.py: Refactored configuration syncing logic to use YAML instead of .env.
  • hugegraph-llm/requirements.txt: Added the OmegaConf dependency.
  • hugegraph-llm/pyproject.toml: Updated dependencies (Gradio and OmegaConf).
  • hugegraph-llm/.gitignore: Configured to ignore config.yaml.

current_class_name = self.__class__.__name__
with open(yaml_path, "r", encoding="utf-8") as f:
    content = f.read()
yaml_config = yaml.safe_load(content) if content.strip() else {}

Copilot AI Jun 18, 2025


Before iterating over the YAML config section, ensure that current_class_name exists as a key in yaml_config. Consider initializing yaml_config[current_class_name] to an empty dict if it does not exist to prevent potential KeyError.

Suggested change:
yaml_config = yaml.safe_load(content) if content.strip() else {}
current_class_name = self.__class__.__name__
if current_class_name not in yaml_config:
    yaml_config[current_class_name] = {}
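Stand-alone, the guard behaves like this (a minimal sketch; `ensure_section` is a hypothetical helper name, and `"AdminConfig"` is just a sample section):

```python
# Minimal sketch of the suggested guard. ensure_section is a hypothetical
# helper; the section name "AdminConfig" is just a sample.
def ensure_section(yaml_config, section):
    if yaml_config is None:  # yaml.safe_load("") returns None
        yaml_config = {}
    if section not in yaml_config:  # the guard the review suggests
        yaml_config[section] = {}
    return yaml_config

print(ensure_section(None, "AdminConfig"))  # {'AdminConfig': {}}
```

This makes a later `yaml_config[current_class_name][...]` write safe on a fresh or partially populated file.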


if yaml_file_config.get(current_class_name):
    self._sync_yaml_to_object(yaml_file_config, object_config_dict)

# Step 2: Add missing onfig items from object to yaml.config

Copilot AI Jun 18, 2025


There is a minor typo in the comment ('onfig' should be 'config') – please correct it for clarity.

Suggested change:
# Step 2: Add missing config items from object to yaml.config


@@ -18,3 +18,4 @@ openpyxl~=3.1.5
pydantic-settings~=2.6.1
apscheduler~=3.10.4
litellm~=1.61.13
OmegaConf~=2.3
Member


Suggested change:
OmegaConf~=2.3

keep a trailing newline at EOF
