Auto-detection of models and manual model naming for Custom APIs.

Problem: The "Custom API" setup is currently restrictive because it doesn't let users specify which model is running on their local server.

Proposed Solutions:

  1. Manual Model Name Field: When "Custom API" is selected, replace the "Custom Model" dropdown with an editable text field so users can type the exact string for their model.

  2. Model Discovery (Fetch Models): Add a "Fetch Models" button next to the Base URL. When clicked, Vowen should call the API’s tags endpoint (e.g., /api/tags for Ollama) and populate the "Model" dropdown with the models actually available on the user's machine.

  3. Local Provider Presets: Add specific presets for Ollama, LM Studio, and LocalAI that pre-fill the default Base URL and port.
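Proposals 2 and 3 could be sketched roughly as follows. This is only an illustration, not Vowen's actual implementation: the preset URLs reflect the common default ports for each provider (Ollama 11434, LM Studio 1234, LocalAI 8080), and the response parsing assumes Ollama's documented `/api/tags` shape, `{"models": [{"name": "..."}, ...]}`.

```python
import json
import urllib.request

# Assumed default Base URLs for the proposed presets (common defaults,
# not taken from Vowen itself).
PROVIDER_PRESETS = {
    "Ollama": "http://localhost:11434",
    "LM Studio": "http://localhost:1234",
    "LocalAI": "http://localhost:8080",
}

def parse_tags_response(data: dict) -> list[str]:
    """Extract model names from an Ollama /api/tags JSON payload."""
    return [m["name"] for m in data.get("models", [])]

def fetch_models(base_url: str) -> list[str]:
    """What a "Fetch Models" button might do: query the tags endpoint
    and return the models actually installed on the user's machine."""
    with urllib.request.urlopen(f"{base_url}/api/tags", timeout=5) as resp:
        return parse_tags_response(json.load(resp))
```

A "Fetch Models" click would then call `fetch_models(PROVIDER_PRESETS["Ollama"])` and use the result to populate the "Model" dropdown, falling back to the manual text field if the endpoint is unreachable.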

Status: Completed
Board: 💡 Feature Request
Date: 2 months ago
Author: Anoir Ben Tanfous
