Problem: Currently, the "Custom API" setup is restrictive because it doesn't allow the user to easily specify which model is running on their local server.
Proposed Solutions:
Manual Model Name Field: Replace the "Custom Model" dropdown selection with an editable text field when "Custom API" is selected, allowing users to type the exact string for their model.
Model Discovery (Fetch Models): Add a "Fetch Models" button next to the Base URL. When clicked, Vowen should call the API's tags endpoint (e.g., /api/tags for Ollama) and populate the "Model" dropdown with the models actually available on the user's machine.
Local Provider Presets: Add specific presets for Ollama, LM Studio, and LocalAI that pre-fill the default Base URL and port.
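The model-discovery step could be sketched as follows. This assumes Ollama's /api/tags response shape ({"models": [{"name": ...}, ...]}); the function names and the base URL are illustrative, not part of any existing Vowen API.

```python
import json
from urllib.request import urlopen

def parse_tags_response(payload: dict) -> list[str]:
    # Ollama's /api/tags returns {"models": [{"name": "...", ...}, ...]};
    # extract just the name strings to fill the "Model" dropdown.
    return [m["name"] for m in payload.get("models", [])]

def fetch_model_names(base_url: str) -> list[str]:
    # Hypothetical "Fetch Models" handler: GET <base_url>/api/tags
    # (e.g., base_url = "http://localhost:11434" for a default Ollama preset).
    with urlopen(f"{base_url.rstrip('/')}/api/tags") as resp:
        return parse_tags_response(json.load(resp))

# Parsing shown against a sample response, without hitting a live server:
sample = {"models": [{"name": "llama3:8b"}, {"name": "mistral:7b"}]}
names = parse_tags_response(sample)
print(names)
```

Other local providers (LM Studio, LocalAI) expose OpenAI-compatible /v1/models endpoints instead, so each preset would likely need its own discovery path and parser.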

Completed
Feature Request
2 months ago

Anoir Ben Tanfous