You can configure a Local AI model on an On-premises Private Server so that all AI processing runs within your local infrastructure without connecting to external services. This improves data security and supports offline environments.
Configure the following parameters in ragic.properties:
1. LOCAL_AI_URL=
Required. Enter the API endpoint of your Local AI model service. It must be compatible with the OpenAI API format, for example: http://localhost:11434/v1/.
2. LOCAL_AI_KEY=
Optional. Enter the API key if required by your AI service. Most Local AI model services do not require this setting.
Note: Once LOCAL_AI_URL is configured, the system will always use the Local AI model and will not switch to cloud AI services.
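For example, a minimal ragic.properties configuration pointing at a locally hosted, OpenAI-compatible service (the URL below matches the example above; the key value is illustrative and only needed if your service enforces authentication):

```properties
# OpenAI-compatible endpoint of the Local AI service
LOCAL_AI_URL=http://localhost:11434/v1/
# Optional; leave unset if your service does not require a key
LOCAL_AI_KEY=my-example-key
```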
The system automatically identifies and maps models by their name prefixes. The following common model families are supported:
| Prefix | Example |
|---|---|
| local/ | local/custom-model |
| qwen* | qwen2.5:32b |
| llama* | llama3.3:70b |
| mistral* | mistral:22b |
| phi* | phi3:14b |
| deepseek* | deepseek-v2:16b |
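The prefix matching in the table above can be sketched as follows. This is a hypothetical illustration, not the server's actual implementation: the function name and matching details are assumptions, based only on the prefixes documented here.

```python
# Illustrative sketch of prefix-based model identification (not Ragic's
# actual code). A model name is recognized if it starts with one of the
# documented prefixes; the trailing * in the table means "any suffix".
SUPPORTED_PREFIXES = ("local/", "qwen", "llama", "mistral", "phi", "deepseek")

def is_supported_local_model(model_name: str) -> bool:
    """Return True if the model name matches a documented prefix."""
    # str.startswith accepts a tuple and checks each prefix in turn.
    return model_name.lower().startswith(SUPPORTED_PREFIXES)

print(is_supported_local_model("qwen2.5:32b"))        # True
print(is_supported_local_model("local/custom-model")) # True
print(is_supported_local_model("gpt-4o"))             # False
```

Names such as `qwen2.5:32b` and `llama3.3:70b` follow the `name:tag` convention used by common local model runtimes, so only the part before the version tag needs to match a prefix.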