How to Use a Local AI Model on an On-premises Private Server

You can configure a local AI model on an on-premises private server so that all AI processing runs within your own infrastructure, without connecting to external services. This improves data security and supports offline environments.

Configure the following parameters in ragic.properties:

1. LOCAL_AI_URL=

Required. Enter the API endpoint of your Local AI model service. It must be compatible with the OpenAI API format, for example: http://localhost:11434/v1/.

2. LOCAL_AI_KEY=

Optional. Enter the API key if required by your AI service. Most Local AI model services do not require this setting.

Note: Once LOCAL_AI_URL is configured, the system will always use the Local AI model and will not switch to cloud AI services.
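The two parameters above might look like this in ragic.properties (a minimal sketch with placeholder values; the Ollama-style endpoint and the key are examples, not values specific to your environment):

```properties
# Example ragic.properties entries (placeholder values).
# LOCAL_AI_URL must point at an OpenAI-API-compatible endpoint,
# e.g. a locally hosted Ollama server.
LOCAL_AI_URL=http://localhost:11434/v1/

# Optional: only needed if your local AI service requires a key.
# Most local services do not, so this line can be omitted entirely.
LOCAL_AI_KEY=your-api-key
```

Remember that once LOCAL_AI_URL is set, cloud AI services are no longer used, so verify the endpoint is reachable before saving.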

Supported Model Prefixes

The system automatically identifies and maps models based on their name prefixes and supports the following common models:

Prefix      Example
local/      local/custom-model
qwen*       qwen2.5:32b
llama*      llama3.3:70b
mistral*    mistral:22b
phi*        phi3:14b
deepseek*   deepseek-v2:16b
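As an illustration of the prefix matching described above (a sketch only, not Ragic's actual implementation), "local/" acts as a literal prefix, while entries ending in "*" match any model name that starts with the part before the asterisk:

```python
# Illustrative sketch of prefix-based model matching (not Ragic's actual code).
# Entries ending in "*" are wildcards; "local/" is a literal prefix.
SUPPORTED_PREFIXES = ["local/", "qwen*", "llama*", "mistral*", "phi*", "deepseek*"]

def matches_supported_prefix(model_name: str) -> bool:
    """Return True if model_name starts with any supported prefix."""
    for prefix in SUPPORTED_PREFIXES:
        # Strip the trailing "*" so "qwen*" matches "qwen2.5:32b", etc.
        stem = prefix[:-1] if prefix.endswith("*") else prefix
        if model_name.startswith(stem):
            return True
    return False

print(matches_supported_prefix("qwen2.5:32b"))   # True
print(matches_supported_prefix("llama3.3:70b"))  # True
print(matches_supported_prefix("gpt-4o"))        # False
```

A model outside these families would not be matched, so check that your model's name begins with one of the listed prefixes.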
