# Install Ollama
*curl -fsSL https://ollama.com/install.sh | sh*
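To confirm the install worked, you can check that the CLI is on your PATH (exact output varies by release):

```bash
# Print the installed Ollama version
ollama --version
```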
# Pull a model
*ollama pull llama3*
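You can also pull a specific tag if you want a particular model size; `llama3:8b` below is just an example tag from the Ollama library:

```bash
# Pull a specific variant instead of the default tag
ollama pull llama3:8b
```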
# Run it manually
*ollama run llama3*
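`ollama run` drops you into an interactive prompt; you can also pass a prompt inline for a one-off answer (the prompt text here is just an example):

```bash
# One-shot, non-interactive run
ollama run llama3 "Summarize what Ollama does in one sentence."
```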
# List available models
*ollama list*
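Related housekeeping commands, in case you want to inspect or remove a model you have pulled:

```bash
# Show details for a pulled model
ollama show llama3

# Remove a model you no longer need
ollama rm llama3
```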
# Start the local server (listens on **port 11434** by default)
*ollama serve*
Your backend can then call ➜ http://localhost:11434/api/generate
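A minimal request against that endpoint might look like this (the model name matches the pull above; `"stream": false` returns a single JSON object instead of a token stream):

```bash
# Ask the local Ollama server for a completion
curl http://localhost:11434/api/generate \
  -d '{
    "model": "llama3",
    "prompt": "Why is the sky blue?",
    "stream": false
  }'
```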
If you want to access it remotely (e.g. from a backend hosted elsewhere):
*export OLLAMA_HOST=0.0.0.0*
*ollama serve*
Then you can call it from another machine: http://<server-ip>:11434/api/generate
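To sanity-check connectivity from the remote machine (and that port 11434 is open in any firewall), you can hit the model-listing endpoint first; `<server-ip>` stays a placeholder for your actual host:

```bash
# Quick reachability check: lists the models available on the server
curl http://<server-ip>:11434/api/tags
```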