Saturday, November 29, 2025

Working with LLM/SLM locally using Ollama

Ollama is an open-source tool created by Jeffrey Morgan and Michael Chiang that lets you run large language models (LLMs) and small language models (SLMs) locally on your own computer. It simplifies downloading, managing, and running open-source models through a command-line interface (CLI) and a REST API. Running models locally with Ollama brings benefits such as stronger data privacy and security, cost savings, and the ability to work offline.

How to install Ollama?

  • Windows
    • Go to the official Ollama website and download the Windows installer.
    • Run the downloaded OllamaSetup.exe file.
    • Follow the on-screen instructions to complete the installation.
    • Open a new command prompt or PowerShell window.

  • Mac and Linux - Run the following command to install Ollama (a safer two-step variant is sketched after this list):

        curl -fsSL https://ollama.ai/install.sh | sh
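If you would rather not pipe a remote script straight into your shell, a minimal two-step variant (assuming curl and sh are available) is to download the installer script, review it, and then run it:

        curl -fsSL https://ollama.ai/install.sh -o install.sh   # download the installer script
        less install.sh                                          # inspect the script before running it
        sh install.sh                                            # run the installer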

How to verify the Ollama installation?

Open a terminal and run the command ollama --version. It will print the currently installed version of Ollama.
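For example (the exact version string depends on the release you installed):

        ollama --version
        # prints something like: ollama version is 0.5.7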

How to run Ollama locally?

To start Ollama locally, run the command ollama serve.
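Note that ollama serve keeps running in the foreground and serves requests until you stop it, so leave that terminal open and use a second terminal for other commands:

        ollama serve    # starts the Ollama server (default port 11434); press Ctrl+C to stop it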

How to check if Ollama is working or not?

Open a browser and go to http://localhost:<OllamaPortNumber>/. If Ollama is running, the page will display the message "Ollama is running". With the default port (11434), the URL is:

http://localhost:11434/
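You can also check from the command line with curl (assuming the default port 11434):

        curl http://localhost:11434/
        # responds with: Ollama is running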

What are the different SLM/LLM models available that work with Ollama?

  • deepseek-r1
  • llama3.1
  • phi3
  • llama3.2
  • mistral

How to install open-source LLM/SLM models?

  • Open a terminal.
  • Run ollama serve to start Ollama.
  • Open a new terminal and run ollama pull <model name>, for example ollama pull llama3.2, to download a model (see the sketch after this list).
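A minimal sketch of pulling a model and confirming it is available locally (llama3.2 is just an example; any model from the Ollama library works the same way):

        ollama pull llama3.2   # downloads the model weights to your machine
        ollama list            # lists all locally installed models, which should now include llama3.2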

How to interact with models using Ollama in the terminal?

  • Run the command ollama run <model name>, for example ollama run llama3.2.
  • Ask any question and the model will answer directly in the terminal (a sample session is sketched below).
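A sample session might look like this (the question and answer are only illustrative; type /bye to exit the interactive prompt):

        ollama run llama3.2
        >>> What is the capital of France?
        The capital of France is Paris.
        >>> /bye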

How to interact with models running under Ollama using the REST API?

  • Open a terminal.
  • Start Ollama with the ollama serve command.
  • Run curl to send your queries, as shown below. You can also use Postman.

curl --location 'http://localhost:11434/api/chat' \
--header 'Content-Type: application/json' \
--data '{
    "model": "deepseek-r1",
    "messages": [{ "role": "user", "content": "who is presidnet of united states" }],
    "stream": false
}'

  • The /api/chat endpoint expects a POST request with a JSON body; a plain GET request is not supported and will fail, so make sure your client sends a POST.
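Besides /api/chat, Ollama also exposes an /api/generate endpoint for single-prompt completions. A minimal sketch, assuming the model has already been pulled and the server is running on the default port:

curl --location 'http://localhost:11434/api/generate' \
--header 'Content-Type: application/json' \
--data '{
    "model": "llama3.2",
    "prompt": "Explain what Ollama is in one sentence.",
    "stream": false
}'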

How to change the default port of Ollama?

On Windows, press Windows + R to open the Run dialog, type sysdm.cpl, and press Enter to open System Properties.

In the System Properties window, go to the "Advanced" tab and click the "Environment Variables" button.

In the "System variables" section (or "User variables" if you want the change to apply only to your user), click "New" if OLLAMA_HOST doesn't exist, or select it and click "Edit" if it does.

Set OLLAMA_HOST to the desired IP address and port, for example 0.0.0.0:14434 to listen on all network interfaces on port 14434, or 127.0.0.1:8000 to listen only on localhost on port 8000.

Click "OK" on all open windows to save the changes, then restart Ollama so the new setting takes effect.

Are there any alternatives to Ollama?

The closest counterpart to Ollama is LM Studio. Both run locally installed LLM/SLM models, but LM Studio is known for its simple, GUI-based experience, with an easy-to-use chat interface and built-in model downloading. Ollama is aimed at developers who want command-line control, scripting, and API integration, and its CLI and REST API focus makes it more flexible for backend development.

