How to Install Ollama to Run DeepSeek R1 Locally on Windows and Access the API from Another Computer

Filesystem Requirements
Before installing Ollama, make sure your system meets the following requirements:
- No Administrator Privileges Needed: Ollama installs in your home directory by default.
- Storage Space:
  - About 4 GB for the Ollama binary install.
  - Additional tens to hundreds of GB for storing large language models.
You can change both the installation directory (Step 1) and where models are stored (Step 2).
Step 1: Download and Install Ollama on Windows
1. Download Ollama:
a. Go to the Ollama website.
b. Download the Windows version of the installer.
2. Install Ollama:
a. Run the downloaded OllamaSetup.exe.
b. If you want to install it in a custom directory, use the following command in the terminal:
OllamaSetup.exe /DIR="D:\some\location"
c. Follow the on-screen instructions to complete the installation.
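After installation, you can confirm where the executable landed. The installer adds Ollama to your user PATH, so the built-in where command should locate it:
where ollama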
Step 2: Changing the Model Storage Location
By default, models are stored under your home directory (on Windows, C:\Users\<username>\.ollama\models). If you want to store them in a different location, follow these steps:
- Open Settings (Windows 11) or Control Panel (Windows 10).
- Search for "Environment Variables".
- Click "Edit environment variables for your account".
- Create a new user environment variable:
  - Variable Name: OLLAMA_MODELS
  - Value: D:\your\desired\path
- Click OK/Apply to save the changes.
- If Ollama is running, quit the tray application and relaunch it from the Start menu.
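If you prefer the command line over the Settings dialog, the same per-user variable can be created with setx (the path below is a placeholder; note that setx only affects newly opened terminals, and the Ollama app must still be restarted to pick it up):
setx OLLAMA_MODELS "D:\your\desired\path"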
Step 3: Running the DeepSeek-R1 Model on Ollama
1. Verify Ollama Installation
Open Command Prompt and run:
ollama -v
If installed correctly, it will return the installed version.
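For example, you should see a single version line similar to the following (the exact version will depend on when you installed):
ollama version is 0.5.7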
2. Download and Run DeepSeek-R1
1. Visit the Ollama DeepSeek Library.
2. Select DeepSeek-R1 and pick a parameter size that fits your hardware; larger variants give better answers but need more memory and disk space.
3. Open Command Prompt and run:
ollama run deepseek-r1:8b
4. Ollama will download the model on the first run; the 8b variant is roughly a 5 GB download, so this can take a while.
5. Once the model is installed, you can start prompting directly in the command window.
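You can also skip the interactive session and pass a prompt as a command-line argument for a one-shot answer, and check which models you have pulled with ollama list:
ollama run deepseek-r1:8b "Explain what a context window is in two sentences."
ollama list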
Step 4: Enabling API Access for Other Computers
By default, the Ollama API listens only on localhost:11434, meaning it can't be accessed from another computer. You need to change this setting.
1. Start Ollama API Server
Run the following command:
ollama serve
However, this only makes the API available on localhost.
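Note that if the Ollama tray application is already running, it is already serving the API, and a second ollama serve will complain that the address is in use; in that case, simply test against the running instance. You can verify the local API with curl, which ships with Windows 10 and later. The /api/generate endpoint accepts a model name and a prompt; setting stream to false returns one JSON object instead of a token stream:
curl http://localhost:11434/api/generate -d "{\"model\": \"deepseek-r1:8b\", \"prompt\": \"Hello\", \"stream\": false}"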
2. Allow External Access
To allow access from another computer, you need to set the OLLAMA_HOST environment variable to 0.0.0.0, which makes the server listen on all network interfaces instead of just loopback.
Method 1: Using Environment Variables
- Open Settings (Windows 11) or Control Panel (Windows 10).
- Search for "Environment Variables".
- Click "Edit environment variables for your account".
- Create a new user environment variable:
  - Variable Name: OLLAMA_HOST
  - Value: 0.0.0.0
- Click OK/Apply to save.
- If Ollama is already running, quit the tray application and restart it.
Method 2: Using Command Line
Alternatively, you can set it just for the current terminal session (the set syntax below is for Command Prompt; in PowerShell, use $env:OLLAMA_HOST="0.0.0.0" instead):
set OLLAMA_HOST=0.0.0.0
ollama serve
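Whichever method you use, you can check which address the server is bound to with netstat. A LISTENING entry on 0.0.0.0:11434 means it accepts connections from any interface; 127.0.0.1:11434 means it is still local-only:
netstat -an | findstr "11434"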
Step 5: Testing API Access from Another Computer
- Find the IP address of the computer running Ollama:
  - Open Command Prompt and type:
ipconfig
  - Look for the IPv4 Address of your active network adapter.
- On another computer, open a web browser and navigate to:
http://<Ollama host IP address>:11434
  Example: http://192.168.1.100:11434
- If you see "Ollama is running", the API is accessible.
- If you get an error:
  - Check whether port 11434 is open in your firewall.
  - Allow incoming connections on port 11434 in Windows Defender Firewall (see the example command after this list).
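If the port is blocked, you can add an inbound rule from an elevated Command Prompt. This is a minimal sketch: the rule name is arbitrary, and you may prefer to restrict the rule to your local subnet in the Windows Defender Firewall UI rather than allowing all sources:
netsh advfirewall firewall add rule name="Ollama API" dir=in action=allow protocol=TCP localport=11434
Once the browser check succeeds, you can exercise the API itself from the remote machine. The address below is the example IP from above; substitute your host's actual address:
curl http://192.168.1.100:11434/api/generate -d "{\"model\": \"deepseek-r1:8b\", \"prompt\": \"Hello from another machine\", \"stream\": false}"
Keep in mind that binding to 0.0.0.0 exposes the API to every device on the network, and Ollama provides no authentication of its own, so only enable this on a network you trust.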
Conclusion
By following these steps, you can successfully install Ollama on Windows, run DeepSeek-R1, and enable API access from external computers. This setup allows for remote inference, making it easier to integrate the model into various applications.
If you found this guide helpful, consider supporting me!