added the `--drop` capability and updated the README accordingly

Daniele Viti 2023-12-24 14:21:34 +01:00
parent 567b88bb00
commit 7063f00b71
2 changed files with 18 additions and 7 deletions

README.md

@@ -73,13 +73,22 @@ Don't forget to explore our sibling project, [OllamaHub](https://ollamahub.com/)
 ### Installing Both Ollama and Ollama Web UI Using Docker Compose

-If you don't have Ollama installed yet, you can use the provided Docker Compose file for a hassle-free installation. Simply run the following command:
+If you don't have Ollama installed yet, you can use the provided bash script for a hassle-free installation. Simply run the following command:
+
+For the CPU-only container:

 ```bash
-docker compose up -d --build
+chmod +x run-compose.sh && ./run-compose.sh
 ```

-This command will install both Ollama and Ollama Web UI on your system. Ensure to modify the `compose.yaml` file for GPU support and Exposing Ollama API outside the container stack if needed.
+For the GPU-enabled container (this requires a GPU driver for Docker; it mostly works with NVIDIA, so follow the official install guide for the [nvidia-container-toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html)):
+
+```bash
+chmod +x run-compose.sh && ./run-compose.sh --enable-gpu[count=1]
+```
+
+Note that both commands above use the latest production Docker image from the repository. To build the latest local version instead, append the `--build` parameter, for example:
+
+```bash
+./run-compose.sh --build --enable-gpu[count=1]
+```

 ### Installing Ollama Web UI Only
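The GPU-enabled path above assumes the NVIDIA Container Toolkit is already installed and wired into Docker. As a quick sanity check before running the stack, you can ask Docker to run `nvidia-smi` in a throwaway container; the CUDA image tag below is only an example, not something the repository pins:

```bash
# Sanity check: confirm Docker can reach the GPU before using --enable-gpu.
# The image tag is illustrative; any CUDA base image with nvidia-smi works.
docker run --rm --gpus all nvidia/cuda:12.3.1-base-ubuntu20.04 nvidia-smi
```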

run-compose.sh

@@ -80,10 +80,12 @@ usage() {
echo " -h, --help Show this help message." echo " -h, --help Show this help message."
echo "" echo ""
echo "Examples:" echo "Examples:"
echo " $0 --enable-gpu[count=1]" echo " ./$0 --drop"
echo " $0 --enable-api[port=11435]" echo " ./$0 --enable-gpu[count=1]"
echo " $0 --enable-gpu[count=1] --enable-api[port=12345] --webui[port=3000]" echo " ./$0 --enable-api[port=11435]"
echo " $0 --enable-gpu[count=1] --enable-api[port=12345] --webui[port=3000] --data[folder=./ollama-data]" echo " ./$0 --enable-gpu[count=1] --enable-api[port=12345] --webui[port=3000]"
echo " ./$0 --enable-gpu[count=1] --enable-api[port=12345] --webui[port=3000] --data[folder=./ollama-data]"
echo " ./$0 --enable-gpu[count=1] --enable-api[port=12345] --webui[port=3000] --data[folder=./ollama-data] --build"
echo "" echo ""
echo "This script configures and runs a docker-compose setup with optional GPU support, API exposure, and web UI configuration." echo "This script configures and runs a docker-compose setup with optional GPU support, API exposure, and web UI configuration."
echo "About the gpu to use, the script automatically detects it using the "lspci" command." echo "About the gpu to use, the script automatically detects it using the "lspci" command."