README.md Docker section formatting and wording fix

Jannik S 2024-04-10 10:14:54 +02:00 committed by GitHub
parent b9cadff16b
commit 62ec2651ba


@@ -92,59 +92,56 @@ Don't forget to explore our sibling project, [Open WebUI Community](https://open
> [!NOTE]
> Please note that for certain Docker environments, additional configurations might be needed. If you encounter any connection issues, our detailed guide on [Open WebUI Documentation](https://docs.openwebui.com/) is ready to assist you.

### Quick Start with Docker (3 ways) 🐳

> [!IMPORTANT]
> When using Docker to install Open WebUI, make sure to include the `-v open-webui:/app/backend/data` in your Docker command. This step is crucial as it ensures your database is properly mounted and prevents any loss of data.
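Once a container from one of the commands below has been started, you can sanity-check that the named volume was actually created and mounted. These are illustrative commands, not part of the official instructions; `open-webui` is the volume name used throughout this README:

```bash
# List Docker volumes and confirm open-webui is among them
docker volume ls
# Show where the volume's data lives on the host
docker volume inspect open-webui
```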
1. **If Ollama is on your computer**, use this command:

```bash
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
```

2. **If Ollama is on a Different Server**, use this command:

To connect to Ollama on another server, change the `OLLAMA_BASE_URL` to the server's URL:

```bash
docker run -d -p 3000:8080 -e OLLAMA_BASE_URL=https://example.com -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
```

After installation, you can access Open WebUI at [http://localhost:3000](http://localhost:3000). Enjoy! 😄
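If the page does not load, a quick way to check the container's state is the following (a hedged example, assuming the container name and published port from the commands above):

```bash
# Confirm the container is up
docker ps --filter "name=open-webui"
# Tail the logs for startup errors
docker logs --tail 20 open-webui
# Probe the published port
curl -I http://localhost:3000
```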
3. **If you want to customize your build with additional ARGS**, use these commands:

> [!NOTE]
> If you only want to use Open WebUI with Ollama included or with CUDA acceleration, it's recommended to use our official images with the tags `:cuda` or `:ollama`.
> If you want a combination of both, or more customisation options like a different embedding model and/or CUDA version, you need to build the image yourself following the instructions below.
- **For the build:**

```bash
docker build -t open-webui .
```
- **Optional build ARGS (use them in the `docker build` command above if needed):**

e.g.

```bash
--build-arg="USE_EMBEDDING_MODEL=intfloat/multilingual-e5-large"
```

For the "intfloat/multilingual-e5-large" custom embedding model (the default is `all-MiniLM-L6-v2`); this only works with [sentence transformer models](https://huggingface.co/models?library=sentence-transformers). See the current [leaderboard](https://huggingface.co/spaces/mteb/leaderboard) of embedding models.
```bash
--build-arg="USE_OLLAMA=true"
```

For including Ollama in the image.
```bash
--build-arg="USE_CUDA=true"
```

To use CUDA acceleration for the embedding and Whisper models.
> [!NOTE]
@@ -153,7 +150,6 @@ Don't forget to explore our sibling project, [Open WebUI Community](https://open
```bash
--build-arg="USE_CUDA_VER=cu117"
```

For CUDA 11 (the default is CUDA 12).
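For example, a build combining several of the ARGS above could look like this (an illustrative sketch, not an official recipe; pick only the args you actually need):

```bash
docker build \
  --build-arg="USE_OLLAMA=true" \
  --build-arg="USE_CUDA=true" \
  --build-arg="USE_CUDA_VER=cu117" \
  --build-arg="USE_EMBEDDING_MODEL=intfloat/multilingual-e5-large" \
  -t open-webui .
```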
**To run the image:**