forked from open-webui/open-webui
doc: update
parent 98911511f9
commit 68c5d53264
2 changed files with 15 additions and 123 deletions
README.md
@@ -143,9 +143,11 @@ docker run -d -p 3000:8080 -e OLLAMA_API_BASE_URL=https://example.com/api -v oll
While we strongly recommend using our convenient Docker container installation for optimal support, we understand that some situations may require a non-Docker setup, especially for development purposes. Please note that non-Docker installations are not officially supported, and you might need to troubleshoot on your own.
### Project Components

The Ollama Web UI consists of two primary components: the frontend and the backend (which serves as a reverse proxy, handling static frontend files, and additional features). Both need to be running concurrently for the development environment.

**Warning: Backend Dependency for Proper Functionality**

To ensure the application works correctly, it is crucial to run both the backend and frontend components simultaneously. Serving only the frontend in isolation is not supported and may leave the application in an unpredictable, inoperable state; issues raised for frontend-only setups will not be addressed, as they fall outside the intended usage. Use the frontend solely to build the static files, then run the complete application with the provided backend commands. Configurations that deviate from these steps are unsupported, and we may not be able to assist with them.
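In practice, this means keeping the frontend and the backend running side by side during development. A minimal sketch of that workflow, using the scripts described in the steps below:

```sh
# Terminal 1: frontend dev server
npm install
npm run dev

# Terminal 2: backend with hot reloading (run from the backend/ directory)
cd backend
sh dev.sh
```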
### TL;DR 🚀
@@ -170,86 +172,6 @@ sh start.sh
You should have the Ollama Web UI up and running at http://localhost:8080/. Enjoy! 😄
### Project Components
The Ollama Web UI consists of two primary components: the frontend and the backend (which serves as a reverse proxy, handling static frontend files, and additional features). Both need to be running concurrently for the development environment using `npm run dev`. Alternatively, you can set the `PUBLIC_API_BASE_URL` during the build process to have the frontend connect directly to your Ollama instance or build the frontend as static files and serve them with the backend.
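For example, to have the built frontend talk directly to an Ollama instance instead of the backend, you could pass the URL at build time (the host and port below are only illustrative, assuming Ollama's default port):

```sh
# Build static files that call an Ollama server directly (illustrative URL)
PUBLIC_API_BASE_URL='http://localhost:11434/api' npm run build
```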
### Prerequisites
1. **Clone and Enter the Project:**
```sh
git clone https://github.com/ollama-webui/ollama-webui.git
cd ollama-webui/
```
2. **Create and Edit `.env`:**
```sh
cp -RPp example.env .env
```
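After copying, open `.env` and adjust the values for your setup; the variables referenced elsewhere in this document (for example `OLLAMA_API_BASE_URL` and `PUBLIC_API_BASE_URL`) are the ones you are most likely to touch. A purely illustrative example, not the actual defaults:

```sh
# Illustrative .env contents; check example.env for the real keys and defaults
OLLAMA_API_BASE_URL='http://localhost:11434/api'
PUBLIC_API_BASE_URL=''
```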
### Building Ollama Web UI Frontend
1. **Install Node Dependencies:**
```sh
npm install
```
2. **Run in Dev Mode or Build for Deployment:**
- Dev Mode (requires the backend to be running simultaneously):
```sh
npm run dev
```
- Build for Deployment:
```sh
# `PUBLIC_API_BASE_URL` overwrites the value in `.env`
PUBLIC_API_BASE_URL='https://example.com/api' npm run build
```
3. **Test the Build with `Caddy` (or your preferred server):**
```sh
curl https://webi.sh/caddy | sh

PUBLIC_API_BASE_URL='https://localhost/api' npm run build
caddy run --envfile .env --config ./Caddyfile.localhost
```
### Running Ollama Web UI Backend
If you wish to run the backend for deployment, ensure that the frontend is built so that the backend can serve the frontend files along with the API route.
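Put differently, a typical deployment run chains the frontend build and the backend start, roughly like this (same commands as in the steps above and below):

```sh
# Build the static frontend, then start the backend, which serves those files
npm install
npm run build
cd ./backend
pip install -r requirements.txt
sh start.sh
```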
#### Setup Instructions
1. **Install Python Requirements:**
```sh
cd ./backend
pip install -r requirements.txt
```
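If you prefer to keep the Python dependencies isolated from your system packages, you can optionally create a virtual environment first (standard Python tooling, not a project requirement):

```sh
# Optional: install the requirements inside a virtual environment
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
```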
2. **Run Python Backend:**
- Dev Mode with Hot Reloading:
```sh
sh dev.sh
```
- Deployment:
```sh
sh start.sh
```
Now, you should have the Ollama Web UI up and running at [http://localhost:8080/](http://localhost:8080/). Feel free to explore the features and functionalities of Ollama! If you encounter any issues, please refer to the instructions above or reach out to the community for assistance.
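A quick way to confirm the server is answering is to request the page from the command line (any HTTP client works):

```sh
# Expect an HTTP response once the backend is serving the built frontend
curl -I http://localhost:8080/
```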
## Troubleshooting
See [TROUBLESHOOTING.md](/TROUBLESHOOTING.md) for information on how to troubleshoot and/or join our [Ollama Web UI Discord community](https://discord.gg/5rJgQTnV4s).
TROUBLESHOOTING.md
@@ -1,22 +1,22 @@
# Ollama Web UI Troubleshooting Guide
## Ollama WebUI: Server Connection Error
If you're running ollama-webui and have chosen to install the webui and ollama separately, you might encounter connection issues. This is often because the Docker container cannot reach the Ollama server at 127.0.0.1:11434 (host.docker.internal:11434). To resolve this, use the `--network=host` flag in the docker command. Note that the port changes from 3000 to 8080, so the link becomes: http://localhost:8080.
Here's an example of the command you should run:
```bash
docker run -d --network=host -e OLLAMA_API_BASE_URL=http://127.0.0.1:11434/api --name ollama-webui --restart always ghcr.io/ollama-webui/ollama-webui:main
```
## Connection Errors
Make sure you have the **latest version of Ollama** installed before proceeding with the installation. You can find the latest version of Ollama at [https://ollama.ai/](https://ollama.ai/).
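You can check which version is currently installed from the command line (on recent Ollama releases):

```bash
# Prints the installed Ollama version
ollama --version
```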
If you encounter difficulties connecting to the Ollama server, please follow these steps to diagnose and resolve the issue:
**1. Verify Ollama Server Configuration**
Ensure that the Ollama server is properly configured to accept incoming connections from all origins. To do this, make sure the server is launched with the `OLLAMA_ORIGINS=*` environment variable, as shown in the following command:
```bash
OLLAMA_HOST=0.0.0.0 OLLAMA_ORIGINS=* ollama serve
```
This configuration allows Ollama to accept connections from any source.
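You can then confirm that the server is reachable from the machine running the web UI by querying its API directly; the `/api/tags` endpoint lists the locally available models:

```bash
# Replace localhost with the host where Ollama is running
curl http://localhost:11434/api/tags
```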
**2. Check Ollama URL Format**
Ensure that the Ollama URL is correctly formatted in the application settings. Follow these steps:
@@ -28,33 +28,3 @@ Ensure that the Ollama URL is correctly formatted in the application settings. F
It is crucial to include the `/api` at the end of the URL to ensure that the Ollama Web UI can communicate with the server.
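For example, using the `OLLAMA_API_BASE_URL` value from the Docker commands above (illustrative host and default port):

```bash
# Correct: ends with /api
OLLAMA_API_BASE_URL=http://localhost:11434/api

# Incorrect: missing the /api suffix, so the web UI cannot reach the server
OLLAMA_API_BASE_URL=http://localhost:11434
```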
By following these troubleshooting steps, you should be able to identify and resolve connection issues with your Ollama server configuration. If you require further assistance or have additional questions, please don't hesitate to reach out or refer to our documentation for comprehensive guidance.
## Running ollama-webui as a container on Apple Silicon Mac
If you are running Docker on an Apple Silicon (M1/M2/M3) Mac and have taken the steps to run an x86 container, add `--platform linux/amd64` to the `docker run` command to prevent a warning.
Example:
```bash
docker run -d -p 3000:8080 -e OLLAMA_API_BASE_URL=http://example.com:11434/api --name ollama-webui --restart always ghcr.io/ollama-webui/ollama-webui:main
```
becomes:
```bash
docker run --platform linux/amd64 -d -p 3000:8080 -e OLLAMA_API_BASE_URL=http://example.com:11434/api --name ollama-webui --restart always ghcr.io/ollama-webui/ollama-webui:main
```
## Running ollama-webui as a container on WSL Ubuntu
If you're running ollama-webui via Docker on WSL Ubuntu and have chosen to install the webui and ollama separately, you might encounter connection issues. This is often because the Docker container cannot reach the Ollama server at 127.0.0.1:11434. To resolve this, use the `--network=host` flag in the docker command. Note that the port changes from 3000 to 8080, so the link becomes: http://localhost:8080.
Here's an example of the command you should run:
```bash
docker run -d --network=host -e OLLAMA_API_BASE_URL=http://127.0.0.1:11434/api --name ollama-webui --restart always ghcr.io/ollama-webui/ollama-webui:main
```
## References
[Change Docker Desktop Settings on Mac](https://docs.docker.com/desktop/settings/mac/) (search for "x86" on that page).
[Run x86 (Intel) and ARM based images on Apple Silicon (M1) Macs?](https://forums.docker.com/t/run-x86-intel-and-arm-based-images-on-apple-silicon-m1-macs/117123)