forked from open-webui/open-webui
Merge branch 'main' into dev
This commit is contained in:
commit bd3e168e2f
7 changed files with 82 additions and 16 deletions
README.md (26 lines changed)

@@ -35,7 +35,7 @@ Also check our sibling project, [OllamaHub](https://ollamahub.com/), where you c
 - 🤖 **Multiple Model Support**: Seamlessly switch between different chat models for diverse interactions.
-- 🗃️ **Modelfile Builder**: Easily create Ollama modelfiles via the web UI. Create and add your own character to Ollama by customizing system prompts, conversation starters, and more.
+- 🧩 **Modelfile Builder**: Easily create Ollama modelfiles via the web UI. Create and add characters/agents, customize chat elements, and import modelfiles effortlessly through [OllamaHub](https://ollamahub.com/) integration.
 - ⚙️ **Many Models Conversations**: Effortlessly engage with various models simultaneously, harnessing their unique strengths for optimal responses. Enhance your experience by leveraging a diverse set of models in parallel.

@@ -121,6 +121,29 @@ docker run -d -p 3000:8080 -e OLLAMA_API_BASE_URL=https://example.com/api --name
 While we strongly recommend using our convenient Docker container installation for optimal support, we understand that some situations may require a non-Docker setup, especially for development purposes. Please note that non-Docker installations are not officially supported, and you might need to troubleshoot on your own.
+
+### TL;DR 🚀
+
+Run the following commands to install:
+
+```sh
+git clone https://github.com/ollama-webui/ollama-webui.git
+cd ollama-webui/
+
+# Copying required .env file
+cp -RPp example.env .env
+
+# Building Frontend
+npm i
+npm run build
+
+# Serving Frontend with the Backend
+cd ./backend
+pip install -r requirements.txt
+sh start.sh
+```
+
+You should have the Ollama Web UI up and running at http://localhost:8080/. Enjoy! 😄
+
 ### Project Components
 
 The Ollama Web UI consists of two primary components: the frontend and the backend (which serves as a reverse proxy, handling static frontend files, and additional features). Both need to be running concurrently for the development environment using `npm run dev`. Alternatively, you can set the `PUBLIC_API_BASE_URL` during the build process to have the frontend connect directly to your Ollama instance or build the frontend as static files and serve them with the backend.
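
The `PUBLIC_API_BASE_URL` build-time option mentioned in the paragraph above is easiest to see as a command. A minimal sketch of the standalone-frontend build it describes, assuming a local Ollama instance on its default port (the URL is an illustration, not part of this diff):

```sh
# Build the frontend so it talks directly to an Ollama instance,
# bypassing the backend reverse proxy (assumed URL; adjust to your setup).
PUBLIC_API_BASE_URL='http://localhost:11434/api' npm run build
```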

@@ -211,7 +234,6 @@ See [TROUBLESHOOTING.md](/TROUBLESHOOTING.md) for information on how to troubles
 Here are some exciting tasks on our roadmap:
 
 - 🔄 **Multi-Modal Support**: Seamlessly engage with models that support multimodal interactions, including images (e.g., LLava).
 - 📚 **RAG Integration**: Experience first-class retrieval augmented generation support, enabling chat with your documents.
 - 🔐 **Access Control**: Securely manage requests to Ollama by utilizing the backend as a reverse proxy gateway, ensuring only authenticated users can send specific requests.

TROUBLESHOOTING.md

@@ -25,3 +25,23 @@ Ensure that the Ollama URL is correctly formatted in the application settings. F
 It is crucial to include the `/api` at the end of the URL to ensure that the Ollama Web UI can communicate with the server.
 
 By following these troubleshooting steps, you should be able to identify and resolve connection issues with your Ollama server configuration. If you require further assistance or have additional questions, please don't hesitate to reach out or refer to our documentation for comprehensive guidance.
+
+## Running ollama-webui as a container on Apple Silicon Mac
+
+If you are running Docker on an M{1..3} based Mac and have taken the steps to run an x86 container, add "--platform linux/amd64" to the docker run command to prevent a warning.
+
+Example:
+
+```bash
+docker run -d -p 3000:8080 -e OLLAMA_API_BASE_URL=http://example.com:11434/api --name ollama-webui --restart always ghcr.io/ollama-webui/ollama-webui:main
+```
+
+Becomes:
+
+```bash
+docker run --platform linux/amd64 -d -p 3000:8080 -e OLLAMA_API_BASE_URL=http://example.com:11434/api --name ollama-webui --restart always ghcr.io/ollama-webui/ollama-webui:main
+```
+
+## References
+
+[Change Docker Desktop Settings on Mac](https://docs.docker.com/desktop/settings/mac/) (search for "x86" on that page).
+
+[Run x86 (Intel) and ARM based images on Apple Silicon (M1) Macs?](https://forums.docker.com/t/run-x86-intel-and-arm-based-images-on-apple-silicon-m1-macs/117123)
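
As a quick way to confirm the `/api` suffix advice earlier in this hunk, Ollama's model-listing endpoint makes a handy probe. A minimal check, assuming Ollama on its default local port (host and port are assumptions):

```sh
# A JSON list of installed models confirms the URL is well formed and
# reachable; a connection error points back to the steps above.
curl http://localhost:11434/api/tags
```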

backend/config.py

@@ -6,8 +6,7 @@ from secrets import token_bytes
 from base64 import b64encode
 import os
 
-load_dotenv(find_dotenv("../.env"))
+load_dotenv(find_dotenv())
 
 ####################################
 # ENV (dev,test,prod)

@@ -38,7 +37,7 @@ WEBUI_VERSION = os.environ.get("WEBUI_VERSION", "v1.0.0-alpha.21")
 ####################################
 
-WEBUI_AUTH = True if os.environ.get("WEBUI_AUTH", "TRUE") == "TRUE" else False
+WEBUI_AUTH = True if os.environ.get("WEBUI_AUTH", "FALSE") == "TRUE" else False
 
 ####################################
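
With the default flipped from "TRUE" to "FALSE", authentication is now opt-in. A minimal sketch of enabling it at launch, reusing the start script from the README's TL;DR (the invocation is illustrative, not part of this diff):

```sh
# The backend now starts without auth unless WEBUI_AUTH is explicitly "TRUE".
cd ./backend
WEBUI_AUTH=TRUE sh start.sh
```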

example.env (18 lines changed)

@@ -1,8 +1,12 @@
-# must be defined, but defaults to 'http://{location.hostname}:11434/api'
-# can also use path, such as '/api'
-PUBLIC_API_BASE_URL=''
-OLLAMA_API_ID='my-api-token'
-OLLAMA_API_TOKEN='xxxxxxxxxxxxxxxx'
-# generated by passing the token to `caddy hash-password`
-OLLAMA_API_TOKEN_DIGEST='$2a$14$iyyuawykR92xTHNR9lWzfu.uCct/9/xUPX3zBqLqrjAu0usNRPbyi'
+# If you're serving both the frontend and backend (Recommended)
+# Set the public API base URL for seamless communication
+PUBLIC_API_BASE_URL='/ollama/api'
+
+# If you're serving only the frontend (Not recommended and not fully supported)
+# Comment the line above and uncomment the line below
+# You can use the default value or specify a custom path, e.g., '/api'
+# PUBLIC_API_BASE_URL='http://{location.hostname}:11434/api'
+
+# Ollama URL for the backend to connect
+# The path '/ollama/api' will be redirected to the specified backend URL
+OLLAMA_API_BASE_URL='http://localhost:11434/api'
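
The '/ollama/api' redirection described in the new comments is the crux of the recommended setup: the frontend calls a relative path, and the backend forwards it to `OLLAMA_API_BASE_URL`. A quick end-to-end probe, assuming the backend is serving on port 8080 as in the README (host and port are assumptions):

```sh
# Requests under /ollama/api are proxied to OLLAMA_API_BASE_URL, so this
# should return the same model list as querying Ollama directly.
curl http://localhost:8080/ollama/api/tags
```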

@@ -317,7 +317,21 @@
 </div>
 <div class=" mt-2 text-2xl text-gray-800 dark:text-gray-100 font-semibold">
 	{#if selectedModelfile}
+		<span class=" capitalize">
+			{selectedModelfile.title}
+		</span>
+		<div class="mt-0.5 text-base font-normal text-gray-600 dark:text-gray-400">
 			{selectedModelfile.desc}
+		</div>
+		{#if selectedModelfile.user}
+			<div class="mt-0.5 text-sm font-normal text-gray-500 dark:text-gray-500">
+				By <a href="https://ollamahub.com/"
+					>{selectedModelfile.user.name
+						? selectedModelfile.user.name
+						: `@${selectedModelfile.user.username}`}</a
+				>
+			</div>
+		{/if}
 	{:else}
 		How can I help you today?
 	{/if}

@@ -7,7 +7,7 @@
 	const deleteModelHandler = async (tagName) => {
 		let success = null;
-		const res = await fetch(`${OLLAMA_API_BASE_URL}/delete`, {
+		const res = await fetch(`${$settings?.API_BASE_URL ?? OLLAMA_API_BASE_URL}/delete`, {
 			method: 'DELETE',
 			headers: {
 				'Content-Type': 'text/event-stream',

@@ -52,6 +52,8 @@
 		num_ctx: ''
 	};
 
+	let modelfileCreator = null;
+
 	$: tagName = title !== '' ? `${title.replace(/\s+/g, '-').toLowerCase()}:latest` : '';
 
 	$: if (!raw) {

@@ -202,7 +204,8 @@ SYSTEM """${system}"""`.replace(/^\s*\n/gm, '');
|
desc: desc,
|
||||||
content: content,
|
content: content,
|
||||||
suggestionPrompts: suggestions.filter((prompt) => prompt.content !== ''),
|
suggestionPrompts: suggestions.filter((prompt) => prompt.content !== ''),
|
||||||
categories: Object.keys(categories).filter((category) => categories[category])
|
categories: Object.keys(categories).filter((category) => categories[category]),
|
||||||
|
user: modelfileCreator !== null ? modelfileCreator : undefined
|
||||||
});
|
});
|
||||||
await goto('/modelfiles');
|
await goto('/modelfiles');
|
||||||
}
|
}
|
||||||

@@ -237,6 +240,10 @@ SYSTEM """${system}"""`.replace(/^\s*\n/gm, '');
 		}
 	];
 
+	modelfileCreator = {
+		username: modelfile.user.username,
+		name: modelfile.user.name
+	};
 	for (const category of modelfile.categories) {
 		categories[category.toLowerCase()] = true;
 	}