Merge branch 'ollama-webui:main' into main

Duc Dang 2023-11-29 19:21:54 -08:00 committed by GitHub
commit ec3f2a3e1e
3 changed files with 80 additions and 12 deletions


@@ -115,43 +115,53 @@ docker build -t ollama-webui .
docker run -d -p 3000:8080 -e OLLAMA_API_BASE_URL=https://example.com/api --name ollama-webui --restart always ollama-webui
```
## How to Install Without Docker

While we strongly recommend using our convenient Docker container installation for optimal support, we understand that some situations may require a non-Docker setup, especially for development purposes. Please note that non-Docker installations are not officially supported, and you might need to troubleshoot on your own.

### Project Components
The Ollama Web UI consists of two primary components: the frontend and the backend (which serves as a reverse proxy, handles static frontend files, and provides additional features). Both need to be running concurrently for the development environment using `npm run dev`. Alternatively, you can set `PUBLIC_API_BASE_URL` during the build process to have the frontend connect directly to your Ollama instance, or you can build the frontend as static files and serve them with the backend.
### Prerequisites
1. **Clone and Enter the Project:**
```sh
git clone https://github.com/ollama-webui/ollama-webui.git
cd ollama-webui/
```
2. **Create and Edit `.env`:** (an illustrative sketch of typical contents follows below)
```sh
cp -RPp example.env .env
```
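The sketch below is purely hypothetical; `example.env` in the repository is the authoritative template, and both values shown (the default local Ollama API address and an empty `PUBLIC_API_BASE_URL`) are assumptions to adjust for your own setup.

```sh
# Hypothetical .env contents -- copied from example.env and then edited; values are assumptions.
OLLAMA_API_BASE_URL='http://localhost:11434/api'  # where your Ollama API is reachable (11434 is Ollama's default port)
PUBLIC_API_BASE_URL=''                            # can be overridden at build time (see the build step below)
```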
### Building Ollama Web UI Frontend
1. **Install Node Dependencies:**
```sh
npm install
```
2. **Run in Dev Mode or Build for Deployment:**
- Dev Mode (requires the backend to be running simultaneously):
```sh
npm run dev
```
- Build for Deployment:
```sh
# `PUBLIC_API_BASE_URL` overwrites the value in `.env`
PUBLIC_API_BASE_URL='https://example.com/api' npm run build
```
3. **Test the Build with `Caddy` (or your preferred server):** (a lighter-weight alternative is sketched after this step)
```sh
curl https://webi.sh/caddy | sh
@@ -160,6 +170,35 @@ docker run -d -p 3000:8080 -e OLLAMA_API_BASE_URL=https://example.com/api --name
caddy run --envfile .env --config ./Caddyfile.localhost
```
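If you would rather not install Caddy just for a quick check, any static file server works. A minimal sketch, assuming the production build output lands in `./build` (the output directory and port are assumptions; check your build output):

```sh
# Serve the static build with Python's built-in web server (illustrative only).
python3 -m http.server 8081 --directory ./build
```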
### Running Ollama Web UI Backend
If you wish to run the backend for deployment, ensure that the frontend is built so that the backend can serve the frontend files along with the API route.
#### Setup Instructions
1. **Install Python Requirements:**
```sh
cd ./backend
pip install -r requirements.txt
```
2. **Run Python Backend:**
- Dev Mode with Hot Reloading:
```sh
sh dev.sh
```
- Deployment:
```sh
sh start.sh
```
Now, you should have the Ollama Web UI up and running at [http://localhost:8080/](http://localhost:8080/). Feel free to explore the features and functionalities of Ollama! If you encounter any issues, please refer to the instructions above or reach out to the community for assistance.
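As an optional sanity check, you can confirm the backend is responding from the command line (the URL comes from the step above):

```sh
# Expect an HTTP 200 and the frontend's HTML once the build and backend are both in place.
curl -i http://localhost:8080/
```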
## Troubleshooting
See [TROUBLESHOOTING.md](/TROUBLESHOOTING.md) for information on how to troubleshoot and/or join our [Ollama Web UI Discord community](https://discord.gg/5rJgQTnV4s).


@@ -28,6 +28,11 @@
})();
}
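// Read a chat message aloud using the browser's built-in SpeechSynthesis API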
const speakMessage = (message) => {
const speak = new SpeechSynthesisUtterance(message);
speechSynthesis.speak(speak);
};
const createCopyCodeBlockButton = () => {
// use a class selector if available
let blocks = document.querySelectorAll('pre');
@@ -420,7 +425,7 @@
{/each}
</div>
{/if}
<pre class="">{message.content}</pre>
<div class=" flex justify-start space-x-1">
{#if message.parentId !== null && message.parentId in history.messages && (history.messages[message.parentId]?.childrenIds.length ?? 0) > 1}
@@ -692,6 +697,30 @@
>
</button>
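<!-- Text-to-speech button: reads this message aloud via speakMessage() -->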
<button
class="{messageIdx + 1 === messages.length
? 'visible'
: 'invisible group-hover:visible'} p-1 rounded dark:hover:bg-gray-800 transition"
on:click={() => {
speakMessage(message.content);
}}
>
<svg
xmlns="http://www.w3.org/2000/svg"
fill="none"
viewBox="0 0 24 24"
stroke-width="1.5"
stroke="currentColor"
class="w-4 h-4"
>
<path
stroke-linecap="round"
stroke-linejoin="round"
d="M19.114 5.636a9 9 0 010 12.728M16.463 8.288a5.25 5.25 0 010 7.424M6.75 8.25l4.72-4.72a.75.75 0 011.28.53v15.88a.75.75 0 01-1.28.53l-4.72-4.72H4.51c-.88 0-1.704-.507-1.938-1.354A9.01 9.01 0 012.25 12c0-.83.112-1.633.322-2.396C2.806 8.756 3.63 8.25 4.51 8.25H6.75z"
/>
</svg>
</button>
{#if messageIdx + 1 === messages.length}
<button
type="button"