# Changelog

All notable changes to this project will be documented in this file.

The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [0.1.104] - UNRELEASED

### Added

- Check for updates in Settings > About.

## [0.1.103] - 2024-02-25

### Added

- **🔗 Built-in LiteLLM Proxy**: Now includes a LiteLLM proxy within Open WebUI for enhanced functionality.
  - Easily integrate existing LiteLLM configurations using the `-v /path/to/config.yaml:/app/backend/data/litellm/config.yaml` flag (see the run-command sketch after this list).
  - When using a Docker container to run Open WebUI, ensure connections to localhost use `host.docker.internal`.
- **🖼️ Image Generation Enhancements**: Introducing advanced settings with an image preview feature.
  - Customize image generation by setting the number of steps; defaults to the A1111 value.
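
A minimal run-command sketch for the LiteLLM config mount above, assuming the standard `ghcr.io/open-webui/open-webui:main` image and the default `3000:8080` port mapping; the volume name and config path are illustrative and should be adjusted to your setup:

```bash
# Run Open WebUI with an existing LiteLLM config mounted into the container.
# --add-host makes host.docker.internal resolve to the Docker host on Linux.
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  -v /path/to/config.yaml:/app/backend/data/litellm/config.yaml \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main
```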

### Fixed

- Resolved an issue where the RAG scan halted document loading when it encountered unsupported MIME types or exceptions (Issue #866).

### Changed

## [0.1.102] - 2024-02-22

### Added

- **🖼️ Image Generation**: Generate images using the AUTOMATIC1111/stable-diffusion-webui API. You can set this up in Settings > Images (see the setup sketch after this list).
- **📝 Change title generation prompt**: Change the prompt used to generate titles for your chats. You can set this up in Settings > Interface.
- **🤖 Change embedding model**: Change the embedding model used to generate embeddings for your chats in the Dockerfile. Use any sentence transformer model from huggingface.co (see the sketch after this list).
- **📢 CHANGELOG.md/Popup**: A popup that shows you the latest changes.
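
A hedged setup sketch for the image generation feature: AUTOMATIC1111/stable-diffusion-webui must expose its HTTP API for Open WebUI to call it. The flags below are real A1111 launcher options; the default port 7860 and the `host.docker.internal` address for a Dockerized Open WebUI are assumptions about a typical setup.

```bash
# Start AUTOMATIC1111/stable-diffusion-webui with its HTTP API enabled.
# --listen binds to all interfaces so a Dockerized Open WebUI can reach it.
./webui.sh --api --listen
```

Then, in Settings > Images, point the AUTOMATIC1111 base URL at the webui host, e.g. `http://host.docker.internal:7860` when Open WebUI runs in Docker.

For the embedding model change, a generic sketch of pre-fetching an alternative sentence transformer at image build time; the model name and the exact Dockerfile hook are illustrative, not the project's confirmed mechanism:

```bash
# Hypothetical build-time step: cache a Hugging Face sentence-transformers
# model (the name here is only an example) so it ships inside the image.
python -c "from sentence_transformers import SentenceTransformer; SentenceTransformer('all-MiniLM-L6-v2')"
```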

## [0.1.101] - 2024-02-22

### Fixed

- LaTeX output formatting issue (#828)

### Changed

- Switched from the previous 1.0.0-alpha.101 scheme to semantic versioning to respect global conventions.