feat: Expand bot usage control and API support

Enhanced bot flexibility by allowing room IDs in the allowed users list, giving more granular control over who can interact with the bot. This is particularly useful when the bot's usage needs to be restricted to specific rooms. Documentation and configuration have also been updated to reflect newly supported AI models and self-hosted API support; README.md and config.dist.ini now offer clearer guidance on setup, configuration, and troubleshooting, to improve user experience and ease of deployment.

- Introduced the ability for room-specific bot access, enhancing user and room management flexibility.
- Expanded AI model support, including `gpt-4o` and `ollama`, increasing the bot's versatility and application scenarios.
- Updated the tested Python version to 3.12 (the minimum remains 3.10), so users can leverage the latest language features and improvements.
- Improved troubleshooting documentation to assist users in resolving common issues more efficiently.
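For illustration, an allow-list combining all three supported forms (a specific user, a homeserver wildcard, and a room ID) might look like this in config.dist.ini; all values below are placeholders:

```ini
# Allow one specific user, everyone on one homeserver,
# and everyone inside one specific room:
AllowedUsers = ["@admin:example.org", "*:matrix.local", "!yourroomid:matrix.local"]
```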
Kumi 2024-05-16 07:24:34 +02:00
parent e58bea20ca
commit 15a93d8231
Signed by: kumi
GPG key ID: ECBCC9082395383F
5 changed files with 65 additions and 33 deletions

CHANGELOG.md

@@ -1,47 +1,52 @@
 # Changelog
+### 0.3.10 (2024-05-16)
+- Add support for specifying room IDs in `AllowedUsers`
+- Minor fixes
 ### 0.3.9 (2024-04-23)
-* Add Docker support for running the bot in a container
-* Add TrackingMore dependency to pyproject.toml
-* Replace deprecated `pkg_resources` with `importlib.metadata`
-* Allow password-based login on first login
+- Add Docker support for running the bot in a container
+- Add TrackingMore dependency to pyproject.toml
+- Replace deprecated `pkg_resources` with `importlib.metadata`
+- Allow password-based login on first login
 ### 0.3.7 / 0.3.8 (2024-04-15)
-* Changes to URLs in pyproject.toml
-* Migrated build pipeline to Forgejo Actions
+- Changes to URLs in pyproject.toml
+- Migrated build pipeline to Forgejo Actions
 ### 0.3.6 (2024-04-11)
-* Fix issue where message type detection would fail for some messages (cece8cfb24e6f2e98d80d233b688c3e2c0ff05ae)
+- Fix issue where message type detection would fail for some messages (cece8cfb24e6f2e98d80d233b688c3e2c0ff05ae)
 ### 0.3.5
-* Only set room avatar if it is not already set (a9c23ee9c42d0a741a7eb485315e3e2d0a526725)
+- Only set room avatar if it is not already set (a9c23ee9c42d0a741a7eb485315e3e2d0a526725)
 ### 0.3.4 (2024-02-18)
-* Optimize chat model and message handling (10b74187eb43bca516e2a469b69be1dbc9496408)
-* Fix parameter passing in chat response calls (2d564afd979e7bc9eee8204450254c9f86b663b5)
-* Refine message filtering in bot event processing (c47f947f80f79a443bbd622833662e3122b121ef)
+- Optimize chat model and message handling (10b74187eb43bca516e2a469b69be1dbc9496408)
+- Fix parameter passing in chat response calls (2d564afd979e7bc9eee8204450254c9f86b663b5)
+- Refine message filtering in bot event processing (c47f947f80f79a443bbd622833662e3122b121ef)
 ### 0.3.3 (2024-01-26)
-* Implement recursion check in response generation (e6bc23e564e51aa149432fc67ce381a9260ee5f5)
-* Implement tool emulation for models without tool support (0acc1456f9e4efa09e799f6ce2ec9a31f439fe4a)
-* Allow selection of chat model by room (87173ae284957f66594e66166508e4e3bd60c26b)
+- Implement recursion check in response generation (e6bc23e564e51aa149432fc67ce381a9260ee5f5)
+- Implement tool emulation for models without tool support (0acc1456f9e4efa09e799f6ce2ec9a31f439fe4a)
+- Allow selection of chat model by room (87173ae284957f66594e66166508e4e3bd60c26b)
 ### 0.3.2 (2023-12-14)
-* Removed key upload from room event handler
-* Fixed output of `python -m gptbot -v` to display currently installed version
-* Workaround for bug preventing bot from responding when files are uploaded to an encrypted room
+- Removed key upload from room event handler
+- Fixed output of `python -m gptbot -v` to display currently installed version
+- Workaround for bug preventing bot from responding when files are uploaded to an encrypted room
 #### Known Issues
-* When using Pantalaimon: Bot is unable to download/use files uploaded to unencrypted rooms
+- When using Pantalaimon: Bot is unable to download/use files uploaded to unencrypted rooms
 ### 0.3.1 (2023-12-07)
-* Fixed issue in newroom task causing it to be called over and over again
+- Fixed issue in newroom task causing it to be called over and over again

README.md

@@ -1,6 +1,7 @@
 # GPTbot
 [![Support Private.coffee!](https://shields.private.coffee/badge/private.coffee-support%20us!-pink?logo=coffeescript)](https://private.coffee)
+[![Matrix](https://shields.private.coffee/badge/Matrix-join%20us!-blue?logo=matrix)](https://matrix.to/#/#matrix-gptbot:private.coffee)
 GPTbot is a simple bot that uses different APIs to generate responses to
 messages in a Matrix room.
@@ -9,8 +10,8 @@ messages in a Matrix room.
 - AI-generated responses to text, image and voice messages in a Matrix room
   (chatbot)
-  - Currently supports OpenAI (`gpt-3.5-turbo` and `gpt-4`, including vision
-    preview, `whisper` and `tts`)
+  - Currently supports OpenAI (`gpt-3.5-turbo` and `gpt-4`, `gpt-4o`, `whisper`
+    and `tts`) and compatible APIs (e.g. `ollama`)
 - Able to generate pictures using OpenAI `dall-e-2`/`dall-e-3` models
 - Able to browse the web to find information
 - Able to use OpenWeatherMap to get weather information (requires separate
@@ -25,7 +26,7 @@ messages in a Matrix room.
 To run the bot, you will need Python 3.10 or newer.
-The bot has been tested with Python 3.11 on Arch, but should work with any
+The bot has been tested with Python 3.12 on Arch, but should work with any
 current version, and should not require any special dependencies or operating
 system features.
@@ -217,10 +218,12 @@ Note that this currently only works for audio messages and .mp3 file uploads.
 First of all, make sure that the bot is actually running. (Okay, that's not
 really troubleshooting, but it's a good start.)
-If the bot is running, check the logs. The first few lines should contain
-"Starting bot...", "Syncing..." and "Bot started". If you don't see these
-lines, something went wrong during startup. Fortunately, the logs should
-contain more information about what went wrong.
+If the bot is running, check the logs, these should tell you what is going on.
+For example, if the bot is showing an error message like "Timed out, retrying",
+it is unable to reach your homeserver. In this case, check your homeserver URL
+and make sure that the bot can reach it. If you are using Pantalaimon, make
+sure that the bot is pointed to Pantalaimon and not directly to your
+homeserver, and that Pantalaimon is running and reachable.
 If you need help figuring out what went wrong, feel free to open an issue.

config.dist.ini

@@ -45,10 +45,11 @@ Operator = Contact details not set
 # DisplayName = GPTBot
 # A list of allowed users
-# If not defined, everyone is allowed to use the bot
+# If not defined, everyone is allowed to use the bot (so you should really define this)
 # Use the "*:homeserver.matrix" syntax to allow everyone on a given homeserver
+# Alternatively, you can also specify a room ID to allow everyone in the room to use the bot within that room
 #
-# AllowedUsers = ["*:matrix.local"]
+# AllowedUsers = ["*:matrix.local", "!roomid:matrix.local"]
 # Minimum level of log messages that should be printed
 # Available log levels in ascending order: trace, debug, info, warning, error, critical
@@ -76,6 +77,9 @@ LogLevel = info
 # Find this in your OpenAI account:
 # https://platform.openai.com/account/api-keys
 #
+# This may not be required for self-hosted models in that case, just leave it
+# as it is.
+#
 APIKey = sk-yoursecretkey
 # The maximum amount of input sent to the API
@@ -100,7 +104,7 @@ APIKey = sk-yoursecretkey
 # The base URL of the OpenAI API
 #
 # Setting this allows you to use a self-hosted AI model for chat completions
-# using something like https://github.com/abetlen/llama-cpp-python
+# using something like llama-cpp-python or ollama
 #
 # BaseURL = https://openai.local/v1
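For example, a self-hosted setup pointed at a local Ollama instance could combine the `BaseURL` and `APIKey` options like this. This is a sketch, not verified against the bot: `http://localhost:11434/v1` is Ollama's default OpenAI-compatible endpoint, and Ollama ignores the API key, so any placeholder value will do.

```ini
# Sketch: point the bot at a local Ollama instance instead of OpenAI.
# Port 11434 is Ollama's default; /v1 is its OpenAI-compatible API prefix.
BaseURL = http://localhost:11434/v1
# The key is not checked by Ollama, but the field can simply be left set.
APIKey = ollama
```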

pyproject.toml

@@ -7,7 +7,7 @@ allow-direct-references = true
 [project]
 name = "matrix-gptbot"
-version = "0.3.9"
+version = "0.3.10"
 authors = [
   { name="Kumi Mitterer", email="gptbot@kumi.email" },

@@ -553,13 +553,31 @@ class GPTBot:
         return (
             (
                 user_id in self.allowed_users
-                or f"*:{user_id.split(':')[1]}" in self.allowed_users
+                or (
+                    (
+                        f"*:{user_id.split(':')[1]}" in self.allowed_users
+                        or f"@*:{user_id.split(':')[1]}" in self.allowed_users
+                    )
+                    if not user_id.startswith("!") or user_id.startswith("#")
+                    else False
+                )
             )
             if self.allowed_users
             else True
         )
+
+    def room_is_allowed(self, room_id: str) -> bool:
+        """Check if everyone in a room is allowed to use the bot.
+
+        Args:
+            room_id (str): The room ID to check.
+
+        Returns:
+            bool: Whether everyone in the room is allowed to use the bot.
+        """
+        # TODO: Handle published aliases
+        return self.user_is_allowed(room_id)
+
     async def event_callback(self, room: MatrixRoom, event: Event):
         """Callback for events.
@@ -571,7 +589,9 @@ class GPTBot:
         if event.sender == self.matrix_client.user_id:
             return
-        if not self.user_is_allowed(event.sender):
+        if not (
+            self.user_is_allowed(event.sender) or self.room_is_allowed(room.room_id)
+        ):
             if len(room.users) == 2:
                 await self.matrix_client.room_send(
                     room.room_id,
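The allow-list semantics introduced here can be sketched as a standalone function. This is an illustrative reimplementation, not the bot's actual code. Note one subtlety: the committed condition `not user_id.startswith("!") or user_id.startswith("#")` parses as `(not A) or B` under Python's operator precedence, so room aliases (`#...`) would still reach the wildcard check; the sketch below implements the apparent intent, namely that room IDs (`!...`) and aliases never match homeserver wildcards and must be listed explicitly.

```python
def is_allowed(sender_or_room: str, allowed: list[str]) -> bool:
    """Return True if the given Matrix ID may use the bot.

    An empty allow-list permits everyone. Otherwise the ID must appear
    verbatim in the list, or, for user IDs only, match a homeserver
    wildcard ("*:homeserver" or "@*:homeserver"). Room IDs ("!...") and
    aliases ("#...") only ever match by exact listing.
    """
    if not allowed:
        return True  # no allow-list configured: everyone may use the bot
    if sender_or_room in allowed:
        return True  # exact match (covers users and explicitly listed rooms)
    if sender_or_room.startswith(("!", "#")):
        return False  # rooms and aliases never match homeserver wildcards
    homeserver = sender_or_room.split(":")[1]
    return f"*:{homeserver}" in allowed or f"@*:{homeserver}" in allowed
```

With an allow-list like `["@alice:matrix.local", "*:example.org", "!room:matrix.local"]`, this admits `@alice:matrix.local`, any user on `example.org`, and the listed room, while rejecting other `matrix.local` users and unlisted rooms.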