A bunch of changes

Switched from sqlite3 to DuckDB
Added comments to config template
Added more options to configuration
Added systemd service file
Added migration logging to database
Added command handling for help, room creation, stats, bot info
Improved context handling
Added some config checks
Added auto-detection of bot's Matrix user ID
Added more info to README
Kumi 2023-04-17 20:28:29 +00:00
parent 543f8229d2
commit 60dc6094e8
Signed by: kumi
GPG key ID: ECBCC9082395383F
6 changed files with 459 additions and 98 deletions

.gitignore

@@ -1,4 +1,5 @@
*.db
*.db.wal
config.ini
venv/
*.pyc

README.md

@@ -1,10 +1,10 @@
# GPTbot

GPTbot is a simple bot that uses the [OpenAI Chat Completion API](https://platform.openai.com/docs/guides/chat)
to generate responses to messages in a Matrix room.

It will also save a log of the spent tokens to a DuckDB database
(database.db in the working directory, by default).

## Installation
@@ -12,8 +12,8 @@ Simply clone this repository and install the requirements.
### Requirements

- Python 3.8 or later
- Requirements from `requirements.txt` (install with `pip install -r requirements.txt` in a venv)

### Configuration

@@ -24,12 +24,63 @@ Copy the provided `config.dist.ini` to `config.ini` and edit it to your needs.
The bot can be run with `python -m gptbot`. If required, activate a venv first.

You may want to run the bot in a screen or tmux session, or use a process
manager like systemd. The repository contains a sample systemd service file
(`gptbot.service`) that you can use as a starting point. You will need to
adjust the paths in the file to match your setup, then copy it to
`/etc/systemd/system/gptbot.service`. You can then start the bot with
`systemctl start gptbot` and enable it to start automatically on boot with
`systemctl enable gptbot`.

## Usage
Once it is running, just invite the bot to a room and it will start responding
to messages. If you want to create a new room, you can use the `!gptbot newroom`
command at any time, which will cause the bot to create a new room and invite
you to it. You may also specify a room name, e.g. `!gptbot newroom My new room`.

Note that the bot will currently respond to _all_ messages in the room. So you
shouldn't invite it to a room with other people in it.

It also supports the `!gptbot help` command, which will print a list of available
commands. Messages starting with `!` are treated as commands and will not be
considered for response generation.
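The `!gptbot <command>` syntax described above can be sketched as a small parser. This is a hedged illustration - `parse_command` is a hypothetical helper, not the bot's actual API:

```python
# Hypothetical sketch of how "!gptbot <command> [args...]" could be split up.
# The command names match the README; the function itself is an assumption.

def parse_command(body: str):
    parts = body.split()
    command = parts[1] if len(parts) > 1 else None  # bare "!gptbot" -> no command
    args = parts[2:]                                # e.g. a room name for newroom
    return command, args

parse_command("!gptbot newroom My new room")  # ("newroom", ["My", "new", "room"])
```

An unknown or missing command would then fall through to a help/unknown handler.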
## Troubleshooting

**Help, the bot is not responding!**

First of all, make sure that the bot is actually running. (Okay, that's not
really troubleshooting, but it's a good start.)

If the bot is running, check the logs. The first few lines should contain
"Starting bot...", "Syncing..." and "Bot started". If you don't see these
lines, something went wrong during startup. Fortunately, the logs should
contain more information about what went wrong.

If you need help figuring out what went wrong, feel free to open an issue.

**Help, the bot is flooding the room with responses!**
The bot will respond to _all_ messages in the room, with two exceptions:

- Messages starting with `!` are considered commands and will not be considered
  for response generation - regardless of whether the command is valid for the
  bot or not (e.g. `!help` will not be considered for response generation).
- Messages sent by the bot itself will not be considered for response generation.

There is a good chance that you are seeing the bot responding to its own
messages. First, stop the bot, or it will keep responding to its own messages,
consuming tokens.

Check that the UserID provided in the config file matches the UserID of the bot.
If it doesn't, change the config file and restart the bot. Note that the UserID
is optional, so you can also remove it from the config file altogether and the
bot will try to figure out its own User ID.

If the User ID is correct or not set, something else is going on. In this case,
please check the logs and open an issue if you can't figure out what's going on.
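The two exceptions above boil down to a simple filter; a minimal sketch (the function name and signature are assumptions for illustration, not the bot's code):

```python
# Decide whether a message should be considered for response generation,
# per the two exceptions described above (illustrative sketch).

def considered_for_response(body: str, sender: str, bot_user_id: str) -> bool:
    if body.startswith("!"):
        return False  # command-like, whether or not this bot handles it
    if sender == bot_user_id:
        return False  # the bot's own messages are never used as prompts
    return True

considered_for_response("!help", "@alice:example.com", "@gptbot:example.com")  # False
```

If the bot floods a room, the second check is the one failing, usually because the configured UserID does not match the bot's real user ID.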
## License

This project is licensed under the terms of the MIT license.

config.dist.ini

@@ -1,8 +1,78 @@
# Copy this file to config.ini and replace the values below to match your needs
#
# The values that are not commented have to be set, everything else comes with
# sensible defaults.

[OpenAI]

# The Chat Completion model you want to use.
#
# Unless you are in the GPT-4 beta (if you don't know - you aren't),
# leave this as the default value (gpt-3.5-turbo)
#
# Model = gpt-3.5-turbo

# Your OpenAI API key
#
# Find this in your OpenAI account:
# https://platform.openai.com/account/api-keys
#
APIKey = sk-yoursecretkey
# The maximum amount of input sent to the API
#
# In conjunction with MaxMessages, this determines how much context (= previous
# messages) you can send with your query.
#
# If you set this too high, the responses you receive will become shorter the
# longer the conversation gets.
#
# https://help.openai.com/en/articles/4936856-what-are-tokens-and-how-to-count-them
#
# MaxTokens = 3000

# The maximum number of messages in the room that will be considered as context
#
# By default, the last (up to) 20 messages will be sent as context, in addition
# to the system message and the current query itself.
#
# MaxMessages = 20
[Matrix]

# The URL to your Matrix homeserver
#
Homeserver = https://matrix.local

# An Access Token for the user your bot runs as
#
# See https://www.matrix.org/docs/guides/client-server-api#login
# for information on how to obtain this value
#
AccessToken = syt_yoursynapsetoken

# The Matrix user ID of the bot (@local:domain.tld)
# Only specify this if the bot fails to figure it out by itself
#
# UserID = @gptbot:matrix.local
[GPTBot]

# The default room name used by the !newroom command
# Defaults to GPTBot if not set
#
# DefaultRoomName = GPTBot

# Contents of a special message sent to the GPT API with every request.
# Can be used to give the bot some context about the environment it's running in
#
# SystemMessage = You are a helpful bot.

[Database]

# Settings for the DuckDB database.
# Currently only used to store details on spent tokens per room.
# If not defined, the bot will not store this data.
Path = database.db
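Parsed with Python's stock `configparser`, the commented-out keys above fall back to their documented defaults. A sketch (only the keys shown in this template are assumed):

```python
import configparser

config = configparser.ConfigParser()
config.read_string("""
[OpenAI]
APIKey = sk-yoursecretkey

[Matrix]
Homeserver = https://matrix.local
AccessToken = syt_yoursynapsetoken
""")

# Keys left commented out in the template resolve to their documented defaults
model = config["OpenAI"].get("Model", "gpt-3.5-turbo")
max_tokens = int(config["OpenAI"].get("MaxTokens", 3000))
max_messages = int(config["OpenAI"].get("MaxMessages", 20))

# No [Database] section: token logging stays disabled
db_path = config.get("Database", "Path", fallback=None)
```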

gptbot.py

@@ -1,65 +1,47 @@
import os
import inspect
import logging
import signal

import openai
import asyncio
import markdown2
import tiktoken
import duckdb

from nio import AsyncClient, RoomMessageText, MatrixRoom, Event, InviteEvent
from nio.api import MessageDirection
from nio.responses import RoomMessagesError, SyncResponse, RoomRedactError

from configparser import ConfigParser
from datetime import datetime
from argparse import ArgumentParser

# Globals

DATABASE = False
DEFAULT_ROOM_NAME = "GPTBot"
SYSTEM_MESSAGE = "You are a helpful assistant. "
MAX_TOKENS = 3000
MAX_MESSAGES = 20
DEFAULT_MODEL = "gpt-3.5-turbo"

# Set up Matrix client
MATRIX_CLIENT = None
SYNC_TOKEN = None
def logging(message, log_level="info"):
    caller = inspect.currentframe().f_back.f_code.co_name
    timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S:%f")
    print(f"[{timestamp}] - {caller} - [{log_level.upper()}] {message}")


async def gpt_query(messages, model=DEFAULT_MODEL):
    logging(f"Querying GPT with {len(messages)} messages")
    try:
        response = openai.ChatCompletion.create(
            model=model,
            messages=messages
        )
        result_text = response.choices[0].message['content']
@@ -72,17 +54,18 @@ async def gpt_query(messages):
        return None, 0


async def fetch_last_n_messages(room_id, n=MAX_MESSAGES):
    global SYNC_TOKEN, MATRIX_CLIENT

    messages = []

    logging(
        f"Fetching last {2*n} messages from room {room_id} (starting at {SYNC_TOKEN})...")

    response = await MATRIX_CLIENT.room_messages(
        room_id=room_id,
        start=SYNC_TOKEN,
        limit=2*n,
    )

    if isinstance(response, RoomMessagesError):
@@ -91,17 +74,22 @@ async def fetch_last_n_messages(room_id, n=20):
        return []

    for event in response.chunk:
        if len(messages) >= n:
            break
        if isinstance(event, RoomMessageText):
            if not event.body.startswith("!"):
                messages.append(event)

    logging(f"Found {len(messages)} messages (limit: {n})")

    # Reverse the list so that messages are in chronological order
    return messages[::-1]


def truncate_messages_to_fit_tokens(messages, max_tokens=MAX_TOKENS, model=DEFAULT_MODEL):
    global SYSTEM_MESSAGE

    encoding = tiktoken.encoding_for_model(model)
    total_tokens = 0

    system_message_tokens = len(encoding.encode(SYSTEM_MESSAGE)) + 1
@@ -127,28 +115,19 @@ def truncate_messages_to_fit_tokens(messages, max_tokens=MAX_TOKENS):
    return [truncated_messages[0]] + list(reversed(truncated_messages[1:]))


async def process_query(room: MatrixRoom, event: RoomMessageText):
    global MATRIX_CLIENT, DATABASE, SYSTEM_MESSAGE

    await MATRIX_CLIENT.room_typing(room.room_id, True)

    await MATRIX_CLIENT.room_read_markers(room.room_id, event.event_id)

    last_messages = await fetch_last_n_messages(room.room_id, 20)

    chat_messages = [{"role": "system", "content": SYSTEM_MESSAGE}]

    for message in last_messages:
        role = "assistant" if message.sender == MATRIX_CLIENT.user_id else "user"
        if not message.event_id == event.event_id:
            chat_messages.append({"role": role, "content": message.body})
@@ -161,42 +140,164 @@ async def message_callback(room: MatrixRoom, event: RoomMessageText):
    response, tokens_used = await gpt_query(truncated_messages)

    if response:
        logging(f"Sending response to room {room.room_id}...")

        markdowner = markdown2.Markdown(extras=["fenced-code-blocks"])
        formatted_body = markdowner.convert(response)

        message = await MATRIX_CLIENT.room_send(
            room.room_id, "m.room.message",
            {"msgtype": "m.text", "body": response,
             "format": "org.matrix.custom.html", "formatted_body": formatted_body}
        )

        if DATABASE:
            logging("Logging tokens used...")

            with DATABASE.cursor() as cursor:
                cursor.execute(
                    "INSERT INTO token_usage (message_id, room_id, tokens, timestamp) VALUES (?, ?, ?, ?)",
                    (message.event_id, room.room_id, tokens_used, datetime.now()))

            DATABASE.commit()

    else:
        # Send a notice to the room if there was an error
        logging("Error during GPT API call - sending notice to room")
        await MATRIX_CLIENT.room_send(
            room.room_id, "m.room.message", {
                "msgtype": "m.notice", "body": "Sorry, I'm having trouble connecting to the GPT API right now. Please try again later."}
        )
        print("No response from GPT API")

    await MATRIX_CLIENT.room_typing(room.room_id, False)
async def command_newroom(room: MatrixRoom, event: RoomMessageText):
    room_name = " ".join(event.body.split()[2:]) or DEFAULT_ROOM_NAME

    logging("Creating new room...")
    new_room = await MATRIX_CLIENT.room_create(name=room_name)

    logging(f"Inviting {event.sender} to new room...")
    await MATRIX_CLIENT.room_invite(new_room.room_id, event.sender)
    await MATRIX_CLIENT.room_put_state(
        new_room.room_id, "m.room.power_levels", {"users": {event.sender: 100}})

    await MATRIX_CLIENT.room_send(
        new_room.room_id, "m.room.message", {"msgtype": "m.text", "body": "Welcome to the new room!"})


async def command_help(room: MatrixRoom, event: RoomMessageText):
    await MATRIX_CLIENT.room_send(
        room.room_id, "m.room.message", {"msgtype": "m.notice",
                                         "body": """Available commands:

!gptbot help - Show this message
!gptbot newroom <room name> - Create a new room and invite yourself to it
!gptbot stats - Show usage statistics for this room
!gptbot botinfo - Show information about the bot
"""}
    )


async def command_stats(room: MatrixRoom, event: RoomMessageText):
    global DATABASE, MATRIX_CLIENT

    logging("Showing stats...")

    if not DATABASE:
        logging("No database connection - cannot show stats")
        return

    with DATABASE.cursor() as cursor:
        cursor.execute(
            "SELECT SUM(tokens) FROM token_usage WHERE room_id = ?", (room.room_id,))
        total_tokens = cursor.fetchone()[0] or 0

    await MATRIX_CLIENT.room_send(
        room.room_id, "m.room.message", {"msgtype": "m.notice",
                                         "body": f"Total tokens used: {total_tokens}"}
    )


async def command_unknown(room: MatrixRoom, event: RoomMessageText):
    global MATRIX_CLIENT

    logging("Unknown command")

    await MATRIX_CLIENT.room_send(
        room.room_id, "m.room.message", {"msgtype": "m.notice",
                                         "body": "Unknown command - try !gptbot help"}
    )


async def command_botinfo(room: MatrixRoom, event: RoomMessageText):
    global MATRIX_CLIENT

    logging("Showing bot info...")

    await MATRIX_CLIENT.room_send(
        room.room_id, "m.room.message", {"msgtype": "m.notice",
                                         "body": f"""GPT Info:

Model: {DEFAULT_MODEL}
Maximum context tokens: {MAX_TOKENS}
Maximum context messages: {MAX_MESSAGES}
System message: {SYSTEM_MESSAGE}

Room info:

Bot user ID: {MATRIX_CLIENT.user_id}
Current room ID: {room.room_id}

For usage statistics, run !gptbot stats
"""})


COMMANDS = {
    "help": command_help,
    "newroom": command_newroom,
    "stats": command_stats,
    "botinfo": command_botinfo
}


async def process_command(room: MatrixRoom, event: RoomMessageText):
    global COMMANDS

    logging(
        f"Received command {event.body} from {event.sender} in room {room.room_id}")

    command = event.body.split()[1] if event.body.split()[1:] else None
    await COMMANDS.get(command, command_unknown)(room, event)


async def message_callback(room: MatrixRoom, event: RoomMessageText):
    global DEFAULT_ROOM_NAME, MATRIX_CLIENT, SYSTEM_MESSAGE, DATABASE, MAX_TOKENS

    logging(f"Received message from {event.sender} in room {room.room_id}")

    if event.sender == MATRIX_CLIENT.user_id:
        logging("Message is from bot itself - ignoring")

    elif event.body.startswith("!gptbot"):
        await process_command(room, event)

    elif event.body.startswith("!"):
        logging("Might be a command, but not for this bot - ignoring")

    else:
        await process_query(room, event)
async def room_invite_callback(room: MatrixRoom, event):
    global MATRIX_CLIENT

    logging(f"Received invite to room {room.room_id} - joining...")

    await MATRIX_CLIENT.join(room.room_id)
    await MATRIX_CLIENT.room_send(
        room.room_id,
        "m.room.message",
        {"msgtype": "m.text",
@@ -205,13 +306,15 @@ async def room_invite_callback(room: MatrixRoom, event):
async def accept_pending_invites():
    global MATRIX_CLIENT

    logging("Accepting pending invites...")

    for room_id in list(MATRIX_CLIENT.invited_rooms.keys()):
        logging(f"Joining room {room_id}...")

        await MATRIX_CLIENT.join(room_id)
        await MATRIX_CLIENT.room_send(
            room_id,
            "m.room.message",
            {"msgtype": "m.text",
@@ -221,37 +324,157 @@ async def accept_pending_invites():
async def sync_cb(response):
    global SYNC_TOKEN

    logging(
        f"Sync response received (next batch: {response.next_batch})", "debug")

    SYNC_TOKEN = response.next_batch


async def main():
    global MATRIX_CLIENT

    if not MATRIX_CLIENT.user_id:
        whoami = await MATRIX_CLIENT.whoami()
        MATRIX_CLIENT.user_id = whoami.user_id

    try:
        assert MATRIX_CLIENT.user_id
    except AssertionError:
        logging(
            "Failed to get user ID - check your access token or try setting it manually", "critical")
        await MATRIX_CLIENT.close()
        return

    logging("Starting bot...")

    MATRIX_CLIENT.add_response_callback(sync_cb, SyncResponse)

    logging("Syncing...")

    await MATRIX_CLIENT.sync(timeout=30000)

    MATRIX_CLIENT.add_event_callback(message_callback, RoomMessageText)
    MATRIX_CLIENT.add_event_callback(room_invite_callback, InviteEvent)

    await accept_pending_invites()  # Accept pending invites

    logging("Bot started")

    try:
        # Continue syncing events
        await MATRIX_CLIENT.sync_forever(timeout=30000)
    finally:
        logging("Syncing one last time...")
        await MATRIX_CLIENT.sync(timeout=30000)
        await MATRIX_CLIENT.close()  # Properly close the aiohttp client session
        logging("Bot stopped")
def initialize_database(path):
    global DATABASE

    logging("Initializing database...")
    DATABASE = duckdb.connect(path)

    with DATABASE.cursor() as cursor:
        # Get the latest migration ID if the migrations table exists
        try:
            cursor.execute(
                """
                SELECT MAX(id) FROM migrations
                """
            )
            latest_migration = int(cursor.fetchone()[0])
        except:
            latest_migration = 0

        # Version 1
        if latest_migration < 1:
            cursor.execute(
                """
                CREATE TABLE IF NOT EXISTS token_usage (
                    message_id TEXT PRIMARY KEY,
                    room_id TEXT NOT NULL,
                    tokens INTEGER NOT NULL,
                    timestamp TIMESTAMP NOT NULL
                )
                """
            )

            cursor.execute(
                """
                CREATE TABLE IF NOT EXISTS migrations (
                    id INTEGER NOT NULL,
                    timestamp TIMESTAMP NOT NULL
                )
                """
            )

            cursor.execute(
                "INSERT INTO migrations (id, timestamp) VALUES (1, ?)",
                (datetime.now(),)
            )

        DATABASE.commit()
if __name__ == "__main__":
    # Parse command line arguments
    parser = ArgumentParser()
    parser.add_argument(
        "--config", help="Path to config file (default: config.ini in working directory)", default="config.ini")
    args = parser.parse_args()

    # Read config file
    config = ConfigParser()
    config.read(args.config)

    # Set up Matrix client
    try:
        assert "Matrix" in config
        assert "Homeserver" in config["Matrix"]
        assert "AccessToken" in config["Matrix"]
    except:
        logging("Matrix config not found or incomplete", "critical")
        exit(1)

    MATRIX_CLIENT = AsyncClient(config["Matrix"]["Homeserver"])

    MATRIX_CLIENT.access_token = config["Matrix"]["AccessToken"]
    MATRIX_CLIENT.user_id = config["Matrix"].get("UserID")

    # Set up GPT API
    try:
        assert "OpenAI" in config
        assert "APIKey" in config["OpenAI"]
    except:
        logging("OpenAI config not found or incomplete", "critical")
        exit(1)

    openai.api_key = config["OpenAI"]["APIKey"]

    if "Model" in config["OpenAI"]:
        DEFAULT_MODEL = config["OpenAI"]["Model"]

    if "MaxTokens" in config["OpenAI"]:
        MAX_TOKENS = int(config["OpenAI"]["MaxTokens"])

    if "MaxMessages" in config["OpenAI"]:
        MAX_MESSAGES = int(config["OpenAI"]["MaxMessages"])

    # Set up database
    if "Database" in config and config["Database"].get("Path"):
        initialize_database(config["Database"]["Path"])

    # Start bot loop
    # signal.SIGTERM is a signal number, not an exception class, so it cannot
    # be caught with "except"; install a handler that raises SystemExit so the
    # cleanup below also runs on termination.
    def sigterm_handler(signum, frame):
        raise SystemExit(0)

    signal.signal(signal.SIGTERM, sigterm_handler)

    try:
        asyncio.run(main())
    except KeyboardInterrupt:
        logging("Received KeyboardInterrupt - exiting...")
    except SystemExit:
        logging("Received SIGTERM - exiting...")
    finally:
        if DATABASE:
            DATABASE.close()

gptbot.service

@@ -0,0 +1,15 @@
[Unit]
Description=GPTbot - A GPT bot for Matrix
Requires=network.target

[Service]
Type=simple
User=gptbot
Group=gptbot
WorkingDirectory=/opt/gptbot
ExecStart=/opt/gptbot/venv/bin/python3 -m gptbot
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target

requirements.txt

@@ -1,4 +1,5 @@
openai
matrix-nio[e2e]
markdown2[all]
tiktoken
duckdb