This update introduces the ability for the bot to use a Matrix UserID and password for authentication, in addition to the existing Access Token method. Upon the first run with UserID and password, the bot automatically converts these credentials into an Access Token, updates the configuration with this token, and removes the password for security purposes. This enhancement simplifies the initial setup process for new users by directly utilizing Matrix login credentials, aligning with common user authentication workflows and enhancing security by not storing passwords long-term.
Refactored the bot initialization process in `GPTBot.from_config` to support dynamic login method selection based on the provided credentials, and implemented automatic configuration updates to store the newly obtained Access Token and remove the password.
Minor adjustments include formatting and comment clarification for better readability and maintenance.
This change addresses the need for a more straightforward and secure authentication process for bot deployment and user experience improvement.
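A minimal sketch of what this credential exchange might look like with matrix-nio, assuming a `config.ini` with a `[Matrix]` section and `Homeserver`/`UserID`/`Password`/`AccessToken` keys (the actual logic lives in `GPTBot.from_config`):

```python
from configparser import ConfigParser

from nio import AsyncClient, LoginResponse

async def ensure_access_token(config: ConfigParser, path: str = "config.ini"):
    matrix = config["Matrix"]
    if "AccessToken" not in matrix and "Password" in matrix:
        client = AsyncClient(matrix["Homeserver"], matrix["UserID"])
        response = await client.login(matrix["Password"])
        if isinstance(response, LoginResponse):
            # Persist the token and drop the password, as described above
            matrix["AccessToken"] = response.access_token
            del matrix["Password"]
            with open(path, "w") as config_file:
                config.write(config_file)
        await client.close()
```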
Removed detailed Docker setup instructions, opting to simplify the Docker usage section by retaining only the Docker Compose method. This change aims to declutter the README and encourage a more standardized setup process for users, reducing potential confusion and maintaining focus on the primary installation method via Docker Compose. The update includes a minor adjustment to the database initialization step, ensuring users are guided to prepare their environment fully before running the bot. This revision makes the setup process more approachable and efficient, especially for newcomers.
By directing users to the `Running` section for config file setup instructions, we maintain consistency and avoid duplicative content, keeping the README streamlined and more manageable.
Transitioned from the deprecated `pkg_resources` to `importlib.metadata` for package version retrieval, improving performance and future compatibility.
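The replacement is essentially a drop-in change; a sketch assuming the distribution name `matrix-gptbot`:

```python
# Before (deprecated, and the import itself is slow):
# import pkg_resources
# __version__ = pkg_resources.get_distribution("matrix-gptbot").version

# After (standard library since Python 3.8):
from importlib.metadata import version

__version__ = version("matrix-gptbot")
```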
Added a volume to the `matrix-gptbot` service configuration in `docker-compose.yml`, mounting the local `database.db` file into the container. This change enables persistent storage for the bot's data, ensuring that data is not lost when the container is restarted or redeployed. It enhances data management and allows for more robust operation of the bot service by leveraging persistence.
This development is crucial for scenarios requiring data retention across bot updates and system maintenance activities.
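The relevant `docker-compose.yml` fragment might look like this (the in-container path is an assumption):

```yaml
services:
  matrix-gptbot:
    volumes:
      # Mount the local SQLite database so data survives
      # container restarts and redeployments
      - ./database.db:/app/database.db
```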
- Included the `ffmpeg` package in the Docker environment to support multimedia content processing.
- Added `trackingmore-api-tool` as a dependency to expand the bot's functionality with tracking capabilities.
- Adjusted the `all` dependencies list in `pyproject.toml` to include the `trackingmore` module, indicating a broader feature set for the application.
- Updated the bot class to prepare for integrating `TrackingMore` alongside existing services like `OpenAI` and `WolframAlpha`, highlighting an intention to make such integrations configurable in the future.
This enhancement enables the bot to interact with multimedia content more effectively and introduces package tracking features, laying groundwork for configurable service integrations.
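The `pyproject.toml` changes described above might be laid out roughly as follows (extras names are illustrative, not the exact file contents):

```toml
[project.optional-dependencies]
trackingmore = ["trackingmore-api-tool"]
all = [
  # existing extras such as openai and wolframalpha, plus:
  "trackingmore-api-tool",
]
```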
Moved from building the GPT bot Docker container on the fly to using a pre-built image, enhancing the setup's efficiency and reducing build times for deployments. Adjusted the server's exposed port to resolve conflicts and standardize deployment configurations. Additionally, locked in the `future` package version to ensure compatibility with legacy Python code, mitigating potential future incompatibility issues.
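In `docker-compose.yml`, this amounts to swapping the `build` directive for an `image` reference, roughly (tag is illustrative):

```yaml
services:
  matrix-gptbot:
    # Pull the pre-built image instead of building on the fly
    image: kumitterer/matrix-gptbot:latest
    # build: .   # previous approach
```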
Removed redundant internal Docker CI/CD workflow and unified naming for external Docker workflows to improve clarity and maintainability. Introduced a new workflow for tagging pushes, aligning deployments closer with best practices for version management and distribution. This change simplifies the CI/CD pipeline, reducing duplication and potential confusion, while ensuring that Docker images are built and pushed efficiently for both internal developments and tagged releases.
- Docker CI/CD internals were removed, focusing efforts on standardized workflows.
- Docker CI/CD workflow names were harmonized to reinforce their universal role across projects.
- A new tagging workflow supports better version control and facilitates automatic releases to Docker Hub, promoting consistency and reliability in image distribution.
This adjustment lays the groundwork for more streamlined and robust CI/CD operations, providing a solid framework for future enhancements and scalability.
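A sketch of the trigger for the new tagging workflow (the exact filter is an assumption):

```yaml
on:
  push:
    tags:
      - "*"  # build and push an image for every pushed tag
```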
Introduced a new CI/CD workflow specifically for building and pushing Docker images to the Forgejo Docker Registry, triggered by pushes to the 'docker' branch. This addition aims to streamline Docker image management and deployment within Forgejo's infrastructure, ensuring a more isolated and secure handling of images. Concurrently, the existing workflow for Docker Hub has been refined to clarify its purpose: it is now explicitly focused on pushing to Docker Hub, removing the overlap with Forgejo Docker Registry operations. This delineation enhances the clarity of our CI/CD processes and ensures a cleaner separation of concerns between public and internal image repositories.
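In outline, the Forgejo-registry workflow plausibly looks like this (registry host per the image paths used elsewhere in this log; secret names and step details are assumptions):

```yaml
on:
  push:
    branches:
      - docker

jobs:
  build-and-push:
    steps:
      - uses: actions/checkout@v4
      - run: |
          docker login git.private.coffee -u "$FORGEJO_USER" -p "$FORGEJO_TOKEN"
          docker build -t git.private.coffee/privatecoffee/matrix-gptbot:latest .
          docker push git.private.coffee/privatecoffee/matrix-gptbot:latest
        env:
          FORGEJO_USER: ${{ secrets.FORGEJO_USER }}
          FORGEJO_TOKEN: ${{ secrets.FORGEJO_TOKEN }}
```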
Removed a duplicate and unnecessary line specifying the `latest` tag for the Docker image in the workflow. This change simplifies the tag specification, avoids redundancy, and ensures a clear declaration of both the `latest` and SHA-specific tags for our Docker images in CI/CD pipelines.
Migrated from direct GitHub context references to environment variables for GitHub SHA, actor, and token within the Docker build and push actions. This enhances portability and consistency across different execution environments, ensuring better compatibility and security when interfacing with GitHub and Forgejo Docker registries.
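In practice this means using the environment variables the runner provides (or maps) in `run` steps instead of inline context expressions, e.g.:

```yaml
- name: Log in and build
  # $GITHUB_SHA and $GITHUB_ACTOR are provided by the runner; the token
  # is mapped from the workflow context (exact mapping is an assumption)
  env:
    GITHUB_TOKEN: ${{ github.token }}
  run: |
    docker login git.private.coffee -u $GITHUB_ACTOR -p $GITHUB_TOKEN
    docker build -t kumitterer/matrix-gptbot:$GITHUB_SHA .
```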
Enhanced Docker build workflows in `.forgejo/workflows/docker-latest.yml` to include dynamic tagging based on the GITHUB_SHA, alongside the existing 'latest' tag for both the kumitterer/matrix-gptbot and git.private.coffee/privatecoffee/matrix-gptbot images. This change allows for more precise versioning and traceability of images, facilitating rollback and specific version deployment. Also standardized authentication token variables for Docker login to the Forgejo Docker Registry, improving readability and consistency in CI/CD configurations.
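The dual tagging might look like the following build step (sketch):

```yaml
- name: Build with latest and SHA-specific tags
  run: |
    docker build \
      -t kumitterer/matrix-gptbot:latest \
      -t kumitterer/matrix-gptbot:$GITHUB_SHA \
      -t git.private.coffee/privatecoffee/matrix-gptbot:latest \
      -t git.private.coffee/privatecoffee/matrix-gptbot:$GITHUB_SHA \
      .
```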
Enhanced the CI pipeline for Docker images by supporting an additional push to the Forgejo Docker Registry alongside the existing push to Docker Hub. This change allows for better integration with private infrastructures and provides an alternative for users and systems that prefer or require images to be stored in a more controlled or private registry. It involves logging into both Docker Hub and Forgejo with respective credentials and pushing the built images to both, ensuring broader availability and redundancy of our Docker images.
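In outline, the pipeline logs into and pushes to both registries (credential variable names are assumptions):

```yaml
- name: Log in to both registries
  run: |
    docker login -u "$DOCKERHUB_USER" -p "$DOCKERHUB_TOKEN"
    docker login git.private.coffee -u "$FORGEJO_USER" -p "$FORGEJO_TOKEN"
- name: Push the built image to both registries
  run: |
    docker push kumitterer/matrix-gptbot:latest
    docker push git.private.coffee/privatecoffee/matrix-gptbot:latest
```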
Moved the installation of build-essential and libpython3-dev from the Docker workflow to the Dockerfile itself. This change optimizes the Docker setup process, ensuring that all necessary dependencies are encapsulated within the Docker build context. It simplifies the CI workflow by removing redundant steps and centralizes dependency management, making the build process more straightforward and maintainable.
This adjustment aids in achieving a cleaner division between CI setup and application-specific build requirements, potentially improving build times and reducing complexity for future updates or dependency changes.
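The moved dependencies end up as a `RUN` instruction in the Dockerfile, roughly (base image is an assumption):

```dockerfile
FROM python:3.11-slim
# Build dependencies now installed here instead of in the CI workflow
RUN apt-get update && \
    apt-get install -y build-essential libpython3-dev && \
    rm -rf /var/lib/apt/lists/*
```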
Updated the docker-latest workflow to install additional critical build dependencies including build-essential and libpython3-dev, alongside docker.io. This enhancement is geared towards supporting a wider range of development scenarios and facilitating more complex build requirements directly within the CI pipeline.
Updated the Docker CI/CD pipeline in the `.forgejo/workflows/docker-latest.yml` to support better integration and efficiency. Key enhancements include setting a container environment with Node 20 on Debian Bookworm for consistency across builds, and installing Docker directly within the runner to streamline the process. This refinement simplifies the setup steps, reduces potential for errors, and possibly decreases pipeline execution time. These changes ensure that our Docker images are built and pushed in a more reliable and faster environment.
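The relevant workflow fragment (sketch):

```yaml
jobs:
  build-and-push:
    # Consistent build environment: Node 20 on Debian Bookworm
    container: node:20-bookworm
    steps:
      - name: Install Docker inside the runner container
        run: |
          apt-get update
          apt-get install -y docker.io
```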
Removed the specific runner designation from the Docker workflow to allow for dynamic runner selection. This change aims to increase flexibility and compatibility across different CI environments. It reduces the dependency on a single OS version, potentially leading to better resource availability and efficiency in workflow execution.
Introduced a new Docker CI/CD workflow to automatically build and push images to Docker Hub on pushes to the 'docker' branch. This automation ensures that the latest version of the bot is readily available for deployment, facilitating easier distribution and deployment for users.
The README.md has been updated to improve clarity around installation methods, emphasizing PyPI as the recommended installation method while expanding on Docker usage. It now includes detailed instructions for Docker Hub images, local image building, and Docker Compose deployment, catering to a broader range of users and deployment scenarios. This update aims to make the bot more accessible and manageable by providing clear, step-by-step guidance for different deployment strategies.
Related to these changes, the documentation has been restructured to better organize information related to configuration and running the bot, ensuring users have a smoother experience setting up and deploying it in their environment.
These changes reflect our commitment to enhancing user experience and streamlining deployment processes, making it easier for users to adopt and maintain the matrix-gptbot in various environments.
Changed the volume mapping for the GPTBot service in `docker-compose.yml` from `settings.ini` to `config.ini`. This modification aligns the container configuration with the new application configuration file naming convention, facilitating easier configuration management and clarity for development and deployment processes.
This change is essential for maintaining consistency across our documentation and deployment scripts, ensuring that all references to configuration files are accurate and up to date.
Redesigned the Docker setup to enhance project structure and configuration management. Changes include a more organized directory structure within the Docker container, explicitly placing source code, project metadata, and licenses in the `/app` directory for better clarity and management. Additionally, integrated `pantalaimon` as a dependency service in `docker-compose.yml`, enabling secure communication with Matrix services by automatically managing settings and configurations through mounted files. This setup simplifies development environment setup and streamlines deployments.
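A sketch of the `pantalaimon` integration in `docker-compose.yml` (image and mount paths are assumptions):

```yaml
services:
  matrix-gptbot:
    # The bot reaches Matrix through pantalaimon, which transparently
    # handles end-to-end encryption
    depends_on:
      - pantalaimon
  pantalaimon:
    image: matrixdotorg/pantalaimon
    volumes:
      - ./pantalaimon.conf:/data/pantalaimon.conf
```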
Introduced Dockerfile and docker-compose.yml to encapsulate GPTBot into a Docker container. This simplifies deployment and ensures consistent environments across development and production setups. The Dockerfile outlines the necessary steps to build the image, including setting up the working directory, copying the current directory into the container, installing all dependencies, and defining the command to run the bot. The docker-compose.yml file further streamlines the deployment process by specifying service configuration, leveraging Docker Compose version 3.8 for its extended feature set and compatibility.
By containerizing GPTBot, we enhance portability, reduce set-up times for new contributors, and minimize "works on my machine" issues, fostering a more robust development lifecycle.
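A minimal Dockerfile along the lines described (base image and entry point are assumptions):

```dockerfile
FROM python:3.11-slim
# Set up the working directory and copy the project in
WORKDIR /app
COPY . .
# Install the bot and all of its dependencies
RUN pip install .
# Run the bot
CMD ["python", "-m", "gptbot"]
```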
Extended the README to include a new section on bot configuration setup, emphasizing the necessity of a config.ini file for operation. This update clarifies the setup process for new users, ensuring they understand the requirement of configuring the bot before use. Additionally, outlined the repository policy regarding the use of the `main` branch for development and the process for contributing through feature branches and pull requests, aiming to streamline contribution workflows and maintain code quality.
The formatting improvements across the README enhance readability and ensure consistency in documentation presentation.
Removed an extraneous log statement that recorded the first message content in the OpenAI class. This change streamlines logging by eliminating unnecessary clutter, improving log readability and potentially reducing I/O load on the logging system. It also helps keep the codebase clean and efficient, especially in production environments, where excessive logging inflates log sizes and makes troubleshooting more challenging.
- Initialized preparations for the unreleased 0.3.9 version in the changelog.
- Updated copyright information to include 2024 and added Private.coffee Team alongside Kumi Mitterer to reflect the collaborative nature of the project going forward.
- Incremented the project version to 0.3.9.dev0 in pyproject.toml to align with upcoming development efforts.
- Modified all references from Kumi's personal repo to the Private.coffee Team's repo in README.md, LICENSE, and pyproject.toml, ensuring future contributions and issues are directed to the correct repository. This change facilitates a broader collaboration platform and acknowledges the team's growing involvement in the project's development.
These updates are critical for the upcoming development phase and for accurately representing the collaborative efforts behind the project.
Updated the CHANGELOG.md to document enhancements in versions 0.3.7 and 0.3.8, including updates to URLs in the pyproject.toml and migration of the build pipeline to Forgejo Actions. These changes aim to improve project dependency management and streamline the CI/CD process, ensuring a more efficient development workflow.
Bumped project version to 0.3.8 for the next release cycle. Updated Homepage and Bug Tracker URLs to reflect the new hosting location, aiming for improved accessibility and collaboration. Additionally, introduced a Source Code URL for direct access to the repository, facilitating developers' engagement and contributions.
Simplified the publishing to PyPI steps in the release workflow by consolidating the Python environment setup, tool installation, package building, and publishing into a single job. This change makes the workflow more efficient and reduces the number of steps required to publish a package. It ensures that the environment setup, dependencies installation, package building, and publishing are done in one go, which can help in minimizing potential issues arising from multiple, separate steps.
This approach leverages the existing Python and PyPI tools more effectively and could potentially shorten the release cycle time. It also makes the workflow file cleaner and easier to maintain.
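The consolidated job plausibly reduces to a single sequence like this (tool choice and secret name are assumptions):

```yaml
jobs:
  publish:
    steps:
      - uses: actions/checkout@v4
      - name: Build and publish in one job
        run: |
          python3 -m venv venv
          . venv/bin/activate
          pip install build twine
          python -m build .
          twine upload dist/*
        env:
          TWINE_USERNAME: __token__
          TWINE_PASSWORD: ${{ secrets.PYPI_TOKEN }}
```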
Switched the virtual environment activation command to be compliant with POSIX shell syntax. The previous `source` command is replaced with `.`, making the script more broadly compatible with different shell environments without requiring bash specifically. This change ensures greater portability and compatibility of the release workflow across diverse CI/CD execution environments.
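The one-line change:

```sh
# bash-specific:
source venv/bin/activate
# POSIX-compliant equivalent:
. venv/bin/activate
```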
Updated the GitHub Actions workflow in `.forgejo/workflows/release.yml` to explicitly use `python3` instead of `python` for both version checking and virtual environment setup. This change ensures compatibility and clarity across environments where `python` might still point to Python 2.x, preventing potential conflicts and eliminating ambiguity. The adjustment aligns with modern best practices, acknowledging the widespread transition to Python 3 and its tooling.
Updated the CI/CD pipeline in `.forgejo/workflows/release.yml` to utilize the `node:20-bookworm` image instead of `python:3.10`. This necessitated adding steps for installing Python dependencies, given the switch to a Node.js-based image. This change accommodates projects that require both Node.js and Python environments for their setup and publishing processes, ensuring better compatibility and flexibility in handling mixed-technology stacks.
This update enhances our pipeline's capability to manage dependencies more efficiently across different technologies, catering to the evolving needs of our projects.
Removed the entire `.gitlab-ci.yml`, discontinuing the GitLab CI/CD process. This change reflects a shift toward consolidating our CI/CD workflows, potentially to another platform or to streamline our current processes. It's essential to assess the impacts on project deployment and distribution, especially regarding PyPI publishing previously handled by this GitLab CI configuration.
Introduced a new CI/CD GitHub Actions workflow for automated testing and publishing to PyPI, targeting Python 3.10 containers. This setup ensures code is tested before release and automates the distribution process to PyPI whenever a new tag is pushed. Additionally, updated README.md to include a support badge from private.coffee, encouraging community support and visibility.
This enhancement simplifies the release process, improves code quality assurance, and fosters community engagement by providing a direct way to support the project.
Upgraded project version to 0.3.6 to introduce a critical fix for message type detection failing on certain messages. This version also amends the package directory structure for improved organization, moving from `src/gptbot` to just `gptbot`. Additionally, updated the CHANGELOG to reflect this fix and organizational change, ensuring that it stays current with the project's progress.
- Fixes message type detection issue
Improve event type determination in the message fetching logic by adding try-except blocks to handle AttributeError and KeyError exceptions gracefully. This change allows the bot to continue processing other events if it encounters an event without a recognizable type or msgtype, avoiding premature termination and enhancing its ability to process diverse event streams more robustly. This approach also includes logging unprocessable events for debugging purposes, providing clearer insights into event handling anomalies.
Fixes #5
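A sketch of the hardened loop, assuming matrix-nio event objects with a `.source` dict (function and variable names are illustrative):

```python
import logging

logger = logging.getLogger(__name__)

def determine_event_types(events):
    """Yield (event, type, msgtype), skipping unprocessable events."""
    for event in events:  # events as returned from the room history fetch
        try:
            event_type = event.source["type"]
            msgtype = event.source["content"].get("msgtype")
        except (AttributeError, KeyError):
            # Log the anomaly for debugging and keep going, instead of
            # letting one odd event terminate the whole fetch
            logger.debug("Could not determine event type: %s", event)
            continue
        yield event, event_type, msgtype
```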
Updated the repository URLs for the matrix-gptbot installation
instructions and cloning directions in the README. The change points to
a new hosting location on `git.private.coffee` from the previous
`kumig.it`. This adjustment ensures users access the most current
version of the code and contributes to a streamlined setup process by
guiding them directly to the updated repository.
This update is crucial for users looking to install the bot or
contribute to its development, ensuring they're interfacing with the
latest version and accurate information.
Implement conditional avatar updates
Introduces a new method `get_state_event` to asynchronously retrieve
state events for a given room and event type, enhancing the bot's
ability to fetch specific room states before performing actions. This
functionality is leveraged to conditionally update room avatars only if
they are not already set, reducing unnecessary updates and improving
efficiency. Additionally, the commit includes minor formatting
adjustments for better code readability.
Refactoring the avatar updating process to assess the current state
before action prevents redundant network calls and aligns with optimal
resource usage practices, contributing to a smoother operation and
potentially reducing the workload on the server.
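A hedged sketch of the described flow, built on matrix-nio's state APIs (method placement and helper names are assumptions):

```python
from nio import AsyncClient

async def get_state_event(client: AsyncClient, room_id: str, event_type: str):
    """Return the first state event of the given type, or None."""
    state = await client.room_get_state(room_id)
    for event in state.events:
        if event.get("type") == event_type:
            return event
    return None

async def update_room_avatar(client: AsyncClient, room_id: str, avatar_uri: str):
    # Only set an avatar if the room does not already have one,
    # avoiding a redundant state update and network call
    if not await get_state_event(client, room_id, "m.room.avatar"):
        await client.room_put_state(room_id, "m.room.avatar", {"url": avatar_uri})
```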
Enhanced event processing in the bot's message retrieval logic to
improve message relevance and accuracy. Changes include accepting all
'gptbot'-prefixed events, refining handling of the 'ignoreolder' command
to require an exact match rather than a prefix match, and passing through
'custom' commands that start with '!'. The default behavior now excludes
notices unless explicitly included. This update allows for more precise
command interactions and reduces clutter from irrelevant notices.
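An illustrative filter matching these rules (function and command names are
assumptions, not the actual code):

```python
def is_relevant_event(event, include_notices: bool = False) -> bool:
    source = event.source
    if source["type"].startswith("gptbot"):
        return True  # accept all gptbot-prefixed events
    content = source.get("content", {})
    body = content.get("body", "")
    if body == "!gptbot ignoreolder":
        return True  # exact match only, no longer a prefix match
    if body.startswith("!"):
        return True  # pass custom commands through
    if content.get("msgtype") == "m.notice" and not include_notices:
        return False  # notices are excluded unless explicitly included
    return True
```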
Standardize the passing of 'messages' argument across various calls to
generate_chat_response method to ensure consistency and prevent
potential bugs in the GPT bot's response generation. The 'model'
parameter in one instance has been corrected to 'original_model' for
proper context loading. These changes improve code clarity and maintain
the intended message flow within the bot's conversation handling.
Refactored initialization of OpenAI APIs to correct a redundancy and
enhance clarity. Improved content extraction logic for robust handling
of various message types. Enhanced logging and messaging by including
user and room context, facilitating better debugging and user
interaction. Extended `send_message` to support custom message types,
allowing for richer interaction within the chat ecosystem. Updated
hardcoded chat models to leverage newer versions for potentially more
accurate tool overrides. Fixed async method call in recursion handling
to ensure proper response generation. Finally, increased message history
retrieval limit based on the `max_messages` attribute for more effective
conversation context.
Resolves issues with message context and enhances user feedback during
operations.
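For instance, the extended `send_message` might accept a custom message type
like this (the signature is a hypothetical reconstruction, not the actual
method):

```python
async def send_message(self, room, message: str, msgtype: str = "m.text"):
    # msgtype lets callers send notices, emotes, etc.,
    # rather than only plain m.text messages
    await self.matrix_client.room_send(
        room.room_id,
        "m.room.message",
        {"msgtype": msgtype, "body": message},
    )
```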
Removed the '-dev' suffix from the project version indicating the transition from a development state to the official release of version 0.3.3. This version bump aligns with the completion of features and fixes slated for this iteration.
Update the README to specify that issues with file attachments primarily occur in non-encrypted rooms when the same user operates the bot in both encrypted and non-encrypted rooms. This detail aims to inform users of potential pitfalls more precisely when setting up the bot with end-to-end encryption enabled.
Improved the error handling in the OpenAI class to prevent infinite recursion issues by retaining the original chat model during recursive calls. Enhanced logging within the recursion depth check for better debugging and traceability. Ensured consistency in chat responses by passing the initial model reference throughout the entire call stack. This is crucial when fallbacks due to errors or tool usage occur.
Refactored code for clarity and readability, ensuring that any recursion retains the original model and tool parameters. Additionally, proper logging and condition checks now standardize the flow of execution, preventing unintended modifications to the model's state that could lead to incorrect bot behavior.
Introduced the ability to specify and retrieve different OpenAI models on a per-room basis, thereby allowing enhanced customization of the bot's response behavior according to the preferences for each room. Cleaned up code formatting across the bot implementation files for improved readability and maintainability. Additional logic now checks for model overrides when generating responses, ensuring the correct model is used as configured.
Refactors include streamlined database and API initializations and a refined method for processing message formatting to accommodate images, texts, and system messages consistently. This change differentiates default behavior from room-specific configurations, catering to diverse user needs without compromising on default settings.
Added a safety check to prevent infinite recursion within the response generation function. When `use_tools` is active, the code now inspects the call stack and halts further recursion once a set depth is exceeded. This makes the code robust against potential infinite loops that could block or crash the service. A default threshold is set, with a TODO for revisiting the hard-coded limit, and the recursion detection logs the occurrence for easier debugging and maintenance.
Note: Recursion limit handling may require future adjustments to the `allow_override` parameter based on real-world feedback or testing.
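A sketch of the stack-inspection guard (the threshold, function names, and integration point are assumptions, per the TODO above):

```python
import inspect
import logging

logger = logging.getLogger(__name__)

MAX_DEPTH = 3  # TODO: hard-coded limit, to be revisited

def recursion_depth(function_name: str) -> int:
    """Count stack frames belonging to the given function."""
    return sum(1 for frame in inspect.stack() if frame.function == function_name)

def tools_allowed() -> bool:
    # Called inside the response generation before recursing with use_tools;
    # returns False once the guard trips, forcing a plain response instead
    if recursion_depth("generate_chat_response") > MAX_DEPTH:
        logger.warning("Recursion depth exceeded, aborting tool use")
        return False
    return True
```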