Compare commits

..

159 commits

Author SHA1 Message Date
c893fb33bd
chore(release): bump version to 0.3.21
All checks were successful
Docker CI/CD / Docker Build and Push to Docker Hub (push) Successful in 14m24s
Python Package CI/CD / Setup and Test (push) Successful in 3m26s
Python Package CI/CD / Publish to PyPI (push) Successful in 1m54s
Incremented the project version from 0.3.20 to 0.3.21 to mark the release of new updates and bug fixes. This patch-level version bump indicates backward-compatible changes or improvements.
2024-11-07 07:40:18 +01:00
1b37beeb0c
feat(logging): enhance logging with system message details
Added logging for system messages to improve debug traceability. This change helps in diagnosing issues by ensuring that system-level communications are captured alongside incoming user messages, thereby providing more context for developers monitoring application behavior.
2024-11-06 16:47:09 +01:00
253c7e581f
fix(logging): add debug log for truncated messages
Added a debug-level log statement to capture the final set of messages before returning, which aids in tracing message processing and debugging potential issues in the message truncation logic. This enhances transparency and facilitates easier troubleshooting.
2024-11-06 16:41:13 +01:00
21edc48050
fix: correct message concatenation logic
Wraps `system_message_dict` in a list to address a logic error in message concatenation. This ensures that the message list is correctly formatted, preventing potential runtime issues when processing the message sequence.
2024-11-06 16:24:54 +01:00
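For illustration, a minimal sketch of the concatenation fix above; the variable names follow the commit message, the surrounding code is assumed:

```python
# A dict cannot be concatenated onto a list of message dicts directly,
# so the system message is wrapped in a single-element list first.
system_message_dict = {"role": "system", "content": "You are a helpful assistant."}
messages = [{"role": "user", "content": "Hello!"}]

# Before (broken): system_message_dict + messages  ->  TypeError
# After (fixed):
messages = [system_message_dict] + messages
```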
5fef1ab59c
fix(truncation): correct message handling and token calc
Updated message truncation logic to correctly return a system message dictionary and adjust token calculations. Improved model encoding fallback strategy to utilize "gpt-4o" instead of "gpt-3.5-turbo" for greater compatibility. This addresses message mishandling and ensures more robust operation.

Resolves the need for better error handling with encoding defaults.
2024-11-06 16:18:30 +01:00
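A hedged sketch of the encoding fallback described above, assuming the token counting is done with `tiktoken` (the function name is illustrative):

```python
import tiktoken

def get_encoding(model: str) -> tiktoken.Encoding:
    """Return the tokenizer for a model, falling back to gpt-4o's encoding
    for model names tiktoken does not recognize (e.g. self-hosted models)."""
    try:
        return tiktoken.encoding_for_model(model)
    except KeyError:
        # Unknown model: default to the gpt-4o encoding rather than the
        # older gpt-3.5-turbo one.
        return tiktoken.encoding_for_model("gpt-4o")

tokens = get_encoding("my-local-model").encode("How many tokens am I?")
print(len(tokens))
```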
571031002c
chore: bump version to 0.3.20
All checks were successful
Docker CI/CD / Docker Build and Push to Docker Hub (push) Successful in 8m41s
Python Package CI/CD / Setup and Test (push) Successful in 2m2s
Python Package CI/CD / Publish to PyPI (push) Successful in 1m11s
Updated project version from 0.3.19 to 0.3.20 to reflect the latest changes and improvements in the codebase. Ensures compatibility with the new updates and maintains version tracking.
2024-08-23 19:08:02 +02:00
179005a562
fix: add room check to prevent processing errors
Updated the method to include a room parameter, ensuring that message processing functions only when a room is provided. This prevents errors when trying to download and process media files, improving stability and avoiding unnecessary exceptions.
2024-08-23 19:06:49 +02:00
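A minimal sketch of the guard described above; the function and helper names are assumptions for illustration:

```python
async def process_message(bot, event, room=None):
    """Illustrative guard: only process media when a room is provided."""
    if room is None:
        # Without a room there is nowhere to reply and no context for
        # media downloads, so skip instead of raising later.
        return
    await bot.download_and_process_media(event, room)  # hypothetical helper
```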
40f28b9f0b
chore: bump version to 0.3.19
All checks were successful
Docker CI/CD / Docker Build and Push to Docker Hub (push) Successful in 8m35s
Python Package CI/CD / Setup and Test (push) Successful in 2m1s
Python Package CI/CD / Publish to PyPI (push) Successful in 1m11s
Upgrade project version from 0.3.18 to 0.3.19 to reflect recent changes and improvements. No other modifications were made.
2024-08-18 10:54:31 +02:00
08fa83f1f9
fix(dice): handle missing dice roll parameter
Adjust exception handling to catch both ValueError and IndexError. This ensures the command gracefully defaults to 6 sides when input parameters are insufficient or improperly formatted. Improves robustness against user errors.
2024-08-18 10:54:03 +02:00
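A sketch of the hardened dice parsing (a hypothetical reconstruction, not the project's exact code):

```python
def dice_sides(args: list[str]) -> int:
    """Parse the requested number of sides, defaulting to 6 on bad input."""
    try:
        return int(args[0])
    except (ValueError, IndexError):
        # args may be empty (IndexError) or non-numeric (ValueError);
        # either way, fall back to a standard six-sided die.
        return 6

assert dice_sides(["20"]) == 20
assert dice_sides(["banana"]) == 6
assert dice_sides([]) == 6
```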
525aea3f05
fix(config): add fallback values for Matrix config checks
Added fallback values for Matrix 'Password' and 'UserID' config checks to prevent exceptions when these keys are not present. This ensures smoother handling of missing configurations.
2024-08-18 10:50:27 +02:00
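With Python's `configparser`, the fallback pattern looks like this (section and key names taken from the commit message; the rest is illustrative):

```python
import configparser

config = configparser.ConfigParser()
config.read("config.ini")

# fallback= returns a default instead of raising NoOptionError /
# NoSectionError when the key or section is missing.
password = config.get("Matrix", "Password", fallback=None)
user_id = config.get("Matrix", "UserID", fallback=None)

if password and not user_id:
    raise ValueError("UserID must be set when a Password is configured")
```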
99d3974e17
feat: update bot info and deprecate stats command
All checks were successful
Docker CI/CD / Docker Build and Push to Docker Hub (push) Successful in 8m55s
Python Package CI/CD / Setup and Test (push) Successful in 1m58s
Python Package CI/CD / Publish to PyPI (push) Successful in 1m11s
Updated the bot info command to display model info specific to the room.
Removed the now unsupported stats command from help and privacy.
Retired the 'stats' command, informing users of its deprecation.
Updated version to 0.3.18 to reflect these changes.
2024-08-18 10:44:09 +02:00
e4dba23e39
chore(release): bump version to 0.3.17
All checks were successful
Docker CI/CD / Docker Build and Push to Docker Hub (push) Successful in 8m42s
Python Package CI/CD / Setup and Test (push) Successful in 1m53s
Python Package CI/CD / Publish to PyPI (push) Successful in 1m12s
Updated project version to 0.3.17 to reflect new changes or fixes.
2024-08-04 20:10:07 +02:00
5378ac39e4
feat(openai): add event to incoming messages
Appended the event to the incoming messages list to ensure it gets processed. This change addresses situations where events were previously being overlooked, potentially leading to incomplete or incorrect processing. This enhancement ensures a more comprehensive handling of incoming data.
2024-08-04 20:04:50 +02:00
56b4f3617c
chore: bump version to 0.3.16
All checks were successful
Docker CI/CD / Docker Build and Push to Docker Hub (push) Successful in 10m44s
Python Package CI/CD / Setup and Test (push) Successful in 1m59s
Python Package CI/CD / Publish to PyPI (push) Successful in 1m11s
Updated the project version to 0.3.16 to prepare for the next release. This includes recent bug fixes and minor improvements. Ensure the updated version is reflected across all relevant documentation and deployment scripts.
2024-08-04 18:28:48 +02:00
48decdc9e2
feat(logging): enhance debug logging for message processing
Added debug logging to capture incoming, prepared, and truncated messages in the OpenAI class. Also, included logging for last messages fetched in the bot class. These additions aid in the traceability and debugging of message flows and processing errors.

Additionally, an option to log detailed error tracebacks in debug mode was implemented to facilitate better error analysis.
2024-08-04 18:28:12 +02:00
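A sketch of the debug-gated traceback logging (logger name and messages are illustrative):

```python
import logging
import traceback

logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger("gptbot")

try:
    raise RuntimeError("simulated processing error")
except Exception as e:
    logger.error("Error processing message: %s", e)
    # Only format and emit the full traceback when debug logging is enabled.
    if logger.isEnabledFor(logging.DEBUG):
        logger.debug("Traceback:\n%s", traceback.format_exc())
```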
ca7245696a
fix: correct max_tokens reference in OpenAI class
Updated the reference to max_tokens in the truncation call from
self.chat_api.max_tokens to self.max_tokens, ensuring the correct
token limit is applied. This resolves potential issues with message
length handling.
2024-08-04 17:42:23 +02:00
c06da55d5d
feat: add video file support and integrate Google AI
All checks were successful
Docker CI/CD / Docker Build and Push to Docker Hub (push) Successful in 8m44s
Python Package CI/CD / Setup and Test (push) Successful in 1m22s
Python Package CI/CD / Publish to PyPI (push) Successful in 40s
Introduced the capability to handle video files as input for AI models that support it, enhancing the bot's versatility in processing media. This update includes a new configuration option to enable or disable video input, catering to different model capabilities.

Additionally, integrated Google's Generative AI through the addition of a Google dependency and a corresponding AI class implementation. This broadens the AI options available, providing users with more flexibility in choosing their desired AI backend. The update also refactors and simplifies message preparation and handling, ensuring compatibility and extending functionality to cover the new video input feature and Google AI support.

- Added `ForceVideoInput` configuration option to toggle video file processing.
- Integrated Google Generative AI as an optional dependency and included it in the bot's AI choices.
- Implemented a unified method for preparing messages for AI processing, streamlining how the bot handles various message types.
- Removed obsolete code related to message truncation and specialized handling for images, files, and audio, reflecting a shift towards a more flexible and generalized message processing approach.
2024-05-25 17:35:22 +02:00
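For the Google integration, a minimal call with the `google-generativeai` package might look like this (model name and prompt are placeholders, not taken from the commit):

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_GOOGLE_API_KEY")
model = genai.GenerativeModel("gemini-1.5-flash")  # illustrative model choice

response = model.generate_content("Summarize this chat in one sentence.")
print(response.text)
```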
05ba26d540
feat(openai.py): expand message handling capabilities
Enhanced the OpenAI class to better support diverse message types in chat interactions, including image and video processing. This update introduces several key improvements:
- Added handling for image and video messages, converting them to a format compatible with the OpenAI API.
- Implemented a new method to prepare messages for OpenAI, allowing for richer interaction by including media content directly within the chat.
- Incorporated message truncation to adhere to token limits, ensuring efficient usage of OpenAI's API without sacrificing message content.
- Extended support for additional message types, such as audio and file messages, with specialized processing for each category.

This change aims to enhance user experience by allowing more dynamic and multimedia-rich interactions, aligning with modern chat functionalities. It also addresses potential issues with token limit surpassing and ensures smoother integration of different message formats into the chat flow.
2024-05-25 17:35:05 +02:00
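The content-parts format the OpenAI chat completions API accepts for images looks like this; the helper below is an illustrative reconstruction:

```python
import base64

def image_message(text: str, image_bytes: bytes, mime: str = "image/jpeg") -> dict:
    """Build a user message that embeds an image as a base64 data URL."""
    encoded = base64.b64encode(image_bytes).decode("utf-8")
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": text},
            {"type": "image_url",
             "image_url": {"url": f"data:{mime};base64,{encoded}"}},
        ],
    }
```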
75e637546a
feat(login): enhance login flow with UserID check
All checks were successful
Docker CI/CD / Docker Build and Push to Docker Hub (push) Successful in 8m17s
Python Package CI/CD / Setup and Test (push) Successful in 1m11s
Python Package CI/CD / Publish to PyPI (push) Successful in 36s
Improved the login logic in the bot's initialization process to require a UserID when a Password is provided for login. This update ensures a more secure and robust login procedure by validating the presence of a UserID before attempting to log in, and by handling LoginError explicitly with a clear error message. This change improves error handling and validation during the bot's login phase, avoiding silent failures and improving debuggability.

- Added LoginError import to handle login-related exceptions more gracefully.
- Refined the login process to create the AsyncClient instance with a UserID when password authentication is used, following best practices for client identification.
- Introduced explicit error raising for missing UserID configuration, enhancing configuration validation before attempting a login.
- Improved clarity and security by clearing the password from the configuration post-login, preventing inadvertent storage or reuse.

This update enhances the bot's robustness and configuration validation, ensuring smoother operations and better error handling during the initialization phase.
2024-05-21 08:14:04 +02:00
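A hedged sketch of the validated password login with matrix-nio (homeserver handling and error details are illustrative):

```python
from nio import AsyncClient, LoginError

async def login(homeserver: str, user_id: str | None, password: str) -> AsyncClient:
    if not user_id:
        # Fail early with a clear message instead of a silent login failure.
        raise ValueError("UserID must be configured when using password login")
    client = AsyncClient(homeserver, user_id)
    response = await client.login(password)
    if isinstance(response, LoginError):
        raise RuntimeError(f"Login failed: {response.message}")
    return client
```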
e1695f0cce
feat: enhance error handling for file downloads
Introduce `DownloadException` to improve error reporting and handling when file downloads fail. Modified `download_file` method to accept a `raise_error` flag, which, when set, raises `DownloadException` upon a download error instead of just logging it. This enables the bot to respond with a specific error message to the room if a download fails during processing of speech-to-text, file messages, and image messages, enhancing user feedback on download failures.
2024-05-20 10:41:09 +02:00
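A sketch of the `raise_error` flag on the download path, using matrix-nio's download call (the exact signature in the project may differ):

```python
from nio import AsyncClient, DownloadResponse

class DownloadException(Exception):
    """Raised when downloading a file from the homeserver fails."""

async def download_file(client: AsyncClient, mxc_url: str,
                        raise_error: bool = False) -> bytes | None:
    response = await client.download(mxc_url)
    if isinstance(response, DownloadResponse):
        return response.body
    if raise_error:
        # Callers (speech-to-text, file and image handlers) can catch this
        # and post a specific error message to the room.
        raise DownloadException(f"Failed to download {mxc_url}: {response}")
    return None  # previous behavior: fail quietly apart from logging
```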
3f084ffdd3
feat: enhance tool and image handling
All checks were successful
Docker CI/CD / Docker Build and Push to Docker Hub (push) Successful in 8m22s
Python Package CI/CD / Setup and Test (push) Successful in 1m11s
Python Package CI/CD / Publish to PyPI (push) Successful in 38s
Introduced changes to the tool request behavior and image processing. Now, the configuration allows a dedicated model for tool requests (`ToolModel`) and enforces automatic resizing of context images to maximal dimensions, improving compatibility and performance with the AI model. The update shifts away from a rigid tool model use, accommodating varied model support for tool requests, and optimizes image handling for network and processing efficiency. These adjustments aim to enhance user experience with more flexible tool usage and efficient image handling in chat interactions.
2024-05-20 10:20:17 +02:00
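Resizing context images to maximal dimensions can be done with Pillow's in-place `thumbnail`, roughly as below (the dimension limits are placeholders):

```python
from io import BytesIO
from PIL import Image

def resize_image(data: bytes, max_size: tuple[int, int] = (2000, 768)) -> bytes:
    """Downscale an image to fit within max_size, preserving aspect ratio."""
    image = Image.open(BytesIO(data))
    image.thumbnail(max_size)  # no-op if the image already fits
    buffer = BytesIO()
    image.save(buffer, format=image.format or "PNG")
    return buffer.getvalue()
```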
89f06268a5
fix(migrations): handle specific exceptions gracefully
Updated the exception handling in the migration logic to catch `Exception` explicitly instead of using a bare except clause. Unlike a bare except, `except Exception` does not swallow system-exiting exceptions such as `KeyboardInterrupt` and `SystemExit`, so critical errors are no longer masked. This makes the migration process easier to debug while still catching all ordinary runtime errors.
2024-05-18 21:39:06 +02:00
d0ab53b3e0
feat: standardize bot logging method
Switched to using the bot's centralized logging mechanism for bot info commands, enhancing consistency across the application. This change ensures that all log messages go through the same process, potentially simplifying future debugging and logging enhancements.
2024-05-18 21:38:44 +02:00
19aa91cf48
fix(openai response handling): narrow down exception handling
Refined exception handling in the OpenAI response parsing by specifying `Exception` instead of using a bare except. This clearly defines the scope of anticipated exceptions and, in line with Python best practices, avoids a catch-all that would also suppress system-exiting exceptions, making errors easier to diagnose and the error management strategy more robust overall.
2024-05-18 21:38:17 +02:00
99eec5395e
fix: correct vars in join_callback for space mapping
Resolved incorrect variable usage in join_callback function that affected the mapping of new rooms to the correct spaces. Previously, the event.sender variable was mistakenly used, leading to potential mismatches in identifying the correct user and room IDs for space assignments. This update ensures the response object's sender and room_id properties are correctly utilized, aligning room additions with the intended user spaces.
2024-05-18 21:37:51 +02:00
8a253fdf90
feat(invite): streamline invite handling and logging
Refactored the invite handling process within the invite callback for better consistency and maintainability. Swapped out a basic logging function with the bot's standardized logger for improved logging consistency across the application. Additionally, simplified the room joining process by removing redundant response handling, thus enhancing code readability and maintainability. These changes aim to unify the logging approach within the bot and ensure smoother invite processing without altering the underlying functionality.
2024-05-18 21:37:03 +02:00
28752ae3da
refactor: mark base imports in tools as used
Adjusted import statements in `tools.__init__.py` to silence linting warnings regarding unused imports. This emphasizes that `BaseTool`, `StopProcessing`, and `Handover` are intentionally imported for export purposes, despite not being directly referenced. This change aids in maintaining cleaner code and reduces confusion around import intentions.

No functional impact or changes in behavior are expected as a result of this refactor.
2024-05-18 21:36:21 +02:00
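Two common ways to mark re-exports as intentional in `tools/__init__.py` (a sketch; the project may use either):

```python
# tools/__init__.py
from .base import BaseTool, StopProcessing, Handover  # noqa: F401

# An explicit __all__ also documents the public surface of the package.
__all__ = ["BaseTool", "StopProcessing", "Handover"]
```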
df567d005e
refactor(callbacks): remove test callbacks and imports
This commit streamlines the `callbacks` module by removing the debugging and testing related callbacks (`test_callback` and `test_response_callback`) along with their associated imports. It focuses on enhancing the clarity and maintainability of the callback handling by eliminating unused code paths and dependencies that were specifically used for testing purposes. This cleanup is part of an effort to mature the codebase and reduce potential confusion for new contributors by ensuring that only operational code is present in the production paths. This should not affect the existing functionality but will make future modifications and understanding of the callback mechanisms more straightforward.
2024-05-18 21:36:06 +02:00
e2e31060ce
refactor: improve code readability and efficiency
Enhanced code readability by formatting multiline log statements and adding missing line breaks in conditional blocks. Adopted a more robust error handling approach by catching general exceptions in encoding determination. Eliminated redundant variable assignments for async tasks to streamline event handling and response callbacks, directly invoking `asyncio.create_task()` for better clarity and efficiency. Simplified message and file sending routines by removing unnecessary status assignments, since the responses were never verified. Lastly, optimized the message truncation logic by discarding the unused return value, relying on the in-place operation for token limit adherence. These changes collectively contribute to a cleaner, more maintainable, and more efficient codebase.
2024-05-18 21:33:32 +02:00
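The fire-and-forget pattern mentioned above, calling `asyncio.create_task()` directly instead of binding the task to an unused variable:

```python
import asyncio

async def handle_response(event_id: str):
    await asyncio.sleep(0.1)
    print(f"handled {event_id}")

async def main():
    # Before: task = asyncio.create_task(handle_response("$abc"))  # task unused
    asyncio.create_task(handle_response("$abc"))
    await asyncio.sleep(0.2)  # keep the loop alive for this demo

asyncio.run(main())
```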
7f8ff1502a
fix: remove redundant API usage log call
Removed an unnecessary call to `log_api_usage` after command execution in the calculate command handler. This change eliminates redundant logging that didn't contribute valuable insights and led to clutter in log files, streamlining the process and potentially improving performance by reducing I/O operations.
2024-05-18 21:32:32 +02:00
75d00ea50e
fix: correct user variable in invite error log
Updated the logger to use `event.sender` instead of an undefined `user` variable when logging a failed space invitation, ensuring the correct information is logged. This change addresses a bug where the wrong variable was referenced, potentially causing confusion when diagnosing issues with space invites.
2024-05-18 21:31:56 +02:00
2c04d8bf9c
refactor: remove unused json import from ai base
The ai base module in gptbot no longer requires the json package, leading to its removal. This cleanup enhances code readability and reduces unnecessary import overhead, streamlining the functionality of the ai base class without affecting its external behavior. Such optimizations contribute to the overall maintainability and performance of the codebase.
2024-05-18 21:31:17 +02:00
27df072c0d
fix: correct target room for avatar setup
Fixed an issue where the bot attempted to set the avatar for the wrong room when creating a new room. The avatar is now correctly assigned to the newly created room instead of the incorrectly referenced room variable. This ensures that newly created rooms properly display the intended logo from the start, improving the user experience by maintaining consistent branding across rooms.
2024-05-18 21:30:48 +02:00
141e89ab11
feat: add PyPI and Git badges to README
Enhanced project visibility and accessibility by including new badges in the README. These additions are aimed at providing quick links to the package on PyPI, showing the supported Python versions, license information, and the latest Git commit status. These enhancements make it easier for users and contributors to find important project details, contributing to a more open and engaging community.

This change underscores our commitment to transparency and support for the development community.
2024-05-17 16:36:19 +02:00
47bf7db380
refactor(ci): streamline Docker CI/CD workflows
Removed redundant Docker CI/CD workflow for the 'latest' tag and integrated its functionality into the existing tagging workflow. This change not only reduces the redundancy of having separate workflows for 'latest' and version-specific tags but also simplifies the CI/CD process by having a single, unified workflow for Docker image publications. Moving forward, every push will now ensure that the 'latest' tag is updated alongside the version-specific tags, maintaining a smoother and more predictable deployment and versioning flow.
2024-05-17 16:12:11 +02:00
9c6b6f5b99
feat(dependabot): enable daily pip updates
All checks were successful
Docker CI/CD / Docker Build and Push to Docker Hub (push) Successful in 8m8s
Added a Dependabot configuration to automate dependency updates for the Python package ecosystem. Dependabot will now check for updates on a daily basis, ensuring that our project dependencies remain up-to-date with the latest security patches and features without manual oversight. This proactive approach towards dependency management will aid in minimizing potential security vulnerabilities and compatibility issues, fostering a more secure and stable development environment.
2024-05-17 16:08:33 +02:00
344e736006
docs: emphasize venv usage in installation guide
All checks were successful
Docker CI/CD / Docker Build and Push to Docker Hub (push) Successful in 8m25s
Updated the README to strengthen the recommendation of using a virtual environment (venv) during installation. This adjustment aims to guide users towards best practices in Python environment management, potentially reducing common issues related to package dependencies and conflicts.
2024-05-17 12:46:03 +02:00
3e966334ba
refactor(pyproject.toml): streamline and update dependencies
This commit simplifies the pyproject.toml structure for better readability and maintenance. Key changes include formatting author and license information, consolidating dependency lists into a more concise format, and adding the `future` package to dependencies to ensure forward-compatibility. Optional dependencies are now listed in a more compact style, and the development dependencies section has been cleaned up. These adjustments make the project configuration cleaner and more accessible, facilitating future updates and dependency management.
2024-05-17 11:54:30 +02:00
9178ab23ac
fix: update Pantalaimon default port in README
All checks were successful
Docker CI/CD / Docker Build and Push to Docker Hub (push) Successful in 8m3s
Corrected the default port for Pantalaimon from 8010 to 8009 in the README documentation. This change aligns the documentation with the latest Pantalaimon configuration standards, ensuring that users setting up their homeserver URL in the bot's config.ini file use the correct port. This update is crucial for new users during initial setup to avoid connectivity issues.
2024-05-17 11:49:21 +02:00
ee7e866748
feat(config): change default port to 8009
Some checks are pending
Docker CI/CD / Docker Build and Push to Docker Hub (push) Waiting to run
Updated the default listening port in pantalaimon.example.conf from 8010 to 8009. This alteration ensures compatibility with new network policies and avoids collision with commonly used ports in the default configuration. It's an important change for users setting up new instances, enabling smoother initial configurations without manual port adjustments.
2024-05-17 11:45:41 +02:00
1cd7043a36
feat: enable third-party model vision support
All checks were successful
Docker CI/CD / Docker Build and Push to Docker Hub (push) Successful in 8m8s
Python Package CI/CD / Setup and Test (push) Successful in 1m8s
Python Package CI/CD / Publish to PyPI (push) Successful in 37s
Introduced the `ForceVision` configuration option to allow usage of third-party models for image recognition within the OpenAI setup. This change broadens the flexibility and applicability of the bot's image processing capabilities by not restricting to predefined vision models only. Also, added missing properties to the `OpenAI` class to provide comprehensive control over the bot's behavior, including options for forcing vision and tools usage, along with emulating tool capabilities in models not officially supporting them. These enhancements make the bot more adaptable to various models and user needs, especially for self-hosted setups.

Additionally, updated documentation and increment version to 0.3.12 to reflect these changes and improvements.
2024-05-17 11:37:10 +02:00
8e0cffe02a
feat: enhance AI integration & update models
All checks were successful
Docker CI/CD / Docker Build and Push to Docker Hub (push) Successful in 7m53s
Python Package CI/CD / Setup and Test (push) Successful in 1m13s
Python Package CI/CD / Publish to PyPI (push) Successful in 37s
Refactored the handling of AI providers to support multiple AI services efficiently, introducing a `BaseAI` class from which all AI providers now inherit. This change modernizes our approach to AI integration, providing a more flexible and maintainable architecture for future expansions and enhancements.

- Adopted `gpt-4o` and `dall-e-3` as the default models for chat and image generation, respectively, aligning with the latest advancements in AI capabilities.
- Integrated `ruff` as a development dependency to enforce coding standards and improve code quality through consistent linting.
- Removed unused API keys and sections from `config.dist.ini` to streamline configuration management and clarify setup processes for new users.
- Updated the command line tool for improved usability and fixed previous issues preventing its effective operation.
- Enhanced OpenAI integration with advanced settings for temperature, top_p, frequency_penalty, and presence_penalty, enabling finer control over AI-generated outputs.

This comprehensive update not only enhances the bot's performance and usability but also lays the groundwork for incorporating additional AI providers, ensuring the project remains at the forefront of AI-driven chatbot technologies.

Resolves #13
2024-05-17 11:26:37 +02:00
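A hedged sketch of the provider hierarchy; the real `BaseAI` interface in the project likely differs in detail:

```python
from abc import ABC, abstractmethod

class BaseAI(ABC):
    """Shared interface that all AI providers inherit from."""

    def __init__(self, api_key: str, chat_model: str = "gpt-4o"):
        self.api_key = api_key
        self.chat_model = chat_model

    @abstractmethod
    async def generate_chat_response(self, messages: list[dict]) -> str:
        """Return the assistant reply for a list of chat messages."""

class OpenAI(BaseAI):
    async def generate_chat_response(self, messages: list[dict]) -> str:
        ...  # call the OpenAI API here, honoring temperature/top_p etc.
```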
02887b9336
feat: add main_sync wrapper for asyncio compatibility
All checks were successful
Docker CI/CD / Docker Build and Push to Docker Hub (push) Successful in 10m11s
Refactored the main execution pathway to introduce a `main_sync` function that wraps the existing asynchronous `main` function, facilitating compatibility with environments that necessitate or prefer synchronous execution. This change enhances the bot's flexibility in various deployment scenarios without altering the core asynchronous functionality.

In addition, expanded the exception handling in `get_version` to catch all exceptions instead of limiting to `DistributionNotFound`. This broadens the robustness of version retrieval, ensuring the application can gracefully handle unexpected issues during version lookup.

Whitespace adjustments improve code readability by clearly separating function definitions.

These adjustments contribute to the maintainability and operability of the application, allowing for broader usage contexts and easier integration into diverse environments.
2024-05-17 10:58:01 +02:00
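The wrapper itself is small; a sketch consistent with the description above:

```python
import asyncio

async def main():
    ...  # existing asynchronous entry point

def main_sync():
    """Synchronous wrapper so console-script entry points and other
    non-async callers can start the bot without managing an event loop."""
    asyncio.run(main())

if __name__ == "__main__":
    main_sync()
```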
bc06f8939a
refactor: applying lots of linting
Some checks failed
Docker CI/CD / Docker Build and Push to Docker Hub (push) Has been cancelled
This commit removes unnecessary imports across several modules, enhancing code readability and potentially improving performance. Notably, `KeysUploadError` and `requests` were removed where no longer used, reflecting a cleaner dependency structure. Furthermore, logging calls have been standardized, removing dynamic string generation in favor of static messages. This change not only makes the logs more consistent but also slightly reduces the computational overhead associated with log generation. The removal of unused type hints also contributes to a more focused and maintainable code base.

Additionally, the commit includes minor text adjustments for user messages, replacing dynamic content with fixed strings where the dynamism was not needed. This enhances both the clarity and security of user-directed messages by avoiding unnecessary string formatting operations.

Finally, the simplification of the migration script and the adjustment in the tools module underscore an ongoing effort to maintain clean and efficient code infrastructure.
2024-05-17 10:54:54 +02:00
5bbcd3cfda
feat: add debug and keyring config options
All checks were successful
Docker CI/CD / Docker Build and Push to Docker Hub (push) Successful in 8m6s
Added `LogLevel` and `UseKeyring` configuration options to the example configuration file to provide users with more control over logging verbosity and the decision to utilize a system keyring for credentials storage. The LogLevel option allows for easier debugging by adjusting the verbosity of logs, whereas the UseKeyring option offers flexibility in credential management, catering to environments where a system keyring may not be preferred or available.

These changes enhance the tool's usability and adaptability to various user environments and debugging needs.
2024-05-17 10:38:23 +02:00
15a93d8231
feat: Expand bot usage control and API support
All checks were successful
Docker CI/CD / Docker Build and Push to Docker Hub (push) Successful in 17m54s
Python Package CI/CD / Setup and Test (push) Successful in 1m54s
Python Package CI/CD / Publish to PyPI (push) Successful in 55s
Enhanced bot flexibility by enabling the specification of room IDs in the allowed users' list, broadening access control capabilities. This change allows for more granular control over who can interact with the bot, particularly useful in scenarios where the bot's usage needs to be restricted to specific rooms. Additionally, updated documentation and configurations reflect the inclusion of new AI models and self-hosted API support, catering to a wider range of use cases and setups. The README.md and config.dist.ini files have been updated to offer clearer guidance on setup, configuration, and troubleshooting, aiming to improve user experience and ease of deployment.

- Introduced the ability for room-specific bot access, enhancing user and room management flexibility.
- Expanded AI model support, including `gpt-4o` and `ollama`, increasing the bot's versatility and range of application scenarios.
- Updated Python version compatibility to 3.12 to ensure users are leveraging the latest language features and improvements.
- Improved troubleshooting documentation to assist users in resolving common issues more efficiently.
2024-05-16 07:24:34 +02:00
e58bea20ca
feat(logging): Add debug log for empty OpenAI responses
All checks were successful
Docker CI/CD / Docker Build and Push to Docker Hub (push) Successful in 16m29s
Introduces logging for cases where OpenAI's API returns an empty response, ensuring that such occurrences are captured for debugging purposes. This change enhances visibility into the interaction with OpenAI's endpoint, facilitating easier identification and resolution of issues where empty responses are received, potentially indicating API limitations, network issues, or unexpected behavior from the AI model.
2024-05-16 06:40:03 +02:00
bd0099aa29
feat(docker): Extended information on setting up Pantalaimon with Docker
All checks were successful
Docker CI/CD / Docker Build and Push to Docker Hub (push) Successful in 7m58s
2024-05-15 14:33:35 +02:00
e46be65707
feat: Add optional location name to weather report
All checks were successful
Docker CI/CD / Docker Build and Push to Docker Hub (push) Successful in 9m33s
This update allows users to provide a location name for their weather reports, which can be useful when requesting weather information for specific locations.
2024-05-10 18:31:24 +02:00
9a4c250eb4
fix: Enhance error handling for user authentication
Handling errors gracefully and providing clear feedback to users is essential. This change introduces additional checks to ensure robust error handling during user authentication, reducing the likelihood of errors propagating further down the pipeline.

This improvement not only enhances the overall stability of the system but also improves the user experience by providing more informative error messages when an issue occurs.
2024-05-10 18:18:40 +02:00
f6a3f4ce66
feat: Update pantalaimon-related scripts and configuration
All checks were successful
Docker CI/CD / Docker Build and Push to Docker Hub (push) Successful in 8m44s
Renamed `pantalaimon_first_login.py` to `fetch_access_token.py` to better reflect its purpose. Additionally, updated README to remove obsolete instructions for using pantalaimon with the bot.
2024-05-10 17:27:30 +02:00
b88afda558
refactor: Update dependency matrix-nio to 0.24.0
Some checks failed
Docker CI/CD / Docker Build and Push to Docker Hub (push) Failing after 7m54s
This commit updates the `matrix-nio` dependency to version 0.24.0, ensuring compatibility and new features.
2024-05-10 17:07:30 +02:00
df3697b4ff
feat: Add Docker support and TrackingMore dependency
All checks were successful
Docker CI/CD / Docker Build and Push to Docker Hub (push) Successful in 8m42s
Release version 0.3.9 introduces Docker support, enhancing deployment options by allowing the bot to run in a containerized environment. This update greatly simplifies deployment and scaling processes. Additionally, the inclusion of the TrackingMore dependency expands the bot's functionality, enabling advanced tracking features. These changes collectively aim to improve the bot's adaptability and efficiency in handling tasks.
2024-04-23 18:13:53 +02:00
17c6938a9d
feat: Switch Docker CI/CD to main branch & release v0.3.9
All checks were successful
Docker CI/CD / Docker Build and Push to Docker Hub (push) Successful in 8m34s
Python Package CI/CD / Setup and Test (push) Successful in 1m27s
Python Package CI/CD / Publish to PyPI (push) Successful in 38s
- Updated the Docker CI/CD workflow to trigger on pushes to the main branch, aligning with standard Git flow practices for production deployment.
- Advanced project version to 0.3.9, marking a new release with consolidated features and bug fixes.

This adjustment ensures that the Docker images are built and deployed in a more streamlined manner, reflecting our shift towards a unified branching strategy for releases. The version bump signifies the stabilization of new functionalities and enhancements for broader usage.
2024-04-23 18:12:39 +02:00
e8691194a9
CHANGELOG update
All checks were successful
Docker CI/CD / Docker Build and Push to Docker Hub (push) Successful in 8m56s
2024-04-23 18:06:55 +02:00
c7c2cbc95f
feat: support password-based Matrix login
Some checks failed
Docker CI/CD / Docker Build and Push to Docker Hub (push) Has been cancelled
This update introduces the ability for the bot to use a Matrix UserID and password for authentication, in addition to the existing Access Token method. Upon the first run with UserID and password, the bot automatically converts these credentials into an Access Token, updates the configuration with this token, and removes the password for security purposes. This enhancement simplifies the initial setup process for new users by directly utilizing Matrix login credentials, aligning with common user authentication workflows and enhancing security by not storing passwords long-term.

Refactored the bot initialization process in `GPTBot.from_config` to support dynamic login method selection based on provided credentials, and implemented automatic configuration updating to reflect the newly obtained Access Token and cleaned credentials.

Minor adjustments include formatting and comment clarification for better readability and maintenance.

This change addresses the need for a more straightforward and secure authentication process for bot deployment and user experience improvement.
2024-04-23 18:05:50 +02:00
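A hedged sketch of the credential conversion: log in with the password once, persist the access token, and scrub the password (section and key names are assumptions):

```python
import configparser
from nio import AsyncClient, LoginResponse

async def login_and_store_token(config: configparser.ConfigParser,
                                path: str) -> AsyncClient:
    client = AsyncClient(config["Matrix"]["Homeserver"],
                         config["Matrix"]["UserID"])
    response = await client.login(config["Matrix"]["Password"])
    if isinstance(response, LoginResponse):
        # Keep the token, drop the password so it isn't stored long-term.
        config["Matrix"]["AccessToken"] = response.access_token
        config.remove_option("Matrix", "Password")
        with open(path, "w") as file:
            config.write(file)
    return client
```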
91a028d50b
refactor: streamline Docker setup in README
All checks were successful
Docker CI/CD / Docker Build and Push to Docker Hub (push) Successful in 9m12s
Removed detailed Docker setup instructions, opting to simplify the Docker usage section by retaining only the Docker Compose method. This change aims to declutter the README and encourage a more standardized setup process for users, reducing potential confusion and maintaining focus on the primary installation method via Docker Compose. The update includes a minor adjustment to the database initialization step, ensuring users are guided to prepare their environment fully before running the bot. This revision makes the setup process more approachable and efficient, especially for newcomers.

By directing users to the `Running` section for config file setup instructions, we maintain consistency and avoid duplicative content, keeping the README streamlined and more manageable.
2024-04-23 17:47:26 +02:00
5a9332d635
feat: Replace deprecated dependency
All checks were successful
Docker CI/CD / Docker Build and Push to Docker Hub (push) Successful in 8m56s
Transitioned from the deprecated `pkg_resources` to `importlib.metadata` for package version retrieval, improving performance and future compatibility.
2024-04-23 17:30:21 +02:00
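The replacement pattern, with the distribution name assumed to be `matrix-gptbot`:

```python
from importlib.metadata import version, PackageNotFoundError

def get_version() -> str | None:
    try:
        return version("matrix-gptbot")
    except PackageNotFoundError:
        # Running from a source tree without an installed distribution.
        return None
```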
7745593b91
feat(docker-compose): mount local database for persistence
All checks were successful
Docker CI/CD / Docker Build and Push to Docker Hub (push) Successful in 9m16s
Added a volume to the `matrix-gptbot` service configuration in `docker-compose.yml`, mounting the local `database.db` file into the container. This change enables persistent storage for the bot's data, ensuring that data is not lost when the container is restarted or redeployed. It enhances data management and allows for more robust operation of the bot service by leveraging persistence.

This development is crucial for scenarios requiring data retention across bot updates and system maintenance activities.
2024-04-23 17:08:13 +02:00
ca68ecb282
feat: add trackingmore API and ffmpeg
- Included the `ffmpeg` package in the Docker environment to support multimedia content processing.
- Added `trackingmore-api-tool` as a dependency to expand the bot's functionality with tracking capabilities.
- Adjusted the `all` dependencies list in `pyproject.toml` to include the `trackingmore` module, indicating a broader feature set for the application.
- Updated the bot class to prepare for integrating `TrackingMore` alongside existing services like `OpenAI` and `WolframAlpha`, highlighting an intention to make such integrations configurable in the future.

This enhancement enables the bot to interact with multimedia content more effectively and introduces package tracking features, laying groundwork for configurable service integrations.
2024-04-23 17:07:57 +02:00
076eb2d243
feat: switch to pre-built Docker image & update ports
All checks were successful
Docker CI/CD / Docker Build and Push to Docker Hub (push) Successful in 6m19s
Moved from building the GPT bot Docker container on the fly to using a pre-built image, enhancing the setup's efficiency and reducing build times for deployments. Adjusted the server's exposed port to resolve conflicts and standardize deployment configurations. Additionally, locked in the `future` package version to ensure compatibility with legacy Python code, mitigating potential future incompatibility issues.
2024-04-23 16:45:16 +02:00
eb9312099a
feat(workflows): streamline Docker CI/CD processes
All checks were successful
Docker CI/CD / Docker Build and Push to Docker Hub (push) Successful in 5m34s
Removed redundant internal Docker CI/CD workflow and unified naming for external Docker workflows to improve clarity and maintainability. Introduced a new workflow for tagging pushes, aligning deployments closer with best practices for version management and distribution. This change simplifies the CI/CD pipeline, reducing duplication and potential confusion, while ensuring that Docker images are built and pushed efficiently for both internal developments and tagged releases.

- Docker CI/CD internals were removed, focusing efforts on standardized workflows.
- Docker CI/CD workflow names were harmonized to reinforce their universal role across projects.
- A new tagging workflow supports better version control and facilitates automatic releases to Docker Hub, promoting consistency and reliability in image distribution.

This adjustment lays the groundwork for more streamlined and robust CI/CD operations, providing a solid framework for future enhancements and scalability.
2024-04-23 11:46:33 +02:00
fc26f4b591
feat(workflows): add Docker CI/CD for Forgejo, refine Docker Hub flow
Some checks failed
Docker CI/CD Internal / Docker Build and Push to Forgejo Docker Registry (push) Failing after 5m31s
Docker CI/CD External / Docker Build and Push to Docker Hub (push) Successful in 5m32s
Introduced a new CI/CD workflow specifically for building and pushing Docker images to the Forgejo Docker Registry, triggered by pushes to the 'docker' branch. This addition aims to streamline Docker image management and deployment within Forgejo's infrastructure, ensuring a more isolated and secure handling of images. Concurrently, the existing workflow for Docker Hub has been refined to clarify its purpose: it is now explicitly focused on pushing to Docker Hub, removing the overlap with Forgejo Docker Registry operations. This delineation enhances the clarity of our CI/CD processes and ensures a cleaner separation of concerns between public and internal image repositories.
2024-04-23 11:10:50 +02:00
5bc6344fdf
fix(docker): streamline tag format for latest image
Some checks failed
Docker CI/CD / Docker Build and Push (push) Failing after 5m59s
Removed a duplicate and unnecessary line specifying the `latest` tag for the docker image in the workflow. This change simplifies the tag specification process, avoiding redundancy, and ensuring clear declaration of both the `latest` and SHA-specific tags for our docker images in CI/CD pipelines.
2024-04-23 10:52:09 +02:00
c94c016cf1
fix(docker): use env vars for GitHub credentials in workflows
Some checks failed
Docker CI/CD / Docker Build and Push (push) Failing after 5m37s
Migrated from direct GitHub context references to environment variables for GitHub SHA, actor, and token within the Docker build and push actions. This enhances portability and consistency across different execution environments, ensuring better compatibility and security when interfacing with GitHub and Forgejo Docker registries.
2024-04-23 10:45:57 +02:00
f049285cb1
feat(docker): support dynamic tag based on commit SHA
Some checks failed
Docker CI/CD / Docker Build and Push (push) Failing after 1m34s
Enhanced Docker build workflows in `.forgejo/workflows/docker-latest.yml` to include dynamic tagging based on the GITHUB_SHA, alongside the existing 'latest' tag for both the kumitterer/matrix-gptbot and git.private.coffee/privatecoffee/matrix-gptbot images. This change allows for more precise versioning and traceability of images, facilitating rollback and specific version deployment. Also standardized authentication token variables for Docker login to the Forgejo Docker Registry, improving readability and consistency in CI/CD configurations.
2024-04-23 10:40:06 +02:00
35254a0b49
feat(docker): extend CI to push images to Forgejo
Some checks failed
Docker CI/CD / Docker Build and Push (push) Failing after 5m44s
Enhanced the CI pipeline for Docker images by supporting an additional push to the Forgejo Docker Registry alongside the existing push to Docker Hub. This change allows for better integration with private infrastructures and provides an alternative for users and systems that prefer or require images to be stored in a more controlled or private registry. It involves logging into both Docker Hub and Forgejo with respective credentials and pushing the built images to both, ensuring broader availability and redundancy of our Docker images.
2024-04-23 10:27:15 +02:00
bd0d6c5588
feat(docker): streamline Docker setup for GPTBot
All checks were successful
Docker CI/CD / Docker Build and Push (push) Successful in 7m23s
Moved the installation of build-essential and libpython3-dev from the Docker workflow to the Dockerfile itself. This change optimizes the Docker setup process, ensuring that all necessary dependencies are encapsulated within the Docker build context. It simplifies the CI workflow by removing redundant steps and centralizes dependency management, making the build process more straightforward and maintainable.

This adjustment aids in achieving a cleaner division between CI setup and application-specific build requirements, potentially improving build times and reducing complexity for future updates or dependency changes.
2024-04-23 09:52:43 +02:00
224535373e
feat(docker-workflow): add essential build dependencies
Some checks failed
Docker CI/CD / Docker Build and Push (push) Failing after 5m24s
Updated the docker-latest workflow to install additional critical build dependencies including build-essential and libpython3-dev, alongside docker.io. This enhancement is geared towards supporting a wider range of development scenarios and facilitating more complex build requirements directly within the CI pipeline.
2024-04-23 09:43:14 +02:00
a3b4cf217c
feat(docker-ci): enhance Docker CI/CD workflow
Some checks failed
Docker CI/CD / Docker Build and Push (push) Failing after 9m10s
Updated the Docker CI/CD pipeline in the `.forgejo/workflows/docker-latest.yml` to support better integration and efficiency. Key enhancements include setting a container environment with Node 20 on Debian Bookworm for consistency across builds, and installing Docker directly within the runner to streamline the process. This refinement simplifies the setup steps, reduces potential for errors, and possibly decreases pipeline execution time. These changes ensure that our Docker images are built and pushed in a more reliable and faster environment.
2024-04-23 09:26:15 +02:00
d23cfa35fa
refactor(docker-latest.yml): remove ubuntu-latest runner
Some checks failed
Docker CI/CD / docker (push) Failing after 1m35s
Removed the specific runner designation from the Docker workflow to allow for dynamic runner selection. This change aims to increase flexibility and compatibility across different CI environments. It reduces the dependency on a single OS version, potentially leading to better resource availability and efficiency in workflow execution.
2024-04-23 09:22:36 +02:00
054f29ea39
feat: Add Docker CI/CD and update docs for Docker usage
Some checks are pending
Docker CI/CD / docker (push) Waiting to run
Introduced a new Docker CI/CD workflow to automatically build and push images to Docker Hub on pushes to the 'docker' branch. This automation ensures that the latest version of the bot is readily available for deployment, facilitating easier distribution and deployment for users.

The README.md has been updated to improve clarity around installation methods, emphasizing PyPI as the recommended installation method while expanding on Docker usage. It now includes detailed instructions for Docker Hub images, local image building, and Docker Compose deployment, catering to a broader range of users and deployment scenarios. This update aims to make the bot more accessible and manageable by providing clear, step-by-step guidance for different deployment strategies.

Related to these changes, the documentation has been restructured to better organize information related to configuration and running the bot, ensuring users have a smoother experience setting up and deploying it in their environment.

These changes reflect our commitment to enhancing user experience and streamlining deployment processes, making it easier for users to adopt and maintain the matrix-gptbot in various environments.
2024-04-23 09:21:05 +02:00
f8861a16ad
feat: update GPTbot volume mapping
Changed the volume mapping for GPTbot service in `docker-compose.yml` from `settings.ini` to `config.ini`. This modification aligns the container configuration with the new application configuration file naming convention, facilitating easier configuration management and clarity for development and deployment processes.

This change is essential for maintaining consistency across our documentation and deployment scripts, ensuring that all references to configuration files are accurate and up to date.
2024-04-23 08:46:31 +02:00
ca07adbc93
refactor(docker): restructure project for improved management
Redesigned the Docker setup to enhance project structure and configuration management. Changes include a more organized directory structure within the Docker container, separating source code, project metadata, and licenses explicitly to the `/app` directory for better clarity and management. Additionally, integrated `pantalaimon` as a dependency service in `docker-compose.yml`, enabling secure communication with Matrix services by automatically managing settings and configurations through mounted files. This setup simplifies the development environment setup and streamlines deployments.
2024-04-23 08:42:12 +02:00
df2587ee74
feat: Add Docker support for GPTBot
Introduced Dockerfile and docker-compose.yml to encapsulate GPTBot into a Docker container. This simplifies deployment and ensures consistent environments across development and production setups. The Dockerfile outlines the necessary steps to build the image, including setting up the working directory, copying the current directory into the container, installing all dependencies, and defining the command to run the bot. The docker-compose.yml file further streamlines the deployment process by specifying service configuration, leveraging Docker Compose version 3.8 for its extended feature set and compatibility.

By containerizing GPTBot, we enhance portability, reduce set-up times for new contributors, and minimize "works on my machine" issues, fostering a more robust development lifecycle.
2024-04-23 08:25:12 +02:00
69fbbe251c
feat: Clarifications in README
Extended the README to include a new section on bot configuration setup, emphasizing the necessity of a config.ini file for operation. This update clarifies the setup process for new users, ensuring they understand the requirement of configuring the bot before use. Additionally, outlined the repository policy regarding the use of the `main` branch for development and the process for contributing through feature branches and pull requests, aiming to streamline contribution workflows and maintain code quality.

The formatting improvements across the README enhance readability and ensure consistency in documentation presentation.
2024-04-23 08:19:32 +02:00
63dc903123
refactor(openai.py): remove redundant logging
Removed an extraneous log statement that recorded the first message content in the OpenAI class. This change streamlines the logging process by eliminating unnecessary log clutter, improving the readability of logs and potentially enhancing performance by reducing I/O operations on the logging system. This adjustment is pivotal for maintaining a clean and efficient codebase, especially in production environments where excessive logging can lead to inflated log sizes and make troubleshooting more challenging.
2024-04-23 08:14:33 +02:00
819e4bbaae
feat: Prepare for 0.3.9 release and update copyright
- Initialized preparations for the unreleased 0.3.9 version in the changelog.
- Updated copyright information to include 2024 and added Private.coffee Team alongside Kumi Mitterer to reflect the collaborative nature of the project going forward.
- Incremented the project version to 0.3.9.dev0 in pyproject.toml to align with upcoming development efforts.
- Modified all references from Kumi's personal repo to the Private.coffee Team's repo in README.md, LICENSE, and pyproject.toml, ensuring future contributions and issues are directed to the correct repository. This change facilitates a broader collaboration platform and acknowledges the team's growing involvement in the project's development.

These updates are critical for the upcoming development phase and for accurately representing the collaborative efforts behind the project.
2024-04-23 07:58:34 +02:00
94a9881465
feat(changelog): update for v0.3.7/0.3.8 enhancements
All checks were successful
Python Package CI/CD / Setup and Test (push) Successful in 1m20s
Python Package CI/CD / Publish to PyPI (push) Successful in 37s
Updated the CHANGELOG.md to document enhancements in versions 0.3.7 and 0.3.8, including updates to URLs in the pyproject.toml and migration of the build pipeline to Forgejo Actions. These changes aim to improve project dependency management and streamline the CI/CD process, ensuring a more efficient development workflow.
2024-04-15 19:31:27 +02:00
6236142a21
chore: bump version to 0.3.8 and update URLs
Bumped project version to 0.3.8 for the next release cycle. Updated Homepage and Bug Tracker URLs to reflect the new hosting location, aiming for improved accessibility and collaboration. Additionally, introduced a Source Code URL for direct access to the repository, facilitating developers' engagement and contributions.
2024-04-15 19:29:23 +02:00
66d4dff72d
feat(release): streamline PyPI publishing process
All checks were successful
Python Package CI/CD / Setup and Test (push) Successful in 1m17s
Python Package CI/CD / Publish to PyPI (push) Successful in 39s
Simplified the PyPI publishing steps in the release workflow by consolidating the Python environment setup, tool installation, package building, and publishing into a single job. Performing these steps in one go makes the workflow more efficient, reduces the number of steps required to publish a package, and minimizes the potential issues that can arise from multiple separate steps.

This approach leverages the existing Python and PyPI tools more effectively and could potentially shorten the release cycle time. It also makes the workflow file cleaner and easier to maintain.
2024-04-15 19:22:07 +02:00
9110910b11
fix: use posix shell syntax for activation
Some checks failed
Python Package CI/CD / Setup and Test (push) Successful in 1m29s
Python Package CI/CD / Publish to PyPI (push) Failing after 18s
Switched the virtual environment activation command to be compliant with POSIX shell syntax. The previous `source` command is replaced with `.`, making the script more broadly compatible with different shell environments without requiring bash specifically. This change ensures greater portability and compatibility of the release workflow across diverse CI/CD execution environments.
2024-04-15 19:11:49 +02:00
c33efd2e73
feat(workflows): Use python3 for venv and version check
Some checks failed
Python Package CI/CD / Setup and Test (push) Failing after 21s
Python Package CI/CD / Publish to PyPI (push) Failing after 10m58s
Updated the GitHub Actions workflow in `.forgejo/workflows/release.yml` to explicitly use `python3` instead of `python` for both version checking and virtual environment setup. This change ensures compatibility and clarity across environments where `python` might still point to Python 2.x, preventing potential conflicts and removing ambiguity. The adjustment aligns with modern best practices, acknowledging the widespread transition to Python 3 and its tooling.
2024-04-15 18:58:44 +02:00
d57a7367ab
feat(workflows): Switch base image to node:20-bookworm
Some checks failed
Python Package CI/CD / Setup and Test (push) Failing after 29s
Python Package CI/CD / Publish to PyPI (push) Failing after 16s
Updated the CI/CD pipeline in `.forgejo/workflows/release.yml` to utilize the `node:20-bookworm` image instead of `python:3.10`. This necessitated adding steps for installing Python dependencies, given the switch to a Node.js-based image. This change accommodates projects that require both Node.js and Python environments for their setup and publishing processes, ensuring better compatibility and flexibility in handling mixed-technology stacks.

This update enhances our pipeline's capability to manage dependencies more efficiently across different technologies, catering to the evolving needs of our projects.
2024-04-15 18:53:37 +02:00
037acf34b2
Version bump
Some checks failed
Python Package CI/CD / Publish to PyPI (push) Failing after 1m11s
Python Package CI/CD / Setup and Test (push) Failing after 11m51s
2024-04-15 18:18:43 +02:00
589c8395b7
ci: remove redundant GitLab CI configuration
Removed the entire `.gitlab-ci.yml`, discontinuing the GitLab CI/CD process. This change reflects a shift toward consolidating our CI/CD workflows, potentially to another platform or to streamline our current processes. It's essential to assess the impacts on project deployment and distribution, especially regarding PyPI publishing previously handled by this GitLab CI configuration.
2024-04-15 18:12:25 +02:00
35db931f4a
feat: Add CI/CD workflow and support badge
Introduced a new CI/CD GitHub Actions workflow for automated testing and publishing to PyPI, targeting Python 3.10 containers. This setup ensures code is tested before release and automates the distribution process to PyPI whenever a new tag is pushed. Additionally, updated README.md to include a support badge from private.coffee, encouraging community support and visibility.

This enhancement simplifies the release process, improves code quality assurance, and fosters community engagement by providing a direct way to support the project.
2024-04-15 18:11:42 +02:00
1e59c90df2
feat: Bump version to 0.3.6 with message type fix
Upgraded project version to 0.3.6 to introduce a critical fix for message type detection failing on certain messages. This version also amends the package directory structure for improved organization, moving from `src/gptbot` to just `gptbot`. Additionally, updated the CHANGELOG to reflect this fix and organizational change, ensuring that it stays current with the project's progress.

- Fixes message type detection issue
2024-04-11 07:41:52 +02:00
7edc69897b
Version bump to 0.3.5 2024-04-11 07:31:05 +02:00
cece8cfb24
feat(bot): enhance event processing robustness
Improve event type determination in the message fetching logic by adding try-except blocks to handle AttributeError and KeyError exceptions gracefully. This change allows the bot to continue processing other events when it encounters an event without a recognizable type or msgtype, avoiding premature termination and making the handling of diverse event streams more robust. Unprocessable events are also logged for debugging purposes, providing clearer insight into event handling anomalies.

Fixes #5
2024-04-11 07:20:44 +02:00
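As a rough illustration of the pattern this commit describes (a sketch assuming a nio-style `event.source` dict; the exact attribute layout is illustrative, not the bot's actual code):

```python
import logging

logger = logging.getLogger("gptbot")

def processable_events(events):
    """Yield (msgtype, event) pairs, skipping events we cannot classify."""
    for event in events:
        try:
            # Both the attribute access and the nested dict lookup can fail
            # depending on the event class, so catch both error types.
            msgtype = event.source["content"]["msgtype"]
        except (AttributeError, KeyError):
            # Log the unprocessable event instead of aborting the loop.
            logger.debug("Unprocessable event: %r", event)
            continue
        yield msgtype, event
```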
7e64dd5245
feat: Update repository URLs in README
Updated the repository URLs for the matrix-gptbot installation
instructions and cloning directions in the README. The change points to
a new hosting location on `git.private.coffee` from the previous
`kumig.it`. This adjustment ensures users access the most current
version of the code and contributes to a streamlined setup process by
guiding them directly to the updated repository.

This update is crucial for users looking to install the bot or
contribute to its development, ensuring they're interfacing with the
latest version and accurate information.
2024-04-01 07:48:52 +02:00
0289100a2d
feat(changelog): Updated changelog 2024-04-01 07:47:04 +02:00
a9c23ee9c4
feat: implement room state retrieval and conditional avatar update

Introduces a new method `get_state_event` to asynchronously retrieve
state events for a given room and event type, enhancing the bot's
ability to fetch specific room states before performing actions. This
functionality is leveraged to conditionally update room avatars only if
they are not already set, reducing unnecessary updates and improving
efficiency. Additionally, the commit includes minor formatting
adjustments for better code readability.

Refactoring the avatar updating process to assess the current state
before action prevents redundant network calls and aligns with optimal
resource usage practices, contributing to a smoother operation and
potentially reducing the workload on the server.
2024-04-01 07:42:44 +02:00
d5a96aebb6
Version bump 2024-02-18 11:03:13 +01:00
c47f947f80
Refine message filtering in bot event processing
Enhanced event processing in the bot's message retrieval logic to
improve message relevance and accuracy. Changes include accepting all
'gptbot'-prefixed events, handling the 'ignoreolder' command via an
exact match rather than a prefix match, and passing through 'custom'
commands that start with '!'. The default behavior now excludes notices
unless they are explicitly included. This allows for more precise
command interactions and reduces clutter from irrelevant notices.
2024-02-18 10:49:27 +01:00
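The described rules might compose roughly as follows (the function name and signature are illustrative, not the actual message-retrieval implementation):

```python
def is_relevant(event_type: str, body: str, include_notices: bool = False) -> bool:
    """Illustrative filter combining the rules described above."""
    if event_type.startswith("gptbot"):
        return True               # accept all 'gptbot'-prefixed events
    if body.strip() == "!gptbot ignoreolder":
        return True               # exact match, no longer a prefix match
    if body.startswith("!"):
        return True               # pass custom '!' commands through
    if event_type == "m.notice":
        return include_notices    # exclude notices unless explicitly included
    return True
```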
2d564afd97
Fix parameter passing in chat response calls
Standardize the passing of 'messages' argument across various calls to
generate_chat_response method to ensure consistency and prevent
potential bugs in the GPT bot's response generation. The 'model'
parameter in one instance has been corrected to 'original_model' for
proper context loading. These changes improve code clarity and maintain
the intended message flow within the bot's conversation handling.
2024-02-15 18:11:19 +01:00
10b74187eb
Optimize chat model and message handling
Refactored initialization of OpenAI APIs to correct a redundancy and
enhance clarity. Improved content extraction logic for robust handling
of various message types. Enhanced logging and messaging by including
user and room context, facilitating better debugging and user
interaction. Extended `send_message` to support custom message types,
allowing for richer interaction within the chat ecosystem. Updated
hardcoded chat models to leverage newer versions for potentially more
accurate tool overrides. Fixed async method call in recursion handling
to ensure proper response generation. Finally, increased message history
retrieval limit based on the `max_messages` attribute for more effective
conversation context.

Resolves issues with message context and enhances user feedback during
operations.
2024-02-14 17:45:57 +01:00
b65dcc7d83
Finalize release version 0.3.3
Removed the '-dev' suffix from the project version indicating the transition from a development state to the official release of version 0.3.3. This version bump aligns with the completion of features and fixes slated for this iteration.
2024-01-26 09:31:12 +01:00
fc92ac1ebb
Clarify bot's E2E encryption caveat
Update the README to specify that issues with file attachments primarily occur in non-encrypted rooms when the same user operates the bot in both encrypted and non-encrypted rooms. This detail aims to inform users of potential pitfalls more precisely when setting up the bot with end-to-end encryption enabled.
2024-01-26 09:20:30 +01:00
2b33f681cd
Update CHANGELOG 2024-01-26 09:18:40 +01:00
c4e23cb9d3
Fix recursion errors in OpenAI class
Improved the error handling in the OpenAI class to prevent infinite recursion issues by retaining the original chat model during recursive calls. Enhanced logging within the recursion depth check for better debugging and traceability. Ensured consistency in chat responses by passing the initial model reference throughout the entire call stack. This is crucial when fallbacks due to errors or tool usage occur.

Refactored code for clarity and readability, ensuring that any recursion retains the original model and tool parameters. Additionally, proper logging and condition checks now standardize the flow of execution, preventing unintended modifications to the model's state that could lead to incorrect bot behavior.
2024-01-26 09:17:01 +01:00
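A minimal, self-contained sketch of the idea of retaining the original model across recursive fallbacks; the exception class and API call here are stand-ins, not the bot's actual code:

```python
import asyncio

class ToolError(Exception):
    """Stand-in for whatever error triggers a fallback in the real class."""

async def call_api(messages: list, model: str, use_tools: bool) -> str:
    # Placeholder for the actual OpenAI request.
    if use_tools and model == "no-tools-model":
        raise ToolError
    return f"response from {model}"

async def generate(messages: list, model: str,
                   original_model: str | None = None,
                   use_tools: bool = True) -> str:
    # Remember the model the caller originally asked for, so a fallback
    # never permanently switches the conversation to another model.
    original_model = original_model or model
    try:
        return await call_api(messages, model, use_tools)
    except ToolError:
        # Retry without tools, still answering with the original model.
        return await generate(messages, original_model,
                              original_model, use_tools=False)

print(asyncio.run(generate([], "no-tools-model")))
```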
87173ae284
Enable per-room model overrides and clean up code
Introduced the ability to specify and retrieve different OpenAI models on a per-room basis, thereby allowing enhanced customization of the bot's response behavior according to the preferences for each room. Cleaned up code formatting across the bot implementation files for improved readability and maintainability. Additional logic now checks for model overrides when generating responses, ensuring the correct model is used as configured.

Refactors include streamlined database and API initializations and a refined method for processing message formatting to accommodate images, texts, and system messages consistently. This change differentiates default behavior from room-specific configurations, catering to diverse user needs without compromising on default settings.
2024-01-26 09:11:39 +01:00
ad0d694222
Update docs
Updated README to better describe issues related to end-to-end encryption affecting file attachments in certain circumstances. Updated CHANGELOG.
2023-12-29 23:01:03 +01:00
e6bc23e564
Implement recursion check in response generation
Added a safety check to prevent infinite recursion within the response generation function. When `use_tools` is active, the code now inspects the call stack and terminates the process if a certain recursion depth is exceeded. This ensures that the code is robust against potential infinite loops that could block or crash the service. A default threshold is set with a TODO for revisiting the hard-coded limit, and the recursion detection logs the occurrence for easier debugging and maintenance.

Note: Recursion limit handling may require future adjustments to the `allow_override` parameter based on real-world feedback or testing.
2023-12-29 22:56:22 +01:00
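One possible shape for such a stack-based check (a sketch under assumed names; the commit itself leaves the actual threshold as a TODO):

```python
import inspect
import logging

logger = logging.getLogger("gptbot")
MAX_DEPTH = 20  # hard-coded threshold, mirroring the TODO in the commit

def recursion_exceeded() -> bool:
    """Return True if the calling function appears too often on the stack."""
    frames = inspect.stack()
    caller = frames[1].function  # the function that invoked this check
    depth = sum(1 for frame in frames if frame.function == caller)
    if depth > MAX_DEPTH:
        # Log the occurrence so runaway recursion is easy to diagnose.
        logger.warning("Recursion depth %d exceeded in %s", depth, caller)
        return True
    return False
```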
0acc1456f9
Enable tool emulation and improved error logging
Introduced a configuration option to emulate tool usage in models that do not natively support tools, facilitating the use of such functionality without direct model support. This should benefit users aiming to leverage tool-based features without relying on specific AI models. Additionally, enhanced error logging in the GPTBot class by including traceback details, aiding in debugging and incident resolution.

- Added `EmulateTools` option in the `config.dist.ini` for flexible tool handling.
- Enriched error logging with stack traces in `bot.py` for better insight during failures.
2023-12-29 22:46:19 +01:00
c7e448126d
Enhanced debug logging in bot initialization
Introduced additional debug log entries in the `GPTBot` class to provide clarity on the initial sync and callback setup process. This helps with monitoring and troubleshooting during the early stages of bot deployment, making it easier to pinpoint issues around bot startup and room joining behavior.
Bumped project version to 0.3.3-dev to signal ongoing development.
2023-12-29 20:37:42 +01:00
f206aa8f0f
Warn against using E2E encryption due to bugs
Updated README to caution users about the current issues with end-to-end encryption, specifically that it can disrupt file uploads and downloads. The aim is to prevent user frustration and potential data loss until a fix is implemented.
2023-12-29 20:30:46 +01:00
63e52169a3
Fix file handling in encrypted rooms and update dependencies
Resolved an issue that prevented the bot from responding when files were uploaded to encrypted rooms by implementing a workaround. The bot now tries to generate text from uploaded files and logs errors without interrupting the message flow. Upgraded the Pantalaimon dependency to ensure compatibility. Also, refined the message processing logic to handle different message types correctly and made the download_file method asynchronous to match the matrix client's expected behavior. Additionally, updated the changelog and bumped the project version to reflect these fixes and improvements.

Known issues have been documented, including a limitation when using Pantalaimon where the bot cannot download/use files uploaded to encrypted rooms.
2023-12-14 18:10:12 +01:00
e1630795ba
Update changelog 2023-12-07 19:44:19 +01:00
2b7813f715
Make "gptbot -v" actually output the correct version 2023-12-07 19:43:30 +01:00
04662fc1f3
Internal version bump 2023-12-07 19:43:17 +01:00
62982ce23b
Remove uploading of keys 2023-12-07 16:39:04 +01:00
d4cf70b273
Fix handling of KeysUploadError 2023-12-07 16:38:47 +01:00
2211edc25a
Fix missing " in pyproject.toml 2023-12-07 16:36:01 +01:00
f6b15ea6b9
version bump, changelog 2023-12-07 16:33:26 +01:00
ab62ecb877
fix: incorrect variable name use in newroom tool 2023-12-07 16:31:19 +01:00
11f11a369c
Enhanced GPTbot's capabilities and installation method
Upgraded bot features to interpret and respond to text, image, and voice prompts in Matrix rooms using advanced OpenAI models, including vision preview and text-to-speech. Streamlined installation process with bot now available via PyPI, simplifying setup and extending accessibility. Eliminated planned features section, signaling a shift towards realized functionalities over prospective development.

Configured Pantalaimon as an optional dependency to enable bot use in E2EE rooms while maintaining compatibility with non-encrypted rooms. Removed trackingmore dependency, indicating a refinement in the feature set towards core functionalities. Version bumped to 0.3.0, signifying major enhancements over previous iteration.
2023-12-05 14:50:37 +01:00
35f51e1201
Added systemd service for Pantalaimon integration
Introduced a new systemd service configuration for GPTbot to ensure Pantalaimon starts as a background process on system boot, maintaining persistent Matrix encryption handling. Ensures seamless restarts and network dependency management for improved reliability.
2023-12-05 14:18:18 +01:00
57b68ef3e3
Enhance Pantalaimon integration and config
Integrated Pantalaimon support with updated configuration instructions and examples, facilitating secure communication when using the Matrix homeserver. The .gitignore is now extended to exclude a Pantalaimon configuration file, preventing sensitive information from accidental commits. Removed encryption callbacks and related functions as the application leverages Pantalaimon for E2EE, simplifying the codebase and shifting encryption responsibilities externally. Streamlined dependency management by removing the requirements.txt in favor of pyproject.toml, aligning with modern Python practices. This change overall improves security handling and eases future maintenance.
2023-12-05 10:09:14 +01:00
4eb33a3c0a
Fix allowed_users property in GPTBot
Resolved a syntax error in the allowed_users property within the GPTBot class by adding the missing 'self' parameter. This correction ensures the proper functioning of the property method, enabling the bot to correctly retrieve the list of users authorized to use it.
2023-12-05 08:36:21 +01:00
1319371446
Handle mp3 attachments as audio messages
Extended the condition for the audio message handling in the chatbot to recognize MP3 audio files sent as file attachments. This ensures that MP3 files will be properly processed as audio messages, improving the bot's media handling capabilities. This is just a test at this point, and may be rolled back.
2023-12-05 08:18:39 +01:00
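The extended condition amounts to this pattern (`m.audio` and `m.file` are standard Matrix msgtypes; the helper itself is illustrative):

```python
def is_audio_message(msgtype: str, mimetype: str | None) -> bool:
    """Treat native audio events and MP3 file uploads as audio."""
    if msgtype == "m.audio":
        return True
    # Extended condition: an m.file attachment whose MIME type is MP3
    # is routed to the audio handling as well.
    return msgtype == "m.file" and mimetype == "audio/mpeg"
```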
8a77c326aa
Refactor bot properties to be dynamic
Migrated several hardcoded bot configuration settings to dynamic properties with fallbacks, enhancing flexibility and customization. The properties now read from a configuration file, allowing changes without code modification. Simplified the instantiation logic by removing immediate attribute setting in favor of lazy-loaded properties. Additionally, prepared to segregate OpenAI-related settings into a dedicated class (noted with TODO comments).

Note: Verify the presence of necessary configuration parameters or include defaults to prevent potential runtime issues.
2023-12-04 18:44:19 +01:00
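A sketch of the lazy-property pattern described here; the section name and the second key are assumptions for illustration, while `DisplayName` and `LogLevel` appear in the sample config:

```python
from configparser import ConfigParser

class Bot:
    def __init__(self, config: ConfigParser):
        self.config = config  # values are read lazily, not copied at init time

    @property
    def display_name(self) -> str:
        # The fallback keeps the bot working if the key is missing from
        # config.ini, addressing the note about absent parameters.
        return self.config["GPTBot"].get("DisplayName", "GPTBot")

    @property
    def log_level(self) -> str:
        return self.config["GPTBot"].get("LogLevel", "info")
```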
31f001057a
Enhance Wikipedia tool flexibility
Added options to extract specific info and summarize content from Wikipedia pages within the gptbot's Wikipedia tool. The 'extract' option enables partial retrieval of page data based on a user-defined string, leveraging the bot's existing chat API for extraction. The 'summarize' option allows users to get concise versions of articles, again utilizing the bot's chat capabilities. These additions provide users with more granular control over the information they receive, potentially reducing response clutter and focusing on user-specified interests.
2023-12-04 18:07:57 +01:00
c1986203e8
Ensure consistent user ID typing and improve logging
Cast user objects to strings to standardize ID handling across API calls. Enhanced logging statements now include user and room context, providing better traceability for response generation. Also refined error handling for API token limits: when a max-token error occurs, tool roles are removed from the messages and the request is reattempted without tool assistance, allowing more graceful response generation when constraints are hit.
2023-12-04 18:07:19 +01:00
eccca2a624
Handle max tokens exception in chat response generation
Introduced error handling for the 'max_tokens' exception during chat response generation. In cases where the maximum token count is exceeded, the bot now falls back to a no-tools response, avoiding the halt of conversation. This ensures a smoother chat experience and persistence in responding, even when a limit is reached. Any other exceptions will still be raised as before, maintaining error transparency.
2023-11-30 14:18:47 +01:00
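Sketched in isolation, the fallback might look like this; detecting the limit via the exception text is an assumption, and `generate` is a stand-in for the bot's response call:

```python
async def safe_response(messages: list, generate) -> str:
    """Fall back to a no-tools completion when the token limit is hit."""
    try:
        return await generate(messages, use_tools=True)
    except Exception as e:
        # Only the max-token failure triggers the fallback; any other
        # exception is re-raised to keep errors transparent.
        if "max_tokens" in str(e):
            return await generate(messages, use_tools=False)
        raise
```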
670667567e
Handle key upload errors and refine tool flow responses
- Introduce error handling for the keys upload process, logging failures to assist with troubleshooting.
- Improve exception handling in the OpenAI class by returning a more informative response based on the exception arguments if available.
- Replace a return statement in the Newroom tool with an exception raise to standardize tool action termination and provide clearer flow control.

Resolves issue with silent key upload failures. Refines response and control flow for better clarity and debugging.
2023-11-29 20:31:07 +01:00
75360d040a
Introduce Async Enhancements and New Room Tool
Enabled asynchronous key upload in the roommember callback to improve efficiency. Fixed the chat response generation by properly referencing the event sender rather than the room ID, aligning user context with chat messages. Corrected the user parameter misuse in the OpenAI class to utilize the room ID. Extended the toolkit to include a 'newroom' feature for creating and setting up new Matrix rooms, thereby enhancing bot functionality.

This commit significantly improves bot response times and contextual accuracy while interacting within rooms and adds a valuable feature for users to create rooms seamlessly.
2023-11-29 15:48:56 +01:00
ad600faf4b
Refine logging and add image description feature
Enhanced the speech generation logging to display the word count of the input text instead of the full text. This change prioritizes user privacy and improves log readability. Implemented a new feature to generate descriptions for images within a conversation, expanding the bot's capabilities. Also refactored the `BaseTool` class to access arguments safely through the `.get` method and to include `messages` by default, ensuring graceful handling of missing arguments.
2023-11-29 14:53:19 +01:00
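A minimal sketch of the safer argument handling; attribute names other than `messages` are illustrative:

```python
class BaseTool:
    def __init__(self, **kwargs):
        # .get() returns a default instead of raising KeyError, so a tool
        # invoked without optional arguments degrades gracefully.
        self.bot = kwargs.get("bot")
        self.room = kwargs.get("room")
        self.user = kwargs.get("user")
        self.messages = kwargs.get("messages", [])  # now included by default

    async def run(self):
        raise NotImplementedError
```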
03768b5b27
Improve speech-to-text audio handling
Enhanced the audio processing in speech-to-text conversion by converting the input audio to MP3 format before transcription. The logging now reflects the word count of the recognized text, providing clearer insight into the output. This should improve compatibility with the transcription service and result in more accurate transcriptions.
2023-11-29 12:30:26 +01:00
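With pydub (an optional dependency of the project) the conversion step could look like this sketch; note that pydub relies on ffmpeg being available, which the project's Dockerfile installs:

```python
from io import BytesIO

from pydub import AudioSegment  # shipped via the optional 'openai' extra

def to_mp3(audio: bytes) -> bytes:
    """Convert incoming audio of any supported format to MP3 bytes."""
    segment = AudioSegment.from_file(BytesIO(audio))  # format auto-detected
    out = BytesIO()
    segment.export(out, format="mp3")  # requires ffmpeg on the system
    return out.getvalue()
```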
eba650188b
Add datetime tool to gptbot
Introduced a new 'datetime' tool to the gptbot, which provides the current date and time in UTC. This enhancement caters to the need for time-related queries within the bot's functionality, expanding its utility for users dealing with time-sensitive information.
2023-11-29 12:06:58 +01:00
e1782d1034
Disable test and encryption callbacks
Temporarily commented out callbacks for test responses, event handling, and encrypted messages to focus on core functionality stabilization. This change aims to simplify the debugging process and enhance the reliability of active features during the development phase. Encryption handling will be reintroduced after refining base features.
2023-11-29 11:43:09 +01:00
a6fca53b51
Improve Wikipedia error messages
Refined the exception details in the Wikipedia tool to include the search query when no results are found, enhancing the clarity of error outputs for end-users. This change helps in debugging by indicating the exact query that led to a no-results situation. Additionally, the existing failure-to-connect error message was left as-is, maintaining accurate API connectivity diagnostics.
2023-11-29 11:33:24 +01:00
c92828def1
Optimize message concatenation, add Wikipedia tool
Refactor the message concatenation logic within the chat response to ensure the original final message remains intact at the end of the sequence. Introduce a new 'Wikipedia' tool to the bot's capabilities, allowing users to query and retrieve information from Wikipedia directly through the bot's interface. This enhancement aligns with efforts to provide a more informative and interactive user experience.
2023-11-29 11:30:54 +01:00
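As a sketch, keeping the triggering message last might look like this simplified stand-in for the actual concatenation logic:

```python
def build_context(history: list[dict], final_message: dict) -> list[dict]:
    """Concatenate context so the triggering message stays at the end."""
    # Naive concatenation can bury the user's final message between
    # context entries; filtering it out of the history and re-appending
    # it keeps it intact as the last entry of the sequence.
    context = [m for m in history if m != final_message]
    return context + [final_message]
```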
3ee7505aa5
Remove debug URL print from weather tool
Eliminated a print statement that was outputting the API request URL in the weather fetching tool, ensuring sensitive key information is not displayed in logs. This increases security by preventing potential API key exposure.
2023-11-29 09:02:47 +01:00
36e34d5fcf
Removed traceback print from exception handling in bot's tool call
Eliminated the printing of traceback in the exception handling block when the GPTBot encounters an error calling a tool. This change cleans up the logs by removing a redundant error output since relevant information is already being logged. The update aims to enhance the clarity and readability of the logs in case of tool calling errors.
2023-11-29 08:33:41 +01:00
4d64593e89
Enhanced tool handling and image gen
Refactored `call_tool` to pass `room` and `user` for improved context during tool execution.
Introduced `Handover` and `StopProcessing` exceptions to better control the flow when tool calls hand over between tools and text generation, or need to stop processing entirely.
Enabled flexibility with the `room` param in sending images and files, now accepting both `MatrixRoom` and `str` types.
Updated `generate_chat_response` in the OpenAI class to incorporate a tool usage flag and leaner message handling for tool responses.
Introduced an `orientation` option for image generation to specify landscape or portrait.
Implemented two new tool classes, `Imagine` and `Imagedescription`, to streamline image creation and description.

The improved error handling and additional granularity in tool invocation make the bot behave more predictably and transparently, particularly when interacting with generative AI and handling dialogue. The flexibility in both response and image generation caters to a wider range of user inputs and scenarios, ultimately enhancing the bot's user experience.
2023-11-29 08:33:20 +01:00
54dd80ed50
feat: Enable calling tools in chat completion
This commit adds functionality to call tools within the chat completion model. By introducing the `call_tool()` method in the `GPTBot` class, tools can now be invoked whenever the model issues a tool call. The commit also includes the necessary changes in the `OpenAI` class to handle tool calls during response generation. Additionally, new tool classes for geocoding and dice rolling have been implemented. This enhancement expands the capabilities of the bot by allowing users to leverage various tools directly within the chat conversation.
2023-11-28 18:15:21 +01:00
155ea68e7a
feat: Add voice input and output support
This change adds support for voice input and output to the GPTbot. Users can enable this feature using the new `!gptbot roomsettings` command. Voice input and output are currently supported via OpenAI's TTS and Whisper models. However, note that voice input may be unreliable at the moment. This enhancement expands the capabilities of the bot, allowing users to interact with it using their voice. This addresses the need for a more user-friendly and natural way of communication.
2023-11-26 07:58:10 +01:00
554d3d8aa0
Refactor OpenAI class methods to retrieve assistant and thread IDs. 2023-11-19 17:02:40 +01:00
c3fe074b1e
Refactor OpenAI class to use asynchronous room check when generating responses.
- Replaced the synchronous room check with an asynchronous one using `await`.
- Updated the code to use the `await` keyword before calling `self.room_uses_assistant(room)`.
- This change enables the code to generate assistant responses asynchronously.
2023-11-19 16:15:28 +01:00
65bf724a0b
feat: Add method to check if room uses assistant
This commit adds a new method `room_uses_assistant` to the OpenAI class. This method allows checking whether a given room uses an assistant. It uses the `room_settings` table in the database to determine if the specified room has the `openai_assistant` setting.
2023-11-19 16:06:59 +01:00
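A sketch of such a lookup against the `room_settings` table; the column names used here are assumptions:

```python
import sqlite3
from contextlib import closing

def room_uses_assistant(db: sqlite3.Connection, room_id: str) -> bool:
    """Check room_settings for an openai_assistant entry for this room."""
    with closing(db.cursor()) as cursor:
        cursor.execute(
            "SELECT value FROM room_settings WHERE room_id = ? AND setting = ?",
            (room_id, "openai_assistant"),
        )
        # Any matching row means the room is configured to use an assistant.
        return cursor.fetchone() is not None
```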
54f56c1b1a
Add launch configuration for Python module 2023-11-19 15:38:43 +01:00
fbbe82a1fc
Refactor image generation code with dynamic size and model
The commit modifies the image generation code in the OpenAI class. The size and model of the generated image can now be dynamically set based on the provided prompt. The code has been refactored to handle different image sizes and models correctly.
2023-11-19 15:38:34 +01:00
14da88de8b
feat: Improve image generation quality selection
The code change adds a conditional statement to select the image generation quality based on the selected image model.
2023-11-19 15:26:09 +01:00
474af54ae1
Start assistant implementation to support generating responses using an assistant if the room uses one. Also, add methods to create and set up an assistant for a room. 2023-11-19 15:24:22 +01:00
2269018e92
Work towards encryption support 2023-11-11 17:22:43 +01:00
09393b4216
Remove debugging output that is no longer needed 2023-11-11 14:54:40 +01:00
48f13fcf7f
Fix truncation calculation 2023-11-11 13:32:31 +01:00
0317b2f5aa
Improve handling of event and other messages for chat response
Remove the limit on the number of attached images for image-aware chat completions
2023-11-11 13:26:21 +01:00
4113a02232
Add image input on models that support it, fix some bugs, bump required OpenAI version 2023-11-11 12:27:19 +01:00
c238da9b99
openai: Fix image_model assignment, add method to check if the used chat model supports image messages 2023-11-11 09:51:28 +01:00
1b290c6b92
Revert last commit 2023-11-09 12:31:46 +01:00
37a1e6a85c
Fix handling of commands in GPTBot class 2023-11-09 12:30:12 +01:00
72340095f9
Fix bot command prefix recognition and handle ignoring bot commands.
- Fixed bot command prefix recognition to include prefixes starting with an asterisk (= edited messages)
- Added handling of ignoring bot commands in the '_last_n_messages' method.
2023-11-09 12:29:04 +01:00
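The extended prefix check corresponds to this pattern, which also appears in the message callback diff further down:

```python
def is_bot_command(body: str) -> bool:
    """Recognize bot commands, including edited messages."""
    # Matrix clients prefix the fallback body of an edited message with
    # "* ", so "* !gptbot ..." must be accepted alongside "!gptbot ...".
    return body.startswith("!gptbot") or body.startswith("* !gptbot")
```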
2e6c07eb22
Fix issue in OpenAI class: set the quality parameter based on the model type. 2023-11-07 15:55:25 +01:00
5a1a3733c5
Update OpenAI configuration with ImageModel and related changes
Previously, there was no option to specify the model for image generation in the OpenAI configuration. This commit adds a new option called "ImageModel" where you can specify the desired model. The default value for this option is "dall-e-2".

In the `GPTBot` class, the OpenAI object is now initialized with the `ImageModel` option if it is provided in the configuration. This allows the bot to use the specified image generation model in addition to the chat model.

Furthermore, in the `OpenAI` class, the `image_api` attribute has been renamed to `image_model` to reflect its purpose more accurately. The default value has also been updated to "dall-e-2" to align with the new configuration option.

This commit ensures that the OpenAI configuration is up-to-date and allows users to specify the desired image generation model.
2023-11-07 14:02:10 +01:00
d2c6682faa
Version bump to 0.1.1 2023-06-01 06:34:09 +00:00
8147a89f69
Merge branch 'thebalaa-main' 2023-06-01 06:28:22 +00:00
27078243a8
Merge branch 'thebalaa-main' 2023-06-01 06:24:45 +00:00
58 changed files with 2846 additions and 650 deletions

View file

@@ -0,0 +1,33 @@
name: Docker CI/CD
on:
push:
tags:
- "*"
jobs:
docker:
name: Docker Build and Push to Docker Hub
container:
image: node:20-bookworm
steps:
- name: Install dependencies
run: |
apt update
apt install -y docker.io
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Build and push to Docker Hub
uses: docker/build-push-action@v5
with:
push: true
tags: |
kumitterer/matrix-gptbot:latest
kumitterer/matrix-gptbot:${{ env.GITHUB_REF_NAME }}

View file

@@ -0,0 +1,54 @@
name: Python Package CI/CD
on:
workflow_dispatch:
push:
tags:
- "*"
jobs:
setup:
name: Setup and Test
container:
image: node:20-bookworm
steps:
- name: Check out code
uses: actions/checkout@v4
- name: Install dependencies
run: |
apt update
apt install -y python3 python3-venv
- name: Set up Python environment
run: |
python3 -V
python3 -m venv venv
. ./venv/bin/activate
pip install -U pip
pip install .[all]
publish:
name: Publish to PyPI
container:
image: node:20-bookworm
steps:
- name: Check out code
uses: actions/checkout@v4
- name: Install dependencies
run: |
apt update
apt install -y python3 python3-venv
- name: Publish to PyPI
run: |
python3 -m venv venv
. ./venv/bin/activate
pip install -U hatchling twine build
python -m build .
python -m twine upload --username __token__ --password ${PYPI_TOKEN} dist/*
env:
PYPI_TOKEN: ${{ secrets.PYPI_TOKEN }}

6
.github/dependabot.yml vendored Normal file
View file

@@ -0,0 +1,6 @@
version: 2
updates:
- package-ecosystem: "pip"
directory: "/"
schedule:
interval: "daily"

5
.gitignore vendored
View file

@@ -5,4 +5,7 @@ config.ini
venv/
*.pyc
__pycache__/
*.bak
*.bak
dist/
pantalaimon.conf
.ruff_cache/

15
.vscode/launch.json vendored Normal file
View file

@@ -0,0 +1,15 @@
{
// Use IntelliSense to learn about possible attributes.
// Hover to view descriptions of existing attributes.
// For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
"version": "0.2.0",
"configurations": [
{
"name": "Python: Module",
"type": "python",
"request": "launch",
"module": "gptbot",
"justMyCode": true
}
]
}

79
CHANGELOG.md Normal file
View file

@@ -0,0 +1,79 @@
# Changelog
### 0.3.14 (2024-05-21)
- Fixed issue in handling of login credentials, added error handling for login failures
### 0.3.13 (2024-05-20)
- **Breaking Change**: The `ForceTools` configuration option behavior has changed. Instead of using a separate model for tools, the bot will now try to use the default chat model for tool requests, even if that model is not known to support tools.
- Added `ToolModel` to OpenAI configuration to allow specifying a separate model for tool requests
- Automatically resize context images to a default maximum of 2000x768 pixels before sending them to the AI model
### 0.3.12 (2024-05-17)
- Added `ForceVision` to OpenAI configuration to allow third-party models to be used for image recognition
- Added some missing properties to `OpenAI` class
### 0.3.11 (2024-05-17)
- Refactoring of AI provider handling in preparation for multiple AI providers: Introduced a `BaseAI` class that all AI providers must inherit from
- Added support for temperature, top_p, frequency_penalty, and presence_penalty in `AllowedUsers`
- Introduced ruff as a development dependency for linting and applied some linting fixes
- Fixed `gptbot` command line tool
- Changed default chat model to `gpt-4o`
- Changed default image generation model to `dall-e-3`
- Removed currently unused sections from `config.dist.ini`
- Changed provided Pantalaimon config file to not use a key ring by default
- Prevent bot from crashing when an unneeded dependency is missing
### 0.3.10 (2024-05-16)
- Add support for specifying room IDs in `AllowedUsers`
- Minor fixes
### 0.3.9 (2024-04-23)
- Add Docker support for running the bot in a container
- Add TrackingMore dependency to pyproject.toml
- Replace deprecated `pkg_resources` with `importlib.metadata`
- Allow password-based login on first login
### 0.3.7 / 0.3.8 (2024-04-15)
- Changes to URLs in pyproject.toml
- Migrated build pipeline to Forgejo Actions
### 0.3.6 (2024-04-11)
- Fix issue where message type detection would fail for some messages (cece8cfb24e6f2e98d80d233b688c3e2c0ff05ae)
### 0.3.5
- Only set room avatar if it is not already set (a9c23ee9c42d0a741a7eb485315e3e2d0a526725)
### 0.3.4 (2024-02-18)
- Optimize chat model and message handling (10b74187eb43bca516e2a469b69be1dbc9496408)
- Fix parameter passing in chat response calls (2d564afd979e7bc9eee8204450254c9f86b663b5)
- Refine message filtering in bot event processing (c47f947f80f79a443bbd622833662e3122b121ef)
### 0.3.3 (2024-01-26)
- Implement recursion check in response generation (e6bc23e564e51aa149432fc67ce381a9260ee5f5)
- Implement tool emulation for models without tool support (0acc1456f9e4efa09e799f6ce2ec9a31f439fe4a)
- Allow selection of chat model by room (87173ae284957f66594e66166508e4e3bd60c26b)
### 0.3.2 (2023-12-14)
- Removed key upload from room event handler
- Fixed output of `python -m gptbot -v` to display currently installed version
- Workaround for bug preventing bot from responding when files are uploaded to an encrypted room
#### Known Issues
- When using Pantalaimon: Bot is unable to download/use files uploaded to unencrypted rooms
### 0.3.1 (2023-12-07)
- Fixed issue in newroom task causing it to be called over and over again

14
Dockerfile Normal file
View file

@@ -0,0 +1,14 @@
FROM python:3.12-slim
WORKDIR /app
COPY src/ /app/src
COPY pyproject.toml /app
COPY README.md /app
COPY LICENSE /app
RUN apt update
RUN apt install -y build-essential libpython3-dev ffmpeg
RUN pip install .[all]
RUN pip install 'future==1.0.0'
CMD ["python", "-m", "gptbot"]

View file

@@ -1,4 +1,5 @@
Copyright (c) 2023 Kumi Mitterer <gptbot@kumi.email>
Copyright (c) 2023-2024 Kumi Mitterer <gptbot@kumi.email>, Private.coffee Team
<support@private.coffee>
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal

193
README.md
View file

@@ -1,91 +1,151 @@
# GPTbot
GPTbot is a simple bot that uses different APIs to generate responses to
messages in a Matrix room.
[![Support Private.coffee!](https://shields.private.coffee/badge/private.coffee-support%20us!-pink?logo=coffeescript)](https://private.coffee)
[![Matrix](https://shields.private.coffee/badge/Matrix-join%20us!-blue?logo=matrix)](https://matrix.to/#/#matrix-gptbot:private.coffee)
[![PyPI](https://shields.private.coffee/pypi/v/matrix-gptbot)](https://pypi.org/project/matrix-gptbot/)
[![PyPI - Python Version](https://shields.private.coffee/pypi/pyversions/matrix-gptbot)](https://pypi.org/project/matrix-gptbot/)
[![PyPI - License](https://shields.private.coffee/pypi/l/matrix-gptbot)](https://pypi.org/project/matrix-gptbot/)
[![Latest Git Commit](https://shields.private.coffee/gitea/last-commit/privatecoffee/matrix-gptbot?gitea_url=https://git.private.coffee)](https://git.private.coffee/privatecoffee/matrix-gptbot)
It is called GPTbot because it was originally intended to only use GPT-3 to
generate responses. However, it supports other services/APIs, and I will
probably add more in the future, so the name is a bit misleading.
GPTbot is a simple bot that uses different APIs to generate responses to
messages in a Matrix room.
## Features
- AI-generated responses to messages in a Matrix room (chatbot)
- Currently supports OpenAI (tested with `gpt-3.5-turbo` and `gpt-4`)
- AI-generated pictures via the `!gptbot imagine` command
- Currently supports OpenAI (DALL-E)
- AI-generated responses to text, image and voice messages in a Matrix room
(chatbot)
- Currently supports OpenAI (`gpt-3.5-turbo` and `gpt-4`, `gpt-4o`, `whisper`
and `tts`) and compatible APIs (e.g. `ollama`)
- Able to generate pictures using OpenAI `dall-e-2`/`dall-e-3` models
- Able to browse the web to find information
- Able to use OpenWeatherMap to get weather information (requires separate
API key)
- Even able to roll dice!
- Mathematical calculations via the `!gptbot calculate` command
- Currently supports WolframAlpha
- Automatic classification of messages (for `imagine`, `calculate`, etc.)
- Beta feature, see Usage section for details
- Currently supports WolframAlpha (requires separate API key)
- Really useful commands like `!gptbot help` and `!gptbot coin`
- sqlite3 database to store room settings
## Planned features
- End-to-end encryption support (partly implemented, but not yet working)
## Installation
To run the bot, you will need Python 3.10 or newer.
To run the bot, you will need Python 3.10 or newer.
The bot has been tested with Python 3.11 on Arch, but should work with any
The bot has been tested with Python 3.12 on Arch, but should work with any
current version, and should not require any special dependencies or operating
system features.
### Production
The easiest way to install the bot is to use pip to install it directly from
[its Git repository](https://kumig.it/kumitterer/matrix-gptbot/):
#### PyPI
The recommended way to install the bot is to use pip to install it from PyPI.
```shell
# If desired, activate a venv first
# Recommended: activate a venv first
python -m venv venv
. venv/bin/activate
# Install the bot
pip install git+https://kumig.it/kumitterer/matrix-gptbot.git
pip install matrix-gptbot[all]
```
This will install the bot from the main branch and all required dependencies.
A release to PyPI is planned, but not yet available.
This will install the latest release of the bot and all required dependencies
for all available features.
### Development
You can also use `pip install git+https://git.private.coffee/privatecoffee/matrix-gptbot.git`
to install the latest version from the Git repository.
Clone the repository and install the requirements to a virtual environment.
#### Docker
A `docker-compose.yml` file is provided that you can use to run the bot with
Docker Compose. You will need to create a `config.ini` file as described in the
`Running` section.
```shell
# Clone the repository
git clone https://git.private.coffee/privatecoffee/matrix-gptbot.git
cd matrix-gptbot
git clone https://kumig.it/kumitterer/matrix-gptbot.git
# Create a config file
cp config.dist.ini config.ini
# Edit the config file to your needs
# Initialize the database file
sqlite3 database.db "SELECT 1"
# Optionally, create Pantalaimon config
cp contrib/pantalaimon.example.conf pantalaimon.conf
# Edit the Pantalaimon config file to your needs
# Update your homeserver URL in the bot's config.ini to point to Pantalaimon (probably http://pantalaimon:8009 if you used the provided example config)
# You can use `fetch_access_token.py` to get an access token for the bot
# Start the bot
docker-compose up -d
```
#### End-to-end encryption
WARNING: Using end-to-end encryption seems to sometimes cause problems with
file attachments, especially in rooms that are not encrypted, if the same
user also uses the bot in encrypted rooms.
The bot itself does not implement end-to-end encryption. However, it can be
used in conjunction with [pantalaimon](https://github.com/matrix-org/pantalaimon).
You first have to log in to your homeserver using `python fetch_access_token.py`,
and can then use the returned access token in your bot's `config.ini` file.
Make sure to also point the bot to your pantalaimon instance by setting
`homeserver` to your pantalaimon instance instead of directly to your
homeserver in your `config.ini`.
Note: If you don't use pantalaimon, the bot will still work, but it will not
be able to decrypt or encrypt messages. This means that you cannot use it in
rooms with end-to-end encryption enabled.
### Development
Clone the repository and install the requirements to a virtual environment.
```shell
# Clone the repository
git clone https://git.private.coffee/privatecoffee/matrix-gptbot.git
cd matrix-gptbot
# If desired, activate a venv first
python -m venv venv
. venv/bin/activate
# Install the bot in editable mode
pip install -e .[dev]
# Go to the bot directory and start working
cd src/gptbot
```
Of course, you can also fork the repository on [GitHub](https://github.com/kumitterer/matrix-gptbot/)
and work on your own copy.
### Configuration
#### Repository policy
The bot requires a configuration file to be present in the working directory.
Copy the provided `config.dist.ini` to `config.ini` and edit it to your needs.
Generally, the `main` branch is considered unstable and should not be used in
production. Instead, use the latest release tag. The `main` branch is used for
development and may contain breaking changes at any time.
For development, a feature branch should be created from `main` and merged back
into `main` with a pull request. The pull request will be reviewed and tested
before merging.
## Running
The bot can be run with `python -m gptbot`. If required, activate a venv first.
The bot requires a configuration file to be present in the working directory.
Copy the provided `config.dist.ini` to `config.ini` and edit it to your needs.
The bot can then be run with `python -m gptbot`. If required, activate a venv
first.
You may want to run the bot in a screen or tmux session, or use a process
manager like systemd. The repository contains a sample systemd service file
@@ -95,6 +155,9 @@ adjust the paths in the file to match your setup, then copy it to
`systemctl start gptbot` and enable it to start automatically on boot with
`systemctl enable gptbot`.
Analogously, you can use the provided `gptbot-pantalaimon.service` file to run
pantalaimon as a systemd service.
## Usage
Once it is running, just invite the bot to a room and it will start responding
@@ -115,35 +178,42 @@ With this setting, the bot will only be triggered if a message begins with
bot to generate a response to the message `Hello, how are you?`. The bot will
still get previous messages in the room as context for generating the response.
### Tools
The bot has a selection of tools at its disposal that it will automatically use
to generate responses. For example, if you send a message like "Draw me a
picture of a cat", the bot will automatically use DALL-E to generate an image
of a cat.
Note that this only works if the bot is configured to use a model that supports
tools. This currently is only the case for OpenAI's `gpt-3.5-turbo` model. If
you wish to use `gpt-4` instead, you can set the `ForceTools` option in the
`[OpenAI]` section of the config file to `1`. This will cause the bot to use
`gpt-3.5-turbo` for tool generation and `gpt-4` for generating the final text
response.
Similarly, it will attempt to use the `gpt-4-vision-preview` model to "read"
the contents of images if a non-vision model is used.
### Commands
There are a few commands that you can use to interact with the bot. For example,
if you want to generate an image from a text prompt, you can use the
`!gptbot imagine` command. For example, `!gptbot imagine a cat` will cause the
bot to generate an image of a cat.
There are a few commands that you can use to explicitly call a certain feature
of the bot. For example, if you want to generate an image from a text prompt,
you can use the `!gptbot imagine` command. For example, `!gptbot imagine a cat`
will cause the bot to generate an image of a cat.
To learn more about the available commands, `!gptbot help` will print a list of
available commands.
### Automatic classification
### Voice input and output
As a beta feature, the bot can automatically classify messages and use the
appropriate API to generate a response. For example, if you send a message
like "Draw me a picture of a cat", the bot will automatically use the
`imagine` command to generate an image of a cat.
The bot supports voice input and output, but it is disabled by default. To
enable it, use the `!gptbot roomsettings` command to change the settings for
the current room. `!gptbot roomsettings stt true` will enable voice input using
OpenAI's `whisper` model, and `!gptbot roomsettings tts true` will enable voice
output using the `tts` model.
This feature is disabled by default. To enable it, use the `!gptbot roomsettings`
command to change the settings for the current room. `!gptbot roomsettings classification true`
will enable automatic classification, and `!gptbot roomsettings classification false`
will disable it again.
Note that this feature is still in beta and may not work as expected. You can
always use the commands manually if the automatic classification doesn't work
for you (including `!gptbot chat` for a regular chat message).
Also note that this feature conflicts with the `always_reply false` setting -
or rather, it doesn't make sense then because you already have to explicitly
specify the command to use.
Note that this currently only works for audio messages and .mp3 file uploads.
## Troubleshooting
@@ -152,10 +222,12 @@ specify the command to use.
First of all, make sure that the bot is actually running. (Okay, that's not
really troubleshooting, but it's a good start.)
If the bot is running, check the logs. The first few lines should contain
"Starting bot...", "Syncing..." and "Bot started". If you don't see these
lines, something went wrong during startup. Fortunately, the logs should
contain more information about what went wrong.
If the bot is running, check the logs, these should tell you what is going on.
For example, if the bot is showing an error message like "Timed out, retrying",
it is unable to reach your homeserver. In this case, check your homeserver URL
and make sure that the bot can reach it. If you are using Pantalaimon, make
sure that the bot is pointed to Pantalaimon and not directly to your
homeserver, and that Pantalaimon is running and reachable.
If you need help figuring out what went wrong, feel free to open an issue.
@@ -181,4 +253,5 @@ please check the logs and open an issue if you can't figure out what's going on.
## License
This project is licensed under the terms of the MIT license. See the [LICENSE](LICENSE) file for details.
This project is licensed under the terms of the MIT license. See the [LICENSE](LICENSE)
file for details.

View file

@@ -45,10 +45,11 @@ Operator = Contact details not set
# DisplayName = GPTBot
# A list of allowed users
# If not defined, everyone is allowed to use the bot
# If not defined, everyone is allowed to use the bot (so you should really define this)
# Use the "*:homeserver.matrix" syntax to allow everyone on a given homeserver
# Alternatively, you can also specify a room ID to allow everyone in the room to use the bot within that room
#
# AllowedUsers = ["*:matrix.local"]
# AllowedUsers = ["*:matrix.local", "!roomid:matrix.local"]
# Minimum level of log messages that should be printed
# Available log levels in ascending order: trace, debug, info, warning, error, critical
@@ -62,16 +63,20 @@ LogLevel = info
# The Chat Completion model you want to use.
#
# Unless you are in the GPT-4 beta (if you don't know - you aren't),
# leave this as the default value (gpt-3.5-turbo)
# Model = gpt-4o
# The Image Generation model you want to use.
#
# Model = gpt-3.5-turbo
# ImageModel = dall-e-3
# Your OpenAI API key
#
# Find this in your OpenAI account:
# https://platform.openai.com/account/api-keys
#
# This may not be required for self-hosted models in that case, just leave it
# as it is.
#
APIKey = sk-yoursecretkey
# The maximum amount of input sent to the API
@@ -96,9 +101,78 @@ APIKey = sk-yoursecretkey
# The base URL of the OpenAI API
#
# Setting this allows you to use a self-hosted AI model for chat completions
# using something like https://github.com/abetlen/llama-cpp-python
# using something like llama-cpp-python or ollama
#
# BaseURL = https://openai.local/v1
# BaseURL = https://api.openai.com/v1/
# Whether to force the use of tools in the chat completion model
#
# This will make the bot allow the use of tools in the chat completion model,
# even if the model you are using isn't known to support tools. This is useful
# if you are using a self-hosted model that supports tools, but the bot doesn't
# know about it.
#
# ForceTools = 1
# Whether a dedicated model should be used for tools
#
# This will make the bot use a dedicated model for tools. This is useful if you
# want to use a model that doesn't support tools, but still want to be able to
# use tools.
#
# ToolModel = gpt-4o
# Whether to emulate tools in the chat completion model
#
# This will make the bot use the default model to *emulate* tools. This is
# useful if you want to use a model that doesn't support tools, but still want
# to be able to use tools. However, this may cause all kinds of weird results.
#
# EmulateTools = 0
# Force vision in the chat completion model
#
# By default, the bot only supports image recognition in known vision models.
# If you set this to 1, the bot will assume that the model you're using supports
# vision, and will send images to the model as well. This may be required for
# some self-hosted models.
#
# ForceVision = 0
# Maximum width and height of images sent to the API if vision is enabled
#
# The OpenAI API has a limit of 2000 pixels for the long side of an image, and
# 768 pixels for the short side. You may have to adjust these values if you're
# using a self-hosted model that has different limits. You can also set these
# to 0 to disable image resizing.
#
# MaxImageLongSide = 2000
# MaxImageShortSide = 768
# Whether the used model supports video files as input
#
# If you are using a model that supports video files as input, set this to 1.
# This will make the bot send video files to the model as well as images.
# This may be possible with some self-hosted models, but is not supported by
# the OpenAI API at this time.
#
# ForceVideoInput = 0
# Advanced settings for the OpenAI API
#
# These settings are not required for normal operation, but can be used to
# tweak the behavior of the bot.
#
# Note: These settings are not validated by the bot, so make sure they are
# correct before setting them, or the bot may not work as expected.
#
# For more information, see the OpenAI documentation:
# https://platform.openai.com/docs/api-reference/chat/create
#
# Temperature = 1
# TopP = 1
# FrequencyPenalty = 0
# PresencePenalty = 0
###############################################################################
@@ -117,20 +191,29 @@ APIKey = sk-yoursecretkey
# The URL to your Matrix homeserver
#
# If you are using Pantalaimon, this should be the URL of your Pantalaimon
# instance, not the Matrix homeserver itself.
#
Homeserver = https://matrix.local
# An Access Token for the user your bot runs as
# Can be obtained using a request like this:
#
# See https://www.matrix.org/docs/guides/client-server-api#login
# for information on how to obtain this value
#
AccessToken = syt_yoursynapsetoken
# The Matrix user ID of the bot (@local:domain.tld)
# Only specify this if the bot fails to figure it out by itself
# Instead of an Access Token, you can also use a User ID and password
# to log in. Upon first run, the bot will automatically turn this into
# an Access Token and store it in the config file, and remove the
# password from the config file.
#
# This is particularly useful if you are using Pantalaimon, where this
# is the only (easy) way to generate an Access Token.
#
# UserID = @gptbot:matrix.local
# Password = yourpassword
###############################################################################
@@ -141,11 +224,6 @@ AccessToken = syt_yoursynapsetoken
#
Path = database.db
# Path of the Crypto Store - required to support encrypted rooms
# (not tested/supported yet)
#
CryptoStore = store.db
###############################################################################
[TrackingMore]
@@ -157,21 +235,10 @@ CryptoStore = store.db
###############################################################################
[Replicate]
[OpenWeatherMap]
# API key for replicate.com
# Can be used to run lots of different AI models
# If not defined, the features that depend on it are not available
#
# APIKey = r8_alotoflettersandnumbershere
###############################################################################
[HuggingFace]
# API key for Hugging Face
# Can be used to run lots of different AI models
# If not defined, the features that depend on it are not available
# API key for OpenWeatherMap
# If not defined, the bot will be unable to provide weather information
#
# APIKey = __________________________

View file

@@ -0,0 +1,7 @@
[Homeserver]
Homeserver = https://example.com
ListenAddress = localhost
ListenPort = 8009
IgnoreVerification = True
LogLevel = debug
UseKeyring = no

15
docker-compose.yml Normal file
View file

@@ -0,0 +1,15 @@
version: '3.8'
services:
gptbot:
image: kumitterer/matrix-gptbot
volumes:
- ./config.ini:/app/config.ini
- ./database.db:/app/database.db
pantalaimon:
image: matrixdotorg/pantalaimon
volumes:
- ./pantalaimon.conf:/etc/pantalaimon/pantalaimon.conf
ports:
- "8009:8009"

22
fetch_access_token.py Normal file
View file

@@ -0,0 +1,22 @@
from nio import AsyncClient
from configparser import ConfigParser
async def main():
config = ConfigParser()
config.read("config.ini")
user_id = input("User ID: ")
password = input("Password: ")
client = AsyncClient(config["Matrix"]["Homeserver"])
client.user = user_id
await client.login(password)
print("Access token: " + client.access_token)
await client.close()
if __name__ == "__main__":
import asyncio
asyncio.get_event_loop().run_until_complete(main())

View file

@@ -0,0 +1,15 @@
[Unit]
Description=Pantalaimon for GPTbot
Requires=network.target
[Service]
Type=simple
User=gptbot
Group=gptbot
WorkingDirectory=/opt/gptbot
ExecStart=/opt/gptbot/venv/bin/python3 -um pantalaimon.main -c pantalaimon.conf
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target

View file

@@ -7,63 +7,59 @@ allow-direct-references = true
[project]
name = "matrix-gptbot"
version = "0.1.0"
version = "0.3.21"
authors = [
{ name="Kumi Mitterer", email="gptbot@kumi.email" },
{ name = "Kumi", email = "gptbot@kumi.email" },
{ name = "Private.coffee Team", email = "support@private.coffee" },
]
description = "Multifunctional Chatbot for Matrix"
readme = "README.md"
license = { file="LICENSE" }
license = { file = "LICENSE" }
requires-python = ">=3.10"
packages = [
"src/gptbot"
]
packages = ["src/gptbot"]
classifiers = [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
]
dependencies = [
"matrix-nio[e2e]",
"markdown2[all]",
"tiktoken",
"python-magic",
"pillow",
]
"matrix-nio[e2e]>=0.24.0",
"markdown2[all]",
"tiktoken",
"python-magic",
"pillow",
"future>=1.0.0",
]
[project.optional-dependencies]
openai = [
"openai",
]
openai = ["openai>=1.2", "pydub"]
wolframalpha = [
"wolframalpha",
]
google = ["google-generativeai"]
trackingmore = [
"trackingmore @ git+https://kumig.it/kumitterer/trackingmore-api-tool.git",
]
wolframalpha = ["wolframalpha"]
trackingmore = ["trackingmore-api-tool"]
all = [
"matrix-gptbot[openai,wolframalpha,trackingmore]",
"matrix-gptbot[openai,wolframalpha,trackingmore,google]",
"geopy",
"beautifulsoup4",
]
dev = [
"matrix-gptbot[all]",
"black",
]
dev = ["matrix-gptbot[all]", "black", "hatchling", "twine", "build", "ruff"]
[project.urls]
"Homepage" = "https://kumig.it/kumitterer/matrix-gptbot"
"Bug Tracker" = "https://kumig.it/kumitterer/matrix-gptbot/issues"
"Homepage" = "https://git.private.coffee/privatecoffee/matrix-gptbot"
"Bug Tracker" = "https://git.private.coffee/privatecoffee/matrix-gptbot/issues"
"Source Code" = "https://git.private.coffee/privatecoffee/matrix-gptbot"
[project.scripts]
gptbot = "gptbot:main"
gptbot = "gptbot.__main__:main_sync"
[tool.hatch.build.targets.wheel]
packages = ["src/gptbot"]
packages = ["src/gptbot"]

View file

@@ -1,10 +0,0 @@
openai
matrix-nio[e2e]
markdown2[all]
tiktoken
duckdb
python-magic
pillow
wolframalpha
git+https://kumig.it/kumitterer/trackingmore-api-tool.git

View file

@@ -5,13 +5,22 @@ from configparser import ConfigParser
import signal
import asyncio
import importlib.metadata
def sigterm_handler(_signo, _stack_frame):
exit()
if __name__ == "__main__":
def get_version():
try:
package_version = importlib.metadata.version("matrix_gptbot")
except Exception:
return None
return package_version
async def main():
# Parse command line arguments
parser = ArgumentParser()
parser.add_argument(
@@ -25,7 +34,7 @@ if __name__ == "__main__":
"-v",
help="Print version and exit",
action="version",
version="GPTBot v0.1.0",
version=f"GPTBot {get_version() or '- version unknown'}",
)
args = parser.parse_args()
@@ -34,15 +43,28 @@ config.read(args.config)
config.read(args.config)
# Create bot
bot = GPTBot.from_config(config)
bot, new_config = await GPTBot.from_config(config)
# Update config with new values
if new_config:
with open(args.config, "w") as configfile:
new_config.write(configfile)
# Listen for SIGTERM
signal.signal(signal.SIGTERM, sigterm_handler)
# Start bot
try:
asyncio.run(bot.run())
await bot.run()
except KeyboardInterrupt:
print("Received KeyboardInterrupt - exiting...")
except SystemExit:
print("Received SIGTERM - exiting...")
def main_sync():
asyncio.run(main())
if __name__ == "__main__":
main_sync()

View file

@@ -1,35 +1,24 @@
from nio import (
RoomMessageText,
MegolmEvent,
InviteEvent,
Event,
SyncResponse,
JoinResponse,
InviteEvent,
OlmEvent,
MegolmEvent,
RoomMemberEvent,
Response,
)
from .test import test_callback
from .sync import sync_callback
from .invite import room_invite_callback
from .join import join_callback
from .message import message_callback
from .roommember import roommember_callback
from .test_response import test_response_callback
RESPONSE_CALLBACKS = {
Response: test_response_callback,
SyncResponse: sync_callback,
JoinResponse: join_callback,
}
EVENT_CALLBACKS = {
Event: test_callback,
InviteEvent: room_invite_callback,
RoomMessageText: message_callback,
MegolmEvent: message_callback,
RoomMemberEvent: roommember_callback,
}

View file

@ -2,9 +2,9 @@ from nio import InviteEvent, MatrixRoom
async def room_invite_callback(room: MatrixRoom, event: InviteEvent, bot):
if room.room_id in bot.matrix_client.rooms:
logging(f"Already in room {room.room_id} - ignoring invite")
bot.logger.log(f"Already in room {room.room_id} - ignoring invite")
return
bot.logger.log(f"Received invite to room {room.room_id} - joining...")
response = await bot.matrix_client.join(room.room_id)
await bot.matrix_client.join(room.room_id)

View file

@ -8,11 +8,13 @@ async def join_callback(response, bot):
with closing(bot.database.cursor()) as cursor:
cursor.execute(
"SELECT space_id FROM user_spaces WHERE user_id = ? AND active = TRUE", (event.sender,))
"SELECT space_id FROM user_spaces WHERE user_id = ? AND active = TRUE", (response.sender,))
space = cursor.fetchone()
if space:
bot.logger.log(f"Adding new room to space {space[0]}...")
await bot.add_rooms_to_space(space[0], [new_room.room_id])
await bot.add_rooms_to_space(space[0], [response.room_id])
bot.matrix_client.keys_upload()
await bot.send_message(bot.matrix_client.rooms[response.room_id], "Hello! Thanks for inviting me! How can I help you today?")

View file

@ -1,39 +1,18 @@
from nio import MatrixRoom, RoomMessageText, MegolmEvent, RoomKeyRequestError, RoomKeyRequestResponse
from nio import MatrixRoom, RoomMessageText
from datetime import datetime
async def message_callback(room: MatrixRoom | str, event: RoomMessageText | MegolmEvent, bot):
async def message_callback(room: MatrixRoom | str, event: RoomMessageText, bot):
bot.logger.log(f"Received message from {event.sender} in room {room.room_id}")
sent = datetime.fromtimestamp(event.server_timestamp / 1000)
received = datetime.now()
latency = received - sent
if isinstance(event, MegolmEvent):
try:
event = await bot.matrix_client.decrypt_event(event)
except Exception as e:
try:
bot.logger.log("Requesting new encryption keys...")
response = await bot.matrix_client.request_room_key(event)
if isinstance(response, RoomKeyRequestError):
bot.logger.log(f"Error requesting encryption keys: {response}", "error")
elif isinstance(response, RoomKeyRequestResponse):
bot.logger.log(f"Encryption keys received: {response}", "debug")
bot.matrix_bot.olm.handle_response(response)
event = await bot.matrix_client.decrypt_event(event)
except:
pass
bot.logger.log(f"Error decrypting message: {e}", "error")
await bot.send_message(room, "Sorry, I couldn't decrypt that message. Please try again later or switch to a room without encryption.", True)
return
if event.sender == bot.matrix_client.user_id:
bot.logger.log("Message is from bot itself - ignoring")
elif event.body.startswith("!gptbot"):
elif event.body.startswith("!gptbot") or event.body.startswith("* !gptbot"):
await bot.process_command(room, event)
elif event.body.startswith("!"):

View file

@ -1,11 +0,0 @@
from nio import MatrixRoom, Event
async def test_callback(room: MatrixRoom, event: Event, bot):
"""Test callback for debugging purposes.
Args:
room (MatrixRoom): The room the event was sent in.
event (Event): The event that was sent.
"""
bot.logger.log(f"Test callback called: {room.room_id} {event.event_id} {event.sender} {event.__class__}")

View file

@ -1,11 +0,0 @@
from nio import ErrorResponse
async def test_response_callback(response, bot):
if isinstance(response, ErrorResponse):
bot.logger.log(
f"Error response received ({response.__class__.__name__}): {response.message}",
"warning",
)
else:
bot.logger.log(f"{response.__class__} response received", "debug")

View file

@ -0,0 +1,76 @@
from ...classes.logging import Logger
import asyncio
from functools import partial
from typing import Any, AsyncGenerator, Dict, Optional, Mapping
from nio import Event
class AttributeDictionary(dict):
def __init__(self, *args, **kwargs):
super(AttributeDictionary, self).__init__(*args, **kwargs)
self.__dict__ = self
class BaseAI:
bot: Any
logger: Logger
def __init__(self, bot, config: Mapping, logger: Optional[Logger] = None):
self.bot = bot
self.logger = logger or bot.logger or Logger()
self._config = config
@property
def chat_api(self) -> str:
return self.chat_model
async def prepare_messages(
self, event: Event, messages: list[Any], system_message: Optional[str] = None
) -> list[Any]:
"""A helper method to prepare messages for the AI.
This converts a list of Matrix messages into whatever format the AI requires.
Args:
event (Event): The event that triggered the message generation. Generally a text message from a user.
messages (list[Dict[str, str]]): The messages to prepare. Generally of type RoomMessage*.
system_message (Optional[str], optional): A system message to include. Defaults to None.
Returns:
list[Any]: The prepared messages in the format the AI requires.
Raises:
NotImplementedError: If the method is not implemented in the subclass.
"""
raise NotImplementedError(
"Implementations of BaseAI must implement prepare_messages."
)
async def _request_with_retries(
self, request: partial, attempts: int = 5, retry_interval: int = 2
) -> AsyncGenerator[Any | list | Dict, None]:
"""Retry a request a set number of times if it fails.
Args:
request (partial): The request to make with retries.
attempts (int, optional): The number of attempts to make. Defaults to 5.
retry_interval (int, optional): The interval in seconds between attempts. Defaults to 2 seconds.
Returns:
AsyncGenerator[Any | list | Dict, None]: The response for the request.
"""
current_attempt = 1
while current_attempt <= attempts:
try:
response = await request()
return response
except Exception as e:
self.logger.log(f"Request failed: {e}", "error")
self.logger.log(f"Retrying in {retry_interval} seconds...")
await asyncio.sleep(retry_interval)
current_attempt += 1
raise Exception("Request failed after all attempts.")

View file

@ -0,0 +1,73 @@
from .base import BaseAI
from ..logging import Logger
from typing import Optional, Mapping, List, Dict, Tuple
import google.generativeai as genai
class GeminiAI(BaseAI):
api_code: str = "google"
@property
def chat_api(self) -> str:
return self.chat_model
gemini_api: genai.GenerativeModel
operator: str = "Google (https://ai.google)"
def __init__(
self,
bot,
config: Mapping,
logger: Optional[Logger] = None,
):
super().__init__(bot, config, logger)
genai.configure(api_key=self.api_key)
self.gemini_api = genai.GenerativeModel(self.chat_model)
@property
def api_key(self):
return self._config["APIKey"]
@property
def chat_model(self):
return self._config.get("Model", fallback="gemini-pro")
def prepare_messages(self, messages: List[Dict[str, str]]) -> List[str]:
return [message["content"] for message in messages]
async def generate_chat_response(
self,
messages: List[Dict[str, str]],
user: Optional[str] = None,
room: Optional[str] = None,
use_tools: bool = True,
model: Optional[str] = None,
) -> Tuple[str, int]:
"""Generate a response to a chat message.
Args:
messages (List[Dict[str, str]]): A list of messages to use as context.
user (Optional[str], optional): The user to use the assistant for. Defaults to None.
room (Optional[str], optional): The room to use the assistant for. Defaults to None.
use_tools (bool, optional): Whether to use tools. Defaults to True.
model (Optional[str], optional): The model to use. Defaults to None, which uses the default chat model.
Returns:
Tuple[str, int]: The response text and the number of tokens used.
"""
self.logger.log(
f"Generating response to {len(messages)} messages for user {user} in room {room}..."
)
messages = self.prepare_messages(messages)
response = await self.gemini_api.generate_content_async(messages)
# generate_content_async does not report token usage here, so 0 is returned as a placeholder
return response.text, 0
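For illustration, wiring the class up from a config section might look like the following sketch; the section name "Gemini" and the surrounding GPTBot instance are assumptions, and only the APIKey and Model keys appear in this diff:

from configparser import ConfigParser

config = ConfigParser()
config.read_string("""
[Gemini]
APIKey = your-api-key
Model = gemini-pro
""")

gemini = GeminiAI(bot, config["Gemini"])  # bot: an existing GPTBot instance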

File diff suppressed because it is too large

View file

@ -1,7 +1,6 @@
import markdown2
import tiktoken
import asyncio
import functools
from PIL import Image
@ -15,9 +14,6 @@ from nio import (
MatrixRoom,
Api,
RoomMessagesError,
MegolmEvent,
GroupEncryptionError,
EncryptionError,
RoomMessageText,
RoomSendResponse,
SyncResponse,
@ -27,11 +23,17 @@ from nio import (
RoomSendError,
RoomVisibility,
RoomCreateError,
RoomMessageMedia,
DownloadError,
RoomGetStateError,
DiskDownloadResponse,
MemoryDownloadResponse,
LoginError,
)
from nio.crypto import Olm
from nio.store import SqliteStore
from typing import Optional, List
from typing import Optional, List, Any, Union
from configparser import ConfigParser
from datetime import datetime
from io import BytesIO
@ -41,46 +43,132 @@ from contextlib import closing
import uuid
import traceback
import json
import importlib.util
import sys
import sqlite3
from .logging import Logger
from ..migrations import migrate
from ..callbacks import RESPONSE_CALLBACKS, EVENT_CALLBACKS
from ..commands import COMMANDS
from .openai import OpenAI
from .wolframalpha import WolframAlpha
from .trackingmore import TrackingMore
from ..tools import TOOLS, Handover, StopProcessing
from .ai.base import BaseAI
from .exceptions import DownloadException
class GPTBot:
# Default values
database: Optional[sqlite3.Connection] = None
crypto_store_path: Optional[str | Path] = None
# Default name of rooms created by the bot
display_name = default_room_name = "GPTBot"
default_system_message: str = "You are a helpful assistant."
# Force default system message to be included even if a custom room message is set
force_system_message: bool = False
max_tokens: int = 3000 # Maximum number of input tokens
max_messages: int = 30 # Maximum number of messages to consider as input
database_path: Optional[str | Path] = None
matrix_client: Optional[AsyncClient] = None
sync_token: Optional[str] = None
logger: Optional[Logger] = Logger()
chat_api: Optional[OpenAI] = None
image_api: Optional[OpenAI] = None
classification_api: Optional[OpenAI] = None
parcel_api: Optional[TrackingMore] = None
operator: Optional[str] = None
chat_api: Optional[BaseAI] = None
image_api: Optional[BaseAI] = None
classification_api: Optional[BaseAI] = None
tts_api: Optional[BaseAI] = None
stt_api: Optional[BaseAI] = None
parcel_api: Optional[Any] = None
calculation_api: Optional[Any] = None
room_ignore_list: List[str] = [] # List of rooms to ignore invites from
debug: bool = False
logo: Optional[Image.Image] = None
logo_uri: Optional[str] = None
allowed_users: List[str] = []
config: ConfigParser = ConfigParser()
# Properties
@property
def allowed_users(self) -> List[str]:
"""List of users allowed to use the bot.
Returns:
List[str]: List of user IDs. Defaults to [], which means all users are allowed.
"""
try:
return json.loads(self.config["GPTBot"]["AllowedUsers"])
except Exception:
return []
@property
def display_name(self) -> str:
"""Display name of the bot user.
Returns:
str: The display name of the bot user. Defaults to "GPTBot".
"""
return self.config["GPTBot"].get("DisplayName", "GPTBot")
@property
def default_room_name(self) -> str:
"""Default name of rooms created by the bot.
Returns:
str: The default name of rooms created by the bot. Defaults to the display name of the bot.
"""
return self.config["GPTBot"].get("DefaultRoomName", self.display_name)
@property
def default_system_message(self) -> str:
"""Default system message to include in rooms created by the bot.
Returns:
str: The default system message to include in rooms created by the bot. Defaults to "You are a helpful assistant.".
"""
return self.config["GPTBot"].get(
"SystemMessage",
"You are a helpful assistant.",
)
@property
def force_system_message(self) -> bool:
"""Whether to force the default system message to be included even if a custom room message is set.
Returns:
bool: Whether to force the default system message to be included even if a custom room message is set. Defaults to False.
"""
return self.config["GPTBot"].getboolean("ForceSystemMessage", False)
@property
def operator(self) -> Optional[str]:
"""Operator of the bot.
Returns:
Optional[str]: The matrix user ID of the operator of the bot. Defaults to None.
"""
return self.config["GPTBot"].get("Operator")
@property
def debug(self) -> bool:
"""Whether to enable debug logging.
Returns:
bool: Whether to enable debug logging. Defaults to False.
"""
return self.config["GPTBot"].getboolean("Debug", False)
@property
def logo_path(self) -> str:
"""Path to the logo of the bot.
Returns:
str: The path to the logo of the bot. Defaults to "assets/logo.png" in the bot's directory.
"""
return self.config["GPTBot"].get(
"Logo", str(Path(__file__).parent.parent / "assets/logo.png")
)
@property
def allow_model_override(self) -> bool:
"""Whether to allow per-room model overrides.
Returns:
bool: Whether to allow per-room model overrides. Defaults to False.
"""
return self.config["GPTBot"].getboolean("AllowModelOverride", False)
# User agent to use for HTTP requests
USER_AGENT = "matrix-gptbot/dev (+https://kumig.it/kumitterer/matrix-gptbot)"
@classmethod
def from_config(cls, config: ConfigParser):
async def from_config(cls, config: ConfigParser):
"""Create a new GPTBot instance from a config file.
Args:
@ -92,83 +180,90 @@ class GPTBot:
# Create a new GPTBot instance
bot = cls()
bot.config = config
# Set the database connection
bot.database = (
sqlite3.connect(config["Database"]["Path"])
bot.database_path = (
config["Database"]["Path"]
if "Database" in config and "Path" in config["Database"]
else None
)
bot.crypto_store_path = (
config["Database"]["CryptoStore"]
if "Database" in config and "CryptoStore" in config["Database"]
else None
)
bot.database = sqlite3.connect(bot.database_path) if bot.database_path else None
# Override default values
if "GPTBot" in config:
bot.operator = config["GPTBot"].get("Operator", bot.operator)
bot.default_room_name = config["GPTBot"].get(
"DefaultRoomName", bot.default_room_name
)
bot.default_system_message = config["GPTBot"].get(
"SystemMessage", bot.default_system_message
)
bot.force_system_message = config["GPTBot"].getboolean(
"ForceSystemMessage", bot.force_system_message
)
bot.debug = config["GPTBot"].getboolean("Debug", bot.debug)
if "LogLevel" in config["GPTBot"]:
bot.logger = Logger(config["GPTBot"]["LogLevel"])
logo_path = config["GPTBot"].get(
"Logo", str(Path(__file__).parent.parent / "assets/logo.png")
)
bot.logger.log(f"Loading logo from {bot.logo_path}", "debug")
bot.logger.log(f"Loading logo from {logo_path}", "debug")
if Path(bot.logo_path).exists() and Path(bot.logo_path).is_file():
bot.logo = Image.open(bot.logo_path)
if Path(logo_path).exists() and Path(logo_path).is_file():
bot.logo = Image.open(logo_path)
# Set up OpenAI
assert (
"OpenAI" in config
), "OpenAI config not found" # TODO: Update this to support other providers
bot.display_name = config["GPTBot"].get("DisplayName", bot.display_name)
from .ai.openai import OpenAI
if "AllowedUsers" in config["GPTBot"]:
bot.allowed_users = json.loads(config["GPTBot"]["AllowedUsers"])
openai_api = OpenAI(bot=bot, config=config["OpenAI"])
bot.chat_api = bot.image_api = bot.classification_api = OpenAI(
config["OpenAI"]["APIKey"], config["OpenAI"].get("Model"), bot.logger
)
bot.max_tokens = config["OpenAI"].getint("MaxTokens", bot.max_tokens)
bot.max_messages = config["OpenAI"].getint("MaxMessages", bot.max_messages)
if "Model" in config["OpenAI"]:
bot.chat_api = openai_api
bot.classification_api = openai_api
if "BaseURL" in config["OpenAI"]:
bot.chat_api.base_url = config["OpenAI"]["BaseURL"]
bot.image_api = None
if "ImageModel" in config["OpenAI"]:
bot.image_api = openai_api
if "TTSModel" in config["OpenAI"]:
bot.tts_api = openai_api
if "STTModel" in config["OpenAI"]:
bot.stt_api = openai_api
# Set up WolframAlpha
if "WolframAlpha" in config:
from .wolframalpha import WolframAlpha
bot.calculation_api = WolframAlpha(
config["WolframAlpha"]["APIKey"], bot.logger
)
# Set up TrackingMore
if "TrackingMore" in config:
from .trackingmore import TrackingMore
bot.parcel_api = TrackingMore(config["TrackingMore"]["APIKey"], bot.logger)
# Set up the Matrix client
assert "Matrix" in config, "Matrix config not found"
homeserver = config["Matrix"]["Homeserver"]
bot.matrix_client = AsyncClient(homeserver)
bot.matrix_client.access_token = config["Matrix"]["AccessToken"]
bot.matrix_client.user_id = config["Matrix"].get("UserID")
bot.matrix_client.device_id = config["Matrix"].get("DeviceID")
# Return the new GPTBot instance
return bot
if config.get("Matrix", "Password", fallback=""):
if not config.get("Matrix", "UserID", fallback=""):
raise Exception("Cannot log in: UserID not set in config")
bot.matrix_client = AsyncClient(homeserver, user=config["Matrix"]["UserID"])
login = await bot.matrix_client.login(password=config["Matrix"]["Password"])
if isinstance(login, LoginError):
raise Exception(f"Could not log in: {login.message}")
config["Matrix"]["AccessToken"] = bot.matrix_client.access_token
config["Matrix"]["DeviceID"] = bot.matrix_client.device_id
config["Matrix"]["Password"] = ""
else:
bot.matrix_client = AsyncClient(homeserver)
bot.matrix_client.access_token = config["Matrix"]["AccessToken"]
bot.matrix_client.user_id = config["Matrix"].get("UserID")
bot.matrix_client.device_id = config["Matrix"].get("DeviceID")
# Return the new GPTBot instance and the (potentially modified) config
return bot, config
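Put together, a minimal config file covering the sections read above might look like this sketch (values are placeholders; only keys that appear in this diff are shown, and either Password or AccessToken is needed under [Matrix]):

[GPTBot]
Operator = @admin:example.com
AllowedUsers = ["@admin:example.com"]

[OpenAI]
APIKey = sk-yourkey
Model = gpt-4o

[Matrix]
Homeserver = https://matrix.example.com
UserID = @gptbot:example.com
Password = initial-password

[Database]
Path = database.db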
async def _get_user_id(self) -> str:
"""Get the user ID of the bot from the whoami endpoint.
@ -194,9 +289,14 @@ class GPTBot:
return user_id
async def _last_n_messages(self, room: str | MatrixRoom, n: Optional[int]):
async def _last_n_messages(
self,
room: str | MatrixRoom,
n: Optional[int],
ignore_notices: bool = True,
):
messages = []
n = n or self.max_messages
n = n or self.chat_api.max_messages
room_id = room.room_id if isinstance(room, MatrixRoom) else room
self.logger.log(
@ -217,72 +317,47 @@ class GPTBot:
)
for event in response.chunk:
if len(messages) >= n:
break
if isinstance(event, MegolmEvent):
try:
event_type = event.type
except AttributeError:
try:
event = await self.matrix_client.decrypt_event(event)
except (GroupEncryptionError, EncryptionError):
self.logger.log(
f"Could not decrypt message {event.event_id} in room {room_id}",
"error",
)
continue
if isinstance(event, (RoomMessageText, RoomMessageNotice)):
if event.body.startswith("!gptbot ignoreolder"):
event_type = event.source["content"]["msgtype"]
except KeyError:
if event.__class__.__name__ in ("RoomMemberEvent",):
self.logger.log(
f"Ignoring event of type {event.__class__.__name__}",
"debug",
)
continue
self.logger.log(f"Could not process event: {event}", "warning")
continue # This is most likely not a message event
if event_type.startswith("gptbot"):
messages.append(event)
elif isinstance(event, RoomMessageText):
if event.body.split() == ["!gptbot", "ignoreolder"]:
break
if (not event.body.startswith("!")) or (
event.body.startswith("!gptbot")
event.body.split()[1] == "custom"
):
messages.append(event)
elif isinstance(event, RoomMessageNotice):
if not ignore_notices:
messages.append(event)
elif isinstance(event, RoomMessageMedia):
messages.append(event)
if len(messages) >= n:
break
self.logger.log(f"Found {len(messages)} messages (limit: {n})", "debug")
# Reverse the list so that messages are in chronological order
return messages[::-1]
def _truncate(
self,
messages: list,
max_tokens: Optional[int] = None,
model: Optional[str] = None,
system_message: Optional[str] = None,
):
max_tokens = max_tokens or self.max_tokens
model = model or self.chat_api.chat_model
system_message = (
self.default_system_message if system_message is None else system_message
)
encoding = tiktoken.encoding_for_model(model)
total_tokens = 0
system_message_tokens = (
0 if not system_message else (len(encoding.encode(system_message)) + 1)
)
if system_message_tokens > max_tokens:
self.logger.log(
f"System message is too long to fit within token limit ({system_message_tokens} tokens) - cannot proceed",
"error",
)
return []
total_tokens += system_message_tokens
total_tokens = len(system_message) + 1
truncated_messages = []
for message in [messages[0]] + list(reversed(messages[1:])):
content = message["content"]
tokens = len(encoding.encode(content)) + 1
if total_tokens + tokens > max_tokens:
break
total_tokens += tokens
truncated_messages.append(message)
return [truncated_messages[0]] + list(reversed(truncated_messages[1:]))
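For reference, the token counting that the removed branch performed can be reproduced with tiktoken on its own; a small sketch, using the gpt-4o fallback mentioned in the changelog as an assumption:

import tiktoken

def count_tokens(text: str, model: str = "gpt-4o") -> int:
    try:
        encoding = tiktoken.encoding_for_model(model)
    except KeyError:
        # Unknown model names fall back to a generic encoding.
        encoding = tiktoken.get_encoding("cl100k_base")
    return len(encoding.encode(text)) + 1  # +1 mirrors the per-message overhead above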
async def _get_device_id(self) -> str:
"""Guess the device ID of the bot.
Requires an access token to be set up.
@ -305,6 +380,44 @@ class GPTBot:
return device_id
async def call_tool(self, tool_call: dict, room: str, user: str, **kwargs):
"""Call a tool.
Args:
tool_call (dict): The tool call to make.
room (str): The room to call the tool in.
user (str): The user to call the tool as.
"""
tool = tool_call.function.name
args = json.loads(tool_call.function.arguments)
self.logger.log(
f"Calling tool {tool} with args {args} for user {user} in room {room}",
"debug",
)
await self.send_message(
room, f"Calling tool {tool} with arguments {args}.", True
)
try:
tool_class = TOOLS[tool]
result = await tool_class(**args, room=room, bot=self, user=user).run()
await self.send_message(room, result, msgtype="gptbot.tool_result")
return result
except (Handover, StopProcessing):
raise
except KeyError:
self.logger.log(f"Tool {tool} not found", "error")
return "Error: Tool not found"
except Exception as e:
self.logger.log(f"Error calling tool {tool}: {e}", "error")
return f"Error: Something went wrong calling tool {tool}"
async def process_command(self, room: MatrixRoom, event: RoomMessageText):
"""Process a command. Called from the event_callback() method.
Delegates to the appropriate command handler.
@ -318,6 +431,10 @@ class GPTBot:
f"Received command {event.body} from {event.sender} in room {room.room_id}",
"debug",
)
if event.body.startswith("* "):
event.body = event.body[2:]
command = event.body.split()[1] if event.body.split()[1:] else None
await COMMANDS.get(command, COMMANDS[None])(room, event, self)
@ -371,13 +488,31 @@ class GPTBot:
return (
(
user_id in self.allowed_users
or f"*:{user_id.split(':')[1]}" in self.allowed_users
or f"@*:{user_id.split(':')[1]}" in self.allowed_users
or (
(
f"*:{user_id.split(':')[1]}" in self.allowed_users
or f"@*:{user_id.split(':')[1]}" in self.allowed_users
)
if not (user_id.startswith("!") or user_id.startswith("#"))
else False
)
)
if self.allowed_users
else True
)
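Given the wildcard forms handled above, an AllowedUsers value admitting one specific user plus everyone on a second homeserver would look like this (example values):

AllowedUsers = ["@admin:example.com", "@*:example.org"]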
def room_is_allowed(self, room_id: str) -> bool:
"""Check if everyone in a room is allowed to use the bot.
Args:
room_id (str): The room ID to check.
Returns:
bool: Whether everyone in the room is allowed to use the bot.
"""
# TODO: Handle published aliases
return self.user_is_allowed(room_id)
async def event_callback(self, room: MatrixRoom, event: Event):
"""Callback for events.
@ -389,7 +524,9 @@ class GPTBot:
if event.sender == self.matrix_client.user_id:
return
if not self.user_is_allowed(event.sender):
if not (
self.user_is_allowed(event.sender) or self.room_is_allowed(room.room_id)
):
if len(room.users) == 2:
await self.matrix_client.room_send(
room.room_id,
@ -401,7 +538,7 @@ class GPTBot:
)
return
task = asyncio.create_task(self._event_callback(room, event))
asyncio.create_task(self._event_callback(room, event))
def room_uses_timing(self, room: MatrixRoom):
"""Check if a room uses timing.
@ -429,7 +566,7 @@ class GPTBot:
await callback(response, self)
async def response_callback(self, response: Response):
task = asyncio.create_task(self._response_callback(response))
asyncio.create_task(self._response_callback(response))
async def accept_pending_invites(self):
"""Accept all pending invites."""
@ -438,7 +575,7 @@ class GPTBot:
invites = self.matrix_client.invited_rooms
for invite in invites.keys():
for invite in [k for k in invites.keys()]:
if invite in self.room_ignore_list:
self.logger.log(
f"Ignoring invite to room {invite} (room is in ignore list)",
@ -497,13 +634,16 @@ class GPTBot:
"""Send an image to a room.
Args:
room (MatrixRoom): The room to send the image to.
room (MatrixRoom|str): The room to send the image to.
image (bytes): The image to send.
message (str, optional): The message to send with the image. Defaults to None.
"""
if isinstance(room, MatrixRoom):
room = room.room_id
self.logger.log(
f"Sending image of size {len(image)} bytes to room {room.room_id}", "debug"
f"Sending image of size {len(image)} bytes to room {room}", "debug"
)
bio = BytesIO(image)
@ -533,14 +673,50 @@ class GPTBot:
"url": content_uri,
}
status = await self.matrix_client.room_send(
room.room_id, "m.room.message", content
)
await self.matrix_client.room_send(room, "m.room.message", content)
self.logger.log("Sent image", "debug")
async def send_file(
self, room: MatrixRoom, file: bytes, filename: str, mime: str, msgtype: str
):
"""Send a file to a room.
Args:
room (MatrixRoom|str): The room to send the file to.
file (bytes): The file to send.
filename (str): The name of the file.
mime (str): The MIME type of the file.
"""
if isinstance(room, MatrixRoom):
room = room.room_id
self.logger.log(
f"Sending file of size {len(file)} bytes to room {room}", "debug"
)
content_uri = await self.upload_file(file, filename, mime)
self.logger.log("Uploaded file - sending message...", "debug")
content = {
"body": filename,
"info": {"mimetype": mime, "size": len(file)},
"msgtype": msgtype,
"url": content_uri,
}
await self.matrix_client.room_send(room, "m.room.message", content)
self.logger.log("Sent file", "debug")
async def send_message(
self, room: MatrixRoom | str, message: str, notice: bool = False
self,
room: MatrixRoom | str,
message: str,
notice: bool = False,
msgtype: Optional[str] = None,
):
"""Send a message to a room.
@ -556,47 +732,24 @@ class GPTBot:
markdowner = markdown2.Markdown(extras=["fenced-code-blocks"])
formatted_body = markdowner.convert(message)
msgtype = "m.notice" if notice else "m.text"
msgtype = msgtype if msgtype else "m.notice" if notice else "m.text"
msgcontent = {
"msgtype": msgtype,
"body": message,
"format": "org.matrix.custom.html",
"formatted_body": formatted_body,
}
if not msgtype.startswith("gptbot."):
msgcontent = {
"msgtype": msgtype,
"body": message,
"format": "org.matrix.custom.html",
"formatted_body": formatted_body,
}
else:
msgcontent = {
"msgtype": msgtype,
"content": message,
}
content = None
if self.matrix_client.olm and room.encrypted:
try:
if not room.members_synced:
responses = []
responses.append(
await self.matrix_client.joined_members(room.room_id)
)
if self.matrix_client.olm.should_share_group_session(room.room_id):
try:
event = self.matrix_client.sharing_session[room.room_id]
await event.wait()
except KeyError:
await self.matrix_client.share_group_session(
room.room_id,
ignore_unverified_devices=True,
)
if msgtype != "m.reaction":
response = self.matrix_client.encrypt(
room.room_id, "m.room.message", msgcontent
)
msgtype, content = response
except Exception as e:
self.logger.log(
f"Error encrypting message: {e} - sending unencrypted", "warning"
)
raise
if not content:
msgtype = "m.room.message"
content = msgcontent
@ -643,6 +796,22 @@ class GPTBot:
(message, room, tokens, api, datetime.now()),
)
async def get_state_event(
self, room: MatrixRoom | str, event_type: str, state_key: Optional[str] = None
):
if isinstance(room, MatrixRoom):
room = room.room_id
state = await self.matrix_client.room_get_state(room)
if isinstance(state, RoomGetStateError):
self.logger.log(f"Could not get state for room {room}")
for event in state.events:
if event["type"] == event_type:
if state_key is None or event["state_key"] == state_key:
return event
async def run(self):
"""Start the bot."""
@ -657,16 +826,10 @@ class GPTBot:
if not self.matrix_client.device_id:
self.matrix_client.device_id = await self._get_device_id()
# Set up database
IN_MEMORY = False
if not self.database:
self.logger.log(
"No database connection set up, using in-memory database. Data will be lost on bot shutdown.",
"warning",
self.database = sqlite3.connect(
Path(__file__).parent.parent / "database.db"
)
IN_MEMORY = True
self.database = sqlite3.connect(":memory:")
self.logger.log("Running migrations...")
@ -686,35 +849,17 @@ class GPTBot:
else:
self.logger.log(f"Already at latest version {after}.")
if IN_MEMORY:
client_config = AsyncClientConfig(
store_sync_tokens=True, encryption_enabled=False
)
else:
matrix_store = SqliteStore
client_config = AsyncClientConfig(
store_sync_tokens=True, encryption_enabled=True, store=matrix_store
)
self.matrix_client.config = client_config
self.matrix_client.store = matrix_store(
self.matrix_client.user_id,
self.matrix_client.device_id,
'.', #store path
database_name=self.crypto_store_path or "",
)
matrix_store = SqliteStore
client_config = AsyncClientConfig(
store_sync_tokens=True, encryption_enabled=False, store=matrix_store
)
self.matrix_client.config = client_config
self.matrix_client.olm = Olm(
self.matrix_client.user_id,
self.matrix_client.device_id,
self.matrix_client.store,
)
# Run initial sync (includes joining rooms)
self.matrix_client.encrypted_rooms = (
self.matrix_client.store.load_encrypted_rooms()
)
self.logger.log("Running initial sync...", "debug")
# Run initial sync (now includes joining rooms)
sync = await self.matrix_client.sync(timeout=30000)
sync = await self.matrix_client.sync(timeout=30000, full_state=True)
if isinstance(sync, SyncResponse):
await self.response_callback(sync)
else:
@ -723,6 +868,8 @@ class GPTBot:
# Set up callbacks
self.logger.log("Setting up callbacks...", "debug")
self.matrix_client.add_event_callback(self.event_callback, Event)
self.matrix_client.add_response_callback(self.response_callback, Response)
@ -743,20 +890,22 @@ class GPTBot:
asyncio.create_task(self.matrix_client.set_avatar(uri))
for room in self.matrix_client.rooms.keys():
self.logger.log(f"Setting avatar for {room}...", "debug")
asyncio.create_task(
self.matrix_client.room_put_state(
room, "m.room.avatar", {"url": uri}, ""
room_avatar = await self.get_state_event(room, "m.room.avatar")
if not room_avatar:
self.logger.log(f"Setting avatar for {room}...", "debug")
asyncio.create_task(
self.matrix_client.room_put_state(
room, "m.room.avatar", {"url": uri}, ""
)
)
)
# Start syncing events
self.logger.log("Starting sync loop...", "warning")
try:
await self.matrix_client.sync_forever(timeout=30000)
await self.matrix_client.sync_forever(timeout=30000, full_state=True)
finally:
self.logger.log("Syncing one last time...", "warning")
await self.matrix_client.sync(timeout=30000)
await self.matrix_client.sync(timeout=30000, full_state=True)
async def create_space(self, name, visibility=RoomVisibility.private) -> str:
"""Create a space.
@ -818,6 +967,46 @@ class GPTBot:
space,
)
def room_uses_stt(self, room: MatrixRoom | str) -> bool:
"""Check if a room uses STT.
Args:
room (MatrixRoom | str): The room to check.
Returns:
bool: Whether the room uses STT.
"""
room_id = room.room_id if isinstance(room, MatrixRoom) else room
with closing(self.database.cursor()) as cursor:
cursor.execute(
"SELECT value FROM room_settings WHERE room_id = ? AND setting = ?",
(room_id, "stt"),
)
result = cursor.fetchone()
return False if not result else bool(int(result[0]))
def room_uses_tts(self, room: MatrixRoom | str) -> bool:
"""Check if a room uses TTS.
Args:
room (MatrixRoom | str): The room to check.
Returns:
bool: Whether the room uses TTS.
"""
room_id = room.room_id if isinstance(room, MatrixRoom) else room
with closing(self.database.cursor()) as cursor:
cursor.execute(
"SELECT value FROM room_settings WHERE room_id = ? AND setting = ?",
(room_id, "tts"),
)
result = cursor.fetchone()
return False if not result else bool(int(result[0]))
def respond_to_room_messages(self, room: MatrixRoom | str) -> bool:
"""Check whether the bot should respond to all messages sent in a room.
@ -840,6 +1029,28 @@ class GPTBot:
return True if not result else bool(int(result[0]))
async def get_room_model(self, room: MatrixRoom | str) -> str:
"""Get the model used for a room.
Args:
room (MatrixRoom | str): The room to check.
Returns:
str: The model used for the room.
"""
if isinstance(room, MatrixRoom):
room = room.room_id
with closing(self.database.cursor()) as cursor:
cursor.execute(
"SELECT value FROM room_settings WHERE room_id = ? AND setting = ?",
(room, "model"),
)
result = cursor.fetchone()
return result[0] if result else self.chat_api.chat_model
async def process_query(
self, room: MatrixRoom, event: RoomMessageText, from_chat_command: bool = False
):
@ -889,7 +1100,10 @@ class GPTBot:
return
try:
last_messages = await self._last_n_messages(room.room_id, 20)
last_messages = await self._last_n_messages(
room.room_id, self.chat_api.max_messages
)
self.logger.log(f"Last messages: {last_messages}", "debug")
except Exception as e:
self.logger.log(f"Error getting last messages: {e}", "error")
await self.send_message(
@ -899,28 +1113,24 @@ class GPTBot:
system_message = self.get_system_message(room)
chat_messages = [{"role": "system", "content": system_message}]
for message in last_messages:
role = (
"assistant" if message.sender == self.matrix_client.user_id else "user"
)
if not message.event_id == event.event_id:
chat_messages.append({"role": role, "content": message.body})
chat_messages.append({"role": "user", "content": event.body})
# Truncate messages to fit within the token limit
truncated_messages = self._truncate(
chat_messages, self.max_tokens - 1, system_message=system_message
chat_messages = await self.chat_api.prepare_messages(
event, last_messages, system_message
)
# Check for a model override
if self.allow_model_override:
model = await self.get_room_model(room)
else:
model = self.chat_api.chat_model
try:
response, tokens_used = await self.chat_api.generate_chat_response(
chat_messages, user=room.room_id
chat_messages, user=event.sender, room=room.room_id, model=model
)
except Exception as e:
print(traceback.format_exc())
self.logger.log(f"Error generating response: {e}", "error")
await self.send_message(
room, "Something went wrong. Please try again.", True
)
@ -936,19 +1146,51 @@ class GPTBot:
self.logger.log(f"Sending response to room {room.room_id}...")
# Convert markdown to HTML
if self.room_uses_tts(room):
self.logger.log("TTS enabled for room", "debug")
message = await self.send_message(room, response)
try:
audio = await self.tts_api.text_to_speech(response)
await self.send_file(room, audio, response, "audio/mpeg", "m.audio")
return
else:
# Send a notice to the room if there was an error
self.logger.log("Didn't get a response from GPT API", "error")
await self.send_message(
room, "Something went wrong. Please try again.", True
)
except Exception as e:
self.logger.log(f"Error generating audio: {e}", "error")
await self.send_message(
room, "Something went wrong generating audio file.", True
)
if self.debug:
await self.send_message(
room, f"Error: {e}\n\n```\n{traceback.format_exc()}\n```", True
)
await self.send_message(room, response)
await self.matrix_client.room_typing(room.room_id, False)
async def download_file(
self, mxc: str, raise_error: bool = False
) -> Union[DiskDownloadResponse, MemoryDownloadResponse]:
"""Download a file from the homeserver.
Args:
mxc (str): The MXC URI of the file to download.
Returns:
Optional[bytes]: The downloaded file, or None if there was an error.
"""
download = await self.matrix_client.download(mxc)
if isinstance(download, DownloadError):
self.logger.log(f"Error downloading file: {download.message}", "error")
if raise_error:
raise DownloadException(download.message)
return
return download
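A caller that prefers a hard failure over None can combine raise_error with the new DownloadException; response.body is the standard nio attribute carrying the downloaded bytes (the import path is assumed from this diff):

from gptbot.classes.exceptions import DownloadException

try:
    response = await bot.download_file("mxc://example.com/abc", raise_error=True)
    data = response.body
except DownloadException as e:
    print(f"Download failed: {e}")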
def get_system_message(self, room: MatrixRoom | str) -> str:
"""Get the system message for a room.

View file

@ -0,0 +1,2 @@
class DownloadException(Exception):
pass

View file

@ -1,164 +0,0 @@
import openai
import requests
import asyncio
import json
from functools import partial
from .logging import Logger
from typing import Dict, List, Tuple, Generator, AsyncGenerator, Optional, Any
class OpenAI:
api_key: str
chat_model: str = "gpt-3.5-turbo"
logger: Logger
api_code: str = "openai"
@property
def chat_api(self) -> str:
return self.chat_model
classification_api = chat_api
image_api: str = "dalle"
operator: str = "OpenAI ([https://openai.com](https://openai.com))"
def __init__(self, api_key, chat_model=None, logger=None):
self.api_key = api_key
self.chat_model = chat_model or self.chat_model
self.logger = logger or Logger()
self.base_url = openai.api_base
async def _request_with_retries(self, request: partial, attempts: int = 5, retry_interval: int = 2) -> AsyncGenerator[Any | list | Dict, None]:
"""Retry a request a set number of times if it fails.
Args:
request (partial): The request to make with retries.
attempts (int, optional): The number of attempts to make. Defaults to 5.
retry_interval (int, optional): The interval in seconds between attempts. Defaults to 2 seconds.
Returns:
AsyncGenerator[Any | list | Dict, None]: The OpenAI response for the request.
"""
# call the request function and return the response if it succeeds, else retry
current_attempt = 1
while current_attempt <= attempts:
try:
response = await request()
return response
except Exception as e:
self.logger.log(f"Request failed: {e}", "error")
self.logger.log(f"Retrying in {retry_interval} seconds...")
await asyncio.sleep(retry_interval)
current_attempt += 1
# if all attempts failed, raise an exception
raise Exception("Request failed after all attempts.")
async def generate_chat_response(self, messages: List[Dict[str, str]], user: Optional[str] = None) -> Tuple[str, int]:
"""Generate a response to a chat message.
Args:
messages (List[Dict[str, str]]): A list of messages to use as context.
Returns:
Tuple[str, int]: The response text and the number of tokens used.
"""
self.logger.log(f"Generating response to {len(messages)} messages using {self.chat_model}...")
chat_partial = partial(
openai.ChatCompletion.acreate,
model=self.chat_model,
messages=messages,
api_key=self.api_key,
user=user,
api_base=self.base_url,
)
response = await self._request_with_retries(chat_partial)
result_text = response.choices[0].message['content']
tokens_used = response.usage["total_tokens"]
self.logger.log(f"Generated response with {tokens_used} tokens.")
return result_text, tokens_used
async def classify_message(self, query: str, user: Optional[str] = None) -> Tuple[Dict[str, str], int]:
system_message = """You are a classifier for different types of messages. You decide whether an incoming message is meant to be a prompt for an AI chat model, or meant for a different API. You respond with a JSON object like this:
{ "type": event_type, "prompt": prompt }
- If the message you received is meant for the AI chat model, the event_type is "chat", and the prompt is the literal content of the message you received. This is also the default if none of the other options apply.
- If it is a prompt for a calculation that can be answered better by WolframAlpha than an AI chat bot, the event_type is "calculate". Optimize the message you received for input to WolframAlpha, and return it as the prompt attribute.
- If it is a prompt for an AI image generation, the event_type is "imagine". Optimize the message you received for use with DALL-E, and return it as the prompt attribute.
- If the user is asking you to create a new room, the event_type is "newroom", and the prompt is the name of the room, if one is given, else an empty string.
- If the user is asking you to throw a coin, the event_type is "coin". The prompt is an empty string.
- If the user is asking you to roll a dice, the event_type is "dice". The prompt is a string containing an optional number of sides, if one is given, else an empty string.
- If for any reason you are unable to classify the message (for example, if it infringes on your terms of service), the event_type is "error", and the prompt is a message explaining why you are unable to process the message.
Only the event_types mentioned above are allowed, you must not respond in any other way."""
messages = [
{
"role": "system",
"content": system_message
},
{
"role": "user",
"content": query
}
]
self.logger.log(f"Classifying message '{query}'...")
chat_partial = partial(
openai.ChatCompletion.acreate,
model=self.chat_model,
messages=messages,
api_key=self.api_key,
user=user,
api_base=self.base_url,
)
response = await self._request_with_retries(chat_partial)
try:
result = json.loads(response.choices[0].message['content'])
except:
result = {"type": "chat", "prompt": query}
tokens_used = response.usage["total_tokens"]
self.logger.log(f"Classified message as {result['type']} with {tokens_used} tokens.")
return result, tokens_used
async def generate_image(self, prompt: str, user: Optional[str] = None) -> Generator[bytes, None, None]:
"""Generate an image from a prompt.
Args:
prompt (str): The prompt to use.
Yields:
bytes: The image data.
"""
self.logger.log(f"Generating image from prompt '{prompt}'...")
image_partial = partial(
openai.Image.acreate,
prompt=prompt,
n=1,
api_key=self.api_key,
size="1024x1024",
user=user,
api_base=self.base_url,
)
response = await self._request_with_retries(image_partial)
images = []
for image in response.data:
image = requests.get(image.url).content
images.append(image)
return images, len(images)

View file

@ -1,9 +1,8 @@
import trackingmore
import requests
from .logging import Logger
from typing import Dict, List, Tuple, Generator, Optional
from typing import Tuple, Optional
class TrackingMore:
api_key: str

View file

@ -3,7 +3,7 @@ import requests
from .logging import Logger
from typing import Dict, List, Tuple, Generator, Optional
from typing import Generator, Optional
class WolframAlpha:
api_key: str

View file

@ -22,6 +22,7 @@ for command in [
"dice",
"parcel",
"space",
"tts",
]:
function = getattr(import_module(
"." + command, "gptbot.commands"), "command_" + command)

View file

@ -3,21 +3,16 @@ from nio.rooms import MatrixRoom
async def command_botinfo(room: MatrixRoom, event: RoomMessageText, bot):
logging("Showing bot info...")
bot.logger.log("Showing bot info...")
body = f"""GPT Info:
body = f"""GPT Room info:
Model: {bot.model}
Maximum context tokens: {bot.max_tokens}
Maximum context messages: {bot.max_messages}
Room info:
Bot user ID: {bot.matrix_client.user_id}
Current room ID: {room.room_id}
Model: {await bot.get_room_model(room)}\n
Maximum context tokens: {bot.chat_api.max_tokens}\n
Maximum context messages: {bot.chat_api.max_messages}\n
Bot user ID: {bot.matrix_client.user_id}\n
Current room ID: {room.room_id}\n
System message: {bot.get_system_message(room)}
For usage statistics, run !gptbot stats
"""
await bot.send_message(room, body, True)

View file

@ -23,14 +23,12 @@ async def command_calculate(room: MatrixRoom, event: RoomMessageText, bot):
bot.logger.log("Querying calculation API...")
for subpod in bot.calculation_api.generate_calculation_response(prompt, text, results_only, user=room.room_id):
bot.logger.log(f"Sending subpod...")
bot.logger.log("Sending subpod...")
if isinstance(subpod, bytes):
await bot.send_image(room, subpod)
else:
await bot.send_message(room, subpod, True)
bot.log_api_usage(event, room, f"{bot.calculation_api.api_code}-{bot.calculation_api.calculation_api}", tokens_used)
return
await bot.send_message(room, "You need to provide a prompt.", True)

View file

@ -9,7 +9,7 @@ async def command_dice(room: MatrixRoom, event: RoomMessageText, bot):
try:
sides = int(event.body.split()[2])
except ValueError:
except (ValueError, IndexError):
sides = 6
if sides < 2:

View file

@ -8,17 +8,17 @@ async def command_help(room: MatrixRoom, event: RoomMessageText, bot):
- !gptbot help - Show this message
- !gptbot botinfo - Show information about the bot
- !gptbot privacy - Show privacy information
- !gptbot newroom \<room name\> - Create a new room and invite yourself to it
- !gptbot stats - Show usage statistics for this room
- !gptbot systemmessage \<message\> - Get or set the system message for this room
- !gptbot newroom <room name> - Create a new room and invite yourself to it
- !gptbot systemmessage <message> - Get or set the system message for this room
- !gptbot space [enable|disable|update|invite] - Enable, disable, force update, or invite yourself to your space
- !gptbot coin - Flip a coin (heads or tails)
- !gptbot dice [number] - Roll a dice with the specified number of sides (default: 6)
- !gptbot imagine \<prompt\> - Generate an image from a prompt
- !gptbot calculate [--text] [--details] \<query\> - Calculate a result to a calculation, optionally forcing text output instead of an image, and optionally showing additional details like the input interpretation
- !gptbot chat \<message\> - Send a message to the chat API
- !gptbot classify \<message\> - Classify a message using the classification API
- !gptbot custom \<message\> - Used for custom commands handled by the chat model and defined through the room's system message
- !gptbot imagine <prompt> - Generate an image from a prompt
- !gptbot calculate [--text] [--details] <query> - Calculate the result of a calculation, optionally forcing text output instead of an image, and optionally showing additional details such as the input interpretation
- !gptbot chat <message> - Send a message to the chat API
- !gptbot classify <message> - Classify a message using the classification API
- !gptbot custom <message> - Used for custom commands handled by the chat model and defined through the room's system message
- !gptbot roomsettings [use_classification|use_timing|always_reply|system_message|tts] [true|false|<message>] - Get or set room settings
- !gptbot ignoreolder - Ignore messages before this point as context
"""

View file

@ -16,10 +16,10 @@ async def command_imagine(room: MatrixRoom, event: RoomMessageText, bot):
return
for image in images:
bot.logger.log(f"Sending image...")
bot.logger.log("Sending image...")
await bot.send_image(room, image)
bot.log_api_usage(event, room, f"{bot.image_api.api_code}-{bot.image_api.image_api}", tokens_used)
bot.log_api_usage(event, room, f"{bot.image_api.api_code}-{bot.image_api.image_model}", tokens_used)
return

View file

@ -13,7 +13,7 @@ async def command_newroom(room: MatrixRoom, event: RoomMessageText, bot):
if isinstance(new_room, RoomCreateError):
bot.logger.log(f"Failed to create room: {new_room.message}")
await bot.send_message(room, f"Sorry, I was unable to create a new room. Please try again later, or create a room manually.", True)
await bot.send_message(room, "Sorry, I was unable to create a new room. Please try again later, or create a room manually.", True)
return
bot.logger.log(f"Inviting {event.sender} to new room...")
@ -21,7 +21,7 @@ async def command_newroom(room: MatrixRoom, event: RoomMessageText, bot):
if isinstance(invite, RoomInviteError):
bot.logger.log(f"Failed to invite user: {invite.message}")
await bot.send_message(room, f"Sorry, I was unable to invite you to the new room. Please try again later, or create a room manually.", True)
await bot.send_message(room, "Sorry, I was unable to invite you to the new room. Please try again later, or create a room manually.", True)
return
with closing(bot.database.cursor()) as cursor:
@ -43,4 +43,4 @@ async def command_newroom(room: MatrixRoom, event: RoomMessageText, bot):
await bot.matrix_client.joined_rooms()
await bot.send_message(room, f"Alright, I've created a new room called '{room_name}' and invited you to it. You can find it at {new_room.room_id}", True)
await bot.send_message(bot.matrix_client.rooms[new_room.room_id], f"Welcome to the new room! What can I do for you?")
await bot.send_message(bot.matrix_client.rooms[new_room.room_id], "Welcome to the new room! What can I do for you?")

View file

@ -11,7 +11,7 @@ async def command_privacy(room: MatrixRoom, event: RoomMessageText, bot):
body += "- For chat requests: " + f"{bot.chat_api.operator}" + "\n"
if bot.image_api:
body += "- For image generation requests (!gptbot imagine): " + f"{bot.image_api.operator}" + "\n"
if bot.calculate_api:
body += "- For calculation requests (!gptbot calculate): " + f"{bot.calculate_api.operator}" + "\n"
if bot.calculation_api:
body += "- For calculation requests (!gptbot calculate): " + f"{bot.calculation_api.operator}" + "\n"
await bot.send_message(room, body, True)

View file

@ -25,6 +25,8 @@ async def command_roomsettings(room: MatrixRoom, event: RoomMessageText, bot):
(room.room_id, "system_message", value, value)
)
bot.database.commit()
await bot.send_message(room, f"Alright, I've stored the system message: '{value}'.", True)
return
@ -35,7 +37,7 @@ async def command_roomsettings(room: MatrixRoom, event: RoomMessageText, bot):
await bot.send_message(room, f"The current system message is: '{system_message}'.", True)
return
if setting in ("use_classification", "always_reply", "use_timing"):
if setting in ("use_classification", "always_reply", "use_timing", "tts", "stt"):
if value:
if value.lower() in ["true", "false"]:
value = value.lower() == "true"
@ -49,6 +51,8 @@ async def command_roomsettings(room: MatrixRoom, event: RoomMessageText, bot):
(room.room_id, setting, "1" if value else "0", "1" if value else "0")
)
bot.database.commit()
await bot.send_message(room, f"Alright, I've set {setting} to: '{value}'.", True)
return
@ -76,11 +80,51 @@ async def command_roomsettings(room: MatrixRoom, event: RoomMessageText, bot):
await bot.send_message(room, f"The current {setting} status is: '{value}'.", True)
return
message = f"""The following settings are available:
if bot.allow_model_override and setting == "model":
if value:
bot.logger.log(f"Setting chat model for {room.room_id} to {value}...")
with closing(bot.database.cursor()) as cur:
cur.execute(
"""INSERT INTO room_settings (room_id, setting, value) VALUES (?, ?, ?)
ON CONFLICT (room_id, setting) DO UPDATE SET value = ?;""",
(room.room_id, "model", value, value)
)
bot.database.commit()
await bot.send_message(room, f"Alright, I've set the chat model to: '{value}'.", True)
return
bot.logger.log(f"Retrieving chat model for {room.room_id}...")
with closing(bot.database.cursor()) as cur:
cur.execute(
"""SELECT value FROM room_settings WHERE room_id = ? AND setting = ?;""",
(room.room_id, "model")
)
row = cur.fetchone()
if not row or not row[0]:
value = bot.chat_api.chat_model
else:
value = str(row[0])
await bot.send_message(room, f"The current chat model is: '{value}'.", True)
return
message = """The following settings are available:
- system_message [message]: Get or set the system message to be sent to the chat model
- classification [true/false]: Get or set whether the room uses classification
- always_reply [true/false]: Get or set whether the bot should reply to all messages (if false, only reply to mentions and commands)
- tts [true/false]: Get or set whether the bot should generate audio files instead of sending text
- stt [true/false]: Get or set whether the bot should attempt to process information from audio files
- timing [true/false]: Get or set whether the bot should return information about the time it took to generate a response
"""
if bot.allow_model_override:
message += "- model [model]: Get or set the chat model to be used for this room"
await bot.send_message(room, message, True)

View file

@ -120,7 +120,7 @@ async def command_space(room: MatrixRoom, event: RoomMessageText, bot):
if isinstance(response, RoomInviteError):
bot.logger.log(
f"Failed to invite user {user} to space {space}", "error")
f"Failed to invite user {event.sender} to space {space}", "error")
await bot.send_message(
room, "Sorry, I couldn't invite you to the space. Please try again later.", True)
return

View file

@ -5,16 +5,30 @@ from contextlib import closing
async def command_stats(room: MatrixRoom, event: RoomMessageText, bot):
await bot.send_message(
room,
"The `stats` command is no longer supported. Sorry for the inconvenience.",
True,
)
return
# Yes, the code below is unreachable, but it's kept here for reference.
bot.logger.log("Showing stats...")
if not bot.database:
bot.logger.log("No database connection - cannot show stats")
bot.send_message(room, "Sorry, I'm not connected to a database, so I don't have any statistics on your usage.", True)
return
await bot.send_message(
room,
"Sorry, I'm not connected to a database, so I don't have any statistics on your usage.",
True,
)
return
with closing(bot.database.cursor()) as cursor:
cursor.execute(
"SELECT SUM(tokens) FROM token_usage WHERE room_id = ?", (room.room_id,))
"SELECT SUM(tokens) FROM token_usage WHERE room_id = ?", (room.room_id,)
)
total_tokens = cursor.fetchone()[0] or 0
bot.send_message(room, f"Total tokens used: {total_tokens}", True)
await bot.send_message(room, f"Total tokens used: {total_tokens}", True)

View file

@ -0,0 +1,23 @@
from nio.events.room_events import RoomMessageText
from nio.rooms import MatrixRoom
async def command_tts(room: MatrixRoom, event: RoomMessageText, bot):
prompt = " ".join(event.body.split()[2:])
if prompt:
bot.logger.log("Generating speech...")
try:
content = await bot.tts_api.text_to_speech(prompt, user=room.room_id)
except Exception as e:
bot.logger.log(f"Error generating speech: {e}", "error")
await bot.send_message(room, "Sorry, I couldn't generate an audio file. Please try again later.", True)
return
bot.logger.log("Sending audio file...")
await bot.send_file(room, content, "audio.mp3", "audio/mpeg", "m.audio")
return
await bot.send_message(room, "You need to provide a prompt.", True)

View file

@ -22,7 +22,7 @@ def get_version(db: SQLiteConnection) -> int:
try:
return int(db.execute("SELECT MAX(id) FROM migrations").fetchone()[0])
except:
except Exception:
return 0
def migrate(db: SQLiteConnection, from_version: Optional[int] = None, to_version: Optional[int] = None) -> None:

View file

@ -1,7 +1,5 @@
# Migration for Matrix Store - No longer used
from datetime import datetime
from contextlib import closing
def migration(conn):
pass

View file

@ -0,0 +1,21 @@
from importlib import import_module
from .base import BaseTool, StopProcessing, Handover # noqa: F401
TOOLS = {}
for tool in [
"weather",
"geocode",
"dice",
"websearch",
"webrequest",
"imagine",
"imagedescription",
"wikipedia",
"datetime",
"newroom",
]:
tool_class = getattr(import_module(
"." + tool, "gptbot.tools"), tool.capitalize())
TOOLS[tool] = tool_class

src/gptbot/tools/base.py (new file)
View file

@ -0,0 +1,21 @@
class BaseTool:
DESCRIPTION: str
PARAMETERS: dict
def __init__(self, **kwargs):
self.kwargs = kwargs
self.bot = kwargs.get("bot")
self.room = kwargs.get("room")
self.user = kwargs.get("user")
self.messages = kwargs.get("messages", [])
async def run(self):
raise NotImplementedError()
class StopProcessing(Exception):
"""Stop processing the message."""
pass
class Handover(Exception):
"""Handover to the original model, if applicable. Stop using tools."""
pass
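Under the registry convention above (module name matching the capitalized class name, plus an entry in the hardcoded tool list), a new tool is a small subclass. A sketch for a hypothetical src/gptbot/tools/coinflip.py:

from random import SystemRandom

from .base import BaseTool

class Coinflip(BaseTool):
    DESCRIPTION = "Flip a coin."
    PARAMETERS = {
        "type": "object",
        "properties": {},
    }

    async def run(self):
        # SystemRandom matches the CSPRNG the Dice tool uses below.
        result = SystemRandom().choice(["heads", "tails"])
        return f"**Coin flip**\nResult: {result}"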

View file

@ -0,0 +1,16 @@
from .base import BaseTool
from datetime import datetime
class Datetime(BaseTool):
DESCRIPTION = "Get the current date and time."
PARAMETERS = {
"type": "object",
"properties": {
},
}
async def run(self):
"""Get the current date and time."""
return f"""**Current date and time (UTC)**
{datetime.utcnow().strftime("%Y-%m-%d %H:%M:%S")}"""

src/gptbot/tools/dice.py (new file)
View file

@ -0,0 +1,26 @@
from .base import BaseTool
from random import SystemRandom
class Dice(BaseTool):
DESCRIPTION = "Roll dice."
PARAMETERS = {
"type": "object",
"properties": {
"dice": {
"type": "string",
"description": "The number of sides on the dice.",
"default": "6",
},
},
"required": [],
}
async def run(self):
"""Roll dice."""
dice = int(self.kwargs.get("dice", 6))
return f"""**Dice roll**
Used dice: {dice}
Result: {SystemRandom().randint(1, dice)}
"""

View file

@ -0,0 +1,34 @@
from geopy.geocoders import Nominatim
from .base import BaseTool
class Geocode(BaseTool):
DESCRIPTION = "Get location information (latitude, longitude) for a given location name."
PARAMETERS = {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The location name.",
},
},
"required": ["location"],
}
async def run(self):
"""Get location information for a given location."""
if not (location := self.kwargs.get("location")):
raise Exception('No location provided.')
geolocator = Nominatim(user_agent=self.bot.USER_AGENT)
location = geolocator.geocode(location)
if location:
return f"""**Location information for {location.address}**
Latitude: {location.latitude}
Longitude: {location.longitude}
"""
raise Exception('Could not find location data for that location.')

View file

@ -0,0 +1,15 @@
from .base import BaseTool
class Imagedescription(BaseTool):
DESCRIPTION = "Describe the content of the images in the conversation."
PARAMETERS = {
"type": "object",
"properties": {
},
}
async def run(self):
"""Describe images in the conversation."""
image_api = self.bot.image_api
return (await image_api.describe_images(self.messages, self.user))[0]

View file

@@ -0,0 +1,34 @@
from .base import BaseTool, StopProcessing

class Imagine(BaseTool):
    DESCRIPTION = "Use generative AI to create images from text prompts."
    PARAMETERS = {
        "type": "object",
        "properties": {
            "prompt": {
                "type": "string",
                "description": "The prompt to use.",
            },
            "orientation": {
                "type": "string",
                "description": "The orientation of the image.",
                "enum": ["square", "landscape", "portrait"],
                "default": "square",
            },
        },
        "required": ["prompt"],
    }

    async def run(self):
        """Use generative AI to create images from text prompts."""
        if not (prompt := self.kwargs.get("prompt")):
            raise Exception('No prompt provided.')

        api = self.bot.image_api
        orientation = self.kwargs.get("orientation", "square")
        images, tokens = await api.generate_image(prompt, self.room, orientation=orientation)

        for image in images:
            await self.bot.send_image(self.room, image, prompt)

        raise StopProcessing()
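`StopProcessing` here doubles as control flow: the images have already been sent, so the caller is expected to stop the tool loop rather than treat the raise as an error. A hedged sketch of what the dispatching side might look like; `run_tool` is illustrative, not taken from the diff:

from gptbot.tools import TOOLS, StopProcessing, Handover

async def run_tool(name, **kwargs):
    tool = TOOLS[name](**kwargs)
    try:
        return await tool.run()
    except StopProcessing as stop:
        # The tool already delivered its output (e.g. an image was sent).
        return str(stop) or None
    except Handover:
        # Fall back to the plain chat model without tools.
        return None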

src/gptbot/tools/newroom.py Normal file

@@ -0,0 +1,57 @@
from .base import BaseTool, StopProcessing

from nio import RoomCreateError, RoomInviteError

from contextlib import closing

class Newroom(BaseTool):
    DESCRIPTION = "Create a new Matrix room"
    PARAMETERS = {
        "type": "object",
        "properties": {
            "name": {
                "type": "string",
                "description": "The name of the room to create.",
                "default": "GPTBot"
            }
        },
    }

    async def run(self):
        """Create a new Matrix room"""
        name = self.kwargs.get("name", "GPTBot")

        self.bot.logger.log("Creating new room...")
        new_room = await self.bot.matrix_client.room_create(name=name)

        if isinstance(new_room, RoomCreateError):
            self.bot.logger.log(f"Failed to create room: {new_room.message}")
            # A bare `raise` is only valid inside an except block, so raise explicitly.
            raise Exception(f"Failed to create room: {new_room.message}")

        self.bot.logger.log(f"Inviting {self.user} to new room...")
        invite = await self.bot.matrix_client.room_invite(new_room.room_id, self.user)

        if isinstance(invite, RoomInviteError):
            self.bot.logger.log(f"Failed to invite user: {invite.message}")
            raise Exception(f"Failed to invite user: {invite.message}")

        await self.bot.send_message(new_room.room_id, "Welcome to your new room! What can I do for you?")

        with closing(self.bot.database.cursor()) as cursor:
            cursor.execute(
                "SELECT space_id FROM user_spaces WHERE user_id = ? AND active = TRUE", (self.user,))
            space = cursor.fetchone()

        if space:
            self.bot.logger.log(f"Adding new room to space {space[0]}...")
            await self.bot.add_rooms_to_space(space[0], [new_room.room_id])

        if self.bot.logo_uri:
            # Pass the room ID, not the room object, to room_put_state.
            await self.bot.matrix_client.room_put_state(new_room.room_id, "m.room.avatar", {
                "url": self.bot.logo_uri
            }, "")

        await self.bot.matrix_client.room_put_state(
            new_room.room_id, "m.room.power_levels", {"users": {self.user: 100, self.bot.matrix_client.user_id: 100}})

        raise StopProcessing("Created new Matrix room with ID " + new_room.room_id + " and invited user.")

src/gptbot/tools/weather.py Normal file

@@ -0,0 +1,58 @@
import aiohttp

from datetime import datetime

from .base import BaseTool

class Weather(BaseTool):
    DESCRIPTION = "Get weather information for a given location."
    PARAMETERS = {
        "type": "object",
        "properties": {
            "latitude": {
                "type": "string",
                "description": "The latitude of the location.",
            },
            "longitude": {
                "type": "string",
                "description": "The longitude of the location.",
            },
            "name": {
                "type": "string",
                "description": "A location name to include in the report. This is optional, latitude and longitude are always required."
            }
        },
        "required": ["latitude", "longitude"],
    }

    async def run(self):
        """Get weather information for a given location."""
        if not (latitude := self.kwargs.get("latitude")) or not (longitude := self.kwargs.get("longitude")):
            raise Exception('No location provided.')

        name = self.kwargs.get("name")

        weather_api_key = self.bot.config.get("OpenWeatherMap", "APIKey")

        if not weather_api_key:
            raise Exception('Weather API key not found.')

        url = f'https://api.openweathermap.org/data/3.0/onecall?lat={latitude}&lon={longitude}&appid={weather_api_key}&units=metric'

        async with aiohttp.ClientSession() as session:
            async with session.get(url) as response:
                if response.status == 200:
                    data = await response.json()
                    return f"""**Weather report{f" for {name}" if name else ""}**
Current: {data['current']['temp']}°C, {data['current']['weather'][0]['description']}
Feels like: {data['current']['feels_like']}°C
Humidity: {data['current']['humidity']}%
Wind: {data['current']['wind_speed']}m/s
Sunrise: {datetime.fromtimestamp(data['current']['sunrise']).strftime('%H:%M')}
Sunset: {datetime.fromtimestamp(data['current']['sunset']).strftime('%H:%M')}
Today: {data['daily'][0]['temp']['day']}°C, {data['daily'][0]['weather'][0]['description']}, {data['daily'][0]['summary']}
Tomorrow: {data['daily'][1]['temp']['day']}°C, {data['daily'][1]['weather'][0]['description']}, {data['daily'][1]['summary']}
"""
                else:
                    raise Exception(f'Could not connect to weather API: {response.status} {response.reason}')
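The API key is read via `config.get("OpenWeatherMap", "APIKey")`, which presumably corresponds to an INI section along these lines (section and option names taken from the call above; the value is a placeholder):

[OpenWeatherMap]
APIKey = your-openweathermap-api-key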

src/gptbot/tools/webrequest.py Normal file

@@ -0,0 +1,59 @@
from .base import BaseTool

import aiohttp
from bs4 import BeautifulSoup

import re

class Webrequest(BaseTool):
    DESCRIPTION = "Browse an external website by URL."
    PARAMETERS = {
        "type": "object",
        "properties": {
            "url": {
                "type": "string",
                "description": "The URL to request.",
            },
        },
        "required": ["url"],
    }

    async def html_to_text(self, html):
        # Parse the HTML content of the response
        soup = BeautifulSoup(html, 'html.parser')

        # Format the links within the text
        for link in soup.find_all('a'):
            link_text = link.get_text()
            link_href = link.get('href')
            new_link_text = f"{link_text} ({link_href})"
            link.replace_with(new_link_text)

        # Extract the plain text content of the website
        plain_text_content = soup.get_text()

        # Remove extra whitespace (raw string avoids an invalid escape sequence)
        plain_text_content = re.sub(r'\s+', ' ', plain_text_content).strip()

        # Return the formatted text content of the website
        return plain_text_content

    async def run(self):
        """Make a web request to a given URL."""
        if not (url := self.kwargs.get("url")):
            raise Exception('No URL provided.')

        async with aiohttp.ClientSession() as session:
            async with session.get(url) as response:
                if response.status == 200:
                    data = await response.text()
                    output = await self.html_to_text(data)

                    return f"""**Web request**
URL: {url}
Status: {response.status} {response.reason}

{output}
"""

src/gptbot/tools/websearch.py Normal file

@@ -0,0 +1,37 @@
from .base import BaseTool

import aiohttp

from urllib.parse import quote_plus

class Websearch(BaseTool):
    DESCRIPTION = "Search the web for a given query."
    PARAMETERS = {
        "type": "object",
        "properties": {
            "query": {
                "type": "string",
                "description": "The query to search for.",
            },
        },
        "required": ["query"],
    }

    async def run(self):
        """Search the web for a given query."""
        if not (query := self.kwargs.get("query")):
            raise Exception('No query provided.')

        query = quote_plus(query)

        url = f'https://librey.private.coffee/api.php?q={query}'

        async with aiohttp.ClientSession() as session:
            async with session.get(url) as response:
                if response.status == 200:
                    data = await response.json()

                    # The f-prefix was missing, so {query} was emitted literally.
                    response_text = f"**Search results for {query}**"

                    for result in data:
                        response_text += f"\n{result['title']}\n{result['url']}\n{result['description']}\n"

                    return response_text
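For reference, `quote_plus` encodes spaces as `+`, which is what the Librey endpoint's `q` parameter expects; a standalone check:

from urllib.parse import quote_plus

query = "matrix bot tools"
print(f"https://librey.private.coffee/api.php?q={quote_plus(query)}")
# -> https://librey.private.coffee/api.php?q=matrix+bot+tools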

src/gptbot/tools/wikipedia.py Normal file

@@ -0,0 +1,79 @@
from .base import BaseTool

from urllib.parse import urlencode

import aiohttp

class Wikipedia(BaseTool):
    DESCRIPTION = "Get information from Wikipedia."
    PARAMETERS = {
        "type": "object",
        "properties": {
            "query": {
                "type": "string",
                "description": "The query to search for.",
            },
            "language": {
                "type": "string",
                "description": "The language to search in.",
                "default": "en",
            },
            "extract": {
                "type": "string",
                "description": "What information to extract from the page. If not provided, the full page will be returned."
            },
            "summarize": {
                "type": "boolean",
                "description": "Whether to summarize the page or not.",
                "default": False,
            }
        },
        "required": ["query"],
    }

    async def run(self):
        """Get information from Wikipedia."""
        if not (query := self.kwargs.get("query")):
            raise Exception('No query provided.')

        language = self.kwargs.get("language", "en")
        extract = self.kwargs.get("extract")
        summarize = self.kwargs.get("summarize", False)

        args = {
            "action": "query",
            "format": "json",
            "titles": query,
        }

        args["prop"] = "revisions"
        args["rvprop"] = "content"

        url = f'https://{language}.wikipedia.org/w/api.php?{urlencode(args)}'

        async with aiohttp.ClientSession() as session:
            async with session.get(url) as response:
                if response.status == 200:
                    data = await response.json()

                    try:
                        pages = data['query']['pages']
                        page = list(pages.values())[0]
                        content = page['revisions'][0]['*']
                    except KeyError:
                        raise Exception(f'No results for {query} found in Wikipedia.')

                    if extract:
                        chat_messages = [{"role": "system", "content": f"Extract the following from the provided content: {extract}"}]
                        chat_messages.append({"role": "user", "content": content})
                        content, _ = await self.bot.chat_api.generate_chat_response(chat_messages, room=self.room, user=self.user, allow_override=False, use_tools=False)

                    if summarize:
                        chat_messages = [{"role": "system", "content": "Summarize the following content:"}]
                        chat_messages.append({"role": "user", "content": content})
                        content, _ = await self.bot.chat_api.generate_chat_response(chat_messages, room=self.room, user=self.user, allow_override=False, use_tools=False)

                    return f"**Wikipedia: {page['title']}**\n{content}"
                else:
                    raise Exception(f'Could not connect to Wikipedia API: {response.status} {response.reason}')
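For reference, the query arguments above ask the MediaWiki API for the raw wikitext of the page's latest revision; a standalone reconstruction of the request URL (the page title is an arbitrary example):

from urllib.parse import urlencode

args = {
    "action": "query",
    "format": "json",
    "titles": "Python (programming language)",
    "prop": "revisions",
    "rvprop": "content",
}
print(f"https://en.wikipedia.org/w/api.php?{urlencode(args)}")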