Compare commits

...

70 commits

Author SHA1 Message Date
571031002c
chore: bump version to 0.3.20
All checks were successful
Docker CI/CD / Docker Build and Push to Docker Hub (push) Successful in 8m41s
Python Package CI/CD / Setup and Test (push) Successful in 2m2s
Python Package CI/CD / Publish to PyPI (push) Successful in 1m11s
Updated project version from 0.3.19 to 0.3.20 to reflect the latest changes and improvements in the codebase. Ensures compatibility with the new updates and maintains version tracking.
2024-08-23 19:08:02 +02:00
179005a562
fix: add room check to prevent processing errors
Updated the method to include a room parameter, ensuring that message processing functions only when a room is provided. This prevents errors when trying to download and process media files, improving stability and avoiding unnecessary exceptions.
2024-08-23 19:06:49 +02:00
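A minimal sketch of the guard this commit describes, assuming a hypothetical `process_message` method and `download_and_process_media` helper (the bot's real names and signatures may differ):

```python
async def process_message(self, room, event):
    # Hypothetical method: without a room there is no context to reply
    # to, so return early instead of raising later in the media
    # download path.
    if room is None:
        return
    await self.download_and_process_media(room, event)  # assumed helper
```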
40f28b9f0b
chore: bump version to 0.3.19
All checks were successful
Docker CI/CD / Docker Build and Push to Docker Hub (push) Successful in 8m35s
Python Package CI/CD / Setup and Test (push) Successful in 2m1s
Python Package CI/CD / Publish to PyPI (push) Successful in 1m11s
Upgrade project version from 0.3.18 to 0.3.19 to reflect recent changes and improvements. No other modifications were made.
2024-08-18 10:54:31 +02:00
08fa83f1f9
fix(dice): handle missing dice roll parameter
Adjust exception handling to catch both ValueError and IndexError. This ensures the command gracefully defaults to 6 sides when input parameters are insufficient or improperly formatted. Improves robustness against user errors.
2024-08-18 10:54:03 +02:00
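A sketch of the described fallback, with illustrative names — `args` may be empty (IndexError) or contain a non-numeric value (ValueError), and either case defaults to a six-sided die:

```python
import random


def roll(args: list[str]) -> int:
    try:
        sides = int(args[0])
    except (ValueError, IndexError):
        # Missing or malformed parameter: default to six sides.
        sides = 6
    return random.randint(1, sides)
```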
525aea3f05
fix(config): add fallback values for Matrix config checks
Added fallback values for Matrix 'Password' and 'UserID' config checks to prevent exceptions when these keys are not present. This ensures smoother handling of missing configurations.
2024-08-18 10:50:27 +02:00
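With `configparser`, this kind of fix amounts to passing a fallback to `get()` so a missing key no longer raises. A sketch, assuming the section and key names from the commit message:

```python
from configparser import ConfigParser

config = ConfigParser()
config.read("config.ini")

# get() with a fallback avoids NoOptionError when the key is absent.
password = config.get("Matrix", "Password", fallback=None)
user_id = config.get("Matrix", "UserID", fallback=None)
```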
99d3974e17
feat: update bot info and deprecate stats command
All checks were successful
Docker CI/CD / Docker Build and Push to Docker Hub (push) Successful in 8m55s
Python Package CI/CD / Setup and Test (push) Successful in 1m58s
Python Package CI/CD / Publish to PyPI (push) Successful in 1m11s
Updated the bot info command to display model info specific to the room.
Removed the now-unsupported stats command from help and privacy, and retired it with a notice informing users of its deprecation.
Updated version to 0.3.18 to reflect these changes.
2024-08-18 10:44:09 +02:00
e4dba23e39
chore(release): bump version to 0.3.17
All checks were successful
Docker CI/CD / Docker Build and Push to Docker Hub (push) Successful in 8m42s
Python Package CI/CD / Setup and Test (push) Successful in 1m53s
Python Package CI/CD / Publish to PyPI (push) Successful in 1m12s
Updated project version to 0.3.17 to reflect new changes or fixes.
2024-08-04 20:10:07 +02:00
5378ac39e4
feat(openai): add event to incoming messages
Appended the event to the incoming messages list to ensure it gets processed. This change addresses situations where events were previously being overlooked, potentially leading to incomplete or incorrect processing. This enhancement ensures a more comprehensive handling of incoming data.
2024-08-04 20:04:50 +02:00
56b4f3617c
chore: bump version to 0.3.16
All checks were successful
Docker CI/CD / Docker Build and Push to Docker Hub (push) Successful in 10m44s
Python Package CI/CD / Setup and Test (push) Successful in 1m59s
Python Package CI/CD / Publish to PyPI (push) Successful in 1m11s
Updated the project version to 0.3.16 to prepare for the next release. This includes recent bug fixes and minor improvements. Ensure the updated version is reflected across all relevant documentation and deployment scripts.
2024-08-04 18:28:48 +02:00
48decdc9e2
feat(logging): enhance debug logging for message processing
Added debug logging to capture incoming, prepared, and truncated messages in the OpenAI class. Also, included logging for last messages fetched in the bot class. These additions aid in the traceability and debugging of message flows and processing errors.

Additionally, an option to log detailed error tracebacks in debug mode was implemented to facilitate better error analysis.
2024-08-04 18:28:12 +02:00
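A sketch of the debug-mode traceback option, assuming an illustrative helper and logger name (the single-argument `format_exception(exc)` form requires Python 3.10+, which the project targets):

```python
import logging
import traceback

logger = logging.getLogger("gptbot")  # assumed logger name


def log_error(error: Exception, debug: bool = False) -> None:
    if debug:
        # In debug mode, include the full traceback for error analysis.
        logger.error("".join(traceback.format_exception(error)))
    else:
        logger.error(str(error))
```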
ca7245696a
fix: correct max_tokens reference in OpenAI class
Updated the reference to max_tokens in the truncation call from
self.chat_api.max_tokens to self.max_tokens, ensuring the correct
token limit is applied. This resolves potential issues with message
length handling.
2024-08-04 17:42:23 +02:00
c06da55d5d
feat: add video file support and integrate Google AI
All checks were successful
Docker CI/CD / Docker Build and Push to Docker Hub (push) Successful in 8m44s
Python Package CI/CD / Setup and Test (push) Successful in 1m22s
Python Package CI/CD / Publish to PyPI (push) Successful in 40s
Introduced the capability to handle video files as input for AI models that support it, enhancing the bot's versatility in processing media. This update includes a new configuration option to enable or disable video input, catering to different model capabilities. Additionally, integrated Google's Generative AI through the addition of a Google dependency and a corresponding AI class implementation. This move broadens the AI options available, providing users with more flexibility in choosing their desired AI backend. The update involves refactoring and simplifying message preparation and handling, ensuring compatibility and extending functionality to include the new video input feature and Google AI support.

- Added `ForceVideoInput` configuration option to toggle video file processing.
- Integrated Google Generative AI as an optional dependency and included it in the bot's AI choices.
- Implemented a unified method for preparing messages for AI processing, streamlining how the bot handles various message types.
- Removed obsolete code related to message truncation and specialized handling for images, files, and audio, reflecting a shift towards a more flexible and generalized message processing approach.
2024-05-25 17:35:22 +02:00
05ba26d540
feat(openai.py): expand message handling capabilities
Enhanced the OpenAI class to better support diverse message types in chat interactions, including image and video processing. This update introduces several key improvements:
- Added handling for image and video messages, converting them to a format compatible with the OpenAI API.
- Implemented a new method to prepare messages for OpenAI, allowing for richer interaction by including media content directly within the chat.
- Incorporated message truncation to adhere to token limits, ensuring efficient usage of OpenAI's API without sacrificing message content.
- Extended support for additional message types, such as audio and file messages, with specialized processing for each category.

This change aims to enhance user experience by allowing more dynamic and multimedia-rich interactions, aligning with modern chat functionalities. It also addresses potential issues with token limit surpassing and ensures smoother integration of different message formats into the chat flow.
2024-05-25 17:35:05 +02:00
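The OpenAI chat API accepts images as data URLs inside a content list; a simplified sketch of that conversion (the helper name is hypothetical, and the actual method reportedly also handles video, audio, and file messages):

```python
import base64


def prepare_image_message(image_bytes: bytes, prompt: str) -> dict:
    """Convert an image into the content-list format vision-capable
    OpenAI models expect. Hypothetical helper; a sketch only."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {
                "type": "image_url",
                "image_url": {"url": f"data:image/jpeg;base64,{b64}"},
            },
        ],
    }
```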
75e637546a
feat(login): enhance login flow with UserID check
All checks were successful
Docker CI/CD / Docker Build and Push to Docker Hub (push) Successful in 8m17s
Python Package CI/CD / Setup and Test (push) Successful in 1m11s
Python Package CI/CD / Publish to PyPI (push) Successful in 36s
Improved the login logic in the bot's initialization process to require a UserID when a Password is provided for login. This update ensures a more secure and reliable login procedure by validating the presence of a UserID before attempting to log in, and by handling LoginError more explicitly with a clear error message. This change addresses the need for better error handling and validation during the bot's login phase to avoid silent failures and improve debuggability.

- Added LoginError import to handle login-related exceptions more gracefully.
- Refined the login process to create the AsyncClient instance with a UserID when password authentication is used, following best practices for client identification.
- Introduced explicit error raising for missing UserID configuration, enhancing configuration validation before attempting a login.
- Improved clarity and security by clearing the password from the configuration post-login, preventing inadvertent storage or reuse.

This update enhances the bot's robustness and configuration validation, ensuring smoother operations and better error handling during the initialization phase.
2024-05-21 08:14:04 +02:00
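A sketch of the validation described above using matrix-nio's password-login idiom; the function name and raised exception types are illustrative:

```python
from nio import AsyncClient, LoginError


async def password_login(
    homeserver: str, user_id: str | None, password: str
) -> AsyncClient:
    # A UserID is required so the client knows which account to
    # authenticate as; fail early with a clear message.
    if not user_id:
        raise ValueError("UserID must be set when logging in with a password")

    client = AsyncClient(homeserver, user=user_id)
    response = await client.login(password)
    if isinstance(response, LoginError):
        raise RuntimeError(f"Login failed: {response.message}")
    return client
```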
e1695f0cce
feat: enhance error handling for file downloads
Introduce `DownloadException` to improve error reporting and handling when file downloads fail. Modified `download_file` method to accept a `raise_error` flag, which, when set, raises `DownloadException` upon a download error instead of just logging it. This enables the bot to respond with a specific error message to the room if a download fails during processing of speech-to-text, file messages, and image messages, enhancing user feedback on download failures.
2024-05-20 10:41:09 +02:00
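A sketch of the `raise_error` flag, with a hypothetical `fetch_media` helper standing in for the actual download call:

```python
import logging


class DownloadException(Exception):
    """Raised when downloading a file from the homeserver fails."""


async def download_file(url: str, raise_error: bool = False) -> bytes | None:
    try:
        return await fetch_media(url)  # hypothetical download helper
    except Exception as e:
        if raise_error:
            # Callers catch this and post an error message to the room.
            raise DownloadException(str(e)) from e
        logging.error("Download failed: %s", e)
        return None
```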
3f084ffdd3
feat: enhance tool and image handling
All checks were successful
Docker CI/CD / Docker Build and Push to Docker Hub (push) Successful in 8m22s
Python Package CI/CD / Setup and Test (push) Successful in 1m11s
Python Package CI/CD / Publish to PyPI (push) Successful in 38s
Introduced changes to the tool request behavior and image processing. Now, the configuration allows a dedicated model for tool requests (`ToolModel`) and enforces automatic resizing of context images to maximal dimensions, improving compatibility and performance with the AI model. The update shifts away from a rigid tool model use, accommodating varied model support for tool requests, and optimizes image handling for network and processing efficiency. These adjustments aim to enhance user experience with more flexible tool usage and efficient image handling in chat interactions.
2024-05-20 10:20:17 +02:00
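Context-image resizing of this kind is commonly done with Pillow's `thumbnail()`, which preserves aspect ratio and only ever shrinks; a sketch using the 2000x768 default mentioned in the changelog later in this diff:

```python
from io import BytesIO

from PIL import Image

MAX_SIZE = (2000, 768)  # default maximum per the changelog


def resize_context_image(data: bytes) -> bytes:
    # Sketch only; the bot's actual helper may differ.
    image = Image.open(BytesIO(data))
    image.thumbnail(MAX_SIZE)
    out = BytesIO()
    image.save(out, format=image.format or "PNG")
    return out.getvalue()
```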
89f06268a5
fix(migrations): handle specific exceptions gracefully
Updated the exception handling in the migration logic to catch `Exception` explicitly instead of using a bare except clause. This change improves the robustness of the migration process by ensuring that only known, broad exceptions are caught, aiding in debugging and maintaining the principle of least privilege in error handling. It prevents the swallowing of unrelated exceptions that could mask other issues or critical errors.
2024-05-18 21:39:06 +02:00
d0ab53b3e0
feat: standardize bot logging method
Switched to using the bot's centralized logging mechanism for bot info commands, enhancing consistency across the application. This change ensures that all log messages go through the same process, potentially simplifying future debugging and logging enhancements.
2024-05-18 21:38:44 +02:00
19aa91cf48
fix(openai response handling): narrow down exception handling
Refined exception handling in the OpenAI response parsing by specifying `Exception` instead of using a bare except. This change improves code reliability and maintainability by clearly defining the scope of exceptions we anticipate, leading to more precise error handling and easier debugging processes. It aligns with best practices for Python error handling, avoiding the catch-all approach that might inadvertently suppress unrelated errors, thus enhancing the overall robustness of the error management strategy.
2024-05-18 21:38:17 +02:00
99eec5395e
fix: correct vars in join_callback for space mapping
Resolved incorrect variable usage in join_callback function that affected the mapping of new rooms to the correct spaces. Previously, the event.sender variable was mistakenly used, leading to potential mismatches in identifying the correct user and room IDs for space assignments. This update ensures the response object's sender and room_id properties are correctly utilized, aligning room additions with the intended user spaces.
2024-05-18 21:37:51 +02:00
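Per the commit, the fix reads the identifiers from the join response rather than from `event`; a hedged sketch with illustrative names:

```python
async def join_callback(response, bot):
    # Use the join response's own properties; the old code mistakenly
    # read event.sender from the surrounding scope.
    user_id = response.sender
    room_id = response.room_id
    await bot.add_room_to_user_space(user_id, room_id)  # assumed helper
```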
8a253fdf90
feat(invite): streamline invite handling and logging
Refactored the invite handling process within the invite callback for better consistency and maintainability. Swapped out a basic logging function with the bot's standardized logger for improved logging consistency across the application. Additionally, simplified the room joining process by removing redundant response handling, thus enhancing code readability and maintainability. These changes aim to unify the logging approach within the bot and ensure smoother invite processing without altering the underlying functionality.
2024-05-18 21:37:03 +02:00
28752ae3da
refactor: mark base imports in tools as used
Adjusted import statements in `tools.__init__.py` to silence linting warnings regarding unused imports. This emphasizes that `BaseTool`, `StopProcessing`, and `Handover` are intentionally imported for export purposes, despite not being directly referenced. This change aids in maintaining cleaner code and reduces confusion around import intentions.

No functional impact or changes in behavior are expected as a result of this refactor.
2024-05-18 21:36:21 +02:00
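The usual patterns for intentional re-exports look like this (the `.base` submodule name is an assumption):

```python
# tools/__init__.py
from .base import BaseTool, StopProcessing, Handover  # noqa: F401

# An explicit __all__ is an equivalent way to mark the re-exports as
# intentional for both linters and consumers of the package.
__all__ = ["BaseTool", "StopProcessing", "Handover"]
```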
df567d005e
refactor(callbacks): remove test callbacks and imports
This commit streamlines the `callbacks` module by removing the debugging and testing related callbacks (`test_callback` and `test_response_callback`) along with their associated imports. It focuses on enhancing the clarity and maintainability of the callback handling by eliminating unused code paths and dependencies that were specifically used for testing purposes. This cleanup is part of an effort to mature the codebase and reduce potential confusion for new contributors by ensuring that only operational code is present in the production paths. This should not affect the existing functionality but will make future modifications and understanding of the callback mechanisms more straightforward.
2024-05-18 21:36:06 +02:00
e2e31060ce
refactor: improve code readability and efficiency
Enhanced code readability by formatting multiline log statements and adding missing line breaks in conditional blocks. Adopted a more robust error handling approach by catching general exceptions in encoding determination. Eliminated redundant variable assignments for async tasks to streamline event handling and response callbacks, directly invoking `asyncio.create_task()` for better clarity and efficiency. Simplified message and file sending routines by removing unnecessary status assignments, implying a focus on action over response verification. Lastly, optimized message truncation logic by discarding the unused result, focusing on in-place operation for token limit adherence. These changes collectively contribute to a cleaner, more maintainable, and efficient codebase, addressing potential bugs and performance bottlenecks.
2024-05-18 21:33:32 +02:00
7f8ff1502a
fix: remove redundant API usage log call
Removed an unnecessary call to `log_api_usage` after command execution in the calculate command handler. This change eliminates redundant logging that didn't contribute valuable insights and led to clutter in log files, streamlining the process and potentially improving performance by reducing I/O operations.
2024-05-18 21:32:32 +02:00
75d00ea50e
fix: correct user variable in invite error log
Updated the logger to use `event.sender` instead of an undefined `user` variable when logging a failed space invitation, ensuring the correct information is logged. This change addresses a bug where the wrong variable was referenced, potentially causing confusion when diagnosing issues with space invites.
2024-05-18 21:31:56 +02:00
2c04d8bf9c
refactor: remove unused json import from ai base
The ai base module in gptbot no longer requires the json package, leading to its removal. This cleanup enhances code readability and reduces unnecessary import overhead, streamlining the functionality of the ai base class without affecting its external behavior. Such optimizations contribute to the overall maintainability and performance of the codebase.
2024-05-18 21:31:17 +02:00
27df072c0d
fix: correct target room for avatar setup
Fixed an issue where the bot attempted to set the avatar for the wrong room when creating a new room. The avatar is now correctly assigned to the newly created room instead of the incorrectly referenced room variable. This ensures that newly created rooms properly display the intended logo from the start, improving the user experience by maintaining consistent branding across rooms.
2024-05-18 21:30:48 +02:00
141e89ab11
feat: add PyPI and Git badges to README
Enhanced project visibility and accessibility by including new badges in the README. These additions are aimed at providing quick links to the package on PyPI, showing the supported Python versions, license information, and the latest Git commit status. These enhancements make it easier for users and contributors to find important project details, contributing to a more open and engaging community.

This change underscores our commitment to transparency and support for the development community.
2024-05-17 16:36:19 +02:00
47bf7db380
refactor(ci): streamline Docker CI/CD workflows
Removed redundant Docker CI/CD workflow for the 'latest' tag and integrated its functionality into the existing tagging workflow. This change not only reduces the redundancy of having separate workflows for 'latest' and version-specific tags but also simplifies the CI/CD process by having a single, unified workflow for Docker image publications. Moving forward, every push will now ensure that the 'latest' tag is updated alongside the version-specific tags, maintaining a smoother and more predictable deployment and versioning flow.
2024-05-17 16:12:11 +02:00
9c6b6f5b99
feat(dependabot): enable daily pip updates
All checks were successful
Docker CI/CD / Docker Build and Push to Docker Hub (push) Successful in 8m8s
Added a Dependabot configuration to automate dependency updates for the Python package ecosystem. Dependabot will now check for updates on a daily basis, ensuring that our project dependencies remain up-to-date with the latest security patches and features without manual oversight. This proactive approach towards dependency management will aid in minimizing potential security vulnerabilities and compatibility issues, fostering a more secure and stable development environment.
2024-05-17 16:08:33 +02:00
344e736006
docs: emphasize venv usage in installation guide
All checks were successful
Docker CI/CD / Docker Build and Push to Docker Hub (push) Successful in 8m25s
Updated the README to strengthen the recommendation of using a virtual environment (venv) during installation. This adjustment aims to guide users towards best practices in Python environment management, potentially reducing common issues related to package dependencies and conflicts.
2024-05-17 12:46:03 +02:00
3e966334ba
refactor(pyproject.toml): streamline and update dependencies
This commit simplifies the pyproject.toml structure for better readability and maintenance. Key changes include formatting author and license information, consolidating dependency lists into a more concise format, and adding the `future` package to dependencies to ensure forward-compatibility. Optional dependencies are now listed in a more compact style, and the development dependencies section has been cleaned up. These adjustments make the project configuration cleaner and more accessible, facilitating future updates and dependency management.
2024-05-17 11:54:30 +02:00
9178ab23ac
fix: update Pantalaimon default port in README
All checks were successful
Docker CI/CD / Docker Build and Push to Docker Hub (push) Successful in 8m3s
Corrected the default port for Pantalaimon from 8010 to 8009 in the README documentation. This change aligns the documentation with the latest Pantalaimon configuration standards, ensuring that users setting up their homeserver URL in the bot's config.ini file use the correct port. This update is crucial for new users during initial setup to avoid connectivity issues.
2024-05-17 11:49:21 +02:00
ee7e866748
feat(config): change default port to 8009
Some checks are pending
Docker CI/CD / Docker Build and Push to Docker Hub (push) Waiting to run
Updated the default listening port in pantalaimon.example.conf from 8010 to 8009. This alteration ensures compatibility with new network policies and avoids collision with commonly used ports in the default configuration. It's an important change for users setting up new instances, enabling smoother initial configurations without manual port adjustments.
2024-05-17 11:45:41 +02:00
1cd7043a36
feat: enable third-party model vision support
All checks were successful
Docker CI/CD / Docker Build and Push to Docker Hub (push) Successful in 8m8s
Python Package CI/CD / Setup and Test (push) Successful in 1m8s
Python Package CI/CD / Publish to PyPI (push) Successful in 37s
Introduced the `ForceVision` configuration option to allow usage of third-party models for image recognition within the OpenAI setup. This change broadens the flexibility and applicability of the bot's image processing capabilities by not restricting to predefined vision models only. Also, added missing properties to the `OpenAI` class to provide comprehensive control over the bot's behavior, including options for forcing vision and tools usage, along with emulating tool capabilities in models not officially supporting them. These enhancements make the bot more adaptable to various models and user needs, especially for self-hosted setups.

Additionally, updated documentation and increment version to 0.3.12 to reflect these changes and improvements.
2024-05-17 11:37:10 +02:00
8e0cffe02a
feat: enhance AI integration & update models
All checks were successful
Docker CI/CD / Docker Build and Push to Docker Hub (push) Successful in 7m53s
Python Package CI/CD / Setup and Test (push) Successful in 1m13s
Python Package CI/CD / Publish to PyPI (push) Successful in 37s
Refactored the handling of AI providers to support multiple AI services efficiently, introducing a `BaseAI` class from which all AI providers now inherit. This change modernizes our approach to AI integration, providing a more flexible and maintainable architecture for future expansions and enhancements.

- Adopted `gpt-4o` and `dall-e-3` as the default models for chat and image generation, respectively, aligning with the latest advancements in AI capabilities.
- Integrated `ruff` as a development dependency to enforce coding standards and improve code quality through consistent linting.
- Removed unused API keys and sections from `config.dist.ini` to streamline configuration management and clarify setup processes for new users.
- Updated the command line tool for improved usability and fixed previous issues preventing its effective operation.
- Enhanced OpenAI integration with advanced settings for temperature, top_p, frequency_penalty, and presence_penalty, enabling finer control over AI-generated outputs.

This comprehensive update not only enhances the bot's performance and usability but also lays the groundwork for incorporating additional AI providers, ensuring the project remains at the forefront of AI-driven chatbot technologies.

Resolves #13
2024-05-17 11:26:37 +02:00
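A sketch of what such a provider base class might look like; the `BaseAI` name comes from the commit, while the constructor and method are assumptions:

```python
from abc import ABC, abstractmethod


class BaseAI(ABC):
    """Common interface all AI providers inherit from."""

    def __init__(self, bot, config):
        self.bot = bot
        self.config = config

    @abstractmethod
    async def generate_chat_response(self, messages: list) -> str:
        """Return the provider's reply to a list of chat messages."""
```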
02887b9336
feat: add main_sync wrapper for asyncio compatibility
All checks were successful
Docker CI/CD / Docker Build and Push to Docker Hub (push) Successful in 10m11s
Refactored the main execution pathway to introduce a `main_sync` function that wraps the existing asynchronous `main` function, facilitating compatibility with environments that necessitate or prefer synchronous execution. This change enhances the bot's flexibility in various deployment scenarios without altering the core asynchronous functionality.

In addition, expanded the exception handling in `get_version` to catch all exceptions instead of limiting to `DistributionNotFound`. This broadens the robustness of version retrieval, ensuring the application can gracefully handle unexpected issues during version lookup.

Whitespace adjustments improve code readability by clearly separating function definitions.

These adjustments contribute to the maintainability and operability of the application, allowing for broader usage contexts and easier integration into diverse environments.
2024-05-17 10:58:01 +02:00
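The wrapper pattern is small; a sketch under the assumption that `main()` is the existing coroutine entry point:

```python
import asyncio


async def main():
    ...  # existing asynchronous entry point


def main_sync():
    # For synchronous contexts such as console-script entry points,
    # which expect a plain callable rather than a coroutine.
    asyncio.run(main())


if __name__ == "__main__":
    main_sync()
```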
bc06f8939a
refactor: apply linting fixes across modules
Some checks failed
Docker CI/CD / Docker Build and Push to Docker Hub (push) Has been cancelled
This commit removes unnecessary imports across several modules, enhancing code readability and potentially improving performance. Notably, `KeysUploadError` and `requests` were removed where no longer used, reflecting a cleaner dependency structure. Furthermore, logging calls have been standardized, removing dynamic string generation in favor of static messages. This change not only makes the logs more consistent but also slightly reduces the computational overhead associated with log generation. The removal of unused type hints also contributes to a more focused and maintainable code base.

Additionally, the commit includes minor text adjustments for user messages, replacing dynamic content with fixed strings where the dynamism was not needed. This enhances both the clarity and security of user-directed messages by avoiding unnecessary string formatting operations.

Finally, the simplification of the migration script and the adjustment in the tools module underscore an ongoing effort to maintain clean and efficient code infrastructure.
2024-05-17 10:54:54 +02:00
5bbcd3cfda
feat: add debug and keyring config options
All checks were successful
Docker CI/CD / Docker Build and Push to Docker Hub (push) Successful in 8m6s
Added `LogLevel` and `UseKeyring` configuration options to the example configuration file to provide users with more control over logging verbosity and the decision to utilize a system keyring for credentials storage. The LogLevel option allows for easier debugging by adjusting the verbosity of logs, whereas the UseKeyring option offers flexibility in credential management, catering to environments where a system keyring may not be preferred or available.

These changes enhance the tool's usability and adaptability to various user environments and debugging needs.
2024-05-17 10:38:23 +02:00
15a93d8231
feat: Expand bot usage control and API support
All checks were successful
Docker CI/CD / Docker Build and Push to Docker Hub (push) Successful in 17m54s
Python Package CI/CD / Setup and Test (push) Successful in 1m54s
Python Package CI/CD / Publish to PyPI (push) Successful in 55s
Enhanced bot flexibility by enabling the specification of room IDs in the allowed users' list, broadening access control capabilities. This change allows for more granular control over who can interact with the bot, particularly useful in scenarios where the bot's usage needs to be restricted to specific rooms. Additionally, updated documentation and configurations reflect the inclusion of new AI models and self-hosted API support, catering to a wider range of use cases and setups. The README.md and config.dist.ini files have been updated to offer clearer guidance on setup, configuration, and troubleshooting, aiming to improve user experience and ease of deployment.

- Introduced the ability for room-specific bot access, enhancing user and room management flexibility.
- Expanded AI model support, including `gpt-4o` and `ollama`, increases the bot's versatility and application scenarios.
- Updated Python version compatibility to 3.12 to ensure users are leveraging the latest language features and improvements.
- Improved troubleshooting documentation to assist users in resolving common issues more efficiently.
2024-05-16 07:24:34 +02:00
e58bea20ca
feat(logging): Add debug log for empty OpenAI responses
All checks were successful
Docker CI/CD / Docker Build and Push to Docker Hub (push) Successful in 16m29s
Introduces logging for cases where OpenAI's API returns an empty response, ensuring that such occurrences are captured for debugging purposes. This change enhances visibility into the interaction with OpenAI's endpoint, facilitating easier identification and resolution of issues where empty responses are received, potentially indicating API limitations, network issues, or unexpected behavior from the AI model.
2024-05-16 06:40:03 +02:00
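A sketch of such a check against the OpenAI Python client's response shape; the surrounding function and logger name are illustrative:

```python
import logging

from openai import AsyncOpenAI

logger = logging.getLogger("gptbot")
client = AsyncOpenAI()  # reads OPENAI_API_KEY from the environment


async def chat(messages: list[dict]) -> str | None:
    response = await client.chat.completions.create(
        model="gpt-4o", messages=messages
    )
    if not response.choices or not response.choices[0].message.content:
        # Empty responses can indicate rate limits, network trouble, or
        # unexpected model behaviour; log the raw object for debugging.
        logger.debug("Empty response from OpenAI: %s", response)
        return None
    return response.choices[0].message.content
```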
bd0099aa29
feat(docker): Extended information on setting up Pantalaimon with Docker
All checks were successful
Docker CI/CD / Docker Build and Push to Docker Hub (push) Successful in 7m58s
2024-05-15 14:33:35 +02:00
e46be65707
feat: Add optional location name to weather report
All checks were successful
Docker CI/CD / Docker Build and Push to Docker Hub (push) Successful in 9m33s
This update allows users to provide a location name for their weather reports, which can be useful when requesting weather information for specific locations.
2024-05-10 18:31:24 +02:00
9a4c250eb4
fix: Enhance error handling for user authentication
When processing large volumes of data, it's essential to handle errors gracefully and provide clear feedback to users. This change introduces additional checks to ensure robust error handling during user authentication, reducing the likelihood of errors propagating further down the pipeline.

This improvement not only enhances the overall stability of the system but also provides a better user experience by providing more informative error messages in the event of an issue.
2024-05-10 18:18:40 +02:00
f6a3f4ce66
feat: Update pantalaimon-related scripts and configuration
All checks were successful
Docker CI/CD / Docker Build and Push to Docker Hub (push) Successful in 8m44s
Renamed `pantalaimon_first_login.py` to `fetch_access_token.py` to better reflect its purpose. Additionally, updated README to remove obsolete instructions for using pantalaimon with the bot.
2024-05-10 17:27:30 +02:00
b88afda558
refactor: Update dependency matrix-nio to 0.24.0
Some checks failed
Docker CI/CD / Docker Build and Push to Docker Hub (push) Failing after 7m54s
This commit updates the `matrix-nio` dependency to version 0.24.0, ensuring compatibility and new features.
2024-05-10 17:07:30 +02:00
df3697b4ff
feat: Add Docker support and TrackingMore dependency
All checks were successful
Docker CI/CD / Docker Build and Push to Docker Hub (push) Successful in 8m42s
Release version 0.3.9 introduces Docker support, enhancing deployment options by allowing the bot to run in a containerized environment. This update greatly simplifies deployment and scaling processes. Additionally, the inclusion of the TrackingMore dependency expands the bot's functionality, enabling advanced tracking features. These changes collectively aim to improve the bot's adaptability and efficiency in handling tasks.
2024-04-23 18:13:53 +02:00
17c6938a9d
feat: Switch Docker CI/CD to main branch & release v0.3.9
All checks were successful
Docker CI/CD / Docker Build and Push to Docker Hub (push) Successful in 8m34s
Python Package CI/CD / Setup and Test (push) Successful in 1m27s
Python Package CI/CD / Publish to PyPI (push) Successful in 38s
- Updated the Docker CI/CD workflow to trigger on pushes to the main branch, aligning with standard Git flow practices for production deployment.
- Advanced project version to 0.3.9, marking a new release with consolidated features and bug fixes.

This adjustment ensures that the Docker images are built and deployed in a more streamlined manner, reflecting our shift towards a unified branching strategy for releases. The version bump signifies the stabilization of new functionalities and enhancements for broader usage.
2024-04-23 18:12:39 +02:00
e8691194a9
CHANGELOG update
All checks were successful
Docker CI/CD / Docker Build and Push to Docker Hub (push) Successful in 8m56s
2024-04-23 18:06:55 +02:00
c7c2cbc95f
feat: support password-based Matrix login
Some checks failed
Docker CI/CD / Docker Build and Push to Docker Hub (push) Has been cancelled
This update introduces the ability for the bot to use a Matrix UserID and password for authentication, in addition to the existing Access Token method. Upon the first run with UserID and password, the bot automatically converts these credentials into an Access Token, updates the configuration with this token, and removes the password for security purposes. This enhancement simplifies the initial setup process for new users by directly utilizing Matrix login credentials, aligning with common user authentication workflows and enhancing security by not storing passwords long-term.

Refactored the bot initialization process in `GPTBot.from_config` to support dynamic login method selection based on provided credentials, and implemented automatic configuration updating to reflect the newly obtained Access Token and cleaned credentials.

Minor adjustments include formatting and comment clarification for better readability and maintenance.

This change addresses the need for a more straightforward and secure authentication process for bot deployment and user experience improvement.
2024-04-23 18:05:50 +02:00
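A sketch of the first-run conversion with matrix-nio and `configparser`; section and key names are assumptions based on the commit text:

```python
from configparser import ConfigParser

from nio import AsyncClient, LoginResponse


async def first_run_login(config: ConfigParser, path: str = "config.ini"):
    client = AsyncClient(
        config["Matrix"]["Homeserver"], user=config["Matrix"]["UserID"]
    )
    response = await client.login(config["Matrix"]["Password"])
    if isinstance(response, LoginResponse):
        # Persist the token and drop the password so it is not stored
        # long-term.
        config["Matrix"]["AccessToken"] = response.access_token
        config.remove_option("Matrix", "Password")
        with open(path, "w") as f:
            config.write(f)
    return client
```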
91a028d50b
refactor: streamline Docker setup in README
All checks were successful
Docker CI/CD / Docker Build and Push to Docker Hub (push) Successful in 9m12s
Removed detailed Docker setup instructions, opting to simplify the Docker usage section by retaining only the Docker Compose method. This change aims to declutter the README and encourage a more standardized setup process for users, reducing potential confusion and maintaining focus on the primary installation method via Docker Compose. The update includes a minor adjustment to the database initialization step, ensuring users are guided to prepare their environment fully before running the bot. This revision makes the setup process more approachable and efficient, especially for newcomers.

By directing users to the `Running` section for config file setup instructions, we maintain consistency and avoid duplicative content, keeping the README streamlined and more manageable.
2024-04-23 17:47:26 +02:00
5a9332d635
feat: Replace deprecated dependency
All checks were successful
Docker CI/CD / Docker Build and Push to Docker Hub (push) Successful in 8m56s
Transitioned from the deprecated `pkg_resources` to `importlib.metadata` for package version retrieval, improving performance and future compatibility.
2024-04-23 17:30:21 +02:00
7745593b91
feat(docker-compose): mount local database for persistence
All checks were successful
Docker CI/CD / Docker Build and Push to Docker Hub (push) Successful in 9m16s
Added a volume to the `matrix-gptbot` service configuration in `docker-compose.yml`, mounting the local `database.db` file into the container. This change enables persistent storage for the bot's data, ensuring that data is not lost when the container is restarted or redeployed. It enhances data management and allows for more robust operation of the bot service by leveraging persistency.

This development is crucial for scenarios requiring data retention across bot updates and system maintenance activities.
2024-04-23 17:08:13 +02:00
ca68ecb282
feat: add trackingmore API and ffmpeg
- Included the `ffmpeg` package in the Docker environment to support multimedia content processing.
- Added `trackingmore-api-tool` as a dependency to expand the bot's functionality with tracking capabilities.
- Adjusted the `all` dependencies list in `pyproject.toml` to include the `trackingmore` module, indicating a broader feature set for the application.
- Updated the bot class to prepare for integrating `TrackingMore` alongside existing services like `OpenAI` and `WolframAlpha`, highlighting an intention to make such integrations configurable in the future.

This enhancement enables the bot to interact with multimedia content more effectively and introduces package tracking features, laying groundwork for configurable service integrations.
2024-04-23 17:07:57 +02:00
076eb2d243
feat: switch to pre-built Docker image & update ports
All checks were successful
Docker CI/CD / Docker Build and Push to Docker Hub (push) Successful in 6m19s
Moved from building the GPT bot Docker container on the fly to using a pre-built image, enhancing the setup's efficiency and reducing build times for deployments. Adjusted the server's exposed port to resolve conflicts and standardize deployment configurations. Additionally, locked in the `future` package version to ensure compatibility with legacy Python code, mitigating potential future incompatibility issues.
2024-04-23 16:45:16 +02:00
eb9312099a
feat(workflows): streamline Docker CI/CD processes
All checks were successful
Docker CI/CD / Docker Build and Push to Docker Hub (push) Successful in 5m34s
Removed redundant internal Docker CI/CD workflow and unified naming for external Docker workflows to improve clarity and maintainability. Introduced a new workflow for tagging pushes, aligning deployments closer with best practices for version management and distribution. This change simplifies the CI/CD pipeline, reducing duplication and potential confusion, while ensuring that Docker images are built and pushed efficiently for both internal developments and tagged releases.

- Docker CI/CD internals were removed, focusing efforts on standardized workflows.
- Docker CI/CD workflow names were harmonized to reinforce their universal role across projects.
- A new tagging workflow supports better version control and facilitates automatic releases to Docker Hub, promoting consistency and reliability in image distribution.

This adjustment lays the groundwork for more streamlined and robust CI/CD operations, providing a solid framework for future enhancements and scalability.
2024-04-23 11:46:33 +02:00
fc26f4b591
feat(workflows): add Docker CI/CD for Forgejo, refine Docker Hub flow
Some checks failed
Docker CI/CD Internal / Docker Build and Push to Forgejo Docker Registry (push) Failing after 5m31s
Docker CI/CD External / Docker Build and Push to Docker Hub (push) Successful in 5m32s
Introduced a new CI/CD workflow specifically for building and pushing Docker images to the Forgejo Docker Registry, triggered by pushes to the 'docker' branch. This addition aims to streamline Docker image management and deployment within Forgejo's infrastructure, ensuring a more isolated and secure handling of images. Concurrently, the existing workflow for Docker Hub has been refined to clarify its purpose: it is now explicitly focused on pushing to Docker Hub, removing the overlap with Forgejo Docker Registry operations. This delineation enhances the clarity of our CI/CD processes and ensures a cleaner separation of concerns between public and internal image repositories.
2024-04-23 11:10:50 +02:00
5bc6344fdf
fix(docker): streamline tag format for latest image
Some checks failed
Docker CI/CD / Docker Build and Push (push) Failing after 5m59s
Removed a duplicate and unnecessary line specifying the `latest` tag for the docker image in the workflow. This change simplifies the tag specification process, avoiding redundancy, and ensuring clear declaration of both the `latest` and SHA-specific tags for our docker images in CI/CD pipelines.
2024-04-23 10:52:09 +02:00
c94c016cf1
fix(docker): use env vars for GitHub credentials in workflows
Some checks failed
Docker CI/CD / Docker Build and Push (push) Failing after 5m37s
Migrated from direct GitHub context references to environment variables for GitHub SHA, actor, and token within the Docker build and push actions. This enhances portability and consistency across different execution environments, ensuring better compatibility and security when interfacing with GitHub and Forgejo Docker registries.
2024-04-23 10:45:57 +02:00
f049285cb1
feat(docker): support dynamic tag based on commit SHA
Some checks failed
Docker CI/CD / Docker Build and Push (push) Failing after 1m34s
Enhanced Docker build workflows in `.forgejo/workflows/docker-latest.yml` to include dynamic tagging based on the GITHUB_SHA, alongside the existing 'latest' tag for both the kumitterer/matrix-gptbot and git.private.coffee/privatecoffee/matrix-gptbot images. This change allows for more precise versioning and traceability of images, facilitating rollback and specific version deployment. Also standardized authentication token variables for Docker login to the Forgejo Docker Registry, improving readability and consistency in CI/CD configurations.
2024-04-23 10:40:06 +02:00
35254a0b49
feat(docker): extend CI to push images to Forgejo
Some checks failed
Docker CI/CD / Docker Build and Push (push) Failing after 5m44s
Enhanced the CI pipeline for Docker images by supporting an additional push to the Forgejo Docker Registry alongside the existing push to Docker Hub. This change allows for better integration with private infrastructures and provides an alternative for users and systems that prefer or require images to be stored in a more controlled or private registry. It involves logging into both Docker Hub and Forgejo with respective credentials and pushing the built images to both, ensuring broader availability and redundancy of our Docker images.
2024-04-23 10:27:15 +02:00
bd0d6c5588
feat(docker): streamline Docker setup for GPTBot
All checks were successful
Docker CI/CD / Docker Build and Push (push) Successful in 7m23s
Moved the installation of build-essential and libpython3-dev from the Docker workflow to the Dockerfile itself. This change optimizes the Docker setup process, ensuring that all necessary dependencies are encapsulated within the Docker build context. It simplifies the CI workflow by removing redundant steps and centralizes dependency management, making the build process more straightforward and maintainable.

This adjustment aids in achieving a cleaner division between CI setup and application-specific build requirements, potentially improving build times and reducing complexity for future updates or dependency changes.
2024-04-23 09:52:43 +02:00
224535373e
feat(docker-workflow): add essential build dependencies
Some checks failed
Docker CI/CD / Docker Build and Push (push) Failing after 5m24s
Updated the docker-latest workflow to install additional critical build dependencies including build-essential and libpython3-dev, alongside docker.io. This enhancement is geared towards supporting a wider range of development scenarios and facilitating more complex build requirements directly within the CI pipeline.
2024-04-23 09:43:14 +02:00
a3b4cf217c
feat(docker-ci): enhance Docker CI/CD workflow
Some checks failed
Docker CI/CD / Docker Build and Push (push) Failing after 9m10s
Updated the Docker CI/CD pipeline in the `.forgejo/workflows/docker-latest.yml` to support better integration and efficiency. Key enhancements include setting a container environment with Node 20 on Debian Bookworm for consistency across builds, and installing Docker directly within the runner to streamline the process. This refinement simplifies the setup steps, reduces potential for errors, and possibly decreases pipeline execution time. These changes ensure that our Docker images are built and pushed in a more reliable and faster environment.
2024-04-23 09:26:15 +02:00
d23cfa35fa
refactor(docker-latest.yml): remove ubuntu-latest runner
Some checks failed
Docker CI/CD / docker (push) Failing after 1m35s
Removed the specific runner designation from the Docker workflow to allow for dynamic runner selection. This change aims to increase flexibility and compatibility across different CI environments. It reduces the dependency on a single OS version, potentially leading to better resource availability and efficiency in workflow execution.
2024-04-23 09:22:36 +02:00
054f29ea39
feat: Add Docker CI/CD and update docs for Docker usage
Some checks are pending
Docker CI/CD / docker (push) Waiting to run
Introduced a new Docker CI/CD workflow to automatically build and push images to Docker Hub on pushes to the 'docker' branch. This automation ensures that the latest version of the bot is readily available for deployment, facilitating easier distribution and deployment for users.

The README.md has been updated to improve clarity around installation methods, emphasizing PyPI as the recommended installation method while expanding on Docker usage. It now includes detailed instructions for Docker Hub images, local image building, and Docker Compose deployment, catering to a broader range of users and deployment scenarios. This update aims to make the bot more accessible and manageable by providing clear, step-by-step guidance for different deployment strategies.

Related to these changes, the documentation has been restructured to better organize information related to configuration and running the bot, ensuring users have a smoother experience setting up and deploying it in their environment.

These changes reflect our commitment to enhancing user experience and streamlining deployment processes, making it easier for users to adopt and maintain the matrix-gptbot in various environments.
2024-04-23 09:21:05 +02:00
f8861a16ad
feat: update GPTbot volume mapping
Changed the volume mapping for GPTbot service in `docker-compose.yml` from `settings.ini` to `config.ini`. This modification aligns the container configuration with the new application configuration file naming convention, facilitating easier configuration management and clarity for development and deployment processes.

This change is essential for maintaining consistency across our documentation and deployment scripts, ensuring that all references to configuration files are accurate and up to date.
2024-04-23 08:46:31 +02:00
ca07adbc93
refactor(docker): restructure project for improved management
Redesigned the Docker setup to enhance project structure and configuration management. Changes include a more organized directory structure within the Docker container, separating source code, project metadata, and licenses explicitly to the `/app` directory for better clarity and management. Additionally, integrated `pantalaimon` as a dependency service in `docker-compose.yml`, enabling secure communication with Matrix services by automatically managing settings and configurations through mounted files. This setup simplifies the development environment setup and streamlines deployments.
2024-04-23 08:42:12 +02:00
df2587ee74
feat: Add Docker support for GPTBot
Introduced Dockerfile and docker-compose.yml to encapsulate GPTBot into a Docker container. This simplifies deployment and ensures consistent environments across development and production setups. The Dockerfile outlines the necessary steps to build the image, incl. setting up the working directory, copying the current directory into the container, installing all dependencies, and defining the command to run the bot. The docker-compose.yml file further streamlines the deployment process by specifying service configuration, leveraging Docker Compose version 3.8 for its extended feature set and compatibility.

By containerizing GPTBot, we enhance portability, reduce set-up times for new contributors, and minimize "works on my machine" issues, fostering a more robust development lifecycle.
2024-04-23 08:25:12 +02:00
44 changed files with 1106 additions and 563 deletions


@@ -0,0 +1,33 @@
name: Docker CI/CD
on:
push:
tags:
- "*"
jobs:
docker:
name: Docker Build and Push to Docker Hub
container:
image: node:20-bookworm
steps:
- name: Install dependencies
run: |
apt update
apt install -y docker.io
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Build and push to Docker Hub
uses: docker/build-push-action@v5
with:
push: true
tags: |
kumitterer/matrix-gptbot:latest
kumitterer/matrix-gptbot:${{ env.GITHUB_REF_NAME }}

.github/dependabot.yml

@@ -0,0 +1,6 @@
version: 2
updates:
- package-ecosystem: "pip"
directory: "/"
schedule:
interval: "daily"

.gitignore

@@ -7,4 +7,5 @@ venv/
__pycache__/
*.bak
dist/
pantalaimon.conf
pantalaimon.conf
.ruff_cache/


@@ -1,42 +1,79 @@
# Changelog
### 0.3.9 (unreleased)
### 0.3.14 (2024-05-21)
- Fixed issue in handling of login credentials, added error handling for login failures
### 0.3.13 (2024-05-20)
- **Breaking Change**: The `ForceTools` configuration option behavior has changed. Instead of using a separate model for tools, the bot will now try to use the default chat model for tool requests, even if that model is not known to support tools.
- Added `ToolModel` to OpenAI configuration to allow specifying a separate model for tool requests
- Automatically resize context images to a default maximum of 2000x768 pixels before sending them to the AI model
### 0.3.12 (2024-05-17)
- Added `ForceVision` to OpenAI configuration to allow third-party models to be used for image recognition
- Added some missing properties to `OpenAI` class
### 0.3.11 (2024-05-17)
- Refactoring of AI provider handling in preparation for multiple AI providers: Introduced a `BaseAI` class that all AI providers must inherit from
- Added support for temperature, top_p, frequency_penalty, and presence_penalty in the `OpenAI` configuration
- Introduced ruff as a development dependency for linting and applied some linting fixes
- Fixed `gptbot` command line tool
- Changed default chat model to `gpt-4o`
- Changed default image generation model to `dall-e-3`
- Removed currently unused sections from `config.dist.ini`
- Changed provided Pantalaimon config file to not use a key ring by default
- Prevent bot from crashing when an unneeded dependency is missing
### 0.3.10 (2024-05-16)
- Add support for specifying room IDs in `AllowedUsers`
- Minor fixes
### 0.3.9 (2024-04-23)
- Add Docker support for running the bot in a container
- Add TrackingMore dependency to pyproject.toml
- Replace deprecated `pkg_resources` with `importlib.metadata`
- Allow password-based login on first login
### 0.3.7 / 0.3.8 (2024-04-15)
* Changes to URLs in pyproject.toml
* Migrated build pipeline to Forgejo Actions
- Changes to URLs in pyproject.toml
- Migrated build pipeline to Forgejo Actions
### 0.3.6 (2024-04-11)
* Fix issue where message type detection would fail for some messages (cece8cfb24e6f2e98d80d233b688c3e2c0ff05ae)
- Fix issue where message type detection would fail for some messages (cece8cfb24e6f2e98d80d233b688c3e2c0ff05ae)
### 0.3.5
* Only set room avatar if it is not already set (a9c23ee9c42d0a741a7eb485315e3e2d0a526725)
- Only set room avatar if it is not already set (a9c23ee9c42d0a741a7eb485315e3e2d0a526725)
### 0.3.4 (2024-02-18)
* Optimize chat model and message handling (10b74187eb43bca516e2a469b69be1dbc9496408)
* Fix parameter passing in chat response calls (2d564afd979e7bc9eee8204450254c9f86b663b5)
* Refine message filtering in bot event processing (c47f947f80f79a443bbd622833662e3122b121ef)
- Optimize chat model and message handling (10b74187eb43bca516e2a469b69be1dbc9496408)
- Fix parameter passing in chat response calls (2d564afd979e7bc9eee8204450254c9f86b663b5)
- Refine message filtering in bot event processing (c47f947f80f79a443bbd622833662e3122b121ef)
### 0.3.3 (2024-01-26)
* Implement recursion check in response generation (e6bc23e564e51aa149432fc67ce381a9260ee5f5)
* Implement tool emulation for models without tool support (0acc1456f9e4efa09e799f6ce2ec9a31f439fe4a)
* Allow selection of chat model by room (87173ae284957f66594e66166508e4e3bd60c26b)
- Implement recursion check in response generation (e6bc23e564e51aa149432fc67ce381a9260ee5f5)
- Implement tool emulation for models without tool support (0acc1456f9e4efa09e799f6ce2ec9a31f439fe4a)
- Allow selection of chat model by room (87173ae284957f66594e66166508e4e3bd60c26b)
### 0.3.2 (2023-12-14)
* Removed key upload from room event handler
* Fixed output of `python -m gptbot -v` to display currently installed version
* Workaround for bug preventing bot from responding when files are uploaded to an encrypted room
- Removed key upload from room event handler
- Fixed output of `python -m gptbot -v` to display currently installed version
- Workaround for bug preventing bot from responding when files are uploaded to an encrypted room
#### Known Issues
* When using Pantalaimon: Bot is unable to download/use files uploaded to unencrypted rooms
- When using Pantalaimon: Bot is unable to download/use files uploaded to unencrypted rooms
### 0.3.1 (2023-12-07)
* Fixed issue in newroom task causing it to be called over and over again
- Fixed issue in newroom task causing it to be called over and over again

Dockerfile

@@ -0,0 +1,14 @@
FROM python:3.12-slim
WORKDIR /app
COPY src/ /app/src
COPY pyproject.toml /app
COPY README.md /app
COPY LICENSE /app
RUN apt update
RUN apt install -y build-essential libpython3-dev ffmpeg
RUN pip install .[all]
RUN pip install 'future==1.0.0'
CMD ["python", "-m", "gptbot"]


@@ -1,6 +1,11 @@
# GPTbot
[![Support Private.coffee!](https://shields.private.coffee/badge/private.coffee-support%20us!-pink?logo=coffeescript)](https://private.coffee)
[![Matrix](https://shields.private.coffee/badge/Matrix-join%20us!-blue?logo=matrix)](https://matrix.to/#/#matrix-gptbot:private.coffee)
[![PyPI](https://shields.private.coffee/pypi/v/matrix-gptbot)](https://pypi.org/project/matrix-gptbot/)
[![PyPI - Python Version](https://shields.private.coffee/pypi/pyversions/matrix-gptbot)](https://pypi.org/project/matrix-gptbot/)
[![PyPI - License](https://shields.private.coffee/pypi/l/matrix-gptbot)](https://pypi.org/project/matrix-gptbot/)
[![Latest Git Commit](https://shields.private.coffee/gitea/last-commit/privatecoffee/matrix-gptbot?gitea_url=https://git.private.coffee)](https://git.private.coffee/privatecoffee/matrix-gptbot)
GPTbot is a simple bot that uses different APIs to generate responses to
messages in a Matrix room.
@@ -9,8 +14,8 @@ messages in a Matrix room.
- AI-generated responses to text, image and voice messages in a Matrix room
(chatbot)
- Currently supports OpenAI (`gpt-3.5-turbo` and `gpt-4`, including vision
preview, `whisper` and `tts`)
- Currently supports OpenAI (`gpt-3.5-turbo` and `gpt-4`, `gpt-4o`, `whisper`
and `tts`) and compatible APIs (e.g. `ollama`)
- Able to generate pictures using OpenAI `dall-e-2`/`dall-e-3` models
- Able to browse the web to find information
- Able to use OpenWeatherMap to get weather information (requires separate
@@ -25,16 +30,18 @@ messages in a Matrix room.
To run the bot, you will need Python 3.10 or newer.
The bot has been tested with Python 3.11 on Arch, but should work with any
The bot has been tested with Python 3.12 on Arch, but should work with any
current version, and should not require any special dependencies or operating
system features.
### Production
The easiest way to install the bot is to use pip to install it from pypi.
#### PyPI
The recommended way to install the bot is to use pip to install it from PyPI.
```shell
# If desired, activate a venv first
# Recommended: activate a venv first
python -m venv venv
. venv/bin/activate
@@ -50,10 +57,33 @@ for all available features.
You can also use `pip install git+https://git.private.coffee/privatecoffee/matrix-gptbot.git`
to install the latest version from the Git repository.
#### Configuration
#### Docker
The bot requires a configuration file to be present in the working directory.
Copy the provided `config.dist.ini` to `config.ini` and edit it to your needs.
A `docker-compose.yml` file is provided that you can use to run the bot with
Docker Compose. You will need to create a `config.ini` file as described in the
`Running` section.
```shell
# Clone the repository
git clone https://git.private.coffee/privatecoffee/matrix-gptbot.git
cd matrix-gptbot
# Create a config file
cp config.dist.ini config.ini
# Edit the config file to your needs
# Initialize the database file
sqlite3 database.db "SELECT 1"
# Optionally, create Pantalaimon config
cp contrib/pantalaimon.example.conf pantalaimon.conf
# Edit the Pantalaimon config file to your needs
# Update your homeserver URL in the bot's config.ini to point to Pantalaimon (probably http://pantalaimon:8009 if you used the provided example config)
# You can use `fetch_access_token.py` to get an access token for the bot
# Start the bot
docker-compose up -d
```
#### End-to-end encryption
@@ -62,14 +92,9 @@ file attachments, especially in rooms that are not encrypted, if the same
user also uses the bot in encrypted rooms.
The bot itself does not implement end-to-end encryption. However, it can be
used in conjunction with [pantalaimon](https://github.com/matrix-org/pantalaimon),
which is actually installed as a dependency of the bot.
used in conjunction with [pantalaimon](https://github.com/matrix-org/pantalaimon).
To use pantalaimon, create a `pantalaimon.conf` following the example in
`pantalaimon.example.conf`, making sure to change the homeserver URL to match
your homeserver. Then, start pantalaimon with `pantalaimon -c pantalaimon.conf`.
You first have to log in to your homeserver using `python pantalaimon_first_login.py`,
You first have to log in to your homeserver using `python fetch_access_token.py`,
and can then use the returned access token in your bot's `config.ini` file.
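The helper script itself is not shown in this diff; a minimal sketch of the password-login flow such a script performs with matrix-nio might look like the following (the homeserver URL, user ID, and password are placeholders):

```python
# Hypothetical sketch of what a fetch_access_token-style helper does,
# assuming matrix-nio; all credentials below are placeholders.
import asyncio
from nio import AsyncClient, LoginResponse

async def fetch_access_token(homeserver: str, user_id: str, password: str) -> str:
    client = AsyncClient(homeserver, user=user_id)
    response = await client.login(password=password)
    await client.close()
    if not isinstance(response, LoginResponse):
        raise RuntimeError(f"Login failed: {response}")
    # Paste this value into AccessToken in the [Matrix] section of config.ini
    return response.access_token

if __name__ == "__main__":
    print(asyncio.run(fetch_access_token(
        "http://localhost:8009",  # your pantalaimon instance
        "@gptbot:matrix.local",
        "yourpassword",
    )))
```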
Make sure to also point the bot to your pantalaimon instance by setting
@ -115,7 +140,12 @@ before merging.
## Running
The bot can be run with `python -m gptbot`. If required, activate a venv first.
The bot requires a configuration file to be present in the working directory.
Copy the provided `config.dist.ini` to `config.ini` and edit it to your needs.
The bot can then be run with `python -m gptbot`. If required, activate a venv
first.
You may want to run the bot in a screen or tmux session, or use a process
manager like systemd. The repository contains a sample systemd service file
@ -192,10 +222,12 @@ Note that this currently only works for audio messages and .mp3 file uploads.
First of all, make sure that the bot is actually running. (Okay, that's not
really troubleshooting, but it's a good start.)
If the bot is running, check the logs. The first few lines should contain
"Starting bot...", "Syncing..." and "Bot started". If you don't see these
lines, something went wrong during startup. Fortunately, the logs should
contain more information about what went wrong.
If the bot is running, check the logs; these should tell you what is going on.
For example, if the bot is showing an error message like "Timed out, retrying",
it is unable to reach your homeserver. In this case, check your homeserver URL
and make sure that the bot can reach it. If you are using Pantalaimon, make
sure that the bot is pointed to Pantalaimon and not directly to your
homeserver, and that Pantalaimon is running and reachable.
If you need help figuring out what went wrong, feel free to open an issue.

View file

@ -45,10 +45,11 @@ Operator = Contact details not set
# DisplayName = GPTBot
# A list of allowed users
# If not defined, everyone is allowed to use the bot
# If not defined, everyone is allowed to use the bot (so you should really define this)
# Use the "*:homeserver.matrix" syntax to allow everyone on a given homeserver
# Alternatively, you can also specify a room ID to allow everyone in the room to use the bot within that room
#
# AllowedUsers = ["*:matrix.local"]
# AllowedUsers = ["*:matrix.local", "!roomid:matrix.local"]
# Minimum level of log messages that should be printed
# Available log levels in ascending order: trace, debug, info, warning, error, critical
@ -62,20 +63,20 @@ LogLevel = info
# The Chat Completion model you want to use.
#
# Unless you are in the GPT-4 beta (if you don't know - you aren't),
# leave this as the default value (gpt-3.5-turbo)
#
# Model = gpt-3.5-turbo
# Model = gpt-4o
# The Image Generation model you want to use.
#
# ImageModel = dall-e-2
# ImageModel = dall-e-3
# Your OpenAI API key
#
# Find this in your OpenAI account:
# https://platform.openai.com/account/api-keys
#
# This may not be required for self-hosted models; in that case, just leave it
# as it is.
#
APIKey = sk-yoursecretkey
# The maximum amount of input sent to the API
@ -100,17 +101,26 @@ APIKey = sk-yoursecretkey
# The base URL of the OpenAI API
#
# Setting this allows you to use a self-hosted AI model for chat completions
# using something like https://github.com/abetlen/llama-cpp-python
# using something like llama-cpp-python or ollama
#
# BaseURL = https://openai.local/v1
# BaseURL = https://api.openai.com/v1/
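For example, pointing the bot's OpenAI client at a local ollama instance only takes a different base URL; the URL and placeholder key below assume a default ollama setup:

```python
# Sketch: an OpenAI-compatible client against a self-hosted backend.
import openai

client = openai.AsyncOpenAI(
    api_key="ollama",  # many self-hosted backends accept any non-empty key
    base_url="http://localhost:11434/v1/",  # assumed local ollama endpoint
)
```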
# Whether to force the use of tools in the chat completion model
#
# Currently, only gpt-3.5-turbo supports tools. If you set this to 1, the bot
# will use that model for tools even if you have a different model set as the
# default. It will only generate the final result using the default model.
# This will make the bot allow the use of tools in the chat completion model,
# even if the model you are using isn't known to support tools. This is useful
# if you are using a self-hosted model that supports tools, but the bot doesn't
# know about it.
#
# ForceTools = 0
# ForceTools = 1
# Whether a dedicated model should be used for tools
#
# This will make the bot use a dedicated model for tools. This is useful if you
# want to use a model that doesn't support tools, but still want to be able to
# use tools.
#
# ToolModel = gpt-4o
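A sketch of the override this setting drives (compare the `generate_chat_response` changes further down in this diff): the tool-calling round runs on the dedicated model when the configured chat model is not known to handle tools.

```python
# Sketch of the ToolModel override, mirroring the condition in
# generate_chat_response further down in this diff.
def pick_tool_round_model(
    chat_model: str,
    tool_model: str | None,
    force_tools: bool,
    known_tool_models: tuple = ("gpt-3.5-turbo", "gpt-4-turbo", "gpt-4o"),
) -> str:
    if tool_model and not (chat_model in known_tool_models or force_tools):
        return tool_model  # tool calls run on the dedicated model
    return chat_model  # otherwise, the configured chat model handles tools
```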
# Whether to emulate tools in the chat completion model
#
@ -120,6 +130,50 @@ APIKey = sk-yoursecretkey
#
# EmulateTools = 0
# Force vision in the chat completion model
#
# By default, the bot only supports image recognition in known vision models.
# If you set this to 1, the bot will assume that the model you're using supports
# vision, and will send images to the model as well. This may be required for
# some self-hosted models.
#
# ForceVision = 0
# Maximum width and height of images sent to the API if vision is enabled
#
# The OpenAI API has a limit of 2000 pixels for the long side of an image, and
# 768 pixels for the short side. You may have to adjust these values if you're
# using a self-hosted model that has different limits. You can also set these
# to 0 to disable image resizing.
#
# MaxImageLongSide = 2000
# MaxImageShortSide = 768
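The resizing these limits control is essentially the Pillow thumbnail logic used in `prepare_messages` below:

```python
# Sketch of the resize step these limits feed into: cap the long and
# short side before base64-encoding the image for the API.
from PIL import Image

def resize_for_vision(img: Image.Image, long_side: int = 2000, short_side: int = 768) -> Image.Image:
    if long_side and short_side:
        if img.width > img.height:  # landscape
            if img.width > long_side:
                img.thumbnail((long_side, short_side))
        else:  # portrait or square
            if img.height > long_side:
                img.thumbnail((short_side, long_side))
    return img
```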
# Whether the used model supports video files as input
#
# If you are using a model that supports video files as input, set this to 1.
# This will make the bot send video files to the model as well as images.
# This may be possible with some self-hosted models, but is not supported by
# the OpenAI API at this time.
#
# ForceVideoInput = 0
# Advanced settings for the OpenAI API
#
# These settings are not required for normal operation, but can be used to
# tweak the behavior of the bot.
#
# Note: These settings are not validated by the bot, so make sure they are
# correct before setting them, or the bot may not work as expected.
#
# For more information, see the OpenAI documentation:
# https://platform.openai.com/docs/api-reference/chat/create
#
# Temperature = 1
# TopP = 1
# FrequencyPenalty = 0
# PresencePenalty = 0
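These four settings map one-to-one onto the chat completion request parameters; a runnable sketch with a placeholder key:

```python
# Sketch: how the four advanced settings reach the API request.
import asyncio
import openai

async def demo() -> None:
    client = openai.AsyncOpenAI(api_key="sk-yoursecretkey")  # placeholder
    response = await client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Hello!"}],
        temperature=1.0,        # Temperature
        top_p=1.0,              # TopP
        frequency_penalty=0.0,  # FrequencyPenalty
        presence_penalty=0.0,   # PresencePenalty
    )
    print(response.choices[0].message.content)

asyncio.run(demo())
```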
###############################################################################
[WolframAlpha]
@ -143,17 +197,23 @@ APIKey = sk-yoursecretkey
Homeserver = https://matrix.local
# An Access Token for the user your bot runs as
# Can be obtained using a request like this:
#
# See https://www.matrix.org/docs/guides/client-server-api#login
# for information on how to obtain this value
#
AccessToken = syt_yoursynapsetoken
# The Matrix user ID of the bot (@local:domain.tld)
# Only specify this if the bot fails to figure it out by itself
# Instead of an Access Token, you can also use a User ID and password
# to log in. Upon first run, the bot will automatically turn this into
# an Access Token and store it in the config file, and remove the
# password from the config file.
#
# This is particularly useful if you are using Pantalaimon, where this
# is the only (easy) way to generate an Access Token.
#
# UserID = @gptbot:matrix.local
# Password = yourpassword
###############################################################################
@ -164,11 +224,6 @@ AccessToken = syt_yoursynapsetoken
#
Path = database.db
# Path of the Crypto Store - required to support encrypted rooms
# (not tested/supported yet)
#
CryptoStore = store.db
###############################################################################
[TrackingMore]
@ -180,26 +235,6 @@ CryptoStore = store.db
###############################################################################
[Replicate]
# API key for replicate.com
# Can be used to run lots of different AI models
# If not defined, the features that depend on it are not available
#
# APIKey = r8_alotoflettersandnumbershere
###############################################################################
[HuggingFace]
# API key for Hugging Face
# Can be used to run lots of different AI models
# If not defined, the features that depend on it are not available
#
# APIKey = __________________________
###############################################################################
[OpenWeatherMap]
# API key for OpenWeatherMap

View file

@ -0,0 +1,7 @@
[Homeserver]
Homeserver = https://example.com
ListenAddress = localhost
ListenPort = 8009
IgnoreVerification = True
LogLevel = debug
UseKeyring = no

docker-compose.yml Normal file
View file

@ -0,0 +1,15 @@
version: '3.8'
services:
gptbot:
image: kumitterer/matrix-gptbot
volumes:
- ./config.ini:/app/config.ini
- ./database.db:/app/database.db
pantalaimon:
image: matrixdotorg/pantalaimon
volumes:
- ./pantalaimon.conf:/etc/pantalaimon/pantalaimon.conf
ports:
- "8009:8009"

View file

@ -1,5 +0,0 @@
[Homeserver]
Homeserver = https://example.com
ListenAddress = localhost
ListenPort = 8010
IgnoreVerification = True

View file

@ -7,63 +7,51 @@ allow-direct-references = true
[project]
name = "matrix-gptbot"
version = "0.3.9.dev0"
version = "0.3.20"
authors = [
{ name="Kumi Mitterer", email="gptbot@kumi.email" },
{ name="Private.coffee Team", email="support@private.coffee" },
{ name = "Kumi", email = "gptbot@kumi.email" },
{ name = "Private.coffee Team", email = "support@private.coffee" },
]
description = "Multifunctional Chatbot for Matrix"
readme = "README.md"
license = { file="LICENSE" }
license = { file = "LICENSE" }
requires-python = ">=3.10"
packages = [
"src/gptbot"
]
packages = ["src/gptbot"]
classifiers = [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
]
dependencies = [
"matrix-nio[e2e]",
"markdown2[all]",
"tiktoken",
"python-magic",
"pillow",
]
"matrix-nio[e2e]>=0.24.0",
"markdown2[all]",
"tiktoken",
"python-magic",
"pillow",
"future>=1.0.0",
]
[project.optional-dependencies]
openai = [
"openai>=1.2",
"pydub",
]
openai = ["openai>=1.2", "pydub"]
wolframalpha = [
"wolframalpha",
]
google = ["google-generativeai"]
e2ee = [
"pantalaimon>=0.10.5",
]
wolframalpha = ["wolframalpha"]
trackingmore = ["trackingmore-api-tool"]
all = [
"matrix-gptbot[openai,wolframalpha,e2ee]",
"matrix-gptbot[openai,wolframalpha,trackingmore,google]",
"geopy",
"beautifulsoup4",
]
dev = [
"matrix-gptbot[all]",
"black",
"hatchling",
"twine",
"build",
]
dev = ["matrix-gptbot[all]", "black", "hatchling", "twine", "build", "ruff"]
[project.urls]
"Homepage" = "https://git.private.coffee/privatecoffee/matrix-gptbot"
@ -71,7 +59,7 @@ dev = [
"Source Code" = "https://git.private.coffee/privatecoffee/matrix-gptbot"
[project.scripts]
gptbot = "gptbot.__main__:main"
gptbot = "gptbot.__main__:main_sync"
[tool.hatch.build.targets.wheel]
packages = ["src/gptbot"]
packages = ["src/gptbot"]

View file

@ -5,19 +5,22 @@ from configparser import ConfigParser
import signal
import asyncio
import pkg_resources
import importlib.metadata
def sigterm_handler(_signo, _stack_frame):
exit()
def get_version():
try:
package_version = pkg_resources.get_distribution("matrix_gptbot").version
except pkg_resources.DistributionNotFound:
package_version = importlib.metadata.version("matrix_gptbot")
except Exception:
return None
return package_version
def main():
async def main():
# Parse command line arguments
parser = ArgumentParser()
parser.add_argument(
@ -40,19 +43,28 @@ def main():
config.read(args.config)
# Create bot
bot = GPTBot.from_config(config)
bot, new_config = await GPTBot.from_config(config)
# Update config with new values
if new_config:
with open(args.config, "w") as configfile:
new_config.write(configfile)
# Listen for SIGTERM
signal.signal(signal.SIGTERM, sigterm_handler)
# Start bot
try:
asyncio.run(bot.run())
await bot.run()
except KeyboardInterrupt:
print("Received KeyboardInterrupt - exiting...")
except SystemExit:
print("Received SIGTERM - exiting...")
def main_sync():
asyncio.run(main())
if __name__ == "__main__":
main()
main_sync()

View file

@ -1,32 +1,24 @@
from nio import (
RoomMessageText,
InviteEvent,
Event,
SyncResponse,
JoinResponse,
RoomMemberEvent,
Response,
MegolmEvent,
KeysQueryResponse
)
from .test import test_callback
from .sync import sync_callback
from .invite import room_invite_callback
from .join import join_callback
from .message import message_callback
from .roommember import roommember_callback
from .test_response import test_response_callback
RESPONSE_CALLBACKS = {
#Response: test_response_callback,
SyncResponse: sync_callback,
JoinResponse: join_callback,
}
EVENT_CALLBACKS = {
#Event: test_callback,
InviteEvent: room_invite_callback,
RoomMessageText: message_callback,
RoomMemberEvent: roommember_callback,
}
}

View file

@ -2,9 +2,9 @@ from nio import InviteEvent, MatrixRoom
async def room_invite_callback(room: MatrixRoom, event: InviteEvent, bot):
if room.room_id in bot.matrix_client.rooms:
logging(f"Already in room {room.room_id} - ignoring invite")
bot.logger.log(f"Already in room {room.room_id} - ignoring invite")
return
bot.logger.log(f"Received invite to room {room.room_id} - joining...")
response = await bot.matrix_client.join(room.room_id)
await bot.matrix_client.join(room.room_id)

View file

@ -8,12 +8,12 @@ async def join_callback(response, bot):
with closing(bot.database.cursor()) as cursor:
cursor.execute(
"SELECT space_id FROM user_spaces WHERE user_id = ? AND active = TRUE", (event.sender,))
"SELECT space_id FROM user_spaces WHERE user_id = ? AND active = TRUE", (response.sender,))
space = cursor.fetchone()
if space:
bot.logger.log(f"Adding new room to space {space[0]}...")
await bot.add_rooms_to_space(space[0], [new_room.room_id])
await bot.add_rooms_to_space(space[0], [response.room_id])
bot.matrix_client.keys_upload()

View file

@ -1,4 +1,4 @@
from nio import RoomMemberEvent, MatrixRoom, KeysUploadError
from nio import RoomMemberEvent, MatrixRoom
async def roommember_callback(room: MatrixRoom, event: RoomMemberEvent, bot):
if event.membership == "leave":

View file

@ -1,11 +0,0 @@
from nio import MatrixRoom, Event
async def test_callback(room: MatrixRoom, event: Event, bot):
"""Test callback for debugging purposes.
Args:
room (MatrixRoom): The room the event was sent in.
event (Event): The event that was sent.
"""
bot.logger.log(f"Test callback called: {room.room_id} {event.event_id} {event.sender} {event.__class__}")

View file

@ -1,11 +0,0 @@
from nio import ErrorResponse
async def test_response_callback(response, bot):
if isinstance(response, ErrorResponse):
bot.logger.log(
f"Error response received ({response.__class__.__name__}): {response.message}",
"warning",
)
else:
bot.logger.log(f"{response.__class__} response received", "debug")

View file

View file

@ -0,0 +1,76 @@
from ...classes.logging import Logger
import asyncio
from functools import partial
from typing import Any, AsyncGenerator, Dict, Optional, Mapping
from nio import Event
class AttributeDictionary(dict):
def __init__(self, *args, **kwargs):
super(AttributeDictionary, self).__init__(*args, **kwargs)
self.__dict__ = self
class BaseAI:
bot: Any
logger: Logger
def __init__(self, bot, config: Mapping, logger: Optional[Logger] = None):
self.bot = bot
self.logger = logger or bot.logger or Logger()
self._config = config
@property
def chat_api(self) -> str:
return self.chat_model
async def prepare_messages(
self, event: Event, messages: list[Any], system_message: Optional[str] = None
) -> list[Any]:
"""A helper method to prepare messages for the AI.
This converts a list of Matrix messages into whatever format the AI requires.
Args:
event (Event): The event that triggered the message generation. Generally a text message from a user.
messages (list[Dict[str, str]]): The messages to prepare. Generally of type RoomMessage*.
system_message (Optional[str], optional): A system message to include. Defaults to None.
Returns:
list[Any]: The prepared messages in the format the AI requires.
Raises:
NotImplementedError: If the method is not implemented in the subclass.
"""
raise NotImplementedError(
"Implementations of BaseAI must implement prepare_messages."
)
async def _request_with_retries(
self, request: partial, attempts: int = 5, retry_interval: int = 2
) -> AsyncGenerator[Any | list | Dict, None]:
"""Retry a request a set number of times if it fails.
Args:
request (partial): The request to make with retries.
attempts (int, optional): The number of attempts to make. Defaults to 5.
retry_interval (int, optional): The interval in seconds between attempts. Defaults to 2 seconds.
Returns:
AsyncGenerator[Any | list | Dict, None]: The response for the request.
"""
current_attempt = 1
while current_attempt <= attempts:
try:
response = await request()
return response
except Exception as e:
self.logger.log(f"Request failed: {e}", "error")
self.logger.log(f"Retrying in {retry_interval} seconds...")
await asyncio.sleep(retry_interval)
current_attempt += 1
raise Exception("Request failed after all attempts.")

View file

@ -0,0 +1,73 @@
from .base import BaseAI
from ..logging import Logger
from typing import Optional, Mapping, List, Dict, Tuple
import google.generativeai as genai
class GeminiAI(BaseAI):
api_code: str = "google"
@property
def chat_api(self) -> str:
return self.chat_model
gemini_api: genai.GenerativeModel
operator: str = "Google (https://ai.google)"
def __init__(
self,
bot,
config: Mapping,
logger: Optional[Logger] = None,
):
super().__init__(bot, config, logger)
genai.configure(api_key=self.api_key)
self.gemini_api = genai.GenerativeModel(self.chat_model)
@property
def api_key(self):
return self._config["APIKey"]
@property
def chat_model(self):
return self._config.get("Model", fallback="gemini-pro")
def prepare_messages(self, event, messages: List[Dict[str, str]]) -> List[str]:
return [message["content"] for message in messages]
async def generate_chat_response(
self,
messages: List[Dict[str, str]],
user: Optional[str] = None,
room: Optional[str] = None,
use_tools: bool = True,
model: Optional[str] = None,
) -> Tuple[str, int]:
"""Generate a response to a chat message.
Args:
messages (List[Dict[str, str]]): A list of messages to use as context.
user (Optional[str], optional): The user to use the assistant for. Defaults to None.
room (Optional[str], optional): The room to use the assistant for. Defaults to None.
use_tools (bool, optional): Whether to use tools. Defaults to True.
model (Optional[str], optional): The model to use. Defaults to None, which uses the default chat model.
Returns:
Tuple[str, int]: The response text and the number of tokens used.
"""
self.logger.log(
f"Generating response to {len(messages)} messages for user {user} in room {room}..."
)
messages = self.prepare_messages(None, messages)
response = self.gemini_api.generate_content(messages)
# google-generativeai does not report token usage in this call, so
# return 0 as a placeholder to satisfy the documented return type
return response.text, 0
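For reference, the minimal `google-generativeai` flow this class wraps (the API key is a placeholder):

```python
# Minimal google-generativeai usage the GeminiAI class wraps.
import google.generativeai as genai

genai.configure(api_key="your-google-api-key")  # placeholder key
model = genai.GenerativeModel("gemini-pro")
response = model.generate_content("Say hello in one sentence.")
print(response.text)
```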

View file

@ -2,20 +2,30 @@ import openai
import requests
import tiktoken
import asyncio
import json
import base64
import json
import inspect
from functools import partial
from contextlib import closing
from typing import Dict, List, Tuple, Generator, AsyncGenerator, Optional, Any
from typing import Dict, List, Tuple, Generator, Optional, Mapping, Any
from io import BytesIO
from pydub import AudioSegment
from PIL import Image
from nio import (
RoomMessageNotice,
RoomMessageText,
RoomMessageAudio,
RoomMessageFile,
RoomMessageImage,
RoomMessageVideo,
Event,
)
from .logging import Logger
from ..tools import TOOLS, Handover, StopProcessing
from ..logging import Logger
from ...tools import TOOLS, Handover, StopProcessing
from ..exceptions import DownloadException
from .base import BaseAI, AttributeDictionary
ASSISTANT_CODE_INTERPRETER = [
{
@ -24,58 +34,126 @@ ASSISTANT_CODE_INTERPRETER = [
]
class AttributeDictionary(dict):
def __init__(self, *args, **kwargs):
super(AttributeDictionary, self).__init__(*args, **kwargs)
self.__dict__ = self
class OpenAI:
api_key: str
chat_model: str = "gpt-3.5-turbo"
logger: Logger
class OpenAI(BaseAI):
api_code: str = "openai"
@property
def chat_api(self) -> str:
return self.chat_model
classification_api = chat_api
image_model: str = "dall-e-2"
tts_model: str = "tts-1-hd"
tts_voice: str = "alloy"
stt_model: str = "whisper-1"
openai_api: openai.AsyncOpenAI
operator: str = "OpenAI ([https://openai.com](https://openai.com))"
def __init__(
self,
bot,
api_key,
chat_model=None,
image_model=None,
tts_model=None,
tts_voice=None,
stt_model=None,
base_url=None,
logger=None,
config: Mapping,
logger: Optional[Logger] = None,
):
self.bot = bot
self.api_key = api_key
self.chat_model = chat_model or self.chat_model
self.image_model = image_model or self.image_model
self.logger = logger or bot.logger or Logger()
self.base_url = base_url or openai.base_url
super().__init__(bot, config, logger)
self.openai_api = openai.AsyncOpenAI(
api_key=self.api_key, base_url=self.base_url
)
self.tts_model = tts_model or self.tts_model
self.tts_voice = tts_voice or self.tts_voice
self.stt_model = stt_model or self.stt_model
# TODO: Add descriptions for these properties
@property
def api_key(self):
return self._config["APIKey"]
@property
def chat_model(self):
return self._config.get("Model", fallback="gpt-4o")
@property
def image_model(self):
return self._config.get("ImageModel", fallback="dall-e-3")
@property
def tts_model(self):
return self._config.get("TTSModel", fallback="tts-1-hd")
@property
def tts_voice(self):
return self._config.get("TTSVoice", fallback="alloy")
@property
def stt_model(self):
return self._config.get("STTModel", fallback="whisper-1")
@property
def base_url(self):
return self._config.get("BaseURL", fallback="https://api.openai.com/v1/")
@property
def temperature(self):
return self._config.getfloat("Temperature", fallback=1.0)
@property
def top_p(self):
return self._config.getfloat("TopP", fallback=1.0)
@property
def frequency_penalty(self):
return self._config.getfloat("FrequencyPenalty", fallback=0.0)
@property
def presence_penalty(self):
return self._config.getfloat("PresencePenalty", fallback=0.0)
@property
def force_vision(self):
return self._config.getboolean("ForceVision", fallback=False)
@property
def force_video_input(self):
return self._config.getboolean("ForceVideoInput", fallback=False)
@property
def force_tools(self):
return self._config.getboolean("ForceTools", fallback=False)
@property
def tool_model(self):
return self._config.get("ToolModel")
@property
def vision_model(self):
return self._config.get("VisionModel")
@property
def emulate_tools(self):
return self._config.getboolean("EmulateTools", fallback=False)
@property
def max_tokens(self):
# TODO: This should be model-specific
return self._config.getint("MaxTokens", fallback=4000)
@property
def max_messages(self):
return self._config.getint("MaxMessages", fallback=30)
@property
def max_image_long_side(self):
return self._config.getint("MaxImageLongSide", fallback=2000)
@property
def max_image_short_side(self):
return self._config.getint("MaxImageShortSide", fallback=768)
def _is_tool_model(self, model: str) -> bool:
return model in ("gpt-3.5-turbo", "gpt-4-turbo", "gpt-4o")
def _is_vision_model(self, model: str) -> bool:
return model in ("gpt-4-turbo", "gpt-4o") or "vision" in model
def supports_chat_images(self):
return "vision" in self.chat_model
return self._is_vision_model(self.chat_model) or self.force_vision
def supports_chat_videos(self):
return self.force_video_input
def json_decode(self, data):
if data.startswith("```json\n"):
@ -88,36 +166,320 @@ class OpenAI:
try:
return json.loads(data)
except:
except Exception:
return False
async def _request_with_retries(
self, request: partial, attempts: int = 5, retry_interval: int = 2
) -> AsyncGenerator[Any | list | Dict, None]:
"""Retry a request a set number of times if it fails.
async def prepare_messages(
self,
event: Event,
messages: List[Dict[str, str]],
system_message=None,
room=None,
) -> List[Any]:
chat_messages = []
self.logger.log(f"Incoming messages: {messages}", "debug")
messages.append(event)
for message in messages:
if isinstance(message, (RoomMessageNotice, RoomMessageText)):
role = (
"assistant"
if message.sender == self.bot.matrix_client.user_id
else "user"
)
if message == event or (not message.event_id == event.event_id):
message_body = (
message.body
if not self.supports_chat_images()
else [{"type": "text", "text": message.body}]
)
chat_messages.append({"role": role, "content": message_body})
elif isinstance(message, RoomMessageAudio) or (
isinstance(message, RoomMessageFile) and message.body.endswith(".mp3")
):
role = (
"assistant"
if message.sender == self.bot.matrix_client.user_id
else "user"
)
if message == event or (not message.event_id == event.event_id):
if room and self.bot.room_uses_stt(room):
try:
download = await self.bot.download_file(
message.url, raise_error=True
)
message_text = await self.bot.stt_api.speech_to_text(
download.body
)
except Exception as e:
self.logger.log(
f"Error generating text from audio: {e}", "error"
)
message_text = message.body
else:
message_text = message.body
message_body = (
message_text
if not self.supports_chat_images()
else [{"type": "text", "text": message_text}]
)
chat_messages.append({"role": role, "content": message_body})
elif isinstance(message, RoomMessageFile):
try:
download = await self.bot.download_file(
message.url, raise_error=True
)
if download:
try:
text = download.body.decode("utf-8")
except UnicodeDecodeError:
text = None
if text:
role = (
"assistant"
if message.sender == self.bot.matrix_client.user_id
else "user"
)
if message == event or (
not message.event_id == event.event_id
):
message_body = (
text
if not self.supports_chat_images()
else [{"type": "text", "text": text}]
)
chat_messages.append(
{"role": role, "content": message_body}
)
except Exception as e:
self.logger.log(f"Error generating text from file: {e}", "error")
message_body = (
message.body
if not self.supports_chat_images()
else [{"type": "text", "text": message.body}]
)
chat_messages.append({"role": "system", "content": message_body})
elif self.supports_chat_images() and isinstance(message, RoomMessageImage):
try:
image_url = message.url
download = await self.bot.download_file(image_url, raise_error=True)
if download:
pil_image = Image.open(BytesIO(download.body))
file_format = pil_image.format or "PNG"
max_long_side = self.max_image_long_side
max_short_side = self.max_image_short_side
if max_long_side and max_short_side:
if pil_image.width > pil_image.height:
if pil_image.width > max_long_side:
pil_image.thumbnail((max_long_side, max_short_side))
else:
if pil_image.height > max_long_side:
pil_image.thumbnail((max_short_side, max_long_side))
bio = BytesIO()
pil_image.save(bio, format=file_format)
encoded_url = f"data:{download.content_type};base64,{base64.b64encode(bio.getvalue()).decode('utf-8')}"
parent = (
chat_messages[-1]
if chat_messages
and chat_messages[-1]["role"]
== (
"assistant"
if message.sender == self.bot.matrix_client.user_id
else "user"
)
else None
)
if not parent:
chat_messages.append(
{
"role": (
"assistant"
if message.sender == self.bot.matrix_client.user_id
else "user"
),
"content": [],
}
)
parent = chat_messages[-1]
parent["content"].append(
{"type": "image_url", "image_url": {"url": encoded_url}}
)
except Exception as e:
if room and isinstance(e, DownloadException):
await self.bot.send_message(
room,
f"Could not process image due to download error: {e.args[0]}",
True,
)
self.logger.log(f"Error generating image from file: {e}", "error")
message_body = (
message.body
if not self.supports_chat_images()
else [{"type": "text", "text": message.body}]
)
chat_messages.append({"role": "system", "content": message_body})
elif self.supports_chat_videos() and (
isinstance(message, RoomMessageVideo)
or (
isinstance(message, RoomMessageFile)
and message.body.endswith(".mp4")
)
):
try:
video_url = message.url
download = await self.bot.download_file(video_url, raise_error=True)
if download:
video = BytesIO(download.body)
video_url = f"data:{download.content_type};base64,{base64.b64encode(video.getvalue()).decode('utf-8')}"
parent = (
chat_messages[-1]
if chat_messages
and chat_messages[-1]["role"]
== (
"assistant"
if message.sender == self.bot.matrix_client.user_id
else "user"
)
else None
)
if not parent:
chat_messages.append(
{
"role": (
"assistant"
if message.sender == self.bot.matrix_client.user_id
else "user"
),
"content": [],
}
)
parent = chat_messages[-1]
parent["content"].append(
{"type": "image_url", "image_url": {"url": video_url}}
)
except Exception as e:
if room and isinstance(e, DownloadException):
await self.bot.send_message(
room,
f"Could not process video due to download error: {e.args[0]}",
True,
)
self.logger.log(f"Error generating video from file: {e}", "error")
message_body = (
message.body
if not self.supports_chat_images()
else [{"type": "text", "text": message.body}]
)
chat_messages.append({"role": "system", "content": message_body})
self.logger.log(f"Prepared messages: {chat_messages}", "debug")
# Truncate messages to fit within the token limit
chat_messages = self._truncate(
messages=chat_messages,
max_tokens=self.max_tokens - 1,
system_message=system_message,
)
return chat_messages
def _truncate(
self,
messages: List[Any],
max_tokens: Optional[int] = None,
model: Optional[str] = None,
system_message: Optional[str] = None,
) -> List[Any]:
"""Truncate messages to fit within the token limit.
Args:
request (partial): The request to make with retries.
attempts (int, optional): The number of attempts to make. Defaults to 5.
retry_interval (int, optional): The interval in seconds between attempts. Defaults to 2 seconds.
messages (List[Any]): The messages to truncate.
max_tokens (Optional[int], optional): The maximum number of tokens to use. Defaults to None, which uses the default token limit.
model (Optional[str], optional): The model to use. Defaults to None, which uses the default chat model.
system_message (Optional[str], optional): The system message to use. Defaults to None, which uses the default system message.
Returns:
AsyncGenerator[Any | list | Dict, None]: The OpenAI response for the request.
List[Any]: The truncated messages.
"""
# call the request function and return the response if it succeeds, else retry
current_attempt = 1
while current_attempt <= attempts:
try:
response = await request()
return response
except Exception as e:
self.logger.log(f"Request failed: {e}", "error")
self.logger.log(f"Retrying in {retry_interval} seconds...")
await asyncio.sleep(retry_interval)
current_attempt += 1
# if all attempts failed, raise an exception
raise Exception("Request failed after all attempts.")
max_tokens = max_tokens or self.max_tokens
model = model or self.chat_model
system_message = (
self.bot.default_system_message
if system_message is None
else system_message
)
try:
encoding = tiktoken.encoding_for_model(model)
except Exception:
# TODO: Handle this more gracefully
encoding = tiktoken.encoding_for_model("gpt-3.5-turbo")
total_tokens = 0
system_message_tokens = (
0 if not system_message else (len(encoding.encode(system_message)) + 1)
)
if system_message_tokens > max_tokens:
self.logger.log(
f"System message is too long to fit within token limit ({system_message_tokens} tokens) - cannot proceed",
"error",
)
return []
total_tokens += system_message_tokens
truncated_messages = []
self.logger.log(f"Messages: {messages}", "debug")
for message in [messages[0]] + list(reversed(messages[1:])):
content = (
message["content"]
if isinstance(message["content"], str)
else (
message["content"][0]["text"]
if isinstance(message["content"][0].get("text"), str)
else ""
)
)
tokens = len(encoding.encode(content)) + 1
if total_tokens + tokens > max_tokens:
break
total_tokens += tokens
truncated_messages.append(message)
return [truncated_messages[0]] + list(reversed(truncated_messages[1:]))
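In other words, the first message (the system slot) always survives and the rest are kept newest-first until the budget runs out; a toy rendering with character counts standing in for tokens:

```python
# Toy rendering of the truncation order above: keep messages[0],
# then walk the rest newest-first until the budget is exhausted,
# and restore chronological order at the end. len() stands in
# for the tokenizer here.
def truncate(messages: list[str], budget: int) -> list[str]:
    total = len(messages[0])
    kept = [messages[0]]
    for message in reversed(messages[1:]):
        if total + len(message) > budget:
            break
        total += len(message)
        kept.append(message)
    return [kept[0]] + list(reversed(kept[1:]))

print(truncate(["sys", "oldest", "older", "newest"], budget=14))
# -> ['sys', 'older', 'newest']
```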
async def generate_chat_response(
self,
@ -162,9 +524,9 @@ class OpenAI:
)
if count > 5:
self.logger.log(f"Recursion depth exceeded, aborting.")
self.logger.log("Recursion depth exceeded, aborting.")
return await self.generate_chat_response(
messsages=messages,
messages=messages,
user=user,
room=room,
allow_override=False, # TODO: Could this be a problem?
@ -186,10 +548,15 @@ class OpenAI:
original_messages = messages
if allow_override and not "gpt-3.5-turbo" in original_model:
if self.bot.config.getboolean("OpenAI", "ForceTools", fallback=False):
self.logger.log(f"Overriding chat model to use tools")
chat_model = "gpt-3.5-turbo-0125"
if (
allow_override
and use_tools
and self.tool_model
and not (self._is_tool_model(chat_model) or self.force_tools)
):
if self.tool_model:
self.logger.log("Overriding chat model to use tools")
chat_model = self.tool_model
out_messages = []
@ -214,9 +581,9 @@ class OpenAI:
if (
use_tools
and self.bot.config.getboolean("OpenAI", "EmulateTools", fallback=False)
and not self.bot.config.getboolean("OpenAI", "ForceTools", fallback=False)
and not "gpt-3.5-turbo" in chat_model
and self.emulate_tools
and not self.force_tools
and not self._is_tool_model(chat_model)
):
self.bot.logger.log("Using tool emulation mode.", "debug")
@ -257,19 +624,33 @@ class OpenAI:
"model": chat_model,
"messages": messages,
"user": room,
"temperature": self.temperature,
"top_p": self.top_p,
"frequency_penalty": self.frequency_penalty,
"presence_penalty": self.presence_penalty,
}
if "gpt-3.5-turbo" in chat_model and use_tools:
if (self._is_tool_model(chat_model) and use_tools) or self.force_tools:
kwargs["tools"] = tools
# TODO: Look into this
if "gpt-4" in chat_model:
kwargs["max_tokens"] = self.bot.config.getint(
"OpenAI", "MaxTokens", fallback=4000
)
kwargs["max_tokens"] = self.max_tokens
api_url = self.base_url
if chat_model.startswith("gpt-"):
if not self.chat_model.startswith("gpt-"):
# The model is overridden, we have to ensure that OpenAI is used
if self.api_key.startswith("sk-"):
self.openai_api.base_url = "https://api.openai.com/v1/"
chat_partial = partial(self.openai_api.chat.completions.create, **kwargs)
response = await self._request_with_retries(chat_partial)
# Setting back the API URL to whatever it was before
self.openai_api.base_url = api_url
choice = response.choices[0]
result_text = choice.message.content
@ -284,7 +665,7 @@ class OpenAI:
tool_response = await self.bot.call_tool(
tool_call, room=room, user=user
)
if tool_response != False:
if tool_response is not False:
tool_responses.append(
{
"role": "tool",
@ -305,7 +686,7 @@ class OpenAI:
)
if not tool_responses:
self.logger.log(f"No more responses received, aborting.")
self.logger.log("No more responses received, aborting.")
result_text = False
else:
try:
@ -321,7 +702,7 @@ class OpenAI:
except openai.APIError as e:
if e.code == "max_tokens":
self.logger.log(
f"Max tokens exceeded, falling back to no-tools response."
"Max tokens exceeded, falling back to no-tools response."
)
try:
new_messages = []
@ -370,7 +751,6 @@ class OpenAI:
elif isinstance((tool_object := self.json_decode(result_text)), dict):
if "tool" in tool_object:
tool_name = tool_object["tool"]
tool_class = TOOLS[tool_name]
tool_parameters = (
tool_object["parameters"] if "parameters" in tool_object else {}
)
@ -394,7 +774,7 @@ class OpenAI:
tool_response = await self.bot.call_tool(
tool_call, room=room, user=user
)
if tool_response != False:
if tool_response is not False:
tool_responses = [
{
"role": "system",
@ -414,7 +794,7 @@ class OpenAI:
)
if not tool_responses:
self.logger.log(f"No response received, aborting.")
self.logger.log("No response received, aborting.")
result_text = False
else:
try:
@ -482,9 +862,14 @@ class OpenAI:
model=original_model,
)
if not result_text:
self.logger.log(
"Received an empty response from the OpenAI endpoint.", "debug"
)
try:
tokens_used = response.usage.total_tokens
except:
except Exception:
tokens_used = 0
self.logger.log(f"Generated response with {tokens_used} tokens.")
@ -523,7 +908,7 @@ Only the event_types mentioned above are allowed, you must not respond in any ot
try:
result = json.loads(response.choices[0].message["content"])
except:
except Exception:
result = {"type": "chat", "prompt": query}
tokens_used = response.usage["total_tokens"]
@ -566,7 +951,7 @@ Only the event_types mentioned above are allowed, you must not respond in any ot
Returns:
Tuple[str, int]: The text and the number of tokens used.
"""
self.logger.log(f"Generating text from speech...")
self.logger.log("Generating text from speech...")
audio_file = BytesIO()
AudioSegment.from_file(BytesIO(audio)).export(audio_file, format="mp3")
@ -653,18 +1038,20 @@ Only the event_types mentioned above are allowed, you must not respond in any ot
Returns:
Tuple[str, int]: The description and the number of tokens used.
"""
self.logger.log(f"Generating description for images in conversation...")
self.logger.log("Generating description for images in conversation...")
system_message = "You are an image description generator. You generate descriptions for all images in the current conversation, one after another."
messages = [{"role": "system", "content": system_message}] + messages[1:]
if not "vision" in (chat_model := self.chat_model):
chat_model = self.chat_model + "gpt-4-vision-preview"
chat_model = self.chat_model
if not self._is_vision_model(chat_model):
chat_model = self.vision_model or "gpt-4o"
chat_partial = partial(
self.openai_api.chat.completions.create,
model=self.chat_model,
model=chat_model,
messages=messages,
user=str(user),
)

View file

@ -1,7 +1,6 @@
import markdown2
import tiktoken
import asyncio
import functools
from PIL import Image
@ -15,8 +14,6 @@ from nio import (
MatrixRoom,
Api,
RoomMessagesError,
GroupEncryptionError,
EncryptionError,
RoomMessageText,
RoomSendResponse,
SyncResponse,
@ -27,42 +24,34 @@ from nio import (
RoomVisibility,
RoomCreateError,
RoomMessageMedia,
RoomMessageImage,
RoomMessageFile,
RoomMessageAudio,
DownloadError,
DownloadResponse,
ToDeviceEvent,
ToDeviceError,
RoomGetStateError,
DiskDownloadResponse,
MemoryDownloadResponse,
LoginError,
)
from nio.store import SqliteStore
from typing import Optional, List
from typing import Optional, List, Any, Union
from configparser import ConfigParser
from datetime import datetime
from io import BytesIO
from pathlib import Path
from contextlib import closing
import base64
import uuid
import traceback
import json
import importlib.util
import sys
import sqlite3
import traceback
from .logging import Logger
from ..migrations import migrate
from ..callbacks import RESPONSE_CALLBACKS, EVENT_CALLBACKS
from ..commands import COMMANDS
from ..tools import TOOLS, Handover, StopProcessing
from .openai import OpenAI
from .wolframalpha import WolframAlpha
from .trackingmore import TrackingMore
from .ai.base import BaseAI
from .exceptions import DownloadException
class GPTBot:
@ -72,12 +61,13 @@ class GPTBot:
matrix_client: Optional[AsyncClient] = None
sync_token: Optional[str] = None
logger: Optional[Logger] = Logger()
chat_api: Optional[OpenAI] = None
image_api: Optional[OpenAI] = None
classification_api: Optional[OpenAI] = None
tts_api: Optional[OpenAI] = None
stt_api: Optional[OpenAI] = None
parcel_api: Optional[TrackingMore] = None
chat_api: Optional[BaseAI] = None
image_api: Optional[BaseAI] = None
classification_api: Optional[BaseAI] = None
tts_api: Optional[BaseAI] = None
stt_api: Optional[BaseAI] = None
parcel_api: Optional[Any] = None
calculation_api: Optional[Any] = None
room_ignore_list: List[str] = [] # List of rooms to ignore invites from
logo: Optional[Image.Image] = None
logo_uri: Optional[str] = None
@ -94,7 +84,7 @@ class GPTBot:
"""
try:
return json.loads(self.config["GPTBot"]["AllowedUsers"])
except:
except Exception:
return []
@property
@ -136,26 +126,6 @@ class GPTBot:
"""
return self.config["GPTBot"].getboolean("ForceSystemMessage", False)
@property
def max_tokens(self) -> int:
"""Maximum number of input tokens.
Returns:
int: The maximum number of input tokens. Defaults to 3000.
"""
return self.config["OpenAI"].getint("MaxTokens", 3000)
# TODO: Move this to OpenAI class
@property
def max_messages(self) -> int:
"""Maximum number of messages to consider as input.
Returns:
int: The maximum number of messages to consider as input. Defaults to 30.
"""
return self.config["OpenAI"].getint("MaxMessages", 30)
# TODO: Move this to OpenAI class
@property
def operator(self) -> Optional[str]:
"""Operator of the bot.
@ -198,7 +168,7 @@ class GPTBot:
USER_AGENT = "matrix-gptbot/dev (+https://kumig.it/kumitterer/matrix-gptbot)"
@classmethod
def from_config(cls, config: ConfigParser):
async def from_config(cls, config: ConfigParser):
"""Create a new GPTBot instance from a config file.
Args:
@ -230,44 +200,70 @@ class GPTBot:
if Path(bot.logo_path).exists() and Path(bot.logo_path).is_file():
bot.logo = Image.open(bot.logo_path)
bot.chat_api = bot.image_api = bot.classification_api = bot.tts_api = (
bot.stt_api
) = OpenAI(
bot=bot,
api_key=config["OpenAI"]["APIKey"],
chat_model=config["OpenAI"].get("Model"),
image_model=config["OpenAI"].get("ImageModel"),
tts_model=config["OpenAI"].get("TTSModel"),
stt_model=config["OpenAI"].get("STTModel"),
base_url=config["OpenAI"].get("BaseURL"),
)
# Set up OpenAI
assert (
"OpenAI" in config
), "OpenAI config not found" # TODO: Update this to support other providers
if "BaseURL" in config["OpenAI"]:
bot.chat_api.base_url = config["OpenAI"]["BaseURL"]
bot.image_api = None
from .ai.openai import OpenAI
openai_api = OpenAI(bot=bot, config=config["OpenAI"])
if "Model" in config["OpenAI"]:
bot.chat_api = openai_api
bot.classification_api = openai_api
if "ImageModel" in config["OpenAI"]:
bot.image_api = openai_api
if "TTSModel" in config["OpenAI"]:
bot.tts_api = openai_api
if "STTModel" in config["OpenAI"]:
bot.stt_api = openai_api
# Set up WolframAlpha
if "WolframAlpha" in config:
from .wolframalpha import WolframAlpha
bot.calculation_api = WolframAlpha(
config["WolframAlpha"]["APIKey"], bot.logger
)
# Set up TrackingMore
if "TrackingMore" in config:
from .trackingmore import TrackingMore
bot.parcel_api = TrackingMore(config["TrackingMore"]["APIKey"], bot.logger)
# Set up the Matrix client
assert "Matrix" in config, "Matrix config not found"
homeserver = config["Matrix"]["Homeserver"]
bot.matrix_client = AsyncClient(homeserver)
bot.matrix_client.access_token = config["Matrix"]["AccessToken"]
bot.matrix_client.user_id = config["Matrix"].get("UserID")
bot.matrix_client.device_id = config["Matrix"].get("DeviceID")
# Return the new GPTBot instance
return bot
if config.get("Matrix", "Password", fallback=""):
if not config.get("Matrix", "UserID", fallback=""):
raise Exception("Cannot log in: UserID not set in config")
bot.matrix_client = AsyncClient(homeserver, user=config["Matrix"]["UserID"])
login = await bot.matrix_client.login(password=config["Matrix"]["Password"])
if isinstance(login, LoginError):
raise Exception(f"Could not log in: {login.message}")
config["Matrix"]["AccessToken"] = bot.matrix_client.access_token
config["Matrix"]["DeviceID"] = bot.matrix_client.device_id
config["Matrix"]["Password"] = ""
else:
bot.matrix_client = AsyncClient(homeserver)
bot.matrix_client.access_token = config["Matrix"]["AccessToken"]
bot.matrix_client.user_id = config["Matrix"].get("UserID")
bot.matrix_client.device_id = config["Matrix"].get("DeviceID")
# Return the new GPTBot instance and the (potentially modified) config
return bot, config
async def _get_user_id(self) -> str:
"""Get the user ID of the bot from the whoami endpoint.
@ -300,7 +296,7 @@ class GPTBot:
ignore_notices: bool = True,
):
messages = []
n = n or self.max_messages
n = n or self.chat_api.max_messages
room_id = room.room_id if isinstance(room, MatrixRoom) else room
self.logger.log(
@ -327,8 +323,14 @@ class GPTBot:
try:
event_type = event.source["content"]["msgtype"]
except KeyError:
self.logger.log(f"Could not process event: {event}", "debug")
continue # This is most likely not a message event
if event.__class__.__name__ in ("RoomMemberEvent",):
self.logger.log(
f"Ignoring event of type {event.__class__.__name__}",
"debug",
)
continue
self.logger.log(f"Could not process event: {event}", "warning")
continue # This is most likely not a message event
if event_type.startswith("gptbot"):
messages.append(event)
@ -356,56 +358,6 @@ class GPTBot:
# Reverse the list so that messages are in chronological order
return messages[::-1]
def _truncate(
self,
messages: list,
max_tokens: Optional[int] = None,
model: Optional[str] = None,
system_message: Optional[str] = None,
):
max_tokens = max_tokens or self.max_tokens
model = model or self.chat_api.chat_model
system_message = (
self.default_system_message if system_message is None else system_message
)
encoding = tiktoken.encoding_for_model(model)
total_tokens = 0
system_message_tokens = (
0 if not system_message else (len(encoding.encode(system_message)) + 1)
)
if system_message_tokens > max_tokens:
self.logger.log(
f"System message is too long to fit within token limit ({system_message_tokens} tokens) - cannot proceed",
"error",
)
return []
total_tokens += system_message_tokens
total_tokens = len(system_message) + 1
truncated_messages = []
for message in [messages[0]] + list(reversed(messages[1:])):
content = (
message["content"]
if isinstance(message["content"], str)
else (
message["content"][0]["text"]
if isinstance(message["content"][0].get("text"), str)
else ""
)
)
tokens = len(encoding.encode(content)) + 1
if total_tokens + tokens > max_tokens:
break
total_tokens += tokens
truncated_messages.append(message)
return [truncated_messages[0]] + list(reversed(truncated_messages[1:]))
async def _get_device_id(self) -> str:
"""Guess the device ID of the bot.
Requires an access token to be set up.
@ -458,7 +410,7 @@ class GPTBot:
except (Handover, StopProcessing):
raise
except KeyError as e:
except KeyError:
self.logger.log(f"Tool {tool} not found", "error")
return "Error: Tool not found"
@ -536,13 +488,31 @@ class GPTBot:
return (
(
user_id in self.allowed_users
or f"*:{user_id.split(':')[1]}" in self.allowed_users
or f"@*:{user_id.split(':')[1]}" in self.allowed_users
or (
(
f"*:{user_id.split(':')[1]}" in self.allowed_users
or f"@*:{user_id.split(':')[1]}" in self.allowed_users
)
if not user_id.startswith("!") or user_id.startswith("#")
else False
)
)
if self.allowed_users
else True
)
def room_is_allowed(self, room_id: str) -> bool:
"""Check if everyone in a room is allowed to use the bot.
Args:
room_id (str): The room ID to check.
Returns:
bool: Whether everyone in the room is allowed to use the bot.
"""
# TODO: Handle published aliases
return self.user_is_allowed(room_id)
async def event_callback(self, room: MatrixRoom, event: Event):
"""Callback for events.
@ -554,7 +524,9 @@ class GPTBot:
if event.sender == self.matrix_client.user_id:
return
if not self.user_is_allowed(event.sender):
if not (
self.user_is_allowed(event.sender) or self.room_is_allowed(room.room_id)
):
if len(room.users) == 2:
await self.matrix_client.room_send(
room.room_id,
@ -566,7 +538,7 @@ class GPTBot:
)
return
task = asyncio.create_task(self._event_callback(room, event))
asyncio.create_task(self._event_callback(room, event))
def room_uses_timing(self, room: MatrixRoom):
"""Check if a room uses timing.
@ -594,7 +566,7 @@ class GPTBot:
await callback(response, self)
async def response_callback(self, response: Response):
task = asyncio.create_task(self._response_callback(response))
asyncio.create_task(self._response_callback(response))
async def accept_pending_invites(self):
"""Accept all pending invites."""
@ -701,7 +673,7 @@ class GPTBot:
"url": content_uri,
}
status = await self.matrix_client.room_send(room, "m.room.message", content)
await self.matrix_client.room_send(room, "m.room.message", content)
self.logger.log("Sent image", "debug")
@ -735,7 +707,7 @@ class GPTBot:
"url": content_uri,
}
status = await self.matrix_client.room_send(room, "m.room.message", content)
await self.matrix_client.room_send(room, "m.room.message", content)
self.logger.log("Sent file", "debug")
@ -1128,7 +1100,10 @@ class GPTBot:
return
try:
last_messages = await self._last_n_messages(room.room_id, self.max_messages)
last_messages = await self._last_n_messages(
room.room_id, self.chat_api.max_messages
)
self.logger.log(f"Last messages: {last_messages}", "debug")
except Exception as e:
self.logger.log(f"Error getting last messages: {e}", "error")
await self.send_message(
@ -1138,141 +1113,8 @@ class GPTBot:
system_message = self.get_system_message(room)
chat_messages = [{"role": "system", "content": system_message}]
last_messages = last_messages + [event]
for message in last_messages:
if isinstance(message, (RoomMessageNotice, RoomMessageText)):
role = (
"assistant"
if message.sender == self.matrix_client.user_id
else "user"
)
if message == event or (not message.event_id == event.event_id):
message_body = (
message.body
if not self.chat_api.supports_chat_images()
else [{"type": "text", "text": message.body}]
)
chat_messages.append({"role": role, "content": message_body})
elif isinstance(message, RoomMessageAudio) or (
isinstance(message, RoomMessageFile) and message.body.endswith(".mp3")
):
role = (
"assistant"
if message.sender == self.matrix_client.user_id
else "user"
)
if message == event or (not message.event_id == event.event_id):
if self.room_uses_stt(room):
try:
download = await self.download_file(message.url)
message_text = await self.stt_api.speech_to_text(
download.body
)
except Exception as e:
self.logger.log(
f"Error generating text from audio: {e}", "error"
)
message_text = message.body
else:
message_text = message.body
message_body = (
message_text
if not self.chat_api.supports_chat_images()
else [{"type": "text", "text": message_text}]
)
chat_messages.append({"role": role, "content": message_body})
elif isinstance(message, RoomMessageFile):
try:
download = await self.download_file(message.url)
if download:
try:
text = download.body.decode("utf-8")
except UnicodeDecodeError:
text = None
if text:
role = (
"assistant"
if message.sender == self.matrix_client.user_id
else "user"
)
if message == event or (
not message.event_id == event.event_id
):
message_body = (
text
if not self.chat_api.supports_chat_images()
else [{"type": "text", "text": text}]
)
chat_messages.append(
{"role": role, "content": message_body}
)
except Exception as e:
self.logger.log(f"Error generating text from file: {e}", "error")
message_body = (
message.body
if not self.chat_api.supports_chat_images()
else [{"type": "text", "text": message.body}]
)
chat_messages.append({"role": "system", "content": message_body})
elif self.chat_api.supports_chat_images() and isinstance(
message, RoomMessageImage
):
try:
image_url = message.url
download = await self.download_file(image_url)
if download:
encoded_url = f"data:{download.content_type};base64,{base64.b64encode(download.body).decode('utf-8')}"
parent = (
chat_messages[-1]
if chat_messages
and chat_messages[-1]["role"]
== (
"assistant"
if message.sender == self.matrix_client.user_id
else "user"
)
else None
)
if not parent:
chat_messages.append(
{
"role": (
"assistant"
if message.sender == self.matrix_client.user_id
else "user"
),
"content": [],
}
)
parent = chat_messages[-1]
parent["content"].append(
{"type": "image_url", "image_url": {"url": encoded_url}}
)
except Exception as e:
self.logger.log(f"Error generating image from file: {e}", "error")
message_body = (
message.body
if not self.chat_api.supports_chat_images()
else [{"type": "text", "text": message.body}]
)
chat_messages.append({"role": "system", "content": message_body})
# Truncate messages to fit within the token limit
truncated_messages = self._truncate(
chat_messages[1:], self.max_tokens - 1, system_message=system_message
chat_messages = await self.chat_api.prepare_messages(
event, last_messages, system_message
)
# Check for a model override
@ -1317,12 +1159,19 @@ class GPTBot:
await self.send_message(
room, "Something went wrong generating audio file.", True
)
if self.debug:
await self.send_message(
room, f"Error: {e}\n\n```\n{traceback.format_exc()}\n```", True
)
message = await self.send_message(room, response)
await self.send_message(room, response)
await self.matrix_client.room_typing(room.room_id, False)
async def download_file(self, mxc) -> Optional[bytes]:
async def download_file(
self, mxc: str, raise_error: bool = False
) -> Union[DiskDownloadResponse, MemoryDownloadResponse]:
"""Download a file from the homeserver.
Args:
@ -1336,6 +1185,8 @@ class GPTBot:
if isinstance(download, DownloadError):
self.logger.log(f"Error downloading file: {download.message}", "error")
if raise_error:
raise DownloadException(download.message)
return
return download

View file

@ -0,0 +1,2 @@
class DownloadException(Exception):
pass

View file

@ -1,9 +1,8 @@
import trackingmore
import requests
from .logging import Logger
from typing import Dict, List, Tuple, Generator, Optional
from typing import Tuple, Optional
class TrackingMore:
api_key: str

View file

@ -3,7 +3,7 @@ import requests
from .logging import Logger
from typing import Dict, List, Tuple, Generator, Optional
from typing import Generator, Optional
class WolframAlpha:
api_key: str

View file

@ -3,21 +3,16 @@ from nio.rooms import MatrixRoom
async def command_botinfo(room: MatrixRoom, event: RoomMessageText, bot):
logging("Showing bot info...")
bot.logger.log("Showing bot info...")
body = f"""GPT Info:
body = f"""GPT Room info:
Model: {bot.model}
Maximum context tokens: {bot.max_tokens}
Maximum context messages: {bot.max_messages}
Room info:
Bot user ID: {bot.matrix_client.user_id}
Current room ID: {room.room_id}
Model: {await bot.get_room_model(room)}\n
Maximum context tokens: {bot.chat_api.max_tokens}\n
Maximum context messages: {bot.chat_api.max_messages}\n
Bot user ID: {bot.matrix_client.user_id}\n
Current room ID: {room.room_id}\n
System message: {bot.get_system_message(room)}
For usage statistics, run !gptbot stats
"""
await bot.send_message(room, body, True)

View file

@ -23,14 +23,12 @@ async def command_calculate(room: MatrixRoom, event: RoomMessageText, bot):
bot.logger.log("Querying calculation API...")
for subpod in bot.calculation_api.generate_calculation_response(prompt, text, results_only, user=room.room_id):
bot.logger.log(f"Sending subpod...")
bot.logger.log("Sending subpod...")
if isinstance(subpod, bytes):
await bot.send_image(room, subpod)
else:
await bot.send_message(room, subpod, True)
bot.log_api_usage(event, room, f"{bot.calculation_api.api_code}-{bot.calculation_api.calculation_api}", tokens_used)
return
await bot.send_message(room, "You need to provide a prompt.", True)

View file

@ -9,7 +9,7 @@ async def command_dice(room: MatrixRoom, event: RoomMessageText, bot):
try:
sides = int(event.body.split()[2])
except ValueError:
except (ValueError, IndexError):
sides = 6
if sides < 2:

View file

@@ -8,18 +8,17 @@ async def command_help(room: MatrixRoom, event: RoomMessageText, bot):
- !gptbot help - Show this message
- !gptbot botinfo - Show information about the bot
- !gptbot privacy - Show privacy information
-- !gptbot newroom \<room name\> - Create a new room and invite yourself to it
-- !gptbot stats - Show usage statistics for this room
-- !gptbot systemmessage \<message\> - Get or set the system message for this room
+- !gptbot newroom <room name> - Create a new room and invite yourself to it
+- !gptbot systemmessage <message> - Get or set the system message for this room
- !gptbot space [enable|disable|update|invite] - Enable, disable, force update, or invite yourself to your space
- !gptbot coin - Flip a coin (heads or tails)
- !gptbot dice [number] - Roll a dice with the specified number of sides (default: 6)
-- !gptbot imagine \<prompt\> - Generate an image from a prompt
-- !gptbot calculate [--text] [--details] \<query\> - Calculate a result to a calculation, optionally forcing text output instead of an image, and optionally showing additional details like the input interpretation
-- !gptbot chat \<message\> - Send a message to the chat API
-- !gptbot classify \<message\> - Classify a message using the classification API
-- !gptbot custom \<message\> - Used for custom commands handled by the chat model and defined through the room's system message
-- !gptbot roomsettings [use_classification|use_timing|always_reply|system_message|tts] [true|false|\<message\>] - Get or set room settings
+- !gptbot imagine <prompt> - Generate an image from a prompt
+- !gptbot calculate [--text] [--details] <query> - Calculate a result to a calculation, optionally forcing text output instead of an image, and optionally showing additional details like the input interpretation
+- !gptbot chat <message> - Send a message to the chat API
+- !gptbot classify <message> - Classify a message using the classification API
+- !gptbot custom <message> - Used for custom commands handled by the chat model and defined through the room's system message
+- !gptbot roomsettings [use_classification|use_timing|always_reply|system_message|tts] [true|false|<message>] - Get or set room settings
- !gptbot ignoreolder - Ignore messages before this point as context
"""

View file

@@ -16,7 +16,7 @@ async def command_imagine(room: MatrixRoom, event: RoomMessageText, bot):
return
for image in images:
-bot.logger.log(f"Sending image...")
+bot.logger.log("Sending image...")
await bot.send_image(room, image)
bot.log_api_usage(event, room, f"{bot.image_api.api_code}-{bot.image_api.image_model}", tokens_used)

View file

@@ -13,7 +13,7 @@ async def command_newroom(room: MatrixRoom, event: RoomMessageText, bot):
if isinstance(new_room, RoomCreateError):
bot.logger.log(f"Failed to create room: {new_room.message}")
-await bot.send_message(room, f"Sorry, I was unable to create a new room. Please try again later, or create a room manually.", True)
+await bot.send_message(room, "Sorry, I was unable to create a new room. Please try again later, or create a room manually.", True)
return
bot.logger.log(f"Inviting {event.sender} to new room...")
@@ -21,7 +21,7 @@ async def command_newroom(room: MatrixRoom, event: RoomMessageText, bot):
if isinstance(invite, RoomInviteError):
bot.logger.log(f"Failed to invite user: {invite.message}")
-await bot.send_message(room, f"Sorry, I was unable to invite you to the new room. Please try again later, or create a room manually.", True)
+await bot.send_message(room, "Sorry, I was unable to invite you to the new room. Please try again later, or create a room manually.", True)
return
with closing(bot.database.cursor()) as cursor:
@@ -43,4 +43,4 @@ async def command_newroom(room: MatrixRoom, event: RoomMessageText, bot):
await bot.matrix_client.joined_rooms()
await bot.send_message(room, f"Alright, I've created a new room called '{room_name}' and invited you to it. You can find it at {new_room.room_id}", True)
-await bot.send_message(bot.matrix_client.rooms[new_room.room_id], f"Welcome to the new room! What can I do for you?")
+await bot.send_message(bot.matrix_client.rooms[new_room.room_id], "Welcome to the new room! What can I do for you?")

View file

@@ -11,7 +11,7 @@ async def command_privacy(room: MatrixRoom, event: RoomMessageText, bot):
body += "- For chat requests: " + f"{bot.chat_api.operator}" + "\n"
if bot.image_api:
body += "- For image generation requests (!gptbot imagine): " + f"{bot.image_api.operator}" + "\n"
-if bot.calculate_api:
-body += "- For calculation requests (!gptbot calculate): " + f"{bot.calculate_api.operator}" + "\n"
+if bot.calculation_api:
+body += "- For calculation requests (!gptbot calculate): " + f"{bot.calculation_api.operator}" + "\n"
await bot.send_message(room, body, True)
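The old branch referenced `bot.calculate_api`, but the attribute used elsewhere in the codebase is `calculation_api`, so the privacy command would raise `AttributeError` whenever it reached this point (assuming the bot class defines no fallback `__getattr__`). A minimal repro of the failure mode:

```python
class Bot:
    calculation_api = None  # the attribute the bot actually defines

bot = Bot()
print(hasattr(bot, "calculation_api"))  # True
print(hasattr(bot, "calculate_api"))    # False: `if bot.calculate_api:` raised AttributeError
```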

View file

@@ -114,7 +114,7 @@ async def command_roomsettings(room: MatrixRoom, event: RoomMessageText, bot):
await bot.send_message(room, f"The current chat model is: '{value}'.", True)
return
-message = f"""The following settings are available:
+message = """The following settings are available:
- system_message [message]: Get or set the system message to be sent to the chat model
- classification [true/false]: Get or set whether the room uses classification

View file

@@ -120,7 +120,7 @@ async def command_space(room: MatrixRoom, event: RoomMessageText, bot):
if isinstance(response, RoomInviteError):
bot.logger.log(
-f"Failed to invite user {user} to space {space}", "error")
+f"Failed to invite user {event.sender} to space {space}", "error")
await bot.send_message(
room, "Sorry, I couldn't invite you to the space. Please try again later.", True)
return

View file

@@ -5,16 +5,30 @@ from contextlib import closing
async def command_stats(room: MatrixRoom, event: RoomMessageText, bot):
+await bot.send_message(
+room,
+"The `stats` command is no longer supported. Sorry for the inconvenience.",
+True,
+)
+return
+# Yes, the code below is unreachable, but it's kept here for reference.
bot.logger.log("Showing stats...")
if not bot.database:
bot.logger.log("No database connection - cannot show stats")
-bot.send_message(room, "Sorry, I'm not connected to a database, so I don't have any statistics on your usage.", True)
-return
+await bot.send_message(
+room,
+"Sorry, I'm not connected to a database, so I don't have any statistics on your usage.",
+True,
+)
+return
with closing(bot.database.cursor()) as cursor:
cursor.execute(
-"SELECT SUM(tokens) FROM token_usage WHERE room_id = ?", (room.room_id,))
+"SELECT SUM(tokens) FROM token_usage WHERE room_id = ?", (room.room_id,)
+)
total_tokens = cursor.fetchone()[0] or 0
-bot.send_message(room, f"Total tokens used: {total_tokens}", True)
+await bot.send_message(room, f"Total tokens used: {total_tokens}", True)

View file

@@ -15,7 +15,7 @@ async def command_tts(room: MatrixRoom, event: RoomMessageText, bot):
await bot.send_message(room, "Sorry, I couldn't generate an audio file. Please try again later.", True)
return
-bot.logger.log(f"Sending audio file...")
+bot.logger.log("Sending audio file...")
await bot.send_file(room, content, "audio.mp3", "audio/mpeg", "m.audio")
return

View file

@@ -22,7 +22,7 @@ def get_version(db: SQLiteConnection) -> int:
try:
return int(db.execute("SELECT MAX(id) FROM migrations").fetchone()[0])
-except:
+except Exception:
return 0
def migrate(db: SQLiteConnection, from_version: Optional[int] = None, to_version: Optional[int] = None) -> None:
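Swapping the bare `except:` for `except Exception:` keeps the version fallback from swallowing `KeyboardInterrupt` and `SystemExit`, which derive from `BaseException` rather than `Exception`:

```python
def catches(handler: type, exc: type) -> bool:
    try:
        raise exc()
    except handler:
        return True
    except BaseException:
        return False

print(catches(Exception, ValueError))         # True: ordinary errors still hit the fallback
print(catches(Exception, KeyboardInterrupt))  # False: Ctrl-C now propagates instead
```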

View file

@@ -1,7 +1,5 @@
# Migration for Matrix Store - No longer used
-from datetime import datetime
-from contextlib import closing
def migration(conn):
pass

View file

@@ -1,6 +1,6 @@
from importlib import import_module
-from .base import BaseTool, StopProcessing, Handover
+from .base import BaseTool, StopProcessing, Handover  # noqa: F401
TOOLS = {}
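The three names are imported only so other modules can import them from the package; linters report that as F401 (imported but unused), and the `# noqa: F401` marks them as intentional re-exports. A hypothetical consumer (the exact package path is an assumption):

```python
# elsewhere in the codebase; `tools` as the import path is assumed
from tools import BaseTool, StopProcessing, Handover
```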

View file

@@ -1,4 +1,4 @@
-from .base import BaseTool, Handover
+from .base import BaseTool
class Imagedescription(BaseTool):
DESCRIPTION = "Describe the content of the images in the conversation."

View file

@@ -47,7 +47,7 @@ class Newroom(BaseTool):
await self.bot.add_rooms_to_space(space[0], [new_room.room_id])
if self.bot.logo_uri:
-await self.bot.matrix_client.room_put_state(room, "m.room.avatar", {
+await self.bot.matrix_client.room_put_state(new_room, "m.room.avatar", {
"url": self.bot.logo_uri
}, "")

View file

@@ -17,6 +17,10 @@ class Weather(BaseTool):
"type": "string",
"description": "The longitude of the location.",
},
+"name": {
+"type": "string",
+"description": "A location name to include in the report. This is optional, latitude and longitude are always required."
+}
},
"required": ["latitude", "longitude"],
}
@@ -26,6 +30,8 @@ class Weather(BaseTool):
if not (latitude := self.kwargs.get("latitude")) or not (longitude := self.kwargs.get("longitude")):
raise Exception('No location provided.')
+name = self.kwargs.get("name")
weather_api_key = self.bot.config.get("OpenWeatherMap", "APIKey")
if not weather_api_key:
@@ -37,7 +43,7 @@ class Weather(BaseTool):
async with session.get(url) as response:
if response.status == 200:
data = await response.json()
-return f"""**Weather report**
+return f"""**Weather report{f" for {name}" if name else ""}**
Current: {data['current']['temp']}°C, {data['current']['weather'][0]['description']}
Feels like: {data['current']['feels_like']}°C
Humidity: {data['current']['humidity']}%
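`name` is deliberately absent from the schema's `required` list, so the model may omit it, and the nested f-string only appends ` for {name}` to the heading when it was supplied. The header expression behaves like this:

```python
def report_header(name: str | None) -> str:
    return f"""**Weather report{f" for {name}" if name else ""}**"""

print(report_header("Vienna"))  # **Weather report for Vienna**
print(report_header(None))      # **Weather report**
```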