Added a debug-level log statement that captures the final set of messages before they are returned, making it easier to trace message processing and debug issues in the message truncation logic.
Wraps `system_message_dict` in a list to address a logic error in message concatenation. This ensures that the message list is correctly formatted, preventing potential runtime issues when processing the message sequence.
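A minimal illustration of the fix, with placeholder message values:

```python
# Placeholder messages for illustration only
system_message_dict = {"role": "system", "content": "You are a helpful assistant."}
truncated_messages = [{"role": "user", "content": "Hello!"}]

# A dict cannot be concatenated with a list; wrapping the system message in a
# single-element list yields a properly formatted message sequence.
messages = [system_message_dict] + truncated_messages
```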
Updated message truncation logic to correctly return a system message dictionary and adjust token calculations. Improved model encoding fallback strategy to utilize "gpt-4o" instead of "gpt-3.5-turbo" for greater compatibility. This addresses message mishandling and ensures more robust operation.
This also provides more sensible error handling when an encoding cannot be determined for the configured model.
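A minimal sketch of the fallback, assuming tiktoken is used for token counting and that the installed version knows the gpt-4o encoding:

```python
import tiktoken

def get_encoding(model: str) -> tiktoken.Encoding:
    """Return the tokenizer for the given model, falling back to gpt-4o's encoding."""
    try:
        return tiktoken.encoding_for_model(model)
    except KeyError:
        # Unknown or third-party model name: assume the gpt-4o tokenizer
        return tiktoken.encoding_for_model("gpt-4o")
```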
Updated the project version from 0.3.19 to 0.3.20 to reflect the latest changes and keep version tracking current.
Updated the method to accept a room parameter and to process messages only when a room is provided. This prevents exceptions when attempting to download and process media files without room context, improving stability.
Adjust exception handling to catch both ValueError and IndexError. This ensures the command gracefully defaults to 6 sides when input parameters are insufficient or improperly formatted. Improves robustness against user errors.
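A sketch of the argument parsing; the argument layout is an assumption:

```python
def parse_sides(args: list[str]) -> int:
    """Return the requested number of die sides, defaulting to 6."""
    try:
        return int(args[0])  # assumed position of the sides argument
    except (ValueError, IndexError):
        # No argument given, or not a number: default to a six-sided die
        return 6
```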
Added fallback values for Matrix 'Password' and 'UserID' config checks to prevent exceptions when these keys are not present. This ensures smoother handling of missing configurations.
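Roughly, using configparser's fallback behavior (the exact section layout is an assumption):

```python
from configparser import ConfigParser

config = ConfigParser()
config.read("config.ini")  # assumes a [Matrix] section is present

# SectionProxy.get() returns the fallback instead of raising when a key is missing
password = config["Matrix"].get("Password", fallback=None)
user_id = config["Matrix"].get("UserID", fallback=None)
```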
Updated the bot info command to display model information specific to the room.
Retired the now-unsupported 'stats' command: removed it from the help and privacy texts, and invoking it now informs users of its deprecation.
Updated version to 0.3.18 to reflect these changes.
Appended the event to the incoming messages list so that it is actually processed. This addresses cases where events were previously overlooked, which could lead to incomplete or incorrect processing.
Updated the project version to 0.3.16 to prepare for the next release. This includes recent bug fixes and minor improvements. Ensure the updated version is reflected across all relevant documentation and deployment scripts.
Added debug logging to capture incoming, prepared, and truncated messages in the OpenAI class. Also added logging for the last messages fetched in the bot class. These additions aid in tracing and debugging message flows and processing errors.
Additionally, an option to log detailed error tracebacks in debug mode was implemented to facilitate better error analysis.
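A rough sketch of the traceback option; the logger name and debug flag are assumptions:

```python
import logging
import traceback

logger = logging.getLogger("gptbot")
DEBUG = True  # assumed configuration flag

def safe_call(func, *args):
    """Run a processing step, logging the full traceback only in debug mode."""
    try:
        return func(*args)
    except Exception as e:
        logger.error(f"Error while processing message: {e}")
        if DEBUG:
            # Keep normal logs concise; the full traceback is debug-only
            logger.debug(traceback.format_exc())
```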
Updated the reference to max_tokens in the truncation call from `self.chat_api.max_tokens` to `self.max_tokens`, ensuring the correct token limit is applied. This resolves potential issues with message length handling.
Introduced the ability to handle video files as input for AI models that support them, broadening the bot's media-processing capabilities. A new configuration option enables or disables video input to match differing model capabilities. Also integrated Google's Generative AI by adding a Google dependency and a corresponding AI class implementation, giving users more choice of AI backend. The update refactors and simplifies message preparation and handling to keep existing functionality compatible while adding the new video input feature and Google AI support (a sketch of the unified preparation step follows the list below).
- Added `ForceVideoInput` configuration option to toggle video file processing.
- Integrated Google Generative AI as an optional dependency and included it in the bot's AI choices.
- Implemented a unified method for preparing messages for AI processing, streamlining how the bot handles various message types.
- Removed obsolete code related to message truncation and specialized handling for images, files, and audio, reflecting a shift towards a more flexible and generalized message processing approach.
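A hypothetical shape of the unified preparation step; the event types, the ForceVideoInput flag, and the generic content-part format are assumptions, not the project's actual API:

```python
def prepare_message_part(msgtype: str, body: str, media: bytes | None,
                         force_video_input: bool) -> dict:
    """Map a Matrix event to a generic content part for the selected AI backend."""
    # NOTE: event types and the content-part layout here are assumptions
    if msgtype == "m.video" and not force_video_input:
        # Video input disabled or unsupported by the model: keep the text body only
        return {"type": "text", "text": body}
    if media is not None and msgtype in ("m.image", "m.video", "m.audio", "m.file"):
        return {"type": "media", "msgtype": msgtype, "data": media}
    return {"type": "text", "text": body}
```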
Enhanced the OpenAI class to better support diverse message types in chat interactions, including image and video processing. This update introduces several key improvements:
- Added handling for image and video messages, converting them to a format compatible with the OpenAI API.
- Implemented a new method to prepare messages for OpenAI, allowing for richer interaction by including media content directly within the chat.
- Incorporated message truncation to adhere to token limits, ensuring efficient usage of OpenAI's API without sacrificing message content.
- Extended support for additional message types, such as audio and file messages, with specialized processing for each category.
This change allows more dynamic, multimedia-rich interactions in line with modern chat functionality. It also avoids exceeding token limits and ensures smoother integration of different message formats into the chat flow (a sketch of the image conversion follows below).
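For the image handling described above, the conversion into an OpenAI-compatible content part might look roughly like this; the data-URL form follows OpenAI's documented vision input format, while the helper itself is an assumption:

```python
import base64

def image_to_content_part(image_bytes: bytes, mime: str = "image/jpeg") -> dict:
    """Wrap raw image bytes as a base64 data-URL content part for the chat API."""
    # Helper name is illustrative; the image_url/data-URL layout is OpenAI's vision format
    encoded = base64.b64encode(image_bytes).decode("utf-8")
    return {
        "type": "image_url",
        "image_url": {"url": f"data:{mime};base64,{encoded}"},
    }
```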
Improved the login logic in the bot's initialization process to require a UserID whenever a Password is provided. The presence of a UserID is now validated before attempting to log in, and LoginError is handled explicitly with a clear error message, avoiding silent failures and improving debuggability during the login phase.
- Added LoginError import to handle login-related exceptions more gracefully.
- Refined the login process to create the AsyncClient instance with a UserID when password authentication is used, following best practices for client identification.
- Introduced explicit error raising for missing UserID configuration, enhancing configuration validation before attempting a login.
- Improved clarity and security by clearing the password from the configuration post-login, preventing inadvertent storage or reuse.
This update strengthens the bot's configuration validation and error handling during initialization (a sketch of the login flow follows below).
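A minimal sketch of the login flow with matrix-nio, assuming the configuration values are already loaded; function name and error strategy are illustrative:

```python
from nio import AsyncClient, LoginError

async def initialize_client(homeserver: str, user_id: str | None,
                            password: str | None, access_token: str | None) -> AsyncClient:
    """Create and authenticate the client; password login requires a UserID."""
    # Sketch only: the real initialization lives in the bot class
    if password:
        if not user_id:
            raise ValueError("Matrix UserID is required when a Password is configured")
        client = AsyncClient(homeserver, user=user_id)
        response = await client.login(password)
        if isinstance(response, LoginError):
            raise RuntimeError(f"Login failed: {response.message}")
        return client
    # Token-based login: no password round-trip needed
    client = AsyncClient(homeserver)
    client.access_token = access_token
    return client
```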
Introduced `DownloadException` to improve error reporting and handling when file downloads fail. The `download_file` method now accepts a `raise_error` flag which, when set, raises `DownloadException` on a download error instead of only logging it. This lets the bot respond with a specific error message to the room if a download fails while processing speech-to-text, file messages, or image messages, improving user feedback on download failures.
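A sketch of the flag's behavior using matrix-nio's download API; the real method lives on the bot class and its exact signature is an assumption:

```python
from nio import AsyncClient, DownloadResponse

class DownloadException(Exception):
    """Raised when a Matrix media download fails and raise_error is set."""

async def download_file(client: AsyncClient, mxc_url: str,
                        raise_error: bool = False) -> bytes | None:
    """Return the file contents, or None (or raise) when the download fails."""
    # Sketch: signature and logging strategy are assumptions
    response = await client.download(mxc_url)
    if isinstance(response, DownloadResponse):
        return response.body
    if raise_error:
        raise DownloadException(f"Could not download {mxc_url}: {response}")
    return None
```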
Introduced changes to tool request behavior and image processing. The configuration now allows a dedicated model for tool requests (`ToolModel`), and context images are automatically resized to a maximum dimension, improving compatibility and performance with the AI model. The update moves away from a rigid tool model requirement, accommodating varied model support for tool requests, and optimizes image handling for network and processing efficiency.
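The resizing might look roughly like this with Pillow; the maximum dimension is an assumed value:

```python
from io import BytesIO
from PIL import Image

MAX_SIZE = 2000  # assumed maximum edge length in pixels

def shrink_image(data: bytes) -> bytes:
    """Return the image downscaled so neither edge exceeds MAX_SIZE."""
    image = Image.open(BytesIO(data))
    if max(image.size) <= MAX_SIZE:
        return data
    image.thumbnail((MAX_SIZE, MAX_SIZE))  # preserves aspect ratio
    buffer = BytesIO()
    image.save(buffer, format=image.format or "PNG")
    return buffer.getvalue()
```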
Updated the exception handling in the migration logic to catch `Exception` explicitly instead of using a bare except clause. This keeps the migration process from silently swallowing interpreter-level exceptions such as `SystemExit` and `KeyboardInterrupt`, keeps error handling as narrow as practical, and makes unexpected errors easier to debug rather than masking them.
Switched to using the bot's centralized logging mechanism for bot info commands, enhancing consistency across the application. This change ensures that all log messages go through the same process, potentially simplifying future debugging and logging enhancements.
Refined exception handling in the OpenAI response parsing by catching `Exception` instead of using a bare except. A bare except also traps interpreter-level exceptions such as `SystemExit` and `KeyboardInterrupt`; limiting the clause to `Exception` keeps error handling to anticipated failures, making debugging easier and the error management more robust.
Resolved incorrect variable usage in the join_callback function that affected the mapping of new rooms to the correct spaces. Previously, `event.sender` was mistakenly used, leading to potential mismatches when identifying the correct user and room IDs for space assignments. The response object's `sender` and `room_id` properties are now used, aligning room additions with the intended user spaces.
Refactored the invite handling process within the invite callback for better consistency and maintainability. Swapped out a basic logging function with the bot's standardized logger for improved logging consistency across the application. Additionally, simplified the room joining process by removing redundant response handling, thus enhancing code readability and maintainability. These changes aim to unify the logging approach within the bot and ensure smoother invite processing without altering the underlying functionality.
Adjusted import statements in `tools.__init__.py` to silence linting warnings regarding unused imports. This emphasizes that `BaseTool`, `StopProcessing`, and `Handover` are intentionally imported for export purposes, despite not being directly referenced. This change aids in maintaining cleaner code and reduces confusion around import intentions.
No functional impact or changes in behavior are expected as a result of this refactor.
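The re-export pattern, sketched below; the actual source module path is an assumption:

```python
# tools/__init__.py (sketch; the actual module path may differ)
# Re-exported for package consumers; the noqa marker silences "unused import"
from .base import BaseTool, StopProcessing, Handover  # noqa: F401
```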
This commit streamlines the `callbacks` module by removing the debugging- and testing-related callbacks (`test_callback` and `test_response_callback`) along with their associated imports. Eliminating these unused code paths keeps only operational code in production paths, making the callback handling clearer, easier to maintain, and less confusing for new contributors. Existing functionality is unaffected.
Improved code readability by formatting multiline log statements and adding missing line breaks in conditional blocks. Adopted more robust error handling by catching general exceptions during encoding determination. Removed redundant variable assignments for async tasks, invoking `asyncio.create_task()` directly for event handling and response callbacks. Simplified message and file sending routines by dropping unnecessary status assignments, and streamlined the message truncation logic by discarding the unused return value in favor of the in-place operation that enforces the token limit. Together these changes make the codebase cleaner, more maintainable, and more efficient.
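The task-scheduling change amounts to this pattern; handler names are illustrative:

```python
import asyncio

async def handle_event(event: dict) -> None:
    ...  # existing response handling (illustrative name)

async def on_event(event: dict) -> None:
    # Called from within the running event loop; schedule the handler
    # directly instead of binding the Task to an unused variable.
    asyncio.create_task(handle_event(event))
```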
Removed an unnecessary call to `log_api_usage` after command execution in the calculate command handler. This change eliminates redundant logging that didn't contribute valuable insights and led to clutter in log files, streamlining the process and potentially improving performance by reducing I/O operations.
Updated the logger to use `event.sender` instead of an undefined `user` variable when logging a failed space invitation, ensuring the correct information is logged. This change addresses a bug where the wrong variable was referenced, potentially causing confusion when diagnosing issues with space invites.
Removed the unused `json` import from the AI base module in gptbot. This cleanup improves readability and removes unnecessary import overhead without affecting the module's external behavior.
Fixed an issue where the bot attempted to set the avatar for the wrong room when creating a new room. The avatar is now correctly assigned to the newly created room instead of the incorrectly referenced room variable. This ensures that newly created rooms properly display the intended logo from the start, improving the user experience by maintaining consistent branding across rooms.
Enhanced project visibility and accessibility by including new badges in the README. These additions are aimed at providing quick links to the package on PyPI, showing the supported Python versions, license information, and the latest Git commit status. These enhancements make it easier for users and contributors to find important project details, contributing to a more open and engaging community.
This change underscores our commitment to transparency and support for the development community.
Removed redundant Docker CI/CD workflow for the 'latest' tag and integrated its functionality into the existing tagging workflow. This change not only reduces the redundancy of having separate workflows for 'latest' and version-specific tags but also simplifies the CI/CD process by having a single, unified workflow for Docker image publications. Moving forward, every push will now ensure that the 'latest' tag is updated alongside the version-specific tags, maintaining a smoother and more predictable deployment and versioning flow.
Added a Dependabot configuration to automate dependency updates for the Python package ecosystem. Dependabot will now check for updates on a daily basis, ensuring that our project dependencies remain up-to-date with the latest security patches and features without manual oversight. This proactive approach towards dependency management will aid in minimizing potential security vulnerabilities and compatibility issues, fostering a more secure and stable development environment.
Updated the README to strengthen the recommendation of using a virtual environment (venv) during installation. This adjustment aims to guide users towards best practices in Python environment management, potentially reducing common issues related to package dependencies and conflicts.
This commit simplifies the pyproject.toml structure for better readability and maintenance. Key changes include formatting author and license information, consolidating dependency lists into a more concise format, and adding the `future` package to dependencies to ensure forward-compatibility. Optional dependencies are now listed in a more compact style, and the development dependencies section has been cleaned up. These adjustments make the project configuration cleaner and more accessible, facilitating future updates and dependency management.
Corrected the default port for Pantalaimon from 8010 to 8009 in the README documentation. This change aligns the documentation with the latest Pantalaimon configuration standards, ensuring that users setting up their homeserver URL in the bot's config.ini file use the correct port. This update is crucial for new users during initial setup to avoid connectivity issues.
Updated the default listening port in pantalaimon.example.conf from 8010 to 8009. This alteration ensures compatibility with new network policies and avoids collision with commonly used ports in the default configuration. It's an important change for users setting up new instances, enabling smoother initial configurations without manual port adjustments.
Introduced the `ForceVision` configuration option to allow third-party models to be used for image recognition within the OpenAI setup, no longer restricting image processing to a predefined set of vision models. Also added missing properties to the `OpenAI` class to provide comprehensive control over the bot's behavior, including options for forcing vision and tool usage and for emulating tool capabilities in models that do not officially support them. These enhancements make the bot more adaptable to various models and user needs, especially for self-hosted setups.
Additionally, updated the documentation and incremented the version to 0.3.12 to reflect these changes and improvements.
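A sketch of how such properties could be wired to the configuration; the class, option, and model names are assumptions:

```python
from configparser import ConfigParser

# Assumed built-in vision-capable models; ForceVision widens this check
VISION_MODELS = ("gpt-4o", "gpt-4-vision-preview")

class OpenAIWrapper:
    """Sketch only; section and option names are assumptions."""

    def __init__(self, config: ConfigParser):
        self._config = config

    @property
    def chat_model(self) -> str:
        return self._config.get("OpenAI", "Model", fallback="gpt-4o")

    @property
    def force_vision(self) -> bool:
        return self._config.getboolean("OpenAI", "ForceVision", fallback=False)

    @property
    def supports_vision(self) -> bool:
        # Third-party or self-hosted models can opt in via ForceVision
        return self.force_vision or any(m in self.chat_model for m in VISION_MODELS)
```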
Refactored the handling of AI providers to support multiple AI services efficiently, introducing a `BaseAI` class from which all AI providers now inherit (a sketch of the interface follows below). This provides a more flexible and maintainable architecture for future expansions and enhancements.
- Adopted `gpt-4o` and `dall-e-3` as the default models for chat and image generation, respectively, aligning with the latest advancements in AI capabilities.
- Integrated `ruff` as a development dependency to enforce coding standards and improve code quality through consistent linting.
- Removed unused API keys and sections from `config.dist.ini` to streamline configuration management and clarify setup processes for new users.
- Updated the command line tool for improved usability and fixed previous issues preventing its effective operation.
- Enhanced OpenAI integration with advanced settings for temperature, top_p, frequency_penalty, and presence_penalty, enabling finer control over AI-generated outputs.
This comprehensive update not only enhances the bot's performance and usability but also lays the groundwork for incorporating additional AI providers, ensuring the project remains at the forefront of AI-driven chatbot technologies.
Resolves #13
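A minimal sketch of what such a base class could look like; the method names and signatures are assumptions, not the project's actual interface:

```python
from abc import ABC, abstractmethod

class BaseAI(ABC):
    """Common interface that concrete AI providers inherit from (sketch)."""

    def __init__(self, bot, config):
        self.bot = bot
        self.config = config

    @property
    @abstractmethod
    def chat_model(self) -> str:
        """Default chat model for this provider."""

    @abstractmethod
    async def generate_chat_response(self, messages: list, room=None, user=None):
        """Return the response text and the number of tokens used."""
```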
Refactored the main execution pathway to introduce a `main_sync` function that wraps the existing asynchronous `main` function, facilitating compatibility with environments that necessitate or prefer synchronous execution. This change enhances the bot's flexibility in various deployment scenarios without altering the core asynchronous functionality.
In addition, expanded the exception handling in `get_version` to catch all exceptions instead of limiting to `DistributionNotFound`. This broadens the robustness of version retrieval, ensuring the application can gracefully handle unexpected issues during version lookup.
Whitespace adjustments improve code readability by clearly separating function definitions.
These adjustments contribute to the maintainability and operability of the application, allowing for broader usage contexts and easier integration into diverse environments.
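A minimal sketch of the wrapper, assuming asyncio.run drives the existing coroutine:

```python
import asyncio

async def main() -> None:
    ...  # existing asynchronous entry point

def main_sync() -> None:
    """Synchronous wrapper so the bot can be launched from non-async contexts."""
    asyncio.run(main())  # assumed mechanism

if __name__ == "__main__":
    main_sync()
```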
This commit removes unnecessary imports across several modules, enhancing code readability and potentially improving performance. Notably, `KeysUploadError` and `requests` were removed where no longer used, reflecting a cleaner dependency structure. Furthermore, logging calls have been standardized, removing dynamic string generation in favor of static messages. This change not only makes the logs more consistent but also slightly reduces the computational overhead associated with log generation. The removal of unused type hints also contributes to a more focused and maintainable code base.
Additionally, the commit includes minor text adjustments for user messages, replacing dynamic content with fixed strings where the dynamism was not needed. This enhances both the clarity and security of user-directed messages by avoiding unnecessary string formatting operations.
Finally, the simplification of the migration script and the adjustment in the tools module underscore an ongoing effort to maintain clean and efficient code infrastructure.
Added `LogLevel` and `UseKeyring` configuration options to the example configuration file to provide users with more control over logging verbosity and the decision to utilize a system keyring for credentials storage. The LogLevel option allows for easier debugging by adjusting the verbosity of logs, whereas the UseKeyring option offers flexibility in credential management, catering to environments where a system keyring may not be preferred or available.
These changes enhance the tool's usability and adaptability to various user environments and debugging needs.
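Reading the new options might look like this; the section name and defaults are assumptions:

```python
import logging
from configparser import ConfigParser

config = ConfigParser()
config.read("config.ini")

# Assumed section and option names, mirroring the example configuration
log_level = config.get("GPTBot", "LogLevel", fallback="info")
logging.basicConfig(level=getattr(logging, log_level.upper(), logging.INFO))

use_keyring = config.getboolean("GPTBot", "UseKeyring", fallback=True)
```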
Enhanced bot flexibility by allowing room IDs in the allowed-users list, so access can be granted to entire rooms as well as individual users (a sketch of the check follows the list below). This gives more granular control over who can interact with the bot, which is particularly useful when usage should be restricted to specific rooms. The documentation and configuration have also been updated to reflect new AI models and self-hosted API support, and the README.md and config.dist.ini now offer clearer guidance on setup, configuration, and troubleshooting.
- Introduced the ability for room-specific bot access, enhancing user and room management flexibility.
- Expanded AI model support, including `gpt-4o` and `ollama`, increasing the bot's versatility and application scenarios.
- Updated Python version compatibility to 3.12 to ensure users are leveraging the latest language features and improvements.
- Improved troubleshooting documentation to assist users in resolving common issues more efficiently.
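A sketch of the broadened access check; the names and the empty-list behavior are assumptions:

```python
def is_allowed(sender: str, room_id: str, allowed: list[str]) -> bool:
    """Entries may be user IDs (@user:server) or room IDs (!room:server)."""
    if not allowed:
        return True  # assumed: an empty list leaves the bot open to everyone
    return sender in allowed or room_id in allowed
```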
Introduces logging for cases where OpenAI's API returns an empty response, ensuring that such occurrences are captured for debugging purposes. This change enhances visibility into the interaction with OpenAI's endpoint, facilitating easier identification and resolution of issues where empty responses are received, potentially indicating API limitations, network issues, or unexpected behavior from the AI model.
This update allows users to provide a location name for their weather reports, which can be useful when requesting weather information for specific locations.
Introduced additional checks to ensure robust error handling during user authentication, reducing the likelihood of errors propagating further down the pipeline. This improves overall stability and gives users more informative error messages when something goes wrong.
Renamed `pantalaimon_first_login.py` to `fetch_access_token.py` to better reflect its purpose. Additionally, updated README to remove obsolete instructions for using pantalaimon with the bot.