Enhanced event processing in the bot's message retrieval logic to
improve message relevance and accuracy. Changes include accepting all
'gptbot'-prefixed events, matching the 'ignoreolder' command exactly rather than by prefix, and passing through 'custom' commands that start with '!'. The default behavior now excludes
notices unless explicitly included. This update allows for more precise
command interactions and reduces clutter from irrelevant notices.
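A minimal sketch of a filter along these lines, assuming nio-style message events; the attribute names, the `include_notices` flag, and the helper names are illustrative rather than the bot's actual implementation:

```python
from nio import RoomMessageNotice


def include_in_context(event, include_notices: bool = False) -> bool:
    """Illustrative retrieval filter; the real rules live in the bot's message logic."""
    body = (getattr(event, "body", "") or "").strip()

    # All of the bot's own 'gptbot'-prefixed events are accepted.
    if body.startswith("!gptbot"):
        return True

    # Other '!'-prefixed messages are passed through as custom commands.
    if body.startswith("!"):
        return True

    # Notices are excluded unless explicitly requested.
    if isinstance(event, RoomMessageNotice):
        return include_notices

    return True


def is_ignoreolder_marker(event) -> bool:
    # The cutoff marker requires an exact match, not just a matching prefix.
    return (getattr(event, "body", "") or "").strip() == "!gptbot ignoreolder"
```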
Standardized the passing of the 'messages' argument across the various calls to the generate_chat_response method to ensure consistency and prevent
potential bugs in the GPT bot's response generation. The 'model'
parameter in one instance has been corrected to 'original_model' for
proper context loading. These changes improve code clarity and maintain
the intended message flow within the bot's conversation handling.
Refactored initialization of OpenAI APIs to correct a redundancy and
enhance clarity. Improved content extraction logic for robust handling
of various message types. Enhanced logging and messaging by including
user and room context, facilitating better debugging and user
interaction. Extended `send_message` to support custom message types,
allowing for richer interaction within the chat ecosystem. Updated
hardcoded chat models to leverage newer versions for potentially more
accurate tool overrides. Fixed async method call in recursion handling
to ensure proper response generation. Finally, increased message history
retrieval limit based on the `max_messages` attribute for more effective
conversation context.
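As an illustration of the custom message type support, a hedged sketch using matrix-nio's `room_send`; the parameter names and defaults are assumptions, not the bot's actual `send_message` signature:

```python
from nio import AsyncClient, MatrixRoom


async def send_message(client: AsyncClient, room: MatrixRoom | str,
                       body: str, msgtype: str = "m.text") -> None:
    """Send a message, letting callers override the Matrix msgtype."""
    room_id = room.room_id if isinstance(room, MatrixRoom) else room
    await client.room_send(
        room_id=room_id,
        message_type="m.room.message",
        content={
            "msgtype": msgtype,  # e.g. "m.notice" or a custom type
            "body": body,
        },
    )
```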
Resolves issues with message context and enhances user feedback during
operations.
Removed the '-dev' suffix from the project version, indicating the transition from a development state to the official release of version 0.3.3. This version bump aligns with the completion of the features and fixes slated for this iteration.
Updated the README to specify that issues with file attachments primarily occur in non-encrypted rooms when the same user operates the bot in both encrypted and non-encrypted rooms. This detail aims to more precisely inform users of potential pitfalls when setting up the bot with end-to-end encryption enabled.
Improved the error handling in the OpenAI class to prevent infinite recursion issues by retaining the original chat model during recursive calls. Enhanced logging within the recursion depth check for better debugging and traceability. Ensured consistency in chat responses by passing the initial model reference throughout the entire call stack. This is crucial when fallbacks due to errors or tool usage occur.
Refactored code for clarity and readability, ensuring that any recursion retains the original model and tool parameters. Additionally, proper logging and condition checks now standardize the flow of execution, preventing unintended modifications to the model's state that could lead to incorrect bot behavior.
Introduced the ability to specify and retrieve different OpenAI models on a per-room basis, allowing the bot's response behavior to be customized to each room's preferences. Cleaned up code formatting across the bot implementation files for improved readability and maintainability. Additional logic now checks for model overrides when generating responses, ensuring the correct model is used as configured.
Refactors include streamlined database and API initializations and a refined method for processing message formatting to accommodate images, texts, and system messages consistently. This change differentiates default behavior from room-specific configurations, catering to diverse user needs without compromising on default settings.
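One way such a per-room override lookup can be structured, assuming a simple `room_settings` table keyed by room ID and setting name (the schema shown is illustrative):

```python
import sqlite3


def room_model(db: sqlite3.Connection, room_id: str, default_model: str) -> str:
    """Return the model configured for a room, falling back to the default."""
    row = db.execute(
        "SELECT value FROM room_settings WHERE room_id = ? AND setting = 'model'",
        (room_id,),
    ).fetchone()
    return row[0] if row and row[0] else default_model
```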
Added a safety check to prevent infinite recursion within the response generation function. When `use_tools` is active, the code now inspects the call stack and terminates the process if a certain recursion depth is exceeded. This ensures that the code is robust against potential infinite loops that could block or crash the service. A default threshold is set with a TODO for revisiting the hard-coded limit, and the recursion detection logs the occurrence for easier debugging and maintenance.
Note: Recursion limit handling may require future adjustments to the `allow_override` parameter based on real-world feedback or testing.
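A minimal sketch of the stack-based guard described above, using Python's `inspect` module; the threshold value and the logging call are placeholders:

```python
import inspect
import logging

MAX_RECURSION_DEPTH = 5  # TODO: revisit this hard-coded limit


def recursion_depth_exceeded(function_name: str) -> bool:
    """Count how often the given function already appears on the call stack."""
    depth = sum(1 for frame in inspect.stack() if frame.function == function_name)
    if depth > MAX_RECURSION_DEPTH:
        logging.warning("Recursion depth %d exceeded in %s", depth, function_name)
        return True
    return False
```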
Introduced a configuration option to emulate tool usage in models that do not natively support tools, facilitating the use of such functionality without direct model support. This should benefit users aiming to leverage tool-based features without relying on specific AI models. Additionally, enhanced error logging in the GPTBot class by including traceback details, aiding in debugging and incident resolution.
- Added `EmulateTools` option in the `config.dist.ini` for flexible tool handling.
- Enriched error logging with stack traces in `bot.py` for better insight during failures.
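A rough sketch of how the option and the richer logging could fit together, assuming a standard `configparser` setup; the section and key names are assumptions rather than the exact contents of `config.dist.ini`:

```python
import configparser
import logging
import traceback

config = configparser.ConfigParser()
config.read("config.ini")

# Fall back to native tool support when the option is absent.
emulate_tools = config.getboolean("OpenAI", "EmulateTools", fallback=False)

try:
    ...  # call into the model / tool pipeline
except Exception as exc:
    # Include the full stack trace so failures are easier to pin down.
    logging.error("Error while processing event: %s\n%s", exc, traceback.format_exc())
```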
Introduced additional debug log entries in the `GPTBot` class to provide clarity on the initial sync and callback setup process. This helps with monitoring and troubleshooting during the early stages of bot deployment, making it easier to pinpoint issues around bot startup and room joining behavior.
Bumped project version to 0.3.3-dev to signal ongoing development.
Updated README to caution users about the current issues with end-to-end encryption, specifically that it can disrupt file uploads and downloads. The aim is to prevent user frustration and potential data loss until a fix is implemented.
Resolved an issue that prevented the bot from responding when files were uploaded to encrypted rooms by implementing a workaround. The bot now tries to generate text from uploaded files and logs errors without interrupting the message flow. Upgraded the Pantalaimon dependency to ensure compatibility. Also, refined the message processing logic to handle different message types correctly and made the download_file method asynchronous to match the matrix client's expected behavior. Additionally, updated the changelog and bumped the project version to reflect these fixes and improvements.
Known issues have been documented, including a limitation when using Pantalaimon where the bot cannot download/use files uploaded to encrypted rooms.
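A hedged sketch of an asynchronous `download_file`; the `download(mxc=...)` call and response types are assumptions about the matrix client API in use:

```python
import logging

from nio import DownloadResponse


async def download_file(self, mxc_uri: str) -> bytes | None:
    """Fetch a media file from the homeserver, logging errors instead of raising."""
    response = await self.matrix_client.download(mxc=mxc_uri)
    if isinstance(response, DownloadResponse):
        return response.body
    logging.error("Error downloading file: %s", response)
    return None
```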
Upgraded bot features to interpret and respond to text, image, and voice prompts in Matrix rooms using advanced OpenAI models, including vision preview and text-to-speech. Streamlined the installation process, with the bot now available via PyPI, simplifying setup and extending accessibility. Eliminated the planned features section, signaling a shift towards realized functionality over prospective development.
Configured Pantalaimon as an optional dependency to enable bot use in E2EE rooms while maintaining compatibility with non-encrypted rooms. Removed trackingmore dependency, indicating a refinement in the feature set towards core functionalities. Version bumped to 0.3.0, signifying major enhancements over previous iteration.
Introduced a new systemd service configuration for GPTbot to ensure Pantalaimon starts as a background process on system boot, maintaining persistent Matrix encryption handling. Ensures seamless restarts and network dependency management for improved reliability.
Integrated Pantalaimon support with updated configuration instructions and examples, facilitating secure communication when using the Matrix homeserver. The .gitignore is now extended to exclude a Pantalaimon configuration file, preventing sensitive information from accidental commits. Removed encryption callbacks and related functions as the application leverages Pantalaimon for E2EE, simplifying the codebase and shifting encryption responsibilities externally. Streamlined dependency management by removing the requirements.txt in favor of pyproject.toml, aligning with modern Python practices. This change overall improves security handling and eases future maintenance.
Resolved a syntax error in the allowed_users property within the GPTBot class by adding the missing 'self' parameter. This correction ensures the proper functioning of the property method, enabling the bot to correctly retrieve the list of users authorized to use it.
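For reference, the corrected property signature looks roughly like this; the config key and parsing are illustrative:

```python
import json


class GPTBot:
    ...

    @property
    def allowed_users(self) -> list:
        """Users permitted to use the bot, read from the configuration."""
        # 'self' was missing from this signature before the fix.
        return json.loads(self.config["GPTBot"].get("AllowedUsers", "[]"))
```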
Extended the condition for the audio message handling in the chatbot to recognize MP3 audio files sent as file attachments. This ensures that MP3 files will be properly processed as audio messages, improving the bot's media handling capabilities. This is just a test at this point, and may be rolled back.
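The extended condition might look something like the following, assuming the mimetype is available in the event's content info (attribute names are assumptions):

```python
def is_audio_message(event) -> bool:
    """Treat native audio events and MP3 file attachments as audio."""
    content = event.source.get("content", {})
    msgtype = content.get("msgtype")
    mimetype = content.get("info", {}).get("mimetype", "")
    return msgtype == "m.audio" or (msgtype == "m.file" and mimetype == "audio/mpeg")
```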
Migrated several hardcoded bot configuration settings to dynamic properties with fallbacks, enhancing flexibility and customization. The properties now read from a configuration file, allowing changes without code modification. Simplified the instantiation logic by removing immediate attribute setting in favor of lazy-loaded properties. Additionally, prepared to segregate OpenAI-related settings into a dedicated class (noted with TODO comments).
Note: Verify the presence of necessary configuration parameters or include defaults to prevent potential runtime issues.
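In that spirit, a minimal sketch of a lazily evaluated property with an explicit fallback; the key names and defaults are illustrative:

```python
class GPTBot:
    ...

    @property
    def display_name(self) -> str:
        """Lazily read the bot's display name from the config, with a safe default."""
        return self.config["GPTBot"].get("DisplayName", "GPTBot")

    @property
    def max_messages(self) -> int:
        # A fallback prevents a missing key from raising at runtime.
        return int(self.config["GPTBot"].get("MaxMessages", 20))
```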
Added options to extract specific info and summarize content from Wikipedia pages within the gptbot's Wikipedia tool. The 'extract' option enables partial retrieval of page data based on a user-defined string, leveraging the bot's existing chat API for extraction. The 'summarize' option allows users to get concise versions of articles, again utilizing the bot's chat capabilities. These additions provide users with more granular control over the information they receive, potentially reducing response clutter and focusing on user-specified interests.
Cast user objects to strings to standardize ID handling across API calls. Enhanced logging statements now include user and room context, providing better traceability for response generation. Also refined error handling for API token limits: when a max token error occurs, tool-role messages are removed from the history before the request is reattempted. This allows response generation to degrade gracefully, continuing without tool assistance when the limit is hit.
Introduced error handling for the 'max_tokens' exception during chat response generation. In cases where the maximum token count is exceeded, the bot now falls back to a no-tools response, avoiding the halt of conversation. This ensures a smoother chat experience and persistence in responding, even when a limit is reached. Any other exceptions will still be raised as before, maintaining error transparency.
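A hedged sketch of this fallback path; how the token-limit error is detected and the helper it retries through are assumptions, since the exact exception type depends on the OpenAI client version:

```python
async def generate_chat_response(self, messages, user=None, room=None, use_tools=True):
    try:
        return await self._chat_completion(messages, use_tools=use_tools)
    except Exception as exc:
        # If the request exceeded the context window, retry without tools
        # (and without tool-role messages) instead of dropping the conversation.
        if use_tools and "maximum context length" in str(exc).lower():
            pruned = [m for m in messages if m.get("role") != "tool"]
            return await self._chat_completion(pruned, use_tools=False)
        raise
```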
- Introduce error handling for the keys upload process, logging failures to assist with troubleshooting.
- Improve exception handling in the OpenAI class by returning a more informative response based on the exception arguments if available.
- Replace a return statement in the Newroom tool with an exception raise to standardize tool action termination and provide clearer flow control.
Resolves issue with silent key upload failures. Refines response and control flow for better clarity and debugging.
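A sketch of what logging a failed key upload can look like with matrix-nio; the surrounding helper is illustrative:

```python
import logging

from nio import AsyncClient, KeysUploadError


async def ensure_keys_uploaded(client: AsyncClient) -> None:
    """Upload encryption keys if needed, logging failures instead of ignoring them."""
    if client.should_upload_keys:
        response = await client.keys_upload()
        if isinstance(response, KeysUploadError):
            logging.error("Key upload failed: %s", response.message)
```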
Enabled asynchronous key upload in the roommember callback to improve efficiency. Fixed the chat response generation by properly referencing the event sender rather than the room ID, aligning user context with chat messages. Corrected the user parameter misuse in the OpenAI class to utilize the room ID. Extended the toolkit to include a 'newroom' feature for creating and setting up new Matrix rooms, thereby enhancing bot functionality.
This commit significantly improves bot response times and contextual accuracy while interacting within rooms and adds a valuable feature for users to create rooms seamlessly.
Enhanced the speech generation logging to display the word count of the input text instead of the full text. This change prioritizes user privacy and improves log readability. Implemented a new feature to generate descriptions for images within a conversation, expanding the bot's capabilities. Also refactored the `BaseTool` class to access arguments safely through the `.get` method and to include `messages` by default, ensuring graceful handling of missing arguments.
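A minimal sketch of the safer argument access in the tool base class; the attribute names are illustrative:

```python
class BaseTool:
    def __init__(self, **kwargs):
        self.kwargs = kwargs
        # .get avoids a KeyError when optional arguments are missing,
        # and 'messages' is always available to subclasses.
        self.bot = kwargs.get("bot")
        self.room = kwargs.get("room")
        self.user = kwargs.get("user")
        self.messages = kwargs.get("messages", [])
```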
Enhanced the audio processing in speech-to-text conversion by converting the input audio to MP3 format before transcription. The logging now reflects the word count of the recognized text, providing clearer insight into the output. This should improve compatibility with the transcription service and result in more accurate transcriptions.
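A hedged sketch of the convert-then-transcribe flow, assuming `pydub` for the MP3 conversion and an async OpenAI client for Whisper transcription; the helper name and logging are illustrative:

```python
import io
import logging

from pydub import AudioSegment


async def speech_to_text(openai_client, audio_bytes: bytes) -> str:
    # Normalize whatever was received (ogg, m4a, ...) to MP3 before transcribing.
    mp3_buffer = io.BytesIO()
    AudioSegment.from_file(io.BytesIO(audio_bytes)).export(mp3_buffer, format="mp3")
    mp3_buffer.name = "audio.mp3"  # lets the API infer the container format
    mp3_buffer.seek(0)

    result = await openai_client.audio.transcriptions.create(
        model="whisper-1", file=mp3_buffer
    )
    # Log the word count rather than the recognized text itself.
    logging.info("Recognized text: %d words", len(result.text.split()))
    return result.text
```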
Introduced a new 'datetime' tool to the gptbot, which provides the current date and time in UTC. This enhancement caters to the need for time-related queries within the bot's functionality, expanding its utility for users dealing with time-sensitive information.
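The core of such a tool is small; a sketch assuming the project's tool classes expose an async `run` method returning text:

```python
from datetime import datetime, timezone


class Datetime:  # would subclass the bot's BaseTool in practice
    DESCRIPTION = "Get the current date and time in UTC."

    async def run(self) -> str:
        return datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%S UTC")
```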
Temporarily commented out callbacks for test responses, event handling, and encrypted messages to focus on core functionality stabilization. This change aims to simplify the debugging process and enhance the reliability of active features during the development phase. Encryption handling will be reintroduced after refining base features.
Refined the exception details in the Wikipedia tool to include the search query when no results are found, enhancing the clarity of error outputs for end-users. This change helps in debugging by indicating the exact query that led to a no-results situation. Additionally, the existing failure-to-connect error message was left as-is, maintaining accurate API connectivity diagnostics.
Refactor the message concatenation logic within the chat response to ensure the original final message remains intact at the end of the sequence. Introduce a new 'Wikipedia' tool to the bot's capabilities, allowing users to query and retrieve information from Wikipedia directly through the bot's interface. This enhancement aligns with efforts to provide a more informative and interactive user experience.
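A minimal illustration of keeping the original final message at the end when tool output is spliced into the history (names are illustrative):

```python
def splice_tool_messages(messages: list, tool_messages: list) -> list:
    """Insert tool output before the final message instead of appending after it."""
    if not messages:
        return list(tool_messages)
    # The last message (typically the user's prompt) must stay last.
    return messages[:-1] + list(tool_messages) + [messages[-1]]
```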
Eliminated a print statement that was outputting the API request URL in the weather fetching tool, ensuring sensitive key information is not displayed in logs. This increases security by preventing potential API key exposure.
Eliminated the printing of traceback in the exception handling block when the GPTBot encounters an error calling a tool. This change cleans up the logs by removing a redundant error output since relevant information is already being logged. The update aims to enhance the clarity and readability of the logs in case of tool calling errors.
Refactored `call_tool` to pass `room` and `user` for improved context during tool execution.
Introduced `Handover` and `StopProcessing` exceptions to better control the flow when a tool call needs to either hand control back to text generation or stop processing entirely (a sketch follows below).
Enabled flexibility with `room` param in sending images and files, now accepting both `MatrixRoom` and `str` types.
Updated `generate_chat_response` in the OpenAI class to incorporate a tool-usage flag and tighter pruning of messages when handling tool responses.
Introduced `orientation` option for image generation to specify landscape or portrait.
Implemented two new tool classes, `Imagine` and `Imagedescription`, to streamline image creation and image description respectively.
This improved error handling and additional granularity in tool invocation ensure that the bot behaves more predictably and transparently, particularly when interacting with generative AI and handling dialogue. The flexibility in both response and image generation caters to a wider range of user inputs and scenarios, ultimately enhancing the bot's user experience.
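A hedged sketch of how `call_tool` might use these exceptions to steer the flow; the tool registry, argument parsing, and message shapes are illustrative rather than the bot's exact implementation:

```python
import json


class Handover(Exception):
    """The tool wants the text-generation step to take over from here."""


class StopProcessing(Exception):
    """The tool has fully handled the request; no further response is needed."""


TOOLS: dict = {}  # illustrative registry mapping tool names to tool classes


async def call_tool(self, tool_call, room, user, **kwargs):
    tool_name = tool_call.function.name
    args = json.loads(tool_call.function.arguments or "{}")
    tool_class = TOOLS[tool_name]

    try:
        # Room and user are passed along so tools have conversational context.
        return await tool_class(room=room, user=user, bot=self, **args).run()
    except (Handover, StopProcessing):
        # Let the caller decide whether to fall back to plain text generation
        # or to stop responding entirely.
        raise
```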
This commit adds functionality to call tools within the chat completion model. By introducing the `call_tool()` method in the `GPTBot` class, tools can now be invoked with the appropriate tool call. The commit also includes the necessary changes in the `OpenAI` class to handle tool calls during response generation. Additionally, new tool classes for geocoding and dice rolling have been implemented. This enhancement aims to expand the capabilities of the bot by allowing users to leverage various tools directly within the chat conversation.
This change adds support for voice input and output to the GPTbot. Users can enable this feature using the new `!gptbot roomsettings` command. Voice input and output are currently supported via OpenAI's TTS and Whisper models. However, note that voice input may be unreliable at the moment. This enhancement expands the capabilities of the bot, allowing users to interact with it using their voice. This addresses the need for a more user-friendly and natural way of communication.
- Replaced synchronous room check with asynchronous room check using `await`.
- Updated the code to use the `await` keyword before calling `self.room_uses_assistant(room)`.
- This change enables the code to generate assistant responses asynchronously.
This commit adds a new method `room_uses_assistant` to the OpenAI class. This method allows checking whether a given room uses an assistant. It uses the `room_settings` table in the database to determine if the specified room has the `openai_assistant` setting.
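A sketch of what that lookup might look like against the `room_settings` table; the exact schema and column names are assumptions:

```python
def room_uses_assistant(self, room_id: str) -> bool:
    """Check whether the room has an 'openai_assistant' setting configured."""
    row = self.database.execute(
        "SELECT value FROM room_settings WHERE room_id = ? AND setting = 'openai_assistant'",
        (room_id,),
    ).fetchone()
    return bool(row and row[0])
```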
The commit modifies the image generation code in the OpenAI class. The size and model of the generated image can now be dynamically set based on the provided prompt. The code has been refactored to handle different image sizes and models correctly.
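A hedged sketch of choosing size and model from the prompt with the OpenAI images endpoint, assuming an async client; the model name, size values, and keyword convention are examples rather than the bot's actual rules:

```python
async def generate_image(openai_client, prompt: str) -> str:
    # Purely illustrative convention: let the prompt hint at the orientation.
    if "landscape" in prompt.lower():
        size = "1792x1024"
    elif "portrait" in prompt.lower():
        size = "1024x1792"
    else:
        size = "1024x1024"

    response = await openai_client.images.generate(
        model="dall-e-3", prompt=prompt, size=size
    )
    return response.data[0].url
```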