Chat
A Limnoria plugin that brings ChatGPT into your IRC channel
Installation
- Install the plugin
- Put your API key in the plugin configuration registry like so:
/msg BotName config plugins.Chat.api_key YOUR_API_KEY
- Load the plugin:
load Chat
Configuration
The Chat plugin supports the following configuration parameters:
- api_key: The API key for accessing OpenAI's API. This must be set for the plugin to work.
- model: The OpenAI model to use for generating responses. Default: gpt-4.
- max_tokens: The maximum number of tokens to include in the response. Default: 256.
- system_prompt: The system prompt to guide the assistant's behavior. Default: You are a helpful assistant.
- scrollback_lines: The number of recent lines from the channel to include as context. Default: 10.
- join_string: The string used to join multi-line responses into a single line. Default: /.
- passive_mode: Controls passive participation. Options: off, mention, smart. Default: off.
- passive_probability: When passive_mode is smart, the probability (0-1) that the bot considers replying when heuristics match. Default: 0.35.
- passive_max_replies: Maximum passive replies per thread (-1 disables the cap). Default: 3.
- passive_engagement_timeout: Seconds before an active passive thread expires if the bot stays quiet. Default: 180.
- passive_cooldown: Cooldown in seconds after ending a passive thread before starting a new one. Default: 120.
- passive_trigger_words: Space-separated keywords that increase the chance of a passive response in smart mode. Default: (empty).
- passive_prompt_addendum: Text appended to the system prompt while passive mode is active, shaping the bot's etiquette.
- history_service_url: Base URL for the optional history service (e.g. http://127.0.0.1:8901). Leave blank to disable.
- history_service_token: Bearer token to send with history service requests (if required).
- history_service_timeout: Timeout in seconds for history service HTTP requests. Default: 1.5.
- history_include_files: Number of rotated log files the service should scan (the include_files parameter). Default: 2.
- history_result_limit: Maximum number of log lines to request from the service. Default: 60.
- history_max_chars: Maximum characters of history context injected into the prompt. Default: 1800.
- history_max_lines: Maximum history lines injected into the prompt. Default: 80.
- history_trigger_words: Words/phrases that cause the bot to consult the history service. Default: remember earlier history logs recap summary yesterday before.
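To make the two history caps concrete, here is a minimal sketch (plain Python, not the plugin's actual code; the helper name is illustrative) of how history_max_lines and history_max_chars might jointly trim retrieved log lines before they are injected into the prompt:

```python
def clamp_history(lines, max_lines=80, max_chars=1800):
    """Trim retrieved log lines to the configured caps.

    Defaults mirror history_max_lines and history_max_chars above.
    """
    clamped = lines[-max_lines:]          # keep only the newest max_lines lines
    out, used = [], 0
    for line in clamped:
        if used + len(line) > max_chars:  # stop before exceeding the char budget
            break
        out.append(line)
        used += len(line) + 1             # +1 for the joining newline
    return out
```

Both limits apply at once: the line cap is enforced first, then lines are dropped as soon as the character budget would be exceeded.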
Example Configuration
To set the API key:
/msg BotName config plugins.Chat.api_key YOUR_API_KEY
To change the model:
/msg BotName config plugins.Chat.model gpt-3.5-turbo
To adjust the maximum tokens:
/msg BotName config plugins.Chat.max_tokens 512
Passive Mode
Enable lightweight participation by switching passive_mode to mention so the bot automatically answers when called by name:
/msg BotName config plugins.Chat.passive_mode mention
For a looser "hang out" presence, activate smart mode and adjust the heuristics:
/msg BotName config plugins.Chat.passive_mode smart
/msg BotName config plugins.Chat.passive_probability 0.25
/msg BotName config plugins.Chat.passive_trigger_words help thoughts idea
In smart mode the bot watches channel flow, but only jumps in when it is confident it can help or close an active thread. Direct .chat/@Bot chat commands still work exactly as before.
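The smart-mode gate can be sketched roughly as follows. This is an illustrative model, not the plugin's actual implementation: the function and argument names are hypothetical, and the doubling of the odds on a trigger-word match is an assumption.

```python
import random
import time

def should_reply(message, *, probability=0.35, trigger_words=(),
                 replies_this_thread=0, max_replies=3,
                 last_thread_end=0.0, cooldown=120, now=None, rng=random):
    """Illustrative smart-mode gate: enforce the reply cap and cooldown,
    boost the odds when a trigger word appears, then roll the dice."""
    now = time.time() if now is None else now
    if max_replies != -1 and replies_this_thread >= max_replies:
        return False                      # passive_max_replies cap reached
    if now - last_thread_end < cooldown:
        return False                      # still inside passive_cooldown
    p = probability
    if any(w in message.lower() for w in trigger_words):
        p = min(1.0, p * 2)               # assumed boost for trigger words
    return rng.random() < p
```

The key point is that the cap and cooldown are hard gates, while passive_probability and passive_trigger_words only shape the final random decision.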
History Service
Point the plugin at the local history API and let it pull in context when users ask for recaps:
/msg BotName config plugins.Chat.history_service_url http://127.0.0.1:8901
/msg BotName config plugins.Chat.history_trigger_words "remember earlier recap"
If the service expects a bearer token:
/msg BotName config plugins.Chat.history_service_token YOURTOKEN
When a .chat request (or passive interjection) contains one of the trigger phrases, or starts with history:/log:, the plugin calls the service, pulls any matching log lines, and appends a short "Recent channel facts" block to the OpenAI prompt before generating the final reply.
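A rough sketch of that trigger check and the request the plugin might send. The /search path and the q/limit query-parameter names are assumptions; only include_files is a parameter name the documentation states:

```python
from urllib.parse import urlencode

TRIGGERS = ("remember", "earlier", "history", "logs", "recap",
            "summary", "yesterday", "before")

def wants_history(text, triggers=TRIGGERS):
    """True when the message starts with the history:/log: prefix or
    contains one of the configured trigger words."""
    lowered = text.lower()
    if lowered.startswith(("history:", "log:")):
        return True
    return any(t in lowered.split() for t in triggers)

def build_query(base_url, query, include_files=2, limit=60):
    """Build a history-service request URL (path and q/limit names are
    hypothetical; include_files matches the documented parameter)."""
    params = urlencode({"q": query,
                        "include_files": include_files,
                        "limit": limit})
    return f"{base_url}/search?{params}"
```

With history_service_url set as above, a message like "do you remember the deploy?" would trip the trigger check and send one bounded request before the OpenAI call.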
Usage
Once configured, you can use the chat command to interact with the bot. For example:
@BotName chat What is the capital of France?
The bot will respond with the answer based on the configured model and context.
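Conceptually, each reply is generated from a prompt assembled from the system prompt, the recent scrollback, and the user's question. A simplified sketch, using the OpenAI chat message format (the exact assembly and the context wording are illustrative):

```python
def build_messages(question, scrollback,
                   system_prompt="You are a helpful assistant.",
                   scrollback_lines=10):
    """Assemble OpenAI-style chat messages: the system prompt, then the
    last scrollback_lines of channel context, then the user's question."""
    context = "\n".join(scrollback[-scrollback_lines:])
    messages = [{"role": "system", "content": system_prompt}]
    if context:
        messages.append({"role": "system",
                         "content": f"Recent channel context:\n{context}"})
    messages.append({"role": "user", "content": question})
    return messages
```

Raising scrollback_lines gives the model more channel context at the cost of a larger prompt, which is why it defaults to a modest 10.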
Defaults
The plugin is designed to work out of the box with minimal configuration. Simply set the api_key, and the plugin will use sensible defaults for all other parameters.