Implemented logging message deletions to mod channel #216
Conversation
- Update moderation module to include `DeletedMessagesListener`
- Introduce configurable options for deleted message logging in `Config` and `config.type`
- Adjust PostgreSQL volume path in `docker-compose.dev.yml`
- Minor `package.json` script cleanup for `typecheck` command
bristermitten left a comment
great start, some changes & clarification needed please
…e fetching.
- Add message caching and fetching logic to `DeletedMessagesListener`.
- Replace inlined message formatting with reusable `formatMessageForPaste` function.
- Integrate `upload` service to generate paste URLs for deleted messages.
- Consolidate starboard and moderation message fetching to rely on `DeletedMessagesListener`.
Pull request overview
This PR implements functionality to log deleted messages to the mod channel, addressing issue #211. The implementation caches messages for up to 24 hours and logs both individual and bulk message deletions to a paste service, with links provided in embeds sent to the moderation log channel.
- Introduces a new event listener that caches messages and logs deletions to the mod channel
- Refactors message fetching from starboard module to use a shared singleton instance
- Adds configuration support for customizing cache TTL and excluded channels
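As a rough illustration of the configuration surface described above, the new options might take a shape like the following sketch. The field names (`cacheTtlMs`, `excludedChannels`) are assumptions for illustration; the real definitions live in `src/config.type.ts` and may differ.

```typescript
// Hypothetical sketch of the deletedMessageLog config shape; field names
// are assumptions, not the PR's actual code.
interface DeletedMessageLogConfig {
  // How long a message stays in the cache, in milliseconds.
  cacheTtlMs?: number;
  // Channel IDs that should never be cached or logged.
  excludedChannels?: string[];
}

// Default matching the 24-hour cache window the PR description mentions.
const defaultDeletedMessageLog: DeletedMessageLogConfig = {
  cacheTtlMs: 24 * 60 * 60 * 1000,
  excludedChannels: [],
};

console.log(defaultDeletedMessageLog.cacheTtlMs); // 86400000
```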
Reviewed changes
Copilot reviewed 10 out of 10 changed files in this pull request and generated 7 comments.
| File | Description |
|---|---|
| src/modules/moderation/deletedMessages.listener.ts | New listener that caches messages on create/update and logs them when deleted, handling both single and bulk deletions |
| src/modules/moderation/logs.ts | Adds CachedMessage interface and functions to log deleted messages with paste links |
| src/modules/moderation/moderation.module.ts | Registers the new DeletedMessagesListener with the moderation module |
| src/modules/starboard/starboard.listener.ts | Removes local MessageFetcher instance and message caching logic, delegating to DeletedMessagesListener |
| src/util/ratelimiting.ts | Exports a shared singleton MessageFetcher instance for reuse across modules |
| src/config.type.ts | Adds optional deletedMessageLog configuration with cache TTL and excluded channels |
| src/Config.ts | Adds default deletedMessageLog configuration for development environment |
| src/Config.prod.ts | Adds default deletedMessageLog configuration for production environment |
| package.json | Updates typecheck script flag ordering (appears to be a formatting change) |
| docker-compose.dev.yml | Removes version field and modifies PostgreSQL volume path |
From `src/modules/moderation/deletedMessages.listener.ts`:

```ts
  for (const message of messages.values()) {
    cacheMessage(message);
  }
  return messages.size;
}

async function fetchAndCacheMessages(
  channel: TextBasedChannel,
  limit: number
): Promise<number> {
  let cached = 0;
  let before: Snowflake | undefined;

  while (cached < limit) {
    const batch = await channel.messages.fetch({
      limit: Math.min(100, limit - cached),
      before
    });
    if (batch.size === 0) break;

    cached += cacheMessages(batch);
    before = batch.last()?.id;

    if (batch.size < 100) break;
  }

  return cached;
}

export const DeletedMessagesListener: EventListener = {
  async clientReady(client) {
    const guild = await client.guilds.fetch(config.guildId);
    const channels = await guild.channels.fetch();

    for (const channel of channels.values()) {
      if (!channel?.isTextBased() || isExcludedChannel(channel.id)) continue;

      await messageFetcher.addToQueue(async () => {
        try {
          const cached = await fetchAndCacheMessages(channel, 200);
          logger.info(`Cached ${cached} messages from #${channel.name}`);
        } catch {
          // Skip channels we can't read (permissions)
        }
      });
    }
  },
```
Copilot AI · Jan 6, 2026
Inconsistent indentation is used in this file. Functions cacheMessages and fetchAndCacheMessages (lines 47-75) and the clientReady method (lines 78-94) use 2-space indentation, while the rest of the EventListener methods use tab indentation. The project should maintain consistent indentation throughout. Based on other files in the codebase, tabs appear to be the standard.
Suggested change (re-indented with tabs):

```ts
	for (const message of messages.values()) {
		cacheMessage(message);
	}
	return messages.size;
}

async function fetchAndCacheMessages(
	channel: TextBasedChannel,
	limit: number
): Promise<number> {
	let cached = 0;
	let before: Snowflake | undefined;

	while (cached < limit) {
		const batch = await channel.messages.fetch({
			limit: Math.min(100, limit - cached),
			before,
		});
		if (batch.size === 0) break;

		cached += cacheMessages(batch);
		before = batch.last()?.id;

		if (batch.size < 100) break;
	}

	return cached;
}

export const DeletedMessagesListener: EventListener = {
	async clientReady(client) {
		const guild = await client.guilds.fetch(config.guildId);
		const channels = await guild.channels.fetch();

		for (const channel of channels.values()) {
			if (!channel?.isTextBased() || isExcludedChannel(channel.id)) continue;

			await messageFetcher.addToQueue(async () => {
				try {
					const cached = await fetchAndCacheMessages(channel, 200);
					logger.info(`Cached ${cached} messages from #${channel.name}`);
				} catch {
					// Skip channels we can't read (permissions)
				}
			});
		}
	},
```
From `docker-compose.dev.yml`:

```diff
       - "3306:5432"
     volumes:
-      - postgres_data:/var/lib/postgresql/data
+      - postgres_data:/var/lib/postgresql
```
Copilot AI · Jan 6, 2026
The volume path for PostgreSQL data has been changed from /var/lib/postgresql/data to /var/lib/postgresql. This is incorrect as PostgreSQL stores its data in the /var/lib/postgresql/data directory by default. This change will likely cause PostgreSQL to fail to find or persist its data correctly, and may result in data loss or initialization issues.
Suggested change:

```diff
-      - postgres_data:/var/lib/postgresql
+      - postgres_data:/var/lib/postgresql/data
```
From `package.json`:

```diff
   "version": "echo $npm_package_version",
   "lint": "bunx biome check --fix",
-  "typecheck": "bunx tsc --noEmit --build tsconfig.json",
+  "typecheck": "bunx tsc  --build --noEmit tsconfig.json",
```
Copilot AI · Jan 6, 2026
There are two consecutive spaces between 'tsc' and '--build' in the typecheck command. While this may work, it's inconsistent with standard formatting and should be normalized to a single space for consistency.
Suggested change:

```diff
-    "typecheck": "bunx tsc  --build --noEmit tsconfig.json",
+    "typecheck": "bunx tsc --build --noEmit tsconfig.json",
```
From `src/modules/moderation/logs.ts`:

```ts
	const pasteContent = formatMessageForPaste(message);
	const pasteUrl = await upload({ content: pasteContent });

	const embed = createStandardEmbed(message.authorId)
		.setTitle("Message Deleted")
		.setColor("Grey")
		.setDescription(
			`**Author**: <@${message.authorId}> (${message.authorTag})\n` +
				`**Channel**: <#${message.channelId}>\n` +
				`**Created**: <t:${Math.round(message.createdTimestamp / 1000)}:R>\n\n` +
				`**Content**: [View on Paste](${pasteUrl})`,
		)
		.setFooter({ text: `Message ID: ${message.id}` })
		.setTimestamp();

	await modLogChannel.send({ embeds: [embed] });
```
Copilot AI · Jan 6, 2026
The upload function can return "Pasting failed" when an error occurs, but this code doesn't handle that case. If the paste upload fails, the embed will contain a broken link "[View on Paste](Pasting failed)" which is not user-friendly. Consider adding error handling to check if the upload was successful before creating the embed, or provide an alternative way to display the message content when paste upload fails.
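One hedged way to address this is to check the upload result before building the embed description. This is only a sketch: the `"Pasting failed"` sentinel is taken from the review comment, and `describeContent` is a hypothetical helper, not part of the PR.

```typescript
// Sketch of a fallback for failed paste uploads. The "Pasting failed"
// sentinel comes from the review comment; the helper itself is a
// hypothetical stand-in, not the PR's code.
const PASTE_FAILURE = "Pasting failed";

function describeContent(pasteUrl: string, rawContent: string): string {
  // Only emit a markdown link when we actually got a URL back; otherwise
  // fall back to an inline, truncated copy of the deleted content.
  if (pasteUrl === PASTE_FAILURE || !pasteUrl.startsWith("http")) {
    return `**Content** (paste upload failed):\n${rawContent.slice(0, 1000)}`;
  }
  return `**Content**: [View on Paste](${pasteUrl})`;
}

console.log(describeContent(PASTE_FAILURE, "hello"));
console.log(describeContent("https://paste.example/abc", "hello"));
```

The same guard would apply to the bulk-deletion path below, since both call the same `upload` service.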
From `src/modules/moderation/logs.ts`:

```ts
	const pasteUrl = await upload({ content: pasteContent });

	const embed = new EmbedBuilder()
		.setTitle("Bulk Messages Deleted")
		.setColor("DarkGrey")
		.setDescription(
			`**Channel**: <#${channelId}>\n` +
				`**Count**: ${messages.length} messages\n\n` +
				`**Messages**: [View on Paste](${pasteUrl})`,
		)
		.setTimestamp();

	await modLogChannel.send({ embeds: [embed] });
```
Copilot AI · Jan 6, 2026
The upload function can return "Pasting failed" when an error occurs, but this code doesn't handle that case. If the paste upload fails, the embed will contain a broken link "[View on Paste](Pasting failed)" which is not user-friendly. Consider adding error handling to check if the upload was successful before creating the embed, or provide an alternative way to display the message content when paste upload fails.
From `src/modules/moderation/deletedMessages.listener.ts`:

```ts
function cacheMessages(messages: Collection<string, Message>): number {
	for (const message of messages.values()) {
		cacheMessage(message);
	}
	return messages.size;
```
Copilot AI · Jan 6, 2026
The function returns messages.size which represents the total number of messages passed in, but doesn't reflect how many were actually cached. The cacheMessage function may skip messages (bots, non-guild messages, excluded channels), so the return value may be inaccurate. Consider tracking and returning the actual number of messages that were successfully cached for more accurate logging.
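A minimal sketch of the suggested fix, with `cacheMessage` reporting whether it stored the message. The filter conditions here (bot authors, excluded channels) are placeholders mirroring the review's description, not the PR's actual predicate.

```typescript
// Sketch: count only messages that actually pass the cache filters.
// The Msg shape and filter logic are illustrative assumptions.
type Msg = { id: string; bot: boolean; channelId: string };

const excluded = new Set(["mod-log"]);
const messageCache = new Map<string, Msg>();

function cacheMessage(message: Msg): boolean {
  // Skip bot messages and excluded channels, reporting the outcome.
  if (message.bot || excluded.has(message.channelId)) return false;
  messageCache.set(message.id, message);
  return true;
}

function cacheMessages(messages: Map<string, Msg>): number {
  let cached = 0;
  for (const message of messages.values()) {
    if (cacheMessage(message)) cached++; // count actual insertions
  }
  return cached;
}
```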
From `src/modules/moderation/deletedMessages.listener.ts`:

```ts
export const DeletedMessagesListener: EventListener = {
  async clientReady(client) {
    const guild = await client.guilds.fetch(config.guildId);
    const channels = await guild.channels.fetch();

    for (const channel of channels.values()) {
      if (!channel?.isTextBased() || isExcludedChannel(channel.id)) continue;

      await messageFetcher.addToQueue(async () => {
        try {
          const cached = await fetchAndCacheMessages(channel, 200);
          logger.info(`Cached ${cached} messages from #${channel.name}`);
        } catch {
          // Skip channels we can't read (permissions)
        }
      });
    }
  },

	messageCreate(_, message) {
		if (!message.inGuild()) return;
		cacheMessage(message);
	},

	messageUpdate(_, _oldMessage, newMessage) {
		if (!newMessage.inGuild()) return;
		if (newMessage.partial) return;
		cacheMessage(newMessage);
	},

	async messageDelete(client, message) {
		try {
			if (isExcludedChannel(message.channelId)) return;

			const cached = messageCache.get(message.id);
			if (!cached) {
				logger.debug(`Deleted message ${message.id} not in cache`);
				return;
			}

			await logDeletedMessage(client, cached);
			messageCache.delete(message.id);
		} catch (error) {
			logger.error("Failed to log deleted message:", error);
			Sentry.captureException(error);
		}
	},

	async messageDeleteBulk(client, messages, channel) {
		try {
			if (isExcludedChannel(channel.id)) return;

			const cachedMessages: CachedMessage[] = [];
			for (const [id] of messages) {
				const cached = messageCache.get(id);
				if (cached) {
					cachedMessages.push(cached);
					messageCache.delete(id);
				}
			}

			if (cachedMessages.length === 0) {
				logger.debug(`Bulk deletion in ${channel.id} - no messages in cache`);
				return;
			}

			await logBulkDeletedMessages(client, cachedMessages, channel.id);
		} catch (error) {
			logger.error("Failed to log bulk deleted messages:", error);
			Sentry.captureException(error);
		}
	},
};
```
Copilot AI · Jan 6, 2026
The new DeletedMessagesListener lacks test coverage. Similar listeners in the codebase (e.g., bump.listener.ts) have comprehensive test files. Consider adding unit tests to verify the message caching logic, event handlers (messageCreate, messageUpdate, messageDelete, messageDeleteBulk), and edge cases such as excluded channels, bot messages, and cache expiry behavior.
LGTM
This should implement #211