Hello Lemmings!
I am thinking of making a community moderation bot for Lemmy. This new bot will have faster response times with the help of Lemmy webhooks, an amazing plugin for Lemmy instances by @rikudou@lemmings.world to add webhook support. With this, there is no need to frequently call the API at a fixed interval to fetch new data. Any new data will be sent via the webhook directly to the bot backend. This allows for actions within seconds, thus making it an effective auto moderation tool.
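To give an idea of how the webhook side could look, here is a minimal sketch of a receiver for the bot backend. The payload shape (a `type` field plus a `comment`/`post` object) is an assumption for illustration only and would need to match what the plugin actually sends:

```python
# Minimal sketch of a webhook receiver for the bot backend.
# Assumption: the Lemmy webhooks plugin POSTs JSON to a URL you configure;
# the field names used here ("type", "comment", "post") are hypothetical.
from flask import Flask, request

app = Flask(__name__)

@app.route("/webhook", methods=["POST"])
def handle_webhook():
    event = request.get_json(force=True, silent=True) or {}
    kind = event.get("type")  # hypothetical field for illustration
    if kind == "comment":
        handle_new_comment(event.get("comment", {}))
    elif kind == "post":
        handle_new_post(event.get("post", {}))
    return "", 204

def handle_new_comment(comment: dict) -> None:
    # Moderation logic (regex checks, reporting, etc.) would go here.
    print("new comment:", comment.get("content", ""))

def handle_new_post(post: dict) -> None:
    print("new post:", post.get("name", ""))

if __name__ == "__main__":
    app.run(port=8080)
```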
I have a few features I thought of doing:
- Welcome messages
- Auto commenting on new posts
- Scheduled posts
- ~~Punish content authors or take action on content via word blacklist/regex~~ Auto report content via word blacklist/regex
- Ban members of communities by their usernames via word blacklist or regex
- Auto community lockdown during spam
What other features do you think are possible? Please let me know. Any questions are also welcome.
Community requested features:
- Strike system
Strikes are added to a member of the community; if their strike count reaches a certain threshold within a time period, the member is temporarily banned (rough sketch after this list).
- Post creation restriction by account age
If an account’s age is lower than X, remove the post.
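A rough sketch of how those two requested features could work, independent of any Lemmy API; the threshold, time window, and ban length are placeholder values:

```python
# Sketch of the strike system and the account-age restriction.
# Assumption: strike_member() is called by a mod action or a content filter;
# all numeric values are placeholders.
import time
from collections import defaultdict

STRIKE_THRESHOLD = 3            # strikes needed to trigger a temp ban
STRIKE_WINDOW = 30 * 24 * 3600  # only count strikes from the last 30 days
BAN_LENGTH = 7 * 24 * 3600      # temp-ban for 7 days

strikes: dict[str, list[float]] = defaultdict(list)

def strike_member(username: str) -> bool:
    """Record a strike; return True if the member should be temp-banned."""
    now = time.time()
    recent = [t for t in strikes[username] if now - t < STRIKE_WINDOW]
    recent.append(now)
    strikes[username] = recent
    return len(recent) >= STRIKE_THRESHOLD

def post_allowed(account_created_at: float, min_age_days: int = 7) -> bool:
    """Account-age restriction: allow posts only from accounts older than X days."""
    return time.time() - account_created_at >= min_age_days * 24 * 3600
```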
Make it blockable, mutable… I don’t like opening a post and the first thing I see being a pinned bot comment.
Half the features are helpful and the others are obnoxious or useless reddit vestiges. Auto banning users, locking communities, deleting posts is all rather harmful and not conducive to interesting discussion and posts. Welcome messages and auto mod comments on every post are also plain terrible.
Make a slim bot with moderation tools that helps mods and admins do their tasks more efficiently and comfortably, but don’t offload the mod role itself to the bot. That is one of the worst parts of reddit.
Your idea of a “bot” is still just manual labor for mods with “advanced” features.
Honestly a bot moderator is just open source enshittification of the fediverse if you did it like this. Bots have no nuance, do not understand context and are generally unable to apply reason to a situation.
The most egregious suggestion is username-based bans; this is 100% going to remove a bunch of users without real cause. And having automod comment the same irrelevant headline on every single post just causes spam and kills the comment count function.
In my opinion the bots should do all the tediousness for the moderators, and there may even be scenarios where a bot content filter could be invaluable, but in general any tool you put out there will also be used to its fullest extent by at least one person.
Like cops with too many powers, eventually they abuse it for everything.
What exactly am I to do when mods use my bot maliciously? I just try to program features into the bot that might be used by the moderators. Everything is optional, if the features are used maliciously, it’s not my fault but the moderators’. What fault does the creator of knives have when they are being used to murder people instead of cutting vegetables?
I do appreciate your comment though; some misc tasks don’t necessarily need to exist, such as welcome messages and auto comments on posts. Actually, I’ll remove welcome messages, they are a waste of API calls. Maybe auto comments on posts as well, but scheduled posts are (most likely) staying. It is a moderation bot after all. I’ll consider your complaint. Thanks.
Generally speaking please don’t. I’ve never seen a reddit bot that I didn’t find annoying.
I beg of you, please don’t. The worst thing to happen to Reddit was their Automod. Please reconsider.
Why? Automod is just a tool; the issue people have with it is how overzealous the mods using it are. If you’re moderating a community with 10,000+ people you can’t expect to filter and manage everything yourself, so a bot scheduling posts and filtering potential spam/low-effort content is necessary.
Automod is just a tool, indeed, but how a tool is designed dictates or at least encourages its usage.
Exactly.
It’s to ease the work of community moderators. And you can’t just catch every comment that needs to be removed. Or posts, etc. This is where an automated moderation bot comes in. No matter how much you hate it, having some automated system is a must on growing platforms such as Lemmy.
It’s also not like the bot instantly bans everyone. I honestly don’t get the hate
OP I agree with you, it’s a great idea imo.
I’ve been a moderator before on a Discord server with 1000+ members, for one of my FOSS projects, and maintenance against scam/spam bots grew so bad that I had to get a team of moderators + an auto moderation bot + wrote an additional moderation bot myself!

Here is the source to that bot, which might be usable for inspiration or just plain usable by some other users: https://github.com/Rikj000/Discord-Auto-Ban

I think it will only be a matter of time before the spam/scam bots catch up to Lemmy, so it’s good to be ahead of the curve with auto-moderation. However, I also partially agree with @dohpaz42: auto-moderation on Reddit is very, uhm, present.
Imo auto moderation should not really be visible to non-offenders.
Trying to automate things and decrease mod burden is great, so I don’t oppose OP’s idea on general grounds. My issues are with two specific points:
- Punish content authors or take action on content via word blacklist/regex
- Ban members of communities by their usernames/bios via word blacklist or regex
- Automated systems don’t understand what people say within a context. As such, it’s unjust and abusive to use them to punish people based on what they say.
- This sort of automated system is extra easy to circumvent for malicious actors, especially since it needs to be tuned in a way that lowers the amount of false positives (unjust bans), which leads to a higher amount of false negatives (crap going past the radar).
- Something that I’ve seen over and over on Reddit, and that mods here will likely do in a similar way, is shifting the blame to automod. “NOOOO, I’m not unjust. I didn’t ban you incorrectly! It was automod lol lmao”
Instead of those two I think that a better use of regex would be an automated reporting system, bringing potentially problematic users/pieces of content to the attention of human mods.
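A rough sketch of what I mean, with made-up patterns and nothing Lemmy-specific; the bot only files a report, a human mod decides what happens:

```python
# Sketch of the "report instead of punish" approach: regex matches only
# produce a report reason for human mods, the bot never removes or bans
# on its own. Pattern list and report text are illustrative.
import re

BLACKLIST_PATTERNS = [
    re.compile(r"\bfree\s+crypto\b", re.IGNORECASE),
    re.compile(r"https?://suspicious-domain\.example", re.IGNORECASE),
]

def check_content(text: str) -> str | None:
    """Return a report reason if the text matches a pattern, else None."""
    for pattern in BLACKLIST_PATTERNS:
        if pattern.search(text):
            return f"Auto-report: matched pattern {pattern.pattern!r}"
    return None
```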
Alright, sounds fair. Instead of taking dangerous actions, I’ll make it create a report. Though I’ll probably keep the feature to punish members by their usernames via regex or word blacklist.
please no welcome messages, they’re like the most obnoxious thing Reddit ever had (well ok maybe not the most), they just clog your inbox.
I’ll consider it. Thanks for your comment.
I don’t see a problem with having the feature as an option. It only becomes a problem if it is misused by moderators.
The problem with something like this is that people start to dislike it more with experience. People have to be less experienced to become more experienced, and so it’s a certainty that there will be a lot of moderators that misuse it.
I also don’t mean to sound like a gnome dev, but what is actually the use case for this?