
FAQ: Why doesn't Janitor publicly discuss moderator actions?


We are often asked why we don't publicly discuss the reasons behind specific moderator activity on the site, especially when creators or users come online and accuse us of targeting them or of changing the rules. "If they're wrong, why aren't you telling us?" gets thrown at us a lot. Even internally, the question comes from staff members who aren't involved in the site-moderation side of the business. In the interest of transparency, here are our reasons, spelled out as clearly as possible.

1) Privacy and safety

  • A lot of the content that gets removed, or that gets people banned, is exploitative, illegal, or otherwise harmful, and a public label such as "bestiality", "pro-homophobia", or "child exploitation" can be tied to your digital footprint and identity forever. Whether you posted that content deliberately or out of ignorance, it can follow you to school or work, or get back to your family and loved ones. We aren’t condoning this content by staying silent in public, but that silence protects people’s privacy.
  • The harassment risk is very real. Publicly tagging someone’s account with a hot-button policy label invites brigades, dogpiles, and doxxing. We don’t want to put a target on anyone’s back, including people who are already upset. We take this responsibility very seriously.
  • You control what you reveal, not us. If one of you says “X happened,” that doesn’t mean we should add “and also Y and Z.” You might be comfortable sharing one detail but not another. We don’t get to decide that for you. Even if what’s circulating is misinformation, we will not use private information to publicly refute it.

2) Legal

  • We are limited by law in what we can publicly disclose. The GDPR in Europe and the UK protects personal information such as sexual preferences. In the USA, many states treat “sex life,” health, and other sensitive categories as protected. Publishing those details without explicit consent can break privacy laws and breach our own privacy and safety commitment from point one.
  • “But they talked about it first” doesn’t give us a free pass. If we add details they didn’t disclose, we can still face claims for public disclosure of private facts. And ultimately, we’re here to keep Janitor what it is — the best place for creators and users to build with AI. We’re not in the business of exposing people or getting into arguments about labels, and we don’t want to be.
  • Defamation is a real risk. If we misstate or oversimplify a label, or imply criminal behavior, that’s legal exposure we own.

3) Precedent

  • Rules have to work for everyone, not just the loudest group or post. If we break our “no public specifics” rule once, we’ll be pressured to break it every time, and moderation here becomes a spectator sport where shouting loudly enough gets you a custom response. That isn’t fair or consistent for our moderators or our users.
  • Protecting privacy is for everyone. You may be angry today, but you might appreciate tomorrow that we didn’t pin a permanent, searchable label next to your name or handle.
  • Consistency > vibes. A clear, predictable process (private communication, appeal, second review) is how we keep things fair across thousands of cases, not ad-hoc clapbacks on Reddit or Discord.

4) Ineffectiveness

  • Public callouts aren't going to convince the skeptics. People who already distrust us, or who deliberately argue in bad faith, won’t be won over by full public disclosure; all that happens is the goalposts move, or the argument shifts to whether the policy itself is wrong rather than the action taken.
  • It would further fuel drama and misinformation. Posting specifics tends to escalate drama rather than resolve it, and it can drag unrelated users and creators in even deeper than they already are.
  • The silent neutrals care about process, not gossip. Most users want to know that the rules are clear and applied consistently, and that there’s an appeal path. Case-by-case public disclosure does nothing to prove that; a strong, consistent process does.

What does all of this mean? It means that the people behind the screen here at Janitor will continue to put user privacy and safety above the ability to publicly refute misinformation or smear campaigns. It does not mean we think we never make mistakes! But when we do, we do our absolute best to ensure they are investigated, corrected, and learned from. People can appeal any moderator decision.

And just to be very clear, we are also not satisfied with the way things are. We're working hard to improve the tools and processes for both moderators and users. Below are a few examples of things that are in the pipeline:

  • On-site, private, direct communication between moderators and users when an action is taken. This removes the requirement to leave a moderator comment on a bot for code-change requests or image removals, and improves moderators’ ability to give specific clarification or advice when a bot is removed.
  • The ability to republish a bot once its creator has changed it to comply with the guidelines and a moderator has approved it. This replaces the current process, where creators have to create a new bot.
  • On-site appeals process.
  • Far more efficient moderator tools to enable us to clear reports more quickly.
  • Ability to report user profiles.
  • Better public documentation, e.g. an image policy guide that is in the works.
  • Better on-site signposting to the expanded guidelines during bot creation.

Updated on: 27/10/2025
