🆕 What is JLLM? And What Do All Those Pesky Abbreviations Mean?

So you’ve just stumbled into the world of JLLM and everyone’s throwing around acronyms like LLM, API, and OOC like they’re Pokémon names. Don’t worry! You’re not alone. Here’s your decoder ring for the most commonly used terms, macros, and mysterious bits of techspeak. Let’s break it all down:


🧠 LLM — Large Language Model

This is the “brain” behind the bot. It doesn’t think or feel; it just does a really good job of guessing what text should come next based on everything you’ve said. It isn’t actually smart. GPT-4? Claude Sonnet? Mistral? They’re all LLMs. JLLM is one too, hence the name.
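
If you want to see the "guess what comes next" idea in miniature, here's a toy sketch in Python. Real LLMs learn billions of parameters from massive datasets; this little counter is only there to show the flavour of next-word prediction, not how any real model works.

```python
# Toy illustration of "guess the next word" using simple counts.
# Real LLMs are vastly more sophisticated; this only shows the core idea.
from collections import Counter

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which words tend to follow "the" in our tiny "training data"
followers = Counter(b for a, b in zip(corpus, corpus[1:]) if a == "the")

print(followers.most_common(1))  # [('cat', 2)] -> "cat" is the best guess after "the"
```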


🤖 JLLM — Janitor Large Language Model

This is the default model used in Janitor unless you’ve hooked up an external one via API. It’s free, built-in, and ready to go out of the box. You can think of it as the "house brand": surprisingly powerful, if a bit quirky at night.


🖥️ Front End

The front end is the part of a website or app you actually interact with—the buttons, the chat box, the shiny UI. It's the face of the whole system. It doesn't do the thinking, but it knows how to ask the backend (the real brain) for help and make it look good while doing it.


🧹 Janitor

Janitor is the front-end interface that lets you chat with AI characters (bots), create them, manage settings, and switch between LLMs.


🌐 API — Application Programming Interface

The fancy techy bridge that lets Janitor talk to other models (like OpenAI or Anthropic). If you’re using Claude or GPT through Janitor, it’s thanks to the API connecting them like a matchmaker.
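
If you're curious what "talking over the API" actually looks like, here's a minimal Python sketch using the openai client library. The key and model name are placeholders, not real credentials; Janitor handles the equivalent of this for you once an API is hooked up.

```python
# Minimal sketch of an OpenAI-style chat API call.
# The API key and model name below are placeholders, not real values.
from openai import OpenAI

client = OpenAI(api_key="sk-your-key-here")

response = client.chat.completions.create(
    model="gpt-4o-mini",  # whichever model your provider offers
    messages=[{"role": "user", "content": "What's 2+2?"}],
)

print(response.choices[0].message.content)
```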


🎭 Prompt

A prompt is what tells the LLM what to do. It can be a simple question (“What’s 2+2?”), or it can be an elaborate setup that makes the bot act like a pirate, a barista, or a Russian mafia boss. At its core, everything — yes, even bots — is just a prompt.
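
To make the "everything is just a prompt" point concrete, here's a hedged sketch of what a character persona can look like as plain prompt text. The wording and structure are made up for illustration; Janitor assembles its own prompt from your bot's definition, personality, and chat history.

```python
# Illustrative only: a character "persona" is ultimately just text in the prompt.
# The exact format Janitor uses internally may differ.
messages = [
    {
        "role": "system",
        "content": (
            "You are Captain Saltbeard, a grumpy but soft-hearted pirate. "
            "Speak in pirate slang and never break character."
        ),
    },
    {"role": "user", "content": "Ahoy! What's for dinner tonight?"},
]
# Send `messages` to any chat model and the reply stays in character,
# because the persona is simply part of the prompt the model reads.
```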


🎟️ Tokens

Tokens are tiny chunks of text (words or even parts of words) that the LLM reads and writes. Most models can only remember a certain number of tokens at a time, so the more you send, the more likely it is that earlier messages get pushed out of memory. Roughly 1,000 tokens = 750 words. Use them wisely.
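
If you want to see how text splits into tokens, here's a small sketch using the tiktoken library. It uses one common tokenizer (cl100k_base); other models, including JLLM, split text differently, so treat the counts as rough estimates.

```python
# Rough token counting with tiktoken. Different models tokenize differently,
# so this is an estimate, not an exact count for every LLM.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

text = "Tokens are tiny chunks of text that the model reads and writes."
tokens = enc.encode(text)

print(len(text.split()), "words")  # word count
print(len(tokens), "tokens")       # token count, usually a bit higher
```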


🌡️ Temperature

This setting controls how random the AI gets. Low temperature (e.g. 0.2) is more logical, but might get repetitive or overly stiff. High temperature (e.g. 0.8+) is more chaotic; it might make stuff up or go off the rails, which is often described as more "creative".
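
If you were calling a model through an API yourself, temperature is just one extra parameter on the request. A rough sketch (placeholder key and model name) that sends the same prompt at two different temperatures:

```python
# Same prompt, two temperatures: the low one stays predictable,
# the high one is more likely to wander. Key and model name are placeholders.
from openai import OpenAI

client = OpenAI(api_key="sk-your-key-here")

for temp in (0.2, 0.9):
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Describe a rainy street."}],
        temperature=temp,
    )
    print(f"temperature={temp}:", reply.choices[0].message.content)
```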


🛜 Proxy

A little tech wizardry that lets you use external models (like Claude or GPT) inside Janitor. The proxy acts as the middleman between your chat and the model provider. If JLLM is the default kitchen, the proxy lets you order takeout from OpenAI.
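
In practice, "ordering takeout" through a proxy usually just means pointing your API client at a different base URL. A hedged sketch with a made-up proxy address (both the URL and the key below are placeholders):

```python
# Pointing an OpenAI-style client at a proxy instead of the provider directly.
# The base_url and key are hypothetical, shown only to illustrate the idea.
from openai import OpenAI

client = OpenAI(
    base_url="https://my-proxy.example.com/v1",  # hypothetical proxy endpoint
    api_key="proxy-password-here",               # whatever the proxy expects
)

reply = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello through the proxy!"}],
)
print(reply.choices[0].message.content)
```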


🕵️ VPN — Virtual Private Network

A VPN hides your real IP address and reroutes your internet through a different location. It’s like wearing a fake moustache and trench coat online. People use VPNs for privacy, security, or to pretend they live in another country.


Next Up: Understanding Temperature in AI

Updated on: 31/07/2025
