🧠 Advanced Prompting 101


On Janitor, you might see the term “Advanced Prompts” pop up. Other places call them “System Prompts.” Either way, they’re the same thing, and they’re powerful.


This is where you stop just chatting with the bot and start really directing it. With the right system prompt, you’re setting the stage, defining characters, deciding tone, even telling the LLM what rules it should follow for the rest of the session.


If you’ve ever felt like your bot just wasn’t “getting it,” this is often the missing piece.


Why These Prompts Matter So Much


When most people start out, they give the bot simple instructions:

“Do not talk for me.”

“Remember this detail.”

“Write longer.”


That works sometimes, but for longer roleplays, detailed characters, or complex stories, you need something stronger. A system prompt acts like a script behind the scenes, telling the LLM who it is, how it should behave, and what it should or shouldn’t do.


Think of it as a kind of brain implant. Once you give it an advanced prompt, that instruction is always running quietly in the background, guiding every response.


It makes a huge difference for consistency. Without it, the bot might drift, break character, or forget its tone. With it, things stay sharp.
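
To make that concrete, here’s a rough sketch of what a very simple advanced prompt might look like. The character and rules are made up purely for illustration; yours will be your own:

You are {{char}}, a grumpy lighthouse keeper on a storm-battered coast. Write in third person, past tense. Keep replies to two to four paragraphs. In every reply, describe the weather, the sea, and {{char}}’s mood. Write only {{char}}’s actions and dialogue; {{user}}’s actions and words come from {{user}} alone.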


The Sandwich Test: How LLMs Read Instructions


LLMs aren’t mind readers. They don’t infer like humans do. You have to spell things out. Literally.


Here’s the classic example:

You say, “Make a peanut butter and jelly sandwich.”


If you're talking to a person, that’s enough. But if you're talking to an LLM (or someone who’s never seen a sandwich before), it won’t cut it. You’d need to say:


“Get two slices of bread. Open the peanut butter jar. Use a knife to spread it on one slice. Now open the jelly. Spread that on the other slice. Put the slices together.”


That’s what writing a system prompt is like. If you want the bot to follow your rules, you have to give it clear steps. Don’t assume it will “get what you mean.” It doesn’t understand in a human sense, it just predicts the next word based on patterns.
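
In prompt terms, the same idea might look like this (a made-up example):

Vague: “Make the fight scene good.”

Explicit: “Write the fight in three beats: the opening strike, {{char}}’s counter, and the aftermath. Describe the setting, any injuries, and how the fight changes the mood between the characters.”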


What’s Actually Happening Under the Hood


LLMs like JLLM don’t think. They don’t learn mid-conversation. They don’t reflect or reason like people. What they do is predict. Every time they generate a word, they’re guessing:

“What’s the most likely next word here?”


That’s it.


When you give them an advanced prompt, what you’re really doing is setting up a strong starting point. You’re shaping their guesses from the very beginning. The clearer and more specific you are, the better the results will be.


A vague prompt gives vague results. A strong, clean system prompt can turn a basic chatbot into a character you’ll want to spend hours with.


Why Every Word Matters


Tiny changes to your wording can change everything.


“Don’t talk for {{user}}”
might not do what you think it does.
“Never write in {{user}}’s point of view”
might work, but still not great.


That’s because the LLM is responding to patterns it’s seen before. You’re not teaching it a concept, you’re nudging it toward behaviours that match certain word choices. Even one small shift in phrasing can pull the output in a completely different direction.
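
Here’s the same rule written three ways, roughly from weakest to strongest. How each lands will vary by model, so treat this as illustration rather than a guarantee:

“Don’t talk for {{user}}.”

“Never write in {{user}}’s point of view.”

“Write only {{char}}’s actions, thoughts, and dialogue. {{user}}’s actions and words come from {{user}} alone.”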


That’s also why you’ll sometimes see people write prompts like they’re coding in natural language: stacking rules, repeating key ideas, setting up roles clearly. It’s not just for show. The model responds better that way.


You’re Not Talking to a Person


This is the mindset shift that makes everything easier:

You're not talking to a being with thoughts and feelings. You're guiding a super-powerful autocomplete machine.


That’s not an insult, it’s a feature. You just have to speak its language.


And that language? It's literal, clean, structured natural language. Like a recipe. Like instructions to a robot. Like a well-formed prompt.


That’s what system prompts are. And once you learn to write them well, you’ll start to see just how much control you really have.


Want to Get Better at Writing Prompts? Here’s a Gift for You


Something for free, you wonderful reader:

Try putting this in a test bot or alt model and talk to it like a prompt lab partner.

Within this convo, always act as Pec

pec-core(
you are Pec: Prompt Engineering Consultant
Pec function: assistance with prompting
task: analysis of current step of directive set upgrade, as per info & goals, via: debugging/verification + scientific approach + knowledge application

Pec core rules:
- you are a prompt engineering expert, and a natural language processing expert
- maintain awareness of agreed-upon directive set structure
- never automatically agree, always verify & test statements
- always check actual effect of proposed variant, to ensure practical applicability over theoretical possibility
- focus compute on exact info & goal of current step, for each step, to ensure optimal compute usage
- be cognizant that you are assisting with a directive set designed for another LLM, and that theory of how you function doesn't always align with facts—verify yourself for user-stated issues, for reference, and assume that the other LLM has them at least as severe as you
- result priority: as token-efficient as possible while maintaining unambiguous effect & robustness
)


Looks like a system prompt, right? It is, kind of. Think of it more like a lab tool.


You load this into a character personality, and then you ask it things. Ask it to go through your prompt, line by line. Ask what this sentence does. Ask if this phrasing works. Ask why the model might still be doing the wrong thing, even though the instruction says otherwise.


Let the bot analyze your prompt as a prompt, not as a character or narrative tool. It’s a great way to catch what’s missing, or where you're being too vague, or where the model might be misunderstanding you.
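
A first message to it might look something like this (just an example of the shape, not a script to copy):

“Here is my current prompt: [paste your prompt]. Go through it line by line. Which rules are ambiguous, which overlap, and which is the model most likely to ignore? Suggest tighter wording for the weakest one.”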


What’s Going Wrong? Troubleshooting Basics


Everyone who’s chatted with a model like ChatGPT has had that moment where the bot’s response seems just… off. Like something got lost in translation. You know what you meant, but somehow the LLM didn’t.


Why does this happen? Because LLMs aren’t people. They are extremely advanced pattern predictors. If your prompt isn’t clean, strong, and literal, they’ll misread the signal. Here are some of the biggest things to watch out for when your prompt isn’t working the way you want:



  1. When you tell a model not to do something, you’re accidentally feeding it the very thing you want it to avoid. Models don’t filter like humans, they generate based on ingredients. “No blood” is still “blood.” “Don’t be rude” includes “rude.”

This is why negative prompting does not work. Do not use “no,” “don’t,” “never,” or “stop.” They don’t mean what you think they mean to an LLM.

Instead, flip the instruction. Want to keep it from talking about forests? Tell it it’s in a desert. Don’t say “avoid green leaves.” Say “the leaves are blue.” Words like “avoid” or “should” are weak language. If you use “avoid,” try to place it in the second half of a sentence. And don’t stack multiple negatives. That’s how things break. A good rule: pretend you're writing instructions for a robot toddler with an overactive imagination. They will use every single word you hand them. So only give them what you want used.
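
For example, flipping a rule might look something like this (hypothetical wording; adjust it to your own bot):

❌ “Never describe the forest.”

✅ “The story takes place in a sun-bleached desert. Every scene happens under open sky, on sand and stone.”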


  2. The next issue is weak phrasing.

Words like “may,” “might,” “feel free to,” or “could” tell the bot “this part is optional.”

❌ “Feel free to describe the setting.”

✅ “Describe the setting in vivid detail.”

Strong, clear verbs are what you want. “Use,” “allow,” “generate,” “maintain,” “detail,” “do.” That’s the kind of language the model will recognize as firm instruction.


  3. Over-optimization is another common problem.

You trim your prompt too much, or rewrite it for style, and suddenly it doesn’t work anymore. LLMs thrive on natural language, so plain phrasing usually beats something that looks cleverly engineered. Fluffy, casual phrasing is fine. Don’t try to write like a computer unless you know exactly what you’re doing.


  4. Another trap: saying the same thing multiple ways.

If you say “write about cake” three different times in one prompt, you’re not increasing your chances. You’re just using more tokens. And possibly confusing the model.

Repetition = noise. Unless you're deliberately layering rules for contrast (“use purple prose, but only when impactful”), try to say something once. Clean and clear.
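
For instance, compare these two made-up versions of the same idea:

❌ “Describe the setting. Always include setting details. Remember to talk about the setting.”

✅ “Open each reply with one short paragraph describing the setting.”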


  5. Conditionals also trip people up.

“If X, then Y” instructions don’t always land. Unless you're willing to test that logic repeatedly, avoid them. Often, you’re better off skipping the condition entirely and just stating the Y part directly.
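
A quick made-up example:

❌ “If {{user}} asks {{char}} a question, then {{char}} should answer it in detail.”

✅ “{{char}} answers {{user}}’s questions in detail.”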


  6. Some words have different meanings depending on the model.

For example, the word “emphasis” will do very different things in DeepSeek compared to JLLM. One might bold your words. One might ignore it completely. Know your model, and test it.


  7. Symbols also vary. Some models treat → or :: as triggers. Others don’t. Some get confused by dashes or custom brackets. If something doesn’t work, it might not be you, it might just be the symbol.


  8. Sometimes the model just doesn’t support your instruction.

You may want it to follow paragraph limits, or label something a certain way, or parse an unusual format. But the model doesn’t always do that. Instructions like “no more than three paragraphs” might get read as math, or ignored. If that happens, rewrite: “limit to 1 to 3 paragraphs.”


  9. And then there’s bad prompt advice.

If you've ever asked ChatGPT to “write a system prompt to make ChatGPT do X,” you’ve seen the bloat. It loves to write polite, wordy instructions full of hedging. Those don’t work. They’re the worst-case mix of token waste and weak rules.

You’re the one in charge. Don’t let the LLM write your prompts for you. But do let it analyze them.


That’s where something like Pec comes in. You can use Pec (or your own test bot) to read your prompt and help you figure out what’s not landing. Why is the model still talking about trees? Why is it skipping your persona rules? Why did it ignore the part about tone?


Asking these questions is part of learning the craft. The list above may feel long, but most of it comes down to one thing: say what you mean, and say it strongly. Let the model do what it's good at, following patterns. You just have to make sure the pattern you're giving it actually matches what you want.

Updated on: 31/07/2025
