
Helpers don't help

Learn AI prompting best practices and make sure your Sintra Helpers are performing according to your needs!

Written by Ally

When Your Helper Isn't Helping

Sintra Helpers are powered by large language models (LLMs), AI systems that generate responses based on the instructions you give them. Unlike a search engine, they don't look things up; they reason through your request using the context you provide. This means the quality of your output depends almost entirely on the quality of your input.

If your Helper is giving vague, off-topic, or disappointing results, the fix is almost always in how you're asking.


The most common mistake: prompting without context

Helpers don't have access to your business, your goals, your tone preferences, or what you've already tried unless you tell them or the information is available through Brain AI knowledge. A prompt like "write me an email" gives the Helper almost nothing to work with, so it fills in the blanks with generic assumptions.

The more relevant context you include upfront, the less the Helper has to guess.


What a strong prompt includes

Think of a good prompt as answering three questions:
What do I want? Why do I want it? What does a good result look like?

In practice, that means including:

  1. A clear task — State exactly what you want done, not just a topic. Instead of "write me an email," try "write a follow-up email to a customer who didn't respond to our last message."

  2. Relevant context — Who is this for? What's the situation? What has already happened? For example: "The customer signed up 3 days ago and hasn't completed onboarding. Keep the tone friendly, not pushy."

  3. A format or output constraint — If you need bullet points, a specific length, a particular tone, or a structured format, say so explicitly. Example: "Keep it under 100 words. End with a single clear call to action."

  4. Examples, when possible — Showing the Helper a sample of what you want (even a rough one) dramatically improves accuracy. This technique, called few-shot prompting, works because LLMs learn from patterns, and a concrete example is far more precise than a description.
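The four elements above can be sketched as a simple prompt builder. This is a hypothetical illustration of the structure, not part of Sintra's product or API; the function name and fields are assumptions.

```python
def build_prompt(task, context, constraints, example=None):
    """Assemble a prompt from the four elements: a clear task, relevant
    context, format constraints, and an optional few-shot example."""
    parts = [
        f"Task: {task}",
        f"Context: {context}",
        f"Constraints: {constraints}",
    ]
    if example:
        # A concrete sample is far more precise than a description.
        parts.append(f"Example of what I want:\n{example}")
    return "\n\n".join(parts)

prompt = build_prompt(
    task="Write a follow-up email to a customer who didn't respond "
         "to our last message.",
    context="The customer signed up 3 days ago and hasn't completed "
            "onboarding. Keep the tone friendly, not pushy.",
    constraints="Keep it under 100 words. End with a single clear "
                "call to action.",
    example="Hi Sam, just checking in to see how setup is going...",
)
```

Filling in all four fields forces you to answer "what do I want, why, and what does good look like" before the Helper ever sees the request.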


Techniques that actually work

Ask it to think step by step. For anything that involves reasoning, analysis, or multi-part tasks, add "think through this step by step" or "reason through this before answering." This isn't a magic phrase; it works because it encourages the model to process the problem sequentially rather than jumping to the first plausible answer.

Assign a role. Tell the Helper who it should be in this context: "You are an experienced customer support specialist writing for a SaaS audience." Role framing shifts the Helper's tone, vocabulary, and reasoning style to match your needs.

Break complex tasks into steps. If you need something with multiple parts (research, drafting, and formatting, for example), don't ask for everything in one go. Give each stage its own prompt. The Helper isn't worse at complex tasks; it just performs better when the goal is focused.

Use negative instructions. Tell the Helper what not to do: "Don't use jargon," "avoid bullet points," "don't mention pricing." Constraints like these prevent the most common failure modes before they happen.

Iterate, don't restart. If the first result isn't right, don't scrap the conversation. Tell the Helper what specifically missed the mark: "This is too formal, make it more conversational," or "Good structure, but the opening is too long, cut it to one sentence." Targeted feedback produces faster improvements than starting over.
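Several of these techniques can be combined in one conversation. A minimal sketch, assuming a generic role/content chat-message structure (a common convention, not a Sintra-specific API):

```python
# Role framing, step-by-step reasoning, and negative instructions,
# expressed as a generic chat-message list (the dict shape here is
# an assumed convention, not Sintra's actual API).
messages = [
    {"role": "system",
     "content": "You are an experienced customer support specialist "
                "writing for a SaaS audience."},  # assign a role
    {"role": "user",
     "content": "Draft an onboarding reminder email. "
                "Think through this step by step "     # step-by-step
                "before answering. Don't use jargon "  # negative
                "and don't mention pricing."},         # instructions
]

# Iterate, don't restart: append targeted feedback to the same
# conversation instead of scrapping it and starting over.
messages.append({"role": "assistant", "content": "<first draft>"})
messages.append({"role": "user",
                 "content": "This is too formal. Make it more "
                            "conversational and cut the opening "
                            "to one sentence."})
```

The feedback message names what specifically missed the mark, which converges faster than re-describing the whole task from scratch.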


What the Helper doesn't respond to

A few things worth knowing about how LLMs actually work:

  • Politeness doesn't affect output. Phrases like "please," "thank you," or "if you don't mind" do not affect response quality. Save those for humans.

  • Vague encouragement doesn't help. Telling the Helper it's "doing great" or that it's "very intelligent" won't improve results. Specific instructions will.

  • Greetings break task focus. Opening with "Hello, I'm back" or other casual openers can cause the Helper to respond socially instead of focusing on the task. Start with your actual request.


A quick before/after

Weak prompt:

Write a product description.

Strong prompt:

Write a product description for Sintra's AI assistant for e-commerce store owners. The tone should be confident but approachable, not overly technical. Highlight that it saves time on repetitive tasks, works 24/7, and requires no setup. Keep it under 80 words and end with a one-line value statement.

The second prompt gives the Helper a task, an audience, a tone, three specific selling points, a length constraint, and a structural requirement. There's no guessing involved.


Still not getting what you need?

Ask the Helper directly: "What information do you need from me to do this well?" It will tell you what's missing. This works especially well for complex tasks where you're not sure where to start.

If a Helper is consistently producing poor results on a specific type of task, contact our support team here in the live chat or at [email protected]. It may be a configuration issue we can help resolve.
