
Prompt engineering for AI customer support

Last updated: · 7 min read

What a system prompt is

A system prompt is the set of instructions the LLM reads before every customer message. It tells the model what role to play, what tone to use, what to refuse, what to cite, and how to handle uncertainty.

AskVault's default system prompt is engineered for B2B customer support. Out of the box it produces grounded, source-cited, polite responses that refuse to invent facts. For 80% of customers, the defaults work without customization.

The remaining 20% customize for brand voice, industry vocabulary, or specific refusal patterns.

What you can customize

Three layers, configurable under Settings > AI Config > System Prompt in about 5 minutes:

  1. Tone. Formal, friendly, casual. Default: friendly-professional.
  2. Persona name. Default: matches your workspace name. Customize: "Sarah from Acme", "Acme AI Assistant", "Help Bot".
  3. Refusal patterns. What the bot says when it can't answer. Default: graceful, offers escalation.

Below this, AskVault's core instructions (cite sources, don't hallucinate, follow policy bounds) stay locked. They're load-bearing for accuracy; don't change them.
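Conceptually, the assembled prompt is the locked core with the three configurable layers stacked on top. A minimal sketch of that layering (the function, layer order, and instruction text are illustrative, not AskVault's actual internals):

```python
# Illustrative sketch only: BASE_INSTRUCTIONS paraphrases the locked core
# described above; the layer order and assembly mechanics are assumptions.

BASE_INSTRUCTIONS = (
    "Answer only from retrieved context. "
    "Cite sources for substantive claims. "
    "Respect policy bounds on skills."
)  # locked: not configurable

def assemble_prompt(tone_block: str, persona_name: str, refusal_block: str) -> str:
    """Stack the three configurable layers on top of the locked base."""
    return "\n\n".join([
        BASE_INSTRUCTIONS,           # locked core
        f"You are {persona_name}.",  # persona layer
        tone_block,                  # tone layer
        refusal_block,               # refusal layer
    ])

prompt = assemble_prompt(
    tone_block="Use natural English with contractions where appropriate.",
    persona_name="Sarah from Acme",
    refusal_block="If you cannot answer, offer to escalate to a human.",
)
```

The point of the layering: your customizations compose with the core instructions rather than replace them.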

Tone customization

Three preset tones plus custom.

Formal.

Use formal English. Avoid contractions. Address the customer as "you" but maintain professional distance. Cite sources for every claim.

Use for: legal services, financial services, healthcare, government.

Friendly-professional (default).

Use natural English with contractions where appropriate. Be warm but stay focused on the question. Cite sources for substantive claims.

Use for: B2B SaaS, e-commerce, most consumer-facing services.

Casual.

Use conversational English. Contractions, occasional slang, exclamation points are fine. Match the customer's energy. Cite sources but lightly.

Use for: gaming, lifestyle, younger-demographic consumer brands.

Each preset is a small block of instructions appended to the base prompt. You can also write a custom tone block from scratch under Settings > AI Config > Tone > Custom.

Persona customization

The persona is the bot's identifier. Two settings:

  • Name. What the bot calls itself. "Hi, I'm Sarah from Acme support."
  • Disclosure. Whether the bot says it's an AI. Default: explicit disclosure ("I'm an AI assistant").

Some brands prefer single-persona (no AI disclosure, the bot just acts like a regular team member). This is allowed but legally risky in some jurisdictions. EU's AI Act, California's BOT Act, and similar regulations may require disclosure for automated interactions.

For most B2B SaaS, explicit AI disclosure is the safer default. Customers actually appreciate it; they know what they're getting and adjust their expectations.

Strictness setting

This is the most impactful customization. Two modes under Settings > AI Config > Strictness:

  • Helpful (default). Bot combines retrieved knowledge with general knowledge to answer. If retrieval is weak, it reasons from training data to fill gaps.
  • Strict. Bot answers only from retrieved knowledge. If retrieval is weak, it refuses gracefully and offers escalation.

Strict is the right default for production B2B support. The trade is:

  • Helpful mode answers 15 to 25% more queries (the ones with weak retrieval) but introduces hallucination risk.
  • Strict mode refuses those 15 to 25% but the answers it does give are reliable.

Most customers prefer "no answer" over "wrong answer". Default to strict; loosen to helpful only if your knowledge base is genuinely sparse and you need broader coverage.
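The decision the two modes make can be sketched as a simple branch on retrieval quality. This is a conceptual model, assuming a retrieval relevance score in [0, 1]; the threshold value is illustrative, not a documented AskVault number:

```python
# Conceptual sketch of helpful vs. strict. The score range and the
# weakness threshold are assumptions for illustration.
WEAK_RETRIEVAL_THRESHOLD = 0.5  # assumed cutoff, not a documented value

def answer_policy(retrieval_score: float, mode: str = "strict") -> str:
    """Decide how the bot responds given retrieval quality and mode."""
    if retrieval_score >= WEAK_RETRIEVAL_THRESHOLD:
        # Both modes answer from retrieved context when retrieval is strong.
        return "answer_from_retrieved_context"
    if mode == "helpful":
        # Helpful mode fills the gap from general knowledge: broader
        # coverage, but this branch is where hallucination risk lives.
        return "answer_from_general_knowledge"
    # Strict mode refuses gracefully and offers escalation instead.
    return "refuse_and_offer_escalation"
```

The 15 to 25% of queries the modes disagree on are exactly the weak-retrieval branch.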

Refusal patterns

When the bot can't answer, what does it say? Configure under Settings > AI Config > Refusal Messages:

  • Out-of-scope. "That's outside what I can help with from our documentation. Want me to connect you with a human?"
  • Knowledge gap. "I don't have specific information about that in my knowledge base. Let me get a human to look at this."
  • Sensitive topic. "I'm not able to discuss that here. Let me connect you with the right team."
  • Policy refusal. "I can't help with that based on our policy. Here's how to reach a human agent."

Each pattern is customizable to match your brand voice. The default phrasing is neutral and offers escalation.

What not to change

Three parts of AskVault's system prompt stay locked:

  1. The "use only retrieved context" instruction. Loosening this re-introduces hallucination. The core RAG safety guarantee depends on it.
  2. The "cite sources" instruction. Customers expect citations; loosening it breaks trust.
  3. Policy bounds on skills. Skills like discount_negotiator have hard caps. Don't try to override them in the system prompt; they're enforced at the policy layer below the LLM.

For Enterprise customers who genuinely need to override these, contact sales@askvault.co. We can configure a custom prompt with explicit risk acknowledgment.

Per-channel prompt overrides

Different channels can have different prompts. Configure under Settings > AI Config > Per-Channel Prompts:

  • Email. Slightly more formal than chat default.
  • WhatsApp. Slightly more casual; emoji-friendly.
  • Voice. Shorter responses (speech-friendly), no markdown.
  • Slack. Casual; matches your internal Slack tone.

Per-channel overrides are layered on top of the workspace default. Override only what you need; inherit the rest.
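The layering works like a shallow merge: channel settings win where set, workspace defaults fill everything else. A sketch of that merge (the setting keys and values are assumptions for illustration):

```python
# Sketch of per-channel layering. Setting names are illustrative,
# not AskVault's actual config schema.
WORKSPACE_DEFAULT = {
    "tone": "friendly-professional",
    "persona_name": "Acme AI Assistant",
    "response_format": "markdown",
}

CHANNEL_OVERRIDES = {
    "whatsapp": {"tone": "casual"},
    "voice": {"response_format": "plain_short"},  # speech-friendly, no markdown
}

def effective_config(channel: str) -> dict:
    """Merge channel overrides on top of the workspace default."""
    return {**WORKSPACE_DEFAULT, **CHANNEL_OVERRIDES.get(channel, {})}
```

A channel with no override, like email here, simply inherits the workspace default unchanged.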

Testing prompt changes

Before pushing to production, test under Settings > AI Config > Test Mode:

  • Sample queries. AskVault runs 5 representative queries against the new prompt and returns the bot's responses so you can compare them against what you expected.
  • A/B preview. See the bot's response to an actual customer query with the old prompt and the new one, side by side.

Once you're satisfied, click Apply to Production. Changes affect new conversations immediately. Active conversations continue with the previous prompt to avoid mid-conversation tone shifts.

Limits

  • Custom prompt length. Up to 2,000 characters total across all sections.
  • Per-channel overrides. Up to 13 (one per channel).
  • Refusal message variants. Up to 10 patterns per workspace.
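A pre-publish sanity check against these limits might look like the following sketch. The limit values come from the docs above; the function itself is illustrative, not an AskVault API:

```python
# Documented limits; the validation function is a hypothetical sketch.
MAX_PROMPT_CHARS = 2000
MAX_CHANNEL_OVERRIDES = 13
MAX_REFUSAL_VARIANTS = 10

def validate_config(prompt_sections: list[str], overrides: dict,
                    refusals: list[str]) -> list[str]:
    """Return a list of limit violations (empty means the config fits)."""
    errors = []
    if sum(len(s) for s in prompt_sections) > MAX_PROMPT_CHARS:
        errors.append("custom prompt exceeds 2,000 characters")
    if len(overrides) > MAX_CHANNEL_OVERRIDES:
        errors.append("more than 13 per-channel overrides")
    if len(refusals) > MAX_REFUSAL_VARIANTS:
        errors.append("more than 10 refusal message variants")
    return errors
```

Note that the 2,000-character budget is shared across all sections, so a long custom tone block leaves less room for refusal messages.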

Common pitfalls

Prompt too aggressive. "Never refuse, always answer." Forces hallucination. Stick to strict mode for B2B support.

Persona inconsistency. Different channels have different personas with conflicting branding. Pick one persona and apply across all channels.

Refusal too apologetic. "I'm so sorry I cannot help with this" repeated five times in one conversation reads badly. Tone it down; one clean refusal is enough.

System prompt versioning. A prompt change can quietly degrade answers for a specific query type. AskVault keeps the last 10 prompt versions; revert under Settings > AI Config > Version History.

FAQ

Can I see what system prompt is in effect?

Yes. Settings > AI Config > Current Prompt shows the assembled prompt (base + your customizations) that the LLM receives.

Does the system prompt affect cost?

Marginally. Longer prompts use slightly more tokens per query. The default prompt adds about 200 to 400 tokens; a heavily customized one might add 800 to 1,200. Negligible cost impact.

Can I include examples (few-shot) in the prompt?

For Enterprise customers, yes. Under Settings > AI Config > Examples, add up to 5 question-answer examples that the LLM uses as patterns. Useful for very specific output formats.

How do I test edge cases?

Build a regression suite of representative customer questions. Run them against new prompt versions before publishing. AskVault supports automated regression testing on Enterprise.
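A minimal regression suite can be as simple as a list of queries with expected substrings. In this sketch, `ask(prompt_version, query)` is a hypothetical stand-in for whatever client call runs a query against a given prompt version; it is not an AskVault API:

```python
# Hypothetical regression-suite sketch. `ask` is a stand-in callable,
# and the suite entries are illustrative examples.
REGRESSION_SUITE = [
    ("How do I reset my password?", "reset"),
    ("What is your refund policy?", "refund"),
]

def run_regression(ask, prompt_version: str) -> list[str]:
    """Return queries whose answers no longer contain the expected text."""
    failures = []
    for query, expected_substring in REGRESSION_SUITE:
        answer = ask(prompt_version, query)
        if expected_substring not in answer.lower():
            failures.append(query)
    return failures
```

Run it before every Apply to Production; an empty failure list is your green light.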

What if my industry needs a non-default safety stance?

Healthcare, legal, and financial are the usual cases. Reach out to security@askvault.co; we can configure an industry-specific prompt template that includes the right disclaimers and refusal patterns.
