
Dynamic Agent System Prompts

System prompts are predefined inputs given to a generative AI agent before a user query is processed. These prompts shape the agent’s behavior and determine the sequence of steps it takes to get to the answer using available tools. You should provide rules and examples in the system prompt to guide the agent to reason, execute logic, and respond effectively.

The agent will follow the rules stated in the prompt, which bind it to specific actions.

To structure your system prompt effectively, strive for the minimal set of information that fully outlines your expected behavior. It is best to start by testing a minimal prompt with the best model available to see how it performs on your task. Add clear instructions and examples to improve reliability based on failure modes found during initial testing.

How to write good system prompts

Prompt engineering is the process of writing and re-writing a system prompt to configure an LLM to return results that are accurate and reliable for your use case. When configuring your agent in the workbench, follow these guidelines for system prompts:

  • Use clear sections in the prompt to specify instructions.
  • Write direct, unambiguous instructions.
  • Ensure the prompt contains no conflicting or ambiguous instructions.

Clear sections in prompt

We use standardized tags (<thought>, <execute>, and <solution>) to guide the agent on how to answer questions. Every thought and action you want the agent to perform should be specified in this prompt. Organize your prompts into distinct sections (like background information, instructions, tool guidance, and output description) and use techniques like tags or Markdown headers to delineate these sections.

Here is a partial example of the default system prompt:

Python
"""
# C3 AI Agent

You are an AI agent built by C3 AI, serving as the interface between users and their enterprise data and applications that can be accessed through tools.

## Tag Structure

Wrap all responses in these tags:

- `<thought> ... </thought>` - for internal reasoning, planning, or decision-making
- `<execute> ... </execute>` - for executing Python code
- `<solution> ... </solution>` - for delivering the final answer or asking the user for clarification

...

## Python Libraries Available

You are allowed to use these libraries to address the user's query:

- datetime
- dateutil
- time
- numpy
- pandas
- matplotlib

## Toolkit

{{toolkit}}

## Data Model Documentation

{{documentation}}

## Additional Instructions

{{instructions}}

## More Examples

{{FEWSHOT_EXAMPLES}}
"""

In this system prompt, we outline the rules and explain the tag structure. Parameters are placeholders embedded in double curly braces {{...}} that get dynamically filled in at runtime. They enable a static system prompt to become dynamically tailored for:

  • The current user {{current_user}}
  • Specific instructions in {{instructions}}
  • The available data schema {{documentation}}
  • The tools and functions supported in a specific environment {{toolkit}}
  • Few-shot examples in {{FEWSHOT_EXAMPLES}}

Of these, the {{documentation}} and {{FEWSHOT_EXAMPLES}} parameters are required for the system prompt.
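The substitution step itself is straightforward. Here is a minimal sketch of filling {{...}} placeholders at runtime; the helper name and the example values are illustrative, not the actual C3 AI runtime implementation:

```python
# Hypothetical sketch of runtime parameter substitution for a system prompt.
# Each {{name}} placeholder is replaced with its value before the prompt is
# sent to the model.

def render_system_prompt(template: str, params: dict) -> str:
    """Replace every {{name}} placeholder with its runtime value."""
    for name, value in params.items():
        template = template.replace("{{" + name + "}}", value)
    return template

template = (
    "## Toolkit\n{{toolkit}}\n\n"
    "## Data Model Documentation\n{{documentation}}"
)
prompt = render_system_prompt(template, {
    "toolkit": "- get_sales(region: str) -> list  # example tool",
    "documentation": "Table: Sales(region, amount, date)",
})
print(prompt)
```

Because the template is static, the same prompt text can be reused across environments, with only the parameter values changing per deployment.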

How to add clarity to the instruction set

System prompts should be extremely clear and use simple, direct language that presents ideas in the right context for the agent. The tags instruct the agent on the expected response format. Code execution is defined in the <execute> block. In the following prompt, the agent must use only defined libraries, write clean code, and split its tasks into single logical actions.

Python
"""
# C3 AI Agent

## Communication Rules

- Complete tasks efficiently with an engaging, thoughtful, and collaborative tone.  
- Always respond in the same language the user uses.  
- Minimize user-facing commentary when making internal adjustments, such as selecting alternate tools or retrying queries.

### Flow

Follow this multi-turn flow until you can give a final answer:

    <thought>...</thought> (plan) -> <execute>...</execute> (code) -> <thought>...</thought> (next step) -> <execute>...</execute> (code) -> ... (more steps) -> <solution>...</solution> (final answer)

### Content Guidelines

<execute>

- The content must be a bug-free Python code snippet.
- Do not wrap the code in triple backticks (```).
- Each <execute> block should perform one logical action only.
- Use only libraries and variables that have been explicitly imported or defined.
- Do not redefine or duplicate existing tools or variables.
- You may reuse previously defined variables or imported modules without redefining or reimporting them.
- Each block must include at least one output, using print() or display() at the end.
- Use print(variable) to read and preserve information needed for subsequent steps.
- Use display(fig) to save and show a figure.
- Do not use .head() - always print full results.
- Retry failed steps up to 3 times, applying corrections each time.
- If all retries fail, proceed to a <solution> block with a clear explanation of what went wrong and suggestions for how the user might proceed.

</execute>

"""

In practice, inspecting error logs while testing your agent helps you write better content guidelines, because each failure mode points to an instruction worth adding or sharpening.
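The thought/execute/solution flow above also makes agent transcripts easy to inspect mechanically. The following sketch extracts the tagged blocks from a response so each step can be logged or checked against the content guidelines; the parser and the sample response are illustrative, not part of the C3 AI platform:

```python
import re

# Hypothetical parser for responses that follow the tag structure defined in
# the system prompt: <thought>, <execute>, and <solution> blocks in order.
TAG_RE = re.compile(r"<(thought|execute|solution)>(.*?)</\1>", re.DOTALL)

def parse_turns(response: str):
    """Return (tag, content) pairs in the order they appear in the response."""
    return [(m.group(1), m.group(2).strip()) for m in TAG_RE.finditer(response)]

response = (
    "<thought>Load the sales data first.</thought>"
    "<execute>df = load_sales()\nprint(len(df))</execute>"
    "<solution>There are 42 sales records.</solution>"
)
for tag, content in parse_turns(response):
    print(tag, "->", content.splitlines()[0])
```

A parser like this is useful when reviewing error logs: a transcript that ends without a <solution> block, or whose <execute> blocks violate the guidelines (e.g. no final print()), points directly at an instruction to tighten.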

How to reduce ambiguity in your prompt

You should reduce ambiguity for the model wherever possible. One important case is ambiguity about which tool to use. Specifying a minimal viable set of tools also makes it easier to maintain and prune context over long interactions.

Python
"""
## Communication Rules

- Prefer forward-looking, neutral transitions – such as 'I'll try a different approach' – over language that emphasizes failure.  
- If something is unclear, request clarification from the user using clear and specific questions.  
- Do not use your own knowledge. Rely exclusively on the tools and libraries provided to you.  

## Toolkit

- You have access to tools that behave like Python functions.
- Only call a tool when it is necessary to complete the current step.
- Do not repeat a tool call with the exact same parameters as a previous call.
"""

In this system prompt, you force the agent to request clarification from the user and to not use its own knowledge. To increase predictable behavior, you also have the agent treat its tools as Python functions. Finally, you explain that the agent should not call unnecessary tools and should not repeat a tool call with identical parameters.
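The "do not repeat a tool call with the exact same parameters" rule can also be enforced outside the prompt. Here is a minimal sketch of a duplicate-call guard; the wrapper is hypothetical and not part of the C3 AI toolkit:

```python
# Illustrative guard that tracks (tool name, parameters) pairs and rejects an
# exact repeat of a previous call, mirroring the Toolkit rule above.

def make_tool_guard():
    seen = set()

    def allow(tool_name: str, **params) -> bool:
        """Return True if this exact call has not been made before."""
        key = (tool_name, tuple(sorted(params.items())))
        if key in seen:
            return False
        seen.add(key)
        return True

    return allow

allow = make_tool_guard()
print(allow("get_sales", region="EMEA"))   # True: first call is permitted
print(allow("get_sales", region="EMEA"))   # False: exact repeat is blocked
print(allow("get_sales", region="APAC"))   # True: different parameters are fine
```

Combining a prompt rule with a runtime check like this gives defense in depth: the instruction steers the model, and the guard catches the cases where steering fails.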

Changing the System Prompt through Code

To change the system prompt programmatically, you need to update the chatManagerSpec on the agent's configuration. For detailed steps on how to do this, refer to System Prompts.
