Set Up LLM Clients
Before you can create agents in C3 AI Studio, you must set up an LLM client. This tutorial shows you how to configure clients for Azure OpenAI, Azure AI, AWS Bedrock, or Google Vertex AI using either JavaScript in Console or Python in Jupyter.
What is an LLM client?
An LLM client connects your C3 Agentic AI Platform agents to external language model services. The client handles three key components:
- Authentication: Credentials that verify your access to the LLM service.
- Model: The specific language model you want to use (for example, GPT-5, Claude, or Gemini).
- Client: The wrapper that combines authentication and model configuration into a reusable interface.
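Conceptually, the three components nest: the client wraps a model, and the model carries the authentication. A minimal Python sketch of that relationship (the class names here are illustrative stand-ins, not the real C3 types):

```python
from dataclasses import dataclass, field

# Hypothetical stand-ins for the C3 types, to show how the pieces
# compose; they are not the platform API.

@dataclass
class Auth:
    name: str
    api_key: str      # credential that proves your access to the LLM service
    endpoint: str     # provider-specific endpoint URL

@dataclass
class Model:
    model: str        # the specific language model, e.g. "gpt-5-mini"
    auth: Auth        # the model reuses the auth object
    default_options: dict = field(default_factory=dict)

@dataclass
class Client:
    name: str         # the name later shown in the Agent Workbench dropdown
    model: Model      # the client wraps the model (and, through it, the auth)

auth = Auth(name="test_azure", api_key="<key>", endpoint="<endpoint>")
model = Model(model="gpt-5-mini", auth=auth, default_options={"temperature": 1})
client = Client(name="test_client", model=model)

# The client reaches the credentials through the model it wraps.
print(client.model.auth.name)  # test_azure
```

Because the client is the outermost wrapper, an agent only ever needs a reference to the client; the model and credentials come along with it.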
You complete three main steps to set up an LLM client:
- Configure LLM authentication.
- Create and configure the LLM client.
- Test the LLM client.
Choose your implementation approach
Select the approach that matches your workflow:
- JavaScript in Console - Run commands directly in the C3 AI Console.
- Python in Jupyter - Use Jupyter notebooks for Python-based configuration.
Configure LLM authentication (JavaScript)
Follow these steps to configure authentication for your chosen provider using JavaScript in Console.
Run these commands in Console to set up Azure OpenAI authentication:
// Create authentication object
var openaiAuth = C3.GenaiCore.Llm.AzureOpenAi.Auth.make({
name: 'test_azure',
apiKey: '<your-api-key>',
azureEndpoint: '<your-endpoint>',
apiVersion: '2024-02-01',
});
// Save configuration
openaiAuth.setConfig();
openaiAuth.setSecret();

Replace the placeholder values with your Azure OpenAI resource details:
- apiKey: Your Azure OpenAI API key from the Azure portal.
- azureEndpoint: Your Azure OpenAI resource endpoint URL. For example, https://your-resource-name.openai.azure.com/
- apiVersion: The API version you want to use. For example, 2024-02-01
Create and configure the LLM client (JavaScript)
After you configure authentication, create the model and client objects in Console.
The model parameter must match an officially supported model name from your chosen provider.
Run these commands to set up an Azure OpenAI client:
// Create model configuration
var openaiModel = C3.GenaiCore.Llm.AzureOpenAi.Model.make({
model: 'gpt-5-mini',
auth: openaiAuth,
defaultOptions: { temperature: 1, max_tokens: 100 },
});
// Create and save client
var openaiClient = C3.GenaiCore.Llm.Completion.Client.make({
name: 'test_client',
model: openaiModel,
});
openaiClient.setConfig();

The configuration includes:
- Model: Specifies which model to use (for example, gpt-5-mini) and sets default parameters like temperature and max tokens. For a list of available models and deployment instructions, see the Azure OpenAI Service models documentation.
- Client: Creates a completion client that wraps the model and provides a standardized interface.
For Azure, the model name must match the deployment name in Azure AI Foundry.
Set a name parameter when creating the client to make it visible in the Agent Workbench UI. Without a name, the client will not appear in the LLM Client dropdown.
Verify client in Agent Workbench (JavaScript)
After you configure your LLM client and assign it a name, it becomes available for use in the Agent Workbench.
To verify this:
- Navigate to Agents > Gallery in C3 AI Studio.
- Open an existing agent or create a new one. For instructions on creating agents, see Create Agents.
- In the Agent Workbench, locate the Model section in the configuration panel on the left.
- Select the LLM Client dropdown.
Your configured client appears in the dropdown list and is now available for selection. After you select the client, your agent uses it to generate responses.
If you had the Agent Workbench open before creating the LLM client, refresh the page for the client to appear in the LLM Client dropdown.
For more information on configuring agents with LLM clients, see Configure Agents.
Configure LLM authentication (Python)
Follow these steps to configure authentication for your chosen provider using Python in Jupyter.
Run these commands in your notebook to set up Azure OpenAI authentication:
# Create authentication object
openai_auth = c3.GenaiCore.Llm.AzureOpenAi.Auth(
name="test_azure",
apiKey="<your-api-key>",
azureEndpoint="<your-endpoint>",
apiVersion="2024-02-01"
)
# Save configuration
openai_auth.setConfig()
openai_auth.setSecret()
print("✅ Auth saved")

Replace the placeholder values with your Azure OpenAI resource details:
- apiKey: Your Azure OpenAI API key from the Azure portal.
- azureEndpoint: Your Azure OpenAI resource endpoint URL.
- apiVersion: The API version you want to use.
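If authentication later fails, a common cause is a malformed azureEndpoint. As a quick sanity check before saving the configuration, you can verify the URL shape in the notebook (an illustrative heuristic, not part of the C3 API):

```python
from urllib.parse import urlparse

def looks_like_azure_openai_endpoint(url: str) -> bool:
    """Rough shape check for an Azure OpenAI endpoint such as
    https://your-resource-name.openai.azure.com/ (illustrative only)."""
    parsed = urlparse(url)
    return parsed.scheme == "https" and parsed.netloc.endswith(".openai.azure.com")

print(looks_like_azure_openai_endpoint("https://my-resource.openai.azure.com/"))  # True
print(looks_like_azure_openai_endpoint("my-resource.openai.azure.com"))           # False (missing scheme)
```

A check like this catches the two most common endpoint mistakes: omitting the https:// scheme and pasting a non-Azure URL.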
Create and configure the LLM client (Python)
After you configure authentication, create the model and client objects in the same notebook.
The model parameter must match an officially supported model name from your chosen provider.
Run these commands to set up an Azure OpenAI client:
# Create model configuration
openai_model = c3.GenaiCore.Llm.AzureOpenAi.Model(
model="gpt-5-mini",
auth=openai_auth,
defaultOptions={"temperature": 1, "max_tokens": 100}
)
print("✅ Model configured")
# Create and save client
openai_client = c3.GenaiCore.Llm.Completion.Client(
name="test_client",
model=openai_model
)
openai_client.setConfig()
print("✅ Client ready")

The configuration includes:
- Model: Specifies which model to use (for example, gpt-5-mini) and sets default parameters like temperature and max tokens. For a list of available models and deployment instructions, see the Azure OpenAI Service models documentation.
- Client: Creates a completion client that wraps the model and provides a standardized interface.
For Azure, the model name must match the deployment name in Azure AI Foundry.
Set a name parameter when creating the client to make it visible in the Agent Workbench UI. Without a name, the client will not appear in the LLM Client dropdown.
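The defaultOptions set on the model apply to every call, while options passed at call time typically take precedence. A minimal sketch of that merge behavior (the precedence order is an assumption for illustration, not verified against the C3 implementation):

```python
def merge_options(default_options: dict, call_options: dict) -> dict:
    """Per-call options override the model's defaults; unset keys fall back
    to the defaults. Illustrative helper, not a C3 API."""
    return {**default_options, **call_options}

defaults = {"temperature": 1, "max_tokens": 100}

# A per-call override of temperature keeps the default max_tokens.
merged = merge_options(defaults, {"temperature": 0.2})
print(merged)  # {'temperature': 0.2, 'max_tokens': 100}
```

Setting sensible defaults on the model keeps individual completion calls short, since most calls only override one or two parameters.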
Test the LLM client (Python)
After you configure your LLM client, test it to verify the setup works correctly. Run the appropriate test code for your provider in the same notebook where you configured the client.
# Test the client
messages = [{"role": "user", "content": "Say 'OK'"}]
response = openai_client.completion(messages=messages, options={'returnJson': True})
print("Response:", response.choices[0].message.content)

A successful test returns a response from the model. For example: OK
If you encounter errors, verify:
- Your authentication details are correct.
- You have network access to the LLM provider's API.
- Your credentials have the necessary permissions.
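The test reads response.choices[0].message.content, which is the OpenAI-style chat-completions shape. If you need to inspect a raw payload while debugging, the same access path can be walked on a plain dict (the sample payload below is illustrative; real responses carry additional fields):

```python
# Illustrative OpenAI-style chat-completions payload.
sample_response = {
    "choices": [
        {"message": {"role": "assistant", "content": "OK"}, "finish_reason": "stop"}
    ]
}

# Same access path as the notebook test, applied to a plain dict.
content = sample_response["choices"][0]["message"]["content"]
print("Response:", content)  # Response: OK
```

If this path raises a KeyError or IndexError on a real payload, the provider returned an error object rather than a completion, which usually points back to the authentication or permission checks above.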
Verify client in Agent Workbench (Python)
To verify your client is available in Agent Workbench, follow the steps in Verify client in Agent Workbench (JavaScript).
For a list of all the pre-configured LLMs and embedders available, see Default LLMs and Embedders.