Set Up LLM Clients

Before you can create agents in C3 AI Studio, you need to set up an LLM client. This tutorial shows you how to configure clients for Azure OpenAI, Azure AI, AWS Bedrock, or Google Vertex AI using either JavaScript in Console or Python in Jupyter.

What is an LLM client?

An LLM client connects your C3 Agentic AI Platform agents to external language model services. The client handles three key components:

  • Authentication: Credentials that verify your access to the LLM service.
  • Model: The specific language model you want to use (for example, GPT-5, Claude, or Gemini).
  • Client: The wrapper that combines authentication and model configuration into a reusable interface.
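Conceptually, the three components nest inside each other: the client holds a model, and the model holds its authentication. The following is a minimal, illustrative sketch of that composition in plain Python; the names and fields here are stand-ins for the pattern, not the C3 type system.

```python
from dataclasses import dataclass, field

@dataclass
class Auth:
    """Credentials that verify access to the LLM service."""
    api_key: str
    endpoint: str
    api_version: str

@dataclass
class Model:
    """A specific model plus its default generation parameters."""
    name: str
    auth: Auth
    default_options: dict = field(default_factory=dict)

@dataclass
class Client:
    """Reusable wrapper combining auth and model configuration."""
    name: str
    model: Model

# Compose the three pieces, mirroring the setup steps below.
auth = Auth("example-key", "https://example.openai.azure.com/", "2024-02-01")
client = Client("test_client", Model("gpt-5-mini", auth, {"temperature": 1}))
```

Because the client wraps everything, the rest of the platform only needs the client's name to use the model.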

You complete three main steps to set up an LLM client:

  1. Configure LLM authentication.
  2. Create and configure the LLM client.
  3. Test the LLM client.

Choose your implementation approach

Select the approach that matches your workflow: JavaScript in Console or Python in Jupyter. The sections below walk through the same setup in each.

Configure LLM authentication (JavaScript)

Follow these steps to configure authentication for your chosen provider using JavaScript in Console.

Run these commands in Console to set up Azure OpenAI authentication:

JavaScript
// Create authentication object
var openaiAuth = C3.GenaiCore.Llm.AzureOpenAi.Auth.make({
  name: 'test_azure',
  apiKey: '<your-api-key>',
  azureEndpoint: '<your-endpoint>',
  apiVersion: '2024-02-01',
});

// Save configuration
openaiAuth.setConfig();
openaiAuth.setSecret();

Replace the placeholder values with your Azure OpenAI resource details:

  • apiKey: Your Azure OpenAI API key from the Azure portal.
  • azureEndpoint: Your Azure OpenAI resource endpoint URL. For example, https://your-resource-name.openai.azure.com/
  • apiVersion: The API version you want to use. For example, 2024-02-01
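These three values determine the REST URL that requests are ultimately sent to. As an aside, the sketch below shows how an Azure OpenAI chat-completions URL is assembled from them; the path follows Azure's public REST API (requests are routed per deployment, with the API version as a query parameter), not C3 internals.

```python
def chat_completions_url(endpoint: str, deployment: str, api_version: str) -> str:
    # Azure OpenAI routes requests per model deployment; the API
    # version is passed as a query parameter.
    return (f"{endpoint.rstrip('/')}/openai/deployments/"
            f"{deployment}/chat/completions?api-version={api_version}")

print(chat_completions_url(
    "https://your-resource-name.openai.azure.com/",
    "gpt-5-mini",
    "2024-02-01",
))
```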

Create and configure the LLM client (JavaScript)

After you configure authentication, create the model and client objects in Console.

Run these commands to set up an Azure OpenAI client:

JavaScript
// Create model configuration
var openaiModel = C3.GenaiCore.Llm.AzureOpenAi.Model.make({
  model: 'gpt-5-mini',
  auth: openaiAuth,
  defaultOptions: { temperature: 1, max_tokens: 100 },
});

// Create and save client
var openaiClient = C3.GenaiCore.Llm.Completion.Client.make({
  name: 'test_client',
  model: openaiModel,
});
openaiClient.setConfig();

The configuration includes:

  • Model: Specifies which model to use (for example, gpt-5-mini) and sets default parameters like temperature and max tokens. For a list of available models and deployment instructions, see the Azure OpenAI Service models documentation.
  • Client: Creates a completion client that wraps the model and provides a standardized interface.

Verify client in Agent Workbench (JavaScript)

After you configure your LLM client and assign it a name, it becomes available for use in the Agent Workbench.

To verify this:

  1. Navigate to Agents > Gallery in C3 AI Studio.
  2. Open an existing agent or create a new one. For instructions on creating agents, see Create Agents.
  3. In the Agent Workbench, locate the Model section in the configuration panel on the left.
  4. Select the LLM Client dropdown.

Your configured client appears in the dropdown list and is now available for selection. After you select the client, your agent uses it to generate responses.

For more information on configuring agents with LLM clients, see Configure Agents.

Configure LLM authentication (Python)

Follow these steps to configure authentication for your chosen provider using Python in Jupyter.

Run these commands in your notebook to set up Azure OpenAI authentication:

Python
# Create authentication object
openai_auth = c3.GenaiCore.Llm.AzureOpenAi.Auth(
    name="test_azure",
    apiKey="<your-api-key>",
    azureEndpoint="<your-endpoint>",
    apiVersion="2024-02-01"
)

# Save configuration
openai_auth.setConfig()
openai_auth.setSecret()
print("✅ Auth saved")

Replace the placeholder values with your Azure OpenAI resource details:

  • apiKey: Your Azure OpenAI API key from the Azure portal.
  • azureEndpoint: Your Azure OpenAI resource endpoint URL.
  • apiVersion: The API version you want to use.

Create and configure the LLM client (Python)

After you configure authentication, create the model and client objects in the same notebook.

Run these commands to set up an Azure OpenAI client:

Python
# Create model configuration
openai_model = c3.GenaiCore.Llm.AzureOpenAi.Model(
    model="gpt-5-mini",
    auth=openai_auth,
    defaultOptions={"temperature": 1, "max_tokens": 100}
)
print("✅ Model configured")

# Create and save client
openai_client = c3.GenaiCore.Llm.Completion.Client(
    name="test_client",
    model=openai_model
)
openai_client.setConfig()
print("✅ Client ready")

The configuration includes:

  • Model: Specifies which model to use (for example, gpt-5-mini) and sets default parameters like temperature and max tokens. For a list of available models and deployment instructions, see the Azure OpenAI Service models documentation.
  • Client: Creates a completion client that wraps the model and provides a standardized interface.

Test the LLM client (Python)

After you configure your LLM client, test it to verify the setup works correctly. Run the appropriate test code for your provider in the same notebook where you configured the client.

Python
# Test the client
messages = [{"role": "user", "content": "Say 'OK'"}]
response = openai_client.completion(messages=messages, options={"returnJson": True})
print("Response:", response.choices[0].message.content)

A successful test returns a response from the model. For example: OK
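The test accesses the reply via response.choices[0].message.content, which mirrors the familiar OpenAI chat-completions payload shape. A stdlib-only sketch of that shape, using plain dicts on a hard-coded example payload (illustrative only; the C3 client returns an object with attribute access rather than dict keys):

```python
import json

# Example payload in the OpenAI chat-completions shape.
raw = json.dumps({
    "choices": [
        {"message": {"role": "assistant", "content": "OK"}}
    ]
})

response = json.loads(raw)
print("Response:", response["choices"][0]["message"]["content"])
# prints: Response: OK
```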

If you encounter errors, verify:

  • Your authentication details are correct.
  • You have network access to the LLM provider's API.
  • Your credentials have the necessary permissions.
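A quick check you can run independently of C3 is to confirm the endpoint URL is well formed before retrying. The helper below is an illustrative sketch using only the standard library; the optional DNS lookup at the end needs network access, so it is left commented out.

```python
from urllib.parse import urlparse

def check_endpoint(endpoint: str) -> str:
    """Return the hostname if the endpoint URL is well formed."""
    parsed = urlparse(endpoint)
    if parsed.scheme != "https" or not parsed.hostname:
        raise ValueError(f"Malformed endpoint: {endpoint!r}")
    return parsed.hostname

host = check_endpoint("https://your-resource-name.openai.azure.com/")
print(host)

# Optional live check that the host resolves (requires network access):
# import socket; socket.gethostbyname(host)
```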

Verify client in Agent Workbench (Python)

To verify your client is available in Agent Workbench, follow the steps in Verify client in Agent Workbench (JavaScript).

For a list of all the pre-configured LLMs and embedders available, see Default LLMs and Embedders.
