Configure and use Large Language Models
Large language models (LLMs) are the core of generative AI applications. The C3 Generative AI Application enables you to configure, manage, and validate LLM integration in the application.
Supported LLMs
The application supports a set of third-party and proprietary LLMs that are pre-integrated and available by default. As of the 6.1 release, the following models are supported:
- AwsBedrock(Claude): Claude v2, Claude v3 Haiku
- Azure(Gpt): Gpt 3.5 Turbo, Gpt 4
- Google(Gemini): Gemini (gemini-pro)
- MIS: Narwhal
To list available models, run the following command in the Application C3 AI Console:
Genai.UnstructuredQuery.Engine.ModelConfig.listConfigKeys().collect();

You can configure large language models (LLMs) using the application interface or the Application C3 AI Console. Choose one method based on your workflow.
Use the application interface
Enable LLM and Credentials pages
By default, the LLM and Credentials pages are hidden in the application interface. To make them visible, follow these steps:
- From the home page, go to Settings in the left navigation bar.
- In the Configurations list, locate the GenAiUiConfig configuration.
- Select Edit.
- Change the llmVisibility setting from hidden to full.
- Refresh the page.

After you refresh the page, the LLM and Credentials tabs appear under Settings.
Configure credentials
To configure provider credentials before adding a large language model (LLM):
- Go to Settings > Credentials.
- Select the Add icon and choose a Provider from the dropdown.
- Fill in the required fields (see the table below).
- Select Add to submit the form.
The application validates that the configuration is correctly formatted and that the credential name is unique.
To update an existing credential, select the credential in the list, modify its fields, and save the changes. The application validates the changes before saving them.
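The validation described above can be sketched locally. The following is an illustrative JavaScript sketch, not the application's actual implementation: it checks that all required fields for a provider are present and that the credential name is unique, mirroring the required-fields table in this section.

```javascript
// Illustrative sketch of the checks performed when a credential is
// added or updated. Field lists mirror the required-fields table.
const REQUIRED_FIELDS = {
  'AwsBedrock/Claude': ['name', 'accessKeyId', 'secretAccessKey', 'endpoint', 'region'],
  'Gemini': ['name', 'serviceAccountInfo', 'project', 'location'],
  'Azure/Gpt': ['name', 'apiVersion', 'apiKey', 'apiBase', 'apiType'],
};

function validateCredential(provider, credential, existingNames) {
  const required = REQUIRED_FIELDS[provider];
  if (!required) {
    return { ok: false, error: `Unknown provider: ${provider}` };
  }
  // Every required field must be present and non-empty.
  const missing = required.filter((field) => !credential[field]);
  if (missing.length > 0) {
    return { ok: false, error: `Missing fields: ${missing.join(', ')}` };
  }
  // Credential names must be unique within the environment.
  if (existingNames.includes(credential.name)) {
    return { ok: false, error: `Credential name '${credential.name}' already exists` };
  }
  return { ok: true };
}
```

A submission with a missing field or a duplicate name is rejected before anything is saved, which is the behavior the application interface enforces.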
Required fields by provider
| Provider | Required fields |
|---|---|
| AwsBedrock/Claude | Name, accessKeyId, secretAccessKey, endpoint, region |
| Gemini | Name, serviceAccountInfo, project, location |
| Azure/Gpt | Name, apiVersion, apiKey, apiBase, apiType |
Add and manage LLMs
To register or update a large language model (LLM) after adding a credential:
- Go to Settings > LLM.
- Select Add, enter a unique name, and choose the Credential for the LLM.
- Fill in the model configuration parameters under Default Parameters.
- Select Add to save the LLM.
The system validates that the configuration is correctly formatted and that the model name is unique. You can later select an existing LLM to update its parameters.
Default parameters
| Parameter | Description |
|---|---|
| model_name | Identifier of the model to use. This is required and varies by provider. |
| temperature | Value from 0 to 1. Controls randomness. Lower values (for example, 0.2) make responses more focused. Higher values (for example, 0.8) increase creativity and variability. |
| top_p | Value from 0 to 1. Controls the probability mass of tokens to consider. Lower values narrow the output. Higher values allow more diverse responses. |
| top_k | Positive integer. Limits the number of top tokens the model can choose from. Lower values produce more deterministic responses. Higher values increase variability in responses. |
| context_length | The maximum amount of input text the model can consider at once. If your input is too long, it may get cut off. |
| max_output_tokens | Maximum number of tokens to generate in the output. |
| stop_sequences | Specifies one or more strings that stop generation. (Use these stop sequences for the Dynamic Agent: </plan>, </thought>, </execute>, </solution>) |
| stream_response | Boolean. If true, streams the output tokens as they are generated. |
| truncate_prompt_totally | Boolean. If true, truncates the entire prompt to fit within the model’s input limit. |
| api_version | (Only for Gpt) The API version of the OpenAI deployment. |
| api_type | (Only for Gpt) Specifies the deployment type (for example, azure). |
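To make the sampling parameters in the table concrete, the following JavaScript sketch (illustrative only, not how any provider actually implements sampling) shows how top_k and top_p each narrow the set of candidate tokens at a generation step:

```javascript
// Illustrative sketch: given next-token probabilities, apply top_k and
// top_p filtering to get the candidate set the model may sample from.
function filterCandidates(probs, { topK = Infinity, topP = 1.0 } = {}) {
  // Sort tokens by probability, highest first.
  const sorted = Object.entries(probs).sort((a, b) => b[1] - a[1]);
  // top_k: keep only the k most likely tokens.
  const topKSlice = sorted.slice(0, topK);
  // top_p: keep the smallest prefix whose cumulative probability
  // reaches topP (nucleus sampling).
  const kept = [];
  let cumulative = 0;
  for (const [token, p] of topKSlice) {
    kept.push(token);
    cumulative += p;
    if (cumulative >= topP) break;
  }
  return kept;
}
```

For example, with probabilities { the: 0.5, a: 0.3, cat: 0.15, dog: 0.05 }, a top_p of 0.5 keeps only "the", while a top_k of 2 keeps "the" and "a". Lower values of either parameter make output more deterministic, as the table describes.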
Supported model identifiers
Use the correct model name in the model_name field. The required identifier depends on the provider and must match exactly. You can obtain model identifiers from provider documentation, consoles, or your system administrator.
- AwsBedrock/Claude: Use model names such as anthropic.claude-3-haiku-20240307-v1:0. These are available in the AWS Bedrock console or in the Anthropic documentation.
- Gemini: Use model names such as gemini-1.5-pro. You can find these in the Vertex AI dashboard or the Gemini documentation.
- Azure/Gpt: Use model names such as gpt-4, gpt-4o, or gpt-3.5-turbo. These are available in the OpenAI documentation or in the Azure OpenAI documentation.
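Because identifiers must match exactly, a simple lookup against known-good values can catch typos before saving. This is a hypothetical helper using only the example identifiers listed above, not an exhaustive list:

```javascript
// Hypothetical sanity check: only the example model_name values from
// this section are listed; extend per your environment.
const KNOWN_MODEL_IDS = {
  'AwsBedrock/Claude': ['anthropic.claude-3-haiku-20240307-v1:0'],
  'Gemini': ['gemini-1.5-pro'],
  'Azure/Gpt': ['gpt-4', 'gpt-4o', 'gpt-3.5-turbo'],
};

function isKnownModelId(provider, modelName) {
  return (KNOWN_MODEL_IDS[provider] || []).includes(modelName);
}
```

A check like this only guards against typos in identifiers you have already confirmed with the provider; it does not replace the provider documentation as the source of truth.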
After saving an LLM, select Refresh to apply the changes.
To complete the setup, you must set the model as the default for all generative AI workflows in the environment. Run the following command from the Application C3 AI Console:
Genai.QuickStart.setup({ llmClientConfigName: 'model_name' });

To verify the credential and model setup, send a query to the Dynamic Agent from the user interface.
Use the Application C3 AI Console
You can also manage large language models (LLMs) and credentials programmatically using the Application C3 AI Console. This approach is recommended for advanced users or when automating environment setup.
View available models
To list all registered LLM configurations in the current environment, run:
Genai.UnstructuredQuery.Engine.ModelConfig.listConfigKeys().collect();

Register a new model
To register a new model configuration, use Genai.UnstructuredQuery.Engine.ModelConfig.make() and apply it with .setConfig().
Genai.UnstructuredQuery.Engine.ModelConfig.make({
name: '<name>',
llmDeployment: 'Genai.Llm.AzureOpenAI',
llmType: 'Gpt',
llmKwargs: {
context_length: 4096,
frequency_penalty: 0,
logit_bias: {},
max_output_tokens: 4096,
model_name: '<model-name>',
n: 1,
stop_sequences: ['</plan>', '</thought>', '</execute>', '</solution>'],
presence_penalty: 0,
temperature: 0.0,
top_p: 1,
},
userFriendlyName: 'Azure OpenAI',
configSuffix: '_gpt4o',
}).setConfig();
Genai.QuickStart.migrateAllModelConfigsToGenaiCore();

- name: A unique configuration name.
- model_name: Provider-specific model ID (gpt-4o for OpenAI).
- llmDeployment: Authentication source (for example, Genai.Llm.OpenAI.Config).
- llmKwargs: Optional settings such as stop_sequences, temperature, and top_p.
You can use the same structure to register other providers:
- Claude 3.5 Sonnet (using AWS Bedrock):
llmDeployment:GenaiCore.Llm.Bedrock
- Gemini 2.0 Flash (using Google Vertex AI):
llmDeployment:GenaiCore.Llm.VertexAi
You can also include additional generation parameters, such as temperature, top_p, and max_output_tokens, inside the defaultOptions field.
Set provider credentials
function setOpenAiApiKey() {
  Genai.Llm.OpenAI.Config.inst().setSecretValue('apiKey', '<your-api-key>', ConfigOverride.APP);
  Genai.Llm.OpenAI.Config.inst().setConfigValue(
    'apiBase',
    'https://your-azure-openai-endpoint.openai.azure.com/',
    ConfigOverride.APP
  );
  Genai.Llm.OpenAI.Config.inst().setConfigValue('apiVersion', '<version>', ConfigOverride.APP);
}

setOpenAiApiKey();

Refer to Application Initialization for steps to configure credentials for other providers.
Complete the setup by setting the model as the default for all generative AI workflows in the environment.
Genai.QuickStart.setup({ llmClientConfigName: '<name>' });

This step ensures that the application uses the specified model for all relevant operations.
Creating an LLM batch job
Large language model (LLM) batch jobs let you submit multiple prompts in one asynchronous request. Use this method for high-volume tasks such as data labeling, content generation, or evaluation.
The system queues the job, runs it in the background, and lets you retrieve results later. This approach lowers request overhead, increases throughput, and improves resource use.
For LLMs that support batch jobs, you can use Genai.UnstructuredQuery.Engine.ModelConfig to submit a batch of prompts through the LLM's batch API.
For more information on LLM batch APIs, see your provider's documentation.
job = Genai.ConfigUtil.queryEngineModelConfig('gemini').generateTextBatch({
"prompts": ["Count from 1 to 10", "What is the color of the sky"],
})
// Wait until the job is complete
while(!job.completed()) {
console.log(`Batch job ${job.id} has not completed as of ${DateTime.now()}. Sleeping for 10 seconds.`)
Thread.sleep(10000);
}
// Read the responses
job.readResponses()
// Remove the job if it is no longer needed
job.remove()
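The submit/poll/read pattern above can be exercised locally with a mock job object. All names in this sketch are hypothetical stand-ins for the real batch job returned by generateTextBatch; the mock simply "completes" after a fixed number of polls.

```javascript
// Mock batch job (hypothetical names) that completes after a fixed
// number of completed() calls, mimicking the lifecycle shown above.
function makeMockBatchJob(prompts, pollsUntilDone = 3) {
  let polls = 0;
  return {
    id: 'mock-job-1',
    completed() {
      polls += 1;
      return polls >= pollsUntilDone;
    },
    readResponses() {
      // One response per submitted prompt, in order.
      return prompts.map((p) => ({ prompt: p, response: `response to: ${p}` }));
    },
    remove() { /* no-op in the mock */ },
  };
}

function runBatch(job) {
  // Same loop structure as the console example above.
  while (!job.completed()) {
    // A real client would sleep here between polls (see Thread.sleep above).
  }
  const responses = job.readResponses();
  job.remove();
  return responses;
}
```

Running the loop against a real job works the same way: poll until completed() returns true, read the responses, then remove the job once it is no longer needed.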