Configure and use Large Language Models

Large language models (LLMs) are the core of generative AI applications. The C3 Generative AI Application enables you to configure, manage, and validate LLM integration in the application.

Supported LLMs

The application supports a set of third-party and proprietary LLMs that are pre-integrated and available by default. As of the 6.1 release, the following models are supported:

  • AwsBedrock(Claude): Claude v2, Claude v3 Haiku
  • Azure(Gpt): Gpt 3.5 Turbo, Gpt 4
  • Google(Gemini): Gemini (gemini-pro)
  • MIS: Narwhal

To list available models, run the following command in the Application C3 AI Console:

JavaScript
Genai.UnstructuredQuery.Engine.ModelConfig.listConfigKeys().collect();

You can configure large language models (LLMs) using the application interface or the Application C3 AI Console. Choose one method based on your workflow.

Use the application interface

Enable LLM and Credentials pages

By default, the LLM and Credentials pages are hidden in the application interface. To make them visible, follow these steps:

  1. From the home page, go to Settings in the left navigation bar.

  2. Under Configurations, locate the GenAiUiConfig configuration.

  3. Select Edit.

  4. Change the llmVisibility setting from hidden to full.

  5. Refresh the page.

After you refresh the page, the LLM and Credentials tabs appear under Settings.

Configure credentials

To configure provider credentials before adding a large language model (LLM):

  1. Go to Settings > Credentials.
  2. Select the Add icon and choose a Provider from the dropdown.
  3. Fill in the required fields (see the table below).
  4. Select Add to submit the form.

The application validates that the configuration is correctly formatted and that the credential name is unique.

To update an existing credential, select the credential in the list, modify its fields, and save the changes. The application validates the changes before saving them.

Required fields by provider

  • AwsBedrock/Claude: Name, accessKeyId, secretAccessKey, endpoint, region
  • Gemini: Name, serviceAccountInfo, project, location
  • Azure/Gpt: Name, apiVersion, apiKey, apiBase, apiType
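The formatting and uniqueness checks described above can be sketched in plain JavaScript. The `requiredFields` map mirrors the provider table on this page; `validateCredential` and `existingNames` are illustrative names, not part of the C3 API:

```javascript
// Required fields per provider, mirroring the table above.
const requiredFields = {
  'AwsBedrock/Claude': ['Name', 'accessKeyId', 'secretAccessKey', 'endpoint', 'region'],
  'Gemini': ['Name', 'serviceAccountInfo', 'project', 'location'],
  'Azure/Gpt': ['Name', 'apiVersion', 'apiKey', 'apiBase', 'apiType'],
};

// Illustrative check: every required field is present and the name is unique.
function validateCredential(provider, fields, existingNames) {
  const required = requiredFields[provider];
  if (!required) return { ok: false, error: `Unknown provider: ${provider}` };
  const missing = required.filter((f) => !fields[f]);
  if (missing.length > 0) return { ok: false, error: `Missing fields: ${missing.join(', ')}` };
  if (existingNames.includes(fields.Name)) return { ok: false, error: `Duplicate name: ${fields.Name}` };
  return { ok: true };
}
```

The same shape of check applies when updating an existing credential, except that the credential's own name is excluded from the uniqueness test.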

Add and manage LLMs

To register or update a large language model (LLM) after adding a credential:

  1. Go to Settings > LLM.
  2. Select Add, enter a unique name, and choose a Credential for the LLM.
  3. Fill in the model configuration parameters under Default Parameters.
  4. Select Add to save the LLM.

The system validates that the configuration is correctly formatted and that the model name is unique. You can later select an existing LLM to update its parameters.

Default parameters

  • model_name: Identifier of the model to use. Required; varies by provider.
  • temperature: Value from 0 to 1. Controls randomness. Lower values (for example, 0.2) make responses more focused; higher values (for example, 0.8) increase creativity and variability.
  • top_p: Value from 0 to 1. Controls the probability mass of tokens to consider. Lower values narrow the output; higher values allow more diverse responses.
  • top_k: Positive integer. Limits the number of top tokens the model can choose from. Lower values produce more deterministic responses; higher values increase variability.
  • context_length: The maximum amount of input text the model can consider at once. Input longer than this limit may be cut off.
  • max_output_tokens: Maximum number of tokens to generate in the output.
  • stop_sequences: One or more strings that stop generation. (For the Dynamic Agent, use </plan>, </thought>, </execute>, </solution>.)
  • stream_response: Boolean. If true, streams the output tokens as they are generated.
  • truncate_prompt_totally: Boolean. If true, truncates the entire prompt to fit within the model’s input limit.
  • api_version: (Gpt only) The API version of the OpenAI deployment.
  • api_type: (Gpt only) The deployment type (for example, azure).
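The sampling parameters above interact with each other. The sketch below shows one plausible way a provider could combine temperature, top_k, and top_p when filtering a token distribution; it is purely illustrative, and real model implementations differ in detail:

```javascript
// Illustrative top-k / top-p / temperature filtering over [token, probability] pairs.
function sampleDistribution(probs, { temperature = 1.0, top_k = Infinity, top_p = 1.0 } = {}) {
  // Temperature rescales probabilities: lower values sharpen the distribution.
  const scaled = probs.map(([tok, p]) => [tok, Math.pow(p, 1 / temperature)]);
  // top_k: keep only the k most likely tokens.
  const sorted = [...scaled].sort((a, b) => b[1] - a[1]).slice(0, top_k);
  // top_p: keep the smallest prefix whose cumulative mass reaches top_p (nucleus sampling).
  const total = sorted.reduce((s, [, p]) => s + p, 0);
  const kept = [];
  let cum = 0;
  for (const [tok, p] of sorted) {
    kept.push([tok, p / total]);
    cum += p / total;
    if (cum >= top_p) break;
  }
  // Renormalize the surviving tokens so probabilities sum to 1.
  const keptTotal = kept.reduce((s, [, p]) => s + p, 0);
  return kept.map(([tok, p]) => [tok, p / keptTotal]);
}
```

For example, with top_k = 2 only the two most likely tokens survive, and with a small top_p the output collapses to the single most likely token, which is why low values of either setting make responses more deterministic.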

Supported model identifiers

Use the correct model name in the model_name field. The required identifier depends on the provider and must match exactly. You can obtain model identifiers from provider documentation, consoles, or your system administrator.
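Assuming identifiers are case-sensitive (a common convention, though providers vary), a defensive check before registering a model can catch typos early. Both `assertKnownModel` and the allow-list are illustrative, not part of the C3 API:

```javascript
// Illustrative guard: reject model identifiers that do not match an allow-list exactly.
function assertKnownModel(modelName, knownIds) {
  if (knownIds.includes(modelName)) return modelName;
  // Point out near-misses that differ only in case.
  const nearMiss = knownIds.find((id) => id.toLowerCase() === modelName.toLowerCase());
  if (nearMiss) throw new Error(`Did you mean '${nearMiss}'? Identifiers are case-sensitive.`);
  throw new Error(`Unknown model identifier: ${modelName}`);
}
```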

After saving an LLM, select Refresh to apply the changes.

To complete the setup, set the model as the default for all generative AI workflows in the environment. Run the following command from the Application C3 AI Console, replacing <name> with the name you gave the LLM:

JavaScript
Genai.QuickStart.setup({ llmClientConfigName: '<name>' });

To verify the credential and model setup, send a query to the Dynamic Agent from the user interface.

Use the Application C3 AI Console

You can also manage large language models (LLMs) and credentials programmatically using the Application C3 AI Console. This approach is recommended for advanced users or when automating environment setup.

View available models

To list all registered LLM configurations in the current environment, run:

JavaScript
Genai.UnstructuredQuery.Engine.ModelConfig.listConfigKeys().collect();

Register a new model

To register a new model configuration, use Genai.UnstructuredQuery.Engine.ModelConfig.make() and apply it with .setConfig().

JavaScript
Genai.UnstructuredQuery.Engine.ModelConfig.make({
  name: '<name>',
  llmDeployment: 'Genai.Llm.AzureOpenAI',
  llmType: 'Gpt',
  llmKwargs: {
    context_length: 4096,
    frequency_penalty: 0,
    logit_bias: {},
    max_output_tokens: 4096,
    model_name: '<model-name>',
    n: 1,
    stop_sequences: ['</plan>', '</thought>', '</execute>', '</solution>'],
    presence_penalty: 0,
    temperature: 0.0,
    top_p: 1,
  },
  userFriendlyName: 'Azure OpenAI',
  configSuffix: '_gpt4o',
}).setConfig();

Genai.QuickStart.migrateAllModelConfigsToGenaiCore();

  • name: A unique configuration name.
  • model_name: Provider-specific model ID (for example, gpt-4o for OpenAI).
  • llmDeployment: Authentication source (for example, Genai.Llm.OpenAI.Config).
  • llmKwargs: Optional generation settings such as stop_sequences, temperature, and top_p.

You can use the same structure to register other providers:

  • Claude 3.5 Sonnet (using AWS Bedrock):
    • llmDeployment: GenaiCore.Llm.Bedrock
  • Gemini 2.0 Flash (using Google Vertex AI):
    • llmDeployment: GenaiCore.Llm.VertexAi

You can also include additional generation parameters such as temperature, top_p, and max_output_tokens inside the llmKwargs field.
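Following the bullets above, a small helper can assemble the argument object for each provider. The deployment strings come from this page; `buildModelConfig` itself is an illustrative name, not a C3 API, and the defaults it fills in are example values:

```javascript
// Map the providers named above to their llmDeployment values (per this page).
const deployments = {
  azure: 'Genai.Llm.AzureOpenAI',
  bedrock: 'GenaiCore.Llm.Bedrock',
  vertexai: 'GenaiCore.Llm.VertexAi',
};

// Illustrative helper: build the object passed to ModelConfig.make().
function buildModelConfig(provider, name, modelName, kwargs = {}) {
  const llmDeployment = deployments[provider];
  if (!llmDeployment) throw new Error(`Unsupported provider: ${provider}`);
  return {
    name,
    llmDeployment,
    llmKwargs: { model_name: modelName, temperature: 0.0, top_p: 1, ...kwargs },
  };
}
```

Centralizing the per-provider differences this way keeps registration scripts for Claude, Gemini, and Gpt identical except for one argument.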

Set provider credentials

JavaScript
function setOpenAiApiKey() {
  Genai.Llm.OpenAI.Config.inst().setSecretValue('apiKey', '<your-api-key>', ConfigOverride.APP);
  Genai.Llm.OpenAI.Config.inst().setConfigValue(
    'apiBase',
    'https://your-azure-openai-endpoint.openai.azure.com/',
    ConfigOverride.APP
  );
  Genai.Llm.OpenAI.Config.inst().setConfigValue('apiVersion', '<version>', ConfigOverride.APP);
}

setOpenAiApiKey();

Refer to Application Initialization for steps to configure credentials for other providers.

Complete the setup by setting the model as the default for all generative AI workflows in the environment.

JavaScript
Genai.QuickStart.setup({ llmClientConfigName: '<name>' });

This step ensures that the application uses the specified model for all relevant operations.

Creating an LLM batch job

Large language model (LLM) batch jobs let you submit multiple prompts in one asynchronous request. Use this method for high-volume tasks such as data labeling, content generation, or evaluation.

The system queues the job, runs it in the background, and lets you retrieve results later. This approach lowers request overhead, increases throughput, and improves resource use.

For LLMs that support batch jobs, you can use Genai.UnstructuredQuery.Engine.ModelConfig to submit a batch of prompts through the LLM’s batch API.

JavaScript
const job = Genai.ConfigUtil.queryEngineModelConfig('gemini').generateTextBatch({
  prompts: ['Count from 1 to 10', 'What is the color of the sky'],
});

// Wait until the job is complete
while (!job.completed()) {
  console.log(`Batch job ${job.id} has not completed as of ${DateTime.now()}. Sleeping for 10 seconds.`);
  Thread.sleep(10000);
}

// Read the responses
job.readResponses();

// Remove the job if it is no longer needed
job.remove();
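The same submit / poll / read / remove lifecycle can be mimicked with a small in-memory stand-in for local experimentation. MockBatchJob below is purely illustrative and unrelated to the C3 API; it fakes completion after two polls and returns canned responses:

```javascript
// Minimal in-memory stand-in for the batch-job lifecycle described above.
class MockBatchJob {
  constructor(prompts) {
    this.id = `job-${Date.now()}`;
    this.prompts = prompts;
    this.pollsRemaining = 2; // Pretend the job needs two polls to finish.
    this.responses = null;
  }
  completed() {
    if (this.pollsRemaining > 0) {
      this.pollsRemaining -= 1;
      return false;
    }
    // "Run" the batch: one canned response per prompt.
    this.responses = this.prompts.map((p) => `response to: ${p}`);
    return true;
  }
  readResponses() {
    if (this.responses === null) throw new Error('Job not complete');
    return this.responses;
  }
  remove() {
    this.responses = null;
  }
}
```

A polling loop like the one in the snippet above works unchanged against this mock, which makes it convenient for testing retry and cleanup logic without submitting real prompts.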
