Final Cluster Configuration Steps
After completing the C3 Agentic AI Platform installation and deployment process, C3 AI Operations completes final cluster configuration steps before end users can use the cluster.
If you install and deploy the platform for your organization, complete these steps after your organization meets resource requirements and deploys the required Helm charts. You can also use this topic as guidance on how to configure and verify necessary cluster components.
Prerequisites
See the following documentation before you complete the final cluster configuration steps:
- Non-Standard Deployment Requirements
- Secret Creation for Installation and Deployment
- C3 AI Helm Chart Overview and Installation
- Configure a Layer 7 load balancer
- Configure Storage for On-Premises Deployment
Contact Center of Excellence (CoE) to obtain c3/c3 credentials. You will set your own c3/c3 credentials as part of the final cluster configuration steps.
Final cluster configuration steps
Complete the following final cluster configuration steps. Take note of which steps are for only cloud deployments, only on-premises deployments, or optional.
- Set image pull secret
- (Cloud only) Configure file system mounts
- (On-premises only) Set up file system storage
- Add load balancer IP to DNS
- Access the c3/c3 C3 AI Console
- Configure the server version
- Ensure the database services
- (On-premises only) Set security profile provider
- (On-premises only) Create Persistent Volume Claims for all pods
- (Optional) Configure GPU nodepools
- (Optional) Configure OpenSearch index patterns
- Configure the cluster
- Deploy C3 AI Studio
- Verify deployment
1. Set image pull secret
In the command line, set the CLUSTER_ID variable and patch the default Kubernetes service account to use the same image pull secret as the C3 AI service account. Replace <cluster_id> with your cluster name:
export CLUSTER_ID=<cluster_id>
kubectl patch serviceaccount default -n $CLUSTER_ID -p '{"imagePullSecrets": [{"name": "docker-registry-secret"}]}'

2. (Cloud only) Configure file system mounts
Create a FileSystemConfig.json file at <bucket>/config/_cluster_/$CLUSTER_ID/FileSystemConfig/FileSystemConfig.json. Skip this step for on-premises deployments and see the following section to configure file system storage instead.
Add the following content according to your cloud provider.
Azure
This object represents a storage container for Azure.
{
"azure":
{
"mounts":
{
"/": "azure://$CLUSTER_ID/${env}/${app}/fs",
"artifact": "azure://$CLUSTER_ID/artifacts",
"attachment": "azure://$CLUSTER_ID/${env}/${app}/attachment",
"data-load": "azure://$CLUSTER_ID/${env}/${app}/dl",
"etl": "azure://$CLUSTER_ID/${env}/${app}/etl",
"key-value": "azure://$CLUSTER_ID/${env}/${app}/kv",
"system": "azure://$CLUSTER_ID/${env}/${app}/system",
"telemetry": "azure://$CLUSTER_ID/${env}/${app}/telemetry",
"datalake": "azure://$CLUSTER_ID/${env}/${app}/datalake"
}
},
"configOverride": "CLUSTER",
"default": "azure",
"type": "FileSystemConfig"
}

AWS
This object represents a bucket for AWS.
{
"configOverride": "CLUSTER",
"default": "s3",
"s3":
{
"mounts":
{
"/": "s3://<account_id>--$CLUSTER_ID/${env}/${app}/fs",
"artifact": "s3://<account_id>--$CLUSTER_ID/artifacts",
"attachment": "s3://<account_id>--$CLUSTER_ID/${env}/${app}/attachment",
"data-load": "s3://<account_id>--$CLUSTER_ID/${env}/${app}/dl",
"etl": "s3://<account_id>--$CLUSTER_ID/${env}/${app}/etl",
"key-value": "s3://<account_id>--$CLUSTER_ID/${env}/${app}/kv",
"system": "s3://<account_id>--$CLUSTER_ID/${env}/${app}/system",
"telemetry": "s3://<account_id>--$CLUSTER_ID/${env}/${app}/telemetry",
"datalake": "s3://<account_id>--$CLUSTER_ID/${env}/${app}/datalake"
}
},
"type": "FileSystemConfig"
}

GCP
This object represents a bucket for GCP.
{
"configOverride": "CLUSTER",
"default": "gcs",
"gcs":
{
"mounts":
{
"/": "gcs://c3--$CLUSTER_ID/${env}/${app}/fs",
"artifact": "gcs://c3--$CLUSTER_ID/artifacts",
"attachment": "gcs://c3--$CLUSTER_ID/${env}/${app}/attachment",
"data-load": "gcs://c3--$CLUSTER_ID/${env}/${app}/dl",
"etl": "gcs://c3--$CLUSTER_ID/${env}/${app}/etl",
"key-value": "gcs://c3--$CLUSTER_ID/${env}/${app}/kv",
"system": "gcs://c3--$CLUSTER_ID/${env}/${app}/system",
"telemetry": "gcs://c3--$CLUSTER_ID/${env}/${app}/telemetry",
"datalake": "gcs://c3--$CLUSTER_ID/${env}/${app}/datalake"
}
},
"type": "FileSystemConfig"
}

3. (On-premises only) Set up file system storage
If you have an on-premises deployment, run the following commands to set up file system storage for the config and vault stores. Skip this section if you have a cloud deployment.
Initialize configuration directories
If your on-premises deployment will use file system storage for config and vault instead of HashiCorp Vault or blob storage, initialize the configuration directories:
export CLUSTER_ID=clusterid
export POD_NAME=$(kubectl get pod -n $CLUSTER_ID -l c3__env-0=0c30,c3__app-0=0c30,c3__leader-0=010 -o jsonpath='{.items[0].metadata.name}')
kubectl exec -n $CLUSTER_ID "$POD_NAME" -c c3-server -- mkdir -p /usr/local/share/c3/server/config/_app_/$CLUSTER_ID/c3/c3/JdbcStoreConfig
kubectl exec -n $CLUSTER_ID "$POD_NAME" -c c3-server -- mkdir -p /usr/local/share/c3/server/config/_cluster_/$CLUSTER_ID/FileSystemConfig
kubectl exec -n $CLUSTER_ID "$POD_NAME" -c c3-server -- mkdir -p /usr/local/share/c3/server/config/_cluster_/$CLUSTER_ID/JdbcStoreConfig
kubectl exec -n $CLUSTER_ID "$POD_NAME" -c c3-server -- mkdir -p /usr/local/share/c3/server/vault/_app_/$CLUSTER_ID/c3/c3/JdbcStoreConfig

If you mount config and vault at different locations, adjust the /usr/local/share/c3/server/config and /usr/local/share/c3/server/vault paths accordingly.
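The four directories above follow a common layout under the server root: app-scoped entries live under _app_/<cluster>/c3/c3, while cluster-scoped entries live under _cluster_/<cluster>. A minimal sketch of how these paths are composed (illustrative only; the helper and variable names below are not part of the platform):

```python
# Illustrative sketch (not a C3 AI API): compose the directory paths created
# by the mkdir commands above.
cluster_id = "mycluster"  # placeholder for $CLUSTER_ID
server_root = "/usr/local/share/c3/server"

def app_path(store, kind):
    # App-scoped entries live under _app_/<cluster>/c3/c3/<kind>
    return f"{server_root}/{store}/_app_/{cluster_id}/c3/c3/{kind}"

def cluster_path(store, kind):
    # Cluster-scoped entries live under _cluster_/<cluster>/<kind>
    return f"{server_root}/{store}/_cluster_/{cluster_id}/{kind}"

paths = [
    app_path("config", "JdbcStoreConfig"),
    cluster_path("config", "FileSystemConfig"),
    cluster_path("config", "JdbcStoreConfig"),
    app_path("vault", "JdbcStoreConfig"),
]
```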
Configure mounts
Configure mounts for config and vault storage.
Get the name of the C3 AI leader pod:
export POD_NAME=$(kubectl get pod -n $CLUSTER_ID -l c3__env-0=0c30,c3__app-0=0c30,c3__leader-0=010 -o jsonpath='{.items[0].metadata.name}')

Create a FileSystemConfig.json file and place it at the cluster-level configuration directory:

cat <<EOF >FileSystemConfig.json
{
  "configOverride": "CLUSTER",
  "default": "c3fs",
  "c3fs":
  {
    "mounts":
    {
      "/": "/c3-filesystem/\${env}/\${app}/fs",
      "artifact": "/c3-filesystem/artifacts",
      "attachment": "/c3-filesystem/\${env}/\${app}/attachment",
      "data-load": "/c3-filesystem/\${env}/\${app}/dl",
      "etl": "/c3-filesystem/\${env}/\${app}/etl",
      "key-value": "/c3-filesystem/\${env}/\${app}/kv",
      "system": "/c3-filesystem/\${env}/\${app}/system",
      "telemetry": "/c3-filesystem/\${env}/\${app}/telemetry",
      "datalake": "/c3-filesystem/\${env}/\${app}/datalake"
    }
  },
  "type": "FileSystemConfig"
}
EOF
kubectl cp \
  -n "$CLUSTER_ID" \
  ./FileSystemConfig.json \
  "$POD_NAME:/usr/local/share/c3/server/config/_cluster_/$CLUSTER_ID/FileSystemConfig/FileSystemConfig.json" \
  -c c3-server
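Every mount in the file above except artifact is scoped per environment and application through the ${env}/${app} placeholders, while artifact is shared cluster-wide. A hedged Python sketch of generating this structure (the make_mounts helper is illustrative, not a platform API):

```python
import json

# Illustrative sketch: generate the FileSystemConfig mounts. All mounts except
# "artifact" are scoped per environment/app via the ${env}/${app} placeholders.
def make_mounts(root):
    scoped = {
        "/": "fs", "attachment": "attachment", "data-load": "dl",
        "etl": "etl", "key-value": "kv", "system": "system",
        "telemetry": "telemetry", "datalake": "datalake",
    }
    mounts = {name: f"{root}/${{env}}/${{app}}/{suffix}" for name, suffix in scoped.items()}
    mounts["artifact"] = f"{root}/artifacts"  # shared cluster-wide
    return mounts

config = {
    "configOverride": "CLUSTER",
    "default": "c3fs",
    "c3fs": {"mounts": make_mounts("/c3-filesystem")},
    "type": "FileSystemConfig",
}
print(json.dumps(config, indent=2))
```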
Set temporary database for setup process
On-premises deployments require the following configurations to start the server with a temporary embedded database.
This temporary database only serves the setup process and is not for production use. In later steps, you will initialize the containerized Postgres database for production use.
Configure the database connection and pool settings for the embedded database:
export POD_NAME=$(kubectl get pod -n $CLUSTER_ID -l c3__env-0=0c30,c3__app-0=0c30,c3__leader-0=010 -o jsonpath='{.items[0].metadata.name}')
cat <<EOF >JdbcStoreConfig_h2.json
{
  "name": "sql",
  "shareReadConnection": false,
  "connectionPool": { "maxIdle": -1, "maxTotal": -1 },
  "credentials": {
    "username": "sa",
    "serverEndpoint": "/usr/local/share/c3/dbs",
    "datastore": "h2",
    "database": "\${cluster}-\${env}"
  },
  "sharedReadThreshold": -1,
  "readConnectionPool": { "minIdle": 0, "maxIdle": -1, "maxTotal": -1 },
  "writeConnectionPool": { "minIdle": 0, "maxIdle": -1, "maxTotal": -1 }
}
EOF
kubectl cp \
  -n "$CLUSTER_ID" \
  ./JdbcStoreConfig_h2.json \
  "$POD_NAME:/usr/local/share/c3/server/config/_cluster_/$CLUSTER_ID/JdbcStoreConfig/sql.json" \
  -c c3-server
kubectl cp \
  -n "$CLUSTER_ID" \
  ./JdbcStoreConfig_h2.json \
  "$POD_NAME:/usr/local/share/c3/server/config/_app_/$CLUSTER_ID/c3/c3/JdbcStoreConfig/sql.json" \
  -c c3-server

Configure the secrets for connecting to the embedded database:
export POD_NAME=$(kubectl get pod -n $CLUSTER_ID -l c3__env-0=0c30,c3__app-0=0c30,c3__leader-0=010 -o jsonpath='{.items[0].metadata.name}')
cat <<EOF >JdbcStoreConfig_h2_secret.json
{
  "name": "sql",
  "credentials": { "password": "sa" }
}
EOF
kubectl cp \
  -n "$CLUSTER_ID" \
  ./JdbcStoreConfig_h2_secret.json \
  "$POD_NAME:/usr/local/share/c3/server/vault/_cluster_/$CLUSTER_ID/JdbcStoreConfig/sql.json" \
  -c c3-server
kubectl cp \
  -n "$CLUSTER_ID" \
  ./JdbcStoreConfig_h2_secret.json \
  "$POD_NAME:/usr/local/share/c3/server/vault/_app_/$CLUSTER_ID/c3/c3/JdbcStoreConfig/sql.json" \
  -c c3-server
Restart the c3/c3 pod
Restart the c3/c3 pod to apply configurations:
kubectl -n $CLUSTER_ID rollout restart deployment/$CLUSTER_ID-c3-c3-k8sdeploy-appleader-001

4. Add load balancer IP to DNS
Add the load balancer IP address to your Domain Name System (DNS). Refer to your DNS provider's documentation for how to add an IP address to your DNS records.
If you set up a Layer 7 load balancer, add that load balancer's IP address to your DNS. See Configure a Layer 7 Load Balancer to learn how to set up a Layer 7 load balancer during Helm chart configuration.
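After the record propagates, the cluster domain should resolve to the load balancer IP. A minimal, hedged check from any machine with Python (the domain and IP below are placeholders; propagation time depends on your DNS provider):

```python
import socket

# Placeholder values; substitute your cluster domain and load balancer IP.
cluster_domain = "cluster.example.com"
expected_ip = "203.0.113.10"

def resolves_to(domain, ip):
    # gethostbyname_ex returns (hostname, aliases, ip_list) for IPv4 records.
    _, _, addresses = socket.gethostbyname_ex(domain)
    return ip in addresses

# Example usage (works anywhere without DNS changes):
# resolves_to("localhost", "127.0.0.1")
```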
5. Access the c3/c3 C3 AI Console
Open a web browser and navigate to https://<cluster_domain>/c3/c3/static/console/index.html. Replace <cluster_domain> with your cluster domain to navigate to the c3/c3 C3 AI Console.
Log in with the cluster credentials provided by CoE.
6. Configure the server version
In the c3/c3 C3 AI Console, configure the cluster server version:
var serverVersion = C3.app().serverVersion;
C3.app().withConfiguredServerVersion(serverVersion).withRootPkgVersion(serverVersion).setConfig(ConfigOverride.CLUSTER);
C3.env().withConfiguredServerVersion(serverVersion).setConfig(ConfigOverride.CLUSTER);

7. Ensure the database services
Ensure the Postgres and Cassandra database services. Run the following commands for your deployment type.
For cloud deployments: Ensure Postgres service
Ensure the Postgres service. Replace <pg_password> and <pg_ip> with the Postgres password and IP address from your cloud provider:
kubectl -n c3-opsadmin create secret generic $CLUSTER_ID-c3-c3-k8spg-cs-001 \
--from-literal=connection-string='postgresql://postgres:<pg_password>@<pg_ip>:5432/postgres?sslmode=disable' \
--from-literal=postgres-admin-password='<pg_password>' \
--from-literal=postgres-admin-username="postgres" \
--from-literal=postgres-db-endpoint="<pg_ip>" \
--from-literal=postgres-db-port="5432" \
--dry-run=client -o yaml | kubectl apply -f -
var pg = PostgresDB.make().withRoleAndSeq();
pg.ensureService()

This restarts the c3/c3 pod. Refresh your browser before you proceed.
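Note that the connection string in the secret above embeds the Postgres password in a URL. If your password contains URL-reserved characters such as @, :, or /, it must be percent-encoded. A sketch of building the same string safely (all values are placeholders):

```python
from urllib.parse import quote

# Placeholder credentials; substitute your actual values.
pg_password = "p@ss:word/1"
pg_ip = "10.0.0.5"

# Percent-encode the password so reserved characters (@ : /) do not
# break the URL structure of the connection string.
conn = (
    f"postgresql://postgres:{quote(pg_password, safe='')}"
    f"@{pg_ip}:5432/postgres?sslmode=disable"
)
```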
For on-premises deployments: Ensure Postgres service
Start a new Kubernetes StatefulSet and create a secret with a randomly generated Postgres password:
let pg = PostgresDB.list()[0] || PostgresDB.make().withRoleAndSeq();
pg.config()
  .withField("imageRegistry", "registry.c3.ai") // Or your own registry URL
  .withField("persistenceSize", "100Gi")
  .withField("resourcesLimitsCpu", "4")
  .withField("resourcesLimitsMemory", "16Gi")
  .withField("resourcesRequestsCpu", "4")
  .withField("resourcesRequestsMemory", "16Gi")
  .setConfig(ConfigOverride.CLUSTER);
pg.ensureService();

Get the Postgres password from the secret. Replace <cs-0123abcdef-secret> with your secret name:

kubectl -n $CLUSTER_ID get secret
# Returns a secret name that looks like cs-0123abcdef-secret
kubectl -n $CLUSTER_ID get secret -o yaml <cs-0123abcdef-secret>
# Print the password as plain text
kubectl get secret <cs-0123abcdef-secret> -o jsonpath="{.data['postgresql-password']}" | base64 --decode ; echo

With the secret you obtained from the previous step, run the following script to set the database credentials:

let pgPassword = "<postgres_password_from_secret>"; // Plain text password from previous step
let csId = "<0123abcdef>"; // From secret name (previous example: cs-0123abcdef-secret)
let clusterName = "$CLUSTER_ID";

// Set config for c3/c3 app
let cfgCluster = JdbcStoreConfig.make().withName("sql").getConfig();
let newClusterCreds = cfgCluster.credentials
  .withAdminUsername("postgres")
  .withUsername("postgres")
  .withAdminPassword(pgPassword)
  .withPassword(pgPassword)
  .withServerEndpoint("cs-" + csId + "-001")
  .withPort(5432)
  .withDatastore("postgres")
  .withDatabase(clusterName + "_c3_c3");
cfgCluster.withCredentials(newClusterCreds).setConfig(ConfigOverride.APP);
cfgCluster.withCredentials(newClusterCreds).setSecret(ConfigOverride.APP);

// Set config for cluster
let cfg = JdbcStoreConfig.make().withName("sql").getConfig().withoutFields(["configOverride", "secretOverride"]);
let newCreds = cfg.credentials
  .withAdminUsername("postgres")
  .withUsername("postgres")
  .withAdminPassword(pgPassword)
  .withPassword(pgPassword)
  .withServerEndpoint("cs-" + csId + "-001")
  .withPort(5432)
  .withDatastore("postgres")
  .withDatabase(clusterName + "_c3_c3");
cfg.withCredentials(newCreds).setConfig(ConfigOverride.CLUSTER);
cfg.withCredentials(newCreds).setSecret(ConfigOverride.CLUSTER);

Restart the c3/c3 node to start using containerized Postgres:

kubectl -n $CLUSTER_ID rollout restart deployment/$CLUSTER_ID-c3-c3-k8sdeploy-appleader-001
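The jsonpath and base64 --decode pipeline above works because Kubernetes stores secret values base64-encoded under .data. The same decoding step, sketched in Python with made-up data:

```python
import base64

# Made-up example of the .data field of a Kubernetes secret; values in
# secrets are always base64-encoded, which is what `base64 --decode` undoes.
secret_data = {"postgresql-password": base64.b64encode(b"s3cr3t").decode()}

def decode_secret_value(data, key):
    # Decode one base64-encoded value from a secret's .data mapping.
    return base64.b64decode(data[key]).decode()
```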
For both cloud and on-premises deployments: Ensure the Cassandra service
If your cluster will use Cassandra as a database, run the following command in the command line to ensure the Cassandra service:
var cass = CassandraDB.make().withRoleAndSeq()
var c = cass.config()
c = c.withField("heapSize", "12000M" )
c = c.withField("storageSize", "1024Gi")
c = c.withField("ringSize", 6)
c = c.withField("storageClassName", "c3-default") // For on-premises deployments, use your RWO storage class
c = c.withField("cpuLimits", "3")
c = c.withField("cpuRequests", "3")
c = c.withField("memoryLimits", "28Gi")
c = c.withField("memoryRequests", "28Gi")
c.setConfig(ConfigOverride.CLUSTER)
cass.ensureService()

This restarts the c3/c3 node. Refresh your browser before you proceed.
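One pitfall with the values above is a JVM heap (heapSize) that does not leave headroom inside the pod memory limit. A rough, hedged sanity check (this simplified parser handles only the M and Gi suffixes used here, treating M as mebibytes per JVM convention; it is not a general Kubernetes quantity parser):

```python
# Simplified size parser for the suffixes used in this step only.
UNITS = {"M": 2**20, "Gi": 2**30}

def to_bytes(size):
    for suffix, factor in UNITS.items():
        if size.endswith(suffix):
            return int(size[: -len(suffix)]) * factor
    raise ValueError(f"unsupported size: {size}")

heap = to_bytes("12000M")     # Cassandra JVM heap
mem_limit = to_bytes("28Gi")  # Pod memory limit
# The heap must leave headroom for off-heap memory and the OS page cache.
assert heap < mem_limit
```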
8. (On-premises only) Set security profile provider
In an on-premises deployment, run the following command to set ONPREM as the provider type for your cloud security profile. Skip this step for cloud deployments.
CloudSecurityProfile.setProviderTypeForCluster(ProviderType.ONPREM);

This setting disables cloud provider discovery, which can cause issues in some infrastructure configurations, such as self-managed Kubernetes clusters hosted on cloud infrastructure.
9. (On-premises only) Create Persistent Volume Claims for all pods
In an on-premises deployment, you must set up Persistent Volume Claims (PVCs) for all pods for configuration and vault storage. Skip this step for cloud deployments.
The c3server Helm chart configuration in the previous step, Configure Storage for On-Premises Deployments, creates PVCs for configFramework and sharedFileSystems in the c3/c3 pod. To set up PVCs for all other environments, set the K8sContainerConfig Type with the same volumes and volumeMounts.
From the c3/c3 C3 AI Console, run the following script:
function addReplacePVC(name, persistentVolumeClaimName, mountPath, subPath, overrideLevel) {
let oldVolumes = K8sContainerConfig.forName("c3-server").volumes;
let oldVolumeMounts = K8sContainerConfig.forName("c3-server").volumeMounts;
let tempVolume = K8sVolume.make({
name: name,
persistentVolumeClaim: K8sPersistentVolumeClaimVolumeSpec.make({ claimName: persistentVolumeClaimName }),
});
let tempMount = K8sVolumeMount.make({ name: name, mountPath: mountPath, subPath: subPath });
let newVolumes = [];
let newVolumeMounts = [];
let volumeNames = oldVolumes.map((item) => item.name);
let mountList = oldVolumeMounts.map((item) => item.mountPath);
newVolumes = volumeNames.contains(name)
? oldVolumes.map((item) => (item.name == name ? tempVolume : item))
: [...oldVolumes, tempVolume];
newVolumeMounts = mountList.contains(mountPath)
? oldVolumeMounts.map((item) => (item.mountPath == mountPath ? tempMount : item))
: [...oldVolumeMounts, tempMount];
K8sContainerConfig.forName("c3-server").setConfigValues(
{ volumes: newVolumes, volumeMounts: newVolumeMounts },
overrideLevel,
);
}
addReplacePVC(
"c3-config",
"c3-config-pvc",
"/usr/local/share/c3/server/config",
"c3-cf-config",
ConfigOverride.CLUSTER,
);
addReplacePVC("c3-vault", "c3-vault-pvc", "/usr/local/share/c3/server/vault", "c3-cf-vault", ConfigOverride.CLUSTER);
addReplacePVC("c3-filesystem", "c3-filesystem", "/c3-filesystem", undefined, ConfigOverride.CLUSTER);

This code sets the K8sContainerConfig Type with the necessary volumes and volumeMounts for an on-premises deployment.
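addReplacePVC follows an upsert pattern: if a volume or mount with the same key already exists, it is replaced in place; otherwise the new entry is appended. The same pattern as a standalone Python sketch with illustrative names:

```python
# Illustrative upsert-by-key pattern, mirroring how addReplacePVC merges a new
# volume into the existing list without duplicating names.
def upsert(items, new_item, key):
    if any(item[key] == new_item[key] for item in items):
        # Replace the existing entry that shares the key.
        return [new_item if item[key] == new_item[key] else item for item in items]
    # No match: append the new entry.
    return [*items, new_item]

volumes = [{"name": "c3-config", "claim": "old-pvc"}]
volumes = upsert(volumes, {"name": "c3-config", "claim": "c3-config-pvc"}, "name")  # replaces
volumes = upsert(volumes, {"name": "c3-vault", "claim": "c3-vault-pvc"}, "name")    # appends
```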
Troubleshoot PVC configuration
If any C3 AI environments do not start up, the script was unsuccessful. Run the following command in the c3/c3 C3 AI Console to unset the configuration. Replace <major.minor> with the C3 AI server version, for example 8.9.
K8sContainerConfig.forName("c3-server-<major.minor>").clearConfigAndSecretOverride(ConfigOverride.CLUSTER);

If the script was unsuccessful, contact Center of Excellence (CoE) for assistance with setting up PVCs for pods in an on-premises deployment.
10. (Optional) Configure GPU nodepools
To configure GPU nodepools, set up the NVIDIA device plugin for Kubernetes. Complete the following steps for your Kubernetes service.
AKS and EKS
For AKS and EKS, create an nvidia-device-plugin.yaml file with the following contents to install GPU drivers:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nvidia-device-plugin-daemonset
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: nvidia-device-plugin-ds
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        name: nvidia-device-plugin-ds
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: nvidia.com/gpu.product
                operator: Exists
      tolerations:
      - effect: NoSchedule
        key: nvidia.com/gpu
        operator: Equal
        value: "true"
      - effect: NoSchedule
        key: nvidia.com/gpu
        operator: Equal
        value: present
      - effect: NoSchedule
        key: gpuWorkload
        operator: Equal
        value: "true"
      priorityClassName: "system-node-critical"
      containers:
      - image: nvcr.io/nvidia/k8s-device-plugin:v0.17.4
        name: nvidia-device-plugin-ctr
        env:
        - name: FAIL_ON_INIT_ERROR
          value: "false"
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop: ["ALL"]
        volumeMounts:
        - name: device-plugin
          mountPath: /var/lib/kubelet/device-plugins
      volumes:
      - name: device-plugin
        hostPath:
          path: /var/lib/kubelet/device-plugins

Run the following command to apply the file:

kubectl apply -f nvidia-device-plugin.yaml

GKE
For GKE, add the gpu_driver_installation_config Terraform flag to the root module before running the Terraform scripts.
"guest_accelerator": [
{
"type": "nvidia-l4",
"count": 8,
"gpu_driver_installation_config": {
"gpu_driver_version": "LATEST"
}
}
]

11. (Optional) Configure OpenSearch index patterns
You can configure OpenSearch index patterns. To learn more about index patterns and how to configure them, see Index Patterns in the OpenSearch documentation.
To configure OpenSearch index patterns according to C3 AI standards, run the following code in the command line:
os_endpoint="https://<cluster_domain>/opensearch"
os_username=`kubectl -n c3-opsadmin get secret opensearch-admin-user -o jsonpath='{.data.username}' | base64 -d`
os_password=`kubectl -n c3-opsadmin get secret opensearch-admin-user -o jsonpath='{.data.password}' | base64 -d`
index_patterns=`curl -s -u $os_username:$os_password "$os_endpoint/api/saved_objects/_find?fields=title&fields=type&per_page=10000&type=index-pattern" -H 'osd-xsrf: reporting' -k | jq -r '.saved_objects[].attributes.title'`
for index_pattern in c3server cassandra jupyter exmachina telemetry system; do
if [[ ! "$index_patterns" =~ $index_pattern ]]; then
curl -XPOST -u $os_username:$os_password "$os_endpoint/api/saved_objects/index-pattern" -H 'Content-Type: application/json' -H 'osd-xsrf: reporting' -d "{\"attributes\":{\"title\":\"${index_pattern}*\",\"timeFieldName\":\"@timestamp\"}}" -k
echo
fi
done

12. Configure the cluster
Log in to the c3/c3 C3 AI Console with the credentials provided by CoE.

In the c3/c3 C3 AI Console, set the cluster-level URL to the cluster domain name:

AppUrl.make({"id":"<cluster_domain>","implicit":false, "scheme":"https"}).setConfig(ConfigOverride.CLUSTER)

Set the container registry. Replace https://registry.c3.ai if your organization uses a custom registry:

ContainerRegistry.make({"id": "c3", "name": "c3", "url":"https://registry.c3.ai"}).setConfig(ConfigOverride.CLUSTER);

(On-premises only) Set the following storage classes for an on-premises deployment. Replace <your_rwx_storageclass> and <your_rwo_storageclass> with your ReadWriteMany (RWX) and ReadWriteOnce (RWO) storage classes. C3 AI recommends storage classes without the Retain reclaim policy.

// Default storage class for single and multiple node environments
K8sStorageMappingConfig.forName("default")
  .getConfig()
  .withDisk("<your_rwo_storageclass>") // Use RWO storage class
  .withNfs("<your_rwx_storageclass>") // Use RWX storage class
  .setConfig(ConfigOverride.CLUSTER);

// Jupyter storage class
JupyterHub_Config.make({ id: "k8sjup-cs-001" }).setConfigValue(
  "jupyterSingleUserStorageClass",
  "<your_rwo_storageclass>",
  ConfigOverride.CLUSTER
);

Set up an identity provider (IdP) for your cluster. See the following documentation and use the specified reply URL for SAML and OIDC:

| Authentication Method | Documentation | Reply URL |
|---|---|---|
| SAML | Authentication Using SAML | https://<cluster_domain>/c3/c3/saml/login |
| OIDC | Authentication Using OpenID Connect | https://<cluster_domain>/c3/c3/oidc/login |
| LDAP | Connect the C3 Agentic AI Platform to an LDAP Server | N/A |

For all of the IdPs, set the userIdFormat field in the IdP configuration to EMAIL. Make sure to set user emails and group details, because the platform uses emails as the user IDs and uses the groups to handle authorization.

In the c3/c3 C3 AI Console, map the C3.ClusterAdmin C3 AI group to its respective IdP group name using the appropriate IdP configuration:

| Authentication Method | Command |
|---|---|
| OIDC | UserGroup.forId("C3.ClusterAdmin").addIdpGroupForIdp(OidcIdpConfig.forIdp("<domain>"), "<IdP_group_name>") |
| SAML | UserGroup.forId("C3.ClusterAdmin").addIdpGroupForIdp(SamlIdpConfig.forIdp("<domain>"), "<IdP_group_name>") |
| LDAP | UserGroup.forId("C3.ClusterAdmin").addIdpGroupForIdp(LdapIdp.Config.forIdp("<domain>"), "<IdP_group_name>") |

You do not need to map C3.ClusterAdmin to the IdP group name if the IdP group name is already C3.ClusterAdmin.
13. Deploy C3 AI Studio
In the c3/c3 C3 AI Console, create an environment and set the server version:
Cluster.startEnv({ "name": "ai", "singleNode": true, "defaultAppMode": "prod", "sharedDb": true});

This code does the following:
- Sets a name for the environment
- Configures the environment to run on a single node
- Sets the default application mode to production
- Enables database sharing with other environments
In the ai/c3 C3 AI Console, deploy C3 AI Studio:
Pkg.Store.inst().config().clearConfigAndSecretOverride('APP')
Studio.deploy('prod')

This code removes any existing custom configurations at the application level to prevent conflicts, and deploys C3 AI Studio in production mode.
In the ai/c3 C3 AI Console, update microservice configurations to set at the cluster level:
Microservice.Config.listConfigs().collect().filter(c => c.configOverride == "ENV").each(c => {
Microservice.Config.forName(c.name).setConfig("CLUSTER");
Microservice.Config.forName(c.name).clearConfigAndSecretOverride("ENV");
});

This code retrieves all microservice configurations, filters configurations at the environment level, promotes these configurations to the cluster level, and removes the environment-level override to prevent conflicts.
In the ai/studio C3 AI Console, map the C3.StudioAdmin and C3.StudioUser C3 AI groups to their respective IdP group names using the appropriate IdP configuration:
| Authentication Method | C3.StudioAdmin Command | C3.StudioUser Command |
|---|---|---|
| OIDC | UserGroup.forId("C3.StudioAdmin").addIdpGroupForIdp(OidcIdpConfig.forIdp("<domain>"), "<IdP_group_name>") | UserGroup.forId("C3.StudioUser").addIdpGroupForIdp(OidcIdpConfig.forIdp("<domain>"), "<IdP_group_name>") |
| SAML | UserGroup.forId("C3.StudioAdmin").addIdpGroupForIdp(SamlIdpConfig.forIdp("<domain>"), "<IdP_group_name>") | UserGroup.forId("C3.StudioUser").addIdpGroupForIdp(SamlIdpConfig.forIdp("<domain>"), "<IdP_group_name>") |
| LDAP | UserGroup.forId("C3.StudioAdmin").addIdpGroupForIdp(LdapIdp.Config.forIdp("<domain>"), "<IdP_group_name>") | UserGroup.forId("C3.StudioUser").addIdpGroupForIdp(LdapIdp.Config.forIdp("<domain>"), "<IdP_group_name>") |
To map C3 AI application-specific groups to IdP groups, a user with the C3.ClusterAdmin role must run the mapping command in the C3 AI application C3 AI Console after the application is deployed.
14. Verify the deployment
In the c3/c3 C3 AI Console, run sanity tests to verify the deployment:
Js.exec('ClusterSanityCheck.runAllTests({"studioEnvName": "ai"})')

This code shows successfully configured components and flags deployment issues.