Helm Chart Overlay: c3server
As part of the C3 Agentic AI Platform installation and deployment process, C3 AI uses Helm to deploy Kubernetes resources for the platform. In a standard deployment, C3 AI Operations deploys the c3crds, c3aiops, and c3server Helm charts for your organization. However, if you are a cluster administrator who deploys the Helm charts for your organization or who needs to modify their configurations, use this topic as guidance.
This topic provides required configuration settings for the c3server Helm chart.
Deviations from the guidance in this topic might cause upgrade challenges during subsequent releases. Maintain proper change controls to ensure successful product upgrades. Carefully review the "Patch Notes" in Supported Platform Versions for any changes that might impact your specific configurations.
Requirements
See the following documentation before you configure the c3server overlay:
- Non-Standard Deployment Overview and Requirements
- Create Secrets for Installation and Deployment
- C3 AI Helm Chart Overview and Installation. Configure the c3server overlay after you complete the "Add Helm charts" step.
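After you complete the steps above and configure the overlay described in the next section, the chart is deployed by passing the overlay file to Helm. The release name, repository alias (`c3ai`), namespace, and file name in this sketch are assumptions; substitute the values used in your deployment:

```shell
# Hypothetical values; substitute your own cluster ID, chart version,
# and overlay file name.
CLUSTER_ID="c3cluster01"
CHART_VERSION="8.8.7+196"
OVERLAY="c3server-overlay.yaml"

# Print the install command for review before running it against the cluster.
echo helm upgrade --install c3server c3ai/c3server \
  --version "${CHART_VERSION}" \
  --namespace "${CLUSTER_ID}" \
  -f "${OVERLAY}"
```

Using `helm upgrade --install` rather than `helm install` makes the command idempotent: it installs the release if it does not exist and upgrades it in place if it does.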
Example overlay
The following code is an example of an entire overlay file for an Azure deployment. It contains comments to describe each configuration and additional examples for AWS and GCP cloud providers. Use this file as guidance on how to set additional configurations for your c3server overlay.
c3: # For on-premises deployments, refer to on-premises storage documentation in the "See also" section of this topic
cluster:
environment:
name: c3 # Name of main c3 app. Leave as c3
name: <cluster_id> # Must match cluster name requirements listed in install guides at https://c3.ai/legal/
metadata:
enabled: true
ipMode: "IPv4" # Or IPv6
cloud: "azure" # or aws or gcp
cloudNativeLoadBalancerToNginx: # Ingress controller for a Layer 7 load balancer. Refers to Web Application Firewall (WAF) and attaches it to the Layer 7 load balancer. See Layer 7 load balancer documentation in "See also" section of this topic
enabled: true
azure: # or aws or gcp
wafPolicyForPath: "/subscriptions/<azure_subscription_id>/resourceGroups/<cluster_id>-rsgp-c3-01/providers/Microsoft.Network/ApplicationGatewayWebApplicationFirewallPolicies/<cluster_id>-waf-pol-01"
# AWS example:
# cloudNativeLoadBalancerToNginx:
# enabled: true
# aws:
# wafv2:
# aclArn: "arn:aws:wafv2:<aws_region>:<aws_account_id>:regional/webacl/<cluster_id>-waf-acl-01/{{waf_acl_id}}"
# ingressSecurityGroups: "<cluster_id>-sg-dmz-01"
# GCP example:
# cloudNativeLoadBalancerToNginx:
# enabled: true
# gcp:
# securityPolicyName: "<cluster_id>-securitypolicy-lb"
# ingress:
# hosts:
# - "*.<example_url>"
ingress:
hosts: # Must match cluster domain. The Layer 7 load balancer uses this domain.
- "*.<example_url>" # Use wildcard to allow multiple AppUrls under a single domain. If multiple domains require a single Layer 7 load balancer, pass each domain as a separate entry
# For air-gapped environments, or if you otherwise need to add certificates, add this section to set new certificates in the c3server truststore:
# additionalCertificates:
# mountPath: "/usr/local/share/c3/certificates"
# certificates:
# - name: <certificate1>
# crt: |
# -----BEGIN CERTIFICATE-----
# <certificate1_content>
# -----END CERTIFICATE-----
# - name: <certificate2>
# crt: |
# -----BEGIN CERTIFICATE-----
# <certificate2_content>
# -----END CERTIFICATE-----
configFramework: # Config framework settings for c3/c3
bootConfig:
clusterApp:
code: 1
configuredServerVersion: <server_version_build> # Set to same version as c3server Helm chart, for example 8.8.7+196
mode: dev # Must be set to dev on the initial deployment. You can change it to prod after setting up the cluster. Setting it to prod disables basic authentication and requires an IdP to handle authentication, which can cause login issues if you have not set up and enabled an IdP before switching to prod.
credentials: # Cloud credentials for c3/c3 to access cloud resources
accessKey: "c3<cluster_id>" # Use empty string "" for managed identity (best practice) in Azure. Optionally set an access key
secretKey: "<secret_key>" # Set the secret key for pods to use key-based authentication. To use workload identity federation (managed identity in Azure, IAM in AWS, or service accounts in GCP), configure the serviceAccount section
configRoot: azure://<clusterid>/
# AWS configRoot: s3://<accountid>--<clusterid>/config
# GCP configRoot: gcs://<projectid>--<clusterid>/config
vaultRoot: <blob_storage_root>/vault/ # Use azure:// for Azure, s3:// for AWS, and gcs:// for GCP
# doNotUseWorkloadIdentity: "true" # Required only for on-premises deployment. Disables cloud provider discovery, which can cause issues in some infrastructure configurations, such as self-managed Kubernetes clusters hosted on cloud infrastructure.
region: westus2 # Specify a region for blob storage
enabled: true
path: /usr/local/share/c3/server/bootconfig/
config: # Leave as false for cloud deployments
nfs:
enabled: false
vault:
nfs:
enabled: false
image:
registry: registry.c3.ai # Set registry. If your organization uses its own registry with a dedicated repository for C3 AI images, input <registry_url>/<repository_name>
tag: <server_version_build> # Set to the same version as the c3server Helm chart, but use _ instead of +. For example, 8.8.7_196
pullSecret: docker-registry-secret # Secret name to pull from registry
ingress:
enabled: true
host: <example>.c3.cloud # Define cluster domain. The cluster domain requires a TLS certificate. A single TLS certificate serves all AppUrls in the cluster for standard ingress
install-nginx-controller: true
annotations:
nginx.ingress.kubernetes.io/app-root: /ai/studio # Default path for C3 AI Studio. Users are redirected to this if they don't specify a full path in the URL
leader: # c3/c3 pod and its details: Resource configs, JVM details, how to pull log container
jvmDebug: false
jvmSuspend: false
replicas: 1
resources:
limits:
cpu: 2
jvmMaxMemFraction: 0.8
memory: 15Gi
requests:
cpu: 2
memory: 15Gi
service:
api2Port: false
apiPort: 8888
log:
image:
registry: registry.c3.ai # Set registry
pullSecret: docker-registry-secret # Set secret name to pull from registry
security:
deployAuthClusterRole: true
istio:
enabled: false
deployClusterRole: true # Set to true
deployRole: true # Set to true
podSecurityContext: # Switch different UIDs for c3/c3 pod
runAsUser: 4444
runAsGroup: 4444
fsGroup: 4444
supplementalGroups: [5555]
containerSecurityContext:
allowPrivilegeEscalation: true
# On-premises deployments do not require serviceAccount configuration. Leave as defaults for on-premises deployments. Only set for cloud deployments
serviceAccount: # C3 AI creates service accounts in Kubernetes and uses them to access cloud services and get Kubernetes cluster permissions. See https://kubernetes.io/docs/concepts/security/service-accounts/
c3ServiceAccount:
annotations: # Configure to use workload identity federation
azure.workload.identity/client-id: <azure_client_id> # c3-mi-01 output from the Terraform module. Allows pods to authenticate and access cloud resources. The c3ServiceAccount maps to this cloud identity. Use a client ID for Azure, an IAM role for AWS, and a service account for GCP.
# AWS example: eks.amazonaws.com/role-arn: arn:aws:iam::<aws_account_id>:role/<cluster_id>-c3
# AWS government cloud example: eks.amazonaws.com/role-arn: arn:aws-gov:iam::<aws_account_id>:role/<cluster_id>-c3
# GCP example: iam.gke.io/gcp-service-account: c3server@<gcp_project_id>.iam.gserviceaccount.com
labels:
azure.workload.identity/use: "true" # Not required for AWS and GCP. Kubernetes label values must be strings, so quote the value
c3PrivilegedServiceAccount: # Another C3 AI service account
annotations:
azure.workload.identity/client-id: <azure_client_id> # c3privileged-mi-01 output from the Terraform module. The c3PrivilegedServiceAccount maps to this cloud identity, which comes from the provided Terraform scripts or infrastructure provisioning step. Refer to the previous section for other cloud provider examples.
labels:
azure.workload.identity/use: "true" # Not required for AWS and GCP. Kubernetes label values must be strings, so quote the value
# End serviceAccount section for cloud deployments. Continue for on-premises deployments
task: # Task nodes of c3/c3. Default is zero because task nodes are not necessary at the cluster level.
replicas: 0
autoscaling:
cpu:
enabled: false
data: # Data nodes of c3/c3. Default is zero because data nodes are not necessary at the cluster level.
replicas: 0
prometheus: # Deploys additional Prometheus pod alongside c3/c3 for metrics. Leave as true.
enabled: true
c3-nginx-ingress: # Nginx ingress controller is required. See https://github.com/kubernetes/ingress-nginx/tree/main for all Nginx ingress controller configs. For on-premises deployments, refer to the on-premises storage documentation in the "See also" section of this topic.
rbac:
scope: false # Set to false to enable Nginx to create the ClusterRole and ClusterRoleBinding to differentiate between various ingressClass names.
imagePullSecrets:
- name: docker-registry-secret # Set secret name
controller: # Refer to Layer 7 load balancer documentation in the "See also" section of this topic.
image:
registry: registry.c3.ai # Set registry
autoscaling: # Determines how Nginx ingress controller scales
enabled: true
maxReplicas: 20
minReplicas: 2 # Set to 1 to use less resources
config:
force-ssl-redirect: false # Set to true if you want the Nginx ingress controller to always use HTTPS. If the Nginx ingress controller does not handle TLS termination, set to false to allow HTTP traffic between the load balancer that handles TLS termination and the Nginx ingress controller.
extraArgs:
default-ssl-certificate: <cluster_id>/tls-cert # Secret that allows Nginx ingress controller to perform TLS termination
metrics: # Allows metrics and sets metric configurations for Nginx ingress controller
enabled: true
serviceMonitor:
additionalLabels:
release: c3
enabled: true
namespaceSelector:
any: true
scrapeInterval: 10s
publishService:
enabled: true
resources: # Resource configuration for Nginx ingress controller
limits:
cpu: 2
memory: 12Gi
requests:
cpu: 2
memory: 12Gi
scope:
enabled: true
service: # Determines how Nginx load balancer is created in the cloud. Defaults from cloud provider are already set. See your own cloud provider documentation for load balancer creation configs.
annotations: # Set max idle timeout for the load balancer
service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "4000"
service.beta.kubernetes.io/azure-load-balancer-tcp-idle-timeout: "100"
cloud.google.com/l4-rbs: "enabled"
externalTrafficPolicy: "Local" # The externalTrafficPolicy of the service. The value Local preserves the client source IP. Leave as Local.
enableHttp: false # Set to false to disable the HTTP port in the default Nginx service.
loadBalancerSourceRanges: # Explicit allow list of IPs. If left undefined, anyone can access your cluster at the network level (opens up all traffic on 0.0.0.0/0). If you have custom CIDR ranges, add them here to allow internal traffic through load balancer.
# Default C3 AI control IPs
- 54.76.64.220/32
- 52.48.79.190/32
- 34.231.113.223/32
- 34.232.23.54/32
- 12.226.154.130/32
- 70.35.33.244/32
- 34.238.215.224/32
- 34.82.144.175/32
- 18.136.19.189/32
- 13.214.249.29/32
- 38.77.68.228/32
- 172.176.168.40/32
kube-prometheus-stack: # Deploys kube-prometheus-stack for monitoring and alerting. Leave as false
enabled: false
metrics-server:
enabled: false
prometheus-adapter:
enabled: true
metricsRelistInterval: 1m
nodeExporter:
enabled: false
prometheus:
port: 9090
url: http://prometheus-operated.<cluster_id>
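Before installing, the overlay can be checked against the chart locally. The repository alias (`c3ai`) and overlay file name below are assumptions; `helm lint` flags chart and values problems, and `helm template` renders the chart with the overlay applied so YAML or template errors surface before anything reaches the cluster:

```shell
# Hypothetical repository alias and file name; adjust for your environment.
OVERLAY="c3server-overlay.yaml"

# Print the validation commands for review; run them once the c3ai
# chart repository is configured locally.
echo helm lint c3ai/c3server -f "${OVERLAY}"
echo helm template c3server c3ai/c3server -f "${OVERLAY}"
```

Reviewing the `helm template` output is also a convenient way to confirm that settings such as `loadBalancerSourceRanges` and the ingress annotations land on the rendered Kubernetes resources you expect.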