
Configure Storage for On-Premises Deployments

In a standard cloud deployment, the C3 Agentic AI Platform uses Binary Large Object (blob) storage as an abstraction layer for the backend file system. Backend file system storage configuration for on-premises deployments differs from that of cloud deployments.

If your organization uses an on-premises deployment of the platform, configure storage in the c3server Helm chart.

Prerequisites

See the following documentation before you configure storage for on-premises deployments:

Configure storage for on-premises deployments

The following code snippet shows you how to set values and additional fields in the c3server Helm chart for on-premises storage.

Use the following code as guidance when you configure the c3 and c3-nginx-ingress sections in the c3server Helm chart. See the comments for required inputs that you must modify; the provided values are examples, so enter values for your own deployment.

About c3-nginx-ingress.controller.kind

The c3-nginx-ingress.controller.kind value determines whether the ingress controller runs as a DaemonSet or a Deployment:

  • DaemonSet: See DaemonSet in the Kubernetes documentation.
  • Deployment (non-DaemonSet): See Deployments in the Kubernetes documentation.

C3 AI recommends setting c3-nginx-ingress.controller.kind to Deployment. However, c3-nginx-ingress.controller.kind supports all controller configurations listed in ingress-nginx.
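For example, to run the controller as a DaemonSet instead of the recommended Deployment, you could override the value in your values file. A minimal sketch (the field path follows the ingress-nginx chart conventions shown later in this topic):

YAML
c3-nginx-ingress:
  controller:
    kind: DaemonSet # Runs one controller pod on every node instead of a fixed replica count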

YAML
c3:
  cluster:
    name: <cluster_id> # Replace with your cluster name
  configFramework:
    bootConfig:
      clusterApp:
        code: 1
        configuredServerVersion: <server_version_build> # Set to same version as c3server Helm chart, for example 8.8.7+196
        mode: dev # Or prod
      credentials:
        configRoot: file:///usr/local/share/c3/server/config 
        vaultRoot: file:///usr/local/share/c3/server/vault
        doNotUseWorkloadIdentity: "true" # Required for on-premises deployment. Disables cloud provider discovery, which can cause issues in some infrastructure configurations, such as self-managed Kubernetes clusters hosted on cloud infrastructure.
      enabled: true
      path: /usr/local/share/c3/server/bootconfig/ 
    config: # Creates additional Persistent Volume Claims (PVCs) for the c3/c3 pod
      mountPath: "/usr/local/share/c3/server/config"
      nfs:
        enabled: true
        persistentVolumeClaimName: c3-config-pvc
        storage: 4Gi
        storageClassName: <storage_class_name> # Use storageClassName that supports ReadWriteMany (RWX), for example longhorn
    vault:
      mountPath: "/usr/local/share/c3/server/vault"
      nfs:
        enabled: true
        persistentVolumeClaimName: c3-vault-pvc
        storage: 4Gi
        storageClassName: <storage_class_name> # Use storageClassName that supports ReadWriteMany (RWX), for example longhorn
  data:
    replicas: 0
  image:
    pullSecret: registryc3ai
    tag: <server_version_build> # Set to same version as c3server Helm chart, but use _ instead of +. For example 8.8.7_196
  ingress:
    annotations:
      nginx.ingress.kubernetes.io/app-root: /ai/studio
      nginx.ingress.kubernetes.io/use-regex: "true"
    enabled: true
    host: <example_url> # Set as your cluster URL
    install-nginx-controller: true
  log:
    image:
      pullSecret: registryc3ai
  security:
    deployAuthClusterRole: true
    deployRole: true
  sharedFileSystems: # Creates additional Persistent Volume Claims (PVCs) for the c3/c3 pod
    size: 32Gi
    storageClassName: <storage_class_name> # Use storageClassName that supports ReadWriteMany (RWX), for example longhorn
    volumeMounts:
    - mountPath: /environment
      name: environment-shared
    - mountPath: /c3-filesystem
      name: c3-filesystem
    volumes:
    - emptyDir: {}
      name: environment-shared
    - name: c3-filesystem
      persistentVolumeClaim:
        claimName: c3-filesystem
  task:
    replicas: 0
c3-nginx-ingress: # For additional configurations, see https://github.com/kubernetes/ingress-nginx/blob/controller-v1.12.1/charts/ingress-nginx/values.yaml
  controller:
    admissionWebhooks:
      enabled: false
    config:
      allow-snippet-annotations: "false"
      force-ssl-redirect: true
      proxy-body-size: 20G
      proxy-connect-timeout: "3600"
      proxy-read-timeout: "3600"
      proxy-send-timeout: "3600"
      strict-validate-path-type: false
      worker-shutdown-timeout: "3700"
    extraArgs:
      default-ssl-certificate: <cluster_id>/tls-cert
    image:
      registry: registry.c3.ai
    kind: Deployment # See the previous "About c3-nginx-ingress.controller.kind" section in this topic.
    metrics:
      enabled: true
      serviceMonitor:
        additionalLabels:
          release: c3
        enabled: true
        namespaceSelector:
          any: true
        scrapeInterval: 10s
    publishService:
      enabled: true
    resources:
      limits:
        cpu: "2"
        memory: 2Gi
      requests:
        cpu: 100m
        memory: 0.5Gi
    service: 
      enabled: true
      type: ClusterIP
  imagePullSecrets:
  - name: registryc3ai
  rbac:
    scope: true
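After you set these values, you can apply the configuration with a standard Helm command. The following is a sketch only: the release name, chart reference, values file name, and namespace are all placeholders to replace with your own.

```shell
# Apply the on-premises storage configuration to the cluster.
# <chart_reference>, <cluster_id>, and values.yaml are placeholders;
# substitute the chart source, namespace, and values file for your deployment.
helm upgrade --install c3server <chart_reference> \
  --namespace <cluster_id> \
  --values values.yaml

# Verify that the PVCs created by the chart are bound.
kubectl get pvc --namespace <cluster_id>
```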

Create Persistent Volume Claims (PVCs) for all pods

In an on-premises deployment, you must set up Persistent Volume Claims (PVCs) for all pods for configuration and vault storage.

The c3server Helm chart configuration in the previous step creates PVCs for configFramework and sharedFileSystems in the c3/c3 pod.

To set up PVCs for all other environments, see "(On-premises only) Create Persistent Volume Claims for all pods" in Final Cluster Configuration Steps.
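Note that the c3-filesystem claim referenced in the sharedFileSystems section above must exist before the c3/c3 pod can mount it. A minimal sketch of such a PVC manifest (the namespace, size, and storage class shown here are placeholders; align them with the values in your c3server Helm chart):

YAML
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: c3-filesystem # Must match the claimName in c3.sharedFileSystems.volumes
  namespace: <cluster_id> # Replace with the namespace of your deployment
spec:
  accessModes:
  - ReadWriteMany # Required so multiple pods can share the file system
  resources:
    requests:
      storage: 32Gi # Match c3.sharedFileSystems.size
  storageClassName: <storage_class_name> # Use a storageClassName that supports RWX, for example longhorn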
