Application Prerequisites
Before deploying the C3 Generative AI Application, ensure all required access, credentials, and infrastructure are in place. These prerequisites are essential to initialize the application in your C3 AI Studio cluster:
- Cluster access: Access to a customer-specific environment.
- Deployment package: A certified deployment package.
- LLM credentials: API access to supported LLM providers.
- GPU access (optional): For faster document processing.
- Hibernation policies: Information about environment and application hibernation.
Cluster access
The application deploys to customer-specific clusters hosted on C3 AI Studio.
You must have:
- Access to your cluster-specific C3 AI Studio URL.
- Permissions to start environments, access C3 AI Console, and execute browser-based setup scripts.
Deployment package
The deployment artifact is named genAiSearch. Confirm the following details:
- The package name: genAiSearch.
- The certified semantic version (For example, 1.3.179-105).
Use these details to create a new environment and deploy the application.
LLM credentials
The application supports customer-provided credentials for supported LLM providers. You must have API access to at least one supported LLM provider.
| Provider | Supported Models |
|---|---|
| OpenAI (Azure-hosted) | gpt-3.5-turbo, gpt-4 |
| Claude (AWS-Hosted) | claude-v2, claude-v3-haiku |
| Google Vertex AI | gemini-pro, gemini-2.0-flash |
Collect the following information before setting up the C3 Generative AI Application:
OpenAI (required fields)
- API Key
- API Base URL (For example, https://your-instance-name.openai.azure.com/)
- API Version (For example, 2024-02-01)
Claude (required fields)
- API Key
- Secret Access Key
- AWS Region (For example, us-west-2)
- Endpoint URL (For example, https://bedrock-runtime.us-west-2.amazonaws.com)
Google Vertex AI (required fields)
- GCP Service Account JSON (containing private key and credentials).
- GCP Project ID.
- Location (For example, us-central1).
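As a quick pre-deployment check, the required fields above can be collected from environment variables and validated before you begin setup. This is a minimal sketch: the environment-variable names below are hypothetical placeholders, not names prescribed by C3 AI Studio; adapt them to wherever you store credentials.

```python
import os

# Hypothetical environment-variable names for each provider's
# required fields -- rename to match your own credential store.
REQUIRED_FIELDS = {
    "openai": ["OPENAI_API_KEY", "OPENAI_API_BASE", "OPENAI_API_VERSION"],
    "claude": ["AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY",
               "AWS_REGION", "BEDROCK_ENDPOINT_URL"],
    "vertex": ["GOOGLE_APPLICATION_CREDENTIALS", "GCP_PROJECT_ID",
               "GCP_LOCATION"],
}

def missing_fields(provider: str, env=os.environ) -> list:
    """Return the required fields that are unset or empty for a provider."""
    return [field for field in REQUIRED_FIELDS[provider] if not env.get(field)]
```

Running `missing_fields("openai")` before deployment surfaces any gaps early, rather than partway through application initialization.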
GPU access (optional)
If your use case requires faster document processing or GPU-dependent models, request GPU access from the C3 AI DevOps team.
If GPUs are unavailable, configure the application to use CPU-based layout parsers. This setup supports most evaluation scenarios.
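The CPU fallback can be sketched as a simple selection step. The parser identifiers below are hypothetical placeholders (the actual names come from your application configuration), and detecting a GPU by looking for `nvidia-smi` on the PATH is only a local heuristic:

```python
import shutil

def choose_layout_parser() -> str:
    """Fall back to a CPU-based layout parser when no GPU is visible.

    Parser names here are hypothetical; substitute the identifiers
    your deployment actually exposes.
    """
    if shutil.which("nvidia-smi"):  # crude check for an NVIDIA driver
        return "gpu-layout-parser"
    return "cpu-layout-parser"
```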
Hibernation policies
C3 AI Studio uses hibernation to minimize cost by suspending inactive environments:
- Multi-node applications: Hibernate at 9:00 PM PST by default.
- Single-node environments: Hibernate after 4 hours of inactivity by default.
You can exclude environments from hibernation or create custom hibernation schedules if needed. When an environment hibernates, you can manually resume it in C3 AI Studio.
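The default single-node policy above can be illustrated with a small sketch. The 4-hour threshold is taken from the stated default; the function name is hypothetical and not part of any C3 AI API:

```python
from datetime import datetime, timedelta

# Default single-node inactivity threshold from the policy above.
INACTIVITY_LIMIT = timedelta(hours=4)

def would_hibernate(last_activity: datetime, now: datetime) -> bool:
    """Return True when a single-node environment has been idle
    at least as long as the default hibernation threshold."""
    return now - last_activity >= INACTIVITY_LIMIT
```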
Managing hibernation policies requires the StudioAdmin role. For details on configuring these settings, refer to Hibernate Environments and Applications.
See also
After confirming all prerequisites, proceed to C3 Generative AI Application Initialization for instructions on setting up and configuring the application.