Test folder structure
The test/ folder contains test implementations, test data, and test-specific configurations that mirror your package structure. The C3 Agentic AI Platform treats test files as an overlay that provides test-specific behavior without affecting production. This topic explains the test folder structure, how tests are discovered and executed, and how test overlays work.
Understand what belongs in test/
The test folder contains test-specific implementations and data organized by runtime requirement:
- Test implementations (test/<requirement>/): Unit and integration tests organized by runtime (for example, js-rhino, py-server).
- Test data (test/data/): Entity instances for testing with auto-generated IDs.
- Test seed data (test/seed/): Initial database records that override or extend production seed data.
- Test metadata (test/metadata/): Test-specific configuration that remixes production metadata.
- Test canonical data (test/canonical/): Input and expected output data for data integration pipeline tests.
- Tests you want to skip (test/deprecated/): Skip or disable tests by adding them to the deprecated subfolder.
Examples:
test/
  js-rhino/
    test_User_authentication.js    # JavaScript test (Rhino runtime)
  py-server/
    test_Order_processing.py       # Python test (server runtime)
  notebook/
    test_mlmodel_evaluation.json   # Notebook test specification
  data/
    Customer/
      test-customers.json          # Test customer data
  seed/
    CronJob/
      TestBackupJob.json           # Test-specific cron job
  metadata/
    Role/
      TestRole.json                # Test-specific role
  canonical/
    CanonicalExample/
      input/
        values.csv                 # Input data for canonical tests
      expected/
        TargetType/
          expected.csv             # Expected output for target type

Test overlay concept
The test folder functions as an overlay that provides test-specific behavior:
- Test mode: When running in test mode, the C3 Agentic AI Platform loads both production files and test files.
- Override behavior: Test files can override or extend production files.
- Isolation: Test files never deploy to production environments.
- Mirrors structure: Test folder mirrors the package structure for consistency.
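The override behavior can be sketched as a merge keyed by record id. This is a simplified illustration of the overlay concept, not the platform's actual merge implementation:

```javascript
// Sketch: merge production and test seed records by id.
// A test record with a matching id replaces the production record;
// a test record with a new id is added alongside production records.
// (Illustrative only — not the platform's real merge logic.)
function applyTestOverlay(production, testOverlay) {
  const byId = new Map(production.map(rec => [rec.id, rec]));
  for (const rec of testOverlay) {
    byId.set(rec.id, rec); // override if the id exists, extend otherwise
  }
  return [...byId.values()];
}

const prodSeed = [{ id: "backup-db", interval: "daily" }];
const testSeed = [{ id: "backup-db", interval: "hourly" }];
console.log(applyTestOverlay(prodSeed, testSeed));
// [{ id: "backup-db", interval: "hourly" }]
```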
Example of overlay behavior:
Production:
seed/CronJob/BackupDatabase.json (id: "backup-db", interval: "daily")
Test overlay:
test/seed/CronJob/BackupDatabase.json (id: "backup-db", interval: "hourly")
In test mode:
CronJob "backup-db" has interval "hourly" (test value overrides production)

Test implementations (test/<requirement>/)
Test implementations are organized by runtime requirement rather than in a single src/ directory. Each runtime has its own subdirectory under test/.
Runtime-specific organization
Tests are placed in directories that correspond to their execution runtime:
Common runtime directories:
- test/js-rhino/: JavaScript tests for the Rhino runtime.
- test/js-server/: JavaScript tests for the server runtime.
- test/py-server/: Python tests for the server runtime.
- test/py-<version>-server/: Python tests for a specific Python version.
- test/notebook/: Jupyter notebook tests.
- test/browser/: Browser-based UI tests.
Use the following code to check the available runtimes in your application.
// List the available Python runtimes
const runtimes = ImplLanguage.Runtime.fetch().objs;
const pythonRuntimes = runtimes.filter(rt =>
  rt.name.startsWith("py-")
);
pythonRuntimes.forEach(rt => {
  console.log(rt.name + ' ' + rt.languageVersion);
});

// List the available JavaScript runtimes
const jsRuntimes = runtimes.filter(rt =>
  rt.name.startsWith("js-")
);
jsRuntimes.forEach(rt => {
  console.log(rt.name);
});
Test file naming
Test files must follow specific naming conventions to be discovered and executed:
Required naming pattern: Files must start with test_ prefix
Valid extensions: .py, .js, .json, .ts, .tsx, .ipynb
Examples:
test/js-rhino/test_User_authentication.js ✓ Valid
test/py-server/test_Order_processing.py ✓ Valid
test/notebook/test_Analysis.ipynb ✓ Valid
test/js-server/UserTest.js ✗ Invalid (missing test_ prefix)
test/src/test_something.js ✗ Invalid (wrong directory)

Supported test frameworks
JavaScript tests (Jasmine)
JavaScript tests use the Jasmine framework.
Location: test/js-<requirement>/test_*.js. Naming: Must start with test_. Framework: Jasmine (describe/it/expect syntax).
For detailed information, see Write tests with Jasmine.
Python tests (pytest)
Python tests use the pytest framework.
Location: test/py-<requirement>/test_*.py. Naming: Must start with test_. Framework: pytest (test functions must start with test_).
For detailed information, see Write Tests with Pytest.
Notebook tests
Jupyter notebook tests for data science workflows use JSON specification files that reference seeded notebooks.
Location: test/notebook/test_*.json. Naming: Must start with test_. Format: JSON specification file conforming to JupyterNotebook.TestSpec.
Example:
test/notebook/test_mlmodel_evaluation.json
{
"path": "MlTutorials/MlEvaluation.ipynb",
"timeoutSeconds": 300
}

The test specification file references a seeded JupyterNotebook entity (stored in seed/JupyterNotebook/), and the actual notebook file (.ipynb) is stored in the resources/ directory.
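A quick sanity check of such a spec file can be sketched as follows. The path and timeoutSeconds fields come from the example above; the validation rules here are illustrative and do not represent the authoritative JupyterNotebook.TestSpec schema:

```javascript
// Sketch: validate a notebook test spec object before using it.
// (Illustrative checks only — see JupyterNotebook.TestSpec for the
// authoritative schema.)
function validateNotebookSpec(spec) {
  const errors = [];
  if (typeof spec.path !== "string" || !spec.path.endsWith(".ipynb")) {
    errors.push("path must reference an .ipynb notebook");
  }
  if (spec.timeoutSeconds !== undefined &&
      (!Number.isInteger(spec.timeoutSeconds) || spec.timeoutSeconds <= 0)) {
    errors.push("timeoutSeconds must be a positive integer");
  }
  return errors; // empty array means the spec looks well-formed
}

const spec = { path: "MlTutorials/MlEvaluation.ipynb", timeoutSeconds: 300 };
console.log(validateNotebookSpec(spec)); // []
```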
For detailed information, see Authoring Notebook Tests within Jupyter.
Browser tests
Browser-based UI tests including performance testing.
Location: test/browser/test_*.js. Naming: Must start with test_.
For detailed information, see the UI testing documentation.
Test discovery
The C3 Agentic AI Platform discovers tests using a pattern-based approach:
Pattern: test/<requirement>/test_*.<extension>
Key requirements:
- File must be in a runtime-specific directory under test/.
- Filename must start with test_.
- File must not be in a disabled/ directory.
- Extension must be appropriate for the runtime (.py, .js, and so on).
Non-test files: Files in test/ directories that don't match the pattern (like test data, utilities, or configuration) are included in the package but not executed as tests.
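The discovery rules above can be sketched as a path check. The runtime directory list below is a sample, not exhaustive (for example, py-<version>-server directories also exist), and this is an illustration of the documented pattern rather than the platform's actual matcher:

```javascript
// Sketch of the documented discovery pattern:
//   test/<requirement>/test_*.<extension>
// RUNTIME_DIRS is a sample allowlist for illustration only.
const RUNTIME_DIRS = ["js-rhino", "js-server", "py-server", "notebook", "browser"];
const TEST_NAME = /^test_[^/]+\.(py|js|json|ts|tsx|ipynb)$/;

function isDiscoveredTest(path) {
  const parts = path.split("/");
  return parts.length === 3 &&
         parts[0] === "test" &&
         RUNTIME_DIRS.includes(parts[1]) &&  // must be a runtime directory
         TEST_NAME.test(parts[2]);           // must match test_*.<extension>
}

console.log(isDiscoveredTest("test/js-rhino/test_User_authentication.js")); // true
console.log(isDiscoveredTest("test/js-server/UserTest.js"));                // false
console.log(isDiscoveredTest("test/src/test_something.js"));                // false
```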
For comprehensive testing guidance, see C3 AI Testing Overview.
Test data (test/data/)
The test/data/ directory provides entity instances for testing with relaxed requirements compared to production data.
Key differences from production data
Auto-generated IDs: Test data doesn't require explicit IDs. The C3 Agentic AI Platform generates UUIDs automatically if IDs are omitted. This simplifies test data creation since you don't need to manage unique IDs for test entities.
Test isolation: Test data deploys only in test mode and doesn't persist to production.
Repeatable loading: Test data can be reloaded repeatedly during test runs.
When to use test data
Use test/data/ for:
- Creating test entities without worrying about ID management.
- Providing sample data for integration tests.
- Setting up test scenarios with specific data states.
- Testing data validation and constraints.
See Data Folder for more information on data file formats and structure.
Test seed data (test/seed/)
The test/seed/ directory provides test-specific seed data that overrides or extends production seed data.
Override and extension behavior
Override: Test seed data with the same ID as production seed data overrides production values. This is useful for modifying schedules (for example, changing a daily cron job to run every few seconds for testing) or disabling production jobs during tests.
Extension: Test seed data with unique IDs extends production seed data by adding additional test-only instances.
When to use test seed data
Use test/seed/ for:
- Faster execution schedules for cron jobs (for example, every few seconds instead of daily)
- Test-specific configurations
- Disabled or modified production jobs
- Additional test-only seed data instances
See Seed Folder for more information on seed data formats and structure.
Test metadata (test/metadata/)
The test/metadata/ directory provides test-specific metadata that remixes production metadata.
Field-level remixing
Test metadata merges with production metadata at the field level. You specify only the fields you want to override; unspecified fields retain their production values. This enables test-specific configurations like disabling validation or using shorter timeouts without duplicating entire metadata files.
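Field-level remixing can be sketched as a shallow merge. This is a simplification; the platform's remix semantics also cover nested structures:

```javascript
// Sketch: test metadata overrides only the fields it specifies;
// all unspecified fields keep their production values.
// (Illustrative only — not the platform's remix implementation.)
function remix(production, testOverrides) {
  return { ...production, ...testOverrides };
}

// Hypothetical metadata fields, for illustration:
const productionMeta = { name: "Operator", validate: true, timeoutSeconds: 600 };
const testOverrides = { timeoutSeconds: 5 };
console.log(remix(productionMeta, testOverrides));
// { name: "Operator", validate: true, timeoutSeconds: 5 }
```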
When to use test metadata
Use test/metadata/ for:
- Disabling validation for faster tests.
- Shorter timeouts for test scenarios.
- Test-specific routing or workflow configurations.
- Modified behavior for test environments.
See Metadata Folder for more information on metadata formats and remixing.
Test canonical data (test/canonical/)
The test/canonical/ directory stores input and expected output data for data integration (DI) pipeline tests. This directory uses the BaseCanonicalTester framework and has a specific structure that differs from other test data.
Directory structure
Canonical tests follow a strict directory structure:
test/canonical/
  <TypeName>/
    input/
      values.csv           # Input data files
      values.json
    expected/
      <TargetType>/
        expected.csv       # Expected output for this target type
        expected.json

Inner types are supported using dot notation:
test/canonical/
  CanonicalExample.Inner/
    input/
    expected/

Purpose
Canonical data tests validate data integration transforms by:
- Testing transformations: Verifying data flows correctly through the DI pipeline.
- Comparing outputs: Checking actual transform outputs against expected results.
- Preventing regressions: Detecting unexpected changes in transformation behavior.
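The comparison step can be sketched as a row-by-row check of actual transform output against the expected rows. This is illustrative only; BaseCanonicalTester performs the real comparison:

```javascript
// Sketch: compare transform output rows against expected rows, as a
// canonical test conceptually does. Returns a list of mismatch
// descriptions; an empty list means the output matches.
// (Illustrative only — not the BaseCanonicalTester implementation.)
function compareRows(actual, expected) {
  const mismatches = [];
  if (actual.length !== expected.length) {
    mismatches.push(`row count ${actual.length} != ${expected.length}`);
  }
  const n = Math.min(actual.length, expected.length);
  for (let i = 0; i < n; i++) {
    if (JSON.stringify(actual[i]) !== JSON.stringify(expected[i])) {
      mismatches.push(`row ${i} differs`);
    }
  }
  return mismatches;
}

console.log(compareRows([{ id: "1", value: 10 }], [{ id: "1", value: 10 }])); // []
```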
Key differences from other test data
Unlike test/seed/ and test/data/, canonical test files are:
- NOT provisioned to the database: Explicitly excluded from upsertAllSeed() processing.
- Framework-specific: Processed by the BaseCanonicalTester package.
- Jasmine-integrated: Run as Jasmine tests in test frameworks.
- Structure-strict: Must follow the input/ and expected/<TargetType>/ structure.
For detailed information on canonical testing, see the BaseCanonicalTester package documentation.
How test/ relates to other folders
The test folder overlays and interacts with production folders:
- Source folder (src/): Tests in runtime-specific directories (for example, test/js-rhino/, test/py-server/) validate Types declared in src/. Test implementations run against production Type definitions. See Source Folder.
- Seed folder (seed/): Test seed data in test/seed/ overrides or extends production seed data. The C3 Agentic AI Platform merges test seed with production seed in test mode. See Seed Folder.
- Data folder (data/): Test data in test/data/ provides test-specific entity instances with auto-generated IDs. Unlike production data in data/, test data is loaded only in test mode. See Data Folder.
- Metadata folder (metadata/): Test metadata in test/metadata/ remixes production metadata at the field level for test-specific configurations. See Metadata Folder.
Test execution modes
The C3 Agentic AI Platform recognizes test mode through configuration:
Enable test mode
Enable test mode by:
- AppMode configuration: Setting the application mode to include test overlay.
- Test execution: Automatically enabled when running tests.
- Environment configuration: Test environments configured with test mode.
What happens in test mode
When in test mode, the C3 Agentic AI Platform:
- Loads all production files normally
- Loads test overlay files from test/ directories
- Merges test data with production data
- Overrides production values with test values where IDs match
- Runs test implementations
- Reports test results
Test mode ensures:
- Test files never deploy to production.
- Test data doesn't persist to production databases.
- Production behavior remains unchanged.
- Tests can safely modify test data.