
Evaluate Metrics

After defining and deploying metrics, you can evaluate them.

There are four key parts to evaluating a metric:

  • Instance of Type evaluated — IDs for the instances of the source type in the metric definition

  • Name of metric — For example, AveragePower

  • Time Range — Start and end datetimes of the evaluation period. The start date is inclusive; the end date is exclusive.

    • start & end
  • Interval — One of the standard normalized time intervals used most often for timeseries:

    • SECOND, MINUTE, FIVE_MINUTE, TEN_MINUTE, QUARTER_HOUR, HALF_HOUR, HOUR, DAY, MONTH, YEAR
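The inclusive-start/exclusive-end convention determines how many data points an evaluation returns. The sketch below is plain JavaScript (not a C3 platform API) illustrating the arithmetic for the MONTH interval:

```javascript
// Plain-JavaScript illustration (not a C3 API): counting how many MONTH
// intervals fall in a range with an inclusive start and exclusive end.
function monthIntervals(start, end) {
  const s = new Date(start);
  const e = new Date(end);
  // Because the end is exclusive, a range ending 2015-01-01 does not
  // include January 2015.
  return (e.getUTCFullYear() - s.getUTCFullYear()) * 12 +
         (e.getUTCMonth() - s.getUTCMonth());
}

monthIntervals("2011-01-01", "2015-01-01"); // 48 monthly data points
monthIntervals("2011-01-01", "2011-02-01"); // 1 (January 2011 only)
```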

Process and definition

To be evaluated, a Type must mix in a Type called FeatureEvaluatable. Mixing in this Type makes instances evaluatable and provides the applicable APIs for evaluating metrics.

Consider the example below:

Type
entity type SmartBulb extends LightBulb mixes FeatureEvaluatable schema name "SMARTBULB" {

}

SmartBulb mixes FeatureEvaluatable. If you run the code snippet below in the C3 Console, you receive a list of all the metrics that have been defined on the SmartBulb Type.

JavaScript
SmartBulb.listMetrics()

FeatureEvaluatable provides several APIs to any Type that mixes it in:

  • SourceType.listMetrics(): list all metrics that are available on the SourceType
  • SourceType.evalMetric(s)(spec): evaluate provisioned metric(s) on the SourceType
  • SourceType.evalMetricsWithMetadata(spec, overrideMetrics): evaluate metrics on the SourceType, optionally overriding provisioned metric definitions with those supplied in overrideMetrics
  • SourceType.rollupMetric(s)(spec): aggregate metric results across sources

The evalMetric(s) APIs evaluate the specified metrics based on the spec. Use evalMetric to evaluate a single metric on a single source.

EvalMetrics spec

To evaluate a metric, you need to create a specification. The spec is the input that the evaluation APIs take when they run.

Both the evalMetrics() and evalMetricsWithMetadata() APIs take an EvalMetricsSpec as input. EvalMetricsSpec is a Type that contains the following important fields:

  • ids — IDs of the source objects on which you want to evaluate the metrics.
  • expressions — List of the metrics that you wish to evaluate.
  • start and end — Datetime fields holding the start and end dates of the period for which you want to evaluate your metrics.
  • interval — The desired interval for the Timeseries output.

Below, a variable spec is created as an instance of the EvalMetricsSpec Type:

JavaScript
// For this spec, the AveragePower Metric is evaluated on SmartBulb 1 from January 1st 2011 until January 1st 2015 at a
// monthly interval.
var spec = {
  ids: ["SMBLB1"],
  expressions: ["AveragePower"],
  start: "2011-01-01",
  end: "2015-01-01",
  interval: "MONTH"
};
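With the spec in hand, evaluation is a single console call such as SmartBulb.evalMetrics(spec). The sketch below uses a local stand-in for the platform call (the stand-in and its result shape are illustrative assumptions, not the real C3 API) just to show that one result is produced per combination of source id and metric expression:

```javascript
// Illustrative stand-in (assumption, not the real C3 API): on the
// platform you would call SmartBulb.evalMetrics(spec) directly.
function evalMetricsStub(spec) {
  // Assume one result per (source id, metric expression) pair.
  return spec.ids.flatMap(id =>
    spec.expressions.map(expr => ({ id: id, metric: expr }))
  );
}

var spec = {
  ids: ["SMBLB1"],
  expressions: ["AveragePower"],
  start: "2011-01-01",
  end: "2015-01-01",
  interval: "MONTH"
};

var results = evalMetricsStub(spec);
results.length; // 1: SMBLB1 x AveragePower
```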

An example using evalMetricsWithMetadata() looks like this:

JavaScript
var metric = SimpleMetric.make({
  id: "AveragePower_SmartBulb",
  name: "AveragePower",
  srcType: "SmartBulb",
  path: "bulbMeasurements",
  expression: "avg(avg(normalized.data.power))"
});

var result = SmartBulb.evalMetricsWithMetadata(spec, [metric]);

Exploring time series results

The Timeseries object resulting from an evalMetric() call has several member methods that make it easy to access and manipulate the data:

Method                                      Return Value
start()                                     Start date of the Timeseries
end()                                       End date of the Timeseries
interval()                                  Normalization interval of the Timeseries
dates()                                     Calendar-normalized dates of the Timeseries
data()                                      Timeseries data
missing()                                   Fraction of missing data from the Timeseries
toInterval(interval, aggregationFunction)   Convert the Timeseries to the specified interval
aggregate(aggregationFunction, prorate)     Aggregate all values of the Timeseries into a scalar
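A plain-JavaScript sketch of how these accessors compose; ts below is a mock with the same method names (an illustrative assumption), standing in for a real Timeseries returned by evalMetric():

```javascript
// Mock Timeseries (illustrative only, not the platform Type) exposing
// some of the accessor names from the table above.
const ts = {
  dates:   () => ["2011-01-01", "2011-02-01", "2011-03-01"],
  data:    () => [11.2, null, 12.8],
  missing: function () {
    const d = this.data();
    return d.filter(v => v == null).length / d.length;
  }
};

// Pair each calendar-normalized date with its value.
const points = ts.dates().map((d, i) => ({ date: d, value: ts.data()[i] }));

points.length;  // 3 monthly points
ts.missing();   // 1/3 of the values are missing
```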
