Represents token usage and cost information for an AI model execution.

This class tracks the number of tokens used in both the prompt (input) and completion (output) phases of an AI model execution, along with optional cost information when provided by the AI provider.

ModelUsage

Since 2.43.0

Constructors

  • Creates a new ModelUsage instance.

    Parameters

    • promptTokens: number

      Number of tokens used in the prompt/input

    • completionTokens: number

      Number of tokens generated in the completion/output

    • Optional cost: number

      Optional cost of the execution

    • Optional costCurrency: string

      Optional currency code for the cost (e.g., 'USD', 'EUR', 'GBP')

    Returns ModelUsage
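A minimal sketch of constructing an instance, assuming a class shape that matches the documented constructor signature (the real implementation may differ in details):

```typescript
// Hypothetical minimal shape matching the documented constructor;
// illustrative only, not the actual ModelUsage source.
class ModelUsage {
  constructor(
    public promptTokens: number,
    public completionTokens: number,
    public cost?: number,
    public costCurrency?: string,
  ) {}

  // Calculated total of prompt + completion tokens, as documented.
  get totalTokens(): number {
    return this.promptTokens + this.completionTokens;
  }
}

// Token counts are required; cost and currency are optional.
const usage = new ModelUsage(1200, 350, 0.0042, 'USD');
console.log(usage.totalTokens); // 1550
```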

Properties

completionTime?: number

Optional time in milliseconds for the model to generate the completion/response tokens. This is a provider-specific timing metric that may not be available from all providers.

completionTokens: number

Number of tokens generated by the model in its response. This represents the length of the model's output.

cost?: number

Optional cost of this execution. The currency is specified in the costCurrency field. Some providers (like Anthropic) provide this information directly in their API responses.

costCurrency?: string

Optional ISO 4217 currency code for the cost field (e.g., 'USD', 'EUR', 'GBP', 'JPY'). If not specified when cost is provided, the currency is provider-specific.
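Because both fields are optional, consumers need to handle the cases where cost or its currency is absent. A sketch of one way to do this, assuming only the documented cost and costCurrency fields (the formatCost helper is illustrative, not part of the API):

```typescript
// Only the two documented optional fields are assumed here.
interface UsageCost {
  cost?: number;
  costCurrency?: string;
}

// Hypothetical helper: render the cost using the ISO 4217 currency code,
// falling back gracefully when either field is missing.
function formatCost(usage: UsageCost): string {
  if (usage.cost === undefined) {
    return 'cost unavailable';
  }
  if (usage.costCurrency === undefined) {
    // Per the docs, the currency is provider-specific in this case.
    return `${usage.cost} (provider-specific currency)`;
  }
  return new Intl.NumberFormat('en-US', {
    style: 'currency',
    currency: usage.costCurrency,
    maximumFractionDigits: 6, // per-call costs are often fractions of a cent
  }).format(usage.cost);
}

const formatted = formatCost({ cost: 0.0042, costCurrency: 'USD' });
console.log(formatted);
```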

promptTime?: number

Optional time in milliseconds for the model to ingest and process the prompt. This is a provider-specific timing metric that may not be available from all providers.

promptTokens: number

Number of tokens used in the prompt/input phase. This includes all tokens from system messages, user messages, and any other context provided to the model.

queueTime?: number

Optional queue time in milliseconds before the model started processing the request. This is a provider-specific timing metric that may not be available from all providers.

Accessors

  • get totalTokens(): number
  • Calculated total number of tokens (prompt + completion). This is useful for tracking overall token usage against limits.

    Returns number

    The sum of promptTokens and completionTokens
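Since totalTokens is useful for tracking overall usage against limits, a common pattern is to aggregate it across executions. A sketch under the assumption that each usage record exposes the documented promptTokens and completionTokens fields (the totalAcross helper and budget value are hypothetical):

```typescript
// Only the two documented token fields are assumed.
type Usage = { promptTokens: number; completionTokens: number };

// Hypothetical helper: sum totalTokens over several executions,
// mirroring the accessor's prompt + completion calculation.
function totalAcross(usages: Usage[]): number {
  return usages.reduce(
    (sum, u) => sum + u.promptTokens + u.completionTokens,
    0,
  );
}

const runs: Usage[] = [
  { promptTokens: 1200, completionTokens: 300 },
  { promptTokens: 800, completionTokens: 150 },
];

const total = totalAcross(runs); // 2450
const TOKEN_BUDGET = 10_000; // hypothetical per-session budget
const withinBudget = total <= TOKEN_BUDGET;
console.log(total, withinBudget);
```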