Base class for all LLM sub-class implementations. Not all sub-classes will support all methods; if a method is not supported, an exception will be thrown.
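A minimal sketch of that contract, assuming illustrative class and method names (not the library's actual API): a sub-class that lacks a capability throws rather than failing silently.

```typescript
// Hypothetical base class and sub-class; names are illustrative only.
abstract class BaseLLM {
  protected _additionalSettings: Record<string, any> = {};

  get SupportsStreaming(): boolean {
    return false;
  }

  // Unsupported methods throw instead of returning a partial result.
  async streamChatCompletion(prompt: string): Promise<string> {
    throw new Error(`${this.constructor.name} does not support streaming`);
  }
}

class NonStreamingLLM extends BaseLLM {}

const llm = new NonStreamingLLM();
console.log(llm.SupportsStreaming); // false
llm.streamChatCompletion("hi").catch(e => console.log(e.message));
```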

Hierarchy

Constructors

Properties

_additionalSettings: Record<string, any> = {}

Protected property to store additional provider-specific settings

thinkingStreamState: ThinkingStreamState = null

State tracking for streaming thinking extraction. Providers should initialize this if they support thinking models.

Accessors

  • get AdditionalSettings(): Record<string, any>
  • Get the current additional settings

    Returns Record<string, any>

  • get SupportsStreaming(): boolean
  • Check if this provider supports streaming

    Returns boolean

    true if streaming is supported, false otherwise

  • get apiKey(): string
  • Only sub-classes can access the API key

    Returns string
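The three accessors can be sketched as follows; the backing fields and constructor are assumptions, and `apiKey` is `protected` so only sub-classes can read it.

```typescript
// Illustrative sketch of the accessors; internals are assumptions.
class Provider {
  private _apiKey: string;
  protected _additionalSettings: Record<string, any> = {};

  constructor(key: string) {
    this._apiKey = key;
  }

  get AdditionalSettings(): Record<string, any> {
    return this._additionalSettings;
  }

  get SupportsStreaming(): boolean {
    return true;
  }

  // Protected: accessible from sub-classes only, not from callers.
  protected get apiKey(): string {
    return this._apiKey;
  }
}

const p = new Provider("sk-test");
console.log(p.SupportsStreaming); // true
```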

Methods

  • Process multiple chat completion requests in parallel. This is useful for:

    • Generating multiple variations with different parameters (temperature, etc.)
    • Getting multiple responses to compare or select from
    • Improving reliability by sending the same request multiple times

    Parameters

    Returns Promise<ChatResult[]>

    Promise resolving to an array of ChatResults in the same order as the input params
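A hypothetical usage sketch: fan several parameter sets out in parallel with `Promise.all`, which preserves input order. The `chat` function, `chatCompletions` name, and params shape are assumptions, not the documented signature.

```typescript
interface ChatResult {
  message: string;
}

// Stand-in for a single provider call (assumption).
async function chat(params: { prompt: string; temperature: number }): Promise<ChatResult> {
  return { message: `echo(t=${params.temperature}): ${params.prompt}` };
}

// Results resolve in the same order as the input params.
async function chatCompletions(
  paramsList: { prompt: string; temperature: number }[]
): Promise<ChatResult[]> {
  return Promise.all(paramsList.map(chat));
}

chatCompletions([
  { prompt: "summarize", temperature: 0.2 },
  { prompt: "summarize", temperature: 0.9 },
]).then(rs => console.log(rs.length)); // 2
```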

Clear all additional settings. This is useful for resetting the provider's state or when switching between different configurations.

    Returns void

Set additional provider-specific settings. Subclasses should override this method to validate required settings.

    Parameters

    • settings: Record<string, any>

      Provider-specific settings

    Returns void
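A sketch of the set/clear pair with sub-class validation; the method names and the `region` setting are made-up examples, not real settings of any provider.

```typescript
// Hypothetical holder demonstrating validated set + clear (names assumed).
class SettingsHolder {
  protected _additionalSettings: Record<string, any> = {};

  SetAdditionalSettings(settings: Record<string, any>): void {
    // A sub-class would validate its required settings here.
    if (settings.region !== undefined && typeof settings.region !== "string") {
      throw new Error("region must be a string");
    }
    this._additionalSettings = { ...this._additionalSettings, ...settings };
  }

  ClearAdditionalSettings(): void {
    this._additionalSettings = {};
  }

  get AdditionalSettings(): Record<string, any> {
    return this._additionalSettings;
  }
}
```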

  • Create a provider-specific streaming request

    Parameters

    Returns Promise<any>

    A stream object that can be iterated with for await
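The returned stream can be consumed with `for await`. Here an async generator stands in for a real provider stream; the function name and chunk shape are assumptions.

```typescript
// Stand-in for a provider-specific streaming request (assumption).
async function* createStreamingRequest(prompt: string): AsyncGenerator<string> {
  for (const piece of ["Hel", "lo, ", prompt]) {
    yield piece;
  }
}

async function run(): Promise<void> {
  let text = "";
  // The documented contract: the stream is iterable with `for await`.
  for await (const chunk of createStreamingRequest("world")) {
    text += chunk;
  }
  console.log(text); // "Hello, world"
}
run();
```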

Extract thinking content from non-streaming content. This method handles case-insensitive extraction of thinking blocks.

    Parameters

    • content: string

    Returns {
        content: string;
        thinking?: string;
    }

    • content: string
    • Optional thinking?: string
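A sketch of case-insensitive extraction matching the documented return shape; the `<think>` tag name is an assumption (see the thinking tag format method below, which providers can override).

```typescript
// Case-insensitive extraction of a <think>…</think> block (tag name assumed).
function extractThinking(content: string): { content: string; thinking?: string } {
  const match = content.match(/<think>([\s\S]*?)<\/think>/i);
  if (!match) {
    return { content };
  }
  return {
    content: content.replace(match[0], "").trim(),
    thinking: match[1].trim(),
  };
}

console.log(extractThinking("<THINK>plan steps</THINK>Answer: 42"));
// { content: "Answer: 42", thinking: "plan steps" }
```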
  • Create the final response object from streaming results

    Parameters

    • accumulatedContent: string

      The complete content accumulated from all chunks

    • lastChunk: any

      The last chunk received from the stream

    • usage: any

      The usage information (tokens, etc.)

    Returns ChatResult

    A complete ChatResult object
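Loosely, the method assembles the accumulated text plus usage into the result object. The `ChatResult` shape below is a guessed stand-in for illustration only; the real interface almost certainly differs.

```typescript
// Assumed, simplified ChatResult shape (not the library's actual interface).
interface ChatResult {
  success: boolean;
  data: {
    choices: { message: { role: string; content: string } }[];
    usage?: any;
  };
}

function createFinalResponse(accumulatedContent: string, lastChunk: any, usage: any): ChatResult {
  return {
    success: true,
    data: {
      choices: [{ message: { role: "assistant", content: accumulatedContent } }],
      usage,
    },
  };
}
```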

Get the thinking tag format for this provider. Providers can override this to customize the thinking tag format.

    Returns {
        close: string;
        open: string;
    }

    • close: string
    • open: string
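A sketch of a default plus an override, per the description above; both tag values are assumptions.

```typescript
// Hypothetical default format and a provider override (tag values assumed).
class DefaultTags {
  getThinkingTagFormat(): { open: string; close: string } {
    return { open: "<think>", close: "</think>" };
  }
}

class CustomTags extends DefaultTags {
  getThinkingTagFormat(): { open: string; close: string } {
    return { open: "<reasoning>", close: "</reasoning>" };
  }
}

console.log(new CustomTags().getThinkingTagFormat().open); // "<reasoning>"
```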
Template method for handling streaming chat completion. This implements the common pattern across providers while delegating provider-specific logic to abstract methods.

    Parameters

    Returns Promise<ChatResult>
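The template-method pattern described here can be sketched as follows: the base class drives the streaming loop while sub-classes supply the provider-specific steps. All names are illustrative.

```typescript
// Illustrative template method: base class owns the loop,
// sub-classes implement the provider-specific hooks.
abstract class StreamingBase {
  protected abstract createStream(prompt: string): AsyncGenerator<{ content: string }>;
  protected abstract finalize(content: string): string;

  async streamChat(prompt: string): Promise<string> {
    let acc = "";
    for await (const chunk of this.createStream(prompt)) {
      acc += chunk.content;
    }
    return this.finalize(acc);
  }
}

class EchoProvider extends StreamingBase {
  protected async *createStream(prompt: string): AsyncGenerator<{ content: string }> {
    yield { content: prompt };
  }
  protected finalize(content: string): string {
    return content.toUpperCase();
  }
}
```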

  • Initialize thinking stream state for streaming extraction

    Returns void

Process a streaming chunk with thinking extraction. This method handles case-insensitive extraction across chunk boundaries.

    Parameters

    • rawContent: string

    Returns string
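A simplified sketch of cross-chunk extraction: buffer incoming text so a tag split across two chunks is still recognized, and only emit non-thinking content. The tag names and buffering strategy are assumptions; a real implementation would keep the thinking text rather than discard it.

```typescript
const OPEN = "<think>";
const CLOSE = "</think>";

class ThinkingFilter {
  private buffer = "";
  private inThinking = false;

  // Feed one raw chunk; returns the visible (non-thinking) text found so far.
  process(rawContent: string): string {
    this.buffer += rawContent;
    let out = "";
    for (;;) {
      if (!this.inThinking) {
        const i = this.buffer.toLowerCase().indexOf(OPEN);
        if (i === -1) {
          // Keep a short tail in case an opening tag is split across chunks.
          const keep = Math.min(this.buffer.length, OPEN.length - 1);
          out += this.buffer.slice(0, this.buffer.length - keep);
          this.buffer = this.buffer.slice(this.buffer.length - keep);
          return out;
        }
        out += this.buffer.slice(0, i);
        this.buffer = this.buffer.slice(i + OPEN.length);
        this.inThinking = true;
      } else {
        const j = this.buffer.toLowerCase().indexOf(CLOSE);
        if (j === -1) return out; // wait for the closing tag
        this.buffer = this.buffer.slice(j + CLOSE.length);
        this.inThinking = false;
      }
    }
  }

  // Emit whatever is still buffered once the stream ends.
  flush(): string {
    const rest = this.inThinking ? "" : this.buffer;
    this.buffer = "";
    this.inThinking = false;
    return rest;
  }
}
```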

  • Process a streaming chunk from the provider

    Parameters

    • chunk: any

      The raw chunk from the provider

    Returns {
        content: string;
        finishReason?: string;
        usage?: any;
    }

    Processed content and metadata

    • content: string
    • Optional finishReason?: string
    • Optional usage?: any
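As an example of mapping a raw provider chunk to the documented return shape, here is a sketch assuming an OpenAI-style streaming chunk layout (one common provider format, not necessarily what any given sub-class receives).

```typescript
// Maps an OpenAI-style streaming chunk (assumed layout) to the
// documented { content, finishReason?, usage? } shape.
function processChunk(chunk: any): { content: string; finishReason?: string; usage?: any } {
  const choice = chunk?.choices?.[0];
  return {
    content: choice?.delta?.content ?? "",
    finishReason: choice?.finish_reason ?? undefined,
    usage: chunk?.usage ?? undefined,
  };
}

console.log(processChunk({ choices: [{ delta: { content: "Hi" } }] }).content); // "Hi"
```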
  • Check if the provider supports thinking models. Providers should override this to return true if they support thinking extraction.

    Returns boolean