Server-side AI Engine that wraps AIEngineBase and adds server-only capabilities.

This class uses composition (containment) rather than inheritance to avoid duplicate data loading. It delegates all base functionality to AIEngineBase.Instance while adding server-specific features like embeddings, vector search, and LLM execution.

Description

ONLY USE ON SERVER-SIDE. For metadata only, use the AIEngineBase class which can be used anywhere.

Methods

AgenteNoteTypeIDByName CacheResult CanUserDeleteAgent CanUserEditAgent CanUserRunAgent CanUserViewAgent CheckResultCache ClearAgentPermissionsCache Config
EmbedText EmbedTextLocal ExecuteAIAction ExecuteEntityAIAction ExtractTextAndProcessWithLLM
FindSimilarActions FindSimilarAgentExamples FindSimilarAgentNotes FindSimilarAgents
GetAccessibleAgents GetActiveModelCost GetAgentByName GetAgentConfigurationPresetByName GetAgentConfigurationPresets GetAgentStepByID GetAgentSteps
GetConfigurationChain GetConfigurationParam GetConfigurationParams GetConfigurationParamsWithInheritance GetCredentialBindingsForTarget GetDefaultAgentConfigurationPreset
GetGlobalObjectStore GetHighestPowerLLM GetHighestPowerModel GetPathsFromStep GetStringOutputFromActionResults GetSubAgents GetUserAgentPermissions
HasCredentialBindings ParallelLLMCompletions PrepareChatMessages PrepareLLMInstance ProcessContentItemText
RefreshAgentPermissionsCache RegenerateEmbeddings SimpleLLMCompletion
castValueAsCorrectType chunkExtractedText convertLastRunDateToTimezone deleteInvalidContentItem
getAdditionalContentTypePrompt getAllContentSources getChecksumFromText getChecksumFromURL
getContentFileTypeName getContentItemDescription getContentItemIDFromURL getContentItemParams
getContentSourceLastRunDate getContentSourceParams getContentSourceTypeName getContentTypeName getDefaultContentSourceTypeParams
getDriver getLLMPrompts markupUserMessage
parseDOCX parseFileFromPath parseHTML parsePDF parseStringArray
processChunkWithLLM promptAndRetrieveResultsFromLLM
saveContentItemTags saveLLMResults saveProcessRun saveResultsToContentItemAttribute
setSubclassContentSourceType stringToBoolean getInstance

Constructors

Properties

EmbeddingModelTypeName: string
LocalEmbeddingModelVendorName: string

Accessors

  • get ActionVectorService(): SimpleVectorService<ActionEmbeddingMetadata>
  • Get the action vector service for semantic search. Initialized during Config - will be null before AIEngine.Config() completes.

    Returns SimpleVectorService<ActionEmbeddingMetadata>

  • get Actions(): AIActionEntity[]
  • Returns AIActionEntity[]

    Deprecated

    Use the new Action system instead

  • get AgentVectorService(): SimpleVectorService<AgentEmbeddingMetadata>
  • Get the agent vector service for semantic search. Initialized during Config - will be null before AIEngine.Config() completes.

    Returns SimpleVectorService<AgentEmbeddingMetadata>

  • get Agents(): AIAgentEntityExtended[]
  • Returns AIAgentEntityExtended[]

  • get Base(): AIEngineBase
  • Access to the underlying AIEngineBase instance

    Returns AIEngineBase

  • get EntityAIActions(): EntityAIActionEntity[]
  • Returns EntityAIActionEntity[]

    Deprecated

    Use the new Action system instead

  • get GlobalKey(): string
  • Returns string

  • get HighestPowerLocalEmbeddingModel(): AIModelEntityExtended
  • Returns the highest power local embedding model

    Returns AIModelEntityExtended

  • get LanguageModels(): AIModelEntityExtended[]
  • Returns AIModelEntityExtended[]

  • get Loaded(): boolean
  • Returns true if both the base engine and server capabilities are loaded

    Returns boolean

  • get LocalEmbeddingModels(): AIModelEntityExtended[]
  • Returns an array of the local embedding models, sorted with the highest power models first

    Returns AIModelEntityExtended[]

  • get LowestPowerLocalEmbeddingModel(): AIModelEntityExtended
  • Returns the lowest power local embedding model

    Returns AIModelEntityExtended

  • get Models(): AIModelEntityExtended[]
  • Returns AIModelEntityExtended[]

  • get PromptCategories(): AIPromptCategoryEntityExtended[]
  • Returns AIPromptCategoryEntityExtended[]

  • get Prompts(): AIPromptEntityExtended[]
  • Returns AIPromptEntityExtended[]

  • get SystemActions(): ActionEntity[]
  • Get all available actions loaded from the database. Loaded during Config() - will be empty before AIEngine.Config() completes. NOTE: This returns ActionEntity (MJ Action system), not the deprecated AIActionEntity. For deprecated AI Actions, see the inherited Actions property.

    Returns ActionEntity[]
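The power-ranked accessors above (LocalEmbeddingModels, HighestPowerLocalEmbeddingModel, LowestPowerLocalEmbeddingModel) imply a sort over a model's power rating. A minimal sketch of that selection logic, assuming a numeric PowerRank field (the actual field name on AIModelEntityExtended may differ):

```typescript
// Hypothetical, simplified model shape; the real AIModelEntityExtended
// carries many more fields.
interface ModelLike {
  Name: string;
  PowerRank: number; // assumed numeric power rating
}

// Sort descending by power so index 0 is the highest-power model,
// mirroring the documented "highest power models first" ordering.
function sortByPowerDesc(models: ModelLike[]): ModelLike[] {
  return [...models].sort((a, b) => b.PowerRank - a.PowerRank);
}

function highestPower(models: ModelLike[]): ModelLike | undefined {
  return sortByPowerDesc(models)[0];
}

function lowestPower(models: ModelLike[]): ModelLike | undefined {
  const sorted = sortByPowerDesc(models);
  return sorted[sorted.length - 1];
}
```

The spread copy keeps the source list untouched, matching the read-only nature of the accessors.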

Methods

AgenteNoteTypeIDByName

  • Parameters

    • agentNoteTypeName: string

    Returns string

CacheResult

  • Parameters

    • model: AIModelEntityExtended
    • prompt: AIPromptEntityExtended
    • promptText: string
    • resultText: string

    Returns Promise<boolean>

CanUserDeleteAgent

  • Parameters

    Returns Promise<boolean>

CanUserEditAgent

  • Parameters

    Returns Promise<boolean>

CanUserRunAgent

  • Parameters

    Returns Promise<boolean>

CanUserViewAgent

  • Parameters

    Returns Promise<boolean>

Config

  • Configures the AIEngine by first ensuring AIEngineBase is configured, then loading server-specific capabilities (embeddings, actions, etc.).

    This method is safe to call from multiple places concurrently - it will return the same promise to all callers during loading.

    Parameters

    • Optional forceRefresh: boolean

      If true, forces a full reload even if already loaded

    • Optional contextUser: UserInfo

      User context for server-side operations (required)

    • Optional provider: IMetadataProvider

      Optional metadata provider override

    Returns Promise<void>

EmbedText

  • Helper method to instantiate a class instance for the given model and calculate an embedding vector from the provided text.

    Parameters

    • model: AIModelEntityExtended
    • text: string
    • Optional apiKey: string

    Returns Promise<EmbedTextResult>

EmbedTextLocal

  • Helper method that generates an embedding for the given text using the highest power local embedding model.

    Parameters

    • text: string

    Returns Promise<{
        model: AIModelEntityExtended;
        result: EmbedTextResult;
    }>

FindSimilarActions

  • Find actions similar to a task description using semantic search.

    Parameters

    • taskDescription: string
    • Optional topK: number
    • Optional minSimilarity: number

    Returns Promise<ActionMatchResult[]>

FindSimilarAgentExamples

  • Find agent examples similar to the query text using semantic search.

    Parameters

    • queryText: string
    • Optional agentId: string
    • Optional userId: string
    • Optional companyId: string
    • Optional topK: number
    • Optional minSimilarity: number

    Returns Promise<ExampleMatchResult[]>

FindSimilarAgentNotes

  • Find agent notes similar to the query text using semantic search.

    Parameters

    • queryText: string
    • Optional agentId: string
    • Optional userId: string
    • Optional companyId: string
    • Optional topK: number
    • Optional minSimilarity: number

    Returns Promise<NoteMatchResult[]>

FindSimilarAgents

  • Find agents similar to a task description using semantic search.

    Parameters

    • taskDescription: string
    • Optional topK: number
    • Optional minSimilarity: number

    Returns Promise<AgentMatchResult[]>
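The FindSimilar* methods above share the same semantic-search shape: embed the query, score candidates, filter by minSimilarity, and keep the topK. A self-contained sketch over precomputed vectors (the real methods run through SimpleVectorService and an embedding model):

```typescript
interface Candidate { name: string; vector: number[] }
interface MatchResult { name: string; similarity: number }

// Cosine similarity between two equal-length vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Score every candidate, drop those below the similarity floor,
// then keep the best topK in descending order.
function findSimilar(
  query: number[],
  candidates: Candidate[],
  topK = 5,
  minSimilarity = 0
): MatchResult[] {
  return candidates
    .map(c => ({ name: c.name, similarity: cosine(query, c.vector) }))
    .filter(m => m.similarity >= minSimilarity)
    .sort((x, y) => y.similarity - x.similarity)
    .slice(0, topK);
}
```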

GetAccessibleAgents

  • Parameters

    • user: UserInfo
    • permission: "view" | "run" | "edit" | "delete"

    Returns Promise<AIAgentEntityExtended[]>

GetAgentByName

  • Parameters

    • agentName: string

    Returns AIAgentEntityExtended

GetConfigurationChain

  • Returns the inheritance chain for a configuration, starting with the specified configuration and walking up through parent configurations to the root. Delegates to AIEngineBase.GetConfigurationChain.

    Parameters

    • configurationId: string

      The ID of the configuration to get the chain for

    Returns AIConfigurationEntity[]

    Array of AIConfigurationEntity objects representing the inheritance chain

    Throws

    Error if a circular reference is detected in the configuration hierarchy
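Walking a parent chain with cycle detection, as described above, can be sketched like this (hypothetical entity shape; the real method delegates to AIEngineBase):

```typescript
interface ConfigLike { ID: string; ParentID?: string }

// Walk from the given configuration up to the root, throwing if the
// ParentID links ever revisit a configuration (circular reference).
function getConfigurationChain(id: string, all: ConfigLike[]): ConfigLike[] {
  const byId = new Map(all.map(c => [c.ID, c] as const));
  const chain: ConfigLike[] = [];
  const seen = new Set<string>();
  let current = byId.get(id);
  while (current) {
    if (seen.has(current.ID)) {
      throw new Error(`Circular reference detected at configuration ${current.ID}`);
    }
    seen.add(current.ID);
    chain.push(current);
    current = current.ParentID ? byId.get(current.ParentID) : undefined;
  }
  return chain; // starts with the requested config, ends at the root
}
```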

GetConfigurationParamsWithInheritance

  • Returns all configuration parameters for a configuration, including inherited parameters from parent configurations. Child parameters override parent parameters. Delegates to AIEngineBase.GetConfigurationParamsWithInheritance.

    Parameters

    • configurationId: string

      The ID of the configuration to get parameters for

    Returns AIConfigurationParamEntity[]

    Array of AIConfigurationParamEntity objects, with child overrides applied
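Child-over-parent resolution can be sketched by applying the chain from the root downward, letting later (more specific) values overwrite earlier ones. A minimal stand-in (hypothetical shapes; the real method delegates to AIEngineBase):

```typescript
interface ParamLike { Name: string; Value: string }

// chainChildFirst holds one parameter list per configuration, ordered
// child-first (as an inheritance chain is usually returned). Applying
// root-first means each child's value overwrites its parent's.
function resolveParams(chainChildFirst: ParamLike[][]): ParamLike[] {
  const merged = new Map<string, string>();
  for (const params of [...chainChildFirst].reverse()) { // root first
    for (const p of params) merged.set(p.Name, p.Value);
  }
  return [...merged.entries()].map(([Name, Value]) => ({ Name, Value }));
}
```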

GetGlobalObjectStore

  • The Global Object Store is a place to store global objects that need to be shared across the application. Depending on the execution environment, this could be the window object in a browser, the global object in a Node environment, or something else in other contexts. The key point is that static variables are not always truly shared, because a given class might have copies of its code in multiple paths of a deployed application. This approach ensures that no matter how many code copies exist, there is only one instance of the object in question.

    Returns typeof globalThis
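The point of the paragraph above is that globalThis survives duplicate copies of a class's code, while static fields do not. A minimal sketch of stashing a shared object there (the key name is hypothetical):

```typescript
// Reuse one shared object no matter how many copies of this code load.
const STORE_KEY = '__MY_APP_SHARED_STATE__'; // hypothetical key

function getSharedState(): Record<string, unknown> {
  const store = globalThis as Record<string, unknown>;
  if (!store[STORE_KEY]) {
    store[STORE_KEY] = {}; // first loader wins; everyone else reuses it
  }
  return store[STORE_KEY] as Record<string, unknown>;
}
```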

GetHighestPowerLLM

  • Parameters

    • Optional vendorName: string
    • Optional contextUser: UserInfo

    Returns Promise<AIModelEntityExtended>

GetHighestPowerModel

  • Parameters

    • vendorName: string
    • modelType: string
    • Optional contextUser: UserInfo

    Returns Promise<AIModelEntityExtended>

GetSubAgents

  • Parameters

    • agentID: string
    • Optional status: "Active" | "Disabled" | "Pending"
    • Optional relationshipStatus: "Active" | "Pending" | "Revoked"

    Returns AIAgentEntityExtended[]

GetUserAgentPermissions

  • Parameters

    Returns Promise<EffectiveAgentPermissions>

HasCredentialBindings

  • Parameters

    • bindingType: "Vendor" | "ModelVendor" | "PromptModel"
    • targetId: string

    Returns boolean

ParallelLLMCompletions

  • Executes multiple parallel chat completions with the same model but potentially different parameters.

    Parameters

    • userPrompt: string

      The user's message/question to send to the model

    • contextUser: UserInfo

      The user context for authentication and logging

    • Optional systemPrompt: string

      Optional system prompt to set the context/persona

    • Optional iterations: number

      Number of parallel completions to run (default: 3)

    • Optional temperatureIncrement: number

      The amount to increment temperature for each iteration (default: 0.1)

    • Optional baseTemperature: number

      The starting temperature value (default: 0.7)

    • Optional model: AIModelEntityExtended

      Optional specific model to use, otherwise uses highest power LLM

    • Optional apiKey: string

      Optional API key to use with the model

    • Optional callbacks: ParallelChatCompletionsCallbacks

      Optional callbacks for monitoring progress

    Returns Promise<ChatResult[]>

    Array of ChatResult objects, one for each parallel completion
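The temperature fan-out described above (baseTemperature plus temperatureIncrement per iteration) and the parallel dispatch can be sketched as follows, with a stubbed completion function standing in for the real model call:

```typescript
// Temperatures for each parallel completion: base, base+inc, base+2*inc, ...
function temperatureSchedule(iterations = 3, base = 0.7, increment = 0.1): number[] {
  return Array.from({ length: iterations }, (_, i) => base + i * increment);
}

// Run one completion per temperature concurrently via Promise.all.
async function parallelCompletions(
  prompt: string,
  complete: (prompt: string, temperature: number) => Promise<string>,
  iterations = 3,
  base = 0.7,
  increment = 0.1
): Promise<string[]> {
  const temps = temperatureSchedule(iterations, base, increment);
  return Promise.all(temps.map(t => complete(prompt, t)));
}
```

Varying only the temperature gives diverse candidate answers from one model, which is the rationale for the default 0.1 increment.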

PrepareChatMessages

  • Prepares standard chat parameters with system and user messages.

    Parameters

    • userPrompt: string

      The user message/query to send to the model

    • Optional systemPrompt: string

      Optional system prompt to set context/persona for the model

    Returns ChatMessage[]

    Array of properly formatted chat messages
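Message preparation here is just ordering: an optional system message first, then the user message. A stand-in sketch (the real ChatMessage type lives in the MJ AI package):

```typescript
type Role = 'system' | 'user' | 'assistant';
interface ChatMessageLike { role: Role; content: string }

// System prompt (when provided) always precedes the user prompt.
function prepareChatMessages(userPrompt: string, systemPrompt?: string): ChatMessageLike[] {
  const messages: ChatMessageLike[] = [];
  if (systemPrompt) messages.push({ role: 'system', content: systemPrompt });
  messages.push({ role: 'user', content: userPrompt });
  return messages;
}
```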

PrepareLLMInstance

  • Prepares an LLM model instance with the appropriate parameters. This method handles common tasks needed before calling an LLM:

    • Loading AI metadata if needed
    • Selecting the appropriate model (user-provided or highest power)
    • Getting the correct API key
    • Creating the LLM instance

    Parameters

    • contextUser: UserInfo

      The user context for authentication and permissions

    • Optional model: AIModelEntityExtended

      Optional specific model to use, otherwise uses highest power LLM

    • Optional apiKey: string

      Optional API key to use with the model

    Returns Promise<{
        modelInstance: BaseLLM;
        modelToUse: AIModelEntityExtended;
    }>

    Object containing the prepared model instance and model information

RegenerateEmbeddings

  • Force regeneration of all embeddings for agents and actions.

    Use this method when:

    • Switching to a different embedding model
    • Agent or Action descriptions have been significantly updated
    • You want to ensure embeddings are up-to-date after bulk changes
    • Troubleshooting embedding-related issues

    Note: This is an expensive operation and should not be called frequently. Normal auto-refresh operations will NOT regenerate embeddings to avoid performance issues.

    Parameters

    • Optional contextUser: UserInfo

      User context for database operations (required on server-side)

    Returns Promise<void>

SimpleLLMCompletion

  • Executes a simple completion task using the provided parameters.

    Parameters

    • userPrompt: string

      The user message/query to send to the model

    • contextUser: UserInfo

      The user context for authentication and permissions

    • Optional systemPrompt: string

      Optional system prompt to set context/persona for the model

    • Optional model: AIModelEntityExtended

      Optional specific model to use, otherwise uses highest power LLM

    • Optional apiKey: string

      Optional API key to use with the model

    Returns Promise<string>

    The text response from the LLM

    Throws

    Error if user prompt is not provided or if there are issues with model creation
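The validation and model-fallback behavior described above can be sketched with an injected completion function; the real method builds messages via PrepareChatMessages and resolves the model via PrepareLLMInstance:

```typescript
interface ModelRef { Name: string; PowerRank: number }

// Throws when no user prompt is supplied; falls back to the
// highest-power model when none is given (mirroring the docs above).
async function simpleCompletion(
  userPrompt: string,
  models: ModelRef[],
  complete: (modelName: string, prompt: string) => Promise<string>,
  model?: ModelRef
): Promise<string> {
  if (!userPrompt || userPrompt.trim().length === 0) {
    throw new Error('User prompt is required');
  }
  const chosen =
    model ?? [...models].sort((a, b) => b.PowerRank - a.PowerRank)[0];
  if (!chosen) throw new Error('No model available');
  return complete(chosen.Name, userPrompt);
}
```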

convertLastRunDateToTimezone

  • Given a run date, this function converts it to the user's timezone and returns it as a Date object.

    Parameters

    • lastRunDate: Date

    Returns Promise<Date>

    The last run date converted to the user's timezone

getContentItemParams

  • Parameters

    • contentTypeID: string
    • contextUser: UserInfo

    Returns Promise<{
        maxTags: number;
        minTags: number;
        modelID: string;
    }>

getContentSourceLastRunDate

  • Retrieves the last run date of the provided content source from the database. If no previous runs exist, the epoch date is returned.

    Parameters

    • contentSourceID: string
    • contextUser: UserInfo

    Returns Promise<Date>

getDriver

  • Parameters

    • model: AIModelEntityExtended
    • apiKey: string

    Returns Promise<BaseModel>

markupUserMessage

  • Parameters

    • entityRecord: BaseEntity<unknown>
    • userMessage: string

    Returns string

parseFileFromPath

  • Given a file path, as long as it is one of the supported file types, this function chooses the correct parser and returns the extracted text.

    Parameters

    • filePath: string

      The path to the file to extract text from

    Returns Promise<string>

The extracted text from the file

saveContentItemTags

  • Given the processing results from the LLM and the Content Element Item that was saved to the database, this function saves the tags as Content Element Tags in the database.

    Parameters

    Returns Promise<void>

getInstance

  • Returns the singleton instance of the class. If the instance does not exist, it is created and stored in the Global Object Store. If className is provided, it is used as part of the key in the Global Object Store; otherwise the actual class name is used. NOTE: the class name used by default is the lowest level of the object hierarchy, so if you have a class that extends another class, the lowest-level class name will be used.

    Type Parameters

    Parameters

    • this: (new () => T)
        • new (): T
        • Returns T

    • Optional className: string

    Returns T
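Combining the Global Object Store with a per-class key yields the singleton behavior getInstance describes. A minimal stand-in (the store key prefix is hypothetical):

```typescript
const INSTANCE_STORE = '__CLASS_INSTANCES__'; // hypothetical store key

// One instance per class (or per explicit className), stored on
// globalThis so duplicate code copies still share it.
function getInstance<T>(ctor: new () => T, className?: string): T {
  const g = globalThis as Record<string, unknown>;
  const store = (g[INSTANCE_STORE] ??= {}) as Record<string, unknown>;
  const key = className ?? ctor.name; // lowest-level class name by default
  if (!store[key]) store[key] = new ctor();
  return store[key] as T;
}
```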