Get the action vector service for semantic search. Initialized during Config - will be null before AIEngine.Config() completes.
Use the new Action system instead
Get the agent vector service for semantic search. Initialized during Config - will be null before AIEngine.Config() completes.
Protected Base
Access to the underlying AIEngineBase instance
Use the new Action system instead
Returns the highest power local embedding model
Returns true if both the base engine and server capabilities are loaded
Returns an array of the local embedding models, sorted with the highest power models first
Returns the lowest power local embedding model
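The highest/lowest power accessors above imply an ordering over the local embedding models. A minimal sketch of that ordering, assuming each model exposes a numeric `PowerRank` (the property name here is an assumption, not taken from the source):

```typescript
// Hypothetical shape of a local embedding model with a power ranking.
interface LocalEmbeddingModel {
  Name: string;
  PowerRank: number;
}

// Sort with the highest power models first, mirroring the documented ordering.
function sortByPowerDescending(models: LocalEmbeddingModel[]): LocalEmbeddingModel[] {
  return [...models].sort((a, b) => b.PowerRank - a.PowerRank);
}

function highestPowerModel(models: LocalEmbeddingModel[]): LocalEmbeddingModel {
  return sortByPowerDescending(models)[0];
}

function lowestPowerModel(models: LocalEmbeddingModel[]): LocalEmbeddingModel {
  const sorted = sortByPowerDescending(models);
  return sorted[sorted.length - 1];
}
```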
Get all available actions loaded from the database. Loaded during Config() - will be empty before AIEngine.Config() completes. NOTE: This returns ActionEntity (MJ Action system), not the deprecated AIActionEntity. For deprecated AI Actions, see the inherited Actions property.
Static Instance
Configures the AIEngine by first ensuring AIEngineBase is configured, then loading server-specific capabilities (embeddings, actions, etc.).
This method is safe to call from multiple places concurrently - it will return the same promise to all callers during loading.
Optional forceRefresh: boolean
If true, forces a full reload even if already loaded
Optional contextUser: UserInfo
User context for server-side operations (required)
Optional provider: IMetadataProvider
Optional metadata provider override
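The "same promise to all callers" behavior described for Config can be illustrated with a small self-contained sketch. The class and member names below are illustrative, not the real implementation:

```typescript
// Sketch of the concurrent-safe loading pattern: while a load is in flight,
// every caller joins the same promise instead of triggering a second load.
class LoaderSketch {
  private _loaded = false;
  private _loadingPromise: Promise<void> | null = null;
  public loadCount = 0; // instrumentation so the behavior is observable

  public async Config(forceRefresh?: boolean): Promise<void> {
    if (this._loaded && !forceRefresh) return;            // already loaded
    if (this._loadingPromise) return this._loadingPromise; // join in-flight load
    this._loadingPromise = this.innerLoad();
    try {
      await this._loadingPromise;
      this._loaded = true;
    } finally {
      this._loadingPromise = null;
    }
  }

  private async innerLoad(): Promise<void> {
    this.loadCount++;
    await new Promise((resolve) => setTimeout(resolve, 10)); // simulate I/O
  }
}
```

With this pattern, three concurrent Config() calls perform only one actual load.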
Helper method to instantiate a class instance for the given model and calculate an embedding vector from the provided text.
Optional apiKey: string
Helper method that generates an embedding for the given text using the highest power local embedding model.
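The embedding helper pattern (instantiate a driver class for the chosen model, then compute a vector from text) can be sketched with a toy embedder standing in for a real model. All names here are illustrative assumptions:

```typescript
// Minimal contract for anything that can turn text into a vector.
interface Embedder {
  embed(text: string): number[];
}

// Toy embedder: sums character codes into four buckets (stand-in for a real model).
class ToyEmbedder implements Embedder {
  embed(text: string): number[] {
    const vec = [0, 0, 0, 0];
    for (let i = 0; i < text.length; i++) {
      vec[i % 4] += text.charCodeAt(i);
    }
    return vec;
  }
}

// Hypothetical driver registry keyed by class name, as a class-instantiation sketch.
const driverRegistry: Record<string, new () => Embedder> = { ToyEmbedder };

function embedText(driverClass: string, text: string): number[] {
  const ctor = driverRegistry[driverClass];
  if (!ctor) throw new Error(`Unknown embedding driver: ${driverClass}`);
  return new ctor().embed(text);
}
```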
AI Actions are deprecated. Use AIPromptRunner with the new AI Prompt system instead.
Entity AI Actions are deprecated. Use AIPromptRunner with the new AI Prompt system instead.
Find actions similar to a task description using semantic search.
Find examples similar to query text using semantic search.
Optional agentId: string
Optional userId: string
Optional companyId: string
Find notes similar to query text using semantic search.
Optional agentId: string
Optional userId: string
Optional companyId: string
Find agents similar to a task description using semantic search.
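The semantic-search methods above all follow the same underlying idea: embed the query, then rank stored items by vector similarity. A self-contained sketch using cosine similarity (the actual vector service implementation may differ):

```typescript
// Cosine similarity between two equal-length vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

interface EmbeddedItem {
  id: string;
  vector: number[];
}

// Return the topK items most similar to the query embedding.
function findSimilar(query: number[], items: EmbeddedItem[], topK: number): EmbeddedItem[] {
  return [...items]
    .sort((x, y) => cosineSimilarity(query, y.vector) - cosineSimilarity(query, x.vector))
    .slice(0, topK);
}
```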
Optional status: string
Returns the inheritance chain for a configuration, starting with the specified configuration and walking up through parent configurations to the root. Delegates to AIEngineBase.GetConfigurationChain.
The ID of the configuration to get the chain for
Array of AIConfigurationEntity objects representing the inheritance chain
Error if a circular reference is detected in the configuration hierarchy
Returns all configuration parameters for a configuration, including inherited parameters from parent configurations. Child parameters override parent parameters. Delegates to AIEngineBase.GetConfigurationParamsWithInheritance.
The ID of the configuration to get parameters for
Array of AIConfigurationParamEntity objects, with child overrides applied
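The chain-walking and child-overrides-parent behavior described above can be sketched with simplified stand-in types (the real entities carry many more fields):

```typescript
interface ConfigSketch {
  ID: string;
  ParentID: string | null;
}

interface ParamSketch {
  ConfigurationID: string;
  Name: string;
  Value: string;
}

// Walk from the requested configuration up to the root, detecting cycles.
function getConfigurationChain(id: string, all: ConfigSketch[]): ConfigSketch[] {
  const chain: ConfigSketch[] = [];
  const seen = new Set<string>();
  let current: ConfigSketch | undefined = all.find((c) => c.ID === id);
  while (current) {
    if (seen.has(current.ID)) {
      throw new Error('Circular reference detected in configuration hierarchy');
    }
    seen.add(current.ID);
    chain.push(current);
    const parentId = current.ParentID;
    current = parentId ? all.find((c) => c.ID === parentId) : undefined;
  }
  return chain; // starts at the requested config, ends at the root
}

// Merge parameters root-first so child values overwrite parent values.
function paramsWithInheritance(
  id: string,
  all: ConfigSketch[],
  params: ParamSketch[]
): Map<string, string> {
  const merged = new Map<string, string>();
  for (const cfg of getConfigurationChain(id, all).reverse()) {
    for (const p of params.filter((p) => p.ConfigurationID === cfg.ID)) {
      merged.set(p.Name, p.Value);
    }
  }
  return merged;
}
```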
The Global Object Store is a place to store global objects that need to be shared across the application. Depending on the execution environment, this could be the window object in a browser, or the global object in a node environment, or something else in other contexts. The key here is that in some cases static variables are not truly shared because it is possible that a given class might have copies of its code in multiple paths in a deployed application. This approach ensures that no matter how many code copies might exist, there is only one instance of the object in question by using the Global Object Store.
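The Global Object Store idea can be sketched by anchoring shared state on `globalThis`, which resolves to `window` in browsers and `global` in Node. The store key below is illustrative, not the library's actual key:

```typescript
// Return (creating if needed) a single shared store on the runtime global
// object, so duplicate copies of a class's code still share one instance.
function globalObjectStore(): Record<string, unknown> {
  const g = globalThis as Record<string, any>;
  if (!g.__sketchGlobalObjectStore) {
    g.__sketchGlobalObjectStore = {}; // key name is an assumption for this sketch
  }
  return g.__sketchGlobalObjectStore;
}

// Fetch an object by key, constructing it exactly once.
function getOrCreate<T>(key: string, factory: () => T): T {
  const store = globalObjectStore();
  if (!(key in store)) {
    store[key] = factory();
  }
  return store[key] as T;
}
```

Because the store lives on the global object rather than in a static class field, two separately bundled copies of the same class still resolve to the same instance.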
Optional vendorName: string
Optional contextUser: UserInfo
This method is related to deprecated AI Actions. Use AIPromptRunner instead.
Executes multiple parallel chat completions with the same model but potentially different parameters.
The user's message/question to send to the model
The user context for authentication and logging
Optional systemPrompt: string
Optional system prompt to set the context/persona
Number of parallel completions to run (default: 3)
The amount to increment temperature for each iteration (default: 0.1)
The starting temperature value (default: 0.7)
Optional model: AIModelEntityExtended
Optional specific model to use, otherwise uses highest power LLM
Optional apiKey: string
Optional API key to use with the model
Optional callbacks: ParallelChatCompletionsCallbacks
Optional callbacks for monitoring progress
Array of ChatResult objects, one for each parallel completion
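The parallel-completion scheme above (same message, incrementing temperature per run) can be sketched with a stub chat function standing in for a real model call:

```typescript
// Chat stub signature: message plus temperature, resolving to a text result.
type ChatFn = (message: string, temperature: number) => Promise<string>;

// Launch `count` completions concurrently, stepping the temperature each time
// (defaults mirror the documented values: 3 runs, start 0.7, increment 0.1).
async function parallelCompletions(
  message: string,
  chat: ChatFn,
  count = 3,
  startTemp = 0.7,
  tempIncrement = 0.1
): Promise<string[]> {
  const tasks: Promise<string>[] = [];
  for (let i = 0; i < count; i++) {
    const temperature = startTemp + i * tempIncrement; // 0.7, 0.8, 0.9, ...
    tasks.push(chat(message, temperature));
  }
  return Promise.all(tasks); // one result per parallel completion
}
```

Varying only the temperature across otherwise identical requests is a cheap way to get diverse candidate answers from a single model.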
Prepares standard chat parameters with system and user messages.
The user message/query to send to the model
Optional systemPrompt: string
Optional system prompt to set context/persona for the model
Array of properly formatted chat messages
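The message-preparation step can be sketched as follows; the role names follow common chat-API conventions and the exact message type in the library may differ:

```typescript
// Simplified chat message shape for this sketch.
interface ChatMessageSketch {
  role: 'system' | 'user';
  content: string;
}

// Optional system message first, then the user message.
function prepareChatMessages(userMessage: string, systemPrompt?: string): ChatMessageSketch[] {
  const messages: ChatMessageSketch[] = [];
  if (systemPrompt) {
    messages.push({ role: 'system', content: systemPrompt });
  }
  messages.push({ role: 'user', content: userMessage });
  return messages;
}
```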
Prepares an LLM model instance with the appropriate parameters. This method handles the common setup tasks needed before calling an LLM.
The user context for authentication and permissions
Optional model: AIModelEntityExtended
Optional specific model to use, otherwise uses highest power LLM
Optional apiKey: string
Optional API key to use with the model
Object containing the prepared model instance and model information
Force regeneration of all embeddings for agents and actions.
Use this method when existing embeddings are stale or out of sync and must be rebuilt.
Note: This is an expensive operation and should not be called frequently. Normal auto-refresh operations will NOT regenerate embeddings to avoid performance issues.
Optional contextUser: UserInfo
User context for database operations (required on server-side)
Executes a simple completion task using the provided parameters.
The user message/query to send to the model
The user context for authentication and permissions
Optional systemPrompt: string
Optional system prompt to set context/persona for the model
Optional model: AIModelEntityExtended
Optional specific model to use, otherwise uses highest power LLM
Optional apiKey: string
Optional API key to use with the model
The text response from the LLM
Error if user prompt is not provided or if there are issues with model creation
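The simple-completion flow, including the documented error when no user prompt is provided, can be sketched end to end with a stubbed model call:

```typescript
// Validate the prompt, assemble messages, call a (stubbed) model, return text.
// The chat callback stands in for the real LLM invocation.
async function simpleCompletion(
  userPrompt: string,
  chat: (messages: { role: string; content: string }[]) => Promise<string>,
  systemPrompt?: string
): Promise<string> {
  if (!userPrompt || userPrompt.trim().length === 0) {
    throw new Error('A user prompt is required');
  }
  const messages = systemPrompt
    ? [
        { role: 'system', content: systemPrompt },
        { role: 'user', content: userPrompt },
      ]
    : [{ role: 'user', content: userPrompt }];
  return chat(messages);
}
```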
Generates an embedding for a single action and adds it to the vector service. Used for incremental updates when new actions are created.
The action to generate embeddings for
Internal loading logic - separated for clean promise management
Optional forceRefresh: boolean
Optional contextUser: UserInfo
Optional provider: IMetadataProvider
Loads Actions from the database. Called during Config to populate the Actions list.
Optional contextUser: UserInfo
Loads example embeddings from the database and builds the vector service. Only loads active examples with embeddings already generated.
Optional contextUser: UserInfo
Loads note embeddings from the database and builds the vector service. Only loads active notes with embeddings already generated.
Optional contextUser: UserInfo
Loads server-specific capabilities: actions and embeddings.
Optional contextUser: UserInfo
Returns the singleton instance of the class. If the instance does not exist, it is created and stored in the Global Object Store. If className is provided it will be used as part of the key in the Global Object Store, otherwise the actual class name will be used. NOTE: the class name used by default is the lowest level of the object hierarchy, so if you have a class that extends another class, the lowest level class name will be used.
Server-side AI Engine that wraps AIEngineBase and adds server-only capabilities.
This class uses composition (containment) rather than inheritance to avoid duplicate data loading. It delegates all base functionality to AIEngineBase.Instance while adding server-specific features like embeddings, vector search, and LLM execution.
Description
ONLY USE ON SERVER-SIDE. For metadata only, use the AIEngineBase class which can be used anywhere.