Readonly Embedding

Readonly Local

Get the action vector service for semantic search. Initialized during Config - will be null before AIEngine.Config() completes.
Use the new Action system instead
Get the agent vector service for semantic search. Initialized during Config - will be null before AIEngine.Config() completes.
Protected Base

Access to the underlying AIEngineBase instance
Use the new Action system instead
Returns the highest power local embedding model
Returns true if both the base engine and server capabilities are loaded
Returns an array of the local embedding models, sorted with the highest power models first
Returns the lowest power local embedding model
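A conceptual sketch of the model-selection helpers above. The `powerRank` field and the model names are assumptions for illustration, not the library's actual shape; the point is only that the list is sorted descending so the highest-power model comes first and the lowest-power model comes last.

```typescript
// Illustrative sketch: sort embedding models by an assumed powerRank field,
// highest power first, then pick the first and last entries.
interface EmbeddingModel {
  name: string;
  powerRank: number;
}

function sortByPowerDesc(models: EmbeddingModel[]): EmbeddingModel[] {
  // Copy before sorting so the input array is not mutated.
  return [...models].sort((a, b) => b.powerRank - a.powerRank);
}

const models: EmbeddingModel[] = [
  { name: "small", powerRank: 1 },
  { name: "large", powerRank: 3 },
  { name: "medium", powerRank: 2 },
];
const sorted = sortByPowerDesc(models);
const highest = sorted[0]; // highest-power model
const lowest = sorted[sorted.length - 1]; // lowest-power model
```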
Get all available actions loaded from the database. Loaded during Config() - will be empty before AIEngine.Config() completes. NOTE: This returns ActionEntity (MJ Action system), not the deprecated AIActionEntity. For deprecated AI Actions, see the inherited Actions property.
Static Instance

Configures the AIEngine by first ensuring AIEngineBase is configured, then loading server-specific capabilities (embeddings, actions, etc.).
This method is safe to call from multiple places concurrently - it will return the same promise to all callers during loading.
Optional forceRefresh: boolean - If true, forces a full reload even if already loaded
Optional contextUser: UserInfo - User context for server-side operations (required)
Optional provider: IMetadataProvider - Optional metadata provider override
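The concurrency guarantee described above (all concurrent callers share the same in-flight promise) can be sketched as follows. The class and member names here are illustrative, not the actual AIEngine implementation:

```typescript
// Sketch of a concurrency guard: concurrent callers of Config() share one
// in-flight promise instead of triggering duplicate loads. forceRefresh
// starts a fresh load even if one has already completed.
class EngineSketch {
  private loadPromise: Promise<void> | null = null;
  private loadCount = 0;

  async Config(forceRefresh = false): Promise<void> {
    if (this.loadPromise && !forceRefresh) return this.loadPromise;
    this.loadPromise = this.doLoad();
    return this.loadPromise;
  }

  private async doLoad(): Promise<void> {
    this.loadCount++;
    // Simulate asynchronous metadata loading.
    await new Promise<void>((resolve) => setTimeout(resolve, 10));
  }

  get loads(): number {
    return this.loadCount;
  }
}

async function demo(): Promise<number> {
  const engine = new EngineSketch();
  // Three concurrent callers: only one real load happens.
  await Promise.all([engine.Config(), engine.Config(), engine.Config()]);
  return engine.loads;
}
```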
Helper method to instantiate a class instance for the given model and calculate an embedding vector from the provided text.
Optional apiKey: string

Helper method that generates an embedding for the given text using the highest power local embedding model.
AI Actions are deprecated. Use AIPromptRunner with the new AI Prompt system instead.
Entity AI Actions are deprecated. Use AIPromptRunner with the new AI Prompt system instead.
Given a list of content items, extracts the text from each content item and sends it, along with the required parameters, to the LLM for tagging.
Find actions similar to a task description using semantic search.
Optional topK: number
Optional minSimilarity: number

Find examples similar to query text using semantic search.

Optional agentId: string
Optional userId: string
Optional companyId: string
Optional topK: number
Optional minSimilarity: number

Find notes similar to query text using semantic search.

Optional agentId: string
Optional userId: string
Optional companyId: string
Optional topK: number
Optional minSimilarity: number

Find agents similar to a task description using semantic search.

Optional topK: number
Optional minSimilarity: number
Optional processingType: "Realtime" | "Batch"
Optional activeOnly: boolean
Optional status: string

Returns the inheritance chain for a configuration, starting with the specified configuration and walking up through parent configurations to the root. Delegates to AIEngineBase.GetConfigurationChain.
The ID of the configuration to get the chain for
Array of AIConfigurationEntity objects representing the inheritance chain
Error if a circular reference is detected in the configuration hierarchy
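The topK / minSimilarity parameters on the semantic-search finder methods earlier in this reference can be understood through the following conceptual sketch. The function and field names are illustrative; this is not the library's actual implementation, only the standard pattern of scoring by cosine similarity, filtering by a minimum score, and keeping the best K:

```typescript
// Conceptual sketch: score candidates by cosine similarity against a
// query embedding, drop results below minSimilarity, return the topK best.
function cosine(a: number[], b: number[]): number {
  let dot = 0;
  let na = 0;
  let nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function findSimilar(
  query: number[],
  items: { id: string; vector: number[] }[],
  topK = 5,
  minSimilarity = 0,
): { id: string; score: number }[] {
  return items
    .map((it) => ({ id: it.id, score: cosine(query, it.vector) }))
    .filter((r) => r.score >= minSimilarity)
    .sort((a, b) => b.score - a.score)
    .slice(0, topK);
}
```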
Returns all configuration parameters for a configuration, including inherited parameters from parent configurations. Child parameters override parent parameters. Delegates to AIEngineBase.GetConfigurationParamsWithInheritance.
The ID of the configuration to get parameters for
Array of AIConfigurationParamEntity objects, with child overrides applied
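The override semantics described above (child parameters override parent parameters) can be sketched as a simple merge over the inheritance chain. The types and function here are illustrative assumptions, not the library's implementation:

```typescript
// Sketch of child-overrides-parent merging: walk the chain from root to
// child, overwriting by parameter name, so the child's value wins.
interface ConfigParam {
  name: string;
  value: string;
}

function mergeParams(chainRootFirst: ConfigParam[][]): ConfigParam[] {
  const merged = new Map<string, ConfigParam>();
  for (const level of chainRootFirst) {
    for (const p of level) {
      merged.set(p.name, p); // later (child) levels overwrite earlier (parent) ones
    }
  }
  return [...merged.values()];
}
```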
The Global Object Store is a place to store global objects that need to be shared across the application. Depending on the execution environment, this could be the window object in a browser, the global object in a Node environment, or something else in other contexts. The key point is that static variables are not always truly shared: a deployed application may contain multiple copies of a given class's code in different paths. Using the Global Object Store ensures that, no matter how many code copies exist, there is only one instance of the object in question.
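The idea can be sketched as follows. The store key and helper names are illustrative assumptions; the point is that singletons live on one shared root (`globalThis` here) rather than in per-module static variables:

```typescript
// Sketch of a Global Object Store: keep shared objects on globalThis so
// duplicate copies of the same class code still resolve to one instance.
const STORE_KEY = "__global_store_sketch__"; // illustrative key name

function getGlobalStore(): Record<string, unknown> {
  const g = globalThis as unknown as Record<string, unknown>;
  if (!g[STORE_KEY]) g[STORE_KEY] = {};
  return g[STORE_KEY] as Record<string, unknown>;
}

function getOrCreate<T>(key: string, factory: () => T): T {
  const store = getGlobalStore();
  if (!(key in store)) store[key] = factory();
  return store[key] as T;
}
```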
Optional vendorName: string
Optional contextUser: UserInfo
Optional contextUser: UserInfo

Protected Get

This method is related to deprecated AI Actions. Use AIPromptRunner instead.
Executes multiple parallel chat completions with the same model but potentially different parameters.
The user's message/question to send to the model
The user context for authentication and logging
Optional systemPrompt: string - Optional system prompt to set the context/persona
Optional iterations: number - Number of parallel completions to run (default: 3)
Optional temperatureIncrement: number - The amount to increment temperature for each iteration (default: 0.1)
Optional baseTemperature: number - The starting temperature value (default: 0.7)
Optional model: AIModelEntityExtended - Optional specific model to use, otherwise uses highest power LLM
Optional apiKey: string - Optional API key to use with the model
Optional callbacks: ParallelChatCompletionsCallbacks - Optional callbacks for monitoring progress
Array of ChatResult objects, one for each parallel completion
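The fan-out described above can be sketched as follows, using the defaults listed (3 iterations, base temperature 0.7, increment 0.1). The function name and the fake `chat` call are illustrative stand-ins, not the actual API:

```typescript
// Sketch of parallel chat completions: the same prompt is sent N times
// with stepped temperatures (base + i * increment), gathered via Promise.all.
async function parallelCompletions(
  prompt: string,
  iterations = 3,
  baseTemperature = 0.7,
  temperatureIncrement = 0.1,
): Promise<{ temperature: number; text: string }[]> {
  // Fake model call standing in for a real LLM request.
  const chat = async (p: string, temperature: number) => ({
    temperature,
    text: `reply to "${p}" @ ${temperature.toFixed(1)}`,
  });

  const calls = Array.from({ length: iterations }, (_, i) =>
    chat(prompt, baseTemperature + i * temperatureIncrement),
  );
  return Promise.all(calls); // one result per parallel completion
}
```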
Prepares standard chat parameters with system and user messages.
The user message/query to send to the model
Optional systemPrompt: string - Optional system prompt to set context/persona for the model
Array of properly formatted chat messages
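The message-preparation step above can be sketched like this. The `ChatMessage` shape and function name are assumptions chosen for illustration:

```typescript
// Sketch of chat message preparation: an optional system message first,
// then the user's message, in the order most chat APIs expect.
type ChatMessage = { role: "system" | "user"; content: string };

function prepareMessages(userMessage: string, systemPrompt?: string): ChatMessage[] {
  const messages: ChatMessage[] = [];
  if (systemPrompt) messages.push({ role: "system", content: systemPrompt });
  messages.push({ role: "user", content: userMessage });
  return messages;
}
```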
Prepares an LLM model instance with the appropriate parameters. This method handles common tasks needed before calling an LLM:
The user context for authentication and permissions
Optional model: AIModelEntityExtended - Optional specific model to use, otherwise uses highest power LLM
Optional apiKey: string - Optional API key to use with the model
Object containing the prepared model instance and model information
Given processing parameters that include the text from a content item, processes the text with the LLM and extracts the information related to that content type.
Force regeneration of all embeddings for agents and actions.
Use this method when:
Note: This is an expensive operation and should not be called frequently. Normal auto-refresh operations will NOT regenerate embeddings to avoid performance issues.
Optional contextUser: UserInfo - User context for database operations (required on server-side)
Executes a simple completion task using the provided parameters.
The user message/query to send to the model
The user context for authentication and permissions
Optional systemPrompt: string - Optional system prompt to set context/persona for the model
Optional model: AIModelEntityExtended - Optional specific model to use, otherwise uses highest power LLM
Optional apiKey: string - Optional API key to use with the model
The text response from the LLM
Error if user prompt is not provided or if there are issues with model creation
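The contract described above (required user prompt, optional system prompt, plain-text response, error when the prompt is missing) can be sketched as follows. The function name and the fake `callModel` are illustrative stand-ins:

```typescript
// Sketch of the simple-completion contract: validate the user prompt,
// assemble messages, and return the model's text response. The inner
// callModel is a fake standing in for a real LLM request.
async function simpleCompletion(
  userMessage: string,
  systemPrompt?: string,
): Promise<string> {
  if (!userMessage || !userMessage.trim()) {
    throw new Error("user prompt is required");
  }
  const callModel = async (msgs: { role: string; content: string }[]) =>
    `echo: ${msgs[msgs.length - 1].content}`;

  const messages = systemPrompt
    ? [
        { role: "system", content: systemPrompt },
        { role: "user", content: userMessage },
      ]
    : [{ role: "user", content: userMessage }];
  return callModel(messages);
}
```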
Retrieves all of the content sources of a given content source type from the database.
A list of content sources
Given a content file type ID, this function retrieves the content file type name from the database.
Given the content source parameters, this function creates a description of the content source item.
The description of the content source item
Retrieves the last run date of the provided content source from the database. If no previous runs exist, the epoch date is returned.
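The epoch fallback described above can be sketched in a few lines; the function name and input shape are assumptions for illustration:

```typescript
// Sketch of the last-run lookup: return the most recent run date, or the
// epoch (1970-01-01T00:00:00Z) when no previous runs exist.
function lastRunDate(runs: Date[]): Date {
  if (runs.length === 0) return new Date(0); // epoch fallback
  return runs.reduce((a, b) => (a > b ? a : b));
}
```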
Given a content source type ID, this function retrieves the content source type name from the database.
Given a content type ID, this function retrieves the content type name from the database.
Protected get

Protected markup

Given a file path, as long as it is one of the supported file types, this function chooses the correct parser and returns the extracted text.
The path to the file to extract text from
Given the processing results from the LLM and the Content Element Item that was saved to the database, this function saves the tags as Content Element Tags in the database.
Given the results of the processing from the LLM, this function saves the details of the process run in the database.
Protected Static get

Returns the singleton instance of the class. If the instance does not exist, it is created and stored in the Global Object Store. If className is provided, it will be used as part of the key in the Global Object Store; otherwise the actual class name will be used. NOTE: the class name used by default is the lowest level of the object hierarchy, so if you have a class that extends another class, the lowest-level class name will be used.
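The keyed-singleton behavior above can be sketched like this. A plain `Map` stands in for the Global Object Store, and the names are illustrative; the key defaults to the concrete (lowest-level) class name, so a subclass gets its own slot:

```typescript
// Sketch of keyed singleton lookup: one instance per store key, where the
// key defaults to the constructor's own name (the lowest-level class).
const instances = new Map<string, object>();

function getInstance<T extends object>(
  ctor: new () => T,
  className?: string,
): T {
  const key = className ?? ctor.name;
  if (!instances.has(key)) instances.set(key, new ctor());
  return instances.get(key) as T;
}

class Base {}
class Derived extends Base {}
```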
Server-side AI Engine that wraps AIEngineBase and adds server-only capabilities.
This class uses composition (containment) rather than inheritance to avoid duplicate data loading. It delegates all base functionality to AIEngineBase.Instance while adding server-specific features like embeddings, vector search, and LLM execution.
Description
ONLY USE ON SERVER-SIDE. For metadata only, use the AIEngineBase class which can be used anywhere.