The MemberJunction AI Engine package provides a comprehensive framework for AI-powered operations within the MemberJunction ecosystem. It serves as the central orchestration layer for AI model management, agent coordination, and basic prompt execution capabilities.
📝 Advanced Prompt Management: For sophisticated stored prompt management, template rendering, and parallel execution capabilities, see the @memberjunction/ai-prompts package.
As part of improving code organization:

- Agent type classes now live in a dedicated agent-types.ts file
- Core AI abstractions live in @memberjunction/ai (Core)
- Agent implementations live in @memberjunction/ai-agents
- Advanced prompt management lives in @memberjunction/ai-prompts

Installation:

npm install @memberjunction/aiengine

Install any model providers you need separately (e.g., @memberjunction/ai-openai).

AI Agents are the primary interface for interacting with AI capabilities. Each agent has a name, a purpose, and a set of available actions:
import { AIEngine } from '@memberjunction/aiengine';
// Initialize the engine
await AIEngine.Instance.Config(false, currentUser);
// Get available agents
const agents = AIEngine.Instance.Agents;
const dataAnalysisAgent = agents.find(a => a.Name === 'Data Analysis Agent');
// Agents are configured through the MemberJunction metadata system;
// find() returns undefined if no agent matches, so guard before use
if (dataAnalysisAgent) {
  console.log(`Agent: ${dataAnalysisAgent.Name}`);
  console.log(`Purpose: ${dataAnalysisAgent.Purpose}`);
  console.log(`Available Actions: ${dataAnalysisAgent.Actions.length}`);
}
For quick AI tasks without complex prompt management:
// Simple completion with automatic model selection
const response = await AIEngine.Instance.SimpleLLMCompletion(
"Explain the benefits of TypeScript over JavaScript",
currentUser,
"You are a helpful programming tutor who explains concepts clearly."
);
console.log("AI Response:", response);
// With a specific model
const allModels = AIEngine.Instance.Models;
const specificModel = allModels.find(m => m.Name === 'GPT-4');
const response2 = await AIEngine.Instance.SimpleLLMCompletion(
"Analyze this code for potential issues",
currentUser,
"You are an expert code reviewer",
specificModel
);
Note: For advanced prompt management with templates, parallel execution, and stored prompts, use the @memberjunction/ai-prompts package.
The engine maintains a comprehensive registry of AI models:
// Get all available models
const allModels = AIEngine.Instance.Models;
const llmModels = AIEngine.Instance.LanguageModels;
// Get the most powerful model for a specific vendor
const bestOpenAI = await AIEngine.Instance.GetHighestPowerLLM('OpenAI', currentUser);
const bestModel = await AIEngine.Instance.GetHighestPowerModel(null, 'LLM', currentUser);
// Models are automatically selected based on:
// - PowerRank: Relative capability ranking
// - ModelType: LLM, Vision, Audio, etc.
// - Vendor: OpenAI, Anthropic, Google, etc.
// - Cost and performance characteristics
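The selection criteria above can be pictured as a simple filter-and-rank over the model registry. The sketch below is illustrative only — the `ModelRecord` shape and `highestPowerModel` helper are assumptions that mirror the criteria listed above, not the engine's internals:

```typescript
// Illustrative sketch only: field names (Vendor, ModelType, PowerRank) mirror
// the selection criteria described above; this is not the engine's implementation.
interface ModelRecord {
  Name: string;
  Vendor: string;
  ModelType: string;
  PowerRank: number; // higher = more capable
}

function highestPowerModel(
  models: ModelRecord[],
  modelType: string,
  vendor?: string
): ModelRecord | undefined {
  return models
    .filter(m => m.ModelType === modelType && (vendor === undefined || m.Vendor === vendor))
    .sort((a, b) => b.PowerRank - a.PowerRank)[0]; // highest PowerRank wins
}
```

Passing no vendor searches all vendors, matching the `GetHighestPowerLLM` behavior where the vendor filter is optional.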
The engine provides basic tracking and analytics for AI operations. Advanced execution metrics, caching analytics, and parallel execution analytics are available in the @memberjunction/ai-prompts package.
The central orchestration class for all AI operations.
Methods:

- Config(forceRefresh?: boolean, contextUser?: UserInfo, provider?: IMetadataProvider): Load AI configuration metadata from the MemberJunction system
- SimpleLLMCompletion(userPrompt: string, contextUser: UserInfo, systemPrompt?: string, model?: AIModelEntityExtended, apiKey?: string): Quick text completion for basic use cases
- ParallelLLMCompletions(userPrompt: string, contextUser: UserInfo, systemPrompt?: string, iterations?: number, temperatureIncrement?: number, baseTemperature?: number, model?: AIModelEntityExtended, apiKey?: string, callbacks?: ParallelChatCompletionsCallbacks): Execute multiple parallel completions with different parameters
- GetHighestPowerModel(vendorName: string, modelType: string, contextUser?: UserInfo): Get the most powerful model of a specific type from a vendor
- GetHighestPowerLLM(vendorName?: string, contextUser?: UserInfo): Get the most powerful LLM, optionally filtered by vendor
- PrepareLLMInstance(contextUser: UserInfo, model?: AIModelEntityExtended, apiKey?: string): Prepare an LLM instance with proper configuration
- PrepareChatMessages(userPrompt: string, systemPrompt?: string): Format chat messages in the standard format
- GetAgentByName(agentName: string): Get a specific AI agent by name
- AgenteNoteTypeIDByName(agentNoteTypeName: string): Get the ID of an agent note type by name

Properties:

- Models: All registered AI models with extended capabilities
- LanguageModels: Filtered list of LLM type models
- VectorDatabases: Available vector database configurations
- ArtifactTypes: Registered artifact types for AI outputs
- Agents: Available AI agents with their capabilities and configurations
- AgentActions: All available agent actions
- AgentModels: Model associations for agents (deprecated)
- AgentNoteTypes: Types of notes agents can create
- AgentNotes: All agent notes/learnings
- Prompts: All registered prompts (use @memberjunction/ai-prompts for execution)
- PromptModels: Model associations for prompts
- PromptTypes: Available prompt types
- PromptCategories: Prompt category hierarchy
- Actions: Legacy AI actions (deprecated)
- EntityAIActions: Legacy entity AI actions (deprecated)
- ModelActions: Legacy model actions (deprecated)

Note: The Prompts and PromptCategories properties are also available through the @memberjunction/ai-prompts package.
For sophisticated prompt management with templates, parallel execution, and stored prompts, see the @memberjunction/ai-prompts package which provides the AIPromptRunner class and related functionality.
Extended AI Agent entity with relationship management:
class AIAgentEntityExtended extends AIAgentEntity {
get Actions(): AIAgentActionEntity[]; // Agent's available actions
get Models(): AIAgentModelEntity[]; // Associated models (deprecated - use prompts)
get Notes(): AIAgentNoteEntity[]; // Agent's learning notes
}
Extended prompt category with hierarchical prompt management:
class AIPromptCategoryEntityExtended extends AIPromptCategoryEntity {
get Prompts(): AIPromptEntity[]; // Prompts in this category
}
The AI Engine automatically extends AI Model entities with additional capabilities from the AI provider system. These extended models include all driver-specific functionality and API integration.
The Engine now includes specialized agent type classes in agent-types.ts:
// Base agent type - foundation for all agent types
import { BaseAgentType } from '@memberjunction/aiengine';
// Specialized agent types
import { LoopAgentType } from '@memberjunction/aiengine';
// Agent types define behavioral characteristics:
// - System prompts for consistent behavior
// - Decision-making patterns
// - Execution flow control
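The three characteristics above can be pictured with a small local sketch. Everything here is hypothetical — a stand-in shape to illustrate what an agent type bundles, not the real `BaseAgentType` or `LoopAgentType` API, and the `TASK_COMPLETE` marker is an invented convention for the example:

```typescript
// Illustrative only: a local sketch of what an "agent type" bundles
// (a system prompt plus a decision step); NOT the real BaseAgentType API.
interface AgentTypeSketch {
  systemPrompt: string;
  decideNextStep(modelOutput: string): 'continue' | 'stop';
}

const loopTypeSketch: AgentTypeSketch = {
  systemPrompt: 'You are an agent that iterates until the task is complete.',
  // A loop-style type keeps executing until the model signals completion
  // ('TASK_COMPLETE' is a hypothetical marker for this sketch)
  decideNextStep: out => (out.includes('TASK_COMPLETE') ? 'stop' : 'continue'),
};
```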
The AI Engine now provides type-safe context propagation for sub-agent requests. Context is optional in the request definition and is provided at execution time by the framework:
import { AgentSubAgentRequest, BaseAgentNextStep } from '@memberjunction/aiengine';
// Define your context type
interface MyContext {
apiEndpoint: string;
apiKey: string;
environment: 'dev' | 'staging' | 'prod';
}
// Create a typed sub-agent request - context is optional here
const subAgentRequest: AgentSubAgentRequest<MyContext> = {
id: 'sub-agent-uuid',
name: 'DataProcessorAgent',
message: 'Process the uploaded data',
terminateAfter: false
// context is NOT set by AI agents - it's provided by the framework at execution time
};
// Use in agent next step decisions
const nextStep: BaseAgentNextStep<MyContext> = {
step: 'sub-agent',
subAgent: subAgentRequest
};
// At execution time, the framework provides the context:
// The parent agent's context is automatically passed to sub-agents
// This ensures consistent runtime configuration across the agent hierarchy
This pattern ensures type safety at design time and consistent runtime configuration across the agent hierarchy.
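Framework-side injection can be pictured as a small merge step. The `SubAgentRequest` shape and `injectContext` helper below are illustrative assumptions for the sketch, not the engine's actual code:

```typescript
// Illustrative sketch: the framework fills in context at execution time.
interface SubAgentRequest<C = unknown> {
  id: string;
  name: string;
  message: string;
  terminateAfter: boolean;
  context?: C; // left unset by the agent; provided by the framework
}

function injectContext<C>(req: SubAgentRequest<C>, parentContext: C): SubAgentRequest<C> {
  // Keep an already-supplied context; otherwise inherit the parent's,
  // without mutating the original request object
  return { ...req, context: req.context ?? parentContext };
}
```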
The AI Engine supports parallel execution of LLM calls with varying parameters:
// Execute 5 parallel completions with increasing temperature
const results = await AIEngine.Instance.ParallelLLMCompletions(
"Generate creative product names for a smart water bottle",
currentUser,
"You are a creative product naming expert",
5, // iterations
0.15, // temperature increment
0.5, // base temperature
null, // use best available model
null, // use default API key
{
onProgress: (completed, total) => {
console.log(`Progress: ${completed}/${total}`);
},
onComplete: (results) => {
console.log(`All ${results.length} completions finished`);
}
}
);
// Results array contains all completion responses
results.forEach((result, index) => {
if (result.success) {
console.log(`Result ${index + 1}:`, result.data.choices[0].message.content);
}
});
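The iteration and temperature parameters imply a simple linear schedule. The helper below sketches how such a schedule could be computed — the `base + i * increment` semantics are an assumption based on the parameter names, not confirmed engine behavior:

```typescript
// Assumed semantics: completion i runs at temperature base + i * increment.
function temperatureSchedule(base: number, increment: number, iterations: number): number[] {
  return Array.from({ length: iterations }, (_, i) => base + i * increment);
}
```

With the example values above (base 0.5, increment 0.15, 5 iterations), this yields temperatures climbing from 0.5 to roughly 1.1, so the same prompt is sampled at progressively higher creativity.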
The AI Engine provides intelligent model selection:
// Get the best model regardless of vendor
const bestModel = await AIEngine.Instance.GetHighestPowerModel(null, 'LLM', currentUser);
// Get the best OpenAI model specifically
const bestOpenAI = await AIEngine.Instance.GetHighestPowerLLM('OpenAI', currentUser);
// Get the best vision model
const bestVision = await AIEngine.Instance.GetHighestPowerModel(null, 'Vision', currentUser);
// Get the best audio model
const bestAudio = await AIEngine.Instance.GetHighestPowerModel(null, 'Audio', currentUser);
AI Agents provide specialized capabilities:
// Find a specific agent
const codeAgent = AIEngine.Instance.GetAgentByName('Code Assistant Agent');
// Access agent properties
console.log('Agent Purpose:', codeAgent.Purpose);
console.log('Available Actions:', codeAgent.Actions.length);
console.log('Learning Notes:', codeAgent.Notes.length);
// Use agent context in prompts
const response = await AIEngine.Instance.SimpleLLMCompletion(
"Review this TypeScript code for best practices",
currentUser,
`You are ${codeAgent.Name}. ${codeAgent.Purpose}`
);
The AI Engine provides semantic search capabilities using vector embeddings for finding similar agents, actions, notes, and examples. All search operations support efficient metadata filtering for scoped searches.
// Find agents similar to a task description
const taskDescription = "I need to analyze sales data and generate insights";
const similarAgents = await AIEngine.Instance.FindSimilarAgents(
taskDescription,
5, // topK: return top 5 matches
0.5 // minSimilarity: minimum similarity threshold (0-1)
);
similarAgents.forEach(match => {
console.log(`Agent: ${match.agent.Name}`);
console.log(`Similarity: ${(match.similarity * 100).toFixed(1)}%`);
console.log(`Purpose: ${match.agent.Purpose}\n`);
});
// Find actions that match a capability description
const capability = "send email notifications to users";
const similarActions = await AIEngine.Instance.FindSimilarActions(
capability,
10, // topK
0.6 // minSimilarity
);
similarActions.forEach(match => {
console.log(`Action: ${match.action.Name}`);
console.log(`Match: ${(match.similarity * 100).toFixed(1)}%`);
console.log(`Description: ${match.action.Description}\n`);
});
Agent notes can be filtered by agent, user, or company for scoped searches. Filtering happens before similarity calculation for optimal performance (10-20x faster!):
// Find notes similar to a query for a specific agent
const queryText = "best practices for error handling";
const agentId = 'agent-uuid-here';
const similarNotes = await AIEngine.Instance.FindSimilarAgentNotes(
queryText,
agentId, // Filter by agent ID (efficient pre-filtering)
undefined, // userId filter (optional)
undefined, // companyId filter (optional)
5, // topK
0.7 // minSimilarity
);
similarNotes.forEach(match => {
console.log(`Note ID: ${match.note.ID}`);
console.log(`Similarity: ${(match.similarity * 100).toFixed(1)}%`);
console.log(`Content: ${match.note.Note}\n`);
});
// Search across all agents (no filtering)
const allNotes = await AIEngine.Instance.FindSimilarAgentNotes(
queryText,
undefined, // No agent filter
undefined, // No user filter
undefined, // No company filter
10,
0.6
);
Agent examples also support efficient metadata filtering:
// Find examples similar to an input for a specific agent
const inputText = "Calculate the total revenue for Q4";
const agentId = 'data-analysis-agent-uuid';
const similarExamples = await AIEngine.Instance.FindSimilarAgentExamples(
inputText,
agentId, // Filter by agent ID (efficient pre-filtering)
undefined, // userId filter (optional)
undefined, // companyId filter (optional)
3, // topK
0.75 // minSimilarity
);
similarExamples.forEach(match => {
console.log(`Example: ${match.example.ID}`);
console.log(`Similarity: ${(match.similarity * 100).toFixed(1)}%`);
console.log(`Input: ${match.example.ExampleInput}`);
console.log(`Output: ${match.example.ExampleOutput}\n`);
});
The semantic search methods use pre-filtering for optimal performance:
// ✅ EFFICIENT: Filter applied BEFORE similarity calculation
// For 1000 notes where 50 belong to the agent:
// - Filters to 50 notes (fast metadata check)
// - Calculates similarity for 50 vectors
// - Returns top 5 matches
const filtered = await AIEngine.Instance.FindSimilarAgentNotes(
queryText,
agentId, // Pre-filter by agent
undefined,
undefined,
5,
0.7
);
// ❌ INEFFICIENT: Don't do this (old pattern)
// Would calculate similarity for ALL 1000 notes and then filter
const all = await AIEngine.Instance.FindSimilarAgentNotes(queryText);
const manuallyFiltered = all.filter(n => n.note.AgentID === agentId).slice(0, 5);
Speedup: Pre-filtering provides 10-20x performance improvement for scoped searches because similarity calculation is much more expensive than metadata checks.
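The cost asymmetry is easy to reproduce in miniature: a metadata filter is a single string compare, while a similarity score is a full pass over the embedding vector. The sketch below uses local types to show the filter-then-score ordering; it is not the engine's implementation:

```typescript
// Illustrative sketch of filter-before-similarity; not engine internals.
interface NoteVector {
  agentId: string;
  text: string;
  embedding: number[];
}

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

function findSimilarNotes(
  notes: NoteVector[],
  queryEmbedding: number[],
  agentId: string | undefined,
  topK: number,
  minSimilarity: number
) {
  // 1. Cheap metadata pre-filter BEFORE any vector math
  const candidates = agentId ? notes.filter(n => n.agentId === agentId) : notes;
  // 2. Score only the surviving candidates, then rank and trim
  return candidates
    .map(n => ({ note: n, similarity: cosineSimilarity(n.embedding, queryEmbedding) }))
    .filter(r => r.similarity >= minSimilarity)
    .sort((a, b) => b.similarity - a.similarity)
    .slice(0, topK);
}
```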
For sophisticated parallel execution, template rendering, and stored prompt management, see the @memberjunction/ai-prompts package.
// Import main AI Engine class
import { AIEngine } from '@memberjunction/aiengine';
// Import extended entity types
import {
AIAgentEntityExtended,
AIPromptCategoryEntityExtended,
AIModelEntityExtended
} from '@memberjunction/aiengine';
// Import agent type classes
import { BaseAgentType, LoopAgentType } from '@memberjunction/aiengine';
// Import base AI types from Core
import { BaseLLM, ChatParams, ChatResult } from '@memberjunction/ai';
// When working with agents, import execution types
import { AgentExecutionParams, AgentExecutionResult } from '@memberjunction/ai-agents';
Dependencies:

- @memberjunction/core: MemberJunction core library
- @memberjunction/global: MemberJunction global utilities
- @memberjunction/core-entities: MemberJunction entity definitions
- @memberjunction/ai: Base AI types and interfaces (imported for core types)
- @memberjunction/templates: Template engine integration
- @memberjunction/templates-base-types: Template base type definitions
- rxjs: Reactive programming support
- dotenv: Environment variable management

Related packages:

- @memberjunction/ai-prompts: Advanced prompt management with templates, parallel execution, and stored prompts
- @memberjunction/ai-agents: AI Agent implementations and specialized behaviors
- @memberjunction/ai: Core AI abstractions and interfaces
- @memberjunction/ai-openai: OpenAI model provider
- @memberjunction/ai-anthropic: Anthropic (Claude) model provider
- @memberjunction/ai-groq: Groq model provider
- @memberjunction/ai-mistral: Mistral AI model provider
- @memberjunction/ai-azure: Azure AI model provider
- @memberjunction/ai-bedrock: AWS Bedrock model provider
- @memberjunction/ai-vertex: Google Vertex AI model provider
- @memberjunction/ai-cerebras: Cerebras model provider
- @memberjunction/ai-bettybot: BettyBot model provider

If you're migrating from the deprecated AI Actions system, use one of the following approaches:
// Old AI Actions approach (deprecated)
const actionParams: AIActionParams = {
actionId: 'action-id',
modelId: 'model-id',
systemPrompt: "System message",
userPrompt: "User message"
};
const result = await AIEngine.Instance.ExecuteAIAction(actionParams);
// New Simple LLM approach (basic cases)
const response = await AIEngine.Instance.SimpleLLMCompletion(
"User message",
currentUser,
"System message"
);
// Old Entity AI Actions approach (deprecated)
const entityParams: EntityAIActionParams = {
actionId: 'action-id',
modelId: 'model-id',
entityAIActionId: 'entity-action-id',
entityRecord: entity
};
const result = await AIEngine.Instance.ExecuteEntityAIAction(entityParams);
// New approach: Use AI Agents with either simple completions or advanced prompts
const agent = AIEngine.Instance.Agents.find(a => a.Name === 'Your Agent Name');
// For simple cases - use SimpleLLMCompletion
const entityData = JSON.stringify(entity.GetAll());
const response = await AIEngine.Instance.SimpleLLMCompletion(
`Analyze this ${entity.EntityType} entity: ${entityData}`,
currentUser,
`You are an expert ${agent.Purpose}`
);
// For complex cases - use @memberjunction/ai-prompts package
# From the package directory
npm run build
# Watch mode for development
npm run watch
npm test
The AI Engine uses environment variables for API keys:
# .env file
OPENAI_API_KEY=your-openai-key
ANTHROPIC_API_KEY=your-anthropic-key
GROQ_API_KEY=your-groq-key
MISTRAL_API_KEY=your-mistral-key
# ... other provider API keys
Alternatively, API keys can be passed directly to methods or configured in the MemberJunction metadata system.
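That precedence (explicit argument first, then environment variable) can be sketched as a tiny resolver. The `PROVIDER_API_KEY` naming follows the variables listed above; the helper itself is an illustration, not part of the engine's API:

```typescript
// Illustrative: an explicit key wins; otherwise fall back to the
// provider's environment variable (e.g. OPENAI_API_KEY).
// `env` is passed in so the sketch stays pure; in practice it would be process.env.
function resolveApiKey(
  provider: string,
  env: Record<string, string | undefined>,
  explicitKey?: string
): string | undefined {
  return explicitKey ?? env[`${provider.toUpperCase()}_API_KEY`];
}
```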
The AI Engine provides comprehensive error handling:
try {
const response = await AIEngine.Instance.SimpleLLMCompletion(
userPrompt,
currentUser,
systemPrompt
);
console.log('Success:', response);
} catch (error) {
if (error.message.includes('AI Metadata not loaded')) {
// Metadata needs to be loaded first
await AIEngine.Instance.Config(false, currentUser);
} else if (error.message.includes('User prompt not provided')) {
// Handle missing prompt
} else {
// Handle other errors
console.error('AI Engine Error:', error);
}
}
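Provider calls can also fail transiently (rate limits, timeouts), so callers often wrap completions in a retry helper. The generic utility below is a sketch of that pattern — it is not part of the AI Engine's API:

```typescript
// Generic retry with exponential backoff; wraps any async call.
async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 250
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt < maxAttempts - 1) {
        // Wait 250ms, 500ms, 1000ms, ... between attempts
        await new Promise(res => setTimeout(res, baseDelayMs * 2 ** attempt));
      }
    }
  }
  throw lastError; // all attempts exhausted
}
```

Usage: `await withRetry(() => AIEngine.Instance.SimpleLLMCompletion(userPrompt, currentUser, systemPrompt))`.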
Best practices:

- Call Config() once at application startup to load all AI metadata
- Use the GetHighestPowerModel() methods to automatically select optimal models
- Use ParallelLLMCompletions() for improved reliability and result quality

License: ISC
The following features are deprecated and will be removed in a future version. Please migrate to the new AI Agents and AI Prompts system.
⚠️ DEPRECATED: AI Actions are deprecated in favor of the new AI Prompts system which provides better template support, parallel execution, and caching capabilities.
AI Actions represented different AI operations like:
- chat: General conversational AI
- summarize: Text summarization
- classify: Text classification

// Deprecated - use AI Prompts instead
import { AIActionParams } from '@memberjunction/aiengine';
const params: AIActionParams = {
actionId: 'summarize-action-id',
modelId: 'gpt4-model-id',
systemPrompt: "You are a helpful assistant that creates concise summaries.",
userPrompt: "Summarize the following document: " + documentText
};
const result = await AIEngine.Instance.ExecuteAIAction(params);
console.log("Summary:", result.data?.choices[0]?.message?.content);
⚠️ DEPRECATED: Entity AI Actions are deprecated in favor of AI Agents which provide better context management, learning capabilities, and entity integration.
Entity AI Actions connected AI actions to specific entity types, defining:
// Deprecated - use AI Agents instead
import { EntityAIActionParams } from '@memberjunction/aiengine';
import { Metadata } from '@memberjunction/core';
// Load an entity record
const md = new Metadata();
const customer = await md.GetEntityObject('Customers');
await customer.Load(customerId);
// Execute an AI action
const params: EntityAIActionParams = {
actionId: 'action-id-here',
modelId: 'model-id-here',
entityAIActionId: 'entity-action-id-here',
entityRecord: customer
};
const result = await AIEngine.Instance.ExecuteEntityAIAction(params);
if (result && result.success) {
console.log("AI processing completed successfully");
// The entity record has been updated if configured that way
} else {
console.error("Error:", result.errorMessage);
}
⚠️ DEPRECATED: The old markup-based prompt generation is deprecated in favor of the template system integration.
// Deprecated markup approach
const entityAction = {
UserMessage: "Please summarize the customer profile for {Name} who works at {Company}."
};
// When executed on a record with Name="John Doe" and Company="Acme Inc"
// The prompt becomes: "Please summarize the customer profile for John Doe who works at Acme Inc."
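The substitution shown above amounts to a one-line regex replace. The `fillMarkup` helper below is a hypothetical reconstruction of the deprecated behavior, not the engine's actual implementation:

```typescript
// Replace {Field} placeholders with values from a record.
// Unknown fields are left intact (an assumption of this sketch).
function fillMarkup(template: string, record: Record<string, string>): string {
  return template.replace(/\{(\w+)\}/g, (match, field) => record[field] ?? match);
}
```

This lack of escaping and conditionals is exactly why the template system integration replaced it.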
⚠️ DEPRECATED: Manual cache management methods are deprecated in favor of automatic caching through the prompt system.
// Deprecated manual caching
const cached = await AIEngine.Instance.CheckResultCache(fullPromptText);
if (cached) {
console.log("Using cached result:", cached.ResultText);
return cached.ResultText;
}
const result = await AIEngine.Instance.ExecuteAIAction(params);
await AIEngine.Instance.CacheResult(model, prompt, fullPromptText, result.data.choices[0].message.content);