A robust SQL Server data provider implementation for MemberJunction applications, providing seamless database connectivity, query execution, and entity management.
The @memberjunction/sqlserver-dataprovider package implements MemberJunction's data provider interface specifically for Microsoft SQL Server databases. It serves as the bridge between your MemberJunction application and SQL Server, handling data access, entity operations, view execution, and more.
```bash
npm install @memberjunction/sqlserver-dataprovider
```
This package relies on the following key dependencies:
- `@memberjunction/core`: Core MemberJunction functionality
- `@memberjunction/core-entities`: Entity definitions
- `@memberjunction/global`: Shared utilities and constants
- `@memberjunction/actions`: Action execution framework
- `@memberjunction/ai`: AI integration capabilities
- `@memberjunction/ai-vector-dupe`: Duplicate detection using AI vectors
- `@memberjunction/aiengine`: AI engine integration
- `@memberjunction/queue`: Queue management for async operations
- `mssql`: SQL Server client for Node.js (v11+)
- `typeorm`: ORM for database operations (v0.3+)

```typescript
import { SQLServerDataProvider } from '@memberjunction/sqlserver-dataprovider';
import { ConfigHelper } from '@memberjunction/global';

// Configure database connection
const config = {
  host: 'your-server.database.windows.net',
  port: 1433,
  database: 'YourMJDatabase',
  user: 'your-username',
  password: 'your-password',
  options: {
    encrypt: true,
    trustServerCertificate: false
  }
};

// Create data provider instance
const dataProvider = new SQLServerDataProvider(config);

// Alternatively, build the configuration from environment variables
const envDataProvider = new SQLServerDataProvider({
  host: ConfigHelper.getConfigValue('MJ_HOST'),
  port: ConfigHelper.getConfigValue('MJ_PORT', 1433),
  database: ConfigHelper.getConfigValue('MJ_DATABASE'),
  user: ConfigHelper.getConfigValue('MJ_USER'),
  password: ConfigHelper.getConfigValue('MJ_PASSWORD')
});

// Initialize the data provider (connects to the database)
await dataProvider.initialize();
```
```typescript
import { SQLServerDataProvider } from '@memberjunction/sqlserver-dataprovider';
import { Metadata, CompositeKey, UserInfo } from '@memberjunction/core';
import { UserEntity } from '@memberjunction/core-entities';

// Setup data provider
const dataProvider = new SQLServerDataProvider(/* config */);
await dataProvider.initialize();

// contextUser: a UserInfo instance for the acting user, obtained from your auth layer

// Get entity metadata
const md = new Metadata();
const userEntity = md.EntityByName('User');

// Load an entity by ID
const userKey = new CompositeKey([{ FieldName: 'ID', Value: 1 }]);
const userResult = await dataProvider.Get(userEntity, userKey);
if (userResult.Success) {
  const user = userResult.Entity;
  console.log(`Loaded user: ${user.FirstName} ${user.LastName}`);

  // Update the entity
  user.Email = 'new.email@example.com';
  const saveResult = await dataProvider.Save(user, contextUser);
  if (saveResult.Success) {
    console.log(`User updated successfully, ID: ${saveResult.Entity.ID}`);
  }
}

// Create a new entity
const newUserEntity = await md.GetEntityObject<UserEntity>('User');
newUserEntity.FirstName = 'John';
newUserEntity.LastName = 'Doe';
newUserEntity.Email = 'john.doe@example.com';
// set other required fields...
const createResult = await dataProvider.Save(newUserEntity, contextUser);
if (createResult.Success) {
  console.log(`New user created with ID: ${createResult.Entity.ID}`);
}

// Delete an entity
const deleteKey = new CompositeKey([{ FieldName: 'ID', Value: 5 }]);
const deleteResult = await dataProvider.Delete(userEntity, deleteKey, contextUser);
if (deleteResult.Success) {
  console.log('User deleted successfully');
}
```
The SQL Server Data Provider supports comprehensive transaction management through both transaction groups and instance-level transactions.
```typescript
import { SQLServerDataProvider, SQLServerTransactionGroup } from '@memberjunction/sqlserver-dataprovider';
import { Metadata } from '@memberjunction/core';

// Setup data provider
const dataProvider = new SQLServerDataProvider(/* config */);
await dataProvider.initialize();

// Create a transaction group
const transaction = new SQLServerTransactionGroup('CreateOrderWithItems');

// Get entity objects
const md = new Metadata();
const orderEntity = await md.GetEntityObject('Order');
const orderItemEntity1 = await md.GetEntityObject('Order Item');
const orderItemEntity2 = await md.GetEntityObject('Order Item');

// Set up the order
orderEntity.CustomerID = 123;
orderEntity.OrderDate = new Date();
orderEntity.Status = 'New';

// Add to transaction - this will get ID after save
await transaction.AddTransaction(orderEntity);

// Set up order items with references to the order
orderItemEntity1.OrderID = '@Order.1'; // Reference to the first Order in this transaction
orderItemEntity1.ProductID = 456;
orderItemEntity1.Quantity = 2;
orderItemEntity1.Price = 29.99;

orderItemEntity2.OrderID = '@Order.1'; // Same order reference
orderItemEntity2.ProductID = 789;
orderItemEntity2.Quantity = 1;
orderItemEntity2.Price = 49.99;

// Add items to transaction
await transaction.AddTransaction(orderItemEntity1);
await transaction.AddTransaction(orderItemEntity2);

// Execute the transaction group
const results = await transaction.Submit();

// Check results
const success = results.every(r => r.Success);
if (success) {
  console.log('Transaction completed successfully');
  const orderResult = results.find(r => r.Entity.EntityInfo.Name === 'Order');
  console.log('Order ID:', orderResult?.Entity.ID);
} else {
  console.error('Transaction failed');
  results.filter(r => !r.Success).forEach(r => {
    console.error(`Failed: ${r.Entity.EntityInfo.Name}`, r.Message);
  });
}
```
In multi-user server environments like MJServer, each request gets its own SQLServerDataProvider instance with isolated transaction state. This provides automatic transaction isolation without requiring transaction scope IDs:
```typescript
// Each request gets its own provider instance
const dataProvider = new SQLServerDataProvider(connectionPool);
await dataProvider.Config(config);

try {
  // Begin a transaction on this instance
  await dataProvider.BeginTransaction();

  // Perform operations - all use this instance's transaction
  await dataProvider.Save(entity1, contextUser);
  await dataProvider.Save(entity2, contextUser);

  // Delete operations also participate in the transaction
  await dataProvider.Delete(entity3, deleteOptions, contextUser);

  // Commit the transaction
  await dataProvider.CommitTransaction();
} catch (error) {
  // Rollback on error
  await dataProvider.RollbackTransaction();
  throw error;
}
```
Key Features of Instance-Level Transactions: each provider instance maintains its own transaction state, so concurrent requests are isolated from one another without transaction scope IDs.

Best Practices for Multi-User Environments:

- When creating per-request provider instances, set `ignoreExistingMetadata: false` to reuse cached metadata across instances.

```typescript
import { SQLServerDataProvider } from '@memberjunction/sqlserver-dataprovider';
import { RunViewParams, RunReportParams } from '@memberjunction/core';

// Setup data provider
const dataProvider = new SQLServerDataProvider(/* config */);
await dataProvider.initialize();

// Run a view with filtering and pagination
const viewOptions: RunViewParams = {
  EntityName: 'vwActiveUsers',
  ExtraFilter: "Role = 'Administrator'",
  OrderBy: 'LastName, FirstName',
  PageSize: 10,
  PageNumber: 1
};

const viewResult = await dataProvider.RunView(viewOptions);
if (viewResult.Success) {
  console.log(`Found ${viewResult.Results.length} users`);
  console.log(`Total matching records: ${viewResult.TotalRowCount}`);
  viewResult.Results.forEach(user => {
    console.log(`${user.FirstName} ${user.LastName} (${user.Email})`);
  });
}

// Run a report
const reportParams: RunReportParams = {
  ReportID: 'report-id-here',
  // Other parameters as needed
};

const reportResult = await dataProvider.RunReport(reportParams);
if (reportResult.Success) {
  console.log('Report data:', reportResult.Results);
  console.log('Row count:', reportResult.RowCount);
  console.log('Execution time:', reportResult.ExecutionTime, 'ms');
}
```
```typescript
import { SQLServerDataProvider } from '@memberjunction/sqlserver-dataprovider';
import { RunQueryParams } from '@memberjunction/core';

// Setup data provider
const dataProvider = new SQLServerDataProvider(/* config */);
await dataProvider.initialize();

// Execute raw SQL with parameters
const sqlResult = await dataProvider.ExecuteSQL(
  'SELECT * FROM Users WHERE Department = @dept AND HireDate > @date',
  {
    dept: 'Engineering',
    date: '2022-01-01'
  }
);

console.log(`Query returned ${sqlResult.length} rows`);
sqlResult.forEach(row => {
  console.log(row);
});

// Execute a stored procedure
const spResult = await dataProvider.ExecuteSQL(
  'EXEC sp_GetUserPermissions @UserID',
  {
    UserID: 123
  }
);
console.log('User permissions:', spResult);

// Using RunQuery for pre-defined queries
const queryParams: RunQueryParams = {
  QueryID: 'query-id-here', // or use QueryName + Category identification
  // Alternative: use QueryName with hierarchical CategoryPath
  // QueryName: 'CalculateCost',
  // CategoryPath: '/MJ/AI/Agents/' // Hierarchical path notation
  // CategoryID: 'optional-direct-category-id',
};

const queryResult = await dataProvider.RunQuery(queryParams);
if (queryResult.Success) {
  console.log('Query results:', queryResult.Results);
  console.log('Execution time:', queryResult.ExecutionTime, 'ms');
}

// Query lookup supports hierarchical category paths
// Example: Query with name "CalculateCost" in category hierarchy "MJ" -> "AI" -> "Agents"
const hierarchicalQueryParams: RunQueryParams = {
  QueryName: 'CalculateCost',
  CategoryPath: '/MJ/AI/Agents/' // Full hierarchical path with leading/trailing slashes
};

// The CategoryPath is parsed as a path where:
// - "/" separates category levels
// - Each segment is matched case-insensitively against category names
// - The path walks from root to leaf through the ParentID relationships
// - Falls back to simple category name matching for backward compatibility
```
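The path-walking behavior described above can be approximated in a few lines. The sketch below is illustrative only (it is not the provider's actual lookup code) and assumes categories are plain records with `ID`, `Name`, and `ParentID` fields.

```typescript
// Illustrative sketch -- not the provider's real implementation.
interface Category { ID: string; Name: string; ParentID: string | null; }

// Split "/MJ/AI/Agents/" into ["MJ", "AI", "Agents"], ignoring empty segments.
function parseCategoryPath(path: string): string[] {
  return path.split('/').map(s => s.trim()).filter(s => s.length > 0);
}

// Walk from the root through ParentID links, matching each segment
// case-insensitively, and return the leaf category (or undefined).
function resolveCategoryPath(path: string, categories: Category[]): Category | undefined {
  let parentId: string | null = null;
  let current: Category | undefined;
  for (const segment of parseCategoryPath(path)) {
    current = categories.find(
      c => c.ParentID === parentId && c.Name.toLowerCase() === segment.toLowerCase()
    );
    if (!current) return undefined; // no category matches this segment at this level
    parentId = current.ID;
  }
  return current;
}
```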
```typescript
import { SQLServerDataProvider } from '@memberjunction/sqlserver-dataprovider';

// Setup data provider
const dataProvider = new SQLServerDataProvider(/* config */);
await dataProvider.initialize();

// Set current user context
dataProvider.setCurrentUser(123); // User ID

// Get current user
const currentUser = dataProvider.getCurrentUser();
console.log(`Current user: ${currentUser.FirstName} ${currentUser.LastName}`);

// User caching is handled automatically by the provider
// but you can clear the cache if needed
dataProvider.clearUserCache();
```
The SQL Server data provider accepts the following configuration options:
| Option | Description | Default |
|---|---|---|
| `host` | SQL Server hostname or IP | required |
| `port` | SQL Server port | `1433` |
| `database` | Database name | required |
| `user` | Username | required |
| `password` | Password | required |
| `connectionTimeout` | Connection timeout in ms | `15000` |
| `requestTimeout` | Request timeout in ms | `15000` |
| `pool.max` | Maximum pool size | `10` |
| `pool.min` | Minimum pool size | `0` |
| `pool.idleTimeoutMillis` | Pool idle timeout | `30000` |
| `options.encrypt` | Use encryption | `true` |
| `options.trustServerCertificate` | Trust server certificate | `false` |
| `options.enableArithAbort` | Enable arithmetic abort | `true` |
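Combining the options above, a complete configuration object might look like the following (all values are placeholders):

```typescript
// Example configuration exercising the options from the table above.
const fullConfig = {
  host: 'your-server.database.windows.net', // required
  port: 1433,
  database: 'YourMJDatabase',               // required
  user: 'your-username',                    // required
  password: 'your-password',                // required
  connectionTimeout: 15000,
  requestTimeout: 15000,
  pool: {
    max: 10,                // maximum pool size
    min: 0,                 // minimum pool size
    idleTimeoutMillis: 30000
  },
  options: {
    encrypt: true,
    trustServerCertificate: false,
    enableArithAbort: true
  }
};
```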
```typescript
import { SQLServerDataProvider } from '@memberjunction/sqlserver-dataprovider';

class CustomSQLProvider extends SQLServerDataProvider {
  // Override to add custom logging or modifications
  async ExecuteSQL(sql: string, params?: any, maxRows?: number): Promise<any> {
    console.log(`Executing SQL: ${sql}`);
    console.log('Parameters:', params);

    // Add timing
    const startTime = Date.now();
    const result = await super.ExecuteSQL(sql, params, maxRows);
    const duration = Date.now() - startTime;

    console.log(`Query executed in ${duration}ms`);
    console.log(`Rows returned: ${result?.length || 0}`);
    return result;
  }

  // Custom error handling
  protected async HandleExecuteSQLError(error: any, sql: string): Promise<void> {
    console.error('SQL Error:', error);
    console.error('Failed SQL:', sql);
    // Add custom error handling logic here
    await super.HandleExecuteSQLError(error, sql);
  }
}
```
The SQL Server Data Provider includes comprehensive error handling:
```typescript
try {
  const result = await dataProvider.Save(entity, user);
  if (!result.Success) {
    console.error('Save failed:', result.ErrorMessage);
    // Handle validation or business logic errors
  }
} catch (error) {
  console.error('Unexpected error:', error);
  // Handle system-level errors
}
```
```bash
# From the package directory
npm run build

# Or from the repository root
turbo build --filter="@memberjunction/sqlserver-dataprovider"
```

- `npm run build` - Compile TypeScript to JavaScript
- `npm run start` - Run the package with ts-node-dev for development

This package is configured with TypeScript strict mode enabled. The compiled output is placed in the `dist/` directory with declaration files for type support.
The main class that implements IEntityDataProvider, IMetadataProvider, IRunViewProvider, IRunReportProvider, and IRunQueryProvider interfaces.
- `Config(configData: SQLServerProviderConfigData): Promise<boolean>` - Configure the provider with connection details
- `Get(entity: EntityInfo, CompositeKey: CompositeKey, user?: UserInfo): Promise<BaseEntityResult>` - Load an entity by primary key
- `Save(entity: BaseEntity, user: UserInfo, options?: EntitySaveOptions): Promise<BaseEntityResult>` - Save (create/update) an entity
- `Delete(entity: EntityInfo, CompositeKey: CompositeKey, user?: UserInfo, options?: EntityDeleteOptions): Promise<BaseEntityResult>` - Delete an entity
- `RunView(params: RunViewParams, contextUser?: UserInfo): Promise<RunViewResult>` - Execute a database view
- `RunReport(params: RunReportParams, contextUser?: UserInfo): Promise<RunReportResult>` - Execute a report
- `RunQuery(params: RunQueryParams, contextUser?: UserInfo): Promise<RunQueryResult>` - Execute a query
- `ExecuteSQL(sql: string, params?: any, maxRows?: number): Promise<any[]>` - Execute raw SQL
- `createSqlLogger(filePath: string, options?: SqlLoggingOptions): Promise<SqlLoggingSession>` - Create a new SQL logging session
- `getActiveSqlLoggingSessions(): SqlLoggingSession[]` - Get all active logging sessions
- `disposeAllSqlLoggingSessions(): Promise<void>` - Stop and clean up all logging sessions
- `isSqlLoggingEnabled(): boolean` - Check if SQL logging is available

Configuration class for the SQL Server provider.
```typescript
constructor(
  connectionPool: sql.ConnectionPool,
  MJCoreSchemaName?: string,
  checkRefreshIntervalSeconds: number = 0,
  includeSchemas?: string[],
  excludeSchemas?: string[],
  ignoreExistingMetadata: boolean = true
)
```

- `ConnectionPool: sql.ConnectionPool` - SQL Server connection pool instance
- `CheckRefreshIntervalSeconds: number` - Interval for checking metadata refresh (0 to disable)
- `MJCoreSchemaName: string` - Schema name for MJ core tables (default: `__mj`)
- `IncludeSchemas?: string[]` - List of schemas to include
- `ExcludeSchemas?: string[]` - List of schemas to exclude
- `IgnoreExistingMetadata: boolean` - Whether to ignore cached metadata and force a reload (default: `true`)

Important Note on `ignoreExistingMetadata`: set it to `false` in multi-user environments to reuse cached metadata across provider instances.

SQL Server implementation of TransactionGroupBase for managing database transactions.
- `HandleSubmit(): Promise<TransactionResult[]>` - Execute all pending transactions in the group

Server-side cache for user and role information.
- `Instance: UserCache` - Get singleton instance
- `Users: UserInfo[]` - Get all cached users
- `Refresh(dataSource: DataSource, autoRefreshIntervalMS?: number): Promise<void>` - Refresh user cache
- `UserByName(name: string, caseSensitive?: boolean): UserInfo | undefined` - Find user by name

Helper function to initialize and configure the SQL Server data provider.
```typescript
setupSQLServerClient(config: SQLServerProviderConfigData): Promise<SQLServerDataProvider>
```
The SQL Server Data Provider includes comprehensive SQL logging capabilities that allow you to capture SQL statements in real-time. This feature supports both programmatic access and runtime control through the MemberJunction UI.
```typescript
import { SQLServerDataProvider } from '@memberjunction/sqlserver-dataprovider';

const dataProvider = new SQLServerDataProvider(/* config */);
await dataProvider.initialize();

// Create a SQL logging session
const logger = await dataProvider.createSqlLogger('./logs/sql/operations.sql', {
  formatAsMigration: false,
  sessionName: 'User registration operations',
  filterByUserId: 'user@example.com', // Only log SQL from this user
  prettyPrint: true,
  statementTypes: 'both' // Log both queries and mutations
});

// Perform your database operations - they will be automatically logged
await dataProvider.ExecuteSQL('INSERT INTO Users (Name, Email) VALUES (@name, @email)', {
  name: 'John Doe',
  email: 'john@example.com'
});

// Check session status
console.log(`Session ${logger.id} has captured ${logger.statementCount} statements`);

// Clean up the logging session
await logger.dispose();
```
SQL logging sessions support pattern-based filtering to include or exclude specific SQL statements. You can use either regex patterns for advanced matching or simple wildcard patterns for ease of use.
Simple Wildcard Patterns (Recommended for most users):

- Use `*` as a wildcard character
- `*AIPrompt*` - Matches anything containing "AIPrompt"
- `spCreate*` - Matches anything starting with "spCreate"
- `*Run` - Matches anything ending with "Run"
- `UserTable` - Exact match only

Regular Expression Patterns (For advanced users):

- `/spCreate.*Run/i` - Case-insensitive regex
- `/^SELECT.*FROM.*vw/` - Queries from views
- `/INSERT INTO (Users|Roles)/i` - Insert into Users or Roles

```typescript
// Exclude specific patterns from logging
const logger = await dataProvider.createSqlLogger('./logs/sql/filtered.sql', {
  sessionName: 'Production Operations',
  filterPatterns: [
    /spCreateAIPromptRun/i,          // Regex: Exclude AI prompt runs
    /spUpdateAIPromptRun/i,          // Regex: Exclude AI prompt updates
    /^SELECT.*FROM.*vw.*Metadata/i,  // Regex: Exclude metadata view queries
    /INSERT INTO EntityFieldValue/i  // Regex: Exclude field value inserts
  ],
  filterType: 'exclude' // Default - exclude matching patterns
});

// Only log specific patterns
const auditLogger = await dataProvider.createSqlLogger('./logs/sql/audit.sql', {
  sessionName: 'User Audit Trail',
  filterPatterns: [
    /INSERT INTO Users/i,
    /UPDATE Users/i,
    /DELETE FROM Users/i,
    /sp_ChangePassword/i
  ],
  filterType: 'include' // Only log statements matching these patterns
});
```
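Conceptually, a simple wildcard pattern can be translated into a regular expression by escaping everything except `*` and anchoring the result. The helper below is an illustrative sketch of that idea, not the provider's actual pattern handling.

```typescript
// Illustrative sketch -- not the provider's actual implementation.
// Converts a simple wildcard pattern ("*AIPrompt*", "spCreate*") into a RegExp.
function wildcardToRegExp(pattern: string): RegExp {
  // Escape regex metacharacters (but not "*"), then turn each "*" into ".*".
  const escaped = pattern.replace(/[.+?^${}()|[\]\\]/g, '\\$&');
  // Anchor both ends so "UserTable" is an exact (case-insensitive) match.
  return new RegExp('^' + escaped.replace(/\*/g, '.*') + '$', 'i');
}
```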
When configuring SQL logging in MetadataSync's .mj-sync.json, you can use string patterns that support both formats:
```jsonc
{
  "sqlLogging": {
    "enabled": true,
    "filterPatterns": [
      "*AIPrompt*",               // Simple: Exclude anything with "AIPrompt"
      "/^EXEC sp_/i",             // Regex: Exclude stored procedures
      "*EntityFieldValue*",       // Simple: Exclude EntityFieldValue operations
      "/INSERT INTO (__mj|mj)/i"  // Regex: Exclude system table inserts
    ],
    "filterType": "exclude"
  }
}
```
- `filterPatterns`: Array of patterns (RegExp objects in code, strings in config)
- `filterType`:
  - `'exclude'` (default): Skip logging if ANY pattern matches
  - `'include'`: Only log if ANY pattern matches

Note: Filtering is applied to the actual SQL that will be logged. If `logRecordChangeMetadata` is false and a simplified SQL fallback is provided, the filtering tests against the simplified version.
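The include/exclude semantics just described boil down to a single decision per statement. The function below is a sketch of that decision logic under the stated semantics; it is not the provider's actual code.

```typescript
// Illustrative sketch of the include/exclude semantics described above.
type FilterType = 'include' | 'exclude';

function shouldLog(sql: string, patterns: RegExp[], filterType: FilterType): boolean {
  if (patterns.length === 0) return true; // no filters configured: log everything
  const anyMatch = patterns.some(p => p.test(sql));
  // exclude: skip if ANY pattern matches; include: log only if ANY matches.
  return filterType === 'exclude' ? !anyMatch : anyMatch;
}
```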
```typescript
// Start a new logging session
const mutation = `
  mutation {
    startSqlLogging(input: {
      fileName: "debug-session.sql"
      filterToCurrentUser: true
      options: {
        sessionName: "Debug Session"
        prettyPrint: true
        statementTypes: "both"
        formatAsMigration: false
      }
    }) {
      id
      filePath
      sessionName
    }
  }
`;

// List active sessions
const query = `
  query {
    activeSqlLoggingSessions {
      id
      sessionName
      startTime
      statementCount
      filterByUserId
    }
  }
`;

// Stop a session
const stopMutation = `
  mutation {
    stopSqlLogging(sessionId: "session-id-here")
  }
`;
```
SQL logging can also be controlled interactively through the MemberJunction Explorer UI.
```typescript
// Create logger with migration formatting
const migrationLogger = await dataProvider.createSqlLogger('./migrations/V20241215120000__User_Operations.sql', {
  formatAsMigration: true,
  sessionName: 'User management operations for deployment',
  batchSeparator: 'GO',
  logRecordChangeMetadata: true
});

// Your operations are logged in Flyway-compatible format
// with proper headers and schema placeholders
```

```typescript
// Get all active sessions
const activeSessions = dataProvider.getActiveSqlLoggingSessions();
console.log(`${activeSessions.length} sessions currently active`);

// Dispose all sessions
await dataProvider.disposeAllSqlLoggingSessions();

// Check if logging is enabled
if (dataProvider.isSqlLoggingEnabled()) {
  console.log('SQL logging is available');
}
```
Security Note: SQL logging requires Owner-level privileges in the MemberJunction system. Only users with `Type = 'Owner'` can create, manage, or access SQL logging sessions.
- Connection Timeout Errors: increase `connectionTimeout` and `requestTimeout` in the configuration
- Authentication Failures
- Schema Not Found: ensure `MJCoreSchemaName` matches your database schema (default: `__mj`)
- Transaction Rollback Issues
- Performance Issues: tune the connection pool settings (`pool.max`, `pool.min`)

Enable detailed logging by setting environment variables:
```bash
# Enable SQL query logging
export MJ_LOG_SQL=true

# Enable detailed error logging
export MJ_LOG_LEVEL=debug
```
License: ISC
The MemberJunction SQL Server Data Provider is designed to support high-performance parallel database operations through proper connection pool management. The underlying mssql driver (node-mssql) is expressly designed to handle many concurrent database calls efficiently.
Single Shared Connection Pool: MemberJunction creates one connection pool at server startup and reuses it throughout the application lifecycle. This pool is passed to the SQLServerDataProvider and used for all database operations.
Request-Per-Query Pattern: Each database operation creates a new sql.Request object from the shared pool, allowing multiple queries to execute in parallel without blocking each other.
Configurable Pool Size: The connection pool can be configured via mj.config.cjs to support your specific concurrency needs:
```javascript
// In your mj.config.cjs at the root level
module.exports = {
  databaseSettings: {
    connectionPool: {
      max: 50,                    // Maximum connections (default: 50)
      min: 5,                     // Minimum connections (default: 5)
      idleTimeoutMillis: 30000,   // Idle timeout (default: 30s)
      acquireTimeoutMillis: 30000 // Acquire timeout (default: 30s)
    }
  }
};
```
MemberJunction follows SQL Server connection best practices:

- One shared connection pool for the lifetime of the application
- Each query runs on its own dedicated `sql.Request` object

Based on your environment and load:
Small deployments:

```javascript
connectionPool: {
  max: 10,
  min: 2,
  idleTimeoutMillis: 60000,
  acquireTimeoutMillis: 15000
}
```

Typical production:

```javascript
connectionPool: {
  max: 50, // 2-4x CPU cores of your API server
  min: 5,
  idleTimeoutMillis: 30000,
  acquireTimeoutMillis: 30000
}
```

High-concurrency workloads:

```javascript
connectionPool: {
  max: 100, // Monitor SQL Server wait types to tune
  min: 10,
  idleTimeoutMillis: 30000,
  acquireTimeoutMillis: 30000
}
```
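The sizing guidance above (roughly 2-4x the API server's CPU cores, bounded by the small-deployment and high-concurrency settings shown) can be expressed as a rule of thumb. The function and its exact bounds are illustrative assumptions, not a MemberJunction API:

```typescript
// Hypothetical helper expressing the sizing rule of thumb above.
// Suggests a pool `max` of roughly 3x CPU cores, clamped between the
// small-deployment floor (10) and the high-concurrency ceiling (100).
function suggestPoolMax(cpuCores: number): number {
  return Math.min(100, Math.max(10, cpuCores * 3));
}
```

Treat the output as a starting point only; tune against real load and SQL Server wait-type monitoring.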
Pool Size: The practical concurrency limit equals your pool size. With default settings (max: 50), you can have up to 50 concurrent SQL operations.
Connection Reuse: The mssql driver efficiently reuses connections from the pool, minimizing connection overhead.
Queue Management: When all connections are busy, additional requests queue in FIFO order until a connection becomes available.
Monitoring: Watch for these SQL Server wait types to identify if pool size is too large:
- `RESOURCE_SEMAPHORE`: Memory pressure
- `THREADPOOL`: Worker thread exhaustion

If you experience "connection pool exhausted" errors:

- Increase `max` in your configuration

The connection pool is created in `/packages/MJServer/src/index.ts`:
```typescript
const pool = new sql.ConnectionPool(createMSSQLConfig());
await pool.connect();
```
And used in SQLServerDataProvider for each query:
```typescript
const request = new sql.Request(this._pool);
const result = await request.query(sql);
```
If you are using SQLServerDataProvider outside the context of MJServer/MJAPI, it is your responsibility to create a connection pool in line with whatever practices make sense for your project and pass it to the SQLServerDataProvider configuration process.
This pattern ensures maximum parallelism while maintaining connection efficiency, allowing MemberJunction applications to scale to handle hundreds of concurrent database operations without blocking the Node.js event loop.