AI Toolkit

Security & Compliance

PII protection, audit trails, and guardrails for regulated industries

Overview

AI Toolkit includes built-in security features for regulated industries: PII detection, data sanitization, audit logging, rate limiting, and configurable guardrails.

PII Detection and Sanitization

Always sanitize user input before sending it to an LLM:

import { detectPII, sanitizeForLLM } from '@jamaalbuilds/ai-toolkit/security';

// Detect PII
const findings = detectPII(userInput);
if (findings.length > 0) {
  console.warn('PII detected:', findings.map(f => f.type));
}

// Sanitize before sending to LLM
const safeInput = sanitizeForLLM(userInput);
const result = await ai.generate(safeInput);

Detected PII Types

Type     Example
SSN      123-45-6789
EMAIL    john@example.com
PHONE    (555) 123-4567
NAME     John Smith
DOB      1990-01-15
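As a rough illustration of how the structured types above can be matched, here is a minimal pattern-based detector. This is a sketch, not the toolkit's actual implementation (which also handles unstructured types like NAME); the `Finding` shape and `detectStructuredPII` name are assumptions for this example.

```typescript
// Minimal pattern-based PII detection sketch (illustrative only).
type PIIType = 'SSN' | 'EMAIL' | 'PHONE' | 'DOB';

interface Finding {
  type: PIIType;
  match: string;
}

const PATTERNS: Record<PIIType, RegExp> = {
  SSN: /\b\d{3}-\d{2}-\d{4}\b/g,        // 123-45-6789
  EMAIL: /\b[\w.+-]+@[\w-]+\.[\w.-]+\b/g, // john@example.com
  PHONE: /\(\d{3}\)\s?\d{3}-\d{4}/g,      // (555) 123-4567
  DOB: /\b\d{4}-\d{2}-\d{2}\b/g,          // 1990-01-15
};

function detectStructuredPII(text: string): Finding[] {
  const findings: Finding[] = [];
  for (const [type, pattern] of Object.entries(PATTERNS)) {
    for (const m of text.matchAll(pattern)) {
      findings.push({ type: type as PIIType, match: m[0] });
    }
  }
  return findings;
}
```

Regexes alone produce false positives (a DOB pattern matches any ISO date), which is why production detectors combine patterns with context and validation.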

Audit Logging

Track every security-relevant event:

import { createAuditLogger } from '@jamaalbuilds/ai-toolkit/security';

const audit = createAuditLogger('my-service');

audit.log('llm_query', {
  userId: user.id,
  details: {
    model: 'llama-3.3-70b',
    piiDetected: findings.length > 0,
    sanitized: true,
  },
});
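Conceptually, an audit logger of this kind is an append-only, JSON-lines event writer. The sketch below shows the idea; the entry fields and the `sink` parameter are assumptions, not the toolkit's internal format.

```typescript
// Sketch of an append-only, JSON-lines audit logger (field names assumed).
interface AuditEvent {
  timestamp: string;
  service: string;
  event: string;
  userId?: string;
  details?: Record<string, unknown>;
}

function createSimpleAuditLogger(
  service: string,
  sink: (line: string) => void, // e.g. append to a file or ship to a log store
) {
  return {
    log(event: string, data: { userId?: string; details?: Record<string, unknown> }) {
      const entry: AuditEvent = {
        timestamp: new Date().toISOString(),
        service,
        event,
        ...data,
      };
      // One JSON object per line keeps the trail greppable and stream-friendly.
      sink(JSON.stringify(entry));
    },
  };
}
```

For compliance use, the sink should be append-only and tamper-evident; never log raw PII in `details`.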

Guardrails

Validate AI outputs before returning to users:

import { createGuardrails, checkOutput, detectPII } from '@jamaalbuilds/ai-toolkit/security';

// A rule's test returns true when the rule is VIOLATED
const rules = [
  {
    id: 'no-pii-in-output',
    description: 'No PII in AI output',
    test: (text: string) => detectPII(text).length > 0, // PII found => violation
  },
  {
    id: 'no-harmful-content',
    description: 'No harmful content in output',
    test: (text: string) => !!text.match(/\b(hack|exploit|attack)\b/i),
  },
  {
    id: 'reasonable-length',
    description: 'Output must be under 10000 chars',
    test: (text: string) => text.length >= 10000,
  },
];

const guardrails = createGuardrails(rules);
const result = guardrails.check(aiResponse);

// Or use checkOutput directly
const directResult = checkOutput(aiResponse, rules);
if (!directResult.allowed) {
  console.error('Guardrail violations:', directResult.violations);
  console.error('Reasons:', directResult.reasons);
  return 'I cannot provide that response.';
}
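The check logic implied above reduces to filtering the rules whose test fires. This sketch makes that convention explicit; the exact result shape of the toolkit's `checkOutput` is an assumption here.

```typescript
// Sketch of the guardrail check: test(text) === true means the rule is violated.
interface Rule {
  id: string;
  description: string;
  test: (text: string) => boolean; // true => violation
}

function checkOutputSketch(text: string, rules: Rule[]) {
  const violations = rules.filter(r => r.test(text));
  return {
    allowed: violations.length === 0,
    violations: violations.map(r => r.id),
    reasons: violations.map(r => r.description),
  };
}
```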

Rate Limiting

Protect your API from abuse:

import { createRateLimiter } from '@jamaalbuilds/ai-toolkit/security';

const limiter = createRateLimiter(cache, {
  max: 100,
  windowSeconds: 60, // 100 requests per minute, per key
});

// In your API handler
const result = await limiter.check(userId);
if (!result.allowed) {
  return Response.json(
    { error: 'Rate limited', retryAfter: result.resetAt },
    { status: 429 },
  );
}
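For intuition, a fixed-window counter is the simplest way to implement this check. The sketch below uses an in-memory map where the toolkit takes a `cache` backend; the function name and injectable clock are assumptions for testability.

```typescript
// Fixed-window rate limiter sketch: one counter per key per window.
function createFixedWindowLimiter(
  max: number,
  windowSeconds: number,
  now: () => number = Date.now, // injectable clock (assumption, for testing)
) {
  const counters = new Map<string, { count: number; windowStart: number }>();
  return {
    check(key: string) {
      const t = now();
      const windowMs = windowSeconds * 1000;
      const entry = counters.get(key);
      // Start a fresh window if none exists or the old one expired.
      if (!entry || t - entry.windowStart >= windowMs) {
        counters.set(key, { count: 1, windowStart: t });
        return { allowed: true, remaining: max - 1, resetAt: t + windowMs };
      }
      entry.count += 1;
      return {
        allowed: entry.count <= max,
        remaining: Math.max(0, max - entry.count),
        resetAt: entry.windowStart + windowMs,
      };
    },
  };
}
```

Fixed windows allow short bursts at window boundaries; a shared cache like the one the toolkit takes also makes the counters work across multiple server instances.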

API Key Authentication

import { createApiKeyGuard } from '@jamaalbuilds/ai-toolkit/auth';

const Guard = createApiKeyGuard(process.env.API_KEY!);
const guard = new Guard();

// In your API handler (NestJS-style canActivate)
const isValid = guard.canActivate({
  switchToHttp: () => ({
    getRequest: () => ({ headers: Object.fromEntries(request.headers) }),
  }),
});
if (!isValid) {
  return Response.json({ error: 'Unauthorized' }, { status: 401 });
}
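The timing-safe comparison mentioned in the best practices below can be done with Node's built-in `crypto.timingSafeEqual`. This sketch shows the comparison itself; it is not the toolkit's internal code.

```typescript
import { timingSafeEqual } from 'node:crypto';

// Timing-safe API key comparison sketch (illustrative, not the toolkit's code).
function isValidApiKey(provided: string, expected: string): boolean {
  const a = Buffer.from(provided);
  const b = Buffer.from(expected);
  // timingSafeEqual throws on length mismatch, so check length first;
  // the early return leaks only the key's length, not its contents.
  if (a.length !== b.length) return false;
  return timingSafeEqual(a, b);
}
```

A naive `provided === expected` comparison can exit early at the first differing byte, which an attacker can exploit by measuring response times.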

Best Practices

  1. Always sanitize user input before LLM calls
  2. Log everything — audit trails are critical for compliance
  3. Validate outputs — guardrails catch PII leaks in responses
  4. Rate limit — protect against abuse and runaway costs
  5. Use timing-safe comparison for API keys (built into createApiKeyGuard)