# Security & Compliance

PII protection, audit trails, and guardrails for regulated industries.

## Overview

AI Toolkit includes built-in security features for regulated industries: PII detection, data sanitization, audit logging, rate limiting, and configurable guardrails.
## PII Detection and Sanitization

Always sanitize user input before sending it to an LLM:
```typescript
import { detectPII, sanitizeForLLM } from '@jamaalbuilds/ai-toolkit/security';

// Detect PII
const findings = detectPII(userInput);
if (findings.length > 0) {
  console.warn('PII detected:', findings.map(f => f.type));
}

// Sanitize before sending to the LLM
const safeInput = sanitizeForLLM(userInput);
const result = await ai.generate(safeInput);
```
### Detected PII Types

| Type | Example |
|---|---|
| SSN | 123-45-6789 |
| EMAIL | john@example.com |
| PHONE | (555) 123-4567 |
| NAME | John Smith |
| DOB | 1990-01-15 |
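Detection for structured types like these is typically regex-based. The following is a minimal, self-contained sketch of how a few of the types above could be matched and redacted — `detectPIISketch`, `sanitizeSketch`, and the patterns are illustrative, not the toolkit's actual implementation:

```typescript
type PIIFinding = { type: string; match: string };

// Illustrative patterns for three of the structured PII types.
const PII_PATTERNS: Record<string, RegExp> = {
  SSN: /\b\d{3}-\d{2}-\d{4}\b/g,
  EMAIL: /\b[\w.+-]+@[\w-]+\.[\w.]+\b/g,
  PHONE: /\(\d{3}\)\s?\d{3}-\d{4}/g,
};

function detectPIISketch(text: string): PIIFinding[] {
  const findings: PIIFinding[] = [];
  for (const [type, pattern] of Object.entries(PII_PATTERNS)) {
    for (const match of text.matchAll(pattern)) {
      findings.push({ type, match: match[0] });
    }
  }
  return findings;
}

// Replace each match with a typed placeholder before the text reaches the LLM.
function sanitizeSketch(text: string): string {
  let out = text;
  for (const [type, pattern] of Object.entries(PII_PATTERNS)) {
    out = out.replace(pattern, `[${type}]`);
  }
  return out;
}
```

Free-text types like NAME generally need more than a regex (e.g. named-entity recognition), which is why a library-provided detector is preferable to rolling your own.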
## Audit Logging

Track every security-relevant event:
```typescript
import { createAuditLogger } from '@jamaalbuilds/ai-toolkit/security';

const audit = createAuditLogger('my-service');

audit.log('llm_query', {
  userId: user.id,
  details: {
    model: 'llama-3.3-70b',
    piiDetected: findings.length > 0,
    sanitized: true,
  },
});
```
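Conceptually, each call appends a timestamped, structured entry. A minimal in-memory sketch of that shape — `createAuditLoggerSketch` and the `AuditEntry` fields are assumptions for illustration; a real logger would write to durable, append-only storage:

```typescript
type AuditEntry = {
  timestamp: string;
  service: string;
  event: string;
  userId?: string;
  details?: Record<string, unknown>;
};

function createAuditLoggerSketch(service: string) {
  const entries: AuditEntry[] = []; // in practice: durable, append-only storage
  return {
    log(event: string, data: { userId?: string; details?: Record<string, unknown> }) {
      entries.push({
        timestamp: new Date().toISOString(),
        service,
        event,
        ...data,
      });
    },
    entries, // exposed here only for inspection
  };
}
```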
## Guardrails

Validate AI outputs before returning them to users:
```typescript
import { createGuardrails, checkOutput, detectPII } from '@jamaalbuilds/ai-toolkit/security';

// A rule's test returns true when the output VIOLATES the rule.
const rules = [
  {
    id: 'no-pii-in-output',
    description: 'No PII in AI output',
    test: (text: string) => detectPII(text).length > 0,
  },
  {
    id: 'no-harmful-content',
    description: 'No harmful content in output',
    test: (text: string) => !!text.match(/\b(hack|exploit|attack)\b/i),
  },
  {
    id: 'reasonable-length',
    description: 'Output must be under 10000 chars',
    test: (text: string) => text.length >= 10000,
  },
];

const guardrails = createGuardrails(rules);
const result = guardrails.check(aiResponse);

// Or use checkOutput directly
const directResult = checkOutput(aiResponse, rules);
if (!directResult.allowed) {
  console.error('Guardrail violations:', directResult.violations);
  console.error('Reasons:', directResult.reasons);
  return 'I cannot provide that response.';
}
```
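The core of an output check like this is small: run every rule's `test` against the text and collect the ones that fire. A sketch under the assumption (consistent with the rules above) that `test` returns `true` on violation — `checkOutputSketch` is a hypothetical name, not the toolkit's `checkOutput`:

```typescript
type Rule = {
  id: string;
  description: string;
  test: (text: string) => boolean; // true = rule violated
};

function checkOutputSketch(text: string, rules: Rule[]) {
  const violated = rules.filter((r) => r.test(text));
  return {
    allowed: violated.length === 0,
    violations: violated.map((r) => r.id),
    reasons: violated.map((r) => r.description),
  };
}
```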
## Rate Limiting

Protect your API from abuse:
```typescript
import { createRateLimiter } from '@jamaalbuilds/ai-toolkit/security';

const limiter = createRateLimiter(cache, {
  max: 100,
  windowSeconds: 60, // per minute
});

// In your API handler
const result = await limiter.check(userId);
if (!result.allowed) {
  return Response.json(
    { error: 'Rate limited', retryAfter: result.resetAt },
    { status: 429 },
  );
}
```
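The toolkit's limiter delegates counting to the cache backend you pass in. For intuition, here is a minimal in-memory sketch of the fixed-window strategy — `createRateLimiterSketch` and its return shape are illustrative assumptions, not the library's API:

```typescript
type Window = { count: number; resetAt: number };

function createRateLimiterSketch(max: number, windowSeconds: number) {
  const windows = new Map<string, Window>();
  return {
    check(key: string, now = Date.now()) {
      const w = windows.get(key);
      // Start a fresh window if none exists or the old one expired.
      if (!w || now >= w.resetAt) {
        const resetAt = now + windowSeconds * 1000;
        windows.set(key, { count: 1, resetAt });
        return { allowed: true, remaining: max - 1, resetAt };
      }
      w.count += 1;
      return {
        allowed: w.count <= max,
        remaining: Math.max(0, max - w.count),
        resetAt: w.resetAt,
      };
    },
  };
}
```

Fixed windows allow brief bursts at window boundaries; a sliding-window or token-bucket strategy smooths that out at the cost of more bookkeeping.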
## API Key Authentication
```typescript
import { createApiKeyGuard } from '@jamaalbuilds/ai-toolkit/auth';

const Guard = createApiKeyGuard(process.env.API_KEY!);
const guard = new Guard();

// In your API handler (NestJS-style canActivate)
const isValid = guard.canActivate({
  switchToHttp: () => ({
    getRequest: () => ({ headers: Object.fromEntries(request.headers) }),
  }),
});

if (!isValid) {
  return Response.json({ error: 'Unauthorized' }, { status: 401 });
}
```
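A guard like this should compare keys in constant time so an attacker cannot learn the key byte-by-byte from response timing (the toolkit builds this into `createApiKeyGuard`). A sketch using Node's `crypto.timingSafeEqual` — `safeCompare` is a hypothetical helper, not part of the toolkit:

```typescript
import { createHash, timingSafeEqual } from 'node:crypto';

// Hashing both values first yields equal-length buffers, so
// timingSafeEqual never throws on a length mismatch (which would
// itself leak the key's length).
function safeCompare(a: string, b: string): boolean {
  const ha = createHash('sha256').update(a).digest();
  const hb = createHash('sha256').update(b).digest();
  return timingSafeEqual(ha, hb);
}
```

A naive `a === b` short-circuits at the first differing character, which is exactly the timing signal constant-time comparison is designed to remove.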
## Best Practices

- Always sanitize user input before LLM calls
- Log everything — audit trails are critical for compliance
- Validate outputs — guardrails catch PII leaks in responses
- Rate limit — protect against abuse and runaway costs
- Use timing-safe comparison for API keys (built into `createApiKeyGuard`)