security
PII detection, audit logging, rate limiting, and guardrails
Overview
The security module provides enterprise-grade security features: PII detection and sanitization, audit logging, rate limiting, and configurable guardrails for AI outputs. It uses regex-based detection for zero-dependency, Edge-compatible operation.
No peer dependencies required.
Quick Start
import { detectPII, sanitizeForLLM } from '@jamaalbuilds/ai-toolkit/security';
const findings = detectPII('Contact John Smith at john@example.com');
// [{ type: 'EMAIL', match: 'john@example.com', ... }, { type: 'NAME', ... }]
const safe = sanitizeForLLM('My SSN is 123-45-6789');
// 'My SSN is [REDACTED_SSN]'
API Reference
detectPII(text)
Detect personally identifiable information in text.
function detectPII(text: string): PIIFinding[]
Detects: EMAIL, PHONE, SSN, NAME, DOB
const findings = detectPII('John Smith, born 1990-01-15, SSN 123-45-6789');
for (const f of findings) {
console.log(`${f.type}: ${f.match} at position ${f.start}-${f.end}`);
}
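To illustrate how regex-based detection of this kind can work, here is a minimal self-contained sketch. The pattern set, the `PIIFinding` shape, and the function name `detectPIISketch` are assumptions for illustration, not the library's actual implementation.

```typescript
// Minimal sketch of regex-based PII detection (illustrative patterns only).
type PIIFinding = { type: string; match: string; start: number; end: number };

const PATTERNS: Record<string, RegExp> = {
  EMAIL: /[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/g,
  SSN: /\b\d{3}-\d{2}-\d{4}\b/g,
};

function detectPIISketch(text: string): PIIFinding[] {
  const findings: PIIFinding[] = [];
  for (const [type, pattern] of Object.entries(PATTERNS)) {
    // matchAll requires the /g flag; each match carries its start index.
    for (const m of text.matchAll(pattern)) {
      findings.push({ type, match: m[0], start: m.index!, end: m.index! + m[0].length });
    }
  }
  return findings;
}
```

Because everything is plain regex matching, this approach needs no native modules and runs unchanged in Edge runtimes.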
sanitizeForLLM(text)
Replace detected PII with redaction tokens before sending to an LLM.
function sanitizeForLLM(text: string): string
const safe = sanitizeForLLM('Email me at john@example.com');
// 'Email me at [REDACTED_EMAIL]'
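Sanitization can be built on the same patterns by replacing each match with a typed redaction token. The following is a hedged sketch under the same assumed patterns as above, not the library's internals.

```typescript
// Sketch: replace matched PII with [REDACTED_<TYPE>] tokens (illustrative patterns).
function sanitizeSketch(text: string): string {
  const patterns: Record<string, RegExp> = {
    EMAIL: /[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/g,
    SSN: /\b\d{3}-\d{2}-\d{4}\b/g,
  };
  let out = text;
  for (const [type, pattern] of Object.entries(patterns)) {
    out = out.replace(pattern, `[REDACTED_${type}]`);
  }
  return out;
}
```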
createRateLimiter(cache, config)
Create a rate limiter for request throttling.
function createRateLimiter(cache: CacheClient, config?: RateLimitConfig): RateLimiter
| Parameter | Type | Description |
|---|---|---|
| cache | CacheClient | Cache client instance |
| config.max | number | Maximum requests per window |
| config.windowSeconds | number | Window duration in seconds |
const limiter = createRateLimiter(cache, { max: 100, windowSeconds: 60 });
const result = await limiter.check('user-123');
if (!result.allowed) {
console.log(`Rate limited. Resets at ${result.resetAt}`);
}
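A cache-backed limiter like this is commonly implemented as a fixed-window counter. The sketch below shows that approach against a minimal assumed `CacheClient` (`get`/`set` with a TTL); the real `CacheClient` interface and the limiter's internal algorithm may differ.

```typescript
// Assumed minimal cache interface (the actual CacheClient may differ).
interface CacheClient {
  get(key: string): Promise<string | null>;
  set(key: string, value: string, ttlSeconds?: number): Promise<void>;
}

// Sketch: fixed-window rate limiting — one counter per (key, window) bucket.
function createRateLimiterSketch(cache: CacheClient, config = { max: 100, windowSeconds: 60 }) {
  return {
    async check(key: string) {
      const windowId = Math.floor(Date.now() / 1000 / config.windowSeconds);
      const bucketKey = `rl:${key}:${windowId}`;
      const count = Number((await cache.get(bucketKey)) ?? '0') + 1;
      // TTL lets the bucket expire once its window has passed.
      await cache.set(bucketKey, String(count), config.windowSeconds);
      const resetAt = new Date((windowId + 1) * config.windowSeconds * 1000);
      return { allowed: count <= config.max, remaining: Math.max(0, config.max - count), resetAt };
    },
  };
}
```

Note that a read-then-write counter like this can over-admit under concurrency; a production limiter would use an atomic increment in the cache backend.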
createAuditLogger(serviceName)
Create an audit logger for tracking security-relevant events.
function createAuditLogger(serviceName: string): AuditLogger
const audit = createAuditLogger('my-service');
audit.log('pii_detected', {
userId: 'user-123',
details: { types: ['EMAIL', 'SSN'], sanitized: true },
});
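Conceptually, an audit logger of this shape stamps each event with the service name and a timestamp and forwards it to a sink. The sketch below illustrates that with an in-memory sink; the event shape and sink are assumptions, not the library's implementation.

```typescript
// Assumed event shape, loosely based on the AuditEvent type listed below.
type AuditEvent = {
  service: string;
  action: string;
  userId?: string;
  details?: unknown;
  timestamp: string;
};

// Sketch: audit logger with an in-memory sink (a real one would ship events elsewhere).
function createAuditLoggerSketch(serviceName: string) {
  const events: AuditEvent[] = [];
  return {
    log(action: string, fields: { userId?: string; details?: unknown } = {}) {
      const event: AuditEvent = {
        service: serviceName,
        action,
        timestamp: new Date().toISOString(),
        ...fields,
      };
      events.push(event);
      return event;
    },
    events,
  };
}
```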
createGuardrails(rules)
Create configurable guardrails for AI output validation.
function createGuardrails(rules: GuardrailRule[]): Guardrails
checkOutput(response, rules)
Validate AI output against guardrail rules.
function checkOutput(response: string, rules: GuardrailRule[]): GuardrailResult
const rules = [
  // Note: as in both rules here, a rule's test returns true when the output violates it.
  { id: 'no-pii', description: 'No PII in output', test: (text) => detectPII(text).length > 0 },
  { id: 'max-length', description: 'Under 10k chars', test: (text) => text.length >= 10000 },
];
const guardrails = createGuardrails(rules);
const result = guardrails.check(aiResponse);
if (!result.allowed) {
console.log('Violations:', result.violations);
console.log('Reasons:', result.reasons);
}
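The check step itself reduces to running every rule's test against the response and collecting the failures. Here is a minimal sketch of that logic, assuming (as the example rules above do) that a test returns true when the output violates its rule; this is an illustration, not the library's actual `checkOutput`.

```typescript
type GuardrailRule = { id: string; description: string; test: (text: string) => boolean };
type GuardrailResult = { allowed: boolean; violations: string[]; reasons: string[] };

// Sketch: a rule's test returning true is treated as a violation.
function checkOutputSketch(response: string, rules: GuardrailRule[]): GuardrailResult {
  const violated = rules.filter((r) => r.test(response));
  return {
    allowed: violated.length === 0,
    violations: violated.map((r) => r.id),
    reasons: violated.map((r) => r.description),
  };
}
```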
Types
PIIFinding — type, match, start, end
PIIType — 'SSN' | 'EMAIL' | 'PHONE' | 'NAME' | 'DOB'
RateLimiter — limiter with check() method
RateLimitResult — allowed, remaining, resetAt
AuditLogger — logger with log()
AuditEvent — action, userId, details, timestamp
Guardrails — validator instance with check() method
GuardrailResult — allowed, violations, reasons
GuardrailRule — id, description, test function