ArchPilot — End-to-End Technical Design Document

The complete, point-by-point development blueprint. Every file, every table, every endpoint, every pattern. Follow this document blindly — it covers the entire development lifecycle from empty repo to production B2B enterprise product.

At a glance: 5 applications · 51 components · 20+ DB tables · 80+ API endpoints · 12 design patterns · 34 sprints.

1. Complete Tech Stack — Pinned Versions

Every technology used. Versions are pinned. Do NOT upgrade without testing. This is not a suggestion list — these are exact dependencies.

RULE: Pin Everything

Use exact versions in package.json (no ^ or ~). Use lockfiles. Every upgrade is a deliberate PR.
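For instance, a pinned dependencies block might look like this (package names are from the stack below; the patch versions shown are placeholders illustrating the format, not mandated releases):

```json
{
  "dependencies": {
    "next": "15.1.0",
    "react": "19.0.0",
    "zod": "3.24.0"
  },
  "packageManager": "pnpm@9.0.0"
}
```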

Frontend — Next.js + React (App 2: Web)

| Package | Version | Purpose | Why This |
|---|---|---|---|
| next | 15.1.x | Full-stack React framework | App Router, Server Components, API routes, middleware, SSR/SSG |
| react / react-dom | 19.x | UI rendering | Server Components, Suspense, useOptimistic, useFormStatus |
| typescript | 5.7.x | Type safety | Enterprise-grade — no JavaScript files allowed |
| tailwindcss | 4.x | Utility-first CSS | Rapid UI iteration, design tokens via config |
| shadcn/ui | latest | Component library | Copy-paste components, full control, accessible by default |
| @tanstack/react-query | 5.x | Server state management | Caching, background refetch, optimistic updates, infinite queries |
| zustand | 5.x | Client state management | Minimal boilerplate, middleware support, devtools |
| @xyflow/react | 12.x | Diagram editor (React Flow) | Diagram Intelligence visual editing canvas |
| framer-motion | 12.x | Animations | Layout animations, gesture support, exit animations |
| zod | 3.x | Runtime validation | Schema validation for forms, API responses, AI outputs |
| react-hook-form | 7.x | Form handling | Performance (uncontrolled), zod resolver integration |
| @supabase/ssr | 0.5.x | Supabase SSR helpers | Cookie-based auth in Next.js Server Components |
| next-intl | 4.x | i18n | Enterprise needs multi-language support |
| recharts | 2.x | Charts/dashboards | Composable, responsive, charting native to React |
| @monaco-editor/react | 4.x | Code editor | Code Intelligence code viewing with syntax highlighting |
| sonner | 2.x | Toast notifications | Clean API, stacking, custom renders |
Backend — Supabase Ecosystem (App 3: Backend)

| Service | Replaces | Purpose | Config Notes |
|---|---|---|---|
| Supabase PostgreSQL | RDS, PlanetScale | Primary database | Enable pgvector, pg_cron, pg_trgm extensions |
| Supabase Auth | Auth0, Clerk | Authentication | Google SSO, SAML (Enterprise), magic link, MFA |
| Supabase Realtime | Pusher, Socket.io | WebSocket subscriptions | Postgres Changes, Broadcast, Presence channels |
| Supabase Storage | S3, Cloudinary | File storage | Diagrams, audio files, exported reports |
| Supabase Edge Functions | AWS Lambda | Serverless compute (Deno) | Webhooks, lightweight processing, CRON triggers |
| Supabase Vault | HashiCorp Vault | Secrets management | API keys, tokens — never in env vars |
| Supabase Queues | SQS, BullMQ | Job queues | Background processing for heavy engine tasks |
| pgvector (extension) | Pinecone, Weaviate | Vector embeddings | 1536-dim OpenAI / 1024-dim Cohere embeddings |
| pg_cron (extension) | CloudWatch Events | Scheduled jobs | ADR health checks, stale decision alerts, usage rollups |

Supabase Project Config

Region: us-east-1 (or closest to users). Plan: Pro ($25/mo) for development, Team ($599/mo) for production. Enable Point-in-Time Recovery; add Read Replicas when >1,000 users.

AI / ML Layer — Multi-Model Strategy (Intelligence Core)

| Provider | Model | Use Case | Latency | Cost/1M tokens |
|---|---|---|---|---|
| Anthropic | Claude Sonnet 4.5 | Primary analysis — meetings, code review, ADR generation | ~2s | $3 in / $15 out |
| Anthropic | Claude Haiku 4.5 | Fast classification — trigger detection, PII filter, routing | ~400ms | $0.25 in / $1.25 out |
| Anthropic | Claude Opus 4.6 | Complex reasoning — architecture review, failure simulation | ~5s | $15 in / $75 out |
| OpenAI | GPT-4o | Vision — diagram parsing (PNG/JPG to structured graph) | ~3s | $2.50 in / $10 out |
| OpenAI | text-embedding-3-large | Embeddings — semantic search, similarity, caching | ~200ms | $0.13 / 1M |
| Deepgram | Nova-3 | Speech-to-text — real-time meeting transcription | ~200ms | $0.0059/min |
| Groq | Llama 4 Scout | Fallback — when primary providers are down/slow | ~300ms | $0.11 in / $0.34 out |

Model Router Logic

Every AI call goes through the Model Router. Route by: (1) task complexity → Haiku for simple, Sonnet for medium, Opus for complex. (2) latency requirement → meeting pipeline uses Sonnet/Haiku only. (3) cost budget → track spend per org, throttle when approaching limit. (4) fallback chain → if Claude down, route to Groq.

Electron Desktop Agent (App 1: Agent)

| Package | Version | Purpose |
|---|---|---|
| electron | 33.x | Desktop runtime — system tray, audio capture |
| electron-builder | 25.x | Build + auto-update (macOS DMG, Windows NSIS, Linux AppImage) |
| electron-store | 10.x | Encrypted local config persistence |
| @electron/remote | 2.x | IPC between main/renderer |
Browser Extension (App 4: Extension)

| Technology | Purpose |
|---|---|
| Chrome Manifest V3 | Extension framework — service worker, content scripts |
| Plasmo Framework | Extension build tooling — HMR, TypeScript, React support |
| React 19 | Popup + sidebar UI rendering |
DevOps & Infrastructure (Tooling)

| Tool | Purpose | Config |
|---|---|---|
| GitHub Actions | CI/CD | Lint → Type check → Test → Build → Deploy |
| Hostinger VPS | Next.js hosting (self-managed) | PM2 process manager, Nginx reverse proxy, SSL via Certbot |
| Turborepo | Monorepo build system | Shared packages, parallel builds, remote caching |
| Docker | Containerization | Electron build environment, self-hosted deployment |
| Sentry | Error tracking | Source maps, performance monitoring, session replay |
| PostHog | Product analytics | Feature flags, session recording, funnels, A/B tests |
| Stripe | Billing | Subscriptions, usage-based metering, invoices, tax |
| Resend | Transactional email | React Email templates, webhooks |
| Upstash | Redis (rate limiting) | API rate limiting, session throttling, temp cache |
Code Quality Tooling (DX)

| Tool | Version | Config |
|---|---|---|
| eslint | 9.x | Flat config. Rules: @typescript-eslint/strict, import/order, no-console |
| prettier | 3.x | printWidth: 100, singleQuote: true, trailingComma: 'all', semi: true |
| husky | 9.x | pre-commit: lint-staged. pre-push: type-check + test |
| lint-staged | 15.x | Run eslint --fix + prettier on staged files only |
| commitlint | 19.x | Conventional commits: feat / fix / chore / docs / refactor / test / perf |
| vitest | 3.x | Unit + integration tests. Coverage threshold: 80% |
| playwright | 1.49.x | E2E tests. Browsers: chromium, firefox, webkit |
| knip | 5.x | Detect unused exports, dependencies, files |

2. Design Patterns — Mandatory for All Code

These are not suggestions. Every component, service, and module MUST follow these patterns. Deviations require a written ADR.

ENFORCEMENT

ESLint custom rules + PR review checklist enforce these patterns. Code that violates them will not merge.

Pattern 1: Repository Pattern — All Database Access (REQUIRED)

WHY: No component or API route directly calls Supabase. All DB access goes through repository classes. This isolates the data layer, enables testing with mocks, and ensures RLS is always applied.

```ts
// src/lib/repositories/base.repository.ts
import { SupabaseClient } from '@supabase/supabase-js';
import { Database } from '@/types/database.types';

export abstract class BaseRepository<T> {
  constructor(
    protected readonly supabase: SupabaseClient<Database>,
    protected readonly tableName: string
  ) {}

  async findById(id: string): Promise<T | null> {
    const { data, error } = await this.supabase
      .from(this.tableName)
      .select('*')
      .eq('id', id)
      .single();
    if (error) throw new DatabaseError(error.message);
    return data as T;
  }

  async findMany(filters: Partial<T>, opts?: QueryOpts): Promise<T[]> { /* ... */ }
  async create(input: Omit<T, 'id' | 'created_at'>): Promise<T> { /* ... */ }
  async update(id: string, input: Partial<T>): Promise<T> { /* ... */ }
  async softDelete(id: string): Promise<void> { /* set deleted_at, never hard delete */ }
}

// Usage: extend BaseRepository
export class ProjectRepository extends BaseRepository<Project> {
  constructor(supabase: SupabaseClient<Database>) {
    super(supabase, 'projects');
  }

  async findByTeam(teamId: string): Promise<Project[]> {
    const { data } = await this.supabase
      .from('projects').select('*').eq('team_id', teamId);
    return data ?? [];
  }
}
```
Pattern 2: Service Layer — All Business Logic (REQUIRED)

WHY: API routes are thin controllers. They validate input (Zod), call a service, return the response. All business logic lives in service classes. Services call repositories, never Supabase directly.

```ts
// src/lib/services/project.service.ts
export class ProjectService {
  constructor(
    private projectRepo: ProjectRepository,
    private auditLogger: AuditLogger,
    private eventBus: EventBus
  ) {}

  async createProject(input: CreateProjectInput, userId: string): Promise<Project> {
    // 1. Validate business rules
    // NOTE: shown inline for brevity — this lookup belongs in a repository method
    const team = await this.projectRepo.supabase
      .from('team_members').select('role')
      .eq('team_id', input.teamId).eq('user_id', userId).single();
    if (!team.data || !['owner', 'admin'].includes(team.data.role))
      throw new ForbiddenError('Only owners/admins can create projects');

    // 2. Execute
    const project = await this.projectRepo.create({ ...input, created_by: userId });

    // 3. Side effects
    await this.auditLogger.log('project.created', { projectId: project.id, userId });
    this.eventBus.emit('project.created', project);

    return project;
  }
}
```
Pattern 3: Dependency Injection Container (REQUIRED)

WHY: Services depend on repositories, repositories depend on the Supabase client. We wire them using a DI container so tests can swap in mocks without changing service code.

```ts
// src/lib/container.ts
import { createServerClient } from '@supabase/ssr';

export function createContainer(supabase: SupabaseClient<Database>) {
  // Repositories
  const projectRepo = new ProjectRepository(supabase);
  const sessionRepo = new SessionRepository(supabase);
  const decisionRepo = new DecisionRepository(supabase);
  const adrRepo = new AdrRepository(supabase);

  // Shared services
  const auditLogger = new AuditLogger(supabase);
  const eventBus = new EventBus();
  const modelRouter = new ModelRouter(supabase);

  // Business services
  const projectService = new ProjectService(projectRepo, auditLogger, eventBus);
  const meetingService = new MeetingService(sessionRepo, modelRouter, auditLogger);
  const diagramService = new DiagramService(supabase, modelRouter);
  const codeService = new CodeIntelligenceService(supabase, modelRouter);
  const adrService = new AdrService(adrRepo, modelRouter, auditLogger);

  return {
    projectRepo, sessionRepo, decisionRepo, adrRepo,
    auditLogger, eventBus, modelRouter,
    projectService, meetingService, diagramService, codeService, adrService,
  };
}

export type Container = ReturnType<typeof createContainer>;
```
Pattern 4: Result Type — No Throwing in Services (REQUIRED)

WHY: Thrown errors are invisible in TypeScript — callers don't know what can fail. Use a Result type so every return value explicitly declares success or failure.

```ts
// src/lib/types/result.ts
type Result<T, E = AppError> =
  | { success: true; data: T }
  | { success: false; error: E };

function ok<T>(data: T): Result<T> { return { success: true, data }; }
function err<E>(error: E): Result<never, E> { return { success: false, error }; }

// Service method returns Result — caller MUST handle both cases
async createProject(input): Promise<Result<Project>> {
  const team = await this.checkPermission(input.teamId, userId);
  if (!team) return err({ code: 'FORBIDDEN', message: 'No permission' });
  const project = await this.projectRepo.create(input);
  return ok(project);
}
```
Pattern 5: Prompt Registry — All AI Prompts (REQUIRED)

WHY: Prompts are NOT hardcoded strings. Every prompt is stored in the database with version tracking, A/B testing support, and rollback capability. Think of prompts like database migrations — versioned and tracked.

```ts
// Database table: prompt_registry
// Columns: id, slug, version, model, system_prompt, user_template,
//          temperature, max_tokens, is_active, created_at, performance_score

// src/lib/ai/prompt-registry.ts
export class PromptRegistry {
  private cache = new Map<string, PromptConfig>();

  async getPrompt(slug: string): Promise<PromptConfig> {
    if (this.cache.has(slug)) return this.cache.get(slug)!;
    const { data } = await this.supabase
      .from('prompt_registry')
      .select('*')
      .eq('slug', slug)
      .eq('is_active', true)
      .order('version', { ascending: false })
      .limit(1).single();
    this.cache.set(slug, data);
    return data;
  }

  renderTemplate(template: string, vars: Record<string, string>): string {
    return template.replace(/\{\{(\w+)\}\}/g, (_, key) => vars[key] ?? '');
  }
}

// Initial prompt slugs (seeded in Sprint 1):
// meeting.suggestion, meeting.summary, diagram.analyze,
// diagram.antipattern, code.review, code.drift,
// adr.generate, adr.quality_score, bestpractice.recommend
```
Pattern 6: Event Bus — Decoupled Side Effects (RECOMMENDED)

WHY: When a meeting ends, many things happen: summary generation, ADR extraction, embedding creation, notification sending. Without an event bus, the meeting service must know about ALL downstream consumers. The event bus decouples them.

```ts
// src/lib/events/event-bus.ts
type EventMap = {
  'meeting.started': { sessionId: string; projectId: string };
  'meeting.ended': { sessionId: string; transcript: string };
  'decision.detected': { sessionId: string; decision: Decision };
  'diagram.uploaded': { diagramId: string; format: DiagramFormat };
  'diagram.analyzed': { diagramId: string; issues: Issue[] };
  'code.pr_opened': { repoId: string; prNumber: number };
  'adr.created': { adrId: string; projectId: string };
  'suggestion.accepted': { suggestionId: string; userId: string };
  'suggestion.rejected': { suggestionId: string; reason?: string };
};

export class EventBus {
  private handlers = new Map<string, Set<Function>>();

  on<K extends keyof EventMap>(event: K, handler: (data: EventMap[K]) => void) {
    if (!this.handlers.has(event)) this.handlers.set(event, new Set());
    this.handlers.get(event)!.add(handler);
  }

  async emit<K extends keyof EventMap>(event: K, data: EventMap[K]) {
    const fns = this.handlers.get(event);
    if (fns) await Promise.allSettled([...fns].map(fn => fn(data)));
  }
}
```
Pattern 7: Circuit Breaker — All External Calls (REQUIRED)

WHY: If Deepgram is down, you don't want to keep sending requests and waiting for timeouts. The circuit breaker detects failures and "opens" the circuit, returning a fallback immediately.

```ts
// src/lib/resilience/circuit-breaker.ts
type State = 'CLOSED' | 'OPEN' | 'HALF_OPEN';

export class CircuitBreaker {
  private state: State = 'CLOSED';
  private failureCount = 0;
  private lastFailure = 0;

  constructor(
    private name: string,
    private threshold: number = 5,        // failures before opening
    private resetTimeout: number = 30000, // ms to wait before half-open
  ) {}

  async execute<T>(fn: () => Promise<T>, fallback?: () => T): Promise<T> {
    if (this.state === 'OPEN') {
      if (Date.now() - this.lastFailure > this.resetTimeout) {
        this.state = 'HALF_OPEN';
      } else {
        if (fallback) return fallback();
        throw new CircuitOpenError(this.name);
      }
    }
    try {
      const result = await fn();
      this.onSuccess();
      return result;
    } catch (e) {
      this.onFailure();
      if (fallback) return fallback();
      throw e;
    }
  }

  private onSuccess() { this.failureCount = 0; this.state = 'CLOSED'; }

  private onFailure() {
    this.failureCount++;
    this.lastFailure = Date.now();
    if (this.failureCount >= this.threshold) this.state = 'OPEN';
  }
}
```
Pattern 8: Strategy Pattern — Model Router (REQUIRED)

WHY: Different tasks need different AI models. The Model Router uses the Strategy pattern to select the right model based on task type, latency requirements, cost budget, and provider health.

```ts
// src/lib/ai/model-router.ts
interface AIProvider {
  name: string;
  complete(prompt: PromptConfig, input: string): Promise<string>;
  estimateCost(inputTokens: number, outputTokens: number): number;
}

type TaskComplexity = 'low' | 'medium' | 'high' | 'critical';

export class ModelRouter {
  private providers: Map<string, AIProvider>;
  private breakers: Map<string, CircuitBreaker>;

  async route(task: {
    slug: string;         // prompt registry slug
    complexity: TaskComplexity;
    maxLatencyMs: number; // e.g. 2000 for meeting, 10000 for diagram
    orgId: string;        // for cost tracking
    input: string;
  }): Promise<string> {
    // 1. Get prompt from registry
    const prompt = await this.promptRegistry.getPrompt(task.slug);
    // 2. Select model based on complexity + latency
    const model = this.selectModel(task.complexity, task.maxLatencyMs);
    // 3. Check circuit breaker for selected provider
    const breaker = this.breakers.get(model.name)!;
    return breaker.execute(
      () => model.complete(prompt, task.input),
      () => this.fallbackChain(task) // try next provider
    );
  }

  private selectModel(c: TaskComplexity, maxMs: number): AIProvider {
    if (c === 'low' || maxMs < 1000) return this.providers.get('haiku')!;
    if (c === 'medium') return this.providers.get('sonnet')!;
    if (c === 'high') return this.providers.get('sonnet')!;
    return this.providers.get('opus')!; // critical
  }
}
```
Pattern 9: Observer Pattern — Real-time Subscriptions (REQUIRED)

WHY: The meeting dashboard shows live suggestions, live transcript, live speaker changes. Supabase Realtime fires Postgres changes, and the frontend observes them through React hooks.

```ts
// src/hooks/use-realtime-suggestions.ts
export function useRealtimeSuggestions(sessionId: string) {
  const supabase = useSupabaseClient();
  const [suggestions, setSuggestions] = useState<Suggestion[]>([]);

  useEffect(() => {
    const channel = supabase
      .channel(`session-${sessionId}`)
      .on('postgres_changes', {
        event: 'INSERT',
        schema: 'public',
        table: 'suggestions',
        filter: `session_id=eq.${sessionId}`,
      }, (payload) => {
        setSuggestions(prev => [payload.new as Suggestion, ...prev]);
      })
      .subscribe();

    return () => { supabase.removeChannel(channel); };
  }, [sessionId]);

  return suggestions;
}
```
Pattern 10: Factory Pattern — Diagram Parsers (RECOMMENDED)

WHY: The Diagram Engine supports 7+ formats. Each format has a different parser. The Factory pattern creates the right parser based on the file extension, keeping the DiagramService clean.

```ts
// src/lib/engines/diagram/parser-factory.ts
interface DiagramParser {
  parse(input: Buffer | string): Promise<DiagramGraph>;
}

class DrawioParser implements DiagramParser { /* XML parsing */ }
class ExcalidrawParser implements DiagramParser { /* JSON parsing */ }
class MermaidParser implements DiagramParser { /* Mermaid AST */ }
class VisionParser implements DiagramParser { /* GPT-4o Vision for PNG/JPG/PDF */ }
class SvgParser implements DiagramParser { /* SVG DOM parsing */ }
class TerraformParser implements DiagramParser { /* HCL parsing */ }

export class ParserFactory {
  static create(format: DiagramFormat): DiagramParser {
    const parsers: Record<DiagramFormat, () => DiagramParser> = {
      'drawio': () => new DrawioParser(),
      'excalidraw': () => new ExcalidrawParser(),
      'mermaid': () => new MermaidParser(),
      'png': () => new VisionParser(),
      'jpg': () => new VisionParser(),
      'pdf': () => new VisionParser(),
      'svg': () => new SvgParser(),
      'terraform': () => new TerraformParser(),
    };
    if (!parsers[format]) throw new UnsupportedFormatError(format);
    return parsers[format]();
  }
}
```
Pattern 11: Middleware Chain — API Request Pipeline (REQUIRED)

WHY: Every API request goes through: auth check → rate limit → input validation → permission check → handler → output validation → audit log. This is a middleware chain, not ad-hoc checks in every route.

```ts
// src/lib/middleware/api-pipeline.ts
type Middleware = (ctx: ApiContext, next: () => Promise<void>) => Promise<void>;

export const withAuth: Middleware = async (ctx, next) => {
  const session = await ctx.supabase.auth.getUser();
  if (!session.data.user) { ctx.res = { status: 401, body: 'Unauthorized' }; return; }
  ctx.userId = session.data.user.id;
  await next();
};

export const withRateLimit = (limit: number, window: string): Middleware =>
  async (ctx, next) => {
    const key = `rate:${ctx.userId}:${ctx.path}`;
    const count = await redis.incr(key);
    if (count === 1) await redis.expire(key, parseDuration(window));
    if (count > limit) { ctx.res = { status: 429, body: 'Rate limited' }; return; }
    await next();
  };

export const withValidation = (schema: ZodSchema): Middleware =>
  async (ctx, next) => {
    const result = schema.safeParse(ctx.body);
    if (!result.success) { ctx.res = { status: 400, body: result.error.flatten() }; return; }
    ctx.validated = result.data;
    await next();
  };

export const withAudit: Middleware = async (ctx, next) => {
  await next();
  await ctx.container.auditLogger.log(ctx.path, {
    userId: ctx.userId,
    method: ctx.method,
    statusCode: ctx.res?.status,
    ip: ctx.ip,
  });
};
```
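The individual middlewares still need a runner that chains them in order and stops when one short-circuits. A minimal sketch, assuming a simplified `ApiContext` (the `compose` helper and the `trace` field are illustrative, not part of the codebase):

```typescript
// Minimal middleware composer — runs handlers in registration order.
// A middleware short-circuits the chain by simply not calling next().
type ApiContext = { path: string; trace: string[]; res?: { status: number; body: unknown } };
type Middleware = (ctx: ApiContext, next: () => Promise<void>) => Promise<void>;

function compose(...middlewares: Middleware[]): (ctx: ApiContext) => Promise<void> {
  return async (ctx) => {
    let current = -1;
    const dispatch = async (idx: number): Promise<void> => {
      if (idx <= current) throw new Error('next() called twice in one middleware');
      current = idx;
      const mw = middlewares[idx];
      if (mw) await mw(ctx, () => dispatch(idx + 1));
    };
    await dispatch(0);
  };
}

// Example: the second middleware short-circuits, so the handler never runs.
const auth: Middleware = async (ctx, next) => { ctx.trace.push('auth'); await next(); };
const limit: Middleware = async (ctx) => {
  ctx.trace.push('limit');
  ctx.res = { status: 429, body: 'Rate limited' };
};
const handler: Middleware = async (ctx, next) => { ctx.trace.push('handler'); await next(); };

const pipeline = compose(auth, limit, handler);
```

The `dispatch` index check guards against a middleware awaiting `next()` twice, which would otherwise run downstream handlers more than once.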
Pattern 12: CQRS (Light) — Read/Write Separation (RECOMMENDED)

WHY: Dashboard reads (list projects, view suggestions, analytics) are fundamentally different from writes (create session, upload diagram). Separate query handlers from command handlers for clarity and performance. Use Supabase Read Replicas for queries at scale.

```ts
// Commands (writes) → go through service layer → primary DB
class CreateProjectCommand { teamId: string; name: string; }
class UploadDiagramCommand { projectId: string; file: File; }
class StartSessionCommand { projectId: string; }

// Queries (reads) → go through query handlers → read replica
class GetProjectDashboardQuery { projectId: string; }
class ListSuggestionsQuery { sessionId: string; page: number; }
class GetAnalyticsQuery { orgId: string; range: DateRange; }

// In Next.js, this maps naturally to:
//   Server Components → queries (read replica)
//   Server Actions / API Routes → commands (primary)
```

3. Enterprise Development Best Practices

Non-negotiable rules for every line of code. These are not optional guidelines — violations block merge.

Error Handling — Zero Silent Failures

  • NEVER use empty catch blocks. Every catch must log or propagate.
  • All API errors return structured JSON: { error: { code, message, details? } }
  • HTTP status codes are semantic: 400 validation, 401 unauthenticated, 403 forbidden, 404 not found, 409 conflict, 422 unprocessable, 429 rate limited, 500 internal.
  • Client-side: global error boundary at app root + per-feature error boundaries.
  • External API calls: always set timeout (default 10s), retry with exponential backoff (max 3), circuit breaker wrapping.
  • Use Sentry for server errors. Every error includes: userId, orgId, sessionId, request path.
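The timeout-plus-retry rule above can be sketched as a small helper (a sketch under stated assumptions: max 3 attempts, 10s per-attempt timeout; the `withRetry` name and jitter-free backoff are illustrative — in production this would sit inside the circuit breaker):

```typescript
// Retry an async call with exponential backoff and a per-attempt timeout.
// Never swallows the error: the last failure is re-thrown to the caller.
async function withRetry<T>(
  fn: () => Promise<T>,
  { attempts = 3, baseDelayMs = 250, timeoutMs = 10_000 } = {},
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      // Race the call against a timeout so a hung provider can't block forever.
      return await Promise.race([
        fn(),
        new Promise<never>((_, reject) =>
          setTimeout(() => reject(new Error('timeout')), timeoutMs),
        ),
      ]);
    } catch (e) {
      lastError = e;
      if (i < attempts - 1) {
        // Exponential backoff: 250ms, 500ms, 1000ms, ...
        await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** i));
      }
    }
  }
  throw lastError; // propagate — zero silent failures
}
```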

Security — Defense in Depth

  • RLS on EVERY table — no exceptions. If a table has no RLS policy, the query returns nothing.
  • API keys stored in Supabase Vault, NEVER in .env files.
  • All user input sanitized via Zod schemas before processing.
  • CSRF protection via Supabase Auth cookie + SameSite=Lax.
  • Content Security Policy headers on all pages.
  • Rate limiting: 100 req/min for standard endpoints, 20 req/min for AI endpoints, 5 req/min for auth endpoints.
  • PII detection runs on ALL text before storage — names, emails, SSNs are masked.
  • Audit log every write operation: who, what, when, from where.
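A minimal sketch of the PII masking step (the patterns shown cover only email- and SSN-shaped strings; the real filter is broader, and the `maskPii` name is illustrative):

```typescript
// Mask obvious PII before text is persisted. Deliberately conservative:
// anything matching an email or SSN shape becomes a placeholder token.
const EMAIL_RE = /[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/g;
const SSN_RE = /\b\d{3}-\d{2}-\d{4}\b/g;

function maskPii(text: string): string {
  return text.replace(EMAIL_RE, '[EMAIL]').replace(SSN_RE, '[SSN]');
}
```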

TypeScript — Strict Mode, No Escapes

  • strict: true in tsconfig.json — no exceptions.
  • noUncheckedIndexedAccess: true — arrays/objects are never blindly accessed.
  • ZERO use of any type. Use unknown + type guards instead.
  • ZERO use of @ts-ignore or @ts-expect-error.
  • All function params and return types explicitly typed.
  • Database types auto-generated via supabase gen types — never hand-write DB types.
  • Zod schemas as single source of truth for runtime + static types: type X = z.infer<typeof XSchema>
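The `unknown` + type-guard rule in practice — a small sketch (the `ProjectLike` shape and `isProjectLike` guard are hypothetical examples, not real codebase types):

```typescript
// Instead of `any`, accept `unknown` and narrow with a user-defined type guard.
type ProjectLike = { id: string; name: string };

function isProjectLike(value: unknown): value is ProjectLike {
  return (
    typeof value === 'object' &&
    value !== null &&
    typeof (value as Record<string, unknown>).id === 'string' &&
    typeof (value as Record<string, unknown>).name === 'string'
  );
}

function projectName(value: unknown): string | null {
  // After the guard, `value` is safely typed — no casts, no `any`.
  return isProjectLike(value) ? value.name : null;
}
```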

Performance — Sub-Second Dashboard

  • Next.js Server Components for initial data fetch — zero client-side waterfalls.
  • React Query for server state on the client — staleTime: 30s for active data, 5min for reference data.
  • Database: indexes on EVERY column used in WHERE, JOIN, ORDER BY.
  • Pagination: cursor-based (not offset) for all list endpoints.
  • Bundle: max 200KB First Load JS. Monitor with @next/bundle-analyzer.
  • Images: next/image with AVIF/WebP, lazy loading, placeholder blur.
  • AI responses: stream via Server-Sent Events, never wait for full response.
  • Semantic cache: check pgvector for similar previous queries before calling AI.
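Cursor-based pagination can be sketched as follows (an opaque base64url-encoded cursor is one common choice; the `Cursor` field names are illustrative):

```typescript
// Opaque cursor: encode the sort key of the last row, decode it on the next request.
// Avoids OFFSET scans — the next page is a range query on an indexed column.
type Cursor = { createdAt: string; id: string };

function encodeCursor(c: Cursor): string {
  return Buffer.from(JSON.stringify(c)).toString('base64url');
}

function decodeCursor(s: string): Cursor {
  return JSON.parse(Buffer.from(s, 'base64url').toString('utf8')) as Cursor;
}

// The decoded cursor then feeds a keyset WHERE clause, e.g.:
//   WHERE (created_at, id) < ($1, $2) ORDER BY created_at DESC, id DESC LIMIT 20
```

Including the row `id` as a tiebreaker keeps pages stable when multiple rows share a `created_at` timestamp.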

Logging — Structured, Searchable

  • Use pino for server-side logging — JSON structured output.
  • Every log line includes: { timestamp, level, service, traceId, userId?, orgId?, message, data? }
  • Log levels: ERROR (alerts), WARN (investigate), INFO (audit trail), DEBUG (dev only).
  • In production: ERROR + WARN → Sentry. INFO → log drain (Axiom or Datadog). DEBUG → off.
  • Request tracing: generate traceId in middleware, pass through entire request chain.
  • NEVER log: passwords, tokens, API keys, PII, full request bodies in production.
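The required log shape with redaction, as a standalone sketch (pino itself ships a `redact` option; this helper only illustrates the intended line format, and the redacted key list is an illustrative subset):

```typescript
// Build a structured log line matching the required shape, redacting banned keys.
const REDACTED_KEYS = new Set(['password', 'token', 'apiKey', 'email']);

function logLine(
  level: 'ERROR' | 'WARN' | 'INFO' | 'DEBUG',
  service: string,
  traceId: string,
  message: string,
  data: Record<string, unknown> = {},
): string {
  const safeData = Object.fromEntries(
    Object.entries(data).map(([k, v]) => [k, REDACTED_KEYS.has(k) ? '[REDACTED]' : v]),
  );
  return JSON.stringify({
    timestamp: new Date().toISOString(),
    level,
    service,
    traceId,
    message,
    data: safeData,
  });
}
```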

Testing — Coverage Gates

  • Unit tests (vitest): All services, all repositories, all utils. Target: 80% coverage.
  • Integration tests (vitest + Supabase local): API routes with real DB. Target: critical paths.
  • E2E tests (Playwright): Auth flow, meeting session, diagram upload, settings. Target: happy paths.
  • Test naming: describe('ServiceName') → it('should verb when condition')
  • No mocking Supabase in integration tests — use Supabase local Docker.
  • CI blocks merge if coverage drops below threshold.
  • AI outputs: snapshot tests with seeded prompts for regression detection.

Git Workflow — Trunk-Based Development

  • main branch: always deployable. Protected. Requires 1 approval + CI pass.
  • Feature branches: feat/AP-123-short-description (AP = ArchPilot ticket prefix).
  • Bug fix branches: fix/AP-456-short-description
  • Commits: conventional commits enforced (feat:, fix:, chore:, docs:).
  • PRs: squash merge only. PR title becomes commit message.
  • Max PR size: 400 lines changed. Larger PRs must be split.
  • No long-lived branches. Feature flags for WIP features in production.

API Design — RESTful Conventions

  • Base: /api/v1/ — always versioned.
  • Nouns for resources: /api/v1/projects, /api/v1/sessions
  • HTTP verbs: GET (read), POST (create), PATCH (partial update), DELETE (soft delete).
  • Nested resources: /api/v1/projects/:id/sessions (max 2 levels deep).
  • Filtering: query params ?status=active&sort=created_at&order=desc&limit=20&cursor=xxx
  • Response envelope: { data: T, meta?: { cursor, total, page } }
  • Errors: { error: { code: 'VALIDATION_ERROR', message: '...', details: [...] } }
  • All endpoints documented with OpenAPI 3.1 spec — auto-generated from Zod schemas.
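The envelope and error shapes above can be sketched as two small helpers (the `envelope` and `apiError` names are illustrative):

```typescript
// Success and error envelopes matching the API conventions above.
type Envelope<T> = { data: T; meta?: { cursor?: string; total?: number; page?: number } };
type ApiError = { error: { code: string; message: string; details?: unknown[] } };

function envelope<T>(data: T, meta?: Envelope<T>['meta']): Envelope<T> {
  return meta ? { data, meta } : { data };
}

function apiError(code: string, message: string, details?: unknown[]): ApiError {
  return { error: details ? { code, message, details } : { code, message } };
}
```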

File Naming Conventions

  • Components: PascalCase.tsx → ProjectCard.tsx, MeetingDashboard.tsx
  • Hooks: use-kebab-case.ts → use-realtime-suggestions.ts
  • Services: kebab-case.service.ts → project.service.ts
  • Repositories: kebab-case.repository.ts → project.repository.ts
  • Types: kebab-case.types.ts → project.types.ts
  • Utils: kebab-case.ts → format-date.ts, sanitize-html.ts
  • Tests: co-located → project.service.test.ts next to project.service.ts
  • Zod schemas: kebab-case.schema.ts → create-project.schema.ts

4. Monorepo Project Structure — Turborepo

One repository, five applications, shared packages. Every file has a home. If you don't know where to put it, it doesn't exist yet.

Why Monorepo?

Shared types between web app and Electron agent. Shared UI components. Single CI pipeline. Atomic cross-app changes. One dependency tree.

```
archpilot/                             # Root of monorepo
├── apps/
│   ├── web/                           # App 2: Next.js web application
│   │   ├── src/
│   │   │   ├── app/                   # Next.js App Router
│   │   │   │   ├── (auth)/            # Route group: login, signup, callback
│   │   │   │   │   ├── login/page.tsx
│   │   │   │   │   ├── signup/page.tsx
│   │   │   │   │   └── callback/route.ts   # Supabase auth callback handler
│   │   │   │   ├── (dashboard)/       # Route group: authenticated pages
│   │   │   │   │   ├── layout.tsx     # Sidebar + header layout
│   │   │   │   │   ├── page.tsx       # Dashboard home → /
│   │   │   │   │   ├── projects/
│   │   │   │   │   │   ├── page.tsx   # List all projects
│   │   │   │   │   │   └── [id]/
│   │   │   │   │   │       ├── page.tsx    # Project overview
│   │   │   │   │   │       ├── meetings/
│   │   │   │   │   │       │   ├── page.tsx          # List sessions
│   │   │   │   │   │       │   └── [sessionId]/
│   │   │   │   │   │       │       └── page.tsx      # Live meeting dashboard
│   │   │   │   │   │       ├── diagrams/
│   │   │   │   │   │       │   ├── page.tsx          # List diagrams
│   │   │   │   │   │       │   └── [diagramId]/
│   │   │   │   │   │       │       └── page.tsx      # Diagram editor (React Flow)
│   │   │   │   │   │       ├── code/
│   │   │   │   │   │       │   ├── page.tsx          # Connected repos
│   │   │   │   │   │       │   └── [repoId]/
│   │   │   │   │   │       │       └── page.tsx      # Code intelligence dashboard
│   │   │   │   │   │       ├── adrs/
│   │   │   │   │   │       │   ├── page.tsx          # ADR repository
│   │   │   │   │   │       │   └── [adrId]/
│   │   │   │   │   │       │       └── page.tsx      # ADR detail + quality score
│   │   │   │   │   │       └── decisions/
│   │   │   │   │   │           └── page.tsx          # Decision timeline
│   │   │   │   │   ├── settings/
│   │   │   │   │   │   ├── page.tsx              # General settings
│   │   │   │   │   │   ├── team/page.tsx         # Team management
│   │   │   │   │   │   ├── billing/page.tsx      # Stripe billing portal
│   │   │   │   │   │   └── integrations/page.tsx # GitHub, Slack, Jira
│   │   │   │   │   └── admin/                    # Enterprise admin console
│   │   │   │   │       ├── page.tsx
│   │   │   │   │       ├── compliance/page.tsx
│   │   │   │   │       ├── audit/page.tsx
│   │   │   │   │       └── analytics/page.tsx
│   │   │   │   ├── api/               # API Routes (Next.js)
│   │   │   │   │   └── v1/
│   │   │   │   │       ├── projects/     # CRUD
│   │   │   │   │       ├── sessions/     # Meeting sessions
│   │   │   │   │       ├── diagrams/     # Diagram CRUD + analyze
│   │   │   │   │       ├── code/         # Code review endpoints
│   │   │   │   │       ├── adrs/         # ADR CRUD + quality score
│   │   │   │   │       ├── suggestions/  # Suggestion CRUD + feedback
│   │   │   │   │       ├── ai/           # AI proxy endpoints
│   │   │   │   │       ├── webhooks/     # GitHub, Stripe, Slack webhooks
│   │   │   │   │       ├── admin/        # Enterprise admin APIs
│   │   │   │   │       └── ws/           # WebSocket upgrade for meeting audio
│   │   │   │   ├── layout.tsx         # Root layout — providers, fonts, metadata
│   │   │   │   ├── error.tsx          # Global error boundary
│   │   │   │   ├── not-found.tsx      # Custom 404 page
│   │   │   │   └── loading.tsx        # Global loading skeleton
│   │   │   ├── components/            # App-specific components
│   │   │   │   ├── meeting/           # MeetingDashboard, TranscriptView, SuggestionCard
│   │   │   │   ├── diagram/           # DiagramCanvas, NodeInspector, AntiPatternBadge
│   │   │   │   ├── code/              # CodeViewer, DriftAlert, PRReviewPanel
│   │   │   │   ├── adr/               # AdrEditor, QualityScore, ConflictView
│   │   │   │   ├── dashboard/         # StatsCard, ActivityFeed, QuickActions
│   │   │   │   └── admin/             # CompliancePanel, AuditLog, TeamManager
│   │   │   ├── hooks/                 # Custom React hooks
│   │   │   │   ├── use-realtime-suggestions.ts
│   │   │   │   ├── use-supabase.ts
│   │   │   │   ├── use-container.ts
│   │   │   │   ├── use-meeting-session.ts
│   │   │   │   └── use-permissions.ts
│   │   │   ├── lib/                   # Core business logic
│   │   │   │   ├── repositories/      # BaseRepository + all entity repos
│   │   │   │   ├── services/          # All business services
│   │   │   │   ├── ai/                # ModelRouter, PromptRegistry, providers
│   │   │   │   ├── engines/           # Intelligence engine implementations
│   │   │   │   │   ├── meeting/       # Audio processing, context assembly, trigger detection
│   │   │   │   │   ├── diagram/       # Parsers, anti-pattern rules, graph builder
│   │   │   │   │   ├── code/          # AST analysis, drift detection, pattern matching
│   │   │   │   │   ├── adr/           # Quality scoring, conflict detection, generation
│   │   │   │   │   └── bestpractices/ # Knowledge base, contextual recommendation
│   │   │   │   ├── resilience/        # CircuitBreaker, retry, timeout
│   │   │   │   ├── events/            # EventBus, event types
│   │   │   │   ├── middleware/        # Auth, rate-limit, validation, audit
│   │   │   │   ├── security/          # PII filter, sanitizer, encryption
│   │   │   │   ├── container.ts       # DI container
│   │   │   │   └── supabase.ts        # Client/server Supabase client factories
│   │   │   ├── types/                 # TypeScript types
│   │   │   │   ├── database.types.ts  # Auto-generated by supabase gen types
│   │   │   │   └── index.ts           # Re-exports all types
│   │   │   └── schemas/               # Zod validation schemas
│   │   │       ├── project.schema.ts
│   │   │       ├── session.schema.ts
│   │   │       ├── diagram.schema.ts
│   │   │       └── adr.schema.ts
│   │   ├── next.config.ts
│   │   ├── tailwind.config.ts
│   │   ├── tsconfig.json
│   │   └── package.json
│   │
│   ├── agent/                         # App 1: Electron desktop agent
│   │   ├── src/
│   │   │   ├── main/                  # Electron main process
│   │   │   │   ├── index.ts           # App entry, tray, window management
│   │   │   │   ├── audio-capture.ts   # desktopCapturer → MediaRecorder → chunks
│   │   │   │   ├── audio-streamer.ts  # WebSocket client → sends audio to backend
│   │   │   │   ├── local-buffer.ts    # SQLite buffer for offline audio
│   │   │   │   ├── session-manager.ts # Start/stop/pause session lifecycle
│   │   │   │   ├── auth-bridge.ts     # OAuth flow → stores token in electron-store
│   │   │   │   └── auto-updater.ts    # electron-updater for auto updates
│   │   │   ├── renderer/              # Electron renderer (React)
│   │   │   │   ├── App.tsx            # Tray popup UI
│   │   │   │   ├── MeetingStatus.tsx  # Recording indicator, duration
│   │   │   │   ├── QuickSettings.tsx  # Audio source, quality, server URL
│   │   │   │   └── MiniSuggestions.tsx # Last 3 suggestions floating overlay
│   │   │   └── preload/
│   │   │       └── index.ts           # Context bridge (IPC)
│   │   ├── electron-builder.yml
│   │   ├── tsconfig.json
│   │   └── package.json
│   │
│   ├── extension/                     # App 4: Chrome extension (Plasmo)
│   │   ├── src/
│   │   │   ├── background.ts          # Service worker — GitHub PR detection
│   │   │   ├── contents/
│   │   │   │   └── github-pr.tsx      # Content script — injects review panel on PR pages
│   │   │   ├── popup/
│   │   │   │   └── index.tsx          # Extension popup — login, status, settings
│   │   │   └── sidepanel/
│   │   │       └── index.tsx          # Side panel — full review interface
│   │   ├── plasmo.config.ts
│   │   └── package.json
│   │
│   └── vscode/                        # App 5: VS Code extension (optional, Phase 5)
│       ├── src/
│       │   ├── extension.ts           # Activation, commands, status bar
│       │   ├── inline-suggestion.ts   # Inline architecture suggestions
│       │   └── webview/               # Sidebar webview panel
│       └── package.json
│
├── packages/                          # Shared packages across apps
│   ├── shared-types/                  # TypeScript types shared between all apps
│   │   ├── src/index.ts
│   │   ├── src/database.types.ts      # Auto-generated Supabase types
│   │   ├── src/api.types.ts           # API request/response types
│   │   ├── src/events.types.ts        # Event bus event types
│   │   └── package.json
│   ├── ui/                            # Shared UI components (shadcn/ui base)
│   │   ├── src/components/            # Button, Card, Dialog, etc.
│   │   ├── tailwind.config.ts         # Shared Tailwind preset
│   │   └── package.json
│   └── eslint-config/                 # Shared ESLint configuration
│       └── index.js
│
├── supabase/                          # Supabase project (App 3)
│   ├── migrations/                    # SQL migration files (versioned)
│   │   ├── 00001_initial_schema.sql
│   │   ├── 00002_rls_policies.sql
│   │   ├── 00003_pgvector_setup.sql
│   │   ├── 00004_prompt_registry_seed.sql
│   │   └── 00005_audit_log.sql
│   ├── functions/                     # Supabase Edge Functions (Deno)
│   │   ├── webhook-github/            # GitHub webhook handler
│   │   ├── webhook-stripe/            # Stripe webhook handler
│   │   ├── cron-adr-health/           # Weekly ADR health check
│   │   ├── cron-usage-rollup/         # Daily usage metrics aggregation
│   │   └── scim-handler/              # SCIM provisioning for enterprise SSO
│   ├── config.toml                    # Supabase local dev config
│   └── seed.sql                       # Dev seed data
│
├── .github/
│   └── workflows/
│       ├── ci.yml                     # Lint → Type → Test → Build on every PR
│       ├── deploy-web.yml             # Deploy web to Hostinger VPS on merge to main
│       ├── deploy-agent.yml           # Build + publish Electron on tag
│       └── db-migrate.yml             # Run Supabase migrations on merge
│
├── turbo.json                         # Turborepo pipeline config
├── package.json                       # Root package.json (workspaces)
├── pnpm-workspace.yaml                # pnpm workspace definition
├── .env.example                       # Template — NEVER commit .env
└── .gitignore
```

CRITICAL: .env.example contents (NEVER real values)

NEXT_PUBLIC_SUPABASE_URL=https://xxx.supabase.co
NEXT_PUBLIC_SUPABASE_ANON_KEY=xxx
SUPABASE_SERVICE_ROLE_KEY=xxx  # server-side only, NEVER expose to client
ANTHROPIC_API_KEY=xxx
OPENAI_API_KEY=xxx
DEEPGRAM_API_KEY=xxx
STRIPE_SECRET_KEY=xxx
STRIPE_WEBHOOK_SECRET=xxx
SENTRY_DSN=xxx
UPSTASH_REDIS_URL=xxx
POSTHOG_KEY=xxx
In production: ALL of these go in Supabase Vault or server env vars, not .env files.
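A fail-fast check at boot catches a missing key before it surfaces as a confusing runtime error deep in a request handler. A minimal, dependency-free sketch (the real app could express the same thing as a zod schema); the variable names mirror .env.example above:

```typescript
// Sketch: verify required env vars at startup. The list mirrors .env.example;
// extend it for Stripe, Sentry, Upstash, PostHog as those integrations land.
const REQUIRED_VARS = [
  "NEXT_PUBLIC_SUPABASE_URL",
  "NEXT_PUBLIC_SUPABASE_ANON_KEY",
  "SUPABASE_SERVICE_ROLE_KEY",
  "ANTHROPIC_API_KEY",
  "DEEPGRAM_API_KEY",
] as const;

export function missingEnvVars(source: Record<string, string | undefined>): string[] {
  // A var counts as missing when absent or blank.
  return REQUIRED_VARS.filter((k) => !source[k] || source[k]!.trim() === "");
}

export function assertEnv(source: Record<string, string | undefined>): void {
  const missing = missingEnvVars(source);
  if (missing.length > 0) {
    throw new Error(`Missing required env vars: ${missing.join(", ")}`);
  }
}
```

Call `assertEnv(process.env)` once, at module load in the server entry point, so a misconfigured deploy fails immediately.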

5. Database Schema — Complete Supabase PostgreSQL

Every table, every column, every type, every index, every RLS policy. This is the exact SQL you run. Nothing is implied.

RULE: Migrations, Not Manual Changes

NEVER modify the database through Supabase Dashboard in production. All changes are SQL migration files in supabase/migrations/. Run via supabase db push or CI pipeline. Every migration is version-controlled and reversible.

Migration 001: Core Schema (18 Tables) Foundation
-- supabase/migrations/00001_initial_schema.sql
-- Enable required extensions
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
CREATE EXTENSION IF NOT EXISTS "vector";   -- pgvector for embeddings
CREATE EXTENSION IF NOT EXISTS "pg_trgm";  -- trigram for fuzzy text search
CREATE EXTENSION IF NOT EXISTS "pg_cron";  -- scheduled jobs

-- =============================================
-- ENUM TYPES
-- =============================================
CREATE TYPE user_role AS ENUM (
  'owner', 'admin', 'architect', 'developer', 'viewer', 'billing'
);
CREATE TYPE session_status AS ENUM (
  'waiting', 'recording', 'paused', 'processing', 'completed', 'failed'
);
CREATE TYPE suggestion_type AS ENUM (
  'best_practice', 'anti_pattern', 'risk_alert', 'alternative',
  'cost_optimization', 'security', 'scalability', 'compliance'
);
CREATE TYPE suggestion_status AS ENUM (
  'pending', 'accepted', 'rejected', 'deferred'
);
CREATE TYPE diagram_format AS ENUM (
  'png', 'jpg', 'svg', 'pdf', 'drawio', 'excalidraw', 'mermaid', 'terraform'
);
CREATE TYPE adr_status AS ENUM (
  'draft', 'proposed', 'accepted', 'superseded', 'deprecated', 'rejected'
);
CREATE TYPE severity_level AS ENUM (
  'critical', 'high', 'medium', 'low', 'info'
);
CREATE TYPE subscription_tier AS ENUM (
  'free', 'starter', 'professional', 'enterprise'
);

-- =============================================
-- TABLE: organizations
-- =============================================
CREATE TABLE organizations (
  id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
  name TEXT NOT NULL,
  slug TEXT NOT NULL UNIQUE,
  logo_url TEXT,
  domain TEXT,                           -- for SSO domain verification
  tier subscription_tier NOT NULL DEFAULT 'free',
  stripe_customer_id TEXT,
  stripe_subscription_id TEXT,
  settings JSONB NOT NULL DEFAULT '{}',  -- org-level config
  limits JSONB NOT NULL DEFAULT '{"seats": 5, "projects": 3, "ai_calls_per_month": 1000}',
  created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
  updated_at TIMESTAMPTZ NOT NULL DEFAULT now()
);

-- =============================================
-- TABLE: profiles (extends Supabase auth.users)
-- =============================================
CREATE TABLE profiles (
  id UUID PRIMARY KEY REFERENCES auth.users(id) ON DELETE CASCADE,
  full_name TEXT,
  avatar_url TEXT,
  job_title TEXT,
  preferences JSONB NOT NULL DEFAULT '{"theme": "dark", "notifications": true}',
  onboarded BOOLEAN NOT NULL DEFAULT false,
  created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
  updated_at TIMESTAMPTZ NOT NULL DEFAULT now()
);

-- =============================================
-- TABLE: org_members (many-to-many: users ↔ orgs)
-- =============================================
CREATE TABLE org_members (
  id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
  org_id UUID NOT NULL REFERENCES organizations(id) ON DELETE CASCADE,
  user_id UUID NOT NULL REFERENCES profiles(id) ON DELETE CASCADE,
  role user_role NOT NULL DEFAULT 'developer',
  invited_by UUID REFERENCES profiles(id),
  joined_at TIMESTAMPTZ NOT NULL DEFAULT now(),
  UNIQUE(org_id, user_id)
);

-- =============================================
-- TABLE: projects
-- =============================================
CREATE TABLE projects (
  id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
  org_id UUID NOT NULL REFERENCES organizations(id) ON DELETE CASCADE,
  name TEXT NOT NULL,
  description TEXT,
  tech_stack TEXT[],                     -- ['next.js', 'postgresql', 'aws']
  repo_url TEXT,
  settings JSONB NOT NULL DEFAULT '{}',
  created_by UUID NOT NULL REFERENCES profiles(id),
  created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
  updated_at TIMESTAMPTZ NOT NULL DEFAULT now(),
  deleted_at TIMESTAMPTZ                 -- soft delete
);
CREATE INDEX idx_projects_org ON projects(org_id) WHERE deleted_at IS NULL;

-- =============================================
-- TABLE: sessions (meeting sessions)
-- =============================================
CREATE TABLE sessions (
  id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
  project_id UUID NOT NULL REFERENCES projects(id) ON DELETE CASCADE,
  title TEXT,                            -- auto-generated or user-set
  status session_status NOT NULL DEFAULT 'waiting',
  started_at TIMESTAMPTZ,
  ended_at TIMESTAMPTZ,
  duration_ms INTEGER,
  participants JSONB,                    -- [{ name, speakerId, role }]
  context JSONB NOT NULL DEFAULT '{}',   -- accumulated context for AI
  summary TEXT,                          -- AI-generated post-meeting summary
  created_by UUID NOT NULL REFERENCES profiles(id),
  created_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
CREATE INDEX idx_sessions_project ON sessions(project_id);
CREATE INDEX idx_sessions_status ON sessions(status);

-- =============================================
-- TABLE: transcripts
-- =============================================
CREATE TABLE transcripts (
  id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
  session_id UUID NOT NULL REFERENCES sessions(id) ON DELETE CASCADE,
  speaker_id TEXT,                       -- diarization speaker label
  speaker_name TEXT,
  text TEXT NOT NULL,
  confidence REAL,                       -- Deepgram confidence score
  start_ms INTEGER NOT NULL,             -- offset from session start
  end_ms INTEGER NOT NULL,
  is_final BOOLEAN NOT NULL DEFAULT true,
  created_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
CREATE INDEX idx_transcripts_session ON transcripts(session_id, start_ms);

-- =============================================
-- TABLE: decisions
-- =============================================
CREATE TABLE decisions (
  id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
  project_id UUID NOT NULL REFERENCES projects(id) ON DELETE CASCADE,
  session_id UUID REFERENCES sessions(id),   -- null if manual entry
  title TEXT NOT NULL,
  description TEXT NOT NULL,
  reasoning TEXT,
  alternatives JSONB,                    -- [{ name, pros, cons, reason_rejected }]
  consequences JSONB,                    -- [{ description, impact, severity }]
  tags TEXT[],
  detected_at_ms INTEGER,                -- transcript offset where detected
  confidence REAL,                       -- AI confidence in detection
  status TEXT NOT NULL DEFAULT 'detected',   -- detected, confirmed, revoked
  created_by UUID REFERENCES profiles(id),
  created_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
CREATE INDEX idx_decisions_project ON decisions(project_id);

-- =============================================
-- TABLE: suggestions
-- =============================================
CREATE TABLE suggestions (
  id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
  session_id UUID REFERENCES sessions(id),
  project_id UUID NOT NULL REFERENCES projects(id) ON DELETE CASCADE,
  source TEXT NOT NULL,                  -- 'meeting', 'diagram', 'code', 'adr', 'bestpractice'
  source_ref TEXT,                       -- diagram_id, pr_number, adr_id, etc.
  type suggestion_type NOT NULL,
  severity severity_level NOT NULL DEFAULT 'medium',
  title TEXT NOT NULL,
  body TEXT NOT NULL,                    -- markdown content
  pros TEXT[],
  cons TEXT[],
  risks TEXT[],
  confidence REAL NOT NULL,              -- 0.0 to 1.0
  cost_impact TEXT,                      -- '+$500/mo', '-$200/mo'
  effort TEXT,                           -- '2 hours', '1 sprint'
  reversibility TEXT,                    -- 'easy', 'moderate', 'hard', 'irreversible'
  status suggestion_status NOT NULL DEFAULT 'pending',
  feedback TEXT,                         -- user rejection reason
  model_used TEXT,                       -- 'claude-sonnet-4-5', etc.
  tokens_used INTEGER,
  latency_ms INTEGER,
  created_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
CREATE INDEX idx_suggestions_session ON suggestions(session_id);
CREATE INDEX idx_suggestions_project ON suggestions(project_id);
CREATE INDEX idx_suggestions_type ON suggestions(type, severity);

-- =============================================
-- TABLE: diagrams
-- =============================================
CREATE TABLE diagrams (
  id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
  project_id UUID NOT NULL REFERENCES projects(id) ON DELETE CASCADE,
  name TEXT NOT NULL,
  format diagram_format NOT NULL,
  file_path TEXT NOT NULL,               -- Supabase Storage path
  file_size INTEGER,
  graph_data JSONB,                      -- parsed graph: { nodes: [], edges: [] }
  analysis JSONB,                        -- anti-pattern results, scores
  version INTEGER NOT NULL DEFAULT 1,
  parent_id UUID REFERENCES diagrams(id),    -- version chain
  uploaded_by UUID NOT NULL REFERENCES profiles(id),
  created_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
CREATE INDEX idx_diagrams_project ON diagrams(project_id);

-- =============================================
-- TABLE: adrs (Architecture Decision Records)
-- =============================================
CREATE TABLE adrs (
  id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
  project_id UUID NOT NULL REFERENCES projects(id) ON DELETE CASCADE,
  session_id UUID REFERENCES sessions(id),   -- null if manual
  decision_id UUID REFERENCES decisions(id),
  number INTEGER NOT NULL,               -- ADR-001, ADR-002, etc.
  title TEXT NOT NULL,
  status adr_status NOT NULL DEFAULT 'draft',
  context TEXT NOT NULL,                 -- Why is this needed?
  decision TEXT NOT NULL,                -- What was decided?
  consequences TEXT NOT NULL,            -- What are the consequences?
  alternatives JSONB,                    -- [{ name, pros, cons }]
  quality_score JSONB,                   -- { overall: 82, criteria: { ... } }
  tags TEXT[],
  review_date DATE,                      -- when to revisit this ADR
  superseded_by UUID REFERENCES adrs(id),
  created_by UUID NOT NULL REFERENCES profiles(id),
  created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
  updated_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
CREATE INDEX idx_adrs_project ON adrs(project_id);
CREATE UNIQUE INDEX idx_adrs_number ON adrs(project_id, number);

-- =============================================
-- TABLE: code_repos (connected repositories)
-- =============================================
CREATE TABLE code_repos (
  id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
  project_id UUID NOT NULL REFERENCES projects(id) ON DELETE CASCADE,
  provider TEXT NOT NULL,                -- 'github', 'gitlab', 'bitbucket'
  owner TEXT NOT NULL,
  repo_name TEXT NOT NULL,
  default_branch TEXT NOT NULL DEFAULT 'main',
  webhook_id TEXT,
  access_token_ref TEXT,                 -- reference to Vault secret
  last_scan_at TIMESTAMPTZ,
  scan_results JSONB,                    -- latest architecture scan
  connected_by UUID NOT NULL REFERENCES profiles(id),
  created_at TIMESTAMPTZ NOT NULL DEFAULT now()
);

-- =============================================
-- TABLE: code_reviews (PR-level reviews)
-- =============================================
CREATE TABLE code_reviews (
  id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
  repo_id UUID NOT NULL REFERENCES code_repos(id) ON DELETE CASCADE,
  pr_number INTEGER NOT NULL,
  pr_title TEXT,
  pr_url TEXT,
  findings JSONB NOT NULL,               -- [{ file, line, severity, message, pattern }]
  drift_items JSONB,                     -- decisions that conflict with this PR
  overall_score INTEGER,                 -- 0-100 architecture health score
  reviewed_at TIMESTAMPTZ NOT NULL DEFAULT now()
);

-- =============================================
-- TABLE: prompt_registry
-- =============================================
CREATE TABLE prompt_registry (
  id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
  slug TEXT NOT NULL,                    -- 'meeting.suggestion', 'diagram.analyze', etc.
  version INTEGER NOT NULL DEFAULT 1,
  model TEXT NOT NULL,                   -- 'claude-sonnet-4-5', etc.
  system_prompt TEXT NOT NULL,
  user_template TEXT NOT NULL,           -- template with {{variables}}
  temperature REAL NOT NULL DEFAULT 0.3,
  max_tokens INTEGER NOT NULL DEFAULT 2048,
  is_active BOOLEAN NOT NULL DEFAULT true,
  performance_score REAL,                -- avg feedback score
  total_calls INTEGER NOT NULL DEFAULT 0,
  avg_latency_ms INTEGER,
  created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
  UNIQUE(slug, version)
);

-- =============================================
-- TABLE: embeddings (pgvector semantic search)
-- =============================================
CREATE TABLE embeddings (
  id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
  org_id UUID NOT NULL REFERENCES organizations(id) ON DELETE CASCADE,
  source_type TEXT NOT NULL,             -- 'transcript', 'adr', 'diagram', 'code'
  source_id UUID NOT NULL,
  chunk_index INTEGER NOT NULL DEFAULT 0,
  content TEXT NOT NULL,                 -- the text that was embedded
  embedding vector(1536) NOT NULL,       -- OpenAI text-embedding-3-large, requested with dimensions=1536 (ivfflat indexes cap at 2000 dims)
  metadata JSONB,
  created_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
CREATE INDEX idx_embeddings_vector ON embeddings
  USING ivfflat (embedding vector_cosine_ops) WITH (lists = 100);
CREATE INDEX idx_embeddings_source ON embeddings(org_id, source_type);

-- =============================================
-- TABLE: audit_log
-- =============================================
CREATE TABLE audit_log (
  id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
  org_id UUID REFERENCES organizations(id),
  user_id UUID REFERENCES profiles(id),
  action TEXT NOT NULL,                  -- 'project.created', 'session.started', etc.
  resource_type TEXT,                    -- 'project', 'session', 'diagram', etc.
  resource_id UUID,
  details JSONB,                         -- additional context
  ip_address INET,
  user_agent TEXT,
  created_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
CREATE INDEX idx_audit_org ON audit_log(org_id, created_at DESC);
CREATE INDEX idx_audit_user ON audit_log(user_id, created_at DESC);

-- =============================================
-- TABLE: usage_metrics
-- =============================================
CREATE TABLE usage_metrics (
  id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
  org_id UUID NOT NULL REFERENCES organizations(id) ON DELETE CASCADE,
  period DATE NOT NULL,                  -- daily rollup date
  ai_calls INTEGER NOT NULL DEFAULT 0,
  tokens_in INTEGER NOT NULL DEFAULT 0,
  tokens_out INTEGER NOT NULL DEFAULT 0,
  cost_usd NUMERIC(10,4) NOT NULL DEFAULT 0,
  sessions_count INTEGER NOT NULL DEFAULT 0,
  suggestions_count INTEGER NOT NULL DEFAULT 0,
  diagrams_analyzed INTEGER NOT NULL DEFAULT 0,
  prs_reviewed INTEGER NOT NULL DEFAULT 0,
  created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
  UNIQUE(org_id, period)
);

-- =============================================
-- TABLE: integrations (OAuth tokens for GitHub, Slack, etc.)
-- =============================================
CREATE TABLE integrations (
  id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
  org_id UUID NOT NULL REFERENCES organizations(id) ON DELETE CASCADE,
  provider TEXT NOT NULL,                -- 'github', 'slack', 'jira', 'confluence'
  access_token_ref TEXT NOT NULL,        -- reference to Vault secret ID
  refresh_token_ref TEXT,
  scopes TEXT[],
  metadata JSONB,                        -- provider-specific data
  connected_by UUID NOT NULL REFERENCES profiles(id),
  expires_at TIMESTAMPTZ,
  created_at TIMESTAMPTZ NOT NULL DEFAULT now()
);

-- =============================================
-- TABLE: feedback (user feedback on suggestions)
-- =============================================
CREATE TABLE feedback (
  id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
  suggestion_id UUID NOT NULL REFERENCES suggestions(id) ON DELETE CASCADE,
  user_id UUID NOT NULL REFERENCES profiles(id),
  rating INTEGER NOT NULL CHECK (rating >= 1 AND rating <= 5),
  comment TEXT,
  created_at TIMESTAMPTZ NOT NULL DEFAULT now()
);

-- =============================================
-- UPDATED_AT TRIGGER (applies to all tables with updated_at)
-- =============================================
CREATE OR REPLACE FUNCTION update_updated_at()
RETURNS TRIGGER AS $$
BEGIN
  NEW.updated_at = now();
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER set_updated_at BEFORE UPDATE ON organizations
  FOR EACH ROW EXECUTE FUNCTION update_updated_at();
CREATE TRIGGER set_updated_at BEFORE UPDATE ON profiles
  FOR EACH ROW EXECUTE FUNCTION update_updated_at();
CREATE TRIGGER set_updated_at BEFORE UPDATE ON projects
  FOR EACH ROW EXECUTE FUNCTION update_updated_at();
CREATE TRIGGER set_updated_at BEFORE UPDATE ON adrs
  FOR EACH ROW EXECUTE FUNCTION update_updated_at();
Migration 002: Row-Level Security (RLS) Policies Security
-- supabase/migrations/00002_rls_policies.sql
-- ENABLE RLS ON EVERY TABLE
ALTER TABLE organizations ENABLE ROW LEVEL SECURITY;
ALTER TABLE profiles ENABLE ROW LEVEL SECURITY;
ALTER TABLE org_members ENABLE ROW LEVEL SECURITY;
ALTER TABLE projects ENABLE ROW LEVEL SECURITY;
ALTER TABLE sessions ENABLE ROW LEVEL SECURITY;
ALTER TABLE transcripts ENABLE ROW LEVEL SECURITY;
ALTER TABLE decisions ENABLE ROW LEVEL SECURITY;
ALTER TABLE suggestions ENABLE ROW LEVEL SECURITY;
ALTER TABLE diagrams ENABLE ROW LEVEL SECURITY;
ALTER TABLE adrs ENABLE ROW LEVEL SECURITY;
ALTER TABLE code_repos ENABLE ROW LEVEL SECURITY;
ALTER TABLE code_reviews ENABLE ROW LEVEL SECURITY;
ALTER TABLE audit_log ENABLE ROW LEVEL SECURITY;
ALTER TABLE usage_metrics ENABLE ROW LEVEL SECURITY;
ALTER TABLE embeddings ENABLE ROW LEVEL SECURITY;
ALTER TABLE feedback ENABLE ROW LEVEL SECURITY;

-- HELPER: Get user's org IDs
-- (SECURITY DEFINER so the function itself bypasses RLS on org_members)
CREATE OR REPLACE FUNCTION get_user_org_ids()
RETURNS UUID[] AS $$
  SELECT array_agg(org_id) FROM org_members WHERE user_id = auth.uid()
$$ LANGUAGE sql SECURITY DEFINER STABLE;

-- ORG_MEMBERS: members can read the membership rows of their own orgs.
-- Without a SELECT policy the table is RLS-enabled but unreadable by clients,
-- which would break any client-side role lookup.
CREATE POLICY "org_members_select" ON org_members
  FOR SELECT USING (org_id = ANY(get_user_org_ids()));

-- PROFILES: users see their own profile
CREATE POLICY "profiles_select_own" ON profiles
  FOR SELECT USING (id = auth.uid());
CREATE POLICY "profiles_update_own" ON profiles
  FOR UPDATE USING (id = auth.uid());

-- ORGANIZATIONS: members can view their orgs
CREATE POLICY "orgs_select_member" ON organizations
  FOR SELECT USING (id = ANY(get_user_org_ids()));

-- PROJECTS: org members can view projects in their org
CREATE POLICY "projects_select" ON projects
  FOR SELECT USING (org_id = ANY(get_user_org_ids()) AND deleted_at IS NULL);
CREATE POLICY "projects_insert" ON projects
  FOR INSERT WITH CHECK (
    org_id = ANY(get_user_org_ids())
    AND EXISTS (
      SELECT 1 FROM org_members
      WHERE org_id = projects.org_id
        AND user_id = auth.uid()
        AND role IN ('owner', 'admin', 'architect')
    )
  );

-- SESSIONS: project members can view sessions
CREATE POLICY "sessions_select" ON sessions
  FOR SELECT USING (
    EXISTS (
      SELECT 1 FROM projects p
      WHERE p.id = sessions.project_id
        AND p.org_id = ANY(get_user_org_ids())
    )
  );

-- Same pattern for: transcripts, decisions, suggestions, diagrams, adrs.
-- All check that the parent project belongs to the user's org.

-- AUDIT LOG: only admins/owners can view
CREATE POLICY "audit_select_admin" ON audit_log
  FOR SELECT USING (
    EXISTS (
      SELECT 1 FROM org_members
      WHERE org_id = audit_log.org_id
        AND user_id = auth.uid()
        AND role IN ('owner', 'admin')
    )
  );

6. API Design — Complete Endpoint Map

Every endpoint, every method, every input/output schema. All routes go through the middleware chain: Auth → RateLimit → Validate → Handler → Audit.

Base URL

https://app.archpilot.dev/api/v1 — all endpoints are versioned. Breaking changes = new version.

Projects API CRUD
Method | Path | Input | Output | Auth | Rate
GET | /projects | ?org_id, ?cursor, ?limit | { data: Project[], meta: Pagination } | member | 100/min
POST | /projects | { name, description, tech_stack, org_id } | { data: Project } | admin+ | 20/min
GET | /projects/:id | - | { data: Project } | member | 100/min
PATCH | /projects/:id | { name?, description?, settings? } | { data: Project } | admin+ | 20/min
DELETE | /projects/:id | - | { success: true } | owner | 5/min
GET | /projects/:id/stats | ?range=7d/30d/90d | { sessions, decisions, suggestions, diagrams } | member | 100/min
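The `?cursor` parameter in the listings above implies opaque cursor pagination. One common sketch, assumed rather than prescribed by this spec: encode the last row's sort key as a base64url token so clients cannot construct or tamper with offsets.

```typescript
// Hypothetical cursor helpers: the (createdAt, id) pair and base64url encoding
// are illustrative choices, not dictated by the API spec.
interface Cursor {
  createdAt: string; // ISO timestamp of the last row on the page
  id: string;        // tiebreaker for identical timestamps
}

export function encodeCursor(c: Cursor): string {
  return Buffer.from(JSON.stringify(c)).toString("base64url");
}

export function decodeCursor(token: string): Cursor | null {
  try {
    const parsed = JSON.parse(Buffer.from(token, "base64url").toString("utf8"));
    if (typeof parsed.createdAt !== "string" || typeof parsed.id !== "string") return null;
    return parsed;
  } catch {
    return null; // malformed cursor → treat as first page
  }
}
```

The handler then filters with `WHERE (created_at, id) < (cursor.createdAt, cursor.id)` (keyset pagination), which stays stable as rows are inserted, unlike OFFSET.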
Sessions API (Meeting Intelligence) Real-time
Method | Path | Input | Output | Notes
POST | /sessions | { project_id, title? } | { data: Session, ws_url: string } | Returns WebSocket URL for audio streaming
GET | /sessions/:id | - | { data: Session } | -
PATCH | /sessions/:id | { status: 'paused'|'recording' } | { data: Session } | Pause/resume
POST | /sessions/:id/end | - | { data: Session, summary } | Triggers post-meeting summary generation
GET | /sessions/:id/transcript | ?cursor, ?limit | { data: Transcript[] } | Paginated transcript chunks
GET | /sessions/:id/suggestions | ?type, ?severity, ?status | { data: Suggestion[] } | Filter by type/severity
GET | /sessions/:id/decisions | - | { data: Decision[] } | Decisions detected during session
WS | /ws/sessions/:id/audio | Binary audio chunks (16 kHz Opus) | Server events: transcript, suggestion | WebSocket for Electron agent
Diagrams API (Diagram Intelligence) Analysis
Method | Path | Input | Output | Notes
POST | /diagrams/upload | FormData: { file, project_id, name } | { data: Diagram } | Max 25MB, triggers async analysis
GET | /diagrams/:id | - | { data: Diagram, graph, analysis } | Includes parsed graph + analysis results
POST | /diagrams/:id/analyze | { rules?: string[] } | { data: AnalysisResult } | Re-run analysis with custom rule set
POST | /diagrams/:id/suggest-replacement | { node_id, context? } | { alternatives: Alternative[] } | Each has pros/cons/risk/cost
POST | /diagrams/:id/apply-replacement | { node_id, replacement_id } | { data: Diagram (new version) } | Creates new version with rewired connections
GET | /diagrams/:id/compare/:otherId | - | { comparison: Comparison8D } | 8-dimension comparison
GET | /diagrams/:id/versions | - | { data: Diagram[] } | Version history chain
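The structural slice of the version comparison above (which nodes and edges changed between two stored `graph_data` snapshots) can be sketched as a set diff. The `id` fields on nodes and edges are assumptions based on the `{ nodes: [], edges: [] }` shape in the diagrams table; the real `Comparison8D` adds seven more dimensions on top of this.

```typescript
// Sketch: structural diff between two diagram versions.
interface Graph {
  nodes: { id: string; label?: string }[];
  edges: { id: string; source: string; target: string }[];
}

export function diffGraphs(a: Graph, b: Graph) {
  const ids = (xs: { id: string }[]) => new Set(xs.map((x) => x.id));
  const aNodes = ids(a.nodes), bNodes = ids(b.nodes);
  const aEdges = ids(a.edges), bEdges = ids(b.edges);
  return {
    addedNodes: b.nodes.filter((n) => !aNodes.has(n.id)),     // new in version b
    removedNodes: a.nodes.filter((n) => !bNodes.has(n.id)),   // gone from version a
    addedEdges: b.edges.filter((e) => !aEdges.has(e.id)),
    removedEdges: a.edges.filter((e) => !bEdges.has(e.id)),
  };
}
```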
Code Intelligence API Code Review
Method | Path | Input | Output | Notes
POST | /code/repos/connect | { provider, owner, repo, project_id } | { data: CodeRepo } | Starts OAuth, installs webhook
GET | /code/repos/:id | - | { data: CodeRepo, scan_results } | Latest architecture scan
POST | /code/repos/:id/scan | { branch? } | { job_id } | Async — triggers full repo scan
GET | /code/repos/:id/scan/:jobId | - | { status, results? } | Poll for scan completion
GET | /code/repos/:id/reviews | ?cursor | { data: CodeReview[] } | All PR reviews for repo
GET | /code/repos/:id/drift | - | { drift_items: DriftItem[] } | All decision-code mismatches
POST | /webhooks/github | GitHub webhook payload | { received: true } | Edge Function — handles PR events
ADR API ADR Intelligence
Method | Path | Input | Output | Notes
GET | /adrs | ?project_id, ?status, ?cursor | { data: ADR[] } | ADR repository listing
POST | /adrs | { project_id, title, context, decision, consequences } | { data: ADR } | Manual ADR creation
POST | /adrs/generate | { session_id } or { decision_id } | { data: ADR (draft) } | AI-generated from meeting/decision
GET | /adrs/:id | - | { data: ADR, quality_score, conflicts } | Full detail + quality + conflicts
PATCH | /adrs/:id | Partial ADR fields | { data: ADR } | -
POST | /adrs/:id/score | - | { quality_score: QualityScore } | Re-run 12-criteria scoring
GET | /adrs/:id/conflicts | - | { conflicts: ConflictItem[] } | Cross-reference with other ADRs
POST | /adrs/:id/supersede | { new_adr_id } | { data: ADR } | Mark as superseded, link to new
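The /adrs/:id/score endpoint re-runs 12-criteria scoring and stores a `{ overall, criteria }` object in `adrs.quality_score`. A minimal sketch of the aggregation step, assuming per-criterion scores of 0-100 and equal weighting (both assumptions, not spec):

```typescript
// Sketch: collapse per-criterion scores into the stored `overall` value.
// Criterion names and equal weighting are illustrative assumptions.
export function overallScore(criteria: Record<string, number>): number {
  const values = Object.values(criteria);
  if (values.length === 0) return 0;
  // Clamp defensively: AI-produced scores can drift outside the 0-100 range.
  const clamped = values.map((v) => Math.min(100, Math.max(0, v)));
  return Math.round(clamped.reduce((a, b) => a + b, 0) / clamped.length);
}
```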
Suggestions & Feedback API Core Loop
Method | Path | Input | Output | Notes
PATCH | /suggestions/:id | { status: 'accepted'|'rejected'|'deferred', feedback? } | { data: Suggestion } | User acts on suggestion
POST | /suggestions/:id/feedback | { rating: 1-5, comment? } | { data: Feedback } | Feeds back into prompt improvement
GET | /suggestions/search | ?q=query&project_id | { data: Suggestion[] } | Semantic search via pgvector
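The search endpoint is backed by pgvector's cosine distance operator (`vector_cosine_ops` on the embeddings table). For reference, the same similarity measure in TypeScript, useful in unit tests or client-side re-ranking:

```typescript
// Cosine similarity between two equal-length vectors: dot(a,b) / (|a| * |b|).
// pgvector's <=> operator returns cosine *distance*, i.e. 1 - this value.
export function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length || a.length === 0) throw new Error("dimension mismatch");
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```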
Admin & Enterprise API Enterprise
Method | Path | Auth | Purpose
GET | /admin/org | owner | Org settings, billing, limits
PATCH | /admin/org | owner | Update org settings
GET | /admin/members | admin+ | List all org members with roles
POST | /admin/members/invite | admin+ | Invite user by email
PATCH | /admin/members/:id | admin+ | Change member role
DELETE | /admin/members/:id | owner | Remove member
GET | /admin/audit-log | admin+ | Paginated audit trail
GET | /admin/usage | admin+ | Usage metrics + cost breakdown
GET | /admin/compliance | admin+ | Compliance dashboard data
POST | /admin/billing/portal | owner | Creates Stripe billing portal session

7. Authentication & Role-Based Access Control

Complete auth flow using Supabase Auth. RBAC with 6 roles. Permission matrix for every action.

Auth Flow — Step by Step Implementation

Signup Flow

1. User clicks Sign Up → /signup page
2. Google SSO or Magic Link → supabase.auth.signInWithOAuth / signInWithOtp
3. Callback → /auth/callback exchanges code for session
4. DB trigger → on auth.users INSERT, creates profile row
5. Onboarding → create/join org, set name, avatar
6. Dashboard → redirect to /

// src/app/(auth)/callback/route.ts
import { NextResponse } from 'next/server';
import { createServerClient } from '@supabase/ssr';

export async function GET(request: Request) {
  const { searchParams } = new URL(request.url);
  const code = searchParams.get('code');
  const next = searchParams.get('next') ?? '/';
  if (code) {
    const supabase = createServerClient(/* cookie config */);
    const { error } = await supabase.auth.exchangeCodeForSession(code);
    if (!error) return NextResponse.redirect(new URL(next, request.url));
  }
  return NextResponse.redirect(new URL('/login?error=auth_failed', request.url));
}

// src/lib/middleware/auth.ts — Next.js middleware
import type { NextRequest } from 'next/server';

export async function middleware(request: NextRequest) {
  const supabase = createServerClient(/* ... */);
  const { data: { user } } = await supabase.auth.getUser();

  // Public routes: /login, /signup, /callback, /api/webhooks/*
  const isPublic = ['/login', '/signup', '/callback']
    .some(p => request.nextUrl.pathname.startsWith(p));

  if (!user && !isPublic) {
    return NextResponse.redirect(new URL('/login', request.url));
  }
  if (user && isPublic) {
    return NextResponse.redirect(new URL('/', request.url));
  }
  return NextResponse.next(); // authenticated user on a protected route — continue
}
RBAC — 6 Roles Permission Matrix Security
Action | Owner | Admin | Architect | Developer | Viewer | Billing
View projects | ✓ | ✓ | ✓ | ✓ | ✓ | -
Create/edit projects | ✓ | ✓ | ✓ | - | - | -
Delete projects | ✓ | ✓ | - | - | - | -
Start meeting sessions | ✓ | ✓ | ✓ | ✓ | - | -
Upload diagrams | ✓ | ✓ | ✓ | ✓ | - | -
Create/edit ADRs | ✓ | ✓ | ✓ | ✓ | - | -
Approve ADRs | ✓ | ✓ | ✓ | - | - | -
Connect repos | ✓ | ✓ | ✓ | - | - | -
View suggestions | ✓ | ✓ | ✓ | ✓ | ✓ | -
Manage team members | ✓ | ✓ | - | - | - | -
View audit log | ✓ | ✓ | - | - | - | -
Manage billing | ✓ | - | - | - | - | ✓
View compliance | ✓ | ✓ | ✓ | - | - | -
Org settings | ✓ | - | - | - | - | -
(Matrix mirrors ROLE_PERMISSIONS in permission.ts.)
// src/lib/middleware/permission.ts
type Permission =
  | 'project:read' | 'project:write' | 'project:delete'
  | 'session:write' | 'diagram:write'
  | 'adr:write' | 'adr:approve'
  | 'code:connect' | 'team:manage' | 'audit:read'
  | 'billing:manage' | 'compliance:read' | 'org:settings';

const ROLE_PERMISSIONS: Record<UserRole, Permission[]> = {
  owner: ['project:read', 'project:write', 'project:delete', 'session:write',
    'diagram:write', 'adr:write', 'adr:approve', 'code:connect', 'team:manage',
    'audit:read', 'billing:manage', 'compliance:read', 'org:settings'],
  admin: ['project:read', 'project:write', 'project:delete', 'session:write',
    'diagram:write', 'adr:write', 'adr:approve', 'code:connect', 'team:manage',
    'audit:read', 'compliance:read'],
  architect: ['project:read', 'project:write', 'session:write', 'diagram:write',
    'adr:write', 'adr:approve', 'code:connect', 'compliance:read'],
  developer: ['project:read', 'session:write', 'diagram:write', 'adr:write'],
  viewer: ['project:read'],
  billing: ['billing:manage'],
};

export function requirePermission(...perms: Permission[]): Middleware {
  return async (ctx, next) => {
    const member = await ctx.supabase
      .from('org_members').select('role')
      .eq('user_id', ctx.userId).eq('org_id', ctx.orgId).single();
    if (!member.data) { ctx.res = { status: 403 }; return; }

    const userPerms = ROLE_PERMISSIONS[member.data.role];
    const hasAll = perms.every(p => userPerms.includes(p));
    if (!hasAll) { ctx.res = { status: 403, body: 'Insufficient permissions' }; return; }

    await next();
  };
}
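The matrix check inside requirePermission can be lifted into a pure helper so RBAC rules are unit-testable without a Supabase context. A sketch (the matrix is passed in as an argument rather than imported, so tests can supply small fixtures):

```typescript
// Pure equivalent of the middleware's check: does `role` hold every required permission?
export function roleHasAll(
  matrix: Record<string, string[]>,
  role: string,
  required: string[],
): boolean {
  const granted = matrix[role] ?? []; // unknown role → no permissions
  return required.every((p) => granted.includes(p));
}
```

requirePermission could then reduce to a DB lookup plus one call to this function, which keeps the authorization logic in a single, easily audited place.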
Enterprise SSO — SAML + SCIM Phase 4

Enterprise customers (>100 seats) get SAML SSO + SCIM auto-provisioning.

SAML SSO

  • Supabase Auth supports SAML natively (Enterprise plan)
  • Admin provides IdP metadata URL (Okta, Azure AD, OneLogin)
  • Domain verification: verify DNS TXT record
  • Force SSO: all users with matching email domain must use SSO
  • JIT provisioning: first login auto-creates profile + joins org

SCIM 2.0 Provisioning

  • Edge Function: /scim/v2/Users and /scim/v2/Groups
  • Handles: CREATE user → auto-invite to org
  • Handles: UPDATE user → sync name, role
  • Handles: DELETE user → deactivate (soft delete)
  • Groups → map to ArchPilot roles
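The "DELETE user → deactivate" rule above deserves emphasis: SCIM deprovisioning (or a PATCH setting `active: false`) should soft-delete the membership, never hard-delete rows. A sketch of the mapping, where the `ScimUser` interface is the minimal subset of SCIM 2.0 this illustration needs:

```typescript
// Hypothetical mapping from an incoming SCIM user resource to a member update.
// SCIM's `userName` is conventionally the email for IdP-provisioned users.
interface ScimUser {
  userName: string;
  active: boolean;
  displayName?: string;
}

export function memberUpdateFromScim(u: ScimUser): {
  email: string;
  deactivated: boolean; // drives the soft delete, not a row DELETE
  fullName?: string;
} {
  return { email: u.userName, deactivated: !u.active, fullName: u.displayName };
}
```

Soft deletion preserves the audit trail and lets an accidental IdP deprovision be reversed without data loss.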

8. Meeting Intelligence Engine — Technical Spec

Real-time audio capture → STT → context assembly → AI suggestion → broadcast. End-to-end latency target: <5 seconds.

Pipeline Audio → Suggestion (3-5s)

Electron (desktopCapturer → MediaRecorder, webm/opus 16kHz)
  → WebSocket (250ms chunks)
  → Deepgram STT (Nova-3, ~200ms)
  → PII Filter (~50ms)
  → Context Build (2-min window)
  → Trigger Check (Haiku, ~100ms)
  → Cache Check (pgvector)
  → AI (Sonnet, 1-3s)
  → Zod Validate (schema)
  → Broadcast (Realtime)

WebSocket Server Handler Core
// src/app/api/v1/ws/sessions/[id]/audio/route.ts
import { createClient } from '@deepgram/sdk';

export function SOCKET(
  client: WebSocket,
  request: Request,
  server: WebSocketServer
) {
  const sessionId = extractSessionId(request);
  const deepgram = createClient(process.env.DEEPGRAM_API_KEY);

  // Open Deepgram live connection
  const dgConnection = deepgram.listen.live({
    model: 'nova-3',
    language: 'en',
    smart_format: true,
    diarize: true,          // speaker identification
    interim_results: true,  // faster partial results
    utterance_end_ms: 1000,
    sample_rate: 16000,
    encoding: 'opus',
  });

  // Audio from Electron → forward to Deepgram
  client.on('message', (audioChunk: Buffer) => {
    dgConnection.send(audioChunk);
  });

  // Deepgram transcript → process → store → broadcast
  dgConnection.on('transcript', async (data) => {
    if (!data.is_final) return; // only process final results
    const text = data.channel.alternatives[0].transcript;
    if (!text.trim()) return;

    // 1. Store transcript
    await container.transcriptRepo.create({
      session_id: sessionId,
      text: sanitizePII(text),
      speaker_id: data.channel.alternatives[0].words?.[0]?.speaker,
      confidence: data.channel.alternatives[0].confidence,
      start_ms: Math.floor(data.start * 1000),
      end_ms: Math.floor((data.start + data.duration) * 1000),
    });

    // 2. Trigger detection (Haiku — fast classification)
    const trigger = await container.modelRouter.route({
      slug: 'meeting.trigger',
      complexity: 'low',
      maxLatencyMs: 500,
      input: text,
    });

    // 3. Only generate suggestion for relevant topics
    if (['decision', 'risk', 'architecture_topic'].includes(trigger)) {
      const context = await assembleContext(sessionId);
      const suggestion = await generateSuggestion(context, container);
      // Stored via Supabase INSERT → triggers Realtime broadcast to UI
    }
  });

  client.on('close', () => dgConnection.finish());
}
Context Assembly — Sliding Window 4000 tokens max
// src/lib/engines/meeting/context-assembler.ts
export async function assembleContext(sessionId: string): Promise<string> {
  // Last 2 minutes of transcript
  const recentTranscript = await getRecentTranscript(sessionId, 120_000);
  // Project metadata
  const project = await getProjectForSession(sessionId);
  // Recent decisions from this project
  const recentDecisions = await getRecentDecisions(project.id, 5);
  // Active ADRs
  const activeADRs = await getActiveADRSummaries(project.id);

  // Result is truncated to 4000 tokens via tiktoken
  return `## Project: ${project.name}
## Tech Stack: ${project.tech_stack.join(', ')}
## Recent Decisions: ${recentDecisions.map(d => d.title).join('; ')}
## Active ADRs: ${activeADRs.map(a => a.title).join('; ')}
## Current Discussion: ${recentTranscript}`;
}
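The 4000-token cap above uses tiktoken; for contexts where an exact tokenizer is unavailable (or too slow), a cruder character-based approximation works as a sketch. The ~4 characters per token ratio is a rough English-text heuristic, not part of the spec:

```typescript
// Approximate truncation: ~4 chars/token for English prose. The real assembler
// would use tiktoken for an exact count.
export function truncateToApproxTokens(
  text: string,
  maxTokens: number,
  charsPerToken = 4,
): string {
  const maxChars = maxTokens * charsPerToken;
  if (text.length <= maxChars) return text;
  // Trim from the front, not the back: in a sliding meeting window the most
  // recent discussion is the most relevant context.
  return text.slice(text.length - maxChars);
}
```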
Suggestion Output — Zod Validation Schema Critical
// src/lib/schemas/suggestion-output.schema.ts
import { z } from 'zod';

export const SuggestionOutputSchema = z.object({
  title: z.string().max(120),
  body: z.string().max(2000), // markdown content
  type: z.enum(['best_practice', 'anti_pattern', 'risk_alert', 'alternative',
    'cost_optimization', 'security', 'scalability']),
  severity: z.enum(['critical', 'high', 'medium', 'low', 'info']),
  pros: z.array(z.string()).min(1).max(5),
  cons: z.array(z.string()).max(5),
  risks: z.array(z.string()).max(5),
  confidence: z.number().min(0).max(1),
  cost_impact: z.string().optional(),  // '+$500/mo' or '-$200/mo'
  effort: z.string().optional(),       // '2 hours', '1 sprint'
  reversibility: z.enum(['easy', 'moderate', 'hard', 'irreversible']).optional(),
});

// Every AI response is validated before storing:
const parsed = SuggestionOutputSchema.safeParse(JSON.parse(aiResponse));
if (!parsed.success) {
  logger.warn('AI output validation failed', parsed.error.flatten());
  return; // discard invalid suggestions
}
Post-Meeting Pipeline Async

Triggered by the meeting.ended event; runs as a background job-queue task.

Post-Meeting Flow

Session End (user clicks stop) → Full Transcript (assemble all) → Summary (Sonnet gen) → Extract Decisions (from transcript) → Draft ADRs (per decision) → Embeddings (chunk + vectorize) → Notify (email summary)

Performance Targets

STT latency: <300ms | Trigger detection: <100ms | Suggestion generation: <3s | Total end-to-end: <5s | Post-meeting summary: <30s

9. Diagram Intelligence Engine — Technical Spec

Upload any architecture diagram → parse to graph → detect anti-patterns → suggest replacements → visual editing → version compare.

Pipeline Upload → Analysis → Edit

Upload (7 formats) → Detect Format (auto) → Parser Factory (route to parser) → Build Graph (nodes + edges) → Anti-Patterns (40+ rules) → AI Suggestions (pros/cons/risk) → React Flow (visual editor)

Parser Factory — 7 Formats, 6 Parsers Core
| Format | Parser | Strategy | Output |
|---|---|---|---|
| PNG / JPG / PDF | VisionParser | GPT-4o Vision → extract components, arrows, labels | Structured JSON graph |
| Draw.io (.xml) | DrawioParser | Parse mxGraphModel → extract mxCell elements | Native graph with styles |
| Excalidraw | ExcalidrawParser | JSON → elements array + arrow bindings | Positional graph |
| Mermaid (.md) | MermaidParser | Mermaid.js parser → AST → nodes/edges | Semantic graph |
| SVG | SvgParser | DOM parsing + AI label extraction | Labeled graph |
| Terraform / CDK | IaCParser | HCL/TS parser → resource dependency map | Infrastructure graph |
Graph Data Model Schema
// Unified graph model stored in diagrams.graph_data JSONB
interface DiagramGraph {
  nodes: {
    id: string;
    type: 'service' | 'database' | 'queue' | 'cache' | 'gateway' | 'cdn' | 'loadbalancer' | 'external';
    label: string;
    properties: Record<string, any>; // technology, version, cloud provider, etc.
    position: { x: number; y: number };
  }[];
  edges: {
    id: string;
    source: string; // node id
    target: string; // node id
    label: string;  // 'REST', 'gRPC', 'async', 'SQL', etc.
    type: 'sync' | 'async' | 'data' | 'auth';
  }[];
}
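Graphs recovered by the vision parser from raster images can arrive with edges pointing at nodes that were never extracted. A small validation sketch worth running before rule evaluation (function name and slimmed-down types are assumptions, not part of the spec):

```typescript
interface GraphNode { id: string; label: string; }
interface GraphEdge { id: string; source: string; target: string; }
interface Graph { nodes: GraphNode[]; edges: GraphEdge[]; }

// Returns the ids of edges whose source or target node does not exist —
// a common artifact of vision-based parsing of PNG/JPG/PDF diagrams.
export function findDanglingEdges(graph: Graph): string[] {
  const ids = new Set(graph.nodes.map((n) => n.id));
  return graph.edges
    .filter((e) => !ids.has(e.source) || !ids.has(e.target))
    .map((e) => e.id);
}
```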
Anti-Pattern Rules Engine — 40+ Rules Detection

Structural (7)

  • Single point of failure
  • Sync chain > 4 services
  • Shared DB between microservices
  • Missing load balancer
  • No circuit breaker
  • Circular dependency
  • God service (>8 connections)

Data Flow (6)

  • No cache between service & DB
  • Dual writes without event sourcing
  • Missing dead letter queue
  • No CDN in front of static assets
  • Sync reads from write-primary
  • Missing read replica

Security (6)

  • Public → direct DB access
  • No API gateway/auth layer
  • Secrets not through vault
  • Unencrypted cross-zone traffic
  • Missing WAF
  • No network segmentation
// src/lib/engines/diagram/rules/anti-pattern.ts
interface AntiPatternRule {
  id: string;
  name: string;
  category: 'structural' | 'data_flow' | 'security' | 'scalability' | 'cost';
  severity: SeverityLevel;
  detect(graph: DiagramGraph): AntiPatternMatch[];
}

// Example: Circular Dependency Detection
class CircularDependencyRule implements AntiPatternRule {
  id = 'structural.circular-dependency';
  name = 'Circular Dependency Detected';
  category = 'structural' as const; // literal type required by the interface
  severity = 'high' as const;

  detect(graph: DiagramGraph): AntiPatternMatch[] {
    const cycles = findCycles(buildAdjacencyList(graph));
    return cycles.map(cycle => ({
      rule: this,
      nodes: cycle,
      message: `Circular: ${cycle.map(n => n.label).join(' → ')}`,
      fix: 'Introduce event bus or message queue to break cycle',
    }));
  }
}
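`buildAdjacencyList` and `findCycles` are referenced above but not shown. One possible DFS-based sketch, operating on node ids rather than full node objects (signatures are assumed, not from the spec):

```typescript
type Adjacency = Map<string, string[]>;

export function buildAdjacencyList(edges: { source: string; target: string }[]): Adjacency {
  const adj: Adjacency = new Map();
  for (const e of edges) {
    if (!adj.has(e.source)) adj.set(e.source, []);
    adj.get(e.source)!.push(e.target);
    if (!adj.has(e.target)) adj.set(e.target, []); // ensure every node has an entry
  }
  return adj;
}

// Depth-first search with a recursion stack: revisiting a node that is on
// the current path means the path slice from that node forms a cycle.
export function findCycles(adj: Adjacency): string[][] {
  const cycles: string[][] = [];
  const visited = new Set<string>();
  const onPath = new Set<string>();
  const path: string[] = [];

  function dfs(node: string): void {
    visited.add(node);
    onPath.add(node);
    path.push(node);
    for (const next of adj.get(node) ?? []) {
      if (onPath.has(next)) {
        cycles.push(path.slice(path.indexOf(next)));
      } else if (!visited.has(next)) {
        dfs(next);
      }
    }
    path.pop();
    onPath.delete(node);
  }

  for (const node of adj.keys()) if (!visited.has(node)) dfs(node);
  return cycles;
}
```

Recursion depth is bounded by the longest simple path, which is fine for architecture diagrams of a few hundred nodes.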
Component Replacement Flow Interactive

User clicks node (e.g. "RDS") → AI generates 3-5 alternatives → Each shows Pros/Cons/Risk/Cost → User picks one → Auto-rewire (all connections) → Cascade Analysis (impact on system)

Version Comparison — 8 Dimensions Differentiator
| Dimension | Original | AI Optimized | Cost Optimized |
|---|---|---|---|
| Reliability | 62/100 | 89/100 | 74/100 |
| Scalability | 45/100 | 92/100 | 71/100 |
| Security | 58/100 | 85/100 | 60/100 |
| Monthly Cost | $2,400 | $4,100 | $1,800 |
| Anti-Patterns | 7 | 1 | 3 |
| Compliance | No | SOC2 + HIPAA | SOC2 |
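One way the per-dimension scores could be folded into a single headline number for comparing versions is a weighted average; the weights and dimension keys below are illustrative assumptions, not part of the spec:

```typescript
// Illustrative weights over the numeric dimensions (must sum to 1.0).
const WEIGHTS: Record<string, number> = {
  reliability: 0.4,
  scalability: 0.3,
  security: 0.3,
};

// Weighted average of 0-100 dimension scores, rounded to an integer.
export function overallScore(scores: Record<string, number>): number {
  let total = 0;
  for (const [dim, weight] of Object.entries(WEIGHTS)) {
    total += (scores[dim] ?? 0) * weight;
  }
  return Math.round(total);
}
```

Cost and compliance are categorical/monetary and would be presented alongside the score rather than folded into it.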

10. Code Intelligence Engine — Technical Spec

Connect GitHub/GitLab → AST parse → detect patterns → compare decisions vs code → PR-level architecture review.

Pipeline PR → Review

GitHub Webhook (PR opened) → Fetch Diff (changed files) → Tree-sitter (AST parse) → Pattern Check (60+ rules) → Drift Detect (decision vs code) → Review Report (PR comments)

GitHub OAuth + Webhook Setup Connection
// POST /api/v1/code/repos/connect
// 1. User initiates GitHub OAuth → GitHub app installation
// 2. Callback stores access token in Supabase Vault
// 3. Create webhook on repo via GitHub API
// 4. Webhook fires on: pull_request.opened, pull_request.synchronize

// supabase/functions/webhook-github/index.ts (Edge Function)
import { verify } from '@octokit/webhooks-methods';

Deno.serve(async (req) => {
  const signature = req.headers.get('x-hub-signature-256');
  const body = await req.text();
  const valid = await verify(WEBHOOK_SECRET, body, signature);
  if (!valid) return new Response('Unauthorized', { status: 401 });

  const event = JSON.parse(body);
  if (event.action === 'opened' || event.action === 'synchronize') {
    // Queue a code review job
    await supabase.from('job_queue').insert({
      type: 'code_review',
      payload: { repo_id: event.repository.id, pr_number: event.number },
    });
  }
  return new Response('OK');
});
6 Detection Categories What It Finds

Service Boundaries

Module boundaries, dependency graphs, "is this a monolith disguised as microservices?"

API Contracts

REST endpoints, GraphQL schemas, versioning, error handling, auth middleware.

Data Access

ORM usage, N+1 queries, missing indexes, connection pooling, transaction boundaries.

Error Handling

Try/catch coverage, retry logic, circuit breakers, timeouts, graceful degradation.

Security

Auth middleware, input validation, SQL injection, XSS, hardcoded secrets, CORS config.

Design Patterns

Repository, factory, strategy, CQRS, event sourcing, DDD. Flags misuse.

Decision-to-Code Drift Detection Key Feature
| Decision (Meeting/ADR) | Code Reality | Drift Type | Severity |
|---|---|---|---|
| "Use event-driven for orders" | Synchronous HTTP calls found | Architecture Drift | Critical |
| "All services need circuit breakers" | 3 of 8 services missing | Implementation Gap | High |
| "Use PostgreSQL for user data" | PostgreSQL confirmed ✓ | Aligned | None |
| "API versioning via URL path" | Mix of URL + header versioning | Inconsistency | Medium |
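Once the code scan has counted how many services satisfy a decision, the drift categories in the table can be derived mechanically. A simplified classification sketch (types and thresholds are assumptions for illustration):

```typescript
type DriftType = 'aligned' | 'architecture_drift' | 'implementation_gap';

interface DecisionCheck {
  requiredPattern: string;   // e.g. 'circuit_breaker'
  servicesTotal: number;     // services the decision applies to
  servicesCompliant: number; // services found compliant by the scan
}

// Mirrors the table above: full compliance is aligned, zero compliance is
// architecture drift, partial compliance is an implementation gap.
export function classifyDrift(check: DecisionCheck): DriftType {
  if (check.servicesCompliant === check.servicesTotal) return 'aligned';
  if (check.servicesCompliant === 0) return 'architecture_drift';
  return 'implementation_gap';
}
```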

11. ADR Intelligence Engine — Technical Spec

Auto-generate ADRs from meetings → 12-criteria quality scoring → conflict detection → living repository with health monitoring.

Pipeline Meeting → ADR Lifecycle

Meeting (discussion) → Extract (decisions) → Generate ADR (AI draft) → Quality Score (12 criteria) → Cross-Ref (conflicts) → Human Review (approve) → Publish (repository) → Health CRON (weekly)

12-Criteria Quality Scoring Scoring Algorithm
| Criterion | Weight | Checks | Common Failure |
|---|---|---|---|
| Context Completeness | 12% | WHY was this needed? | Jumps to solution |
| Options Considered | 10% | 3+ alternatives listed? | Only chosen option |
| Trade-off Analysis | 10% | Pros/cons per option? | Only pros of chosen |
| Decision Reasoning | 10% | WHY this option won? | No explanation |
| Consequences | 10% | Impact documented? | Missing in 70% of ADRs |
| Risk Assessment | 10% | What could go wrong? | Overly optimistic |
| Reversibility | 8% | How hard to reverse? | Missing entirely |
| Scale Assumptions | 8% | At what scale? | Made for 100, applied at 1M |
| Cost Implications | 7% | Cost impact? | No cost analysis |
| Compliance Impact | 5% | Regulatory? | Not considered |
| Expiry / Review Date | 5% | When to revisit? | Decisions become stale |
| Cross-References | 5% | Related ADRs linked? | ADRs in isolation |

Score Thresholds

85-100: Excellent — publish immediately | 70-84: Good — minor improvements suggested | 50-69: Needs Work — specific gaps flagged | Below 50: Insufficient — requires significant revision
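Putting the weight table and the thresholds together: the overall score is a weighted average of per-criterion scores (0-100 each). A sketch, with criterion keys shortened for illustration (the keys and band names are assumptions; the weights are from the table):

```typescript
// Weights from the 12-criteria table (sum to 100).
export const CRITERIA_WEIGHTS: Record<string, number> = {
  context: 12, options: 10, tradeoffs: 10, reasoning: 10, consequences: 10,
  risk: 10, reversibility: 8, scale: 8, cost: 7, compliance: 5,
  expiry: 5, crossrefs: 5,
};

// Each criterion is scored 0-100; the overall score is the weighted average.
export function adrQualityScore(scores: Record<string, number>): number {
  let total = 0;
  for (const [criterion, weight] of Object.entries(CRITERIA_WEIGHTS)) {
    total += ((scores[criterion] ?? 0) * weight) / 100;
  }
  return Math.round(total);
}

// Maps the score onto the threshold bands defined above.
export function scoreBand(score: number): string {
  if (score >= 85) return 'excellent';
  if (score >= 70) return 'good';
  if (score >= 50) return 'needs_work';
  return 'insufficient';
}
```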

Conflict Detection Safety Net

Two-pass approach: embeddings similarity (fast, broad) → rule-based check (precise, specific).

// Pass 1: Semantic similarity via pgvector
const similar = await supabase.rpc('match_embeddings', {
  query_embedding: newAdrEmbedding,
  match_threshold: 0.82,
  match_count: 10,
  source_type: 'adr',
});

// Pass 2: Rule-based checks on similar ADRs
// - Opposite technology choices (e.g. ADR-001 says "use MongoDB", new says "use PostgreSQL")
// - Contradicting patterns (e.g. "sync" vs "async" for same service)
// - Superseded but not marked (same domain, different decision)
// - Scale assumption conflicts (one assumes 100 users, another 1M)
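Pass 2 is described above only in comments. A toy version of the first check, opposite technology choices, could look like this (the opposing-pairs list is illustrative and would be curated per domain in practice):

```typescript
// Illustrative opposing-choice pairs; a real rule set would be curated.
const OPPOSING: [string, string][] = [
  ['mongodb', 'postgresql'],
  ['sync', 'async'],
  ['monolith', 'microservices'],
];

// Word-boundary match so 'sync' does not match inside 'async'.
const matches = (text: string, term: string) => new RegExp(`\\b${term}\\b`).test(text);

// Flags a conflict when two ADR texts take opposite sides of a pair.
export function hasOpposingChoice(adrA: string, adrB: string): boolean {
  const a = adrA.toLowerCase();
  const b = adrB.toLowerCase();
  return OPPOSING.some(
    ([x, y]) => (matches(a, x) && matches(b, y)) || (matches(a, y) && matches(b, x)),
  );
}
```

This only runs on the top-K neighbors from pass 1, so the crude string matching stays cheap and is always scoped to semantically related ADRs.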
Health Check CRON Weekly
// supabase/functions/cron-adr-health/index.ts
// Runs weekly via pg_cron:
//   SELECT cron.schedule('adr-health', '0 9 * * 1', $$
//     SELECT net.http_post('.../functions/v1/cron-adr-health') $$);

// Checks:
// 1. Stale ADRs: review_date passed + no update in 6 months → flag
// 2. Low quality: score < 50 and status = 'accepted' → notify author
// 3. Orphaned: references deleted project/session → flag
// 4. Conflicting: re-run conflict detection on all active ADRs

12. Best Practices Engine — Technical Spec

Living knowledge base across 11 domains. Contextual recommendations with pros/cons/risks. Learning loop from user feedback.

11 Architecture Domains Knowledge Base

Cloud Architecture

AWS, Azure, GCP, multi-cloud, hybrid, serverless

Application Design

Monolith, microservices, CQRS, DDD, event sourcing

Data Management

SQL/NoSQL, data lakes, streaming, data mesh

API Design

REST, GraphQL, gRPC, versioning, error handling

Security

Zero trust, OAuth/OIDC, encryption, secrets mgmt

Scalability

Horizontal/vertical, caching, CDN, connection pooling

Reliability

Circuit breakers, retries, graceful degradation, chaos

DevOps / CI/CD

GitOps, IaC, blue-green, canary, feature flags

Monitoring

Observability, tracing, alerting, SLOs/SLIs

Frontend

SSR, micro-frontends, state mgmt, performance

Team / Process

Team topologies, Conway's law, tech debt mgmt

Recommendation Structure — Every Suggestion Standard
// Every best practice recommendation includes ALL of these fields:
interface Recommendation {
  title: string;         // "Use Aurora Serverless v2 instead of RDS"
  description: string;   // Detailed explanation (markdown)
  domain: string;        // "data_management"
  pros: string[];        // ["Auto-scales to zero", "Pay per ACU-second"]
  cons: string[];        // ["Cold start ~25s", "Max 256 ACUs"]
  risks: string[];       // ["Aurora Serverless v2 still has regional limits"]
  confidence: number;    // 0.0 to 1.0
  cost_impact: string;   // "+$200/mo" or "-$500/mo"
  effort: string;        // "1 sprint" or "3 days"
  reversibility: string; // "easy" | "moderate" | "hard" | "irreversible"
  sources: string[];     // Links to official docs, case studies
}
// No vague advice. Every recommendation is actionable and quantified.
Learning Loop — Feedback → Improvement Adaptive

Suggestion (shown to user) → User Action (accept / reject) → Rating (1-5 stars) → Update Score (prompt_registry) → Better Next Time (higher relevance)

Accepted suggestions boost the underlying prompt's performance_score. Rejected suggestions with feedback trigger prompt review. Low-scoring prompts get A/B tested against new versions.
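The spec does not fix the update rule for performance_score. One reasonable sketch is an exponential moving average, where recent feedback outweighs history; the alpha value and the feedback-to-signal mapping below are assumptions:

```typescript
// EMA smoothing factor — an assumption, tuned in practice.
const ALPHA = 0.1;

// Maps feedback to a 0..1 signal (rating dominates when present) and
// nudges the stored score toward it.
export function updatePerformanceScore(
  current: number,              // stored score, 0..1
  action: 'accept' | 'reject',
  rating?: number,              // optional 1-5 stars
): number {
  const signal =
    rating !== undefined ? (rating - 1) / 4 : action === 'accept' ? 1 : 0;
  return current + ALPHA * (signal - current);
}
```

An EMA keeps the score stable under noisy individual ratings while still letting sustained feedback move it, which matches the A/B-testing trigger described above.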

13. Enterprise & Governance Layer

Compliance, policy engine, audit logging, billing, and admin console for B2B enterprise customers.

Compliance Center — SOC2 / HIPAA / GDPR Enterprise
// Org-level compliance configuration stored in organizations.settings
{
  "compliance": {
    "soc2":    { "enabled": true, "audit_retention_years": 7 },
    "hipaa":   { "enabled": true, "pii_masking": "strict", "baa_signed": true },
    "gdpr":    { "enabled": true, "data_residency": "eu-west-1", "right_to_delete": true },
    "pci_dss": { "enabled": false }
  },
  "data_retention": {
    "transcripts_days": 365,
    "audit_log_days": 2555, // 7 years for SOC2
    "embeddings_days": 730
  }
}
Policy Engine — Org-Level Architecture Rules Governance
| Rule | Evaluated Against | Action on Violation |
|---|---|---|
| "All services must have circuit breakers" | Code scans, Diagrams | Flag in PR review + diagram analysis |
| "No single point of failure" | Diagrams | Anti-pattern alert in diagram analysis |
| "All APIs must be versioned" | Code scans | Flag unversioned endpoints in PR review |
| "ADRs required for infrastructure changes" | PR labels + commit messages | Block merge until ADR linked |
| "Max 3 sync hops between services" | Diagrams, Code | Critical anti-pattern alert |
Stripe Billing — 4 Tiers Revenue
| Tier | Price | Seats | AI Calls/mo | Features |
|---|---|---|---|---|
| Free | $0 | 1 | 50 | 1 project, meeting only, no export |
| Starter | $29/seat/mo | 5 | 500 | 3 projects, all engines, basic analytics |
| Professional | $79/seat/mo | 25 | 5,000 | Unlimited projects, RBAC, integrations, priority support |
| Enterprise | Custom | Unlimited | Unlimited | SSO/SAML, SCIM, compliance, dedicated support, SLA, on-prem option |
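Usage-based metering implies a quota check before every AI call. A sketch using the limits from the tier table (function name and the Infinity-for-unlimited encoding are assumptions):

```typescript
// Limits copied from the tier table; Infinity models "unlimited".
const TIER_LIMITS: Record<string, { seats: number; aiCallsPerMonth: number }> = {
  free:         { seats: 1,        aiCallsPerMonth: 50 },
  starter:      { seats: 5,        aiCallsPerMonth: 500 },
  professional: { seats: 25,       aiCallsPerMonth: 5000 },
  enterprise:   { seats: Infinity, aiCallsPerMonth: Infinity },
};

// Deny unknown tiers outright; otherwise check the monthly counter.
export function canMakeAiCall(tier: string, callsThisMonth: number): boolean {
  const limits = TIER_LIMITS[tier];
  if (!limits) return false;
  return callsThisMonth < limits.aiCallsPerMonth;
}
```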
Change Management Workflow Process

1. Propose (from meeting/ADR) → 2. Policy Check (auto-validate) → 3. Impact Analysis (blast radius) → 4. Review Board (notify architects) → 5. Approve (multi-sig) → 6. Track (PRs + drift)

14. Electron Desktop Agent — Technical Spec

Lightweight system tray app. Captures system audio, streams to backend, shows floating suggestions. No meeting bot — no joining calls.

Audio Capture — desktopCapturer Core
// src/renderer/audio-capture.ts
// NOTE: getUserMedia is only available in a renderer process; the main
// process can obtain the desktopCapturer source id and hand it over IPC.
async function startAudioCapture(): Promise<MediaRecorder> {
  // Get system audio via desktopCapturer
  const sources = await desktopCapturer.getSources({ types: ['screen'] });
  const stream = await navigator.mediaDevices.getUserMedia({
    audio: {
      // Chromium-specific constraints, not in the standard TS lib types
      mandatory: {
        chromeMediaSource: 'desktop',
        chromeMediaSourceId: sources[0].id,
      },
    } as any,
    video: false,
  });

  // Encode as webm/opus, chunk every 250ms
  // (audioBitsPerSecond is the bitrate; the 16kHz sample rate is set server-side)
  const recorder = new MediaRecorder(stream, {
    mimeType: 'audio/webm;codecs=opus',
    audioBitsPerSecond: 16000,
  });
  recorder.ondataavailable = (e) => {
    if (e.data.size > 0) {
      audioStreamer.send(e.data); // → WebSocket to backend
    }
  };
  recorder.start(250); // chunk every 250ms
  return recorder;
}
Session Lifecycle State Machine States

Idle (no session) → Connecting (WebSocket open) → Recording (audio streaming) ⇄ Paused (buffer locally) → Processing (post-meeting) → Completed (summary ready)

Offline Buffer — SQLite Queue Resilience
// If WebSocket disconnects, queue audio chunks in local SQLite.
// On reconnect, flush queue in order → no audio data lost.
const db = new Database('audio-buffer.db');
db.exec(`CREATE TABLE IF NOT EXISTS audio_queue (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  session_id TEXT NOT NULL,
  chunk BLOB NOT NULL,
  timestamp INTEGER NOT NULL,
  sent BOOLEAN DEFAULT FALSE
)`);
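When the connection returns, unsent chunks must be flushed in capture order so the transcript stream stays contiguous. An in-memory model of that selection logic (the real implementation would run an equivalent `SELECT ... WHERE sent = FALSE ORDER BY timestamp` against the SQLite table; names are illustrative):

```typescript
interface BufferedChunk {
  id: number;
  sessionId: string;
  timestamp: number; // capture time, ms
  sent: boolean;
}

// Picks the next batch to flush: unsent chunks only, oldest first.
export function nextFlushBatch(queue: BufferedChunk[], limit = 100): BufferedChunk[] {
  return queue
    .filter((c) => !c.sent)
    .sort((a, b) => a.timestamp - b.timestamp)
    .slice(0, limit);
}
```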
IPC Communication Map Architecture
| Channel | Direction | Payload |
|---|---|---|
| START_SESSION | Renderer → Main | { projectId, title } |
| STOP_SESSION | Renderer → Main | { sessionId } |
| AUDIO_STATUS | Main → Renderer | { state, duration, bytesStreamed } |
| NEW_SUGGESTION | Main → Renderer | Suggestion object from backend SSE |
| CONNECTION_STATUS | Main → Renderer | { connected, latency } |
| AUTH_TOKEN | Main → Renderer | { token, expiresAt } |

15. DevOps & CI/CD — Hostinger VPS Deployment

GitHub Actions CI, PM2 for process management, Nginx reverse proxy, Certbot SSL. Supabase hosted (managed).

GitHub Actions — PR CI Workflow CI
# .github/workflows/ci.yml
name: CI
on:
  pull_request:
    branches: [main]
jobs:
  ci:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: pnpm/action-setup@v4
      - uses: actions/setup-node@v4
        with: { node-version: 22, cache: 'pnpm' }
      - run: pnpm install --frozen-lockfile
      - run: pnpm run lint       # ESLint across all packages
      - run: pnpm run typecheck  # tsc --noEmit
      - run: pnpm run test       # vitest with coverage
      - run: pnpm run build      # Next.js build
      - uses: codecov/codecov-action@v4  # Upload coverage
Hostinger VPS Deployment Production
# .github/workflows/deploy-web.yml
name: Deploy to Hostinger VPS
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pnpm install && pnpm run build
      - name: Deploy via SSH
        uses: appleboy/ssh-action@v1
        with:
          host: ${{ secrets.VPS_HOST }}
          username: ${{ secrets.VPS_USER }}
          key: ${{ secrets.VPS_SSH_KEY }}
          script: |
            cd /var/www/archpilot
            git pull origin main
            pnpm install --frozen-lockfile
            pnpm run build
            pm2 restart archpilot-web

VPS Setup Checklist

1. Ubuntu 22.04 LTS on Hostinger VPS (min 4GB RAM, 2 vCPU)
2. Install: Node.js 22 (via nvm), pnpm, PM2, Nginx, Certbot
3. pm2 start npm --name archpilot-web -- start (Next.js on port 3000)
4. Nginx reverse proxy: 443 → localhost:3000 with SSL via Certbot
5. pm2 startup && pm2 save (auto-restart on reboot)
6. UFW firewall: allow 22, 80, 443 only
7. Environment variables in /var/www/archpilot/.env.production

Monitoring Stack Observability
| Tool | Purpose | Alert Threshold |
|---|---|---|
| Sentry | Error tracking + performance | Error rate > 1% → Slack alert |
| PostHog | Product analytics + feature flags | Conversion drop > 20% → alert |
| UptimeRobot | Uptime monitoring (free tier) | Down > 30s → SMS + Slack |
| PM2 Plus | Server monitoring (CPU/RAM/disk) | CPU > 80% or RAM > 90% → alert |
Environment Strategy Environments
| Environment | URL | Database | Deploy |
|---|---|---|---|
| Local | localhost:3000 | Supabase local (Docker) | pnpm dev |
| Staging | staging.archpilot.dev | Supabase staging project | Push to staging branch |
| Production | app.archpilot.dev | Supabase production project | Merge to main |

16. Testing Strategy

Testing pyramid: Unit (70%) → Integration (20%) → E2E (10%). Coverage gates block merge.

Vitest Configuration Unit Tests
// vitest.config.ts
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    globals: true,
    environment: 'node',
    include: ['src/**/*.test.ts'],
    coverage: {
      provider: 'v8',
      reporter: ['text', 'lcov'],
      thresholds: {
        statements: 80,
        branches: 75,
        functions: 80,
        lines: 80,
      },
    },
    setupFiles: ['./test/setup.ts'],
  },
});
Example Unit Test — Service Layer Example
// src/lib/services/project.service.test.ts
import { describe, it, expect, vi } from 'vitest';

describe('ProjectService', () => {
  const mockRepo = { create: vi.fn(), findById: vi.fn() };
  const mockAudit = { log: vi.fn() };
  const mockEvents = { emit: vi.fn() };
  const service = new ProjectService(mockRepo, mockAudit, mockEvents);

  it('should create project and emit event', async () => {
    mockRepo.create.mockResolvedValue({ id: 'p1', name: 'Test' });
    const result = await service.createProject({ name: 'Test', orgId: 'org1' }, 'user1');
    expect(result.success).toBe(true);
    expect(mockAudit.log).toHaveBeenCalledWith('project.created', expect.any(Object));
    expect(mockEvents.emit).toHaveBeenCalledWith('project.created', expect.any(Object));
  });

  it('should reject if user lacks permission', async () => {
    // ... test forbidden case
  });
});
Playwright E2E — Auth Flow Test E2E
// e2e/auth.spec.ts
import { test, expect } from '@playwright/test';

test('user can sign in with magic link', async ({ page }) => {
  await page.goto('/login');
  await page.fill('[name="email"]', 'test@archpilot.dev');
  await page.click('button[type="submit"]');
  await expect(page.locator('text=Check your email')).toBeVisible();
});

test('authenticated user sees dashboard', async ({ page }) => {
  // Use stored auth state from global setup
  await page.goto('/');
  await expect(page.locator('h1')).toContainText('Dashboard');
  await expect(page.locator('[data-testid="project-list"]')).toBeVisible();
});
Coverage Gates — Merge Blockers Required
| Layer | Target | Includes |
|---|---|---|
| Services | 80% | All business logic in src/lib/services/ |
| Repositories | 80% | All DB access in src/lib/repositories/ |
| Engines | 75% | AI engines, parsers, rules |
| Components | 70% | React components with logic |
| Utils | 60% | Helper functions, formatters |
| E2E | Happy paths | Auth, meeting, diagram upload, settings |

17. Sprint-by-Sprint Development Guide

34 sprints × 2 weeks = 68 weeks. Follow this order exactly. Each sprint has dependencies on prior sprints. Do not skip ahead.

RULE: Definition of Done

A sprint is DONE when: all code merged to main, all tests passing (coverage gates met), no critical bugs, feature demo recorded, documentation updated.

PHASE 1: Foundation — Sprints 1-6 (12 weeks) Foundation

Sprint 1-2: Project Setup + Database (4 weeks)

  • Init monorepo: pnpm create turbo@latest → configure workspaces (apps/web, apps/agent, packages/*)
  • Next.js 15: pnpm create next-app apps/web → TypeScript strict, App Router, Tailwind 4, shadcn/ui init
  • Supabase: Create project → run migration 00001 (18 tables) → run migration 00002 (RLS policies)
  • pgvector: Enable extension, create embeddings table with IVFFlat index
  • Prompt Registry: Seed 9 initial prompts (meeting.suggestion, diagram.analyze, code.review, etc.)
  • Auth: Supabase Auth (Google SSO + magic link), callback route, middleware, protected routes
  • BaseRepository: Implement base class + ProjectRepo, SessionRepo, DecisionRepo, AdrRepo
  • Profile trigger: Database trigger on auth.users INSERT → creates profiles row

Done when: Auth works, empty dashboard loads, all 18 tables exist with RLS, seed data present.

Sprint 3-4: Core UI + Service Layer (4 weeks)

  • Dashboard layout: Sidebar navigation, header with user menu, breadcrumbs, responsive
  • Project CRUD: List/create/edit/delete projects with API routes + Zod validation
  • Settings pages: Profile, team members, integrations placeholders
  • DI Container: Implement container.ts wiring all repos + services
  • EventBus: Implement typed event bus with 10 initial events
  • AuditLogger: Log every write operation to audit_log table
  • React Query: Setup QueryClient provider, first hooks (useProjects, useProject)

Done when: Can create/edit/delete projects. Navigation works. Audit log captures all writes.

Sprint 5-6: AI Foundation (4 weeks)

  • ModelRouter: Implement with Anthropic (Haiku/Sonnet/Opus), OpenAI, Groq providers
  • PromptRegistry: Database-driven prompt loader with caching + version tracking
  • CircuitBreaker: Wrap all AI provider calls with circuit breaker + retry logic
  • Semantic Cache: Before calling AI, check pgvector for similar previous queries (threshold 0.92)
  • Rate Limiting: Upstash Redis integration, rate limit middleware
  • Suggestion CRUD: API routes + UI components (SuggestionCard, SuggestionList)
  • Feedback loop: Accept/reject/defer + 1-5 star rating → updates prompt performance_score

Done when: Can send test prompt → get AI response → store suggestion → collect feedback. Circuit breaker tested.

Phase 1 Milestone

Fully functional web app with auth, project management, AI pipeline, and feedback loop. Ready for Meeting Intelligence.

PHASE 2: Meeting Intelligence — Sprints 7-12 (12 weeks) Core Engine

Sprint 7-8: Electron Agent MVP (4 weeks)

  • Scaffold: Electron 33 + electron-builder + TypeScript
  • System tray: Tray icon, popup window (React), start/stop/pause controls
  • Audio capture: desktopCapturer → MediaRecorder (webm/opus, 16kHz) → 250ms chunks
  • WebSocket client: Connect to backend, send binary audio chunks
  • Auth bridge: OAuth flow in BrowserWindow → store token in electron-store (encrypted)
  • Local buffer: SQLite queue for offline audio resilience
  • Auto-updater: electron-updater with GitHub Releases

Done when: Electron app captures system audio and sends to backend via WebSocket. Works on macOS + Windows.

Sprint 9-10: Real-time Meeting Pipeline (4 weeks)

  • WebSocket server: Next.js API route handles WebSocket upgrade for audio streaming
  • Deepgram integration: Streaming STT with Nova-3, interim results, speaker diarization
  • Transcript storage: Store in transcripts table, broadcast via Supabase Realtime
  • Context assembly: Sliding 2-minute window + project metadata + recent decisions
  • Trigger detection: Haiku classifies transcript segments (decision/risk/architecture/other)
  • Suggestion generation: Sonnet processes context → Zod-validated output → store + broadcast
  • Live dashboard UI: TranscriptView (auto-scroll), SuggestionCard (real-time), SessionControls

Done when: Full loop: speak → transcript appears → suggestion appears in <5s. Dashboard shows everything live.

Sprint 11-12: Post-Meeting + Polish (4 weeks)

  • Post-meeting pipeline: session.end → full summary (Sonnet) → decision extraction → ADR drafts → embeddings
  • Meeting history: List all past sessions with search, filter by project
  • Session detail: Full transcript view, all suggestions, decisions timeline
  • Feedback UI polish: Accept/reject animations, rating system, rejection reasons
  • Email notifications: Post-meeting summary email via Resend

Done when: Complete meeting lifecycle. Summary emails sent. Decisions extracted. ADR drafts created.

Phase 2 Milestone

Meeting Intelligence fully operational. Users can record meetings, get real-time suggestions, and receive post-meeting summaries with auto-generated ADRs.

PHASE 3: Diagram + Code + ADR — Sprints 13-22 (20 weeks) Intelligence

Sprint 13-14: Diagram Upload + Parsing (4 weeks)

  • File upload to Supabase Storage (max 25MB, validate 7 formats)
  • Format auto-detection from file extension + magic bytes
  • Parser factory + implement: DrawIO (XML), Mermaid (AST), Vision (GPT-4o for images)
  • Graph data model → store in diagrams.graph_data JSONB
  • Basic diagram list page + detail page

Sprint 15-16: Diagram Analysis + Visual Editor (4 weeks)

  • Anti-pattern engine: implement first 20 rules (structural, data_flow, security)
  • Analysis results UI with severity badges and fix suggestions
  • React Flow canvas: custom node types, custom edges, minimap, toolbar
  • Component replacement flow: click node → AI alternatives → select → rewire
  • Version comparison: 8-dimension scoring between diagram versions

Sprint 17-18: Code Intelligence (4 weeks)

  • GitHub OAuth flow + app installation + webhook setup
  • Webhook Edge Function for PR events (opened/synchronize)
  • Tree-sitter WASM integration for AST parsing (JS/TS/Python/Go/Java)
  • Full repo scan: extract services, APIs, data access patterns
  • PR review pipeline: diff → parse → pattern check → drift detect → report

Sprint 19-20: ADR Intelligence (4 weeks)

  • ADR CRUD pages + API endpoints with full detail views
  • Auto-generation from meeting decisions (AI-powered)
  • 12-criteria quality scoring algorithm with weighted scores
  • Conflict detection: pgvector similarity + rule-based cross-reference
  • Health check CRON via pg_cron + Edge Function (weekly)

Sprint 21-22: Best Practices + Cross-Engine (4 weeks)

  • Knowledge base: seed 11 domains with initial patterns (150+ entries)
  • Contextual recommendation engine (context → embed → pgvector → top-K)
  • Semantic search UI for browsing best practices
  • Cross-engine links: meeting suggestions reference diagrams, drift references ADRs
  • Learning loop: feedback flows back to prompt_registry performance scores

Phase 3 Milestone

All 5 intelligence engines operational. Full platform intelligence across meetings, diagrams, code, ADRs, and best practices.

PHASE 4: Enterprise — Sprints 23-28 (12 weeks) B2B

Sprint 23-24: RBAC + Admin Console (4 weeks)

  • Full RBAC enforcement: permission middleware on all API routes
  • Admin console: member management, invite flow (Resend email), role assignment
  • Usage dashboard: AI calls, tokens, cost breakdown per project
  • Audit log viewer with filtering, search, export

Sprint 25-26: Compliance + Billing (4 weeks)

  • Compliance center: SOC2/HIPAA/GDPR checklist dashboards
  • Policy engine: org rules evaluated against all engine outputs
  • Stripe integration: 4 tiers, subscription management, usage-based metering
  • Billing portal: invoices, payment methods, plan changes

Sprint 27-28: Enterprise SSO + Polish (4 weeks)

  • SAML SSO via Supabase Auth (Okta, Azure AD, OneLogin)
  • SCIM 2.0 provisioning Edge Function (/scim/v2/Users, /scim/v2/Groups)
  • Change management workflow UI (propose → review → approve → track)
  • Data export API for compliance (JSON/CSV, all org data)

Phase 4 Milestone

Enterprise-ready. RBAC, compliance, SSO, billing, admin console. Ready for B2B sales.

PHASE 5: Scale & Extensions — Sprints 29-34 (12 weeks) Growth

Sprint 29-30: Chrome Extension (4 weeks)

  • Plasmo framework setup + Chrome Manifest V3
  • GitHub PR content script: inject ArchPilot review panel on PR pages
  • Popup: login, status, quick settings
  • Side panel: full architecture review interface

Sprint 31-32: Performance + Scale (4 weeks)

  • Supabase read replicas for dashboard queries
  • Background job queue optimization (Supabase Queues)
  • Bundle size: target <200KB First Load JS (code splitting, tree shaking)
  • Load testing: artillery.io targeting 1000 concurrent sessions
  • PM2 cluster mode on VPS (use all CPU cores)

Sprint 33-34: Self-Hosted + API Platform (4 weeks)

  • Docker image for self-hosted deployment
  • Helm chart for Kubernetes (enterprise on-prem)
  • Public API with API key auth for third-party integrations
  • VS Code extension MVP (inline suggestions, sidebar panel)
  • Documentation site (Nextra or Mintlify)

Phase 5 Milestone

Scaled, polished, extensible. Browser extension live. Self-hosted option available. API platform for ecosystem. Ready for $200M+ ARR growth.

Summary — Full Build Timeline

| Phase | Sprints | Weeks | Key Deliverables |
|---|---|---|---|
| 1. Foundation | 1-6 | 12 | Auth, DB, project CRUD, AI pipeline, feedback loop |
| 2. Meeting Intelligence | 7-12 | 12 | Electron agent, real-time STT, live suggestions, post-meeting |
| 3. Diagram + Code + ADR | 13-22 | 20 | All 5 engines operational, cross-engine integration |
| 4. Enterprise | 23-28 | 12 | RBAC, compliance, SSO, Stripe billing, admin console |
| 5. Scale & Extensions | 29-34 | 12 | Chrome extension, performance, self-hosted, API platform |

Total: 34 sprints × 2 weeks = 68 weeks ≈ 16 months from empty repo to full enterprise product.