# superapp Documentation (Full) > A thin, secure data layer between your frontend and any database — with authentication, row-level permissions, and type safety built in. --- ## Introduction A thin, secure data layer between your frontend and any database. URL: /docs One library to connect your frontend to any database -- with authentication, row-level permissions, and type safety built in. ## What is superapp? superapp is three packages: | Package | What it does | |---|---| | `@superapp/backend` | Connects to Postgres, MySQL, SQLite, or CSV via native drivers. Handles auth, enforces row-level permissions, runs queries. | | `@superapp/db` | Drizzle ORM client built on [Drizzle Proxy](https://orm.drizzle.team/docs/connect-drizzle-proxy). You write standard Drizzle queries — the data you get back is already filtered, restricted, and validated by the backend's permission engine. | | `@superapp/auth` | Client-side authentication. Uses [better-auth](https://www.better-auth.com) by default, but supports custom adapters. Provides session management, React hooks, and pre-built auth UI components. | ```text ┌───────────────────────────────────────────────────────────┐ │ YOUR FRONTEND (React, Next.js, etc.) │ │ │ │ ┌────────────────────┐ ┌────────────────────────┐ │ │ │ @superapp/db │ │ @superapp/auth │ │ │ │ Drizzle Proxy │ │ Session management │ │ │ │ db.select(...) │ │ useSession() │ │ │ │ db.insert(...) │ │ AuthCard │ │ │ └─────────┬──────────┘ └───────────┬────────────┘ │ │ └──────────┬───────────────┘ │ └────────────────────────┼──────────────────────────────────┘ │ │ SQL + params + JWT (Drizzle Proxy) ▼ ┌───────────────────────────────────────────────────────────┐ │ @superapp/backend │ │ │ │ 1. Authenticate ── verify JWT, resolve user + roles │ │ │ │ │ ▼ │ │ 2. Authorize ──── check permissions for this user │ │ │ • inject WHERE filters (user_id = ?) │ │ │ • restrict columns to allowed set │ │ │ • validate writes against rules │ │ ▼ │ │ 3. 
Execute ────── run permission-filtered SQL directly │
└───────────────────────────┬───────────────────────────────┘
                            │
              ┌─────────────┼─────────────┐
              ▼             ▼             ▼
          Postgres        MySQL     SQLite / CSV
```

**Every request from your frontend goes through this pipeline.** You write normal Drizzle ORM queries — `db.select()`, `db.insert()`, etc. — and the [Drizzle Proxy](https://orm.drizzle.team/docs/connect-drizzle-proxy) driver sends the parameterized SQL to the backend. The server verifies who the user is, applies permission filters to the SQL, and automatically scopes the data so that **each user can only access the rows and columns their permissions allow**.

## How it works

1. **Define your server** -- connect databases, configure auth, declare permissions
2. **Generate types** -- the CLI introspects your schema and outputs Drizzle table definitions
3. **Query from the frontend** -- use standard Drizzle ORM syntax, permissions are enforced automatically

## Full Example — E-commerce Backend

A complete e-commerce backend with products, orders, and customers — four roles, each with different access levels. This is all the backend code you need.

**Server** (`server.ts`):

```typescript
const engine = createEngine({
  connections: {
    main: process.env.PG_URL!,
  },
  auth: betterAuthProvider({ secret: process.env.AUTH_SECRET! }),
  permissions: {
    // ─── Products ───────────────────────────────────────────
    //
    // Everyone can browse. Only store managers can create/edit.
browse_products: { table: 'main.products', roles: ['customer', 'support_agent', 'store_manager', 'admin'], select: { columns: ['id', 'name', 'description', 'price', 'category', 'image_url', 'in_stock'], where: { in_stock: { $eq: true } }, }, }, manage_products: { table: 'main.products', roles: ['store_manager', 'admin'], select: { columns: ['id', 'name', 'description', 'price', 'category', 'image_url', 'in_stock', 'cost', 'sku', 'created_at'], }, insert: { columns: ['name', 'description', 'price', 'category', 'image_url', 'cost', 'sku'], validate: { price: { $gt: 0 }, cost: { $gte: 0 }, }, overwrite: { in_stock: true, created_at: '$now' }, }, update: { columns: ['name', 'description', 'price', 'category', 'image_url', 'in_stock', 'cost'], validate: { price: { $gt: 0 } }, overwrite: { updated_at: '$now' }, }, delete: { where: { in_stock: { $eq: false } }, }, }, // ─── Orders ───────────────────────────────────────────── // // Customers see own orders. Support can view and update status. // Store managers see everything. 
customer_orders: { table: 'main.orders', roles: ['customer'], select: { columns: ['id', 'total', 'status', 'shipping_address', 'created_at'], where: { customer_id: { $eq: '$user.id' } }, }, insert: { columns: ['shipping_address'], validate: { shipping_address: { $ne: '' }, }, overwrite: { customer_id: '$user.id', status: 'pending', total: 0, created_at: '$now', }, }, update: { columns: ['shipping_address'], where: { customer_id: { $eq: '$user.id' }, status: { $eq: 'pending' }, }, }, delete: { where: { customer_id: { $eq: '$user.id' }, status: { $eq: 'pending' }, }, }, }, support_orders: { table: 'main.orders', roles: ['support_agent'], select: { columns: ['id', 'customer_id', 'total', 'status', 'shipping_address', 'created_at'], }, update: { columns: ['status'], validate: { status: { $in: ['processing', 'shipped', 'delivered', 'refunded'] }, }, overwrite: { updated_at: '$now' }, }, }, manage_orders: { table: 'main.orders', roles: ['store_manager', 'admin'], select: { columns: ['id', 'customer_id', 'total', 'status', 'shipping_address', 'notes', 'created_at', 'updated_at'], }, update: { columns: ['status', 'notes', 'total'], validate: { status: { $in: ['pending', 'processing', 'shipped', 'delivered', 'cancelled', 'refunded'] }, total: { $gte: 0 }, }, overwrite: { updated_at: '$now' }, }, delete: { where: { status: { $in: ['cancelled', 'refunded'] } }, }, }, // ─── Order Items ──────────────────────────────────────── customer_order_items: { table: 'main.order_items', roles: ['customer'], select: { columns: ['id', 'order_id', 'product_id', 'quantity', 'unit_price'], where: { order_id: { $in: '$user.order_ids' } }, }, insert: { columns: ['order_id', 'product_id', 'quantity'], validate: { quantity: { $gt: 0, $lte: 99 }, }, }, delete: { where: { order_id: { $in: '$user.order_ids' } }, }, }, view_order_items: { table: 'main.order_items', roles: ['support_agent', 'store_manager', 'admin'], select: { columns: ['id', 'order_id', 'product_id', 'quantity', 'unit_price'], }, }, 
// ─── Customers ────────────────────────────────────────── customer_profile: { table: 'main.customers', roles: ['customer'], select: { columns: ['id', 'name', 'email', 'phone', 'created_at'], where: { id: { $eq: '$user.id' } }, }, update: { columns: ['name', 'phone'], where: { id: { $eq: '$user.id' } }, }, }, view_customers: { table: 'main.customers', roles: ['support_agent'], select: { columns: ['id', 'name', 'email', 'phone', 'created_at'], }, }, manage_customers: { table: 'main.customers', roles: ['store_manager', 'admin'], select: { columns: ['id', 'name', 'email', 'phone', 'lifetime_value', 'notes', 'created_at'], }, update: { columns: ['notes'], overwrite: { updated_at: '$now' }, }, }, }, }) const app = new Hono() app.route('/', createHonoMiddleware(engine)) serve({ fetch: app.fetch, port: 3001 }) ``` **What each role can do:** | | Products | Orders | Order Items | Customers | |---|---|---|---|---| | **customer** | Browse in-stock | Own orders only — create, edit pending, cancel pending | Own items — add, remove | Own profile — view, edit name/phone | | **support_agent** | Browse in-stock | View all — update status (processing → shipped → delivered → refunded) | View all | View all | | **store_manager** | Full CRUD + see cost/SKU | Full access + notes, delete cancelled/refunded | View all | View all + lifetime_value, edit notes | | **admin** | Same as store_manager | Same as store_manager | View all | Same as store_manager | **Client** (`lib/superapp.ts`): ```typescript export const authClient = createAuth('http://localhost:3001') export function createDb(token: string) { return drizzle({ connection: 'http://localhost:3001', token, schema, }) } ``` **Query** (anywhere in your app): ```typescript // Customer sees only their own orders — backend injects WHERE customer_id = ? 
const orders = await db.select() .from(schema.orders) .orderBy(desc(schema.orders.createdAt)) .limit(50) // Support agent runs the same query — sees ALL orders (no WHERE injected) // Store manager runs it — sees all orders + notes column // Create an order — backend forces customer_id, status, and timestamps await db.insert(schema.orders) .values({ shipping_address: '123 Main St' }) // Support updates status — backend validates the transition is allowed await db.update(schema.orders) .set({ status: 'shipped' }) .where(eq(schema.orders.id, orderId)) // Customer tries to set status to 'shipped' — rejected with 403 // Customer tries to delete a shipped order — rejected with 403 // Support tries to delete any order — rejected (no delete permission) ``` **Same query, different results per role.** A customer calling `db.select().from(orders)` gets their own orders. A support agent gets all orders. A store manager gets all orders plus internal fields like `notes`. The backend decides — your frontend code stays identical. Because the authorization layer lives entirely in the backend middleware, you can safely use Drizzle ORM directly from the client side — the data returned is always scoped to the user's permissions. That said, we still recommend keeping Drizzle queries in your backend (e.g. Next.js server actions, API routes) for better control over caching, error handling, and request batching. ## What's Next - [Installation](/docs/getting-started/installation) -- Install the packages and set up your environment. - [Quick Start](/docs/getting-started/quick-start) -- Get running in 5 minutes. --- ## Installation Install the server engine and client SDK. URL: /docs/getting-started/installation Two packages, two `npm install` commands, and you're ready. 
## Prerequisites - **Node.js 18+** (LTS recommended) - **A database** — Postgres, MySQL, SQLite, or a directory of CSV files ## Install the Server Engine In your backend project: ```bash npm install @superapp/backend ``` This gives you the query engine, permission system, auth integration, admin UI, and all database adapters. ## Install the Client SDK In your frontend project: ```bash npm install @superapp/db ``` This gives you the type-safe query builder, auth client, and pre-built auth UI components. Zero server dependencies. If your frontend and backend are in the same monorepo, install both. If they live in separate repos, install each in their respective project. ## Environment Variables Create a `.env` file in your backend project with at minimum: ```bash # Required AUTH_SECRET=your-auth-secret-min-32-chars SUPERAPP_MASTER_KEY=your-256-bit-master-key-hex # At least one database connection PG_URL=postgres://user:pass@localhost:5432/mydb ``` | Variable | Purpose | |----------|---------| | `AUTH_SECRET` | Signs and verifies JWT tokens. Must be at least 32 characters. | | `SUPERAPP_MASTER_KEY` | Encrypts connection URLs at rest (AES-256-GCM). | | `PG_URL` | Postgres connection string. Replace with `MYSQL_URL` or a SQLite path depending on your database. | Never commit `.env` files to version control. Add `.env` to your `.gitignore`. ## What's Next - [Quick Start](/docs/getting-started/quick-start) — Set up your server and client in 5 minutes. --- ## Quick Start Get running in 5 minutes. URL: /docs/getting-started/quick-start From zero to querying your database with auth and permissions in five steps. ## The Fastest Way Scaffold a complete project with one command: ```bash npx @superapp/backend create-app my-app ``` This creates a monorepo with a configured backend, a Next.js frontend, environment files, and example permissions. Run `npm run dev` and you're live. If you prefer to set things up manually, follow the steps below. 
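The manual steps below assume the environment variables from [Installation](/docs/getting-started/installation) are set. A small pre-flight guard can fail fast if they're missing (`missingEnv` below is our own sketch, not a superapp export):

```typescript
// env-check.ts: an optional pre-flight guard (not part of superapp).
// Checks the variables documented on the Installation page.
const REQUIRED_VARS = ['AUTH_SECRET', 'SUPERAPP_MASTER_KEY', 'PG_URL'] as const

export function missingEnv(env: Record<string, string | undefined>): string[] {
  return REQUIRED_VARS.filter((name) => {
    const value = env[name]
    if (!value) return true
    // AUTH_SECRET must be at least 32 characters
    return name === 'AUTH_SECRET' && value.length < 32
  })
}
```

Call it with `process.env` at the top of `server.ts` and throw if the returned array is non-empty, so a misconfigured deployment fails at startup rather than on the first request.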
## Manual Setup ### Step 1: Create the Engine Config Create `server.ts` in your backend project: ```typescript const auth = betterAuthProvider({ secret: process.env.AUTH_SECRET!, userTable: { table: 'main.users', matchOn: { column: 'id', jwtField: 'id' }, columns: ['id', 'email', 'name'], }, }) export const engine = createEngine({ superapp_db: './superapp.db', connections: { main: process.env.PG_URL!, }, auth, masterKey: process.env.SUPERAPP_MASTER_KEY!, permissions: { view_own_orders: { table: 'main.orders', roles: ['viewer'], select: { columns: ['id', 'amount', 'status', 'customer_id', 'created_at'], where: { organization: { members: { user_id: { $eq: '$user.id' } }, }, }, }, }, }, }) ``` ### Step 2: Start the Server Add the HTTP adapter below your engine config in the same file, or import the engine into a separate entry point: ```typescript const app = new Hono() app.route('/', createHonoMiddleware(engine)) serve({ fetch: app.fetch, port: 3001 }, (info) => { console.log(`@superapp/backend running on http://localhost:${info.port}`) }) ``` Not using Hono? superapp ships adapters for [Next.js](/docs/backend/adapters), [Express](/docs/backend/adapters), and a [generic handler](/docs/backend/adapters) for Cloudflare Workers, Deno, and Bun. ### Step 3: Set Up the Client SDK In your frontend project, create `lib/superapp.ts`: ```typescript const SUPERAPP_URL = process.env.NEXT_PUBLIC_SUPERAPP_URL! export const authClient = createAuth(SUPERAPP_URL) export function createDb(token: string) { return drizzle({ connection: SUPERAPP_URL, token, schema, }) } ``` Generate your `SuperAppSchema` types by running `npx @superapp/backend generate` against your running server. This introspects your database and outputs a TypeScript schema file. 
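The generated file contains standard Drizzle table definitions. Its exact contents depend on your database, but for the `orders` table used in this guide it might look roughly like this (illustrative only — the column types and file name are assumptions, not guaranteed output):

```typescript
// superapp-schema.ts (generated): illustrative output only.
// The real file is produced by `npx @superapp/backend generate` and will differ.
import { pgTable, serial, text, integer, timestamp } from 'drizzle-orm/pg-core'

export const orders = pgTable('orders', {
  id: serial('id').primaryKey(),
  amount: integer('amount').notNull(),
  status: text('status').notNull(),
  customerId: text('customer_id').notNull(),
  createdAt: timestamp('created_at').defaultNow(),
})

export const schema = { orders }
```

These are the `schema.orders` columns referenced by the query in Step 5 below.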
### Step 4: Add Auth

Wrap your app with `AuthProvider` and drop in `AuthCard` for sign-in/sign-up:

```tsx
// app/layout.tsx
export default function RootLayout({ children }: { children: React.ReactNode }) {
  const router = useRouter()
  // Refresh server data whenever the session changes (sign-in/sign-out)
  return (
    <AuthProvider onSessionChange={() => router.refresh()}>
      {children}
    </AuthProvider>
  )
}
```

```tsx
// app/auth/[[...slug]]/page.tsx
export default function AuthPage() {
  return <AuthCard />
}
```

This gives you `/auth/sign-in`, `/auth/sign-up`, and `/auth/forgot-password` — fully functional, no form code.

### Step 5: Query Data

Create a hook to get a typed database client, then query:

```typescript
// hooks/use-db.ts
export function useDb() {
  const { data: session } = useSession()
  return useMemo(
    () => session?.token ? createDb(session.token) : null,
    [session?.token],
  )
}
```

```tsx
// app/dashboard/page.tsx
'use client'

export default function DashboardPage() {
  const db = useDb()
  const [orders, setOrders] = useState<any[]>([])

  useEffect(() => {
    if (!db) return
    db.select()
      .from(schema.orders)
      .where(eq(schema.orders.status, 'active'))
      .orderBy(desc(schema.orders.createdAt))
      .limit(50)
      .then(setOrders)
  }, [db])

  if (!db) return <p>Please sign in.</p>

  return (
    <ul>
      {orders.map((o) => (
        <li key={o.id}>
          Order #{o.id} - ${o.amount} - {o.status}
        </li>
      ))}
    </ul>
  )
}
```

## What You Just Built

```text
Browser                       Backend                       Database
   |                             |                             |
   |-- POST /auth/sign-in ------>|                             |
   |<-------- JWT token ---------|                             |
   |                             |                             |
   |-- POST /data -------------->|                             |
   |   (JWT + Drizzle query)     |                             |
   |                             |-- Verify JWT                |
   |                             |-- Resolve session ($user)   |
   |                             |-- Check permissions         |
   |                             |-- Inject WHERE clauses      |
   |                             |                             |
   |                             |-- Execute SQL ------------->|
   |                             |<-------- Results -----------|
   |                             |                             |
   |<------ Typed response ------|                             |
```

## What's Next

- [Backend Configuration](/docs/backend) — Full reference for `createEngine` options, connections, and security settings.
- [Client Queries](/docs/client/queries) — All query methods: `db.select()`, `db.insert()`, `db.update()`, `db.delete()`.
- [Permissions](/docs/backend/permissions/where) — Where clauses, validation, defaults, overwrite, and middleware. --- ## Overview What @superapp/backend does, how it works, and what routes it exposes. URL: /docs/backend/overview `@superapp/backend` is a data access layer that turns your databases into a secure, authenticated API. Point it at Postgres, MySQL, SQLite, or CSV files and get instant CRUD with row-level security, audit logging, and an admin UI. ```typescript const engine = createEngine({ connections: { main: process.env.PG_URL!, }, }) const app = new Hono() app.route('/', createHonoMiddleware(engine)) serve({ fetch: app.fetch, port: 3001 }) ``` ## Routes | Route | Method | Description | |---|---|---| | `/data` | `POST` | Query, insert, update, delete records | | `/schema` | `GET` | Introspect available tables and columns | | `/auth/*` | `*` | Authentication endpoints (login, register, session) | | `/admin` | `GET` | Admin UI dashboard | | `/admin/api/*` | `*` | Admin API for managing connections and permissions | ## Request Pipeline Every `/data` request passes through a 10-step pipeline: ```text Request → Rate Limit → Validate → JWT → Session → Roles → Permissions → Query Build → Execute → Audit → Response ``` ## Key Concepts - **Databases** -- Named database sources (Postgres, MySQL, SQLite, CSV) connected via native drivers - **Auth Providers** -- Pluggable authentication (better-auth or custom); client-side auth lives in `@superapp/auth` - **Permissions** -- Row-level security with operation blocks (`select`, `insert`, `update`, `delete`), each with inline `roles` - **Adapters** -- Framework bindings for Hono, Next.js, Express, and generic handlers ## Next Steps - [Configuration](/docs/backend/configuration) -- Configure your engine - [Databases](/docs/backend/databases) -- Add database sources - [Authentication](/docs/backend/auth) -- Set up auth - [Permissions](/docs/backend/permissions/where) -- Define access rules --- ## Configuration Configure 
and initialize the superapp backend engine with connections, auth, permissions, and more. URL: /docs/backend/configuration `createEngine` is the single entry point for configuring your backend. It returns an engine instance you pass to any adapter. ```typescript const engine = createEngine({ connections: { main: 'postgres://localhost:5432/mydb', }, }) ``` ## Full Configuration ```typescript const engine = createEngine({ mode: 'programmatic', // config in code (or 'interactive') superapp_db: process.env.TURSO_URL ?? './superapp.db', // internal tables storage masterKey: process.env.SUPERAPP_MASTER_KEY!, // admin API key (use env var) connections: { main: process.env.PG_URL!, // namespace: main.* warehouse: process.env.MYSQL_URL!, // namespace: warehouse.* }, auth: betterAuthProvider({ /* ... */ }), // omit for open mode jwt: { algorithms: ['RS256', 'ES256'], // allowed signing algorithms issuer: 'https://auth.myapp.com', // reject other issuers }, limits: { maxRows: 10_000, // max rows a query can return maxRelationDepth: 3, // orders → items → product = depth 3 maxFilterNesting: 5, // nested $and/$or/$not levels rateLimitPerUser: 200, // req/min per user rateLimitPerIP: 500, // req/min per IP }, audit: { enabled: true, // turn logging on/off logQuery: true, // log SQL statements logParams: true, // log query params logDuration: true, // log query time (ms) retention: '90d', // auto-delete after 90 days }, security: { cors: { origin: ['https://myapp.com'], // allowed origins credentials: true, // allow cookies }, }, pgWire: { port: 5433, // TCP port for wire protocol auth: { strategy: ['jwt', 'apiKey'] }, // auth strategies }, permissions: { /* ... */ }, }) ``` ### Connections Each connection value is a URL string or an object. 
The engine infers the database type from the URL protocol: | URL Pattern | Detected Type | |---|---| | `postgres://` or `postgresql://` | PostgreSQL | | `mysql://` | MySQL | | File path ending in `.db` or `.sqlite` | SQLite | | `{ directory: string }` | CSV | ```typescript connections: { main: 'postgres://localhost:5432/mydb', warehouse: 'mysql://user:pass@host:3306/warehouse', local: './data/app.db', csvData: { directory: './csv-files' }, } ``` ### Permissions Permissions use a Drizzle-like syntax with operation blocks. Roles are declared inline on each permission. ```typescript permissions: { read_orders: { table: 'main.orders', roles: ['admin', 'manager'], select: { where: { status: { $ne: 'deleted' } }, columns: ['id', 'customer_id', 'total', 'status', 'created_at'], limit: 100, }, }, manage_orders: { table: 'main.orders', roles: ['admin'], insert: { columns: ['customer_id', 'total', 'status'], validate: { total: { $gt: 0 } }, default: { status: 'pending' }, }, update: { columns: ['status', 'total'], where: { status: { $ne: 'completed' } }, validate: { total: { $gt: 0 } }, overwrite: { updated_at: '$now' }, }, delete: { where: { status: { $eq: 'draft' } }, }, }, } ``` ## Options Reference | Option | Type | Default | Description | |---|---|---|---| | `mode` | `'programmatic' \| 'interactive'` | `'programmatic'` | Engine operation mode | | `superapp_db` | `string` | `'./superapp.db'` | Path to the database where superapp stores its own internal tables. Local file path for SQLite or a Turso URL. | | `connections` | `Record` | `{}` | Named database connections. Type is inferred from URL protocol. 
| | `auth` | `AuthProvider` | — | Authentication provider | | `limits` | `LimitsConfig` | — | Query and rate limit constraints | | `audit` | `AuditConfig` | — | Audit logging configuration | | `jwt` | `JWTConfig` | — | JWT algorithm and issuer settings | | `security` | `SecurityConfig` | — | CORS and security headers | | `masterKey` | `string` | — | Master API key for admin access | | `permissions` | `Record` | `{}` | Permissions with inline `roles` and Drizzle-like operation blocks (`select`, `insert`, `update`, `delete`) | | `pgWire` | `PgWireConfig \| boolean` | — | PostgreSQL wire protocol config. See [Wire Protocol](/docs/backend/adapters/pg-wire). | ## Minimal vs. Full For development, only `connections` is required. Add `auth` and `permissions` when you need access control: ```typescript // Development — open access const engine = createEngine({ connections: { main: 'postgres://localhost:5432/mydb', }, }) // Production — full access control const engine = createEngine({ connections: { main: process.env.PG_URL! }, auth: betterAuthProvider({ /* ... */ }), permissions: { /* ... */ }, masterKey: process.env.SUPERAPP_MASTER_KEY!, audit: { enabled: true }, }) ``` --- ## Overview Visual dashboard to configure connections, permissions, actions, limits, security, and audit logging — all from the browser. URL: /docs/backend/admin-ui The Admin UI is a built-in dashboard that ships with the engine when running in [interactive mode](/docs/advanced/engine-modes). It gives non-technical admins a visual interface to manage every aspect of the backend — no code changes, no redeployments. The Admin UI is fully editable only when the engine runs in `interactive` mode. In `programmatic` mode, the dashboard is read-only — useful for inspecting the current configuration without risk of accidental changes. 
```typescript const engine = createEngine({ mode: 'interactive', superapp_db: process.env.TURSO_URL, masterKey: process.env.SUPERAPP_MASTER_KEY!, auth: betterAuthProvider({ /* ... */ }), }) ``` Access the dashboard at `http://localhost:3001/admin` using your master key. ## What you can manage | Section | What it covers | |---|---| | [Connections](/docs/backend/admin-ui/connections) | Add, remove, and monitor database connections | | [Permissions & Roles](/docs/backend/admin-ui/permissions-roles) | Visual permission editor and role access matrix | | [Actions](/docs/backend/admin-ui/actions) | Browse registered actions, view schemas, test from a playground | | [Limits & Security](/docs/backend/admin-ui/limits-security) | Rate limits, query constraints, JWT, CORS, and defense layers | | [Audit Log](/docs/backend/admin-ui/audit-log) | Filter and inspect every query that passes through the engine | ## Programmatic vs Interactive | | Programmatic (default) | Interactive | |---|---|---| | **Permissions stored** | In code (`createEngine` config) | In metadata database | | **Admin UI** | Read-only dashboard | Full create/edit/delete | | **Changes take effect** | After redeploy | Immediately | | **Version controlled** | Yes (code review + PR) | Audit-logged | | **Best for** | Engineering teams, CI/CD | Non-technical admins, rapid iteration | A common pattern: run interactive in staging to prototype rules, then export and deploy via CI/CD to production in programmatic mode. See [Engine Modes](/docs/advanced/engine-modes) for the full comparison. --- ## Connections Manage database connections — add, remove, and monitor connection health from the admin dashboard. URL: /docs/backend/admin-ui/connections Manage all named database connections from a single panel. The engine auto-detects the database type from the URL protocol. Add, remove, and monitor connection health in real time. Each connection creates a namespace — `main.*`, `warehouse.*`, etc. 
— that permissions and queries reference.

## URL detection

The URL pattern determines which database adapter is used:

| URL Pattern | Detected Type |
|---|---|
| `postgres://` or `postgresql://` | PostgreSQL |
| `mysql://` | MySQL |
| File path ending in `.db` or `.sqlite` | SQLite |
| `libsql://` | Turso |
| `{ directory: string }` | CSV |

## Connection lifecycle

1. **Add** — provide a name and URL. The engine detects the type and connects.
2. **Health check** — the dashboard polls each connection and shows live status.
3. **Schema sync** — on connect, the engine introspects the schema and exposes tables under the connection namespace.
4. **Remove** — disconnect and remove the namespace. Any permissions referencing this connection's tables are invalidated.

## Multiple connections

You can connect to as many databases as you need. Each one gets its own namespace:

```typescript
connections: {
  main: 'postgres://localhost:5432/mydb',      // main.orders, main.users
  warehouse: 'mysql://user:pass@host:3306/wh', // warehouse.events, warehouse.reports
  local: './data/app.db',                      // local.settings
}
```

See [Configuration](/docs/backend/configuration) for the full connection setup reference.

---

## Permissions & Roles

Visual permission editor and role access matrix — manage who can access what, down to the row and column level.

URL: /docs/backend/admin-ui/permissions-roles

## Roles overview

See all roles at a glance — what permissions and actions each role has access to, and how many users belong to each. Toggle between a card view and a full access matrix.

Roles are declared inline on each permission and action — there's no separate "roles" config. The dashboard aggregates them into a unified view.

Click any role to see its full access profile, or switch to the **Matrix** view to compare roles side-by-side.

---

## Permission editor

The permission editor lets you create and manage access rules for every table.
Each permission binds a table to one or more roles and defines what operations (read, create, update, delete) are allowed — with optional row filters, column restrictions, validation rules, defaults, and overwrites. Click the operations to toggle them, click **Edit** on roles to change access, and explore the detail panels to see how `where`, `validate`, `default`, and `overwrite` rules compose together. ## What each operation controls | Operation | What it does | Key options | |---|---|---| | **Read** | Row-level filtering on SELECT queries | `where`, `columns`, `limit` | | **Create** | Controls which columns can be inserted | `columns`, `validate`, `default` | | **Update** | Restricts which rows and columns can change | `columns`, `where`, `validate`, `overwrite` | | **Delete** | Restricts which rows can be deleted | `where` | ## Permission building blocks ### Where clauses Filter rows by column values. Applied to `select`, `update`, and `delete` operations: ```typescript where: { status: { $ne: 'deleted' }, organization_id: { $in: '$user.org_ids' } } ``` ### Validate rules Check request body values before the query executes. Applied to `insert` and `update`: ```typescript validate: { total: { $gt: 0, $lte: 100_000 }, status: { $in: ['draft', 'active'] } } ``` ### Defaults Fill missing values when the client doesn't provide them. The client can override: ```typescript default: { status: 'draft', priority: 3 } ``` ### Overwrites Always applied regardless of client input. The client cannot override: ```typescript overwrite: { updated_at: '$now', updated_by: '$user.id' } ``` See [Permissions](/docs/backend/permissions/where) for the full documentation on writing permission rules. --- ## Actions Browse registered server-side actions, view typed schemas, and test them from the built-in playground. URL: /docs/backend/admin-ui/actions Actions are named server-side functions with typed input/output. 
The action registry shows all registered actions, their Zod schemas, allowed roles, and usage statistics. Use the built-in playground to test actions directly from the dashboard. Each action declares its own `roles` array — no need to reference a global roles config. Click **Test** on any action to open the playground and send a request through the full pipeline: auth, role check, input validation, handler execution, and output validation. ## What the registry shows | Field | Description | |---|---| | **Name** | The action identifier clients use to call it | | **Roles** | Which roles are allowed to execute this action | | **Input Schema** | Zod-validated input fields with types and required markers | | **Output Schema** | The typed response structure | | **Total Calls** | Lifetime invocation count | | **Avg Duration** | Average execution time in milliseconds | | **Endpoint** | The HTTP route (`POST /actions/{name}`) | ## Playground The playground lets you test any action without writing client code: 1. Select an action from the list 2. Click **Test** to open the playground 3. Fill in the input fields 4. Click **Execute** to send a real request 5. See the typed response (or error) inline This is useful for validating new actions during development and debugging production issues. ## Action lifecycle ```text POST /actions/{name} + Bearer JWT + { input } │ ▼ 1. Auth — JWT verification, session resolution │ ▼ 2. Role check — is the user's role in the action's roles array? │ ▼ 3. Input validation — parse input against Zod schema │ ▼ 4. Execute — handler(ctx, validatedInput) │ ▼ 5. Response — return value sent as JSON ``` See [Actions](/docs/backend/actions) for the full guide on defining actions in code. --- ## Limits & Security Configure rate limits, query constraints, JWT validation, CORS policies, and audit logging. URL: /docs/backend/admin-ui/limits-security ## Rate limits & query constraints Configure limits that protect the engine from abuse. 
Use presets for common scenarios or fine-tune each value individually. Every slider shows the current security posture — strict, balanced, or permissive. | Limit | What it prevents | |---|---| | **Max Rows** | Queries returning excessive data | | **Relation Depth** | Deeply nested joins that slow the database | | **Filter Nesting** | Complex `$and/$or/$not` trees | | **Filter Conditions** | Queries with too many conditions | | **Query Timeout** | Runaway queries that block connections | | **Rate Limit / User** | Authenticated users hammering the API | | **Rate Limit / IP** | Unauthenticated or bot traffic floods | --- ## Security settings Configure JWT validation rules, CORS policies, and audit logging from a unified security panel. Every change is validated and takes effect immediately in interactive mode. ## Defense layers The engine applies security in order — each layer must pass before the next runs: 1. **Rate limiting** — per-user and per-IP throttle 2. **JWT validation** — algorithm allowlist, issuer, audience, expiry 3. **Permission check** — table, operation, columns, row-level filters 4. **Query isolation** — per-request connection, sandboxed execution 5. **Audit log** — every query recorded with user, table, operation, duration ### JWT configuration | Setting | What it does | |---|---| | **Algorithms** | Allowlist of signing algorithms. Use RS/ES for production — HMAC (HS*) uses symmetric keys. 
| | **Issuer** | Reject tokens not issued by your auth server | | **Audience** | Reject tokens not intended for your API | | **Clock Skew** | Tolerance for clock drift between servers (in seconds) | ### CORS configuration | Setting | What it does | |---|---| | **Allowed Origins** | Which domains can make cross-origin requests to the API | | **Credentials** | Whether to allow cookies and auth headers in cross-origin requests | ### Audit logging | Setting | What it does | |---|---| | **Enabled** | Master toggle for audit logging | | **Log Query** | Record the generated SQL statement | | **Log Params** | Record query bind parameters (may contain PII) | | **Log Duration** | Record query execution time in milliseconds | | **Retention** | How long to keep logs before auto-deletion | See [Security](/docs/advanced/security) for the full defense-in-depth architecture. --- ## Audit Log Filter and inspect every query that passes through the engine — by operation, status, user, and more. URL: /docs/backend/admin-ui/audit-log The audit log records every query that passes through the engine. Filter by operation, status, or user to investigate specific activity. Expand any entry to see the full SQL query, parameters, and execution context. Click any row to expand it and see the full query, user details, IP address, and role. Use the filters at the top to narrow down by operation type, success/error status, or user email. 
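For reference, a single expanded entry carries roughly this shape. The values are illustrative and the JSON serialization is an assumption; the field set matches the reference below.

```json
{
  "timestamp": "2024-01-15T10:32:07Z",
  "user_id": "usr_123",
  "table": "main.orders",
  "operation": "delete",
  "query": "DELETE FROM \"orders\" WHERE \"id\" = $1 AND \"customer_id\" = $2",
  "params": [87, 42],
  "duration_ms": 12,
  "ip_address": "203.0.113.7",
  "role": "manager",
  "status": "success"
}
```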
## What gets logged Every request through the engine creates an audit entry with these fields: | Field | Description | |---|---| | `timestamp` | When the request was made | | `user_id` | Authenticated user ID | | `table` | Target table (e.g., `main.orders`) | | `operation` | `select`, `insert`, `update`, or `delete` | | `query` | Generated SQL (if `logQuery` enabled) | | `params` | Query parameters (if `logParams` enabled) | | `duration_ms` | Execution time (if `logDuration` enabled) | | `ip_address` | Client IP address | | `role` | User's role at time of request | | `status` | `success` or `error` | ## Querying the audit log The audit log is queryable via the admin API using your master key (quote the URLs so the shell does not interpret `?` and `&`): ```bash # Recent entries curl -H "Authorization: Bearer $MASTER_KEY" \ "https://myapp.com/admin/api/audit?limit=100" # Filter by user curl -H "Authorization: Bearer $MASTER_KEY" \ "https://myapp.com/admin/api/audit?user_id=usr_123" # Filter by table and operation curl -H "Authorization: Bearer $MASTER_KEY" \ "https://myapp.com/admin/api/audit?table=main.orders&operation=delete" ``` ## Retention Audit logs are auto-deleted based on the retention period configured in [Limits & Security](/docs/backend/admin-ui/limits-security). Common settings: | Retention | Use case | |---|---| | `7d` | Development — quick debugging | | `30d` | Staging — catch issues before production | | `90d` | Production — standard compliance | | `365d` / `1y` | Regulated industries — long-term audit trail | If `logParams` is enabled, query parameters are stored in plain text. This may include user-submitted data. Consider disabling `logParams` in production or reducing the retention period. See [Audit Logging](/docs/advanced/audit-logging) for the full configuration reference. --- ## Databases Connect to any database with Drizzle-compatible adapters. URL: /docs/backend/databases Every database in `@superapp/backend` is accessed through a **Drizzle-compatible adapter**.
The adapter architecture mirrors [Drizzle ORM's driver pattern](https://github.com/drizzle-team/drizzle-orm/tree/main/drizzle-orm/src) -- a core dialect layer handles SQL generation, and a thin driver adapter handles the connection. ```typescript const engine = createEngine({ connections: { main: process.env.PG_URL!, }, }) ``` ## How Adapters Work Adapters follow a **two-layer architecture**, identical to how Drizzle ORM structures its database support: 1. **Core dialect** -- Defines SQL generation, column types, query builders, and dialect-specific features (`pg-core`, `mysql-core`, `sqlite-core`) 2. **Driver adapter** -- A thin wrapper around a specific JavaScript database driver that handles connection management, session handling, and migrations ``` ┌─────────────────────────────────────┐ │ Your Application │ ├─────────────────────────────────────┤ │ @superapp/backend engine │ ├──────────┬──────────┬───────────────┤ │ pg-core │mysql-core│ sqlite-core │ ← Core dialects ├──────────┼──────────┼───────────────┤ │ pg │ mysql2 │better-sqlite3 │ ← Driver adapters └──────────┴──────────┴───────────────┘ ``` ## Supported Databases `@superapp/backend` supports every database that Drizzle ORM supports. Below is the full matrix organized by dialect. 
### PostgreSQL | Adapter | Driver | Best for | |---|---|---| | `node-postgres` | `pg` | Standard Node.js apps | | `postgres-js` | `postgres` | High-performance Node.js | | `neon-serverless` | `@neondatabase/serverless` | Neon (WebSocket) | | `neon-http` | `@neondatabase/serverless` | Neon (HTTP, edge/serverless) | | `vercel-postgres` | `@vercel/postgres` | Vercel managed Postgres | | `supabase` | `@supabase/supabase-js` | Supabase Postgres | | `xata-http` | Xata HTTP API | Xata serverless | | `pglite` | `@electric-sql/pglite` | In-browser/WASM Postgres | | `aws-data-api` | AWS RDS Data API | Aurora Serverless | | `bun-sql` | `Bun.sql` | Bun runtime | ### MySQL | Adapter | Driver | Best for | |---|---|---| | `mysql2` | `mysql2` | Standard Node.js apps | | `planetscale-serverless` | `@planetscale/database` | PlanetScale (serverless HTTP) | | `tidb-serverless` | TiDB Cloud | TiDB Cloud serverless | ### SQLite | Adapter | Driver | Best for | |---|---|---| | `better-sqlite3` | `better-sqlite3` | Node.js (synchronous) | | `libsql` | `@libsql/client` | Turso / LibSQL | | `d1` | Cloudflare D1 | Cloudflare Workers | | `durable-sqlite` | Cloudflare Durable Objects | Durable Object storage | | `bun-sqlite` | `bun:sqlite` | Bun runtime | | `expo-sqlite` | `expo-sqlite` | React Native (Expo) | | `op-sqlite` | `op-sqlite` | React Native (high-performance) | | `sql-js` | `sql.js` | In-browser WASM SQLite | ### SingleStore | Adapter | Driver | Best for | |---|---|---| | `singlestore` | SingleStore driver | Direct connection | ### Gel (formerly EdgeDB) | Adapter | Driver | Best for | |---|---|---| | `gel` | Gel client | Gel database | ## Built-in Providers The engine ships with three built-in providers that require zero configuration beyond a connection string: | Provider | Connection format | Read | Write | Driver | |---|---|---|---|---| | PostgreSQL | `'postgres://...'` | Yes | Yes | `pg` | | MySQL | `'mysql://...'` | Yes | Yes | `mysql2` | | SQLite | `'./path/to/file.db'` | Yes 
| Yes | `better-sqlite3` | | CSV | `{ directory: './data/' }` | Yes | No | built-in | For any other database from the supported list above, register a custom provider using the Drizzle adapter pattern. ## Provider Interface Every provider -- built-in or custom -- exports a single object that satisfies the `IntegrationProvider` interface: ```typescript interface IntegrationProvider { type: string connect: (config: ConnectionConfig) => Promise<DriverConnection> disconnect: (connection: DriverConnection) => Promise<void> introspect: (connection: DriverConnection) => Promise<unknown> capabilities: { read: boolean write: boolean transactions: boolean } } ``` The `connect` method initializes the native driver connection pool. The engine calls it once on startup (or when a connection is added via the admin UI) and maintains the pool for the lifetime of the process. ## Multiple Connections Pass all connections in the `connections` object. The engine infers the type from the protocol or format: ```typescript const engine = createEngine({ connections: { main: process.env.PG_URL!, warehouse: process.env.MYSQL_URL!, reports: { directory: './data/reports' }, }, }) ``` ## Next Steps - [PostgreSQL](/docs/backend/databases/postgres) -- Full CRUD with Postgres - [MySQL](/docs/backend/databases/mysql) -- Full CRUD with MySQL - [SQLite](/docs/backend/databases/sqlite) -- Local file-based databases - [CSV](/docs/backend/databases/csv) -- Read-only CSV file queries - [Custom Providers](/docs/backend/databases/custom-provider) -- Build your own Drizzle-compatible provider --- ## Supported Databases Every database supported through Drizzle ORM adapters. URL: /docs/backend/databases/supported-databases `@superapp/backend` supports every database that [Drizzle ORM](https://orm.drizzle.team) supports. Each adapter is a thin wrapper around a JavaScript database driver, following the same [two-layer architecture](https://github.com/drizzle-team/drizzle-orm/tree/main/drizzle-orm/src) used by Drizzle itself.
## PostgreSQL | Adapter | Package | Best for | Serverless | |---|---|---|---| | node-postgres | `pg` | Standard Node.js apps | No | | postgres-js | `postgres` | High-performance Node.js | No | | Neon (WebSocket) | `@neondatabase/serverless` | Neon with persistent connections | Yes | | Neon (HTTP) | `@neondatabase/serverless` | Neon on edge/serverless runtimes | Yes | | Vercel Postgres | `@vercel/postgres` | Vercel managed Postgres | Yes | | Supabase | `@supabase/supabase-js` | Supabase projects | Yes | | Xata | Xata HTTP API | Xata serverless database | Yes | | PGlite | `@electric-sql/pglite` | In-browser / WASM Postgres | Yes | | AWS Data API | AWS RDS Data API | Aurora Serverless | Yes | | Bun SQL | `Bun.sql` | Bun runtime | No | ## MySQL | Adapter | Package | Best for | Serverless | |---|---|---|---| | mysql2 | `mysql2` | Standard Node.js apps | No | | PlanetScale | `@planetscale/database` | PlanetScale serverless | Yes | | TiDB Cloud | TiDB serverless driver | TiDB Cloud | Yes | ## SQLite | Adapter | Package | Best for | Serverless | |---|---|---|---| | better-sqlite3 | `better-sqlite3` | Node.js (synchronous, fast) | No | | LibSQL / Turso | `@libsql/client` | Turso edge database | Yes | | Cloudflare D1 | Cloudflare D1 binding | Cloudflare Workers | Yes | | Durable Objects | Cloudflare DO SQLite | Durable Object storage | Yes | | Bun SQLite | `bun:sqlite` | Bun runtime | No | | Expo SQLite | `expo-sqlite` | React Native (Expo) | No | | OP-SQLite | `op-sqlite` | React Native (high-performance) | No | | sql.js | `sql.js` | In-browser WASM SQLite | Yes | ## SingleStore | Adapter | Package | Best for | Serverless | |---|---|---|---| | SingleStore | SingleStore driver | Direct SingleStore connection | No | ## Gel (formerly EdgeDB) | Adapter | Package | Best for | Serverless | |---|---|---|---| | Gel | Gel client | Gel database | No | ## Choosing an Adapter Pick the adapter that matches your **runtime** and **hosting**: | You're deploying to... 
| Use | |---|---| | Node.js + self-hosted Postgres | `node-postgres` or `postgres-js` | | Vercel | `vercel-postgres` or `neon-http` | | Cloudflare Workers | `d1` (SQLite) or `neon-http` (Postgres) | | Supabase | `supabase` | | PlanetScale | `planetscale-serverless` | | Turso | `libsql` | | Bun | `bun-sql` (Postgres) or `bun-sqlite` (SQLite) | | React Native | `expo-sqlite` or `op-sqlite` | | Browser | `pglite` (Postgres) or `sql-js` (SQLite) | | AWS Lambda | `aws-data-api` or `neon-http` | ## Using Non-Default Adapters The three built-in providers (`pg`, `mysql2`, `better-sqlite3`) work out of the box with a connection string. For any other adapter, register a [custom provider](/docs/backend/databases/custom-provider) following the Drizzle adapter pattern: ```typescript const engine = createEngine({ providers: [neonProvider], connections: { main: { type: 'neon', connectionString: process.env.NEON_URL! }, }, }) ``` See [Custom Providers](/docs/backend/databases/custom-provider) for full examples with Neon, Cloudflare D1, and HTTP APIs. --- ## PostgreSQL Connect to PostgreSQL databases. URL: /docs/backend/databases/postgres Connect to a PostgreSQL database and get full CRUD through a direct `pg` driver connection. ```typescript const engine = createEngine({ connections: { main: process.env.PG_URL!, }, }) ``` ## Connection URL Format Standard PostgreSQL connection string: ``` postgres://user:password@host:5432/database?sslmode=require ``` | Parameter | Description | |---|---| | `user` | Database user | | `password` | Database password | | `host` | Server hostname or IP | | `5432` | Port (default: 5432) | | `database` | Database name | | `sslmode` | `disable`, `require`, `verify-ca`, `verify-full` | ## How It Works When the engine starts, it detects the `postgres://` protocol and initializes a connection pool using the `pg` driver. All tables in the target database become available under the `main` namespace. 
Queries are executed directly against Postgres with no intermediary. ## Capabilities | Operation | Supported | |---|---| | `findMany` / `findOne` | Yes | | `create` | Yes | | `update` | Yes | | `delete` | Yes | | `count` / `aggregate` | Yes | | Transactions | Yes | ## Full Example ```typescript const engine = createEngine({ connections: { main: process.env.PG_URL!, }, auth: betterAuthProvider({ secret: process.env.AUTH_SECRET!, userTable: { table: 'main.users', matchOn: { column: 'id', jwtField: 'id' }, columns: ['id', 'email', 'name'], }, }), permissions: { orders: { table: 'main.orders', select: { roles: ['viewer'], columns: ['id', 'amount', 'status', 'customer_id', 'created_at'], }, }, }, }) const app = new Hono() app.route('/', createHonoMiddleware(engine)) serve({ fetch: app.fetch, port: 3001 }) ``` ## Multiple Postgres Connections Attach several Postgres databases under different names: ```typescript const engine = createEngine({ connections: { main: process.env.PG_URL!, analytics: process.env.ANALYTICS_PG_URL!, }, }) ``` Tables are namespaced: `main.orders`, `analytics.events`. ## Troubleshooting | Problem | Cause | Fix | |---|---|---| | `Connection refused` | Postgres not reachable | Check host, port, and firewall rules | | `password authentication failed` | Wrong credentials | Verify user/password in connection URL | | `SSL connection required` | Server requires SSL | Add `?sslmode=require` to the URL | | `relation does not exist` | Table not in target database | Check database name in the URL | | `too many connections` | Pool exhausted on Postgres side | Increase `max_connections` or use a connection pooler like PgBouncer | --- ## MySQL Connect to MySQL databases. URL: /docs/backend/databases/mysql Connect to a MySQL database and get full CRUD through a direct `mysql2` driver connection. 
```typescript const engine = createEngine({ connections: { warehouse: process.env.MYSQL_URL!, }, }) ``` ## Connection URL Format Standard MySQL connection string: ``` mysql://user:password@host:3306/database ``` | Parameter | Description | |---|---| | `user` | Database user | | `password` | Database password | | `host` | Server hostname or IP | | `3306` | Port (default: 3306) | | `database` | Database name | ## How It Works When the engine starts, it detects the `mysql://` protocol and initializes a connection pool using the `mysql2` driver. All tables in the target database become available under the `warehouse` namespace. Queries are executed directly against MySQL with no intermediary. ## Capabilities | Operation | Supported | |---|---| | `findMany` / `findOne` | Yes | | `create` | Yes | | `update` | Yes | | `delete` | Yes | | `count` / `aggregate` | Yes | | Transactions | Yes | ## Full Example ```typescript const engine = createEngine({ connections: { warehouse: process.env.MYSQL_URL!, }, permissions: { customers: { table: 'warehouse.customers', select: { roles: ['viewer'], columns: ['id', 'name', 'email', 'created_at'], }, }, }, }) const app = new Hono() app.route('/', createHonoMiddleware(engine)) serve({ fetch: app.fetch, port: 3001 }) ``` ## Combining with Other Providers MySQL works alongside any other provider. A common pattern is Postgres for your primary database and MySQL for a legacy warehouse: ```typescript const engine = createEngine({ connections: { main: process.env.PG_URL!, warehouse: process.env.MYSQL_URL!, }, }) ``` Tables are namespaced: `main.orders`, `warehouse.customers`. 
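With both connections attached, one permission map can govern tables from either namespace. A hedged sketch combining the two per-database examples above (roles and column lists are illustrative):

```typescript
const engine = createEngine({
  connections: {
    main: process.env.PG_URL!,          // primary Postgres
    warehouse: process.env.MYSQL_URL!,  // legacy MySQL warehouse
  },
  permissions: {
    orders: {
      table: 'main.orders', // Postgres table
      select: {
        roles: ['viewer'],
        columns: ['id', 'amount', 'status', 'customer_id', 'created_at'],
      },
    },
    customers: {
      table: 'warehouse.customers', // MySQL table
      select: {
        roles: ['viewer'],
        columns: ['id', 'name', 'email', 'created_at'],
      },
    },
  },
})
```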
## Troubleshooting | Problem | Cause | Fix | |---|---|---| | `Connection refused` | MySQL not reachable | Check host, port, and firewall rules | | `Access denied` | Wrong credentials | Verify user/password in connection URL | | `Unknown database` | Database does not exist | Check database name in the URL | | `Too many connections` | Pool exhausted on MySQL side | Increase `max_connections` in MySQL config | | `SSL required` | Server enforces SSL | Add `?ssl-mode=REQUIRED` to the URL | --- ## SQLite Connect to SQLite databases. URL: /docs/backend/databases/sqlite Connect to a local SQLite file and get full CRUD through a direct `better-sqlite3` driver connection. Ideal for development, testing, and embedded use cases. ```typescript const engine = createEngine({ connections: { local: './data/app.db', }, }) ``` ## Connection Config SQLite connections are just file path strings: | Format | Description | |---|---| | `'./data/app.db'` | Relative path to the `.db` file | | `'/absolute/path/app.db'` | Absolute path to the `.db` file | The file must exist before the engine starts. To create a new database, use the `sqlite3` CLI or any SQLite tool. ## How It Works When the engine starts, it detects the file path and opens a connection using the `better-sqlite3` driver. All tables in the SQLite file become available under the `local` namespace. ## Capabilities | Operation | Supported | |---|---| | `findMany` / `findOne` | Yes | | `create` | Yes | | `update` | Yes | | `delete` | Yes | | `count` / `aggregate` | Yes | | Transactions | Yes | ## Development Setup SQLite is the fastest way to develop locally. 
Seed a database and point the engine at it: ```bash sqlite3 ./data/dev.db <<'SQL' CREATE TABLE orders ( id INTEGER PRIMARY KEY AUTOINCREMENT, amount DECIMAL(10,2) NOT NULL, status TEXT NOT NULL DEFAULT 'pending', customer_id INTEGER NOT NULL, created_at TEXT NOT NULL DEFAULT (datetime('now')) ); INSERT INTO orders (amount, status, customer_id) VALUES (99.99, 'active', 1), (149.50, 'active', 2), (29.00, 'pending', 1); SQL ``` ```typescript const engine = createEngine({ connections: { main: './data/dev.db', }, }) ``` ## Testing with Temporary Databases For tests, copy a seeded fixture database into a temporary directory before each test run: ```typescript import { cpSync, mkdtempSync } from 'node:fs' import { tmpdir } from 'node:os' import { join } from 'node:path' function createTestEngine() { const dir = mkdtempSync(join(tmpdir(), 'superapp-test-')) cpSync('./fixtures/test.db', join(dir, 'test.db')) return createEngine({ connections: { main: join(dir, 'test.db'), }, }) } ``` ## Troubleshooting | Problem | Cause | Fix | |---|---|---| | `unable to open database file` | File not found | Check the path; it must exist before engine starts | | `database is locked` | Concurrent writes from another process | Use WAL mode (`PRAGMA journal_mode=WAL`) | | `read-only database` | File permissions | Ensure the process has write access to the file | --- ## Neon Serverless Connect to Neon via WebSocket or HTTP. URL: /docs/backend/databases/neon-serverless Connect to [Neon](https://neon.tech) serverless Postgres using the `@neondatabase/serverless` driver. Works on edge runtimes (Cloudflare Workers, Vercel Edge) where TCP connections are unavailable. ```typescript const engine = createEngine({ providers: [neonProvider], connections: { main: { type: 'neon', connectionString: process.env.NEON_URL! }, }, }) ``` ## Install ```bash npm install @neondatabase/serverless ``` ## HTTP vs WebSocket | Mode | Best for | Transactions | |---|---|---| | HTTP | Single queries on edge/serverless | No | | WebSocket | Multiple queries, sessions | Yes | The provider auto-selects based on runtime capabilities.
Force a mode with the `transport` option: ```typescript connections: { main: { type: 'neon', connectionString: process.env.NEON_URL!, transport: 'http' }, } ``` ## Connection Options | Option | Type | Description | |---|---|---| | `connectionString` | `string` | Neon `postgres://` connection string | | `transport` | `'http' \| 'ws'` | Force transport mode | --- ## Supabase Connect to Supabase Postgres. URL: /docs/backend/databases/supabase Connect to [Supabase](https://supabase.com) Postgres using the Supabase client SDK. ```typescript const engine = createEngine({ providers: [supabaseProvider], connections: { main: { type: 'supabase', url: process.env.SUPABASE_URL!, serviceRoleKey: process.env.SUPABASE_SERVICE_ROLE_KEY!, }, }, }) ``` ## Install ```bash npm install @supabase/supabase-js ``` ## Connection Options | Option | Type | Description | |---|---|---| | `url` | `string` | Supabase project URL | | `serviceRoleKey` | `string` | Service role key (bypasses RLS) | | `anonKey` | `string` | Anon key (respects RLS) | Use `serviceRoleKey` for backend access. The anon key enforces Row-Level Security and is intended for client-side use. --- ## PlanetScale Connect to PlanetScale serverless MySQL. URL: /docs/backend/databases/planetscale Connect to [PlanetScale](https://planetscale.com) using the serverless HTTP driver. Works on edge runtimes where TCP connections are unavailable. ```typescript const engine = createEngine({ providers: [planetscaleProvider], connections: { main: { type: 'planetscale', connectionString: process.env.PLANETSCALE_URL! }, }, }) ``` ## Install ```bash npm install @planetscale/database ``` ## Connection URL Format ``` mysql://username:password@host/database?ssl={"rejectUnauthorized":true} ``` Get your connection string from the PlanetScale dashboard under **Connect** → **Connect with** → **@planetscale/database**. 
## Connection Options | Option | Type | Description | |---|---|---| | `connectionString` | `string` | PlanetScale connection URL | | `host` | `string` | PlanetScale host (alternative to URL) | | `username` | `string` | Database username | | `password` | `string` | Database password | --- ## Vercel Postgres Connect via Vercel's managed Postgres. URL: /docs/backend/databases/vercel-postgres Connect to [Vercel Postgres](https://vercel.com/docs/storage/vercel-postgres) using the `@vercel/postgres` SDK. ```typescript const engine = createEngine({ providers: [vercelPostgresProvider], connections: { main: { type: 'vercel-postgres' }, }, }) ``` ## Install ```bash npm install @vercel/postgres ``` ## How It Works The Vercel Postgres SDK reads connection credentials from the `POSTGRES_URL` environment variable automatically when deployed on Vercel. No connection string needed in the config. For local development, add the Vercel env vars to `.env.local`: ```bash npx vercel env pull .env.local ``` ## Connection Options | Option | Type | Description | |---|---|---| | `connectionString` | `string` | Override the auto-detected URL (optional) | --- ## LibSQL / Turso Connect to Turso or LibSQL databases. URL: /docs/backend/databases/libsql Connect to [Turso](https://turso.tech) or any LibSQL database. Supports both local files and remote edge replicas. 
```typescript const engine = createEngine({ providers: [libsqlProvider], connections: { main: { type: 'libsql', url: process.env.TURSO_URL!, authToken: process.env.TURSO_AUTH_TOKEN!, }, }, }) ``` ## Install ```bash npm install @libsql/client ``` ## Connection Modes | Mode | URL format | Use case | |---|---|---| | Remote | `libsql://your-db.turso.io` | Production with Turso | | Local file | `file:./data/local.db` | Development | | In-memory | `:memory:` | Testing | | Embedded replica | `file:./data/local.db` + `syncUrl` | Edge with local cache | ## Connection Options | Option | Type | Description | |---|---|---| | `url` | `string` | Database URL | | `authToken` | `string` | Turso auth token (required for remote) | | `syncUrl` | `string` | Remote URL for embedded replicas | | `syncInterval` | `number` | Sync interval in seconds | --- ## Cloudflare D1 Connect to Cloudflare D1 SQLite. URL: /docs/backend/databases/d1 Connect to [Cloudflare D1](https://developers.cloudflare.com/d1/) from Cloudflare Workers. ```typescript export default { async fetch(request: Request, env: Env) { const engine = createEngine({ providers: [d1Provider], connections: { main: { type: 'd1', binding: env.DB }, }, }) // handle request... }, } ``` ## How It Works D1 bindings are provided by the Cloudflare Workers runtime. Pass the binding directly -- no connection strings or authentication needed. ## wrangler.toml ```toml [[d1_databases]] binding = "DB" database_name = "my-database" database_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" ``` ## Connection Options | Option | Type | Description | |---|---|---| | `binding` | `D1Database` | D1 binding from the Workers environment | --- ## Postgres.js Connect with the postgres.js driver. URL: /docs/backend/databases/postgres-js High-performance PostgreSQL driver for Node.js, Deno, and Bun. ```typescript const engine = createEngine({ providers: [postgresJsProvider], connections: { main: { type: 'postgres-js', connectionString: process.env.PG_URL! 
}, }, }) ``` ## Install ```bash npm install postgres ``` ## Why Postgres.js - Fastest PostgreSQL driver for Node.js - Built-in connection pooling - Tagged template literal query interface - Supports Deno and Bun natively ## Connection Options | Option | Type | Description | |---|---|---| | `connectionString` | `string` | Standard `postgres://` URL | | `max` | `number` | Max pool connections (default: 10) | | `idle_timeout` | `number` | Seconds before idle connections close | | `connect_timeout` | `number` | Seconds before connection attempt times out | --- ## PGlite In-browser and in-process Postgres via WASM. URL: /docs/backend/databases/pglite [PGlite](https://pglite.dev) runs a full Postgres instance in-process using WASM. Works in Node.js, Bun, Deno, and the browser. ```typescript const engine = createEngine({ providers: [pgliteProvider], connections: { main: { type: 'pglite', dataDir: './data/pglite' }, }, }) ``` ## Install ```bash npm install @electric-sql/pglite ``` ## Use Cases - **Local development** -- No external Postgres needed - **Testing** -- Ephemeral in-memory databases per test - **Embedded apps** -- Full Postgres without a server - **Browser** -- Client-side SQL with Postgres syntax ## Connection Options | Option | Type | Description | |---|---|---| | `dataDir` | `string` | Directory for persistent data (optional, in-memory if omitted) | ## In-Memory for Tests ```typescript const engine = createEngine({ providers: [pgliteProvider], connections: { main: { type: 'pglite' }, // no dataDir = in-memory }, }) ``` --- ## AWS Data API Connect to Aurora Serverless via AWS RDS Data API. URL: /docs/backend/databases/aws-data-api Connect to [Amazon Aurora Serverless](https://aws.amazon.com/rds/aurora/serverless/) using the RDS Data API. No VPC or persistent connections required. 
```typescript const engine = createEngine({ providers: [awsDataApiProvider], connections: { main: { type: 'aws-data-api', resourceArn: process.env.AURORA_RESOURCE_ARN!, secretArn: process.env.AURORA_SECRET_ARN!, database: process.env.AURORA_DATABASE!, }, }, }) ``` ## Install ```bash npm install @aws-sdk/client-rds-data ``` ## Connection Options | Option | Type | Description | |---|---|---| | `resourceArn` | `string` | Aurora cluster ARN | | `secretArn` | `string` | Secrets Manager ARN for credentials | | `database` | `string` | Database name | | `region` | `string` | AWS region (optional, uses default) | --- ## Bun SQLite Connect using Bun's native SQLite driver. URL: /docs/backend/databases/bun-sqlite Use Bun's built-in `bun:sqlite` driver for zero-dependency SQLite access. ```typescript const engine = createEngine({ providers: [bunSqliteProvider], connections: { main: { type: 'bun-sqlite', filename: './data/app.db' }, }, }) ``` ## No Install Needed `bun:sqlite` is built into the Bun runtime. No npm packages required. ## Connection Options | Option | Type | Description | |---|---|---| | `filename` | `string` | Path to SQLite file | | `readonly` | `boolean` | Open in read-only mode | --- ## SingleStore Connect to SingleStore databases. URL: /docs/backend/databases/singlestore Connect to [SingleStore](https://www.singlestore.com) (formerly MemSQL), a distributed SQL database optimized for real-time analytics and transactions. ```typescript const engine = createEngine({ providers: [singlestoreProvider], connections: { main: { type: 'singlestore', connectionString: process.env.SINGLESTORE_URL! }, }, }) ``` ## Install ```bash npm install mysql2 ``` SingleStore uses the MySQL wire protocol, so it connects through the `mysql2` driver. 
## Connection URL Format ``` mysql://user:password@host:3306/database?ssl=true ``` ## Connection Options | Option | Type | Description | |---|---|---| | `connectionString` | `string` | SingleStore connection URL (MySQL format) | --- ## Expo SQLite Connect from React Native using Expo SQLite. URL: /docs/backend/databases/expo-sqlite Use SQLite in React Native apps with [Expo SQLite](https://docs.expo.dev/versions/latest/sdk/sqlite/). ```typescript const engine = createEngine({ providers: [expoSqliteProvider], connections: { main: { type: 'expo-sqlite', databaseName: 'app.db' }, }, }) ``` ## Install ```bash npx expo install expo-sqlite ``` ## Connection Options | Option | Type | Description | |---|---|---| | `databaseName` | `string` | Name of the SQLite database file | --- ## Xata Connect to Xata serverless database. URL: /docs/backend/databases/xata Connect to [Xata](https://xata.io) serverless database via the HTTP API. ```typescript const engine = createEngine({ providers: [xataProvider], connections: { main: { type: 'xata', apiKey: process.env.XATA_API_KEY!, branch: process.env.XATA_BRANCH || 'main', }, }, }) ``` ## Install ```bash npm install @xata.io/client ``` ## Connection Options | Option | Type | Description | |---|---|---| | `apiKey` | `string` | Xata API key | | `branch` | `string` | Database branch (default: `main`) | | `databaseURL` | `string` | Override the auto-detected database URL | --- ## TiDB Cloud Connect to TiDB Cloud serverless. URL: /docs/backend/databases/tidb Connect to [TiDB Cloud](https://tidbcloud.com) serverless MySQL-compatible database. ```typescript const engine = createEngine({ providers: [tidbProvider], connections: { main: { type: 'tidb', connectionString: process.env.TIDB_URL! }, }, }) ``` ## Install ```bash npm install @tidbcloud/serverless ``` ## Connection Options | Option | Type | Description | |---|---|---| | `connectionString` | `string` | TiDB Cloud connection URL | --- ## sql.js In-browser SQLite via WASM. 
URL: /docs/backend/databases/sql-js Run SQLite in the browser using [sql.js](https://sql.js.org), a WASM build of SQLite. ```typescript const engine = createEngine({ providers: [sqlJsProvider], connections: { main: { type: 'sql-js' }, }, }) ``` ## Install ```bash npm install sql.js ``` ## Use Cases - **Browser apps** -- Client-side SQL without a server - **Offline-first** -- Store data locally in the browser - **Demos** -- Interactive SQL sandboxes ## Connection Options | Option | Type | Description | |---|---|---| | `data` | `ArrayLike<number>` | Initial database binary (optional) | | `locateFile` | `(filename: string) => string` | Custom path to WASM file | --- ## Gel Connect to Gel (formerly EdgeDB). URL: /docs/backend/databases/gel Connect to [Gel](https://gel.dev) (formerly EdgeDB), a Postgres-based database with a graph-relational model. ```typescript const engine = createEngine({ providers: [gelProvider], connections: { main: { type: 'gel', connectionString: process.env.GEL_URL! }, }, }) ``` ## Install ```bash npm install gel ``` ## Connection Options | Option | Type | Description | |---|---|---| | `connectionString` | `string` | Gel connection URL | | `tlsSecurity` | `'strict' \| 'no_host_verification' \| 'insecure'` | TLS mode | --- ## CSV Query CSV files as tables. URL: /docs/backend/databases/csv Point at a directory of CSV files and query them as read-only tables. Useful for reports, data imports, and static datasets. ```typescript const engine = createEngine({ connections: { reports: { directory: './data/reports' }, }, }) ``` ## Connection Config CSV connections use an object with a `directory` field: | Option | Type | Description | |---|---|---| | `directory` | `string` | Path to directory containing `.csv` files | Each `.csv` file in the directory becomes a table. The file name (without extension) is the table name: `orders.csv` becomes `reports.orders`.
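As a rough illustration of that mapping, here is a simplified sketch of turning a directory of CSV files into namespaced tables. This mirrors the naming scheme only, not the engine's actual loader — it assumes a header row and comma delimiters, and ignores quoting and type detection:

```typescript
import { readdirSync, readFileSync } from 'node:fs'
import { basename, extname, join } from 'node:path'

// Load every .csv in a directory as { "<namespace>.<file>": rows }.
// Simplified sketch: header row required, no quoted fields, all values as strings.
function loadCsvTables(namespace: string, directory: string) {
  const tables: Record<string, Record<string, string>[]> = {}
  for (const file of readdirSync(directory)) {
    if (extname(file) !== '.csv') continue
    const [header, ...rows] = readFileSync(join(directory, file), 'utf8')
      .trim()
      .split(/\r?\n/)
    const columns = header.split(',')
    // orders.csv in namespace "reports" becomes the "reports.orders" table
    tables[`${namespace}.${basename(file, '.csv')}`] = rows.map((line) => {
      const values = line.split(',')
      return Object.fromEntries(columns.map((c, i) => [c, values[i]]))
    })
  }
  return tables
}
```

A `reports` directory containing `orders.csv` yields a `reports.orders` table whose rows are keyed by the header columns.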
## How It Works When the engine starts, it scans the directory and loads each CSV file into memory as a read-only table. Column types, delimiters, and headers are auto-detected. ## Capabilities | Operation | Supported | |---|---| | `findMany` / `findOne` | Yes | | `create` | No | | `update` | No | | `delete` | No | | `count` / `aggregate` | Yes | | Transactions | No | CSV connections are read-only. Write operations return a `405 Method Not Allowed` error. ## Directory Layout ``` data/reports/ ├── orders.csv ├── customers.csv └── organizations.csv ``` Each file should have a header row: ```csv id,amount,status,customer_id,created_at 1,99.99,active,1,2024-01-15 2,149.50,active,2,2024-01-16 3,29.00,pending,1,2024-01-17 ``` ## Querying CSV Data Once connected, CSV tables work like any other table in the client SDK: ```typescript const orders = await db.reports.orders.findMany({ select: ['id', 'amount', 'status'], where: { status: { $eq: 'active' } }, orderBy: { amount: 'desc' }, limit: 100, }) ``` ## Combining with Other Sources A common pattern is a primary Postgres database alongside CSV files for imported data: ```typescript const engine = createEngine({ connections: { main: process.env.PG_URL!, imports: { directory: './data/imports' }, }, }) ``` ## Troubleshooting | Problem | Cause | Fix | |---|---|---| | `No such file or directory` | Path does not exist | Verify the directory path | | `Could not parse CSV` | Malformed file | Check for consistent delimiters and quoting | | Empty results | No header row | Add a header row to the CSV file | | Wrong column types | Auto-detection mismatch | Ensure consistent formatting (dates, numbers) across all rows | --- ## Custom Providers Build a Drizzle-compatible provider for any data source. 
URL: /docs/backend/databases/custom-provider

When your database is not one of the built-in providers, or when you need to connect a non-database source (REST APIs, edge runtimes, proprietary systems), build a custom provider following the Drizzle adapter pattern.

## Drizzle Adapter Structure

Every Drizzle database adapter follows the same file structure. Your custom provider should mirror this pattern:

```
providers/
└── my-database/
    ├── driver.ts     # Connection initialization and public API
    ├── session.ts    # Session and query execution
    ├── migrator.ts   # Migration utilities (optional)
    └── index.ts      # Public exports
```

This is the same structure used by every adapter in the [Drizzle ORM source](https://github.com/drizzle-team/drizzle-orm/tree/main/drizzle-orm/src) -- from `node-postgres` to `d1` to `planetscale-serverless`.

## driver.ts

The driver file initializes the connection and returns a database instance. It wraps the underlying JavaScript driver:

```typescript
export interface MyDatabaseConfig extends ConnectionConfig {
  connectionString: string
  poolSize?: number
}

export const myDatabaseProvider: IntegrationProvider = {
  type: 'my-database',

  capabilities: {
    read: true,
    write: true,
    transactions: true,
  },

  async connect(config: MyDatabaseConfig): Promise<DriverConnection> {
    // Initialize the native driver — same as Drizzle's drizzle() entry point
    const client = new MyDatabaseClient(config.connectionString, {
      max: config.poolSize ?? 10,
    })
    await client.connect()
    return {
      client,
      dialect: 'my-database',
    }
  },

  async disconnect(connection: DriverConnection): Promise<void> {
    await connection.client.end()
  },

  async introspect(connection: DriverConnection) {
    const tables = await connection.client.query(
      `SELECT table_name FROM information_schema.tables WHERE table_schema = 'public'`
    )
    // Return TableSchema[] — same shape as Drizzle's introspect output
    return Promise.all(
      tables.rows.map(async (t: any) => {
        const columns = await connection.client.query(
          `SELECT column_name, data_type, is_nullable FROM information_schema.columns WHERE table_name = $1`,
          [t.table_name]
        )
        return {
          name: t.table_name,
          columns: columns.rows.map((c: any) => ({
            name: c.column_name,
            type: c.data_type,
            nullable: c.is_nullable === 'YES',
          })),
        }
      })
    )
  },
}
```

## session.ts

The session file manages query execution and transactions. In Drizzle, every adapter implements a `Session` class that wraps the driver's query method:

```typescript
export class MyDatabaseSession {
  constructor(private connection: DriverConnection) {}

  async execute(query: QueryRequest): Promise<{ rows: unknown[]; count: number }> {
    const result = await this.connection.client.query(query.sql, query.params)
    return {
      rows: result.rows,
      count: result.rowCount,
    }
  }

  async transaction<T>(fn: (tx: MyDatabaseSession) => Promise<T>): Promise<T> {
    const client = await this.connection.client.getConnection()
    try {
      await client.query('BEGIN')
      const tx = new MyDatabaseSession({ ...this.connection, client })
      const result = await fn(tx)
      await client.query('COMMIT')
      return result
    } catch (error) {
      await client.query('ROLLBACK')
      throw error
    } finally {
      client.release()
    }
  }
}
```

## index.ts

The index file re-exports the public API, just like every Drizzle adapter:

```typescript
export { myDatabaseProvider } from './driver'
export type { MyDatabaseConfig } from './driver'
```

## Registering the Provider

Pass your custom provider in the `providers` array:

```typescript
const engine = createEngine({
  providers: [myDatabaseProvider],
connections: { main: { type: 'my-database', connectionString: process.env.MY_DB_URL!, }, }, }) ``` ## Example: Neon Serverless Provider A real-world example following the Drizzle adapter pattern for [Neon's serverless driver](https://github.com/drizzle-team/drizzle-orm/tree/main/drizzle-orm/src/neon-serverless): ```typescript export const neonProvider: IntegrationProvider = { type: 'neon', capabilities: { read: true, write: true, transactions: false }, async connect(config) { const sql = neon(config.connectionString) return { client: sql, dialect: 'pg' } }, async disconnect() { // Neon HTTP connections are stateless — no cleanup needed }, async introspect(connection) { const tables = await connection.client( `SELECT table_name FROM information_schema.tables WHERE table_schema = 'public'` ) return Promise.all( tables.map(async (t: any) => { const columns = await connection.client( `SELECT column_name, data_type, is_nullable FROM information_schema.columns WHERE table_name = '${t.table_name}'` ) return { name: t.table_name, columns: columns.map((c: any) => ({ name: c.column_name, type: c.data_type, nullable: c.is_nullable === 'YES', })), } }) ) }, } // Usage const engine = createEngine({ providers: [neonProvider], connections: { main: { type: 'neon', connectionString: process.env.NEON_URL! 
}, }, }) ``` ## Example: Cloudflare D1 Provider Following the [D1 adapter pattern](https://github.com/drizzle-team/drizzle-orm/tree/main/drizzle-orm/src/d1): ```typescript export const d1Provider: IntegrationProvider = { type: 'd1', capabilities: { read: true, write: true, transactions: true }, async connect(config) { // D1 binding is passed from the Cloudflare Workers environment return { client: config.binding, dialect: 'sqlite' } }, async disconnect() { // D1 bindings are managed by the Workers runtime }, async introspect(connection) { const result = await connection.client .prepare(`SELECT name FROM sqlite_master WHERE type='table' AND name NOT LIKE 'sqlite_%'`) .all() return Promise.all( result.results.map(async (t: any) => { const columns = await connection.client .prepare(`PRAGMA table_info('${t.name}')`) .all() return { name: t.name, columns: columns.results.map((c: any) => ({ name: c.name, type: c.type, nullable: c.notnull === 0, primaryKey: c.pk === 1, })), } }) ) }, } // Usage in a Cloudflare Worker export default { async fetch(request: Request, env: Env) { const engine = createEngine({ providers: [d1Provider], connections: { main: { type: 'd1', binding: env.DB }, }, }) // ... }, } ``` ## Non-Database Providers The same adapter pattern works for non-database sources. 
Instead of SQL, the session translates queries into API calls:

```typescript
const httpProvider: IntegrationProvider = {
  type: 'http-api',
  capabilities: { read: true, write: false, transactions: false },

  async connect(config) {
    // Validate the API is reachable
    const res = await fetch(`${config.baseUrl}/health`, {
      headers: { Authorization: `Bearer ${config.apiKey}` },
    })
    if (!res.ok) throw new Error(`API unreachable: ${res.status}`)
    return { client: config, dialect: 'http' }
  },

  async disconnect() {},

  async introspect(connection) {
    const res = await fetch(`${connection.client.baseUrl}/schema`, {
      headers: { Authorization: `Bearer ${connection.client.apiKey}` },
    })
    return res.json()
  },
}
```

## IntegrationProvider Interface

```typescript
interface IntegrationProvider {
  type: string
  capabilities: {
    read: boolean
    write: boolean
    transactions: boolean
  }
  connect: (config: ConnectionConfig) => Promise<DriverConnection>
  disconnect: (connection: DriverConnection) => Promise<void>
  introspect: (connection: DriverConnection) => Promise<TableSchema[]>
}

interface TableSchema {
  name: string
  columns: {
    name: string
    type: string
    nullable: boolean
    primaryKey?: boolean
  }[]
}
```

Client queries work the same regardless of provider type:

```typescript
const contacts = await db.crm.contacts.findMany({
  select: ['id', 'name', 'email'],
  where: { status: { $eq: 'active' } },
  limit: 50,
})
```

---

## Authentication

How auth providers work, the resolveSession interface, and session enrichment.

URL: /docs/backend/auth

Auth providers handle JWT verification and session resolution. They turn a raw JWT token into an enriched user object that the permission system can reference.

```typescript
const engine = createEngine({
  connections: {
    main: process.env.PG_URL!,
  },
  auth: betterAuthProvider({
    secret: process.env.AUTH_SECRET!,
    userTable: {
      table: 'main.users',
      matchOn: { column: 'id', jwtField: 'id' },
    },
  }),
})
```

## How It Works

1. Client sends `Authorization: Bearer <token>` header
2. Auth provider verifies the token signature
3. Provider looks up the user in the configured `userTable`
4. `resolveSession` enriches the user with additional data (roles, org memberships)
5. The enriched user object is available as `$user.*` in permissions

## The AuthProvider Interface

Every auth provider must implement:

```typescript
interface AuthProvider {
  /** Verify JWT and return decoded payload */
  verifyToken(token: string): Promise<JWTPayload>

  /** Look up user record from JWT payload */
  findUser(payload: JWTPayload, db: QueryBuilder): Promise<User | null>

  /** Enrich user with session data (roles, orgs, etc.) */
  resolveSession?(user: User, db: QueryBuilder): Promise<Session>

  /** Auth route handlers (login, register, etc.) */
  routes?: Record<string, (req: Request) => Promise<Response>>
}
```

## Session Object

The object returned by `resolveSession` is what `$user.*` references in permissions:

```typescript
resolveSession: async (user, db) => {
  const memberships = await db
    .selectFrom('main.members')
    .select(['organization_id', 'role'])
    .where('user_id', '=', user.id)
    .where('status', '=', 'active')
    .execute()

  return {
    ...user,                                                  // $user.id, $user.email, $user.name
    org_ids: memberships.map(m => m.organization_id),         // $user.org_ids
    current_org_id: memberships[0]?.organization_id ?? null,  // $user.current_org_id
    org_roles: memberships,                                   // $user.org_roles
  }
}
```

Any property you return becomes available as a `$user.*` variable in [where clauses](/docs/backend/permissions/where), [validate](/docs/backend/permissions/validate), and [defaults & overwrite](/docs/backend/permissions/defaults).

## Available Providers

- [better-auth](/docs/backend/auth/better-auth) — Built-in provider with session management (recommended)
- [Custom Provider](/docs/backend/auth/custom-provider) — Implement your own auth logic

---

## better-auth

Set up the default authentication provider with better-auth, user table config, and session enrichment.

URL: /docs/backend/auth/better-auth

The `betterAuthProvider` is the recommended auth provider.
It wraps [better-auth](https://www.better-auth.com) and handles JWT verification, user lookup, and session management out of the box. ```typescript const auth = betterAuthProvider({ secret: process.env.AUTH_SECRET!, userTable: { table: 'main.users', matchOn: { column: 'id', jwtField: 'id' }, }, }) const engine = createEngine({ connections: { main: process.env.PG_URL!, }, auth, }) ``` ## Full Configuration ```typescript const auth = betterAuthProvider({ secret: process.env.AUTH_SECRET!, userTable: { table: 'main.users', matchOn: { column: 'id', jwtField: 'id' }, activeCheck: { column: 'is_active', value: true }, columns: ['id', 'email', 'name'], }, resolveSession: async (user, db) => { const memberships = await db .selectFrom('main.members') .select(['organization_id', 'role']) .where('user_id', '=', user.id) .where('status', '=', 'active') .execute() return { ...user, org_ids: memberships.map(m => m.organization_id), org_roles: memberships, current_org_id: memberships[0]?.organization_id ?? 
null,
    }
  },
})
```

## Options

| Option | Type | Required | Description |
|---|---|---|---|
| `secret` | `string` | Yes | Secret key for JWT signing and verification |
| `userTable.table` | `string` | Yes | Fully qualified table name (e.g., `main.users`) |
| `userTable.matchOn` | `{ column, jwtField }` | Yes | Map JWT field to user table column |
| `userTable.activeCheck` | `{ column, value }` | No | Only allow active users |
| `userTable.columns` | `string[]` | No | Columns to select from user table |
| `resolveSession` | `(user, db) => Promise<Session>` | No | Enrich user with session data |

## User Table Matching

The `matchOn` config tells the provider how to find the user record from the JWT payload:

```typescript
// JWT contains { id: 'usr_123', email: 'alice@example.com' }
matchOn: { column: 'id', jwtField: 'id' }
// → SELECT * FROM main.users WHERE id = 'usr_123'
```

## Active Check

Block inactive or suspended users from making requests:

```typescript
activeCheck: { column: 'is_active', value: true }
// → AND is_active = true
```

If the user record exists but fails the active check, the request returns `403 Forbidden`.

## Session Enrichment

The `resolveSession` callback runs after user lookup. Use it to attach organization memberships, roles, or any data your permissions reference:

```typescript
resolveSession: async (user, db) => {
  const memberships = await db
    .selectFrom('main.members')
    .select(['organization_id', 'role'])
    .where('user_id', '=', user.id)
    .where('status', '=', 'active')
    .execute()

  return {
    ...user,
    org_ids: memberships.map(m => m.organization_id),
    current_org_id: memberships[0]?.organization_id ?? null,
  }
}
```

The returned object is available as `$user.*` in permissions. For example, `$user.current_org_id` references the `current_org_id` property.

---

## Custom Provider

Implement a custom authentication provider using the AuthProvider interface.
URL: /docs/backend/auth/custom-provider

Build your own auth provider when you need to integrate with an existing auth system, use a custom JWT library, or handle non-standard token formats.

```typescript
const customAuth: AuthProvider = {
  async verifyToken(token) {
    const payload = await myJwtLib.verify(token, process.env.JWT_PUBLIC_KEY!)
    return payload
  },

  async findUser(payload, db) {
    const user = await db
      .selectFrom('main.users')
      .selectAll()
      .where('id', '=', payload.sub)
      .executeTakeFirst()
    return user ?? null
  },

  async resolveSession(user, db) {
    return { ...user, role: user.role }
  },
}

const engine = createEngine({
  connections: {
    main: process.env.PG_URL!,
  },
  auth: customAuth,
})
```

## AuthProvider Interface

```typescript
interface AuthProvider {
  /** Verify the JWT token and return the decoded payload */
  verifyToken(token: string): Promise<JWTPayload>

  /** Look up the user record from the decoded JWT payload */
  findUser(payload: JWTPayload, db: QueryBuilder): Promise<User | null>

  /** Optional: enrich the user object with session data */
  resolveSession?(user: User, db: QueryBuilder): Promise<Session>

  /** Optional: expose auth-related route handlers */
  routes?: Record<string, (req: Request) => Promise<Response>>
}

interface JWTPayload {
  sub?: string
  [key: string]: unknown
}
```

## Implementation Requirements

### verifyToken

Must throw an error if the token is invalid or expired. The engine catches this and returns `401 Unauthorized`.

```typescript
async verifyToken(token) {
  try {
    const { payload } = await jose.jwtVerify(token, publicKey, {
      issuer: 'https://auth.myapp.com',
      algorithms: ['RS256'],
    })
    return payload
  } catch {
    throw new Error('Invalid token')
  }
}
```

### findUser

Return `null` if the user is not found. The engine returns `401 Unauthorized` in that case.

```typescript
async findUser(payload, db) {
  const user = await db
    .selectFrom('main.users')
    .select(['id', 'email', 'name', 'role'])
    .where('id', '=', payload.sub)
    .where('is_active', '=', true)
    .executeTakeFirst()
  return user ??
null } ``` ### resolveSession (optional) Add any properties that your permissions need to reference via `$user.*`: ```typescript async resolveSession(user, db) { const orgs = await db .selectFrom('main.members') .select(['organization_id', 'role']) .where('user_id', '=', user.id) .execute() return { ...user, org_ids: orgs.map(o => o.organization_id), current_org_id: orgs[0]?.organization_id ?? null, } } ``` ### routes (optional) Expose custom auth endpoints under `/auth/*`: ```typescript routes: { 'POST /auth/login': async (req) => { const { email, password } = await req.json() const token = await myAuth.login(email, password) return Response.json({ token }) }, 'POST /auth/logout': async (req) => { await myAuth.logout(req.headers.get('authorization')) return Response.json({ ok: true }) }, } ``` ## Example: Firebase Auth ```typescript admin.initializeApp({ credential: admin.credential.applicationDefault() }) const firebaseAuth: AuthProvider = { async verifyToken(token) { const decoded = await admin.auth().verifyIdToken(token) return { sub: decoded.uid, email: decoded.email } }, async findUser(payload, db) { const user = await db .selectFrom('main.users') .selectAll() .where('firebase_uid', '=', payload.sub) .executeTakeFirst() return user ?? null }, async resolveSession(user, db) { return { ...user } }, } const engine = createEngine({ connections: { main: process.env.PG_URL!, }, auth: firebaseAuth, }) ``` --- ## Where Clauses Restrict which rows users can read, update, and delete — grouped by operation. URL: /docs/backend/permissions/where Where clauses inject SQL `WHERE` conditions into every query so users only see and touch rows they're allowed to. They work the same way as [Drizzle ORM filters](https://orm.drizzle.team/docs/select#filtering) — if you know Drizzle, you already know how this works. --- ## select Controls which rows and columns users can **read**. The `where` clause is appended to every `SELECT` query. 
```typescript permissions: { view_orders: { table: 'main.orders', roles: ['member'], select: { columns: ['id', 'amount', 'status', 'customer_id', 'created_at'], where: { organization_id: { $in: '$user.org_ids' }, status: { $ne: 'deleted' }, }, }, }, } ``` ```sql -- What the engine generates SELECT id, amount, status, customer_id, created_at FROM main.orders WHERE organization_id IN ('org_1', 'org_2') AND status != 'deleted' ``` The user's Drizzle query on the client runs normally — the engine silently adds the `WHERE` conditions. If the user tries to select columns outside the `columns` list, they're stripped. --- ## insert Controls which columns users can **create** and what values are allowed. Inserts don't use `where` — use [`validate`](/docs/backend/permissions/validate), [`default`](/docs/backend/permissions/defaults), and [`overwrite`](/docs/backend/permissions/defaults) instead. ```typescript permissions: { create_orders: { table: 'main.orders', roles: ['member'], insert: { columns: ['amount', 'status', 'customer_id'], validate: { amount: { $gte: 0 }, status: { $in: ['draft'] }, }, default: { status: 'draft' }, overwrite: { created_by: '$user.id', organization_id: '$user.current_org_id', }, }, }, } ``` ```sql -- User sends: { amount: 500, customer_id: 'cust_1' } -- Engine validates, fills defaults, forces overwrites, then runs: INSERT INTO main.orders (amount, customer_id, status, created_by, organization_id) VALUES (500, 'cust_1', 'draft', 'usr_123', 'org_1') ``` | Key | Purpose | |-----|---------| | `columns` | Allowed columns — anything else is rejected | | `validate` | Must pass or the request returns 403 | | `default` | Fills value if the client didn't send it (client can override) | | `overwrite` | Always set from server — client cannot override | --- ## update Controls which rows users can **modify** and what values are allowed. The `where` clause restricts which rows are updatable. 
```typescript permissions: { edit_orders: { table: 'main.orders', roles: ['member'], update: { columns: ['amount', 'status'], where: { organization_id: { $in: '$user.org_ids' }, status: { $ne: 'completed' }, }, validate: { amount: { $gte: 0 } }, overwrite: { updated_by: '$user.id' }, }, }, } ``` ```sql -- User sends: db.update(orders).set({ amount: 500 }).where(eq(orders.id, 'order_42')) -- Engine injects permission where clause + overwrite: UPDATE main.orders SET amount = 500, updated_by = 'usr_123' WHERE id = 'order_42' AND organization_id IN ('org_1', 'org_2') AND status != 'completed' ``` The user's own `where` (e.g. `eq(orders.id, 'order_42')`) is **AND'd** with the permission's `where`. If the row doesn't match both, nothing is updated. --- ## delete Controls which rows users can **remove**. The `where` clause restricts which rows are deletable. ```typescript permissions: { delete_drafts: { table: 'main.orders', roles: ['member'], delete: { where: { organization_id: { $in: '$user.org_ids' }, status: { $eq: 'draft' }, }, }, }, } ``` ```sql -- User sends: db.delete(orders).where(eq(orders.id, 'order_42')) -- Engine injects permission where clause: DELETE FROM main.orders WHERE id = 'order_42' AND organization_id IN ('org_1', 'org_2') AND status = 'draft' ``` Only rows matching both the user's filter and the permission's `where` are deleted. --- ## Learn More - [Operators](/docs/backend/permissions/operators) — Full operator reference - [Validate](/docs/backend/permissions/validate) — Input validation for insert and update - [Defaults & Overwrite](/docs/backend/permissions/defaults) — Auto-fill and force values on writes - [Raw SQL](/docs/backend/permissions/sql) — Escape hatch for complex WHERE conditions - [Drizzle ORM Filtering](https://orm.drizzle.team/docs/select#filtering) — Our where syntax mirrors Drizzle's patterns --- ## Validate Validate data on insert and update operations before it reaches the database. 
URL: /docs/backend/permissions/validate Validate rules check the values a user is trying to write. If the data does not match the rules, the request is rejected with `403 Forbidden` before any SQL is executed. ```typescript permissions: { create_orders: { table: 'main.orders', roles: ['sales'], insert: { columns: ['amount', 'status', 'customer_id'], validate: { amount: { $gte: 0 }, status: { $in: ['draft'] }, }, }, }, } ``` A user trying to insert `{ amount: -50, status: 'draft' }` gets rejected because `amount` fails the `$gte: 0` validate rule. ## How Validate Works Validate runs **before** the query. It compares the request body values against the rules: 1. Extract the values from the request body 2. Apply each validate operator to the corresponding value 3. If any rule fails, return `403` with the failing field 4. If all pass, proceed to query execution ## Validate vs. Where | | Where | Validate | |---|---|---| | **Purpose** | Row-level security (which rows) | Value validation (what values) | | **Applies to** | `select`, `update`, `delete` | `insert`, `update` | | **Mechanism** | Adds WHERE clause | Validates request body | | **Failure** | Silently excludes rows | Returns 403 error | ## Examples ### Range Validation ```typescript insert: { columns: ['amount', 'status'], validate: { amount: { $gte: 0, $lte: 100_000 }, }, } // Allows: { amount: 500 } // Rejects: { amount: -1 }, { amount: 200_000 } ``` ### Enum Validation ```typescript insert: { columns: ['status'], validate: { status: { $in: ['draft', 'active', 'closed'] }, }, } // Allows: { status: 'draft' } // Rejects: { status: 'deleted' }, { status: 'archived' } ``` ### Multiple Field Validation ```typescript insert: { columns: ['amount', 'status', 'priority'], validate: { amount: { $gte: 0, $lte: 100_000 }, status: { $in: ['draft', 'active', 'closed'] }, priority: { $gte: 1, $lte: 5 }, }, } // All three fields are validated independently // ALL must pass for the request to proceed ``` ### Update Validation 
Validate also applies to update operations, validating the new values: ```typescript permissions: { edit_org_orders: { table: 'main.orders', roles: ['editor'], update: { columns: '*', where: { organization_id: { $in: '$user.org_ids' } }, validate: { status: { $in: ['draft', 'active', 'closed'] }, amount: { $gte: 0, $lte: 100_000 }, }, }, }, } // User can update orders in their org, // but only set status to draft/active/closed // and amount between 0 and 100,000 ``` ## Validate with Session Variables You can reference `$user.*` in validate values: ```typescript insert: { columns: ['organization_id', 'name'], validate: { organization_id: { $eq: '$user.current_org_id' }, }, } // Ensures users can only write to their own organization ``` ## Supported Operators All [comparison operators](/docs/backend/permissions/operators) are available in validate rules: `$eq`, `$ne`, `$gt`, `$gte`, `$lt`, `$lte`, `$in`, `$nin`. --- ## Defaults & Overwrite Automatically fill missing values with default or force values with overwrite on insert and update. URL: /docs/backend/permissions/defaults `default` and `overwrite` automatically set column values on write operations. They live inside `insert` and `update` operation blocks. - **`default`** — fills in values when the client does not provide them. The client can override these by sending their own value. - **`overwrite`** — always applied regardless of what the client sends. The client cannot override these. 
```typescript permissions: { create_orders: { table: 'main.orders', roles: ['sales'], insert: { columns: ['amount', 'status', 'customer_id'], default: { status: 'draft', priority: 3, }, overwrite: { created_by: '$user.id', organization_id: '$user.current_org_id', }, }, }, } ``` When a user inserts an order: - `status` is set to `'draft'` only if the client does not send a `status` value - `priority` is set to `3` only if the client does not send a `priority` value - `created_by` and `organization_id` are always set from the session — even if the user sends different values in the request body ## How They Work ### default 1. User sends request body: `{ amount: 500, customer_id: 'cust_1' }` 2. Engine checks for missing fields that have defaults: `status` is missing, `priority` is missing 3. Default values are merged: `{ amount: 500, customer_id: 'cust_1', status: 'draft', priority: 3 }` 4. If the user had sent `{ amount: 500, status: 'active' }`, the `status` would remain `'active'` ### overwrite 1. User sends request body: `{ amount: 500, status: 'draft', created_by: 'someone_else' }` 2. Engine resolves overwrite variables: `$user.id` becomes `'usr_123'` 3. Overwrite values replace any user-provided values: `created_by` is forced to `'usr_123'` 4. Final insert: `{ amount: 500, status: 'draft', created_by: 'usr_123', organization_id: 'org_456' }` ## $user.* Variables Both `default` and `overwrite` reference the session object returned by `resolveSession`. Any property on the session object is available: | Variable | Example Value | Description | |---|---|---| | `$user.id` | `'usr_123'` | User ID from the user table | | `$user.email` | `'alice@example.com'` | User email | | `$user.name` | `'Alice'` | User name | | `$user.current_org_id` | `'org_456'` | Current organization (from resolveSession) | | `$user.org_ids` | `['org_1', 'org_2']` | All organization IDs (from resolveSession) | The available variables depend on what your `resolveSession` function returns. 
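The merge order described above can be modeled in a few lines of plain TypeScript. This is an illustrative sketch only — `resolveValue` and `applyWrites` are hypothetical names, not part of the engine's API:

```typescript
type Session = Record<string, unknown>
type Row = Record<string, unknown>

// Resolve a '$user.*' token against the session object; static values pass through.
function resolveValue(value: unknown, session: Session): unknown {
  if (typeof value === 'string' && value.startsWith('$user.')) {
    return session[value.slice('$user.'.length)]
  }
  return value
}

// Merge order: defaults fill only missing fields, overwrites always win.
function applyWrites(body: Row, defaults: Row, overwrite: Row, session: Session): Row {
  const out: Row = { ...body }
  for (const [key, value] of Object.entries(defaults)) {
    if (!(key in out)) out[key] = resolveValue(value, session)
  }
  for (const [key, value] of Object.entries(overwrite)) {
    out[key] = resolveValue(value, session)
  }
  return out
}

applyWrites(
  { amount: 500, created_by: 'someone_else' },
  { status: 'draft' },
  { created_by: '$user.id' },
  { id: 'usr_123' },
)
// → { amount: 500, created_by: 'usr_123', status: 'draft' }
```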
## Common Patterns ### Audit Columns on Insert ```typescript insert: { columns: ['amount', 'status'], overwrite: { created_by: '$user.id', created_at: '$now', }, } ``` ### Audit Columns on Update ```typescript update: { columns: ['amount', 'status'], overwrite: { updated_by: '$user.id', updated_at: '$now', }, } ``` ### Organization Scoping Ensure records always belong to the user's current organization: ```typescript insert: { columns: ['name', 'description'], overwrite: { organization_id: '$user.current_org_id', }, } ``` ### Sensible Defaults for Optional Fields Let the client optionally provide values, falling back to defaults: ```typescript insert: { columns: ['name', 'description', 'status', 'priority'], default: { status: 'draft', priority: 3, }, } ``` ### Combined Insert Permission ```typescript permissions: { create_orders: { table: 'main.orders', roles: ['sales'], insert: { columns: ['amount', 'status', 'customer_id'], validate: { amount: { $gte: 0 }, status: { $in: ['draft'] }, }, default: { priority: 3, }, overwrite: { created_by: '$user.id', organization_id: '$user.current_org_id', }, }, }, } ``` This permission: - Allows inserting `amount`, `status`, and `customer_id` - Validates that `amount >= 0` and `status` is `'draft'` - Sets `priority` to `3` if the client does not provide it - Always sets `created_by` and `organization_id` from the session ## Default vs. Overwrite vs. 
Validate | | Default | Overwrite | Validate | |---|---|---|---| | **Purpose** | Fill missing values | Force values | Validate values | | **User control** | Can override by sending value | Cannot override | Must provide valid value | | **When applied** | Only when field is missing | Always, after validation | Before query | | **Failure** | Never fails | Never fails | Returns 403 on invalid data | | **Operations** | `insert`, `update` | `insert`, `update` | `insert`, `update` | ## Static Values Both `default` and `overwrite` can use static values instead of session variables: ```typescript insert: { columns: ['name'], default: { source: 'api', version: 2, }, overwrite: { tenant: 'main', }, } ``` --- ## Operators Complete reference of MongoDB-style operators available in where clauses, validate rules, and defaults. URL: /docs/backend/permissions/operators Permissions use MongoDB-style operators to define conditions. These operators work in both `where` (WHERE clauses) and `validate` (write validation) contexts. 
```typescript where: { amount: { $gte: 0, $lte: 100_000 }, status: { $in: ['active', 'pending'] }, customer_id: { $eq: '$user.id' }, } ``` ## Comparison Operators | Operator | Description | SQL Equivalent | Example | |---|---|---|---| | `$eq` | Equal to | `=` | `{ status: { $eq: 'active' } }` | | `$ne` | Not equal to | `!=` | `{ status: { $ne: 'deleted' } }` | | `$gt` | Greater than | `>` | `{ amount: { $gt: 100 } }` | | `$gte` | Greater than or equal | `>=` | `{ amount: { $gte: 0 } }` | | `$lt` | Less than | `<` | `{ amount: { $lt: 50_000 } }` | | `$lte` | Less than or equal | `<=` | `{ amount: { $lte: 100_000 } }` | | `$in` | In array | `IN (...)` | `{ status: { $in: ['active', 'pending'] } }` | | `$nin` | Not in array | `NOT IN (...)` | `{ status: { $nin: ['deleted', 'archived'] } }` | ## Logical Operators | Operator | Description | SQL Equivalent | Example | |---|---|---|---| | `$and` | All conditions must match | `AND` | `{ $and: [{ a: { $gt: 0 } }, { b: { $lt: 100 } }] }` | | `$or` | Any condition must match | `OR` | `{ $or: [{ status: { $eq: 'active' } }, { status: { $eq: 'pending' } }] }` | | `$not` | Negate a condition | `NOT` | `{ $not: { status: { $eq: 'deleted' } } }` | ## Special Values | Value | Description | Resolves To | |---|---|---| | `'$user.id'` | Current user ID | `'usr_123'` | | `'$user.email'` | Current user email | `'alice@example.com'` | | `'$user.current_org_id'` | Current organization | `'org_456'` | | `'$user.org_ids'` | User's organization IDs | `['org_1', 'org_2']` | | `'$user.*'` | Any session property | Value from `resolveSession` | | `'$now'` | Current timestamp | `'2025-01-15T10:30:00Z'` | ## Usage in Where Clauses Where clauses add SQL WHERE conditions. 
Multiple conditions at the same level are combined with AND: ```typescript select: { columns: '*', where: { organization_id: { $eq: '$user.current_org_id' }, // AND status: { $ne: 'deleted' }, // AND amount: { $gte: 0 }, }, } ``` ```sql WHERE organization_id = 'org_456' AND status != 'deleted' AND amount >= 0 ``` ### OR Logic ```typescript select: { columns: '*', where: { $or: [ { customer_id: { $eq: '$user.id' } }, { assigned_to: { $eq: '$user.id' } }, ], }, } ``` ```sql WHERE (customer_id = 'usr_123' OR assigned_to = 'usr_123') ``` ### Nested Logic ```typescript select: { columns: '*', where: { $and: [ { status: { $in: ['active', 'pending'] } }, { $or: [ { customer_id: { $eq: '$user.id' } }, { organization_id: { $in: '$user.org_ids' } }, ], }, ], }, } ``` ```sql WHERE status IN ('active', 'pending') AND (customer_id = 'usr_123' OR organization_id IN ('org_1', 'org_2')) ``` ## Usage in Validate Validate rules check request body values. The same operators apply, but they validate data rather than generating SQL: ```typescript insert: { columns: ['amount', 'status'], validate: { amount: { $gte: 0, $lte: 100_000 }, status: { $in: ['draft', 'active', 'closed'] }, }, } ``` A request with `{ amount: -5 }` is rejected because `$gte: 0` fails. ## Combining Multiple Operators You can combine multiple operators on the same field: ```typescript // Range: 0 <= amount <= 100,000 { amount: { $gte: 0, $lte: 100_000 } } // Not in a set { status: { $nin: ['deleted', 'archived'] } } ``` ## Array Values with $in / $nin When using `$in` or `$nin`, provide an array of allowed/disallowed values: ```typescript // Static array { role: { $in: ['admin', 'editor'] } } // Session variable (resolves to array) { organization_id: { $in: '$user.org_ids' } } ``` --- ## Raw SQL Write advanced WHERE clauses with raw SQL for cases that standard operators cannot express. 
URL: /docs/backend/permissions/sql When MongoDB-style operators are not enough, you can use raw SQL expressions inside operation blocks. This is an escape hatch for complex access control logic. ```typescript permissions: { view_recent_orders: { table: 'main.orders', roles: ['analyst'], select: { columns: '*', sql: "created_at > NOW() - INTERVAL '30 days'", }, }, } ``` ## Syntax Use `sql` as a key inside an operation block with a SQL string value: ```typescript select: { columns: '*', sql: "your_sql_expression", } ``` The expression is injected directly into the WHERE clause. ## Examples ### Date Range ```typescript permissions: { view_recent_orders: { table: 'main.orders', roles: ['analyst'], select: { columns: '*', sql: "created_at > NOW() - INTERVAL '90 days'", }, }, } ``` ```sql SELECT * FROM main.orders WHERE created_at > NOW() - INTERVAL '90 days' ``` ### Computed Conditions ```typescript select: { columns: '*', sql: "amount * quantity <= 10000", } ``` ```sql SELECT * FROM main.orders WHERE amount * quantity <= 10000 ``` ### Combining with Where Clauses Raw SQL can be combined with standard where clauses inside the same operation block: ```typescript permissions: { view_org_recent_orders: { table: 'main.orders', roles: ['member'], select: { columns: '*', where: { organization_id: { $in: '$user.org_ids' }, status: { $ne: 'deleted' }, }, sql: "created_at > NOW() - INTERVAL '30 days'", }, }, } ``` ```sql SELECT * FROM main.orders WHERE organization_id IN ('org_1', 'org_2') AND status != 'deleted' AND created_at > NOW() - INTERVAL '30 days' ``` ### Using Session Variables in SQL Reference session variables with the `$user.` prefix inside the SQL string: ```typescript select: { columns: '*', sql: "created_by = '$user.id' OR assigned_to = '$user.id'", } ``` The engine replaces `$user.*` tokens with their actual values before executing. 
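The substitution step can be pictured with a small sketch — a hypothetical helper, not superapp's internals — where each `$user.*` token becomes a bind parameter instead of an interpolated string:

```typescript
// Hypothetical sketch: replace $user.* tokens with bind parameters.
// The session shape and helper name are illustrative, not superapp's API.
type Session = { id: string; email: string; [k: string]: unknown }

function bindSessionTokens(fragment: string, user: Session): { sql: string; params: unknown[] } {
  const params: unknown[] = []
  // Match quoted or bare tokens like '$user.id' and push the value as a parameter
  const sql = fragment.replace(/'?\$user\.([a-z_]+)'?/g, (_match, prop: string) => {
    params.push(user[prop])
    return '?' // bind parameter, never string interpolation
  })
  return { sql, params }
}
```

With `user.id = 'usr_123'`, the fragment `created_by = '$user.id' OR assigned_to = '$user.id'` becomes `created_by = ? OR assigned_to = ?` with params `['usr_123', 'usr_123']` — which is why these references are safe from SQL injection.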
### Raw SQL on Update and Delete Raw SQL also works inside `update` and `delete` operation blocks: ```typescript permissions: { delete_old_drafts: { table: 'main.orders', roles: ['admin'], delete: { where: { status: { $eq: 'draft' } }, sql: "created_at < NOW() - INTERVAL '90 days'", }, }, } ``` ## Safety Rules Raw SQL is powerful but comes with constraints to prevent abuse: 1. **Read-only** — Only expressions valid in a WHERE clause are allowed. No subqueries with `INSERT`, `UPDATE`, `DELETE`, or `DROP`. 2. **No semicolons** — The engine rejects any SQL containing `;` to prevent statement injection. 3. **No comments** — SQL comments (`--` and `/* */`) are stripped. 4. **Parameterized session variables** — `$user.*` references are parameterized, not string-interpolated, to prevent SQL injection. 5. **Schema-scoped** — You can only reference columns from the permission's table and its known relationships. 6. **Allowlisted functions** — Only safe SQL functions are permitted (e.g., `NOW()`, `COALESCE`, `LOWER`, `UPPER`, `LENGTH`). System functions and file access functions are blocked. ## When to Use Raw SQL | Use Case | Recommendation | |---|---| | Simple equality | Use `$eq` operator | | IN / NOT IN | Use `$in` / `$nin` | | Range checks | Use `$gt`, `$gte`, `$lt`, `$lte` | | Date math | Use `sql` | | Computed columns | Use `sql` | | Cross-column logic | Use `sql` | | String patterns | Use `sql` with `LIKE` | Prefer standard operators when possible. Use `sql` only when the built-in operators cannot express your access rule. --- ## Middleware Run custom TypeScript before and after query execution — modify where clauses, transform input, restrict columns, and run side effects. URL: /docs/backend/permissions/middleware When declarative where clauses, validate rules, and defaults/overwrite are not enough, `middleware` lets you wrap the query execution with custom TypeScript. Middleware is defined inside an operation block. 
Every parameter is destructured — you work directly with `user`, `db`, `where`, `input`, and pass overrides to `next()` to control what gets executed. ```typescript permissions: { manage_tasks: { table: 'main.tasks', roles: ['member'], select: { columns: '*', where: { organization_id: { $eq: '$user.current_org_id' } }, middleware: async ({ user, db, where }, next) => { // Before: tighten the where clause based on a dynamic lookup const activeTeams = await db .select({ id: teams.id }) .from(teams) .innerJoin(teamMembers, eq(teamMembers.teamId, teams.id)) .where(and( eq(teamMembers.userId, user.id), eq(teamMembers.status, 'active'), )) const rows = await next({ where: { ...where, team_id: { $in: activeTeams.map(t => t.id) } }, }) // After: attach computed data before returning return rows.map(row => ({ ...row, is_mine: row.assigned_to === user.id, })) }, }, }, } ``` ## How It Works Middleware wraps the query with two hooks — **before** and **after** `next()`. You decide what runs at each stage: ```typescript select: { columns: '*', middleware: async ({ user, db, where, input, columns }, next) => { // BEFORE — runs before the query // Validate access, look up data, modify the request const rows = await next({ where: { ...where, status: { $ne: 'archived' } }, }) // AFTER — runs after the query // Transform results, run side effects, enrich rows return rows }, } ``` | Stage | What happens | |---|---| | **Before `next()`** | Validate access, look up data, compute overrides, modify the request | | **`next(overrides)`** | Executes the query — pass overrides to change what runs | | **After `next()`** | Transform results, run side effects, enrich with computed data | | **Skip `next()`** | Throw to deny the request entirely | ```text 6. Permission Check — where, validate, default, overwrite applied | v 6a. middleware (before next) <-- modify request here | v 7. Query Execution <-- next() triggers this | v 7a. middleware (after next) <-- transform results here | v 8. 
Audit Log
```

## Parameters

Every parameter is destructured — no wrapper object:

```typescript
middleware: async ({ user, db, table, operation, columns, query, input, where }, next) => {
```

| Parameter | Type | Description |
|---|---|---|
| `user` | `UserSession` | Resolved session (same as `$user.*` in where clauses) |
| `db` | `DrizzleInstance` | Drizzle query builder — run any query |
| `table` | `string` | Target table, e.g. `'main.orders'` |
| `operation` | `'select' \| 'insert' \| 'update' \| 'delete'` | Detected operation |
| `columns` | `string[]` | Columns the client is requesting or writing |
| `query` | `{ sql: string, params: any[] }` | The client's raw parameterized SQL |
| `input` | `Record<string, any> \| undefined` | Request body values (writes only) |
| `where` | `Record<string, any> \| undefined` | Resolved where clause (after merging and `$user.*` substitution) |

## Modifying the Request

Pass overrides to `next()` to change what gets executed. This is how you modify the incoming request before it hits the database.

### Evaluate and Modify Where Clauses

The `where` parameter contains the fully resolved where clause — after declarative `where`, `$user.*` substitution, and `sql` fragments are merged.
You can read it, extend it, or replace it entirely: ```typescript // Extend: add conditions to the existing where middleware: async ({ user, db, where }, next) => { const teamIds = await getActiveTeamIds(db, user.id) return next({ where: { ...where, team_id: { $in: teamIds }, status: { $ne: 'archived' }, }, }) }, ``` ```typescript // Replace: swap the where clause entirely based on role middleware: async ({ user, where }, next) => { if (user.roles.includes('admin')) { // Admins see everything in their org return next({ where: { organization_id: { $eq: user.current_org_id } }, }) } // Everyone else keeps the original restrictive where return next() }, ``` ```typescript // Conditional: evaluate the incoming where and decide middleware: async ({ user, db, where }, next) => { // If the client is filtering by a specific project, verify access if (where?.project_id) { const hasAccess = await db.query.projectMembers.findFirst({ where: and( eq(projectMembers.projectId, where.project_id.$eq ?? where.project_id), eq(projectMembers.userId, user.id), ), }) if (!hasAccess) { throw new PermissionError('Not a member of this project') } } return next() }, ``` ### Transform Input Modify the request body before it's written to the database. 
Useful for normalizing values, computing derived fields, or injecting server-side data: ```typescript // Normalize and enrich input on insert middleware: async ({ user, db, input }, next) => { const product = await db.query.products.findFirst({ where: eq(products.id, input.product_id), }) if (!product) throw new PermissionError('Product not found') return next({ input: { ...input, email: input.email?.toLowerCase().trim(), unit_price: product.price, total: product.price * input.quantity, }, }) }, ``` ```typescript // Conditionally modify input on update middleware: async ({ user, input }, next) => { if (input?.status === 'published') { return next({ input: { ...input, published_at: new Date().toISOString(), published_by: user.id, }, }) } return next() }, ``` ### Restrict Columns Narrow the column list based on runtime conditions — subscriptions, roles, or any dynamic lookup: ```typescript // Free users see basic columns, paid see everything middleware: async ({ user, db }, next) => { const sub = await db.query.subscriptions.findFirst({ where: eq(subscriptions.organizationId, user.current_org_id), columns: { plan: true }, }) if (sub?.plan === 'free') { return next({ columns: ['id', 'name', 'status', 'created_at'] }) } return next() }, ``` ### Override Summary | Override | Effect | |---|---| | `where` | Replaces the where clause used in the WHERE | | `input` | Replaces the input data for writes | | `columns` | Replaces the column list | | `db` | Replaces the database instance (use for transactions) | If you call `next()` with no arguments, the query executes unchanged. ## Transforming Results Code after `next()` runs once the query completes. Use it to transform, enrich, or redact rows before they reach the client: ```typescript // Redact PII for non-clinical staff middleware: async ({ user }, next) => { const rows = await next() if (!user.roles.includes('clinical')) { return rows.map(row => ({ ...row, email: row.email ? 
`${row.email[0]}***@***` : null, phone: row.phone ? '***-****' : null, ssn: row.ssn ? '***-**-' + row.ssn.slice(-4) : null, })) } return rows }, ``` ```typescript // Enrich rows with computed data from another table middleware: async ({ db }, next) => { const rows = await next() if (rows.length === 0) return rows const counts = await db .select({ orderId: orderItems.orderId, itemCount: count() }) .from(orderItems) .where(inArray(orderItems.orderId, rows.map(r => r.id))) .groupBy(orderItems.orderId) const countMap = new Map(counts.map(c => [c.orderId, c.itemCount])) return rows.map(row => ({ ...row, item_count: countMap.get(row.id) ?? 0, })) }, ``` ## Transactions Wrap `next()` in `db.transaction()` and pass the transaction as `db` — if anything throws after the query, everything rolls back: ```typescript middleware: async ({ user, db, input }, next) => { return db.transaction(async (tx) => { // The main query runs inside the transaction const rows = await next({ db: tx }) // This runs after the query — if it throws, both the // main query and this update are rolled back await tx.insert(auditLogs).values({ userId: user.id, action: 'order_created', orderId: rows[0]?.id, timestamp: new Date(), }) return rows }) }, ``` This is especially useful for writes that need atomic side effects — stock decrements, balance updates, counter increments, or any operation where partial completion is not acceptable. ## Examples ### E-Commerce: Order Creation with Stock Validation An online store lets customers place orders through a dashboard built with Drizzle on the client. The client sends an `INSERT` with just the product and quantity — but the server needs to verify stock availability, compute the total price from the catalog, decrement inventory, and queue a warehouse fulfillment job. All of this must happen atomically: if the warehouse job fails to insert, the order and stock change must roll back too. 
```typescript create_order: { table: 'main.orders', roles: ['customer'], insert: { columns: ['product_id', 'quantity', 'shipping_address'], overwrite: { customer_id: '$user.id', organization_id: '$user.current_org_id', status: 'pending', }, validate: { quantity: { $gte: 1, $lte: 100 }, }, middleware: async ({ user, db, input }, next) => { // Before: validate product and check stock const product = await db.query.products.findFirst({ where: and( eq(products.id, input.product_id), eq(products.organizationId, user.current_org_id), ), }) if (!product) { throw new PermissionError('Product not found') } if (product.stock < input.quantity) { throw new PermissionError( `Only ${product.stock} units available`, ) } // Execute atomically: insert order + decrement stock + queue job return db.transaction(async (tx) => { const rows = await next({ db: tx, // Before: enrich input with server-computed values input: { ...input, unit_price: product.price, total: product.price * input.quantity, }, }) // After: side effects inside the transaction await tx .update(products) .set({ stock: sql`stock - ${input.quantity}` }) .where(eq(products.id, input.product_id)) await tx.insert(warehouseJobs).values({ orderId: rows[0].id, productId: input.product_id, quantity: input.quantity, status: 'queued', }) return rows }) }, }, } ``` ### Project Management: Dynamic Where from Team Membership A project management app where users belong to multiple teams. Tasks should only be visible to members of the team that owns them. The team membership is complex — users can be active or inactive, and membership can expire — so a simple declarative `where` with `$user.team_ids` isn't enough. The middleware evaluates the user's current memberships and injects a dynamic where clause. 
```typescript view_tasks: { table: 'main.tasks', roles: ['member'], select: { columns: '*', middleware: async ({ user, db, where }, next) => { // Before: look up active teams and inject into where const activeTeams = await db .select({ id: teams.id }) .from(teams) .innerJoin(teamMembers, eq(teamMembers.teamId, teams.id)) .where( and( eq(teamMembers.userId, user.id), eq(teamMembers.status, 'active'), gt(teamMembers.expiresAt, new Date()), ), ) if (activeTeams.length === 0) { throw new PermissionError('No active team memberships') } return next({ where: { ...where, team_id: { $in: activeTeams.map(t => t.id) }, }, }) }, }, } ``` ### Multi-Tenant: Cascading Delete with Usage Recalculation A multi-tenant platform tracks per-organization resource usage in a denormalized `organizations.project_count` column for fast billing lookups. When a user deletes a project, the count must be recalculated. Both the delete and the count update must succeed or fail together. ```typescript delete_project: { table: 'main.projects', roles: ['admin'], delete: { where: { organization_id: { $eq: '$user.current_org_id' }, }, middleware: async ({ user, db }, next) => { return db.transaction(async (tx) => { const rows = await next({ db: tx }) // After: recalculate count — if this fails, the delete rolls back const [{ total }] = await tx .select({ total: count() }) .from(projects) .where(eq(projects.organizationId, user.current_org_id)) await tx .update(organizations) .set({ projectCount: total }) .where(eq(organizations.id, user.current_org_id)) return rows }) }, }, } ``` ### Approval Workflow: Role-Based Status Transitions A procurement system where purchase orders go through an approval workflow: `draft` -> `submitted` -> `approved` -> `fulfilled`. The middleware evaluates the incoming input to enforce who can set which status, based on the current state and the user's role. 
```typescript manage_purchase_orders: { table: 'main.purchase_orders', roles: ['member', 'manager'], select: { columns: '*', where: { organization_id: { $in: '$user.org_ids' } }, }, update: { columns: '*', where: { organization_id: { $in: '$user.org_ids' } }, validate: { status: { $in: ['draft', 'submitted', 'approved', 'fulfilled'] }, }, overwrite: { updated_by: '$user.id' }, middleware: async ({ user, db, input, where }, next) => { // Before: evaluate the incoming request and enforce transitions if (!input?.status) return next() const [order] = await db .select({ status: purchaseOrders.status, createdBy: purchaseOrders.createdBy }) .from(purchaseOrders) .where(eq(purchaseOrders.id, input.id)) .limit(1) if (!order) throw new PermissionError('Order not found') // draft -> submitted: only the creator if (input.status === 'submitted' && order.status === 'draft') { if (order.createdBy !== user.id) { throw new PermissionError('Only the creator can submit this order') } } // submitted -> approved: only managers if (input.status === 'approved' && order.status === 'submitted') { if (!user.roles.includes('manager')) { throw new PermissionError('Only managers can approve orders') } } // No skipping steps const allowedTransitions = { draft: ['submitted'], submitted: ['approved', 'draft'], approved: ['fulfilled'], } if (!allowedTransitions[order.status]?.includes(input.status)) { throw new PermissionError( `Cannot transition from ${order.status} to ${input.status}`, ) } return next() }, }, } ``` ## Combining with Declarative Rules Middleware works alongside where clauses, validate rules, defaults, overwrite, and raw SQL. The engine applies them in order — declarative rules run first, then middleware wraps the execution: ```text 1. where -> WHERE clause injected 2. validate -> Input values validated 3. default -> Missing values filled 4. overwrite -> Server values forced 5. sql -> SQL fragment appended to WHERE 6. 
middleware -> Your TypeScript wraps the execution (before + after) 7. Query executes (inside next()) ``` Use declarative rules for the common cases and middleware for the edge cases: ```typescript manage_orders: { table: 'main.orders', roles: ['member', 'manager'], select: { columns: '*', where: { organization_id: { $in: '$user.org_ids' } }, }, insert: { columns: ['amount', 'status', 'customer_id'], validate: { amount: { $gte: 0 } }, overwrite: { created_by: '$user.id' }, }, update: { columns: ['amount', 'status'], where: { organization_id: { $in: '$user.org_ids' } }, validate: { amount: { $gte: 0 } }, overwrite: { updated_by: '$user.id' }, middleware: async ({ user, db, input }, next) => { // Before: enforce business rules on the incoming request if (input?.status === 'approved') { const membership = await db.query.orgMembers.findFirst({ where: and( eq(orgMembers.userId, user.id), eq(orgMembers.orgId, user.current_org_id), ), columns: { role: true }, }) if (membership?.role !== 'manager') { throw new PermissionError('Only managers can approve orders') } } const rows = await next() // After: strip sensitive fields for non-managers if (!user.roles.includes('manager')) { return rows.map(({ internal_notes, ...row }) => row) } return rows }, }, } ``` ## Error Handling Throw `PermissionError` to reject with `403 Forbidden`: ```typescript middleware: async (params, next) => { throw new PermissionError('Reason shown in error response') }, ``` Any other thrown error returns `500 Internal Server Error` and is logged but not exposed to the client. When using transactions, any throw inside `db.transaction()` automatically rolls back all queries within that transaction. 
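The wrap-around contract can be reduced to a small sketch — illustrative shapes, not the real engine types. The engine builds `next()` over the query executor and hands it to your middleware, so code before `next()` shapes the request and code after transforms the rows:

```typescript
// Illustrative sketch of the middleware contract — not superapp's real types.
type Where = Record<string, any>
type Overrides = { where?: Where }
type Next = (overrides?: Overrides) => Promise<any[]>
type Middleware = (params: { where: Where }, next: Next) => Promise<any[]>

// The engine wires next() to the query executor; middleware wraps it.
async function runWithMiddleware(
  where: Where,
  execute: (where: Where) => Promise<any[]>,
  middleware?: Middleware,
): Promise<any[]> {
  const next: Next = overrides => execute(overrides?.where ?? where)
  return middleware ? middleware({ where }, next) : next()
}
```

A middleware that tightens the where before `next()` and tags rows after it would plug straight into this shape; calling `next()` with no overrides executes the query unchanged, and throwing before `next()` denies the request.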
## When to Use | Use Case | Recommendation | |---|---| | Static row filtering | Use [where clauses](/docs/backend/permissions/where) | | Value validation | Use [validate](/docs/backend/permissions/validate) | | Fill missing values | Use [default](/docs/backend/permissions/defaults) | | Force server values | Use [overwrite](/docs/backend/permissions/defaults) | | Date math, computed WHERE | Use [raw SQL](/docs/backend/permissions/sql) | | Dynamic where from lookups | Use `middleware` with `next({ where })` | | Input normalization / enrichment | Use `middleware` with `next({ input })` | | Runtime column restriction | Use `middleware` with `next({ columns })` | | Result transformation / redaction | Use `middleware` (after `next()`) | | Business rule enforcement | Use `middleware` (before `next()`) | | Atomic writes with side effects | Use `middleware` with `db.transaction()` | Prefer declarative rules when possible. Use `middleware` only when the built-in mechanisms cannot express your logic. --- ## Actions Typed server-side functions callable by the client, with full access to the database, user session, and any table. URL: /docs/backend/actions Actions are named server-side functions defined at the engine level. Unlike permissions (which are bound to a single table), actions can touch any table, run multi-table transactions, and execute arbitrary business logic. The client calls them by name — no raw SQL. Use as few actions as possible. For standard reads and writes, use the ORM directly through permissions — it's safer, simpler, and fully typed out of the box. Reserve actions for logic that genuinely can't be expressed as single-table CRUD: multi-table transactions, complex aggregations, or workflows with side effects. Actions use Zod schemas for input and output validation, and the types flow end-to-end: the schema endpoint exposes them, the CLI generates TypeScript types, and the client gets full autocomplete on `db.action()` calls. 
```typescript const engine = createEngine({ connections: { main: process.env.PG_URL! }, permissions: { view_products: { /* ... */ }, edit_products: { /* ... */ }, }, actions: { incrementStock: { roles: ['warehouse_manager'], input: z.object({ productId: z.string(), amount: z.number().positive() }), output: z.object({ id: z.string(), stock: z.number() }), handler: async (ctx, inputData) => { const { db } = ctx const { productId, amount } = inputData const [updated] = await db .update(products) .set({ stock: sql`stock + ${amount}` }) .where(eq(products.id, productId)) .returning({ id: products.id, stock: products.stock }) return updated }, }, }, }) ``` ```typescript // Client — fully typed input and output const result = await db.action('incrementStock', { productId: 'prod_123', amount: 5, }) // result: { id: string; stock: number } ``` ## How It Works 1. Client sends `POST /actions/{actionName}` with a JSON body 2. Engine authenticates the user and resolves the session 3. Engine checks if the user's role is listed in the action's `roles` array 4. Engine validates the input against the Zod schema — rejects with `400` if invalid 5. Engine calls `handler` with `ctx` and the validated input 6. Engine validates the output against the Zod schema (if defined) 7. Return value is sent back to the client as JSON ```text POST /actions/incrementStock + Bearer JWT + { productId, amount } │ ▼ 1. Auth — JWT verification, session resolution │ ▼ 2. Role check — is the user's role in the action's roles array? │ ▼ 3. Input validation — parse input against Zod schema │ ▼ 4. Execute — handler(ctx, validatedInput) │ ▼ 5. 
Response — return value sent as JSON
```

## Defining Actions

Each action has four fields:

| Field | Type | Required | Description |
|---|---|---|---|
| `roles` | `string[]` | Yes | Roles allowed to call this action |
| `input` | `ZodSchema` | Yes | Zod schema that validates and types the client's request body |
| `output` | `ZodSchema` | No | Zod schema that validates and types the return value |
| `handler` | `(ctx, inputData) => Promise<any>` | Yes | The function that executes the action |

```typescript
const engine = createEngine({
  connections: { /* ... */ },
  permissions: { /* ... */ },
  actions: {
    myAction: {
      roles: ['editor', 'admin'],
      input: z.object({ id: z.string() }),
      output: z.object({ success: z.boolean() }),
      handler: async (ctx, inputData) => {
        const { id } = inputData // inputData is typed as { id: string }
        return { success: true } // return is typed as { success: boolean }
      },
    },
  },
})
```

Each action declares its own `roles` array inline -- no need to reference actions in a top-level roles config.

The `handler` function receives:

| Parameter | Type | Description |
|---|---|---|
| `ctx` | `{ user, db }` | Context object containing the user session and Drizzle query builder |
| `inputData` | `z.infer<typeof input>` | The client's request body, validated and typed by the `input` schema |

The `ctx` object contains:

| Property | Type | Description |
|---|---|---|
| `ctx.user` | `UserSession` | Resolved session (same as `$user.*` in filters) |
| `ctx.db` | `DrizzleInstance` | Drizzle query builder — run any query, any table |

If the handler returns nothing, the client receives `{ ok: true }`.

## Type Safety

Actions are type-safe end-to-end — from engine definition to client call.

### 1. Schema Endpoint

The `/schema` endpoint includes action definitions with their input/output JSON schemas:

```json
{
  "connections": { "..."
}, "actions": { "incrementStock": { "input": { "type": "object", "properties": { "productId": { "type": "string" }, "amount": { "type": "number" } }, "required": ["productId", "amount"] }, "output": { "type": "object", "properties": { "id": { "type": "string" }, "stock": { "type": "number" } }, "required": ["id", "stock"] } } } } ``` ### 2. Type Generation The CLI generates action types alongside table types: ```bash npx superapp generate --url http://localhost:3001 ``` ```typescript // generated/schema.ts — auto-generated, do not edit export interface SuperAppSchema { main: { orders: { id: string; amount: number; status: string } products: { id: string; name: string; stock: number } } } export interface SuperAppActions { incrementStock: { input: { productId: string; amount: number } output: { id: string; stock: number } } decrementStock: { input: { productId: string; amount: number } output: { id: string; stock: number } } resetStock: { input: { productId: string } output: void } } ``` ### 3. 
Client Usage

Pass the generated types to `drizzle()` for fully typed queries and actions:

```typescript
const db = drizzle({
  connection: 'http://localhost:3001',
  token: session.token,
  schema,
})

// Full autocomplete on action name, input, and output
const result = await db.action('incrementStock', {
  productId: 'prod_123', // ← autocomplete
  amount: 5,             // ← autocomplete
})
// result: { id: string; stock: number }
```

Calling an action that doesn't exist or passing wrong input types is a compile-time error:

```typescript
// ✗ Type error — 'unknownAction' does not exist
await db.action('unknownAction', {})

// ✗ Type error — 'amount' must be number, not string
await db.action('incrementStock', { productId: 'prod_123', amount: '5' })
```

## Client API

```typescript
const result = await db.action('incrementStock', { ...params })
```

This sends:

```
POST /actions/incrementStock
Authorization: Bearer <token>
Content-Type: application/json

{ ...params }
```

## Examples

### Inventory: Increment, Decrement, and Reset Stock

A warehouse management system where operators scan items in and out. The client shouldn't write raw `SET stock = stock + 1` SQL — the server controls the atomic update and validates stock bounds. Each operation is a separate action with clear naming.
```typescript actions: { incrementStock: { roles: ['warehouse_manager', 'admin'], input: z.object({ productId: z.string(), amount: z.number().positive() }), output: z.object({ id: z.string(), stock: z.number() }), handler: async (ctx, inputData) => { const { db, user } = ctx const { productId, amount } = inputData const [updated] = await db .update(products) .set({ stock: sql`stock + ${amount}`, lastUpdatedBy: user.id, }) .where(eq(products.id, productId)) .returning({ id: products.id, stock: products.stock }) return updated }, }, decrementStock: { roles: ['warehouse_manager', 'admin'], input: z.object({ productId: z.string(), amount: z.number().positive() }), output: z.object({ id: z.string(), stock: z.number() }), handler: async (ctx, inputData) => { const { db, user } = ctx const { productId, amount } = inputData const product = await db.query.products.findFirst({ where: eq(products.id, productId), }) if (!product) throw new PermissionError('Product not found') if (product.stock < amount) { throw new PermissionError(`Only ${product.stock} units available`) } const [updated] = await db .update(products) .set({ stock: sql`stock - ${amount}`, lastUpdatedBy: user.id, }) .where(eq(products.id, productId)) .returning({ id: products.id, stock: products.stock }) return updated }, }, resetStock: { roles: ['admin'], input: z.object({ productId: z.string() }), handler: async (ctx, inputData) => { const { db, user } = ctx const { productId } = inputData await db .update(products) .set({ stock: 0, lastResetBy: user.id, lastResetAt: new Date(), }) .where(eq(products.id, productId)) }, }, } ``` ```typescript // Client — typed input and output const updated = await db.action('incrementStock', { productId: 'prod_123', amount: 10 }) // updated: { id: string; stock: number } await db.action('decrementStock', { productId: 'prod_123', amount: 3 }) await db.action('resetStock', { productId: 'prod_123' }) ``` ### Finance: Transfer Balance Between Accounts A fintech app where users 
transfer money between their own accounts. The transfer must be atomic — debit and credit must both succeed or both fail. This is a textbook case for actions: it touches two rows in the same table, requires row locking, and creates a record in a separate `transactions` table — none of which maps to a single `UPDATE` statement. ```typescript actions: { transfer: { roles: ['account_holder'], input: z.object({ fromAccountId: z.string(), toAccountId: z.string(), amount: z.number().positive(), }), output: z.object({ id: z.string(), fromAccountId: z.string(), toAccountId: z.string(), amount: z.number(), timestamp: z.string(), }), handler: async (ctx, inputData) => { const { db, user } = ctx const { fromAccountId, toAccountId, amount } = inputData return db.transaction(async (tx) => { const [source] = await tx .select() .from(accounts) .where(and(eq(accounts.id, fromAccountId), eq(accounts.ownerId, user.id))) .for('update') if (!source) throw new PermissionError('Source account not found') if (source.balance < amount) throw new PermissionError('Insufficient funds') const [dest] = await tx .select() .from(accounts) .where(eq(accounts.id, toAccountId)) .for('update') if (!dest) throw new PermissionError('Destination account not found') await tx.update(accounts).set({ balance: sql`balance - ${amount}` }).where(eq(accounts.id, fromAccountId)) await tx.update(accounts).set({ balance: sql`balance + ${amount}` }).where(eq(accounts.id, toAccountId)) const [record] = await tx .insert(transactions) .values({ fromAccountId, toAccountId, amount, initiatedBy: user.id, timestamp: new Date() }) .returning() return record }) }, }, } ``` ```typescript // Client const tx = await db.action('transfer', { fromAccountId: 'acc_checking', toAccountId: 'acc_savings', amount: 500, }) // tx: { id: string; fromAccountId: string; toAccountId: string; amount: number; timestamp: string } ``` ### E-Commerce: Apply Discount Code An online store where the client sends a discount code and the server 
validates it against the `discount_codes` table, checks expiration and usage limits, computes the discount amount (percentage or fixed), applies it to the order, and increments the usage counter. All in one transaction — if any step fails, nothing changes. ```typescript actions: { applyDiscount: { roles: ['customer'], input: z.object({ orderId: z.string(), code: z.string().min(1) }), output: z.object({ discountAmount: z.number(), newTotal: z.number() }), handler: async (ctx, inputData) => { const { db, user } = ctx const { orderId, code } = inputData return db.transaction(async (tx) => { const order = await tx.query.orders.findFirst({ where: and( eq(orders.id, orderId), eq(orders.customerId, user.id), eq(orders.status, 'draft'), ), }) if (!order) throw new PermissionError('Order not found or not editable') const discount = await tx.query.discountCodes.findFirst({ where: and( eq(discountCodes.code, code.toUpperCase()), gt(discountCodes.expiresAt, new Date()), lt(discountCodes.usageCount, discountCodes.usageLimit), ), }) if (!discount) throw new PermissionError('Invalid or expired discount code') const discountAmount = discount.type === 'percentage' ? order.subtotal * (discount.value / 100) : discount.value const applied = Math.min(discountAmount, order.subtotal) await tx.update(orders).set({ discountCode: code.toUpperCase(), discountAmount: applied, total: order.subtotal - applied, }).where(eq(orders.id, orderId)) await tx.update(discountCodes) .set({ usageCount: sql`usage_count + 1` }) .where(eq(discountCodes.id, discount.id)) return { discountAmount: applied, newTotal: order.subtotal - applied } }) }, }, } ``` ```typescript // Client const { discountAmount, newTotal } = await db.action('applyDiscount', { orderId: 'ord_456', code: 'SUMMER20', }) // discountAmount: number, newTotal: number ``` ### Team Management: Invite Member with Email Notification A multi-tenant app where admins invite new members to their organization. 
The action validates the email isn't already a member, enforces role hierarchy (only admins can invite admins), creates the membership record, and queues an invite email. This spans `members` and `email_jobs` tables — clearly not a single-table CRUD operation. ```typescript actions: { inviteMember: { roles: ['admin', 'owner'], input: z.object({ email: z.string().email(), role: z.enum(['viewer', 'editor', 'admin']), }), output: z.object({ memberId: z.string(), status: z.literal('invited') }), handler: async (ctx, inputData) => { const { db, user } = ctx const { email, role } = inputData if (role === 'admin' && !user.roles.includes('owner')) { throw new PermissionError('Only owners can invite admins') } const existing = await db.query.members.findFirst({ where: and( eq(members.organizationId, user.current_org_id), eq(members.email, email.toLowerCase()), ), }) if (existing) throw new PermissionError('Already a member') const [member] = await db.insert(members).values({ organizationId: user.current_org_id, email: email.toLowerCase(), role, status: 'invited', invitedBy: user.id, invitedAt: new Date(), }).returning() await db.insert(emailJobs).values({ to: email.toLowerCase(), template: 'org-invite', data: { inviterName: user.name, orgName: user.org_name, role }, }) return { memberId: member.id, status: 'invited' as const } }, }, } ``` ```typescript // Client const result = await db.action('inviteMember', { email: 'alice@example.com', role: 'editor', }) // result: { memberId: string; status: 'invited' } ``` ### Analytics: Server-Side Aggregation Report A dashboard where the client requests a revenue report. The aggregation query uses `DATE_TRUNC`, `SUM`, `AVG`, and `GROUP BY` — complex SQL that shouldn't be expressed on the client. The action runs a pre-defined query scoped to the user's organization and returns the computed result. No writes, just a controlled read that returns aggregated data. 
```typescript actions: { revenueReport: { roles: ['analyst', 'admin', 'owner'], input: z.object({ startDate: z.string().date(), endDate: z.string().date(), }), output: z.array(z.object({ month: z.string(), totalRevenue: z.number(), orderCount: z.number(), avgOrderValue: z.number(), })), handler: async (ctx, inputData) => { const { db, user } = ctx const { startDate, endDate } = inputData return db .select({ month: sql`DATE_TRUNC('month', ${orders.createdAt})`.as('month'), totalRevenue: sql`SUM(${orders.total})`.as('total_revenue'), orderCount: count(), avgOrderValue: sql`AVG(${orders.total})`.as('avg_order_value'), }) .from(orders) .where( and( eq(orders.organizationId, user.current_org_id), gte(orders.createdAt, new Date(startDate)), lte(orders.createdAt, new Date(endDate)), eq(orders.status, 'completed'), ), ) .groupBy(sql`DATE_TRUNC('month', ${orders.createdAt})`) .orderBy(sql`month`) }, }, } ``` ```typescript // Client const report = await db.action('revenueReport', { startDate: '2025-01-01', endDate: '2025-12-31', }) // report: { month: string; totalRevenue: number; orderCount: number; avgOrderValue: number }[] ``` ## Actions vs Permissions vs Middleware | | Permissions (CRUD) | Middleware | Actions | |---|---|---|---| | **Scope** | Single table | Single table (wraps CRUD) | Any table, any logic | | **Client sends** | SQL via Drizzle Proxy | SQL via Drizzle Proxy | Action name + typed JSON | | **Defined on** | Permission object | Permission object | Engine config (top-level) | | **Access control** | Role → permission → table | Inherits from permission | Inline `roles` array on each action | | **Type safety** | Schema-generated table types | Inherits from permission | Zod input/output schemas | | **Use when** | Standard reads and writes | Intercept or extend a CRUD query | Multi-table logic, workflows, aggregations | ## Error Handling Throw `PermissionError` to reject with `403 Forbidden`: ```typescript myAction: { input: z.object({ id: z.string() }), handler: 
async (ctx) => { throw new PermissionError('Reason shown in error response') }, }, ``` Invalid input (fails Zod validation) returns `400 Bad Request` with the Zod error details. Any other thrown error returns `500 Internal Server Error` and is logged but not exposed to the client. When using `db.transaction()`, any throw automatically rolls back all queries in that transaction. --- ## Hono Deploy the superapp backend with Hono, the recommended adapter for Node.js and edge runtimes. URL: /docs/backend/adapters/hono Hono is the recommended adapter. It works on Node.js, Bun, Deno, and Cloudflare Workers with zero config changes. ```typescript const engine = createEngine({ connections: { main: process.env.PG_URL!, }, }) const app = new Hono() app.route('/', createHonoMiddleware(engine)) serve({ fetch: app.fetch, port: 3001 }) ``` ## With Authentication and Permissions ```typescript const engine = createEngine({ connections: { main: process.env.PG_URL!, }, auth: betterAuthProvider({ secret: process.env.AUTH_SECRET!, userTable: { table: 'main.users', matchOn: { column: 'id', jwtField: 'id' }, }, }), permissions: { view_own_orders: { name: 'View own orders', table: 'main.orders', roles: ['viewer'], select: { columns: '*', where: { customer_id: { $eq: '$user.id' } }, }, }, }, }) const app = new Hono() app.route('/', createHonoMiddleware(engine)) serve({ fetch: app.fetch, port: 3001 }, (info) => { console.log(`Server running at http://localhost:${info.port}`) }) ``` ## Adding Custom Routes Mount the engine middleware alongside your own Hono routes: ```typescript const app = new Hono() // Custom routes app.get('/health', (c) => c.json({ status: 'ok' })) // Engine routes (/data, /schema, /auth/*, /admin) app.route('/', createHonoMiddleware(engine)) serve({ fetch: app.fetch, port: 3001 }) ``` ## With CORS ```typescript const app = new Hono() app.use('*', cors({ origin: ['https://myapp.com'], credentials: true, })) app.route('/', createHonoMiddleware(engine)) ``` ## Route 
Prefix Mount the engine under a custom path prefix: ```typescript app.route('/api/v1', createHonoMiddleware(engine)) // Routes become: /api/v1/data, /api/v1/schema, /api/v1/auth/*, /api/v1/admin ``` --- ## Next.js Integrate the superapp backend into a Next.js App Router project with a catch-all route handler. URL: /docs/backend/adapters/nextjs Use the Next.js adapter to add the engine to your App Router project as a catch-all API route. ```typescript // app/api/[...superapp]/route.ts const engine = createEngine({ connections: { main: process.env.PG_URL!, }, }) const handler = createNextHandler(engine) export const GET = handler export const POST = handler export const PUT = handler export const DELETE = handler ``` ## With Authentication ```typescript // app/api/[...superapp]/route.ts const engine = createEngine({ connections: { main: process.env.PG_URL!, }, auth: betterAuthProvider({ secret: process.env.AUTH_SECRET!, userTable: { table: 'main.users', matchOn: { column: 'id', jwtField: 'id' }, }, }), permissions: { view_own_orders: { name: 'View own orders', table: 'main.orders', roles: ['viewer'], select: { columns: '*', where: { customer_id: { $eq: '$user.id' } }, }, }, }, }) const handler = createNextHandler(engine) export const GET = handler export const POST = handler export const PUT = handler export const DELETE = handler ``` ## Route Structure The catch-all route `[...superapp]` maps to these endpoints: | URL | Maps To | |---|---| | `/api/data` | `POST /data` — query endpoint | | `/api/schema` | `GET /schema` — schema introspection | | `/api/auth/*` | Authentication routes | | `/api/admin` | Admin UI | | `/api/admin/api/*` | Admin API | ## Environment Variables Add your database and auth credentials to `.env.local`: ```bash PG_URL=postgres://user:pass@localhost:5432/mydb AUTH_SECRET=your-secret-key SUPERAPP_MASTER_KEY=your-master-key ``` ## Edge Runtime The engine supports the Next.js edge runtime: ```typescript export const runtime = 'edge' ``` --- ## 
Express Add the superapp backend to an existing Express application as a router middleware. URL: /docs/backend/adapters/express The Express adapter mounts the engine as a standard Express router. ```typescript const engine = createEngine({ connections: { main: process.env.PG_URL!, }, }) const app = express() app.use('/', createExpressRouter(engine)) app.listen(3001, () => console.log('Server running on port 3001')) ``` ## With Authentication ```typescript const engine = createEngine({ connections: { main: process.env.PG_URL!, }, auth: betterAuthProvider({ secret: process.env.AUTH_SECRET!, userTable: { table: 'main.users', matchOn: { column: 'id', jwtField: 'id' }, }, }), permissions: { view_own_orders: { name: 'View own orders', table: 'main.orders', roles: ['viewer'], select: { columns: '*', where: { customer_id: { $eq: '$user.id' } }, }, }, }, }) const app = express() app.use('/', createExpressRouter(engine)) app.listen(3001) ``` ## Adding Custom Routes Mix engine routes with your own Express handlers: ```typescript const app = express() // Custom routes app.get('/health', (req, res) => res.json({ status: 'ok' })) // Engine routes app.use('/', createExpressRouter(engine)) app.listen(3001) ``` ## Route Prefix Mount under a custom path: ```typescript app.use('/api/v1', createExpressRouter(engine)) // Routes become: /api/v1/data, /api/v1/schema, /api/v1/auth/*, /api/v1/admin ``` ## With CORS ```typescript const app = express() app.use(cors({ origin: ['https://myapp.com'], credentials: true, })) app.use('/', createExpressRouter(engine)) ``` --- ## Generic (Workers/Deno/Bun) Use the generic adapter for Cloudflare Workers, Deno, Bun, or any runtime with the Web Fetch API. URL: /docs/backend/adapters/generic The generic adapter works with any runtime that supports the standard `Request`/`Response` Web API. Use it for Cloudflare Workers, Deno, Bun, or custom servers. 
```typescript const engine = createEngine({ connections: { main: process.env.PG_URL!, }, }) const handler = createHandler(engine) export default { fetch: (req: Request) => handler(req), } ``` ## Cloudflare Workers ```typescript const engine = createEngine({ connections: { main: process.env.PG_URL!, }, }) const handler = createHandler(engine) export default { async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> { return handler(request) }, } ``` ## Deno ```typescript const engine = createEngine({ connections: { main: Deno.env.get('PG_URL')!, }, }) const handler = createHandler(engine) Deno.serve({ port: 3001 }, (req) => handler(req)) ``` ## Bun ```typescript const engine = createEngine({ connections: { main: process.env.PG_URL!, }, }) const handler = createHandler(engine) Bun.serve({ port: 3001, fetch: (req) => handler(req), }) ``` ## Handler Signature The generic handler accepts a standard `Request` and returns a `Promise<Response>`: ```typescript type GenericHandler = (request: Request) => Promise<Response> ``` This makes it compatible with any runtime or framework that uses the Web Fetch API standard. --- ## PostgreSQL Wire Protocol Expose the superapp engine as a PostgreSQL-compatible server using pg-gateway. Connect with psql, DBeaver, Metabase, or any PG driver. URL: /docs/backend/adapters/pg-wire Connect any PostgreSQL client directly to the superapp engine. Tools like psql, DBeaver, Metabase, Grafana, and any PostgreSQL driver can query your data with full permission enforcement — no HTTP adapter needed. Powered by [pg-gateway](https://github.com/supabase-community/pg-gateway), the engine speaks the PostgreSQL wire protocol (v3) natively over TCP.
## Quick Start ```typescript const engine = createEngine({ connections: { main: process.env.PG_URL!, }, pgWire: { port: 5433, }, }) engine.startPgWire() ``` ```bash # Connect with psql psql "host=localhost port=5433" ``` ## With Authentication ```typescript const engine = createEngine({ connections: { main: process.env.PG_URL!, }, auth: betterAuthProvider({ secret: process.env.AUTH_SECRET!, userTable: { table: 'main.users', matchOn: { column: 'id', jwtField: 'id' }, }, }), permissions: { view_own_orders: { name: 'View own orders', table: 'main.orders', roles: ['viewer'], select: { columns: '*', where: { customer_id: { $eq: '$user.id' } }, }, }, }, pgWire: { port: 5433, auth: { strategy: 'jwt' }, }, }) engine.startPgWire() ``` ```bash # Connect with JWT as password psql "host=localhost port=5433 user=viewer password=eyJhbGciOiJSUzI1NiIs..." ``` ## Authentication Strategies The wire protocol supports multiple auth strategies. PostgreSQL clients send credentials via the standard username/password fields in the connection handshake. | Strategy | Username field | Password field | How it works | |---|---|---|---| | `jwt` | Ignored (or role hint) | JWT token | Validates JWT, resolves session. Same as HTTP `Authorization: Bearer`. | | `password` | Email | Password | Authenticates against the auth provider (e.g. better-auth). Session created for connection lifetime. | | `apiKey` | Role name | API key | Looks up API key in `superapp_db`. Maps to a user + role. Best for BI tools and machine-to-machine. | | `trust` | — | — | No auth. Dev-only. Only works when no `auth` provider is configured. | ### Single Strategy ```typescript pgWire: { auth: { strategy: 'jwt' }, } ``` ### Multiple Strategies Pass an array to try strategies in order. 
The first successful match wins: ```typescript pgWire: { auth: { strategy: ['jwt', 'apiKey'] }, } ``` ### API Key Auth (BI Tools) For tools like Metabase or Grafana that can't send JWTs, use API key auth: ```typescript pgWire: { auth: { strategy: 'apiKey' }, } ``` ```bash # Metabase / DBeaver connection settings # Host: localhost # Port: 5433 # User: analyst ← role name # Password: sk_live_abc ← API key ``` ## Schema Routing All database connections are exposed as PostgreSQL schemas. The connection name becomes the schema prefix: ```sql -- Connection 'main' → schema 'main' SELECT * FROM main.orders; SELECT * FROM main.products; -- Connection 'warehouse' → schema 'warehouse' SELECT * FROM warehouse.inventory; -- List tables from a connection \dt main.* \dt warehouse.* ``` This matches the existing `{connection}.{table}` namespacing used throughout superapp. See [Multi-Database Querying](/docs/advanced/multi-database) for details. ## Permission Enforcement Every query through the wire protocol passes through the exact same permission pipeline as HTTP requests. There are no shortcuts or bypasses. ```text PG Client sends SQL │ ▼ 1. Parse SQL → extract tables, operation, columns │ ▼ 2. Permission Check → same CASL pipeline as HTTP (column allowlist, filter injection, $user.* substitution, validate rules, default/overwrite injection) │ ▼ 3. Middleware → before/after hooks (if defined) │ ▼ 4. Execute via Drizzle ORM → against the actual database │ ▼ 5. Audit Log → record query, user, duration │ ▼ 6. 
Return results as PG wire protocol messages ``` ```sql -- With permission: where: { customer_id: { $eq: '$user.customer_id' } } SELECT * FROM main.orders; -- → Engine injects: WHERE customer_id = 'usr_123' -- → Only the user's own orders are returned -- Blocked operations DROP TABLE main.orders; -- ERROR: DDL operations are not allowed INSERT INTO main.orders (amount) VALUES (-100); -- ERROR: permission denied — validate rule failed: amount must be > 0 ``` ## TLS Encryption Enable TLS for encrypted connections: ```typescript pgWire: { port: 5433, tls: { key: fs.readFileSync('server.key'), cert: fs.readFileSync('server.crt'), }, } ``` ```bash # Connect with SSL psql "host=localhost port=5433 sslmode=require" ``` ## Running Alongside HTTP The wire protocol runs on a separate TCP port in the same process. Use both transports simultaneously: ```typescript const engine = createEngine({ connections: { main: process.env.PG_URL! }, auth: betterAuthProvider({ /* ... */ }), permissions: { /* ... */ }, pgWire: { port: 5433, auth: { strategy: ['jwt', 'apiKey'] }, }, }) // HTTP on port 3001 const app = new Hono() app.route('/', createHonoMiddleware(engine)) serve({ fetch: app.fetch, port: 3001 }) // Wire protocol on port 5433 engine.startPgWire() ``` ## Connection Lifecycle Unlike HTTP (stateless per-request), a wire protocol connection holds a session for its entire lifetime: 1. **TCP connect** → optional TLS upgrade 2. **Startup** → client sends username, database, params 3. **Authentication** → credentials validated, session resolved 4. **Ready for Query** → session cached (user, role, permissions) 5. **Query loop** → each query goes through the permission pipeline 6. **Disconnect** → session cleanup Permission changes (in interactive mode) apply only to new connections — existing connections keep their cached session. 
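The lifecycle above can be sketched from the client side with the standard `pg` (node-postgres) driver. This is a minimal sketch, assuming the `jwt` strategy is configured and a token is available in a hypothetical `SUPERAPP_JWT` environment variable:

```typescript
import { Client } from 'pg'

// Steps 1–3: TCP connect, startup, authentication. Under the `jwt`
// strategy the token travels in the standard password field.
const client = new Client({
  host: 'localhost',
  port: 5433,
  user: 'viewer',                      // ignored, or used as a role hint
  password: process.env.SUPERAPP_JWT,  // JWT, validated when the session is resolved
})

await client.connect()                 // step 4: session cached (user, role, permissions)

// Step 5: each query in the loop passes through the permission pipeline,
// so this returns only rows the authenticated user is allowed to see.
const { rows } = await client.query('SELECT * FROM main.orders')

await client.end()                     // step 6: disconnect, session cleanup
```

Any PostgreSQL driver that lets you set host, port, user, and password can connect the same way.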
## Error Handling Errors are returned as standard PostgreSQL `ErrorResponse` messages with proper SQLSTATE codes: | Scenario | SQLSTATE | Message | |---|---|---| | Auth failure | `28P01` | `authentication failed for user "X"` | | Permission denied | `42501` | `permission denied for table "main.orders"` | | Table not found | `42P01` | `relation "main.xyz" does not exist` | | Column not found | `42703` | `column "secret_col" does not exist` | | Blocked operation | `42501` | `DDL operations are not allowed` | | Query timeout | `57014` | `query exceeded timeout of 30000ms` | | Rate limited | `53300` | `rate limit exceeded` | ## Graceful Shutdown ```typescript const wire = engine.startPgWire() process.on('SIGTERM', async () => { await wire.close() // stops accepting, drains existing connections process.exit(0) }) ``` ## Limitations The wire protocol supports the Simple Query and Extended Query protocols (prepared statements, parameter binding). The following PostgreSQL features are not supported: - **Transactions** — `BEGIN`, `COMMIT`, and `ROLLBACK` are rejected - **COPY protocol** — use `INSERT` statements instead - **LISTEN / NOTIFY** — not available through the wire protocol - **Server-side cursors** — all results are returned at once (subject to `maxRows` limit) - **SET commands** — session variables cannot be modified - **Cross-connection joins** — whether these work depends on the underlying database ## Configuration Reference See the [`pgWire` section](/docs/reference/server-api/create-engine#pgwire) in `createEngine` options for the full configuration reference. --- ## Overview Drizzle ORM for the frontend — real Drizzle with a proxy driver that returns permission-filtered data. URL: /docs/client/overview `@superapp/db` gives your frontend **real Drizzle ORM** connected to any database through the superapp backend. Same API you already know — the data you get back is already filtered, restricted, and validated by the backend's permission engine.
## How It Works The client is built on [Drizzle Proxy](https://orm.drizzle.team/docs/connect-drizzle-proxy) — Drizzle ORM builds parameterized SQL on the client, and the proxy driver sends SQL + params to the superapp backend over HTTP. The server validates your JWT, applies row-level permissions to the SQL, and executes directly against your database. The data returned to the client has already been filtered and restricted by the permission engine. ```text Your Frontend superapp Backend Database ───────────── ──────────────── ──────── db.select(...) Drizzle builds SQL │ on the client ▼ Proxy sends ── SQL ───▶ Verify JWT SQL + params + params Who is this user? │ ▼ Apply permissions • Inject WHERE user_id = ? • Restrict visible columns • Validate write operations │ ▼ Execute modified ─── query ────▶ Postgres SQL directly MySQL Return filtered ◀── results ── SQLite results CSV │ Receive scoped ◀── JSON ── │ data only ``` **You write normal Drizzle queries. The backend ensures each user only sees their own data.** ## Quick Example ```typescript // The token ties this Drizzle instance to a specific user. // Every query through this `db` will only return that user's data. const db = drizzle({ connection: 'http://localhost:3001', token: session.token, // ← JWT identifies the logged-in user schema, }) // This looks like a normal Drizzle query — no user_id filter needed. // The backend reads the JWT, resolves the user, and automatically // scopes results so this user can only see their own orders. 
const orders = await db.select() .from(schema.orders) .where(eq(schema.orders.status, 'active')) .orderBy(desc(schema.orders.createdAt)) .limit(10) // If Alice is logged in: returns only Alice's active orders // If Bob is logged in: returns only Bob's active orders // Same code, different data — enforced by the backend, not the client ``` ## Available Methods | Method | Description | | --- | --- | | [`db.select()`](/docs/client/queries/select) | Select rows with filtering, sorting, joins, and pagination | | [`db.query.*.findMany()`](/docs/client/queries/select) | Relational queries with eager loading | | [`db.query.*.findFirst()`](/docs/client/queries/select) | Find a single record with relations | | [`db.insert()`](/docs/client/queries/insert) | Insert one or more rows | | [`db.update()`](/docs/client/queries/update) | Update rows matching a condition | | [`db.delete()`](/docs/client/queries/delete) | Delete rows matching a condition | | [`db.select({ count: count() })`](/docs/client/queries/aggregations) | Count rows | | [`db.select({ total: sum() })`](/docs/client/queries/aggregations) | Aggregations (sum, avg, min, max) | ## Imports Summary | Import | Path | Description | | --- | --- | --- | | `drizzle` | `@superapp/db` | Create a Drizzle client with proxy driver | | `eq`, `gt`, `desc`, ... | `drizzle-orm` | Standard Drizzle filter and sort operators | | `schema` | `./generated/schema` | Auto-generated Drizzle schema from your database | | `createAuth` | `@superapp/auth` | Auth client for session management | | `useSession` | `@superapp/auth` | React hook for current session | | `AuthProvider` | `@superapp/auth/components` | Root layout wrapper for auth context | | `AuthCard` | `@superapp/auth/components` | Pre-built sign-in/sign-up/forgot-password UI | | `UserButton` | `@superapp/auth/components` | Navbar dropdown with avatar and sign out | ## What's Next - [Setting Up the Client](/docs/client/setup) — Configure `drizzle()` with your backend URL and schema. 
- [Drizzle Compatibility](/docs/client/drizzle-compatibility) — Side-by-side comparison with standard Drizzle. - [Auth Setup](/docs/client/auth/setup) — Configure authentication in your frontend. --- ## Setup Configure the Drizzle ORM client with your superapp backend connection and generated schema. URL: /docs/client/setup The `drizzle()` function creates a Drizzle ORM instance using [Drizzle Proxy](https://orm.drizzle.team/docs/connect-drizzle-proxy) under the hood. Drizzle builds parameterized SQL on the client, and the proxy driver sends SQL + params to your superapp backend over HTTP. ```typescript // Each user gets their own Drizzle instance tied to their JWT. // All queries through this `db` are automatically scoped to this user's data. const db = drizzle({ connection: 'http://localhost:3001', token: session.token, // ← identifies who this user is schema, }) ``` ## Options | Option | Type | Required | Description | | --- | --- | --- | --- | | `connection` | `string` | Yes | Your superapp backend URL | | `token` | `string` | Yes | A valid JWT obtained from the auth client | | `schema` | `object` | Yes | Your generated Drizzle schema object | ## Schema Import The schema is auto-generated from your database by the CLI. It contains standard Drizzle table definitions: ```typescript // schema.orders → pgTable('orders', { ... }) // schema.customers → pgTable('customers', { ... }) // schema.ordersRelations → relations(orders, ({ one }) => ({ ... })) ``` See [Type Generation](/docs/client/type-generation) for how to generate the schema. ## Recommended Setup Create a shared setup file and a React hook for easy access across your app. **`lib/db.ts`** ```typescript const SUPERAPP_URL = process.env.NEXT_PUBLIC_SUPERAPP_URL! 
export const authClient = createAuth(SUPERAPP_URL) export function createDb(token: string) { return drizzle({ connection: SUPERAPP_URL, token, schema, }) } ``` **`hooks/use-db.ts`** ```tsx export function useDb() { const { data: session } = useSession() return useMemo( () => (session?.token ? createDb(session.token) : null), [session?.token] ) } ``` **Usage in a component:** ```tsx export function OrdersList() { const db = useDb() async function loadOrders() { if (!db) return // No user_id filter needed — the backend scopes results automatically. // Each user only sees their own orders. const orders = await db.select() .from(schema.orders) .where(eq(schema.orders.status, 'active')) .orderBy(desc(schema.orders.createdAt)) .limit(50) } } ``` ## How It Works When you call a Drizzle query method, the proxy driver: 1. **Drizzle builds parameterized SQL** on the client (e.g., `SELECT ... FROM orders WHERE status = $1`). 2. **POSTs SQL + params** to the backend's `/data` endpoint with the JWT in the `Authorization` header. 3. **The server validates** the token, applies row-level permissions to the SQL, and executes directly against the database. 4. **Returns typed results** as JSON, deserialized into the expected TypeScript type. This is standard [Drizzle Proxy](https://orm.drizzle.team/docs/connect-drizzle-proxy) — the query syntax is identical to any other Drizzle setup, only the transport layer differs. ## Calling Actions The `db` instance also exposes a typed `action()` method for calling server-side [actions](/docs/backend/actions): ```typescript // Fully typed — autocomplete on action name, input params, and return type const result = await db.action('incrementStock', { productId: 'prod_123', amount: 5, }) // result: { id: string; stock: number } ``` This sends `POST /actions/incrementStock` with the input as JSON. 
The engine validates the JWT, checks the user's role for `action_incrementStock`, validates the input against the Zod schema, and returns the typed result. See [Actions](/docs/backend/actions) for details. --- ## Select Fetch rows with filtering, sorting, joins, pagination, and relational queries. URL: /docs/client/queries/select Drizzle provides two query styles: the **select builder** for SQL-like queries, and **relational queries** for loading nested data. ## Select Builder ```typescript const db = drizzle({ connection, token, schema }) // No need to filter by user_id — the backend automatically scopes // results to the logged-in user based on the JWT token above. const orders = await db.select() .from(schema.orders) .where(eq(schema.orders.status, 'active')) .orderBy(desc(schema.orders.createdAt)) .limit(10) // → Returns only the current user's active orders ``` ## Select Specific Columns ```typescript // Even when selecting specific columns, the backend still scopes rows // to the current user — and may further restrict which columns are visible // based on the user's role and permission configuration. const orders = await db.select({ id: schema.orders.id, amount: schema.orders.amount, status: schema.orders.status, }).from(schema.orders) ``` ## Filtering ```typescript const activeHighValue = await db.select() .from(schema.orders) .where(and( eq(schema.orders.status, 'active'), gte(schema.orders.amount, 100) )) ``` ## Sorting and Pagination ```typescript const page2 = await db.select() .from(schema.orders) .orderBy(desc(schema.orders.createdAt)) .limit(20) .offset(20) ``` ## Joins ```typescript const ordersWithCustomer = await db.select({ orderId: schema.orders.id, amount: schema.orders.amount, customerName: schema.customers.name, }).from(schema.orders) .leftJoin(schema.customers, eq(schema.orders.customerId, schema.customers.id)) ``` ## Relational Queries Use `db.query` for loading nested relations -- similar to Prisma's `include`. 
```typescript const ordersWithCustomer = await db.query.orders.findMany({ with: { customer: true }, where: eq(schema.orders.status, 'active'), orderBy: desc(schema.orders.createdAt), limit: 10, }) // { id, amount, status, customer: { id, name, email } }[] ``` ## Find First Use `findFirst()` to retrieve a single record. Returns `undefined` if no match. ```typescript const order = await db.query.orders.findFirst({ where: eq(schema.orders.id, 'ord_abc123'), }) // order: { id, amount, status, ... } | undefined ``` With relations: ```typescript const order = await db.query.orders.findFirst({ where: eq(schema.orders.id, 'ord_abc123'), with: { customer: true }, }) // order: { id, amount, status, customer: { id, name, email } } | undefined ``` Or use the select builder with `.limit(1)`: ```typescript const [order] = await db.select() .from(schema.orders) .where(eq(schema.orders.id, 'ord_abc123')) .limit(1) ``` ## Complex Filters ```typescript const filtered = await db.select() .from(schema.orders) .where(or( and(eq(schema.orders.status, 'active'), gte(schema.orders.amount, 500)), and(eq(schema.orders.status, 'pending'), lt(schema.orders.amount, 100)) )) ``` --- ## Insert Insert one or more rows with type-safe values. URL: /docs/client/queries/insert Use `db.insert()` to add rows to a table. ```typescript const db = drizzle({ connection, token, schema }) // The backend validates that the logged-in user has insert permission // and may auto-set fields like customerId from the user's session. 
const [order] = await db.insert(schema.orders).values({ amount: 250, status: 'pending', customerId: 'cust_abc123', }).returning() // order: { id, amount, status, customerId, createdAt } ``` ## Insert Without Returning ```typescript await db.insert(schema.orders).values({ amount: 250, status: 'pending', customerId: 'cust_abc123', }) ``` ## Insert Multiple Rows ```typescript await db.insert(schema.orders).values([ { amount: 100, status: 'pending', customerId: 'cust_1' }, { amount: 200, status: 'pending', customerId: 'cust_2' }, { amount: 300, status: 'pending', customerId: 'cust_3' }, ]) ``` ## Upsert (On Conflict) ```typescript await db.insert(schema.customers) .values({ id: 'cust_abc123', name: 'Acme Corp', email: 'billing@acme.com' }) .onConflictDoNothing() ``` ## Column Presets If your server configuration defines column presets (e.g., automatically setting `organizationId` or `createdBy` from the session), those columns are injected server-side. You don't need to include them: ```typescript // If the server has a preset: organizationId = $user.organizationId await db.insert(schema.orders).values({ amount: 250, status: 'pending', customerId: 'cust_abc123', }) // organizationId is automatically set by the server ``` --- ## Update Update one or more rows matching a condition. URL: /docs/client/queries/update Use `db.update()` to modify rows matching a condition. ```typescript const db = drizzle({ connection, token, schema }) // The backend ensures you can only update your own rows. // Even if you pass someone else's order ID, the permission filter // scopes the UPDATE to rows owned by the logged-in user — so it // simply won't match and nothing gets modified. 
await db.update(schema.orders) .set({ status: 'shipped' }) .where(eq(schema.orders.id, 'ord_abc123')) ``` ## Update with Returning ```typescript const [updated] = await db.update(schema.orders) .set({ status: 'shipped' }) .where(eq(schema.orders.id, 'ord_abc123')) .returning() // updated: { id, amount, status, ... } ``` ## Update Multiple Fields ```typescript await db.update(schema.orders) .set({ status: 'delivered', amount: 300, }) .where(eq(schema.orders.id, 'ord_abc123')) ``` ## Bulk Update Update all rows matching a broader condition. ```typescript await db.update(schema.orders) .set({ status: 'cancelled' }) .where(eq(schema.orders.status, 'pending')) ``` ## Update with Complex Filters ```typescript await db.update(schema.orders) .set({ status: 'expired' }) .where(and( eq(schema.orders.status, 'pending'), lt(schema.orders.createdAt, new Date('2024-01-01')) )) ``` --- ## Delete Delete one or more rows matching a condition. URL: /docs/client/queries/delete Use `db.delete()` to remove rows matching a condition. ```typescript const db = drizzle({ connection, token, schema }) // The backend scopes this to the logged-in user's data. // You can only delete your own rows — attempting to delete // another user's order simply won't match any rows. await db.delete(schema.orders) .where(eq(schema.orders.id, 'ord_abc123')) ``` ## Delete with Returning ```typescript const [deleted] = await db.delete(schema.orders) .where(eq(schema.orders.id, 'ord_abc123')) .returning() // deleted: { id, amount, status, ... } ``` ## Filtered Delete Delete multiple rows matching a condition. ```typescript await db.delete(schema.orders) .where(eq(schema.orders.status, 'cancelled')) ``` ## Delete with Complex Filters ```typescript await db.delete(schema.orders) .where(and( eq(schema.orders.status, 'expired'), lt(schema.orders.createdAt, new Date('2023-01-01')) )) ``` --- ## Filter Operators All Drizzle ORM filter operators for building query conditions. 
URL: /docs/client/queries/filtering

Drizzle provides filter operators as functions imported from `drizzle-orm`. They're used in `.where()` clauses and relational query `where` options.

```typescript
const orders = await db.select()
  .from(schema.orders)
  .where(and(
    eq(schema.orders.status, 'active'),
    gt(schema.orders.amount, 100)
  ))
```

## Operator Reference

| Operator | Description | Example |
| --- | --- | --- |
| `eq(col, val)` | Equal to | `eq(schema.orders.status, 'active')` |
| `ne(col, val)` | Not equal to | `ne(schema.orders.status, 'cancelled')` |
| `gt(col, val)` | Greater than | `gt(schema.orders.amount, 100)` |
| `gte(col, val)` | Greater than or equal | `gte(schema.orders.amount, 100)` |
| `lt(col, val)` | Less than | `lt(schema.orders.amount, 500)` |
| `lte(col, val)` | Less than or equal | `lte(schema.orders.amount, 500)` |
| `inArray(col, vals)` | In array | `inArray(schema.orders.status, ['active', 'pending'])` |
| `notInArray(col, vals)` | Not in array | `notInArray(schema.orders.status, ['cancelled'])` |
| `like(col, pattern)` | SQL LIKE (case-sensitive) | `like(schema.customers.name, 'Acme%')` |
| `ilike(col, pattern)` | SQL LIKE (case-insensitive) | `ilike(schema.customers.email, '%@acme.com')` |
| `isNull(col)` | Is null | `isNull(schema.orders.deletedAt)` |
| `isNotNull(col)` | Is not null | `isNotNull(schema.orders.deletedAt)` |
| `between(col, a, b)` | Between two values | `between(schema.orders.amount, 100, 500)` |
| `and(...conds)` | Logical AND | `and(eq(...), gt(...))` |
| `or(...conds)` | Logical OR | `or(eq(...), eq(...))` |
| `not(cond)` | Logical NOT | `not(eq(schema.orders.status, 'cancelled'))` |

## Comparison Operators

### eq — Equal

```typescript
const active = await db.select()
  .from(schema.orders)
  .where(eq(schema.orders.status, 'active'))
```

### ne — Not Equal

```typescript
const notCancelled = await db.select()
  .from(schema.orders)
  .where(ne(schema.orders.status, 'cancelled'))
```

### gt, gte, lt, lte — Range

```typescript
const midRange = await db.select()
  .from(schema.orders)
  .where(and(
    gte(schema.orders.amount, 100),
    lt(schema.orders.amount, 1000)
  ))
```

### between — Range (Inclusive)

```typescript
const midRange = await db.select()
  .from(schema.orders)
  .where(between(schema.orders.amount, 100, 1000))
```

## Set Operators

### inArray — In Array

```typescript
const selected = await db.select()
  .from(schema.orders)
  .where(inArray(schema.orders.status, ['active', 'pending', 'shipped']))
```

### notInArray — Not In Array

```typescript
const excluded = await db.select()
  .from(schema.orders)
  .where(notInArray(schema.orders.status, ['cancelled', 'expired']))
```

## Pattern Matching

### like — Case-Sensitive

```typescript
const matched = await db.select()
  .from(schema.customers)
  .where(like(schema.customers.name, 'Acme%'))
```

### ilike — Case-Insensitive

```typescript
const matched = await db.select()
  .from(schema.customers)
  .where(ilike(schema.customers.email, '%@acme.com'))
```

## Null Checks

```typescript
// Find rows where deletedAt is null (active records)
const active = await db.select()
  .from(schema.orders)
  .where(isNull(schema.orders.deletedAt))

// Find rows where deletedAt is not null (soft-deleted)
const deleted = await db.select()
  .from(schema.orders)
  .where(isNotNull(schema.orders.deletedAt))
```

## Logical Operators

### and — All Conditions Must Match

```typescript
const result = await db.select()
  .from(schema.orders)
  .where(and(
    gte(schema.orders.amount, 100),
    lte(schema.orders.amount, 500)
  ))
```

### or — Any Condition Must Match

```typescript
const result = await db.select()
  .from(schema.orders)
  .where(or(
    eq(schema.orders.status, 'active'),
    eq(schema.orders.status, 'pending')
  ))
```

### not — Negate a Condition

```typescript
const result = await db.select()
  .from(schema.orders)
  .where(not(inArray(schema.orders.status, ['cancelled', 'expired'])))
```

## Combining Operators

Nest logical operators for complex queries.
```typescript
const result = await db.select()
  .from(schema.orders)
  .where(and(
    or(
      eq(schema.orders.status, 'active'),
      eq(schema.orders.status, 'shipped')
    ),
    gte(schema.orders.amount, 100),
    isNull(schema.orders.deletedAt)
  ))
```

---

## Aggregations

Count rows and compute sum, avg, min, max with Drizzle's aggregation functions.

URL: /docs/client/queries/aggregations

Use Drizzle's aggregation functions with `db.select()` for counting and numeric computations.

## Count

```typescript
const db = drizzle({ connection, token, schema })

const [result] = await db.select({ count: count() })
  .from(schema.orders)
// result.count: number
```

### Count with Filter

```typescript
const [result] = await db.select({ count: count() })
  .from(schema.orders)
  .where(eq(schema.orders.status, 'active'))
```

### Count by Group

```typescript
const statusCounts = await db.select({
  status: schema.orders.status,
  count: count(),
}).from(schema.orders)
  .groupBy(schema.orders.status)
// [{ status: 'active', count: 42 }, { status: 'pending', count: 15 }]
```

## Sum

```typescript
const [result] = await db.select({
  total: sum(schema.orders.amount),
}).from(schema.orders)
// result.total: string (numeric aggregations return strings)
```

## Average

```typescript
const [result] = await db.select({
  average: avg(schema.orders.amount),
}).from(schema.orders)
```

## Min and Max

```typescript
const [result] = await db.select({
  minimum: min(schema.orders.amount),
  maximum: max(schema.orders.amount),
}).from(schema.orders)
```

## Multiple Aggregations

```typescript
const [stats] = await db.select({
  total: sum(schema.orders.amount),
  average: avg(schema.orders.amount),
  minimum: min(schema.orders.amount),
  maximum: max(schema.orders.amount),
}).from(schema.orders)
  .where(eq(schema.orders.status, 'active'))
```

## Group By

```typescript
const byStatus = await db.select({
  status: schema.orders.status,
  total: sum(schema.orders.amount),
  average: avg(schema.orders.amount),
}).from(schema.orders)
  .groupBy(schema.orders.status)
// [{ status: 'active', total: '15000', average: '357.14' }, ...]
```

### Group By with Filter

```typescript
const byCustomer = await db.select({
  customerId: schema.orders.customerId,
  total: sum(schema.orders.amount),
}).from(schema.orders)
  .where(inArray(schema.orders.status, ['active', 'shipped']))
  .groupBy(schema.orders.customerId)
```

---

## Setup

Configure the auth client for session management and authentication in your frontend.

URL: /docs/client/auth/setup

`createAuth` from `@superapp/auth` initializes the auth client that handles sign-in, sign-up, session management, and token refresh. This package uses [better-auth](https://www.better-auth.com) by default, but supports custom adapters for any auth provider.

```typescript
const authClient = createAuth('http://localhost:3001')
```

## Setup

Create a shared auth client and database client in a setup file so they can be used across your app.

**`lib/superapp.ts`**

```typescript
const SUPERAPP_URL = process.env.NEXT_PUBLIC_SUPERAPP_URL!

export const authClient = createAuth(SUPERAPP_URL)

export function createDb(token: string) {
  return drizzle({
    connection: SUPERAPP_URL,
    token,
    schema,
  })
}
```

## createAuth Options

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| `url` | `string` | Yes | Base URL of your superapp backend (without `/data`) |

## useSession Hook

The `useSession` hook provides the current user's session in any React component. It must be used within an [`AuthProvider`](/docs/client/components/auth-provider).

```tsx
export function ProfilePage() {
  const { data: session, isPending } = useSession()

  if (isPending) return <p>Loading...</p>
  if (!session) return <p>Not signed in</p>

  return (
    <div>
      <p>Email: {session.user.email}</p>
      <p>Name: {session.user.name}</p>
    </div>
  )
}
```

### Return Value

| Property | Type | Description |
| --- | --- | --- |
| `data` | `Session \| null` | The current session, or `null` if not authenticated |
| `isPending` | `boolean` | `true` while the session is being loaded |

### Session Object

| Property | Type | Description |
| --- | --- | --- |
| `token` | `string` | JWT for authenticating queries |
| `user.id` | `string` | User ID |
| `user.email` | `string` | User email |
| `user.name` | `string` | User display name |

## useDb Hook

Combine `useSession` with `createDb` to create a reusable hook for database access.

**`hooks/use-db.ts`**

```tsx
export function useDb() {
  const { data: session } = useSession()

  return useMemo(
    () => (session?.token ? createDb(session.token) : null),
    [session?.token]
  )
}
```

This returns `null` when the user is not authenticated and a typed `db` client when they are.

---

## Session Management

How sessions, tokens, and automatic refresh work in the client SDK.

URL: /docs/client/auth/session

The auth client manages JWT sessions automatically — storing tokens, refreshing them before expiry, and clearing them on sign out.

```tsx
export function Dashboard() {
  const { data: session, isPending } = useSession()

  if (isPending) return <p>Loading...</p>
  if (!session) return <p>Please sign in</p>

  return <p>Welcome, {session.user.name}</p>
}
```

## How Sessions Work

1. The user signs in through [`AuthCard`](/docs/client/components/auth-card) or the auth client API.
2. The server returns a JWT and a refresh token.
3. The auth client stores the tokens and provides them via `useSession`.
4. When you create a `db` client with `createDb(session.token)`, the JWT is sent with every query.
5. The server validates the JWT, extracts the user identity, and applies permissions.
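Conceptually, the automatic refresh mentioned above is just a timing check against the token's expiry. The sketch below illustrates the idea only; the function name and the 60-second safety margin are illustrative, not the auth client's actual internals:

```typescript
// Illustrative only: refresh when the token is within `marginMs` of expiring.
// The real auth client handles this internally; you never call this yourself.
function shouldRefresh(expiresAtMs: number, nowMs: number, marginMs = 60_000): boolean {
  return expiresAtMs - nowMs <= marginMs
}

const now = Date.now()
shouldRefresh(now + 30_000, now)  // true  — expires in 30s, refresh now
shouldRefresh(now + 600_000, now) // false — still 10 minutes left
```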
## Token Lifecycle

| Event | What Happens |
| --- | --- |
| Sign in | JWT + refresh token stored |
| Query | JWT sent in `Authorization` header |
| Token expiry approaching | Auth client refreshes automatically |
| Refresh token expired | Session cleared, user must sign in again |
| Sign out | Both tokens cleared |

## Accessing the Session

### In React Components

Use the `useSession` hook inside an [`AuthProvider`](/docs/client/components/auth-provider).

```tsx
export function UserGreeting() {
  const { data: session, isPending } = useSession()

  if (isPending) return <p>Loading...</p>
  if (!session) return <p>Guest</p>

  return <p>Hello, {session.user.name}</p>
}
```

### For Database Queries

Use the `useDb` hook to get a typed client that is automatically `null` when no session exists.

```tsx
export function OrdersList() {
  const db = useDb()

  async function loadOrders() {
    if (!db) return // Not authenticated

    const orders = await db.select({
      id: schema.orders.id,
      amount: schema.orders.amount,
      status: schema.orders.status,
    }).from(schema.orders)
    // orders: { id: string; amount: number; status: string }[]
  }
}
```

## Reacting to Session Changes

Use the `onSessionChange` callback on [`AuthProvider`](/docs/client/components/auth-provider) to run side effects when the session changes (sign in, sign out, token refresh).

```tsx
export function Providers({ children }: { children: React.ReactNode }) {
  const router = useRouter()

  return (
    <AuthProvider
      authClient={authClient}
      onSessionChange={() => router.refresh()}
    >
      {children}
    </AuthProvider>
  )
}
```

## Protected Routes

Check the session to gate access to authenticated pages.

```tsx
export function ProtectedPage({ children }: { children: React.ReactNode }) {
  const { data: session, isPending } = useSession()
  const router = useRouter()

  useEffect(() => {
    if (!isPending && !session) {
      router.replace('/sign-in')
    }
  }, [session, isPending, router])

  if (isPending) return <p>Loading...</p>
  if (!session) return null

  return <>{children}</>
}
```

---

## Auth Provider

Root layout wrapper that provides auth context to all child components.
URL: /docs/client/components/auth-provider

`AuthProvider` wraps your application root and provides auth context (session, sign-in state) to all child components. Required for `useSession`, `AuthCard`, and `UserButton` to work.

```tsx
export default function RootLayout({ children }: { children: React.ReactNode }) {
  return (
    <AuthProvider authClient={authClient}>
      {children}
    </AuthProvider>
  )
}
```

## Props

| Prop | Type | Required | Description |
| --- | --- | --- | --- |
| `authClient` | `AuthClient` | Yes | The auth client created by `createAuth()` |
| `navigate` | `(url: string) => void` | No | Navigation function for client-side routing |
| `replace` | `(url: string) => void` | No | Replace navigation function (no history entry) |
| `Link` | `React.ComponentType` | No | Link component for framework-specific navigation |
| `onSessionChange` | `() => void` | No | Callback fired on sign-in, sign-out, or token refresh |
| `children` | `React.ReactNode` | Yes | Your application content |

## Next.js App Router Setup

```tsx
'use client'

export function Providers({ children }: { children: React.ReactNode }) {
  const router = useRouter()

  return (
    <AuthProvider
      authClient={authClient}
      navigate={router.push}
      replace={router.replace}
      Link={Link}
      onSessionChange={() => router.refresh()}
    >
      {children}
    </AuthProvider>
  )
}
```

Then use it in your root layout:

```tsx
export default function RootLayout({ children }: { children: React.ReactNode }) {
  return (
    <html lang="en">
      <body>
        <Providers>{children}</Providers>
      </body>
    </html>
  )
}
```

## Why navigate and replace?

`AuthProvider` handles redirects after sign-in and sign-out. Passing your framework's navigation functions lets it do this with client-side routing instead of full page reloads.

| Prop | Used For |
| --- | --- |
| `navigate` | Redirect after sign-in (pushes to history) |
| `replace` | Redirect after sign-out (replaces history entry) |
| `Link` | Internal links in auth UI components |

## onSessionChange

The `onSessionChange` callback fires whenever the auth state changes. Common uses:

- **`router.refresh()`** — Re-run server components to pick up the new session (Next.js App Router).
- **Refetch queries** — Invalidate cached data when the user changes.
- **Analytics** — Track sign-in/sign-out events.

```tsx
<AuthProvider
  authClient={authClient}
  onSessionChange={() => {
    router.refresh()
  }}
>
  {children}
</AuthProvider>
```

---

## Auth Card

Pre-built sign-in, sign-up, and forgot-password UI component.

URL: /docs/client/components/auth-card

`AuthCard` renders a complete authentication form that automatically switches between sign-in, sign-up, and forgot-password views based on the current URL.

```tsx
export default function AuthPage() {
  return <AuthCard />
}
```

## How It Works

`AuthCard` reads the current URL path to decide which view to render:

| URL Path | View |
| --- | --- |
| `/sign-in` | Sign-in form |
| `/sign-up` | Sign-up form |
| `/forgot-password` | Forgot password form |

Navigation between views (e.g., "Don't have an account? Sign up") is handled automatically using the `navigate` and `Link` props from [`AuthProvider`](/docs/client/components/auth-provider).

## Next.js App Router Example

Create a catch-all route to handle all auth paths.

**`app/(auth)/sign-in/page.tsx`**

```tsx
export default function SignInPage() {
  return <AuthCard />
}
```

**`app/(auth)/sign-up/page.tsx`**

```tsx
export default function SignUpPage() {
  return <AuthCard />
}
```

**`app/(auth)/forgot-password/page.tsx`**

```tsx
export default function ForgotPasswordPage() {
  return <AuthCard />
}
```

## Prerequisites

`AuthCard` requires an [`AuthProvider`](/docs/client/components/auth-provider) ancestor with `navigate`, `replace`, and `Link` configured for routing to work correctly.

```tsx
<AuthProvider
  authClient={authClient}
  navigate={router.push}
  replace={router.replace}
  Link={Link}
  onSessionChange={() => router.refresh()}
>
  {children}
</AuthProvider>
```

## After Sign-In

On successful sign-in, `AuthProvider` automatically:

1. Stores the JWT and refresh token.
2. Fires `onSessionChange`.
3. Navigates to the previous page or the app root.

---

## User Button

Navbar dropdown component with user avatar and sign-out action.

URL: /docs/client/components/user-button

`UserButton` renders a clickable avatar that opens a dropdown with the user's info and a sign-out button. Drop it into your navbar for instant user management.
```tsx
export function Navbar() {
  return (
    <nav>
      <a href="/">My App</a>
      <UserButton />
    </nav>
  )
}
```

## What It Renders

- **Signed in** — Shows the user's avatar (or initials). Clicking it opens a dropdown with the user's name, email, and a sign-out button.
- **Signed out** — Renders nothing (or can be hidden with conditional rendering).

## Prerequisites

`UserButton` requires an [`AuthProvider`](/docs/client/components/auth-provider) ancestor. It reads the current session from context.

```tsx
export function Providers({ children }: { children: React.ReactNode }) {
  return (
    <AuthProvider authClient={authClient}>
      {children}
    </AuthProvider>
  )
}
```

## Full Navbar Example

```tsx
export function Navbar() {
  const { data: session } = useSession()

  return (
    <nav>
      <a href="/">My App</a>
      {session && (
        <>
          <a href="/orders">Orders</a>
          <a href="/customers">Customers</a>
        </>
      )}
      {session ? (
        <UserButton />
      ) : (
        <a href="/sign-in">Sign In</a>
      )}
    </nav>
  )
}
```

## Sign-Out Behavior

When the user clicks sign out in the dropdown:

1. The auth client clears the stored tokens.
2. `onSessionChange` fires on [`AuthProvider`](/docs/client/components/auth-provider).
3. `useSession` returns `null`, triggering UI updates across the app.

---

## Type Generation

Generate TypeScript types from your database schema for fully type-safe queries.

URL: /docs/client/type-generation

The superapp CLI generates `SuperAppSchema` and `SuperAppActions` types from your live engine, giving you full autocomplete for connection names, table names, column names, action names, action inputs, and return types.

```bash
npx superapp generate
```

## Setup

Run the `generate` command pointing at your running superapp backend.

```bash
npx superapp generate --url http://localhost:3001 --output ./generated/schema.ts
```

This connects to your backend, introspects all configured database connections, and writes a TypeScript file with the full schema.
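The generated file is a normal TypeScript module, so row types can be derived from it with indexed access types. In this sketch the inlined type stands in for `import type { SuperAppSchema } from './generated/schema'`, using a subset of the shape documented on this page:

```typescript
// Stand-in for the generated import, inlined so the snippet is self-contained.
type SuperAppSchema = {
  main: {
    orders: { id: string; amount: number; status: string }
  }
}

// Derive a row type from the generated schema with an indexed access type.
type Order = SuperAppSchema['main']['orders']

const order: Order = { id: 'ord_1', amount: 250, status: 'pending' }
console.log(order.amount) // 250
```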
## Options

| Flag | Description | Default |
| --- | --- | --- |
| `--url` | URL of your superapp backend | `http://localhost:3001` |
| `--output` | Output file path | `./generated/schema.ts` |

## Generated Schema Structure

The generated file exports a `SuperAppSchema` type (table types) and a `SuperAppActions` type (action input/output types).

```typescript
// generated/schema.ts (auto-generated, do not edit)
export type SuperAppSchema = {
  main: {
    orders: {
      id: string
      amount: number
      status: string
      customer_id: string
      created_at: string
      updated_at: string
    }
    customers: {
      id: string
      name: string
      email: string
      organization_id: string
      created_at: string
    }
    organizations: {
      id: string
      name: string
      plan: string
      created_at: string
    }
  }
}

export type SuperAppActions = {
  incrementStock: {
    input: { productId: string; amount: number }
    output: { id: string; stock: number }
  }
  revenueReport: {
    input: { startDate: string; endDate: string }
    output: { month: string; totalRevenue: number; orderCount: number; avgOrderValue: number }[]
  }
}
```

The `SuperAppSchema` top-level keys are connection names (e.g., `main`), and each connection contains table names with their column types. The `SuperAppActions` type is derived from the Zod schemas defined on each action in the engine config.

## Using the Schema

Pass the generated types to `createClient` for type-safe queries and actions.
```typescript
const db = createClient({
  url: 'http://localhost:3001/data',
  userToken,
})

// Full autocomplete for connections, tables, columns
const orders = await db.main.orders.findMany({
  select: ['id', 'amount', 'status'],
  //       ^ autocomplete shows: id, amount, status, customer_id, created_at, updated_at
  where: { status: { $eq: 'active' } },
})
// orders: { id: string; amount: number; status: string }[]

// Full autocomplete for action names, input, and output
const result = await db.action('incrementStock', {
  productId: 'prod_123', // ← autocomplete
  amount: 5,             // ← autocomplete
})
// result: { id: string; stock: number }
```

## Regenerating

Run the `generate` command again whenever your database schema changes (new tables, columns, or connections).

```bash
npx superapp generate --url http://localhost:3001 --output ./generated/schema.ts
```

Add it to your `package.json` scripts for convenience.

```json
{
  "scripts": {
    "generate": "superapp generate --url http://localhost:3001 --output ./generated/schema.ts"
  }
}
```

## Adding to .gitignore

The generated file is derived from your database and can be regenerated at any time. You may choose to either commit it for CI convenience or add it to `.gitignore`.

```
# .gitignore
generated/
```

---

## Architecture

How the pieces fit together.

URL: /docs/advanced/architecture

superapp is three npm packages: `@superapp/backend` (server), `@superapp/db` (data client), and `@superapp/auth` (auth client). The server owns all data access, auth, and permissions. The data client uses [Drizzle Proxy](https://orm.drizzle.team/docs/connect-drizzle-proxy) to send parameterized SQL to the server, where permissions are applied before execution.
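To make "permissions are applied before execution" concrete, here is a deliberately naive sketch of row-filter injection: ANDing a permission's filter into the incoming query. This is illustrative only; the real engine parses the SQL properly rather than splicing strings:

```typescript
// Naive illustration of row-filter injection (the real engine parses SQL,
// it does not splice strings like this).
function injectFilter(sql: string, filterSql: string): string {
  const m = /\bWHERE\b/i.exec(sql)
  if (!m) return `${sql} WHERE ${filterSql}` // no WHERE yet: add one
  const idx = m.index + m[0].length
  // AND the permission filter in front of the query's own conditions
  return `${sql.slice(0, idx)} (${filterSql}) AND${sql.slice(idx)}`
}

injectFilter('SELECT "id" FROM "orders" WHERE "status" = $1', '"user_id" = $2')
// → 'SELECT "id" FROM "orders" WHERE ("user_id" = $2) AND "status" = $1'
```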
## System Diagram

```text
 @superapp/db (Data)                @superapp/auth (Auth)
┌────────────────────────┐        ┌──────────────────────────┐
│ drizzle()              │        │ createAuth()             │
│ Drizzle Proxy driver   │        │ AuthCard · AuthProvider  │
│ Sends SQL + params     │        │ useSession · UserButton  │
└────────────┬───────────┘        └────────────┬─────────────┘
             │ POST /data + JWT                │ POST /auth/*
             └───────────────┬────────────────┘
                             ▼
              @superapp/backend (Server)
┌───────────────────────────────────┐
│ HTTP Adapter (Hono/Express/Next)  │
│               │                   │
│               ▼                   │
│ Auth (better-auth / custom)       │
│               │                   │
│               ▼                   │
│ Permissions (CASL)                │
│   Validate SQL + inject filters   │
│               │                   │
│               ▼                   │
│ Query Execution (native drivers)  │
└─────────┬─────────────────────────┘
          │
  ┌───────┼───────┬───────┐
  ▼       ▼       ▼       ▼
Postgres MySQL  SQLite   CSV
```

## Three Packages

### `@superapp/backend` (Server)

The backend is the single source of truth for all data access. It:

- **Receives parameterized SQL** from the Drizzle Proxy client over HTTP.
- **Verifies JWTs** through a pluggable auth provider.
- **Enforces permissions** using CASL-based rules that validate the SQL, inject row filters, restrict columns, validate writes, and preset values.
- **Executes the permission-filtered SQL** directly against the target database using its native driver.
- **Stores metadata** (sessions, roles, audit logs) in a Turso or local SQLite database via Drizzle ORM.

### `@superapp/db` (Data Client)

The data client is real Drizzle ORM built on [Drizzle Proxy](https://orm.drizzle.team/docs/connect-drizzle-proxy). Drizzle builds parameterized SQL on the client, and the proxy driver sends SQL + params to the server. The data you get back is already filtered, restricted, and validated by the backend's permission engine. It:

- **Exposes standard Drizzle ORM query syntax** that mirrors your schema.
- **Sends parameterized SQL + params** to the server's `/data` endpoint via Drizzle Proxy.
- **Returns permission-filtered data** — row filters, column restrictions, and write validations are applied transparently by the backend before results reach the client.
- **Generates TypeScript types** from your server's `/schema` endpoint for full autocomplete.

### `@superapp/auth` (Auth Client)

The auth client handles session management and ships pre-built auth UI components. It uses [better-auth](https://www.better-auth.com) by default, but supports custom adapters for any auth provider. It:

- **Manages JWT sessions** — sign-in, sign-up, token refresh, sign-out.
- **Ships auth UI components** (`AuthCard`, `AuthProvider`, `UserButton`) built with React.
- **Provides React hooks** (`useSession`) for accessing the current session.
- **Supports custom adapters** — swap the default better-auth adapter for Firebase, Auth0, Clerk, or any custom auth system.

## Tech Stack

| Layer | Technology | Role |
|---|---|---|
| Query execution | Native drivers | Direct connections via `pg`, `mysql2`, `better-sqlite3` |
| HTTP framework | Hono | Lightweight, edge-compatible HTTP routing |
| Metadata store | Turso + Drizzle | Sessions, roles, audit logs, admin configuration |
| Authentication | better-auth | JWT-based auth with session management |
| Authorization | CASL | Attribute-based permission evaluation |
| Admin UI | React + Vite | Visual permission and connection management |
| Data client | @superapp/db | Drizzle Proxy over HTTP, returns permission-filtered data |
| Auth client | @superapp/auth | Session management, auth UI components (better-auth default, custom adapters) |

## Dependency Graph

```text
createEngine()
├── integrations[] → Database providers     ──→ Native driver pools
├── connections{}  → Named DB configs       ──→ Native driver pools
├── auth           → AuthProvider
│     ├── verifyToken()
│     ├── findUser()
│     └── resolveSession()
├── permissions{}  → Permission definitions ──→ CASL abilities
├── roles{}        → Role → permission mappings
├── connections{}  → Connection pool config
├── limits{}       → Rate and query limits
├── audit{}        → Audit logging
└── adapter        → HTTP adapter
      ├── POST /data
      ├── POST /auth/*
      └── GET /schema
```

## Request Lifecycle

Every query follows this exact path:

1. **HTTP** -- Adapter receives `POST /data` with JWT in `Authorization` header. The body contains parameterized SQL + params from the Drizzle Proxy client.
2. **Auth** -- `verifyToken()` validates the JWT signature. `findUser()` retrieves the user record. `resolveSession()` enriches with roles and org memberships.
3. **Permissions** -- CASL evaluates which permissions apply based on the user's role. The engine validates the incoming SQL, injects WHERE filters, strips unauthorized columns, and for writes, validates data against `check` rules and injects `preset` values.
4. **Query Execution** -- Executes the permission-filtered SQL directly against the target database (Postgres, MySQL, SQLite, or CSV) using its native driver.
5. **Response** -- Results are serialized to JSON and returned to the client.

No step can be skipped. There is no "bypass" mode in production. All SQL is parameterized (preventing injection), and the server validates and modifies every query before execution.

---

## Request Pipeline

The complete 9-step request pipeline from incoming request to JSON response.

URL: /docs/advanced/request-pipeline

Every `POST /data` request passes through a 9-step pipeline. The client sends parameterized SQL + params via [Drizzle Proxy](https://orm.drizzle.team/docs/connect-drizzle-proxy), and the server validates, applies permissions, and executes. Understanding this pipeline helps you debug permission issues, optimize queries, and reason about security.

This pipeline applies to both HTTP (`POST /data`) and [PostgreSQL wire protocol](/docs/backend/adapters/pg-wire) requests. The wire protocol replaces steps 1-3 with TCP connection handshake + PG protocol parsing, but steps 4-9 are identical.
```text
Incoming Request — POST /data + Bearer JWT + SQL + params
  │
  ▼
1. Rate Limiting — Per-user (200/min) and per-IP (500/min)
  │
  ▼
2. Body Validation — Parse SQL + params, validate structure
  │
  ▼
3. JWT Extraction — Decode token from Authorization header
  │
  ▼
4. Session Resolution — resolveSession(user, db) → enriched user
  │
  ▼
5. Role Injection — Map user.role → permission slugs
  │
  ▼
6. Permission Check — Validate SQL → Inject filters → Restrict columns → Check → Preset
  │
  ▼
6a. Middleware (before next) — Custom TypeScript logic (optional)
  │
  ▼
7. Query Execution — next() runs the permission-filtered SQL
  │
  ▼
7a. Middleware (after next) — Custom TypeScript logic (optional)
  │
  ▼
8. Audit Log — Record query, params, duration, user
  │
  ▼
9. Response — Return JSON result to client
```

## Step-by-Step

### 1. Rate Limiting

The engine enforces rate limits before processing any request logic:

```typescript
limits: {
  rateLimitPerUser: 200, // req/min per user
  rateLimitPerIP: 500,   // req/min per IP
}
```

If exceeded, returns `429 Too Many Requests`.

### 2. Body Validation

The request body contains parameterized SQL + params sent by the Drizzle Proxy client:

```json
{
  "sql": "SELECT \"id\", \"amount\", \"status\" FROM \"main\".\"orders\" WHERE \"status\" = $1 LIMIT $2",
  "params": ["active", 100],
  "method": "all"
}
```

The `method` field indicates the query type: `all` (returns rows as arrays), `get` (single row), `values` (raw values), or `run` (no return).

Invalid structure returns `400 Bad Request`.

### 3. JWT Extraction

The `Authorization: Bearer <token>` header is extracted and passed to the auth provider's `verifyToken` method. Invalid or expired tokens return `401 Unauthorized`.

### 4. Session Resolution

The auth provider's `findUser` locates the user record, then `resolveSession` enriches it with additional data:

```typescript
// Input: decoded JWT payload
// Output: enriched session object
{
  id: 'usr_123',
  email: 'alice@example.com',
  org_ids: ['org_1', 'org_2'],
  current_org_id: 'org_1',
  role: 'editor',
}
```

User not found returns `401 Unauthorized`.

### 5. Role Injection

The engine looks up `user.role` in the `roles` config and resolves the list of active permission slugs:

```typescript
// user.role = 'editor'
// roles.editor = ['view_own_orders', 'edit_org_orders', 'create_orders']
// → Active permissions: view_own_orders, edit_org_orders, create_orders
```

### 6. Permission Check

The engine parses the incoming SQL and evaluates each active permission against it:

1. **Table match** — Does any permission cover the tables in the SQL?
2. **Operation match** — Does that permission allow the detected operation (SELECT/INSERT/UPDATE/DELETE)?
3. **Column match** — Are the referenced columns in the allowed list?
4. **Filter injection** — Inject WHERE clauses from the permission's `filter` into the SQL
5. **FK traversal** — Resolve relationship paths in filters to subqueries
6. **Check validation** — For writes, validate parameter values against `check` rules
7. **Preset injection** — For writes, inject `preset` values into the SQL

Query limits are also enforced at this stage:

```typescript
limits: {
  maxRows: 10_000,     // max rows a query can return
  maxRelationDepth: 3, // orders → items → product = depth 3
  maxFilterNesting: 5, // nested $and/$or/$not levels
}
```

If any check fails, returns `403 Forbidden`.

### 6a. Middleware (before next)

If the matching permission defines a `middleware` function, it runs now with destructured parameters (`user`, `db`, `table`, `operation`, `columns`, `query`, `input`, `filter`) and a `next()` function.
Before calling `next()`, the middleware can:

- **Throw** to reject the request with `403 Forbidden`
- **Pass overrides** to `next({ filter, input, columns, db })` to modify the query
- **Run queries** via `db` to look up related data
- **Wrap in `db.transaction()`** for atomic operations

See [Middleware](/docs/backend/permissions/middleware) for details.

### 7. Query Execution

The permission-filtered SQL is executed directly against the target database using its native driver (Postgres, MySQL, etc.):

```typescript
limits: {
  queryTimeout: 30_000, // kill slow queries (ms)
}
```

Timeout returns `408 Request Timeout`.

### 7a. Middleware (after next)

After `next()` returns the query results, the middleware can:

- **Transform rows** — redact fields, add computed columns, filter results
- **Run side effects** — write audit entries, trigger notifications
- **Return rows unchanged** — pass through when no transformation is needed

See [Middleware](/docs/backend/permissions/middleware) for details.

### 8. Audit Log

If audit logging is enabled, the engine records:

- User ID, role, IP address
- Table, operation
- SQL query and parameters (if configured)
- Execution duration
- Success or error status

### 9. Response

The query results are returned as JSON:

```json
{
  "data": [
    { "id": 1, "amount": 500, "status": "active" },
    { "id": 2, "amount": 1200, "status": "draft" }
  ],
  "count": 2
}
```

## Actions Pipeline

Requests to `POST /actions/{actionName}` follow a shorter pipeline — no SQL parsing or permission filtering, just authentication and role checks:

```text
POST /actions/incrementStock + Bearer JWT + { productId, amount }
  │
  ▼
1. Rate Limiting
  │
  ▼
2. JWT Extraction + Session Resolution
  │
  ▼
3. Role check — is 'action_incrementStock' in the user's role array?
  │
  ▼
4. Input validation — parse input against Zod schema (400 if invalid)
  │
  ▼
5. Execute action function — run({ user, db }, validatedInput)
  │
  ▼
6. Audit Log + Response
```

See [Actions](/docs/backend/actions) for details.
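The before/after middleware hooks (steps 6a and 7a) can be pictured with a plain-TypeScript stand-in. This is a conceptual sketch, not the engine's real types: the context shape loosely mirrors the destructured parameters described above, and `internal_notes` is a hypothetical column used only for illustration:

```typescript
type Row = Record<string, unknown>

// Conceptual middleware: reject before next(), transform rows after next().
// Not the real engine API — a simplified stand-in for illustration.
async function exampleMiddleware(
  ctx: { user: { role: string }; operation: string },
  next: () => Promise<Row[]>,
): Promise<Row[]> {
  // 6a — before next(): throwing rejects the request (403 Forbidden)
  if (ctx.operation === 'delete' && ctx.user.role !== 'admin') {
    throw new Error('403 Forbidden')
  }

  // 7 — next() stands in for running the permission-filtered SQL
  const rows = await next()

  // 7a — after next(): redact a hypothetical internal_notes column
  return rows.map(({ internal_notes, ...rest }) => rest)
}

// Usage with a fake next() standing in for query execution:
exampleMiddleware(
  { user: { role: 'editor' }, operation: 'select' },
  async () => [{ id: 'ord_1', amount: 250, internal_notes: 'VIP customer' }],
).then((rows) => console.log(rows)) // [{ id: 'ord_1', amount: 250 }]
```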
## Error Responses

| Status | Step | Cause |
|---|---|---|
| `400` | Body Validation | Invalid request structure or malformed SQL |
| `401` | JWT / Session | Invalid token or user not found |
| `403` | Permission Check | No permission for table, operation, or column |
| `408` | Query Execution | Query timeout exceeded |
| `429` | Rate Limiting | Rate limit exceeded |
| `500` | Any | Internal server error |

---

## Engine Modes

Programmatic vs Interactive mode.

URL: /docs/advanced/engine-modes

superapp supports two engine modes that control how permissions and connections are managed. Choose the mode that matches your team's workflow.

## Side-by-Side Comparison

| Feature | `programmatic` | `interactive` |
|---|---|---|
| **Permissions defined in** | Code (`createEngine` config) | Admin UI (stored in metadata DB) |
| **Connections defined in** | Code (`createEngine` config) | Admin UI (stored in metadata DB) |
| **Roles defined in** | Code (`createEngine` config) | Admin UI (stored in metadata DB) |
| **Version control** | Git-tracked, PR-reviewed | Database-stored, audit-logged |
| **Admin UI** | Read-only dashboard (view-only access to config defined in code) | Full editing capabilities (create, edit, and delete permissions, connections, and roles) |
| **Deployment** | Config changes require redeploy | Changes take effect immediately |
| **Default** | Yes | No |
| **Best for** | Engineering teams, CI/CD pipelines | Non-technical admins, rapid iteration |

## Programmatic Mode

In programmatic mode, permissions, connections, and roles are defined entirely in your `createEngine` configuration. The source code is the single source of truth.
```typescript const engine = createEngine({ mode: 'programmatic', // this is the default connections: { main: process.env.PG_URL!, }, permissions: { view_own_orders: { name: 'View own orders', table: 'main.orders', roles: ['viewer', 'editor'], select: { columns: ['id', 'amount', 'status', 'created_at'], where: { customer_id: { $eq: '$user.customer_id' } }, }, }, edit_own_orders: { name: 'Edit own orders', table: 'main.orders', roles: ['editor'], update: { columns: ['amount', 'status', 'notes'], where: { customer_id: { $eq: '$user.customer_id' } }, validate: { status: { $in: ['draft', 'active'] } }, overwrite: { updated_by: '$user.id' }, }, }, }, }) ``` ### How It Works 1. You define permissions and roles in TypeScript. 2. Changes go through your normal code review process (PR, CI, tests). 3. The admin UI (if accessed) shows permissions in read-only mode for visibility. 4. The engine reads the config once at startup. Changes require a restart or redeploy. ### When to Use Programmatic Mode - Your team uses git-based workflows and wants permissions in version control. - You want PR reviews on permission changes. - You need reproducible deployments where the same code always produces the same behavior. - You have a CI/CD pipeline that deploys config changes. - Security is paramount and you want to audit permission changes in git history. ## Interactive Mode In `interactive` mode, the admin UI becomes the primary interface for managing permissions, connections, and roles. Changes are stored in the metadata database and take effect immediately — no code changes or redeployments needed. ```typescript const engine = createEngine({ mode: 'interactive', superapp_db: process.env.TURSO_URL ?? './superapp.db', masterKey: process.env.SUPERAPP_MASTER_KEY!, auth: betterAuthProvider({ /* ... */ }), // No permissions or roles here — they live in the admin UI }) ``` ### How It Works 1. Start the engine with `mode: 'interactive'` and a `masterKey`. 2. 
Access the admin UI (served by the engine on the `/admin` route). 3. Add connections, define permissions, and assign roles visually. 4. Changes are persisted to the metadata database and applied immediately. 5. All admin actions are audit-logged. ### Interactive Mode Capabilities In `interactive` mode, the admin UI provides: - **Connection management** -- Add, edit, and test database connections. - **Permission editor** -- Visual builder for where clauses, columns, validate, and default/overwrite rules. - **Role management** -- Assign roles inline on permissions and actions. - **User management** -- View users, assign roles, manage sessions. - **Audit log viewer** -- Browse query and admin action logs. - **Schema browser** -- Explore attached database tables and columns. ### When to Use Interactive Mode - Non-technical team members need to manage access rules. - You want to iterate on permissions without redeploying. - You are building a multi-tenant platform where customers configure their own access rules. - Rapid prototyping where you are still figuring out the permission model. ## Switching Between Modes You can switch from programmatic to `interactive` mode at any time. When switching to `interactive` mode, any permissions defined in code are imported into the metadata database as a starting point. Switching from `interactive` back to programmatic requires exporting your current permissions from the admin UI and adding them to your `createEngine` config. ```typescript // Start in programmatic mode during development const engine = createEngine({ mode: 'programmatic', permissions: { /* ... */ }, }) // Later, switch to interactive mode for production const engine = createEngine({ mode: 'interactive', masterKey: process.env.SUPERAPP_MASTER_KEY!, // permissions from code are imported into the metadata DB on first run }) ``` ## Hybrid Approach Some teams use programmatic mode in production with `interactive` mode in staging. This lets you: 1. 
Prototype permission changes in the staging admin UI. 2. Export the tested configuration. 3. Add it to your codebase and deploy through CI/CD. ```typescript const engine = createEngine({ mode: process.env.NODE_ENV === 'production' ? 'programmatic' : 'interactive', permissions: { // These are always available as a baseline view_own_orders: { /* ... */ }, }, masterKey: process.env.SUPERAPP_MASTER_KEY!, }) ``` --- ## Multi-Database Querying "Connect to Postgres, MySQL, SQLite, and CSV simultaneously." URL: /docs/advanced/multi-database superapp connects to multiple databases simultaneously using native drivers for each database type. Each connection gets its own driver instance and connection pool, and all queries are routed directly to the target database. ## How Connections Work When the engine starts, each configured connection is initialized with its native driver (e.g., `pg` for Postgres, `mysql2` for MySQL, `better-sqlite3` for SQLite). The engine manages connection pools per database and routes queries based on the table namespace. ``` createEngine({ connections: { main: 'postgres://...', warehouse: 'mysql://...', legacy: './legacy.db', imports: { directory: './data/' }, }, }) ``` ```text Query: db.select().from(schema.orders) │ ▼ Engine routes by namespace ├── main.* → Postgres (pg driver) ├── warehouse.* → MySQL (mysql2 driver) ├── legacy.* → SQLite (better-sqlite3) └── imports.* → CSV (in-memory) ``` After initialization, every table is accessible through its namespace: `main.orders`, `warehouse.events`, `legacy.customers`, `imports.products_2024`. 
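The namespace routing shown above can be sketched as a simple lookup: split `{connection}.{table}` and resolve the driver for that connection. The driver names come from the diagram; the function and map themselves are illustrative, not the engine's internals:

```typescript
// Toy namespace router. Connection → driver mapping mirrors the example
// config above; the real engine resolves actual driver instances and pools.
const drivers: Record<string, string> = {
  main: 'pg',
  warehouse: 'mysql2',
  legacy: 'better-sqlite3',
  imports: 'csv (in-memory)',
}

function routeTable(namespacedTable: string): { driver: string; table: string } {
  const dot = namespacedTable.indexOf('.')
  if (dot === -1) {
    throw new Error(`Expected "{connection}.{table}", got "${namespacedTable}"`)
  }
  const connection = namespacedTable.slice(0, dot)
  const table = namespacedTable.slice(dot + 1)
  const driver = drivers[connection]
  if (!driver) throw new Error(`Unknown connection "${connection}"`)
  return { driver, table }
}
```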
## Connection Namespacing Every table reference in superapp follows the pattern `{connection}.{table}`: ``` main.orders → Postgres table "orders" main.customers → Postgres table "customers" warehouse.events → MySQL table "events" warehouse.sessions → MySQL table "sessions" legacy.users → SQLite table "users" imports.products → CSV file "products.csv" ``` This namespacing is consistent across the entire system: ```typescript // Client queries db.select().from(schema.orders) // main connection db.select().from(schema.events) // warehouse connection // Permission definitions permissions: { view_orders: { table: 'main.orders', // same namespace // ... }, view_events: { table: 'warehouse.events', // same namespace // ... }, } ``` ## Same Permission Pipeline Regardless of Source The permission engine does not care which database type backs a table. Every query -- whether it hits Postgres, MySQL, SQLite, or a CSV file -- passes through the same pipeline: ``` Client request (Drizzle Proxy: SQL + params) │ ▼ JWT verification │ ▼ Permission evaluation (validate SQL, inject where clauses, restrict columns, validate writes) │ ▼ Query execution → routed to the correct database driver │ ▼ Response ``` A permission on a CSV table works identically to a permission on a Postgres table: ```typescript permissions: { view_imported_products: { table: 'imports.products', // CSV source roles: ['viewer'], select: { columns: ['sku', 'name', 'price', 'category'], where: { category: { $in: '$user.allowed_categories' }, }, }, }, view_orders: { table: 'main.orders', // Postgres source roles: ['viewer'], select: { columns: ['id', 'amount', 'status'], where: { customer_id: { $eq: '$user.customer_id' }, }, }, }, } ``` Both permissions compile to the same CASL ability structure and produce the same kind of SQL WHERE clause. The engine handles routing transparently. ## Provider Types The engine infers the database type from the connection URL protocol. 
No explicit provider imports needed for built-in databases. ### Built-in Providers | Database | Connection Config | Driver | |---|---|---| | Postgres | `'postgres://...'` or `process.env.PG_URL!` | `pg` | | MySQL | `'mysql://...'` or `process.env.MYSQL_URL!` | `mysql2` | | SQLite | `'./path/to/db.db'` or `process.env.SQLITE_PATH!` | `better-sqlite3` | | CSV | `{ directory: string }` | built-in | ### Connecting Multiple Databases Just pass the connection URLs. The engine loads the correct driver automatically based on the protocol: ```typescript const engine = createEngine({ connections: { main: process.env.PG_URL!, warehouse: process.env.MYSQL_URL!, }, }) ``` ## Performance Considerations - **Native drivers** connect directly to the database with no intermediary, minimizing latency. - **Connection pooling** is managed per provider. Each database connection maintains its own pool. - **CSV sources** are loaded into memory at startup. Keep CSV files small or use a database for large datasets. - Queries are scoped to a single connection. Each query targets one database. --- ## Audit Logging Track every query with user context, parameters, duration, and automatic retention policies. URL: /docs/advanced/audit-logging Audit logging records every data request with the user, query, parameters, and execution time. Enable it for compliance, debugging, and security monitoring. 
```typescript const engine = createEngine({ connections: { main: process.env.PG_URL!, }, audit: { enabled: true, }, }) ``` ## Full Configuration ```typescript audit: { enabled: true, // Enable audit logging logQuery: true, // Log the SQL query text logParams: true, // Log query parameters logDuration: true, // Log execution time in milliseconds retention: '90d', // Auto-delete logs older than 90 days } ``` ## Options Reference | Option | Type | Default | Description | |---|---|---|---| | `enabled` | `boolean` | `false` | Enable or disable audit logging | | `logQuery` | `boolean` | `true` | Include the SQL query text in the log | | `logParams` | `boolean` | `true` | Include query parameters in the log | | `logDuration` | `boolean` | `true` | Include execution duration in the log | | `retention` | `string` | `'90d'` | Auto-delete logs after this period (e.g., `'30d'`, `'1y'`) | ## What Gets Logged Each audit log entry contains: | Field | Description | |---|---| | `timestamp` | When the request was made | | `user_id` | Authenticated user ID | | `table` | Target table (e.g., `main.orders`) | | `operation` | `select`, `insert`, `update`, `delete` | | `query` | Generated SQL (if `logQuery: true`) | | `params` | Query parameters (if `logParams: true`) | | `duration_ms` | Execution time in milliseconds (if `logDuration: true`) | | `ip_address` | Client IP address | | `role` | User's role at the time of the request | | `status` | `success` or `error` | ## Retention The `retention` option automatically purges old audit logs: ```typescript audit: { enabled: true, retention: '90d', // Keep logs for 90 days } ``` Supported formats: - `'30d'` — 30 days - `'90d'` — 90 days - `'1y'` — 1 year - `'365d'` — 365 days ## PII Considerations When `logParams: true`, query parameters are stored in plain text. 
If your queries include personally identifiable information (PII), consider: ```typescript // Option 1: Disable parameter logging audit: { enabled: true, logParams: false, } // Option 2: Keep params but reduce retention audit: { enabled: true, logParams: true, retention: '30d', } ``` ## Querying Audit Logs Audit logs are stored in the engine's internal database and accessible through the admin API: ```bash # Get recent audit logs curl -H "Authorization: Bearer $MASTER_KEY" \ https://myapp.com/admin/api/audit?limit=100 # Filter by user curl -H "Authorization: Bearer $MASTER_KEY" \ https://myapp.com/admin/api/audit?user_id=usr_123 # Filter by table and operation curl -H "Authorization: Bearer $MASTER_KEY" \ https://myapp.com/admin/api/audit?table=main.orders&operation=delete ``` ## Disabling for Development Turn off audit logging in development to reduce noise: ```typescript audit: { enabled: process.env.NODE_ENV === 'production', } ``` --- ## Overview Defense-in-depth security architecture. URL: /docs/advanced/security/overview `@superapp/backend` is built on the principle that no query executes without passing through a permission check. The client sends parameterized SQL via [Drizzle Proxy](https://orm.drizzle.team/docs/connect-drizzle-proxy), and the server validates, applies permissions, and executes. There is no god mode. ``` Client Request (Drizzle Proxy: SQL + params + JWT) │ ├─ 1. Rate Limiting ─────── per-user and per-IP throttle ├─ 2. JWT Validation ────── algorithm allowlist, issuer, audience, expiry ├─ 3. Permission Check ──── validate SQL, table, operation, columns, row-level filters ├─ 4. Query Isolation ───── per-request connection from pool, sandboxed └─ 5. Audit Log ─────────── query, params, duration, user, IP ``` ## Defense Layers ### Parameterized SQL Only Clients send parameterized SQL built by Drizzle ORM (via [Drizzle Proxy](https://orm.drizzle.team/docs/connect-drizzle-proxy)), never raw interpolated strings. 
All user values are passed as parameters (`$1`, `$2`, ...), never concatenated into the query string. SQL injection is structurally impossible because user input never interpolates into query strings. ### No God Mode Every query must match at least one permission. If no permission grants access to a table, the request is rejected with `403 Forbidden`. There is no superuser bypass in the data path -- even admin operations go through the admin API with master key authentication. ### JWT Validation Tokens are validated against an algorithm allowlist. Weak algorithms (`HS256`, `none`) are rejected by default. Claims are verified for issuer, audience, and expiry with configurable clock skew tolerance. ### Permission Enforcement Permissions define which tables, operations, and columns a role can access. Row-level filters are injected into every query automatically -- users cannot see or modify rows they are not authorized to access. ### Query Isolation Each request gets an isolated database connection from the pool. The engine validates and rewrites all SQL before execution. Dangerous operations (filesystem access, schema modification, system functions) are blocked at the permission layer. ### Encryption at Rest Connection secrets (database URLs, API keys) are encrypted with AES-256-GCM using per-project keys derived from your master key via HKDF. Secrets are displayed once at creation time and never again. ## Security Configuration All security settings are configured through `createEngine`: ```typescript const engine = createEngine({ connections: { /* ... 
*/ }, jwt: { algorithms: ['RS256', 'ES256'], // allowed signing algorithms issuer: 'https://auth.myapp.com', // reject other issuers audience: 'https://api.myapp.com', // reject other audiences clockSkewSeconds: 30, // clock drift tolerance }, limits: { maxRows: 10_000, // max rows a query can return maxRelationDepth: 3, // orders → items → product = depth 3 maxFilterNesting: 5, // nested $and/$or/$not levels maxFilterConditions: 20, // total conditions per query maxRequestBodySize: '1mb', // max JSON body size queryTimeout: 30_000, // kill slow queries (ms) rateLimitPerUser: 200, // req/min per user rateLimitPerIP: 500, // req/min per IP }, masterKey: process.env.SUPERAPP_MASTER_KEY!, // admin API key }) ``` ## Next Steps - [Session Isolation](/docs/advanced/security/session-isolation) -- Query isolation and connection pooling - [JWT Validation](/docs/advanced/security/jwt-validation) -- Algorithm allowlists and claims verification - [Request Limits](/docs/advanced/security/request-limits) -- Rate limiting and query constraints - [Encryption](/docs/advanced/security/encryption) -- Connection secret encryption and key management --- ## Session Isolation Every request gets an isolated database connection. URL: /docs/advanced/security/session-isolation Each request runs queries against an isolated database connection acquired from a per-driver connection pool. Connections are sandboxed from other sessions and dangerous SQL statements are blocked before execution. ``` User A ──→ Connection from pool (permission-filtered, sandboxed) User B ──→ Connection from pool (permission-filtered, sandboxed) User C ──→ Connection from pool (permission-filtered, sandboxed) ``` ## How It Works When a user makes a `/data` request, the engine acquires a database connection from the appropriate pool (Postgres, MySQL, or SQLite). The permission-filtered SQL is executed against this connection. After the query completes, the connection is returned to the pool for reuse. 
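The acquire → execute → release cycle can be sketched with a toy pool. This is illustrative only — the real engine delegates pooling to each native driver (`pg`, `mysql2`, etc.):

```typescript
// Toy connection pool illustrating the per-request lifecycle described above.
type Conn = { id: number; busy: boolean }

class Pool {
  private conns: Conn[]
  constructor(size: number) {
    this.conns = Array.from({ length: size }, (_, id) => ({ id, busy: false }))
  }
  acquire(): Conn {
    const conn = this.conns.find((c) => !c.busy)
    if (!conn) throw new Error('Pool exhausted — request must wait')
    conn.busy = true
    return conn
  }
  release(conn: Conn): void {
    conn.busy = false // no per-user state survives the request
  }
}

// One request: acquire, run the permission-filtered SQL, always release.
function withConnection<T>(pool: Pool, run: (conn: Conn) => T): T {
  const conn = pool.acquire()
  try {
    return run(conn)
  } finally {
    pool.release(conn)
  }
}
```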
## What Is Blocked The engine validates and rewrites all incoming SQL before execution. Dangerous operations are rejected at the permission layer: | Blocked Category | Examples | |---|---| | Schema modification | `CREATE TABLE`, `DROP TABLE`, `ALTER TABLE` | | Filesystem access | `COPY TO`, `pg_read_file()`, `LOAD_FILE()` | | System functions | `current_setting()`, `pg_sleep()`, system catalogs | | Raw SQL execution | Only parameterized SQL from Drizzle Proxy is accepted | | Unauthorized tables | Any table not covered by a permission is inaccessible | All data access goes through the engine's permission layer, which validates and modifies the parameterized SQL received from the Drizzle Proxy client. Users cannot bypass permission filters or access unauthorized tables. ## Connection Pooling Each database driver maintains its own connection pool to minimize latency: ```typescript const engine = createEngine({ connections: { main: process.env.PG_URL!, // postgres pool warehouse: process.env.MYSQL_URL!, // mysql pool local: './data/app.db', // sqlite (single connection) }, limits: { queryTimeout: 30_000, // kill slow queries (ms) }, }) ``` - Each incoming request acquires a connection from the relevant pool - After the query completes, the connection is returned to the pool - If all connections are busy, new requests wait until one is available ## Resource Limits Each request is constrained to prevent a single user from consuming all server resources: - **Time** -- `queryTimeout` (default 30 seconds) applies to every query. Exceeded queries return `408 Request Timeout`. - **Rate** -- `rateLimitPerUser` and `rateLimitPerIP` cap request frequency per minute. - **Complexity** -- `maxRows`, `maxRelationDepth`, and `maxFilterNesting` cap query complexity. 
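The statement screening described under "What Is Blocked" can be approximated with a denylist check. This is only a sketch — the pattern list is abridged, and the real engine validates parsed SQL rather than regex-matching raw strings:

```typescript
// Toy screen for the blocked categories listed above. Abridged and
// illustrative: the engine works on parsed SQL and covers far more cases.
const BLOCKED_PATTERNS: RegExp[] = [
  /\b(create|drop|alter)\s+table\b/i, // schema modification
  /\bcopy\s+.*\bto\b/i,               // filesystem access (Postgres COPY TO)
  /\bpg_read_file\s*\(/i,             // filesystem access
  /\bload_file\s*\(/i,                // filesystem access (MySQL)
  /\bpg_sleep\s*\(/i,                 // system functions
  /\bcurrent_setting\s*\(/i,          // system functions
]

// Returns true if the statement passes the screen, false if it is rejected.
function screenSql(sql: string): boolean {
  return !BLOCKED_PATTERNS.some((p) => p.test(sql))
}
```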
```typescript const engine = createEngine({ limits: { queryTimeout: 30_000, // kill slow queries (ms) rateLimitPerUser: 200, // req/min per user rateLimitPerIP: 500, // req/min per IP maxRows: 10_000, // max rows a query can return }, }) ``` ## Request Lifecycle ``` 1. Request arrives with JWT 2. JWT validated → user identity extracted 3. Permissions evaluated → SQL validated and filtered 4. Connection acquired from pool 5. Permission-filtered SQL executed against the database 6. Results returned to client 7. Connection returned to pool ``` Connections do not persist any user state between requests. Each query starts from a clean slate with the permission-filtered SQL. --- ## JWT Validation Algorithm allowlists and claims verification. URL: /docs/advanced/security/jwt-validation Every request to `/data` must include a valid JWT in the `Authorization` header. The engine validates the token's algorithm, signature, and claims before processing the query. ```typescript const engine = createEngine({ jwt: { algorithms: ['RS256', 'ES256'], issuer: 'https://auth.myapp.com', audience: 'https://api.myapp.com', clockSkewSeconds: 30, }, }) ``` ## Configuration Options | Option | Type | Default | Description | |---|---|---|---| | `algorithms` | `string[]` | `['RS256', 'ES256']` | Allowed signing algorithms | | `issuer` | `string` | — | Expected `iss` claim value | | `audience` | `string` | — | Expected `aud` claim value | | `clockSkewSeconds` | `number` | `30` | Tolerance for `exp` and `nbf` checks | ## Algorithm Allowlist Only algorithms in the `algorithms` array are accepted. Tokens signed with any other algorithm are rejected with `401 Unauthorized`. 
The following algorithms are blocked by default and cannot be added to the allowlist: | Algorithm | Why Blocked | |---|---| | `none` | No signature verification -- trivially forgeable | | `HS256` | Symmetric key -- the server secret can be brute-forced or leaked | | `HS384` | Same as HS256 | | `HS512` | Same as HS256 | Recommended algorithms: | Algorithm | Key Type | Use Case | |---|---|---| | `RS256` | RSA 2048+ | Most common, wide library support | | `RS384` | RSA 2048+ | Higher security RSA | | `RS512` | RSA 2048+ | Highest security RSA | | `ES256` | ECDSA P-256 | Smaller tokens, faster verification | | `ES384` | ECDSA P-384 | Higher security ECDSA | | `ES512` | ECDSA P-521 | Highest security ECDSA | | `EdDSA` | Ed25519 | Modern, fast, small keys | ## Claims Validation Every token is checked for: | Claim | Check | Error | |---|---|---| | `exp` | Must be in the future (+ clock skew) | `401 Token expired` | | `nbf` | Must be in the past (- clock skew) | `401 Token not yet valid` | | `iss` | Must match `issuer` config (if set) | `401 Invalid issuer` | | `aud` | Must match `audience` config (if set) | `401 Invalid audience` | | `sub` | Must be present | `401 Missing subject` | ## Token Format Tokens must be sent in the `Authorization` header: ``` Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9... ``` ## Token Revocation The engine does not maintain a token blocklist. To revoke access: 1. **Short-lived tokens** -- set `exp` to 5-15 minutes. The auth provider issues fresh tokens on each request. 2. **Session invalidation** -- when using `betterAuthProvider`, call `authClient.signOut()` to invalidate the session server-side. The next token refresh will fail. 3. **Key rotation** -- rotate the signing key in your auth provider. All tokens signed with the old key become invalid immediately. 
```typescript // Client-side: sign out and invalidate session await authClient.signOut() // Server-side: better-auth handles session invalidation // The user's next request will fail JWT validation ``` ## Example with External Auth Provider If you use an external auth provider (Auth0, Clerk, Firebase), configure the JWT settings to match their token format: ```typescript const engine = createEngine({ jwt: { algorithms: ['RS256'], issuer: 'https://myapp.us.auth0.com/', audience: 'https://api.myapp.com', clockSkewSeconds: 60, }, }) ``` --- ## Encryption Connection secret encryption with AES-256-GCM. URL: /docs/advanced/security/encryption Connection secrets -- database URLs, API keys, passwords -- are encrypted at rest using AES-256-GCM with per-project derived keys. Secrets are displayed once at creation time and never again. ```typescript const engine = createEngine({ masterKey: process.env.SUPERAPP_MASTER_KEY!, connections: { main: process.env.PG_URL!, }, }) ``` ## Master Key The `masterKey` is the root secret for all encryption operations. It is never stored in the database -- only in your environment variables or secret manager. Generate a master key: ```bash openssl rand -hex 32 ``` This produces a 256-bit (64-character hex) key. Store it as an environment variable: ```bash SUPERAPP_MASTER_KEY=a1b2c3d4e5f6...your-64-char-hex-key ``` If the master key is lost, all encrypted connection secrets become unrecoverable. Store it in a secret manager (AWS Secrets Manager, HashiCorp Vault, Doppler) and back it up securely. ## Key Derivation The engine never uses the master key directly for encryption. 
Instead, it derives a unique encryption key for each project using HKDF (HMAC-based Key Derivation Function): ``` Master Key │ └─ HKDF-SHA256(masterKey, salt=projectId, info="superapp-encryption") │ └─ Per-Project Key (256-bit) │ ├─ Encrypt connection "main" URL ├─ Encrypt connection "warehouse" URL └─ Encrypt custom provider API keys ``` This ensures that even if one project's derived key is compromised, other projects remain secure. ## Encryption Details | Property | Value | |---|---| | Algorithm | AES-256-GCM | | Key size | 256 bits | | IV size | 96 bits (random per encryption) | | Auth tag | 128 bits | | KDF | HKDF-SHA256 | Each encryption operation uses a random 96-bit initialization vector (IV). The IV and authentication tag are stored alongside the ciphertext. The auth tag provides tamper detection -- if the ciphertext or IV is modified, decryption fails. ## Display-Once Flow When a connection is added through the admin UI or API, the secret (database URL, API key) follows a display-once flow: 1. User submits the connection config with the plaintext secret 2. Engine encrypts the secret with the per-project derived key 3. Ciphertext, IV, and auth tag are stored in the engine database 4. The plaintext secret is returned to the user **once** in the response 5. All subsequent reads return a masked value (`postgres://****:****@host:5432/db`) ``` POST /admin/api/connections { "name": "main", "type": "postgres", "url": "postgres://user:secret@host:5432/db" } → 201 Created { "name": "main", "type": "postgres", "url": "postgres://user:secret@host:5432/db" ← shown once } GET /admin/api/connections/main { "name": "main", "type": "postgres", "url": "postgres://****:****@host:5432/db" ← masked forever } ``` ## Key Rotation To rotate the master key: 1. Set the new master key and old master key in environment variables 2. Run the rotation command 3. 
Remove the old master key ```bash SUPERAPP_MASTER_KEY=new-key-here \ SUPERAPP_MASTER_KEY_OLD=old-key-here \ npx @superapp/backend rotate-keys ``` The rotation command: 1. Decrypts all secrets using the old derived keys 2. Re-encrypts all secrets using new derived keys 3. Updates the stored ciphertext 4. Verifies all secrets decrypt correctly with the new key Key rotation is atomic -- if any step fails, all changes are rolled back and the old key remains active. ## Programmatic Connections When connections are defined in code (not through the admin UI), the URL comes from environment variables and is never stored in the engine database: ```typescript const engine = createEngine({ connections: { main: process.env.PG_URL!, }, }) ``` In this case, encryption applies only to connections added at runtime through the admin API. The `masterKey` is still required for admin API authentication and schema token signing. --- ## Request Limits Rate limiting and query constraints. URL: /docs/advanced/security/request-limits The engine enforces limits on query complexity, request size, and request frequency to prevent abuse and protect downstream databases. 
```typescript const engine = createEngine({ limits: { maxRows: 10_000, // max rows a query can return maxRelationDepth: 3, // orders → items → product = depth 3 maxFilterNesting: 5, // nested $and/$or/$not levels maxFilterConditions: 20, // total conditions per query maxRequestBodySize: '1mb', // max JSON body size queryTimeout: 30_000, // kill slow queries (ms) rateLimitPerUser: 200, // req/min per user rateLimitPerIP: 500, // req/min per IP }, }) ``` ## All Limits | Limit | Default | Error | Description | |---|---|---|---| | `maxRows` | `10_000` | `400 Limit exceeds maximum of 10000` | Cap on rows returned per query | | `maxRelationDepth` | `3` | `400 Relation depth exceeds maximum of 3` | How many levels deep relations can be nested | | `maxFilterNesting` | `5` | `400 Filter nesting exceeds maximum of 5` | How many levels deep `$and`/`$or`/`$not` can nest | | `maxFilterConditions` | `20` | `400 Filter has too many conditions (max 20)` | Total filter conditions per query | | `maxRequestBodySize` | `'1mb'` | `413 Request body too large` | Maximum JSON body size | | `queryTimeout` | `30_000` | `408 Query timed out after 30000ms` | Maximum query execution time in milliseconds | | `rateLimitPerUser` | `200` | `429 Rate limit exceeded, retry after {n}s` | Requests per minute per authenticated user | | `rateLimitPerIP` | `500` | `429 Rate limit exceeded, retry after {n}s` | Requests per minute per IP address | ## Query Complexity Limits ### maxRows Caps the `limit` parameter on `findMany` queries. 
If a client requests more rows than allowed, the engine returns an error instead of silently truncating: ```typescript // Client request db.main.orders.findMany({ limit: 50_000 }) // → 400 Limit exceeds maximum of 10000 ``` ### maxRelationDepth Prevents deeply nested relation joins that could generate expensive multi-table queries: ```typescript // Depth 1: orders → customers // Depth 2: orders → customers → organizations // Depth 3: orders → customers → organizations → members (max) db.main.orders.findMany({ include: { customer: { include: { organization: { include: { members: true, // depth 3 — allowed }, }, }, }, }, }) ``` ### maxFilterNesting Limits how deeply `$and` and `$or` conditions can be nested inside each other: ```typescript // Nesting level 1 { $or: [ // Nesting level 2 { $and: [ { status: { $eq: 'active' } }, { amount: { $gt: 100 } }, ]}, { status: { $eq: 'pending' } }, ]} ``` ### maxFilterConditions Caps the total number of filter conditions across all nesting levels. This prevents clients from constructing queries with hundreds of `$or` branches. ## Request Size Limits ### maxRequestBodySize Rejects requests with JSON bodies larger than the configured size. Accepts string values like `'1mb'`, `'500kb'`, or `'2mb'`. ## Timeout ### queryTimeout Kills queries that run longer than the specified duration. The database connection is released and the client receives a `408` error. This protects against unoptimized queries or unexpected table scans. ## Rate Limiting Rate limits use a sliding window counter. When a client exceeds the limit, subsequent requests receive a `429` response with a `Retry-After` header. ### rateLimitPerUser Tracks requests by the `sub` claim in the JWT. Authenticated users share no state with other users. ### rateLimitPerIP Tracks requests by client IP address. This catches unauthenticated abuse and limits the blast radius of a compromised user token. Both limits are applied independently. A request must pass both checks. 
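A sliding-window counter of the kind described can be sketched as follows. The class and its storage are illustrative — the real implementation is internal to the engine:

```typescript
// Toy sliding-window rate limiter keyed by user id or IP address.
class SlidingWindowLimiter {
  private hits = new Map<string, number[]>()
  constructor(private limit: number, private windowMs: number = 60_000) {}

  // Returns true if the request is allowed, false if it should receive a 429.
  allow(key: string, now: number = Date.now()): boolean {
    const windowStart = now - this.windowMs
    const recent = (this.hits.get(key) ?? []).filter((t) => t > windowStart)
    if (recent.length >= this.limit) {
      this.hits.set(key, recent)
      return false
    }
    recent.push(now)
    this.hits.set(key, recent)
    return true
  }
}
```

In the engine, one such counter tracks the JWT `sub` claim (`rateLimitPerUser`) and another tracks the client IP (`rateLimitPerIP`); a request must pass both.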
## Overriding Per-Environment Use different limits for development and production: ```typescript const isProd = process.env.NODE_ENV === 'production' const engine = createEngine({ limits: { // Tighter limits in production, relaxed for local development maxRows: isProd ? 10_000 : 100_000, rateLimitPerUser: isProd ? 200 : 10_000, rateLimitPerIP: isProd ? 500 : 10_000, queryTimeout: isProd ? 30_000 : 120_000, }, }) ``` --- ## Orders Dashboard Full CRUD example with auth, validation, and row-level security. URL: /docs/examples/orders-dashboard **Use case:** A sales team app where reps manage their own orders, managers oversee their team, and admins have full access. ## Backend ```typescript title="engine.ts" const auth = betterAuthProvider({ secret: process.env.AUTH_SECRET!, userTable: { table: 'main.users', matchOn: { column: 'id', jwtField: 'id' }, }, }) const engine = createEngine({ connections: { main: process.env.PG_URL!, }, auth, permissions: { // ─── Sales Rep ────────────────────────────────────────── // // Can CRUD their own orders. Cannot see other reps' data. // New orders must start as 'draft' with a positive amount. // Can only delete drafts. rep_orders: { table: 'main.orders', roles: ['sales_rep'], select: { columns: ['id', 'amount', 'status', 'created_by', 'created_at'], where: { created_by: { $eq: '$user.id' } }, }, insert: { columns: ['amount', 'status'], validate: { amount: { $gt: 0 }, status: { $eq: 'draft' }, }, overwrite: { created_by: '$user.id', created_at: '$now', }, }, update: { columns: ['amount', 'status'], where: { created_by: { $eq: '$user.id' } }, validate: { amount: { $gt: 0 }, status: { $in: ['draft', 'active', 'cancelled'] }, }, overwrite: { updated_at: '$now' }, }, delete: { where: { created_by: { $eq: '$user.id' }, status: { $eq: 'draft' }, }, }, }, // ─── Manager ──────────────────────────────────────────── // // Can view all orders and update status on any order. // Cannot create or delete orders. 
manager_orders: { table: 'main.orders', roles: ['manager'], select: { columns: ['id', 'amount', 'status', 'created_by', 'created_at', 'updated_at'], }, update: { columns: ['status'], validate: { status: { $in: ['active', 'completed', 'cancelled'] }, }, overwrite: { updated_at: '$now' }, }, }, // ─── Admin ────────────────────────────────────────────── // // Full access to all orders. Can delete cancelled orders. admin_orders: { table: 'main.orders', roles: ['admin'], select: { columns: ['id', 'amount', 'status', 'created_by', 'created_at', 'updated_at', 'notes'], }, insert: { columns: ['amount', 'status', 'created_by', 'notes'], validate: { amount: { $gt: 0 } }, overwrite: { created_at: '$now' }, }, update: { columns: ['amount', 'status', 'notes'], validate: { amount: { $gt: 0 } }, overwrite: { updated_at: '$now' }, }, delete: { where: { status: { $in: ['cancelled', 'draft'] } }, }, }, }, }) ``` **What each role can do:** | | View | Create | Update | Delete | |---|---|---|---|---| | **sales_rep** | Own orders only | Yes — draft only, amount > 0 | Own orders — amount > 0, restricted statuses | Own drafts only | | **manager** | All orders + `updated_at` | No | Status only — active/completed/cancelled | No | | **admin** | All orders + `notes` | Yes — any status, can set `created_by` | All fields | Cancelled and draft orders | **How it works:** - **`roles`** — declared at the permission level. The entire permission block (all its operations) applies to those roles. - **`where`** — injects a WHERE clause. A rep querying orders automatically gets `WHERE created_by = ?` — they never see other reps' orders. Managers and admins have no `where`, so they see everything. - **`validate`** — validates incoming data. Sending `amount: -5` or `status: 'shipped'` is rejected with `403` before any SQL runs. - **`overwrite`** — always applied server-side. The client cannot override these — `created_by` and timestamps are enforced automatically. 
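The `validate` operators used above (`$gt`, `$eq`, `$in`) can be sketched as a small checker, with `overwrite` merged in afterward. This is illustrative only — the engine's real operator set and evaluation order are internal:

```typescript
// Toy evaluator for the validate / overwrite write rules shown above.
type Rule = { $gt?: number; $eq?: unknown; $in?: unknown[] }

function checkRule(value: unknown, rule: Rule): boolean {
  if (rule.$gt !== undefined && !(typeof value === 'number' && value > rule.$gt)) return false
  if (rule.$eq !== undefined && value !== rule.$eq) return false
  if (rule.$in !== undefined && !rule.$in.includes(value)) return false
  return true
}

// Validate the write, then apply server-side overwrites the client cannot touch.
function applyWriteRules(
  input: Record<string, unknown>,
  validate: Record<string, Rule>,
  overwrite: Record<string, unknown>,
): Record<string, unknown> {
  for (const [column, rule] of Object.entries(validate)) {
    if (!checkRule(input[column], rule)) {
      throw Object.assign(new Error(`Validation failed on "${column}"`), { status: 403 })
    }
  }
  return { ...input, ...overwrite } // overwrite always wins over client values
}
```

Even if a client sends its own `created_by`, the spread at the end replaces it with the server-side value, which is why the column cannot be spoofed.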
## Client

```tsx title="app/orders/page.tsx"
'use client'

// Row type inferred from the generated Drizzle schema
type Order = typeof schema.orders.$inferSelect

export default function OrdersDashboard() {
  const { data: session, isPending } = useSession()
  const db = useDb()
  const [orders, setOrders] = useState<Order[]>([])
  const [loading, setLoading] = useState(true)

  // Fetch orders — backend scopes to current user automatically
  const loadOrders = () => {
    if (!db) return
    setLoading(true)
    db.query.orders
      .findMany({
        orderBy: desc(schema.orders.createdAt),
        limit: 50,
      })
      .then(setOrders)
      .finally(() => setLoading(false))
  }

  useEffect(loadOrders, [db])

  // Create — backend enforces created_by and validates amount > 0
  const createOrder = async (amount: number) => {
    await db!.insert(schema.orders).values({ amount, status: 'draft' })
    loadOrders()
  }

  // Update — backend only allows own orders for reps, validates status
  const updateStatus = async (id: number, status: string) => {
    await db!.update(schema.orders).set({ status }).where(eq(schema.orders.id, id))
    loadOrders()
  }

  // Delete — backend only allows own draft orders for reps
  const deleteOrder = async (id: number) => {
    await db!.delete(schema.orders).where(eq(schema.orders.id, id))
    loadOrders()
  }

  if (isPending) return <p>Loading session...</p>
  if (!session) return <p>Please sign in to view orders.</p>
  if (loading) return <p>Loading orders...</p>

  return (
    <table>
      <thead>
        <tr>
          <th>ID</th><th>Amount</th><th>Status</th><th>Date</th><th>Actions</th>
        </tr>
      </thead>
      <tbody>
        {orders.map((order) => (
          <tr key={order.id}>
            <td>{order.id}</td>
            <td>${order.amount.toFixed(2)}</td>
            <td>{order.status}</td>
            <td>{new Date(order.createdAt).toLocaleDateString()}</td>
            <td>
              <button onClick={() => updateStatus(order.id, 'cancelled')}>Cancel</button>
              <button onClick={() => deleteOrder(order.id)}>Delete</button>
            </td>
          </tr>
        ))}
      </tbody>
    </table>
  )
}
```

---

## Multi-Tenant SaaS

Organization-scoped permissions setup.

URL: /docs/examples/multi-tenant-saas

**Use case:** A SaaS app where users belong to organizations. Each user only sees data from their own orgs. Roles control what they can do within those orgs.
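Before diving into the configuration, here is the core scoping rule in miniature — an illustrative model (the names `Session`, `visibleOrders`, etc. are hypothetical, not superapp's code) of what `where: { organization_id: { $in: '$user.org_ids' } }` does server-side:

```typescript
// Miniature model of org scoping: a row is visible only when its
// organization_id appears in the session's org membership list.
interface Session { id: string; org_ids: string[] }
interface OrderRow { id: number; organization_id: string; amount: number }

function visibleOrders(rows: OrderRow[], session: Session): OrderRow[] {
  // mirrors the injected WHERE organization_id IN (...) clause
  return rows.filter((r) => session.org_ids.includes(r.organization_id))
}

const session: Session = { id: 'usr_1', org_ids: ['org_a'] }
const rows: OrderRow[] = [
  { id: 1, organization_id: 'org_a', amount: 50 },
  { id: 2, organization_id: 'org_b', amount: 75 },
]
const visible = visibleOrders(rows, session)
// only order 1 is visible to this user
```

The real engine applies this filter in SQL before rows ever leave the database; the sketch only shows the membership test itself.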
## Backend ```typescript title="engine.ts" const auth = betterAuthProvider({ secret: process.env.AUTH_SECRET!, userTable: { table: 'main.users', matchOn: { column: 'id', jwtField: 'id' }, }, }) const engine = createEngine({ connections: { main: process.env.PG_URL!, }, auth, // Enrich session with the user's org memberships resolveSession: async (user, db) => { const memberships = await db.main.members.findMany({ select: ['organization_id', 'role'], where: { user_id: { $eq: user.id } }, }) return { ...user, org_ids: memberships.map((m) => m.organization_id), } }, permissions: { orders: { table: 'main.orders', // Read orders — scoped to user's orgs select: { roles: ['viewer', 'editor', 'admin'], columns: ['id', 'amount', 'status', 'organization_id', 'created_at'], where: { organization_id: { $in: '$user.org_ids' } }, }, // Create orders — org_id must match user's orgs, amount must be positive insert: { roles: ['editor', 'admin'], columns: ['amount', 'status', 'organization_id'], validate: { organization_id: { $in: '$user.org_ids' }, amount: { $gt: 0 }, status: { $eq: 'draft' }, }, overwrite: { created_by: '$user.id', created_at: '$now', }, }, // Update orders — only within user's orgs update: { roles: ['editor', 'admin'], columns: ['amount', 'status'], where: { organization_id: { $in: '$user.org_ids' } }, validate: { amount: { $gt: 0 }, status: { $in: ['draft', 'active', 'cancelled'] }, }, overwrite: { updated_at: '$now' }, }, // Delete draft orders — only within user's orgs delete: { roles: ['admin'], where: { organization_id: { $in: '$user.org_ids' }, status: { $eq: 'draft' }, }, }, }, }, }) ``` **How org scoping works:** 1. `resolveSession` fetches the user's org memberships and returns `org_ids` 2. Every operation uses `where: { organization_id: { $in: '$user.org_ids' } }` — injected as a WHERE clause 3. On inserts, `validate` checks that the `organization_id` the client sends matches one of the user's orgs 4. 
The client never filters by org — the server handles it automatically

## Client

```tsx title="app/orders/page.tsx"
'use client'

// Type is inferred from the generated Drizzle schema
type Order = typeof schema.orders.$inferSelect

export default function OrdersDashboard() {
  const { data: session, isPending } = useSession()
  const db = useDb()
  const [orders, setOrders] = useState<Order[]>([])
  const [loading, setLoading] = useState(true)

  const loadOrders = () => {
    if (!db) return
    setLoading(true)
    // No org filter needed — the server injects it automatically
    db.query.orders
      .findMany({
        orderBy: desc(schema.orders.createdAt),
        limit: 50,
      })
      .then(setOrders)
      .finally(() => setLoading(false))
  }

  useEffect(loadOrders, [db])

  // Create — backend validates org_id and enforces created_by
  const createOrder = async (amount: number, organizationId: number) => {
    await db!.insert(schema.orders).values({
      amount,
      status: 'draft',
      organizationId,
    })
    loadOrders()
  }

  // Update — backend scopes to user's orgs, validates status
  const updateStatus = async (id: number, status: string) => {
    await db!.update(schema.orders).set({ status }).where(eq(schema.orders.id, id))
    loadOrders()
  }

  // Delete — backend only allows draft orders in user's orgs
  const deleteOrder = async (id: number) => {
    await db!.delete(schema.orders).where(eq(schema.orders.id, id))
    loadOrders()
  }

  if (isPending) return <p>Loading session...</p>
  if (!session) return <p>Please sign in to view orders.</p>
  if (loading) return <p>Loading orders...</p>

  return (
    <table>
      <thead>
        <tr>
          <th>ID</th><th>Amount</th><th>Status</th><th>Date</th><th>Actions</th>
        </tr>
      </thead>
      <tbody>
        {orders.map((order) => (
          <tr key={order.id}>
            <td>{order.id}</td>
            <td>${order.amount.toFixed(2)}</td>
            <td>{order.status}</td>
            <td>{new Date(order.createdAt).toLocaleDateString()}</td>
            <td>
              <button onClick={() => updateStatus(order.id, 'cancelled')}>Cancel</button>
              <button onClick={() => deleteOrder(order.id)}>Delete</button>
            </td>
          </tr>
        ))}
      </tbody>
    </table>
  )
}
```

---

## createEngine Options

"Complete configuration reference."

URL: /docs/reference/server-api/create-engine

`createEngine` accepts a single configuration object and returns an engine instance. This page documents every available option.
```typescript
const engine = createEngine({
  connections: {
    main: 'postgres://localhost:5432/mydb',
  },
})
```

## Top-Level Options

| Option | Type | Default | Description |
|---|---|---|---|
| `connections` | `Record<string, string \| object>` | `{}` | Your database connections. Each key becomes a namespace prefix (e.g. `main.orders`). Type is auto-detected from the URL. |
| `auth` | `AuthProvider` | -- | Authentication provider. If omitted, the engine runs in open mode (no auth). |
| `permissions` | `Record<string, Permission>` | `{}` | Permission definitions. Keys must be snake_case slugs. See Permission Object for the full shape. |
| `actions` | `Record<string, Action>` | `{}` | Typed server-side functions callable via `POST /actions/{name}`. See Actions. |
| `masterKey` | `string` | -- | Admin API key. Required for interactive mode and `/admin` endpoints. Always use an env var, never hardcode. |
| `schemaEndpoint` | `SchemaEndpointConfig \| boolean` | `false` | Enable `GET /schema` for introspection. Pass `true` for defaults or an object to customize. |
| `limits` | `LimitsConfig` | -- | Query complexity and rate limit constraints. See limits section below. |
| `audit` | `AuditConfig` | -- | Audit logging configuration. See audit section below. |
| `jwt` | `JWTConfig` | -- | JWT verification settings. See jwt section below. |
| `security` | `SecurityConfig` | -- | CORS, CSP, and network security. See security section below. |
| `pgWire` | `PgWireConfig \| boolean` | -- | PostgreSQL wire protocol configuration. Pass `true` for defaults (port 5433, trust auth) or an object to customize. Call `engine.startPgWire()` to start the TCP server. |

## `connections`

Each connection maps a name to a database URL or config object. The engine infers the database type from the URL protocol — no explicit `type` field needed.
```typescript connections: { main: 'postgres://localhost:5432/mydb', // postgres warehouse: 'mysql://localhost:3306/warehouse', // mysql legacy: './legacy.db', // sqlite imports: { directory: './data/imports/' }, // csv } ``` ### Type Inference | URL / Config | Inferred Type | |---|---| | `postgres://...` or `postgresql://...` | PostgreSQL | | `mysql://...` | MySQL | | File path (`.db`, `.sqlite`) or `file:...` | SQLite | | `libsql://...` or Turso URL | SQLite (Turso) | | `{ directory: string }` | CSV | ### Examples ```typescript // Single Postgres connections: { main: process.env.PG_URL!, } // Multiple databases connections: { main: process.env.PG_URL!, warehouse: process.env.MYSQL_URL!, legacy: './legacy.db', imports: { directory: './data/imports/' }, } ``` ## `permissions` Permission definitions using Drizzle-like syntax. Each permission targets a table, declares which roles have access, and configures operations (`select`, `insert`, `update`, `delete`) individually. ```typescript permissions: { view_own_orders: { table: 'main.orders', roles: ['viewer', 'editor', 'admin'], select: { columns: ['id', 'amount', 'status', 'created_at'], where: { customer_id: { $eq: '$user.customer_id' } }, }, }, } ``` See [Permission Object](/docs/reference/server-api/permission-object) for the complete shape reference. ## `actions` Typed server-side functions with inline roles. Each action defines its allowed roles directly. ```typescript actions: { incrementStock: { roles: ['warehouse_manager', 'admin'], input: z.object({ productId: z.string(), amount: z.number().positive() }), output: z.object({ id: z.string(), stock: z.number() }), run: async ({ user, db }, { productId, amount }) => { const [updated] = await db .update(products) .set({ stock: sql`stock + ${amount}` }) .where(eq(products.id, productId)) .returning({ id: products.id, stock: products.stock }) return updated }, }, }, ``` See [Actions](/docs/backend/actions) for full documentation. 
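The dispatch flow behind an action call — role check first, then input validation, then `run` — can be sketched in isolation. All names below (`ActionDef`, `callAction`, the inline `validate`) are hypothetical stand-ins, not superapp's API; the real engine uses zod schemas for `input`:

```typescript
// Illustrative action dispatch: authorize by role, validate input, then run.
interface ActionDef<I, O> {
  roles: string[]
  validate: (input: unknown) => I // stand-in for the zod input schema
  run: (input: I) => Promise<O>
}

async function callAction<I, O>(
  action: ActionDef<I, O>,
  userRoles: string[],
  input: unknown,
): Promise<O> {
  // the caller must hold at least one of the action's declared roles
  if (!userRoles.some((role) => action.roles.includes(role))) {
    throw new Error('403: role not allowed')
  }
  // invalid input is rejected before `run` ever executes
  return action.run(action.validate(input))
}

// Example action: bump a pretend, in-memory stock counter
let stock = 10
const incrementStock: ActionDef<{ amount: number }, number> = {
  roles: ['warehouse_manager', 'admin'],
  validate: (input) => {
    const { amount } = input as { amount: number }
    if (typeof amount !== 'number' || amount <= 0) {
      throw new Error('400: invalid amount')
    }
    return { amount }
  },
  run: async ({ amount }) => (stock += amount),
}
```

A caller with the `admin` role succeeds; a caller with only `viewer` is rejected before validation or execution happens.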
## `limits` Query complexity and rate limit constraints to prevent abuse and protect your databases. ```typescript limits: { maxRows: 5000, // max rows a query can return maxRelationDepth: 2, // orders → items = depth 2 maxFilterNesting: 4, // nested $and/$or/$not levels maxFilterConditions: 30, // total conditions per query maxRequestBodySize: '2mb', // max JSON body size queryTimeout: 15_000, // kill slow queries (ms) rateLimitPerUser: 100, // req/min per user rateLimitPerIP: 300, // req/min per IP } ``` ## `audit` Configure audit logging for queries, permission denials, and admin actions. ```typescript audit: { enabled: true, // turn logging on/off logQuery: true, // log SQL statements logParams: false, // log query params (may contain sensitive data) logDuration: true, // log query time (ms) logUser: true, // log who ran each query logDenied: true, // log permission denials logAdminActions: true, // log admin UI changes retention: '90d', // auto-delete after 90 days piiRedaction: true, // strip emails & phone numbers } ``` ## `jwt` JWT verification settings. These configure how the auth provider validates incoming tokens. ```typescript jwt: { algorithms: ['RS256', 'ES256'], // allowed signing algorithms issuer: 'https://auth.myapp.com', // reject tokens from other issuers audience: 'https://api.myapp.com', // reject tokens for other audiences clockSkewSeconds: 10, // clock drift tolerance (seconds) } ``` ## `security` Network security, CORS, and access control settings. ```typescript security: { cors: { origin: ['https://myapp.com', 'https://admin.myapp.com'], // allowed origins credentials: true, // allow cookies maxAge: 86400, // preflight cache (seconds) }, csp: "default-src 'self'; script-src 'self'", // CSP header for admin UI adminIpAllowlist: ['10.0.0.0/8', '192.168.1.0/24'], // restrict /admin access } ``` ## `schemaEndpoint` Enable the `GET /schema` endpoint for schema introspection. 
```typescript schemaEndpoint: true // shorthand — enable with defaults schemaEndpoint: { enabled: true, // expose GET /schema token: process.env.SCHEMA_API_TOKEN, // require token for access } ``` ## `pgWire` Configure the PostgreSQL wire protocol server. When configured, calling `engine.startPgWire()` starts a TCP server that speaks the PostgreSQL wire protocol. ```typescript // Shorthand — enable with defaults (port 5433, trust auth) pgWire: true // Full configuration pgWire: { port: 5433, auth: { strategy: ['jwt', 'apiKey'] }, tls: { key: fs.readFileSync('server.key'), cert: fs.readFileSync('server.crt'), }, serverVersion: '16.3 (superapp)', } ``` See [PostgreSQL Wire Protocol](/docs/backend/adapters/pg-wire) for usage guide. ## Full Example ```typescript const engine = createEngine({ mode: 'programmatic', // config in code (or 'interactive') superapp_db: process.env.TURSO_URL ?? './superapp.db', // internal tables storage masterKey: process.env.SUPERAPP_MASTER_KEY!, // admin API key (use env var) connections: { main: process.env.PG_URL!, // namespace: main.* warehouse: process.env.MYSQL_URL!, // namespace: warehouse.* }, auth: betterAuthProvider({ // omit for open mode (no auth) secret: process.env.AUTH_SECRET!, userTable: { table: 'main.users', matchOn: { column: 'id', jwtField: 'id' }, columns: ['id', 'email', 'name'], }, }), jwt: { algorithms: ['RS256'], // allowed signing algorithms issuer: 'https://auth.myapp.com', // reject other issuers audience: 'https://api.myapp.com', // reject other audiences clockSkewSeconds: 5, // clock drift tolerance }, limits: { maxRows: 10_000, // max rows a query can return maxRelationDepth: 3, // orders → items → product = depth 3 maxFilterNesting: 5, // nested $and/$or/$not levels maxFilterConditions: 50, // total conditions per query maxRequestBodySize: '1mb', // max JSON body size queryTimeout: 30_000, // kill slow queries (ms) rateLimitPerUser: 200, // req/min per user rateLimitPerIP: 500, // req/min per IP }, audit: { 
enabled: true, // turn logging on/off logQuery: true, // log SQL statements logParams: false, // log query params (may contain sensitive data) logDuration: true, // log query time (ms) logUser: true, // log who ran each query logDenied: true, // log permission denials logAdminActions: true, // log admin UI changes retention: '90d', // auto-delete after 90 days piiRedaction: true, // strip emails & phone numbers }, security: { cors: { origin: ['https://myapp.com'], // allowed origins (never '*' in prod) credentials: true, // allow cookies }, adminIpAllowlist: ['10.0.0.0/8'], // restrict /admin access }, schemaEndpoint: { enabled: true, // expose GET /schema token: process.env.SCHEMA_API_TOKEN, // require token for access }, permissions: { view_own_orders: { table: 'main.orders', roles: ['viewer', 'warehouse_manager'], select: { columns: ['id', 'amount', 'status', 'created_at'], where: { customer_id: { $eq: '$user.customer_id' } }, }, }, }, actions: { incrementStock: { roles: ['warehouse_manager'], input: z.object({ productId: z.string(), amount: z.number().positive() }), output: z.object({ id: z.string(), stock: z.number() }), run: async ({ user, db }, { productId, amount }) => { const [updated] = await db .update(products) .set({ stock: sql`stock + ${amount}` }) .where(eq(products.id, productId)) .returning({ id: products.id, stock: products.stock }) return updated }, }, }, pgWire: { port: 5433, // TCP port for wire protocol auth: { strategy: ['jwt', 'apiKey'] }, // auth strategies (try in order) serverVersion: '16.3 (superapp)', // version string for clients }, }) engine.startPgWire() ``` --- ## Permission Object "Complete permission shape reference." URL: /docs/reference/server-api/permission-object A permission object defines what a user can do with a specific table. Permissions are identified by `snake_case` slugs. 
Each permission declares which roles have access and which operations (select, insert, update, delete) are allowed — using the same patterns as Drizzle ORM.

## TypeScript Interface

```typescript
interface Permission {
  /** Target table in connection.table format */
  table: string

  /** Roles that have this permission */
  roles: string[]

  /** Human-readable name for display in admin UI */
  name?: string

  /** Optional description explaining the permission's purpose */
  description?: string

  /** SELECT operation config */
  select?: {
    /** Allowed columns to return. Omit for all columns. */
    columns?: string[]
    /** WHERE clause — restricts which rows are visible */
    where?: Record<string, unknown>
    /** Raw SQL appended to WHERE clause */
    sql?: string
    /** Max rows per query */
    limit?: number
    /** Custom middleware wrapping query execution */
    middleware?: MiddlewareFn
  }

  /** INSERT operation config */
  insert?: {
    /** Allowed columns the client can write */
    columns?: string[]
    /** Validate incoming values — reject with 403 if invalid */
    validate?: Record<string, unknown>
    /** Default values — used when client doesn't provide a value */
    default?: Record<string, unknown>
    /** Override values — always applied, client cannot override */
    overwrite?: Record<string, unknown>
    /** Custom middleware wrapping query execution */
    middleware?: MiddlewareFn
  }

  /** UPDATE operation config */
  update?: {
    /** Allowed columns the client can write */
    columns?: string[]
    /** WHERE clause — restricts which rows can be updated */
    where?: Record<string, unknown>
    /** Raw SQL appended to WHERE clause */
    sql?: string
    /** Validate incoming values — reject with 403 if invalid */
    validate?: Record<string, unknown>
    /** Default values — used when client doesn't provide a value */
    default?: Record<string, unknown>
    /** Override values — always applied, client cannot override */
    overwrite?: Record<string, unknown>
    /** Custom middleware wrapping query execution */
    middleware?: MiddlewareFn
  }

  /** DELETE operation config */
  delete?: {
    /** WHERE clause — restricts which rows can be deleted */
    where?: Record<string, unknown>
    /** Raw SQL appended to WHERE clause */
    sql?: string
    /**
Custom middleware wrapping query execution */
    middleware?: MiddlewareFn
  }
}

type MiddlewareFn = (
  params: {
    user: unknown
    db: unknown
    table: string
    operation: 'select' | 'insert' | 'update' | 'delete'
    columns?: string[]
    query: unknown
    input?: unknown
    filter?: unknown
  },
  next: (overrides?: unknown) => Promise<unknown>,
) => Promise<unknown>
```

## Field Reference

### `table`

| | |
|---|---|
| **Type** | `string` |
| **Required** | Yes |
| **Format** | `connection_name.table_name` |

The fully qualified table name this permission applies to. Must match a table in one of your configured connections.

```typescript
table: 'main.orders'
table: 'warehouse.events'
table: 'imports.products'
```

### `roles`

| | |
|---|---|
| **Type** | `string[]` |
| **Required** | Yes |

Which roles have this permission. When a user authenticates and `resolveSession` returns their role, the engine checks if that role is listed in the permission's `roles` array.

```typescript
roles: ['viewer', 'editor', 'admin']
roles: ['sales_rep']
```

### `name`

| | |
|---|---|
| **Type** | `string` |
| **Required** | No |
| **Used in** | Admin UI display, audit logs |

Human-readable label for the permission.

```typescript
name: 'View own orders'
```

### `description`

| | |
|---|---|
| **Type** | `string` |
| **Required** | No |
| **Used in** | Admin UI tooltip, documentation |

Explains what this permission does and why it exists.

```typescript
description: 'Allows users to read orders belonging to their organization. Restricted to non-financial columns.'
```

### `select`

| | |
|---|---|
| **Type** | `SelectConfig` |
| **Required** | No |

Configures the SELECT operation. If omitted, select is not allowed.

```typescript
select: {
  columns: ['id', 'amount', 'status', 'created_at'],
  where: { organization_id: { $eq: '$user.current_org_id' } },
  limit: 1000,
}
```

#### `select.columns`

An allowlist of column names returned in SELECT results. Omit to allow all columns.

```typescript
select: { columns: ['id', 'name', 'status'] }
```

#### `select.where`

A condition injected into the query's WHERE clause.
The user cannot see, modify, or remove this. Supports all [operators](/docs/backend/permissions/operators) and `$user.*` variable substitution. ```typescript // Simple equality select: { where: { customer_id: { $eq: '$user.id' } } } // Array membership select: { where: { organization_id: { $in: '$user.org_ids' } } } // Multiple conditions (AND) select: { where: { organization_id: { $eq: '$user.current_org_id' }, status: { $ne: 'archived' }, }, } // Nested relationship traversal select: { where: { organization: { members: { user_id: { $eq: '$user.id' } }, }, }, } ``` #### `select.sql` A raw SQL fragment appended to the WHERE clause for advanced filtering that operators cannot express. ```typescript select: { where: { organization_id: { $in: '$user.org_ids' } }, sql: "created_at >= CURRENT_DATE - INTERVAL '30 days'", } ``` #### `select.limit` Maximum rows this permission can return per query. The client's `limit` is capped to whichever is lower: the permission limit or the global `limits.maxRows`. ```typescript select: { limit: 100 } ``` #### `select.middleware` Custom TypeScript that wraps query execution. See [Middleware](/docs/backend/permissions/middleware). ### `insert` | | | |---|---| | **Type** | `InsertConfig` | | **Required** | No | Configures the INSERT operation. If omitted, insert is not allowed. ```typescript insert: { columns: ['amount', 'status'], validate: { amount: { $gt: 0 }, status: { $eq: 'draft' }, }, default: { created_at: '$now', }, overwrite: { created_by: '$user.id', organization_id: '$user.current_org_id', }, } ``` #### `insert.columns` An allowlist of column names the client can write. Omit to allow all columns. #### `insert.validate` Validation conditions the incoming data must satisfy. If the data fails, the operation is rejected with `403`. Unlike `where`, `validate` checks the *new* data, not existing rows. 
```typescript insert: { validate: { amount: { $gte: 0, $lte: 1_000_000 }, status: { $in: ['draft'] }, }, } ``` #### `insert.default` Default values applied when the client doesn't provide a value for the column. If the client sends a value, their value is used. ```typescript insert: { default: { created_at: '$now', status: 'draft', }, } ``` #### `insert.overwrite` Values that are always applied, regardless of what the client sends. The client cannot override these. Use for server-controlled fields like `created_by` or `organization_id`. ```typescript insert: { overwrite: { created_by: '$user.id', organization_id: '$user.current_org_id', }, } ``` #### `insert.middleware` Custom TypeScript that wraps query execution. See [Middleware](/docs/backend/permissions/middleware). ### `update` | | | |---|---| | **Type** | `UpdateConfig` | | **Required** | No | Configures the UPDATE operation. If omitted, update is not allowed. ```typescript update: { columns: ['amount', 'status'], where: { organization_id: { $in: '$user.org_ids' } }, validate: { amount: { $gt: 0 }, status: { $in: ['draft', 'active', 'cancelled'] }, }, default: { updated_at: '$now' }, overwrite: { updated_by: '$user.id' }, } ``` #### `update.columns` An allowlist of column names the client can write. #### `update.where` A condition injected into the query's WHERE clause, restricting which rows can be updated. ```typescript update: { where: { organization_id: { $in: '$user.org_ids' } }, } ``` #### `update.sql` A raw SQL fragment appended to the WHERE clause. #### `update.validate` Validation conditions the new values must satisfy. ```typescript update: { validate: { amount: { $gt: 0 }, status: { $in: ['draft', 'active', 'cancelled'] }, }, } ``` #### `update.default` Default values applied when the client doesn't provide a value for the column. If the client sends a value, their value is used. 
```typescript update: { default: { updated_at: '$now' }, } ``` #### `update.overwrite` Values that are always applied, regardless of what the client sends. ```typescript update: { overwrite: { updated_by: '$user.id' }, } ``` #### `update.middleware` Custom TypeScript that wraps query execution. See [Middleware](/docs/backend/permissions/middleware). ### `delete` | | | |---|---| | **Type** | `DeleteConfig` | | **Required** | No | Configures the DELETE operation. If omitted, delete is not allowed. ```typescript delete: { where: { customer_id: { $eq: '$user.id' }, status: { $eq: 'draft' }, }, } ``` #### `delete.where` A condition injected into the query's WHERE clause, restricting which rows can be deleted. #### `delete.sql` A raw SQL fragment appended to the WHERE clause. #### `delete.middleware` Custom TypeScript that wraps query execution. See [Middleware](/docs/backend/permissions/middleware). ## default vs overwrite | | `default` | `overwrite` | |---|---|---| | **Purpose** | Fill in missing values | Force server-controlled values | | **Client control** | Client value wins if provided | Client value is always replaced | | **Use case** | Sensible defaults, timestamps | Ownership fields, org scoping | | **Available on** | `insert`, `update` | `insert`, `update` | ```typescript insert: { // If client doesn't send created_at, use current time default: { created_at: '$now' }, // Always set created_by to the current user, no matter what client sends overwrite: { created_by: '$user.id' }, } ``` ## Complete Examples ### Read-Only Permission ```typescript view_own_orders: { table: 'main.orders', roles: ['viewer', 'editor', 'admin'], description: 'Read orders belonging to the user\'s organization', select: { columns: ['id', 'amount', 'status', 'customer_id', 'created_at'], where: { organization_id: { $eq: '$user.current_org_id' } }, limit: 1000, }, } ``` ### Full CRUD Permission ```typescript manage_team_tasks: { table: 'main.tasks', roles: ['editor', 'admin'], description: 
'Full CRUD on tasks for the user\'s team', select: { columns: ['id', 'title', 'description', 'status', 'priority', 'assigned_to', 'due_date'], where: { team_id: { $in: '$user.team_ids' } }, }, insert: { columns: ['title', 'description', 'status', 'priority', 'assigned_to', 'due_date'], validate: { status: { $in: ['todo', 'in_progress', 'done', 'cancelled'] }, priority: { $in: ['low', 'medium', 'high', 'critical'] }, }, overwrite: { organization_id: '$user.current_org_id', created_by: '$user.id', }, }, update: { columns: ['title', 'description', 'status', 'priority', 'assigned_to', 'due_date'], where: { team_id: { $in: '$user.team_ids' } }, validate: { status: { $in: ['todo', 'in_progress', 'done', 'cancelled'] }, priority: { $in: ['low', 'medium', 'high', 'critical'] }, }, overwrite: { updated_by: '$user.id' }, }, delete: { where: { team_id: { $in: '$user.team_ids' } }, }, } ``` ### Insert-Only Permission ```typescript submit_feedback: { table: 'main.feedback', roles: ['user'], description: 'Users can submit feedback but cannot read or modify existing entries', insert: { columns: ['message', 'category', 'rating'], validate: { rating: { $gte: 1, $lte: 5 }, category: { $in: ['bug', 'feature', 'general'] }, }, default: { status: 'pending' }, overwrite: { user_id: '$user.id', submitted_at: '$now', }, }, } ``` ### Delete with Restrictions ```typescript delete_draft_orders: { table: 'main.orders', roles: ['sales_rep', 'admin'], description: 'Users can only delete their own orders that are still in draft status', delete: { where: { customer_id: { $eq: '$user.customer_id' }, status: { $eq: 'draft' }, }, }, } ``` --- ## AuthProvider Interface "Build custom auth providers." URL: /docs/reference/server-api/auth-provider An auth provider handles JWT verification, user lookup, and session enrichment. superapp ships with `betterAuthProvider` for most use cases, but you can implement the `AuthProvider` interface for custom authentication. 
## TypeScript Interface

```typescript
interface AuthProvider {
  /**
   * Verify a JWT token and return the decoded payload.
   * Throw an error if the token is invalid or expired.
   */
  verifyToken(token: string): Promise<JWTPayload>

  /**
   * Look up a user record from the decoded JWT payload.
   * Return null if the user is not found (results in 401).
   */
  findUser(payload: JWTPayload, db: QueryBuilder): Promise<User | null>

  /**
   * Enrich the user with session data (roles, orgs, etc.).
   * The returned object becomes $user.* in permissions.
   * Optional — if omitted, the User object is used directly.
   */
  resolveSession?(user: User, db: QueryBuilder): Promise<EnrichedUser>

  /**
   * HTTP route handlers for auth endpoints (/auth/*).
   * Optional — if omitted, no auth routes are registered.
   */
  routes?: Record<string, RouteHandler>
}

interface JWTPayload {
  sub?: string
  iss?: string
  aud?: string
  exp?: number
  iat?: number
  [key: string]: any
}

interface User {
  id: string
  email?: string
  name?: string
  [key: string]: any
}

interface EnrichedUser extends User {
  roles?: string[]
  [key: string]: any
}

type RouteHandler = (ctx: RequestContext) => Promise<Response>

interface QueryBuilder {
  selectFrom(table: string): SelectQueryBuilder
  insertInto(table: string): InsertQueryBuilder
  updateTable(table: string): UpdateQueryBuilder
  deleteFrom(table: string): DeleteQueryBuilder
}
```

## `betterAuthProvider`

The built-in provider using [better-auth](https://www.better-auth.com). Handles JWT signing/verification, session management, and auth routes automatically.
```typescript
const auth = betterAuthProvider({
  secret: process.env.AUTH_SECRET!,
  userTable: {
    table: 'main.users',
    matchOn: { column: 'id', jwtField: 'id' },
    columns: ['id', 'email', 'name'],
  },
  session: {
    expiresIn: '7d',
    refreshWindow: '1d',
  },
  resolveSession: async (user, db) => {
    const memberships = await db
      .selectFrom('main.members')
      .select(['organization_id', 'role'])
      .where('user_id', '=', user.id)
      .where('status', '=', 'active')
      .execute()
    return {
      ...user,
      org_ids: memberships.map(m => m.organization_id),
      current_org_id: memberships[0]?.organization_id ?? null,
      roles: memberships.map(m => m.role),
    }
  },
})
```

### Options

| Option | Type | Required | Default | Description |
|---|---|---|---|---|
| `secret` | `string` | Yes | -- | Secret key for JWT signing and verification. Must be at least 32 characters. |
| `userTable` | `UserTableConfig` | Yes | -- | Configuration for the user table lookup. |
| `userTable.table` | `string` | Yes | -- | Fully qualified table name (e.g., `'main.users'`). |
| `userTable.matchOn` | `{ column: string; jwtField: string }` | Yes | -- | How to match a JWT payload to a user row. `column` is the database column, `jwtField` is the JWT claim. |
| `userTable.columns` | `string[]` | No | All columns | Which columns to select from the user table. |
| `session` | `SessionConfig` | No | See below | Session duration and refresh settings. |
| `session.expiresIn` | `string` | No | `'7d'` | Session duration. Accepts `'1h'`, `'7d'`, `'30d'`, etc. |
| `session.refreshWindow` | `string` | No | `'1d'` | Time before expiry when the session is automatically refreshed. |
| `resolveSession` | `(user: User, db: QueryBuilder) => Promise<EnrichedUser>` | No | Identity function | Enrich the user with additional data. The returned object becomes `$user.*` in permissions. |
| `emailVerification` | `boolean` | No | `false` | Require email verification before allowing sign-in. |
| `forgotPassword` | `boolean` | No | `true` | Enable the forgot password flow. |
## Custom Auth Provider

Implement the `AuthProvider` interface to integrate with any authentication system.

### Example: Firebase Auth

```typescript
// Assumes the Firebase Admin SDK (`admin`) is initialized elsewhere
const firebaseAuth: AuthProvider = {
  async verifyToken(token: string) {
    const decoded = await admin.auth().verifyIdToken(token)
    return {
      sub: decoded.uid,
      email: decoded.email,
      name: decoded.name,
    }
  },

  async findUser(payload, db) {
    const users = await db
      .selectFrom('main.users')
      .selectAll()
      .where('firebase_uid', '=', payload.sub)
      .execute()
    return users[0] ?? null
  },

  async resolveSession(user, db) {
    const memberships = await db
      .selectFrom('main.org_members')
      .select(['org_id', 'role'])
      .where('user_id', '=', user.id)
      .execute()
    return {
      ...user,
      org_ids: memberships.map(m => m.org_id),
      roles: memberships.map(m => m.role),
    }
  },
}

const engine = createEngine({
  connections: {
    main: process.env.PG_URL!,
  },
  auth: firebaseAuth,
  permissions: { /* ... */ },
})
```

### Example: Auth0

```typescript
// Assumes jwks-rsa (`jwksClient`) and jsonwebtoken (`jwt`) are imported
const client = jwksClient({
  jwksUri: 'https://your-tenant.auth0.com/.well-known/jwks.json',
})

const auth0Provider: AuthProvider = {
  async verifyToken(token: string) {
    const decoded = jwt.decode(token, { complete: true })
    if (!decoded) throw new Error('Invalid token')
    const key = await client.getSigningKey(decoded.header.kid)
    const verified = jwt.verify(token, key.getPublicKey(), {
      algorithms: ['RS256'],
      audience: 'https://api.myapp.com',
      issuer: 'https://your-tenant.auth0.com/',
    })
    return verified as Record<string, any>
  },

  async findUser(payload, db) {
    const users = await db
      .selectFrom('main.users')
      .selectAll()
      .where('auth0_id', '=', payload.sub)
      .execute()
    return users[0] ??
null }, async resolveSession(user, db) { const roles = await db .selectFrom('main.user_roles') .select(['role']) .where('user_id', '=', user.id) .execute() return { ...user, roles: roles.map(r => r.role), } }, } ``` ### Example: Simple API Key Auth ```typescript const apiKeyProvider: AuthProvider = { async verifyToken(token: string) { // Token is an API key, not a JWT // Return a payload-like object with the key return { sub: token, type: 'api_key' } }, async findUser(payload, db) { const keys = await db .selectFrom('main.api_keys') .select(['user_id', 'key_hash', 'scopes']) .where('key_hash', '=', hashApiKey(payload.sub)) .where('revoked', '=', false) .execute() if (!keys[0]) return null const users = await db .selectFrom('main.users') .selectAll() .where('id', '=', keys[0].user_id) .execute() return users[0] ?? null }, } ``` ## The `resolveSession` Return Value Whatever you return from `resolveSession` becomes available as `$user.*` in permission definitions. Design this object to include everything your permissions need. ```typescript // resolveSession returns: { id: 'usr_42', email: 'alice@example.com', name: 'Alice', customer_id: 'cust_002', org_ids: ['org_1', 'org_2'], current_org_id: 'org_1', team_ids: ['team_a', 'team_b'], roles: ['editor', 'team_lead'], } // Available in permissions as: where: { customer_id: { $eq: '$user.customer_id' } } // 'cust_002' where: { organization_id: { $in: '$user.org_ids' } } // ['org_1', 'org_2'] where: { team_id: { $in: '$user.team_ids' } } // ['team_a', 'team_b'] overwrite: { created_by: '$user.id' } // 'usr_42' overwrite: { organization_id: '$user.current_org_id' } // 'org_1' ``` --- ## drizzle() Options "Drizzle client configuration reference." URL: /docs/reference/client-api/create-client `drizzle()` creates a Drizzle ORM instance using [Drizzle Proxy](https://orm.drizzle.team/docs/connect-drizzle-proxy) that connects to your superapp backend. 
Drizzle builds parameterized SQL on the client, and the proxy driver sends SQL + params over HTTP. It returns a standard Drizzle `db` object with all query methods. ```typescript const db = drizzle({ connection: 'http://localhost:3001', token: session.token, schema, }) ``` ## Options | Option | Type | Required | Default | Description | |---|---|---|---|---| | `connection` | `string` | Yes | -- | Your superapp backend URL. The proxy driver sends SQL + params to this endpoint. | | `token` | `string` | Yes | -- | A valid JWT obtained from the auth client. Sent as `Authorization: Bearer <token>` on every request. | | `schema` | `object` | Yes | -- | Your generated Drizzle schema object containing table definitions and relations. | ```typescript const db = drizzle({ connection: 'https://api.myapp.com', token: session.token, schema, }) ``` ## Schema The schema is auto-generated from your database by the CLI. It contains standard Drizzle table definitions: ```typescript // schema.orders → pgTable('orders', { id, amount, status, ... }) // schema.customers → pgTable('customers', { id, name, email, ... }) // schema.ordersRelations → relations(orders, ({ one }) => ({ customer: one(customers, { ... }) })) ``` ### Generating the Schema Run the CLI against your running server: ```bash npx @superapp/backend generate --url http://localhost:3001 --output ./generated/schema.ts ``` This introspects your database via the `/schema` endpoint and outputs a Drizzle schema file with all table definitions, column types, and relations. 
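For orientation, the generated file is ordinary Drizzle code. The following is an illustrative sketch of what the `orders` and `customers` tables above might generate, assuming a Postgres connection — the real output depends entirely on your database's columns and types:

```typescript
// generated/schema.ts — illustrative sketch only, not actual generator output.
import { pgTable, text, numeric, timestamp } from 'drizzle-orm/pg-core'
import { relations } from 'drizzle-orm'

export const customers = pgTable('customers', {
  id: text('id').primaryKey(),
  name: text('name').notNull(),
  email: text('email').notNull(),
})

export const orders = pgTable('orders', {
  id: text('id').primaryKey(),
  amount: numeric('amount').notNull(),
  status: text('status').notNull(),
  customerId: text('customer_id').notNull(),
  createdAt: timestamp('created_at').notNull().defaultNow(),
})

// Relations power db.query.orders.findMany({ with: { customer: true } })
export const ordersRelations = relations(orders, ({ one }) => ({
  customer: one(customers, {
    fields: [orders.customerId],
    references: [customers.id],
  }),
}))
```

Treat the generated file as read-only: re-run the CLI after schema changes instead of editing it by hand.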
## Return Type `drizzle()` returns a standard Drizzle database instance: ```typescript // Select builder db.select().from(schema.orders).where(eq(schema.orders.status, 'active')) db.select({ id: schema.orders.id }).from(schema.orders) // Mutations db.insert(schema.orders).values({ amount: 100, status: 'pending' }) db.update(schema.orders).set({ status: 'shipped' }).where(eq(schema.orders.id, 'ord_1')) db.delete(schema.orders).where(eq(schema.orders.id, 'ord_1')) // Relational queries db.query.orders.findMany({ with: { customer: true }, limit: 50 }) db.query.orders.findFirst({ where: eq(schema.orders.id, 'ord_1') }) ``` ## Recommended Setup ### Shared Setup File ```typescript // lib/db.ts const SUPERAPP_URL = process.env.NEXT_PUBLIC_SUPERAPP_URL! export const authClient = createAuth(SUPERAPP_URL) export function createDb(token: string) { return drizzle({ connection: SUPERAPP_URL, token, schema, }) } ``` ### React Hook ```typescript // hooks/use-db.ts export function useDb() { const { data: session } = useSession() return useMemo( () => (session?.token ? createDb(session.token) : null), [session?.token], ) } ``` ### Usage in Components ```tsx export function OrdersList() { const db = useDb() async function loadOrders() { if (!db) return const orders = await db.select() .from(schema.orders) .where(eq(schema.orders.status, 'active')) .orderBy(desc(schema.orders.createdAt)) .limit(50) return orders } } ``` ## How It Works When you call a Drizzle query method, the proxy driver: 1. **Drizzle builds parameterized SQL** on the client (e.g., `SELECT ... FROM orders WHERE status = $1`). 2. **POSTs SQL + params** to the backend's `/data` endpoint with the JWT in the `Authorization: Bearer` header. 3. **The server validates** the token, applies row-level permissions to the SQL, and executes directly against the database. 4. **Returns typed results** as JSON, deserialized into the expected TypeScript type. 
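The four steps above amount to a single HTTP round trip per query. As a rough, unofficial sketch, an equivalent transport callback could be hand-rolled against the documented `/data` request and response shapes (`makeTransport` is a hypothetical helper for illustration, not part of the SDK):

```typescript
type ProxyMethod = 'all' | 'get' | 'values' | 'run'

// Hypothetical helper: builds the callback a Drizzle Proxy driver invokes
// for every query. It POSTs { sql, params, method } to /data with the JWT.
function makeTransport(baseUrl: string, token: string) {
  return async (sql: string, params: unknown[], method: ProxyMethod) => {
    const res = await fetch(`${baseUrl}/data`, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        Authorization: `Bearer ${token}`,
      },
      body: JSON.stringify({ sql, params, method }),
    })
    if (!res.ok) {
      // Error responses carry { error, message, correlation_id }
      const err = await res.json()
      throw new Error(`${err.error}: ${err.message}`)
    }
    // Success responses carry { rows: [...] }
    return (await res.json()) as { rows: unknown[] }
  }
}
```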
This is standard [Drizzle Proxy](https://orm.drizzle.team/docs/connect-drizzle-proxy) — the query syntax is identical to any other Drizzle setup, only the transport layer differs. --- ## Query Methods "All Drizzle query methods available through the superapp client." URL: /docs/reference/client-api/query-methods The superapp client exposes standard Drizzle ORM query methods. All methods return promises and queries are executed via [Drizzle Proxy](https://orm.drizzle.team/docs/connect-drizzle-proxy). ```typescript const db = drizzle({ connection, token, schema }) ``` ## Method Reference ### `db.select()` Select rows from a table with optional filtering, sorting, joins, and pagination. ```typescript // All columns const orders = await db.select().from(schema.orders) // Specific columns const orders = await db.select({ id: schema.orders.id, amount: schema.orders.amount, }).from(schema.orders) ``` **Chainable methods:** | Method | Description | |---|---| | `.from(table)` | Table to select from | | `.where(condition)` | Filter condition | | `.orderBy(column)` | Sort order (`asc()` or `desc()`) | | `.limit(n)` | Maximum rows to return | | `.offset(n)` | Rows to skip | | `.leftJoin(table, on)` | Left join another table | | `.innerJoin(table, on)` | Inner join another table | | `.groupBy(column)` | Group results | ```typescript const orders = await db.select({ id: schema.orders.id, amount: schema.orders.amount, customerName: schema.customers.name, }).from(schema.orders) .leftJoin(schema.customers, eq(schema.orders.customerId, schema.customers.id)) .where(eq(schema.orders.status, 'active')) .orderBy(desc(schema.orders.createdAt)) .limit(50) .offset(0) ``` ### `db.query.*.findMany()` Relational queries with eager loading. Similar to Prisma's `include`. 
```typescript const orders = await db.query.orders.findMany({ with: { customer: true }, where: eq(schema.orders.status, 'active'), orderBy: desc(schema.orders.createdAt), limit: 50, }) // { id, amount, status, customer: { id, name, email } }[] ``` | Option | Type | Description | |---|---|---| | `with` | `object` | Relations to eager load | | `where` | `SQL` | Filter condition | | `orderBy` | `SQL` | Sort order | | `limit` | `number` | Maximum rows | | `offset` | `number` | Rows to skip | | `columns` | `object` | Columns to include/exclude | ### `db.query.*.findFirst()` Find a single record. Returns `undefined` if no match. ```typescript const order = await db.query.orders.findFirst({ where: eq(schema.orders.id, 'ord_abc123'), with: { customer: true }, }) // { id, amount, status, customer: { ... } } | undefined ``` ### `db.insert()` Insert one or more rows. ```typescript // Single row const [order] = await db.insert(schema.orders).values({ amount: 250, status: 'pending', customerId: 'cust_abc123', }).returning() // Multiple rows await db.insert(schema.orders).values([ { amount: 100, status: 'pending', customerId: 'cust_1' }, { amount: 200, status: 'pending', customerId: 'cust_2' }, ]) // Upsert await db.insert(schema.customers) .values({ id: 'cust_1', name: 'Acme', email: 'a@acme.com' }) .onConflictDoNothing() ``` **Chainable methods:** | Method | Description | |---|---| | `.values(data)` | Row data to insert | | `.returning()` | Return inserted rows | | `.onConflictDoNothing()` | Skip on conflict | | `.onConflictDoUpdate({ target, set })` | Update on conflict | ### `db.update()` Update rows matching a condition. 
```typescript const [updated] = await db.update(schema.orders) .set({ status: 'shipped' }) .where(eq(schema.orders.id, 'ord_abc123')) .returning() ``` **Chainable methods:** | Method | Description | |---|---| | `.set(data)` | New values to set | | `.where(condition)` | Filter condition | | `.returning()` | Return updated rows | ### `db.delete()` Delete rows matching a condition. ```typescript await db.delete(schema.orders) .where(eq(schema.orders.id, 'ord_abc123')) ``` **Chainable methods:** | Method | Description | |---|---| | `.where(condition)` | Filter condition | | `.returning()` | Return deleted rows | ### Aggregations Use Drizzle's aggregation functions with `db.select()`. ```typescript // Count const [{ total }] = await db.select({ total: count() }).from(schema.orders) // Sum with filter const [{ revenue }] = await db.select({ revenue: sum(schema.orders.amount), }).from(schema.orders) .where(eq(schema.orders.status, 'active')) // Group by const byStatus = await db.select({ status: schema.orders.status, count: count(), total: sum(schema.orders.amount), }).from(schema.orders) .groupBy(schema.orders.status) ``` ## Error Handling All methods throw on server errors. Catch errors to handle permission denials, validation failures, and network issues. ```typescript try { await db.insert(schema.orders).values({ amount: -100, status: 'invalid', }) } catch (error) { // error.message: "You do not have permission to perform this action." } ``` ## Permission Notes - Permission filters are applied transparently on all queries - Column restrictions are enforced — selecting restricted columns returns an error - Server-side presets (like `createdBy`, `organizationId`) are injected automatically on inserts - The count and aggregation results reflect only rows the user is permitted to see --- ## Filter Operators "Complete Drizzle filter operator reference." 
URL: /docs/reference/client-api/filter-operators Drizzle provides filter operators as functions imported from `drizzle-orm`. They're used in `.where()` clauses and relational query conditions. ```typescript const orders = await db.select() .from(schema.orders) .where(and( eq(schema.orders.status, 'active'), gt(schema.orders.amount, 100) )) ``` ## Comparison Operators | Function | Description | SQL Equivalent | Example | |---|---|---|---| | `eq(col, val)` | Equal to | `= value` | `eq(schema.orders.status, 'active')` | | `ne(col, val)` | Not equal to | `!= value` | `ne(schema.orders.status, 'cancelled')` | | `gt(col, val)` | Greater than | `> value` | `gt(schema.orders.amount, 100)` | | `gte(col, val)` | Greater than or equal | `>= value` | `gte(schema.orders.amount, 100)` | | `lt(col, val)` | Less than | `< value` | `lt(schema.orders.amount, 1000)` | | `lte(col, val)` | Less than or equal | `<= value` | `lte(schema.orders.amount, 1000)` | | `between(col, a, b)` | Between two values | `BETWEEN a AND b` | `between(schema.orders.amount, 100, 1000)` | ```typescript // Range with individual operators db.select().from(schema.orders).where(and( gte(schema.orders.amount, 10), lte(schema.orders.amount, 100) )) // Same range with between db.select().from(schema.orders).where( between(schema.orders.amount, 10, 100) ) // Date comparison db.select().from(schema.orders).where( gte(schema.orders.createdAt, new Date('2025-01-01')) ) ``` ## Set Operators | Function | Description | SQL Equivalent | Example | |---|---|---|---| | `inArray(col, vals)` | Value is in array | `IN (values)` | `inArray(schema.orders.status, ['active', 'pending'])` | | `notInArray(col, vals)` | Value is not in array | `NOT IN (values)` | `notInArray(schema.orders.status, ['cancelled'])` | ```typescript db.select().from(schema.orders).where( inArray(schema.orders.status, ['active', 'pending', 'processing']) ) db.select().from(schema.orders).where( notInArray(schema.orders.status, ['cancelled', 'archived']) ) 
``` ## Pattern Matching | Function | Description | SQL Equivalent | Example | |---|---|---|---| | `like(col, pattern)` | Case-sensitive pattern | `LIKE pattern` | `like(schema.customers.name, 'Acme%')` | | `ilike(col, pattern)` | Case-insensitive pattern | `ILIKE pattern` | `ilike(schema.customers.email, '%@example.com')` | Pattern characters: `%` matches any sequence, `_` matches one character. ```typescript db.select().from(schema.customers).where( like(schema.customers.name, 'Acme%') ) db.select().from(schema.customers).where( ilike(schema.customers.email, '%@example.com') ) ``` ## Null Checks | Function | Description | SQL Equivalent | |---|---|---| | `isNull(col)` | Column is null | `IS NULL` | | `isNotNull(col)` | Column is not null | `IS NOT NULL` | ```typescript // Active records (not soft-deleted) db.select().from(schema.orders).where(isNull(schema.orders.deletedAt)) // Assigned records db.select().from(schema.orders).where(isNotNull(schema.orders.assignedTo)) ``` ## Logical Operators | Function | Description | SQL Equivalent | |---|---|---| | `and(...conds)` | All conditions must match | `cond1 AND cond2` | | `or(...conds)` | At least one must match | `cond1 OR cond2` | | `not(cond)` | Negate a condition | `NOT (condition)` | ### and() ```typescript db.select().from(schema.orders).where(and( eq(schema.orders.status, 'active'), gte(schema.orders.amount, 100) )) ``` ### or() ```typescript db.select().from(schema.orders).where(or( eq(schema.orders.status, 'active'), eq(schema.orders.status, 'pending') )) ``` ### not() ```typescript db.select().from(schema.orders).where( not(inArray(schema.orders.status, ['cancelled', 'archived'])) ) ``` ## Complex Combinations ```typescript // Active or shipped, high value, not deleted db.select().from(schema.orders).where(and( or( eq(schema.orders.status, 'active'), eq(schema.orders.status, 'shipped') ), gte(schema.orders.amount, 1000), isNull(schema.orders.deletedAt) )) ``` ## Server-Side Permission Operators The backend 
permission system uses a separate MongoDB-style operator syntax (not Drizzle functions). See [Permission Operators](/docs/backend/permissions/operators) for the server-side reference. ## Quick Reference | Function | Import | Usage | |---|---|---| | `eq` | `drizzle-orm` | `eq(col, value)` | | `ne` | `drizzle-orm` | `ne(col, value)` | | `gt` | `drizzle-orm` | `gt(col, value)` | | `gte` | `drizzle-orm` | `gte(col, value)` | | `lt` | `drizzle-orm` | `lt(col, value)` | | `lte` | `drizzle-orm` | `lte(col, value)` | | `between` | `drizzle-orm` | `between(col, min, max)` | | `inArray` | `drizzle-orm` | `inArray(col, [values])` | | `notInArray` | `drizzle-orm` | `notInArray(col, [values])` | | `like` | `drizzle-orm` | `like(col, pattern)` | | `ilike` | `drizzle-orm` | `ilike(col, pattern)` | | `isNull` | `drizzle-orm` | `isNull(col)` | | `isNotNull` | `drizzle-orm` | `isNotNull(col)` | | `and` | `drizzle-orm` | `and(...conditions)` | | `or` | `drizzle-orm` | `or(...conditions)` | | `not` | `drizzle-orm` | `not(condition)` | --- ## "POST /data" "Query endpoint request and response format." URL: /docs/reference/http-api/data-endpoint The `/data` endpoint is the single entry point for all data operations. It receives parameterized SQL + params from the [Drizzle Proxy](https://orm.drizzle.team/docs/connect-drizzle-proxy) client and returns query results. Every request must include a valid JWT. ## Request Format The client sends parameterized SQL built by Drizzle ORM on the client side: ``` POST /data HTTP/1.1 Content-Type: application/json Authorization: Bearer { "sql": "SELECT \"id\", \"amount\", \"status\" FROM \"main\".\"orders\" WHERE \"status\" = $1 ORDER BY \"created_at\" DESC LIMIT $2", "params": ["active", 50], "method": "all" } ``` ### Request Body Fields | Field | Type | Required | Description | |---|---|---|---| | `sql` | `string` | Yes | Parameterized SQL query built by Drizzle ORM. Uses `$1`, `$2`, etc. as placeholders. 
| | `params` | `any[]` | Yes | Parameter values corresponding to the SQL placeholders. | | `method` | `string` | Yes | Query execution method. One of: `all` (returns rows as arrays), `get` (single row), `values` (raw values), or `run` (execute without returning rows). | ## Request Examples ### SELECT (method: "all") ```json { "sql": "SELECT \"id\", \"amount\", \"status\", \"created_at\" FROM \"main\".\"orders\" WHERE \"status\" = $1 AND \"amount\" >= $2 ORDER BY \"created_at\" DESC LIMIT $3 OFFSET $4", "params": ["active", 100, 20, 0], "method": "all" } ``` ### SELECT single row (method: "get") ```json { "sql": "SELECT \"id\", \"amount\", \"status\" FROM \"main\".\"orders\" WHERE \"id\" = $1 LIMIT $2", "params": ["ord_abc123", 1], "method": "get" } ``` ### INSERT (method: "run") ```json { "sql": "INSERT INTO \"main\".\"orders\" (\"amount\", \"status\", \"notes\") VALUES ($1, $2, $3) RETURNING \"id\", \"amount\", \"status\", \"created_at\"", "params": [250, "draft", "New order"], "method": "run" } ``` ### UPDATE (method: "run") ```json { "sql": "UPDATE \"main\".\"orders\" SET \"status\" = $1, \"notes\" = $2 WHERE \"id\" = $3 RETURNING \"id\", \"status\", \"updated_at\"", "params": ["active", "Approved", "ord_abc123"], "method": "run" } ``` ### DELETE (method: "run") ```json { "sql": "DELETE FROM \"main\".\"orders\" WHERE \"id\" = $1", "params": ["ord_abc123"], "method": "run" } ``` ### COUNT (method: "get") ```json { "sql": "SELECT count(*) AS \"count\" FROM \"main\".\"orders\" WHERE \"status\" = $1", "params": ["active"], "method": "get" } ``` ### Aggregation (method: "all") ```json { "sql": "SELECT \"status\", sum(\"amount\") AS \"total\", count(*) AS \"count\", avg(\"amount\") AS \"average\" FROM \"main\".\"orders\" WHERE \"status\" = $1 GROUP BY \"status\"", "params": ["active"], "method": "all" } ``` ## Response Format The response follows the [Drizzle Proxy protocol](https://orm.drizzle.team/docs/connect-drizzle-proxy). 
The format depends on the `method` field in the request. ### method: "all" Returns rows as arrays of values: ```json { "rows": [ ["ord_001", 250, "active", "2025-03-15T10:00:00Z"], ["ord_002", 100, "active", "2025-03-14T09:00:00Z"] ] } ``` ### method: "get" Returns a single row as an array: ```json { "rows": ["ord_001", 250, "active"] } ``` ### method: "run" Returns an empty result (used for writes): ```json { "rows": [] } ``` Drizzle ORM on the client maps these raw arrays back into typed objects automatically. ## Authentication Every request to `/data` must include a valid JWT in the `Authorization` header: ``` Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9... ``` If the token is missing, invalid, or expired, the server responds with `401 Unauthorized`: ```json { "error": "UNAUTHORIZED", "message": "Invalid or expired token.", "correlation_id": "req_xyz789" } ``` ## Error Format All errors follow the same shape: ```typescript interface ErrorResponse { /** Error code for programmatic handling */ error: string /** Human-readable error message */ message: string /** Unique request ID for log correlation */ correlation_id: string /** Additional details (development mode only) */ details?: Record<string, unknown> } ``` ### Error Codes | HTTP Status | Error Code | Description | |---|---|---| | `400` | `BAD_REQUEST` | Invalid request body, missing required fields, or malformed SQL. | | `401` | `UNAUTHORIZED` | Missing, invalid, or expired JWT token. | | `403` | `PERMISSION_DENIED` | The user does not have permission for this table/operation. | | `404` | `TABLE_NOT_FOUND` | The specified table does not exist in any connection. | | `408` | `QUERY_TIMEOUT` | The query exceeded the configured timeout. | | `413` | `PAYLOAD_TOO_LARGE` | The request body exceeds `limits.maxRequestBodySize`. | | `422` | `VALIDATION_ERROR` | Write data failed the permission's `check` validation. | | `429` | `RATE_LIMITED` | The user or IP has exceeded the rate limit. 
| | `500` | `INTERNAL_ERROR` | An unexpected server error occurred. | ### Error Example ```json { "error": "PERMISSION_DENIED", "message": "You do not have permission to perform this action.", "correlation_id": "req_abc123" } ``` In development mode (`NODE_ENV !== 'production'`), the response includes additional details: ```json { "error": "PERMISSION_DENIED", "message": "No SELECT permission on table 'main.orders' for role 'viewer'.", "correlation_id": "req_abc123", "details": { "table": "main.orders", "operation": "select", "role": "viewer", "evaluated_permissions": ["view_own_orders"] } } ``` ## cURL Examples ```bash # SELECT query curl -X POST http://localhost:3001/data \ -H "Content-Type: application/json" \ -H "Authorization: Bearer $TOKEN" \ -d '{ "sql": "SELECT \"id\", \"amount\", \"status\" FROM \"main\".\"orders\" LIMIT $1", "params": [10], "method": "all" }' # INSERT query curl -X POST http://localhost:3001/data \ -H "Content-Type: application/json" \ -H "Authorization: Bearer $TOKEN" \ -d '{ "sql": "INSERT INTO \"main\".\"orders\" (\"amount\", \"status\") VALUES ($1, $2) RETURNING *", "params": [250, "draft"], "method": "run" }' # COUNT query curl -X POST http://localhost:3001/data \ -H "Content-Type: application/json" \ -H "Authorization: Bearer $TOKEN" \ -d '{ "sql": "SELECT count(*) AS \"count\" FROM \"main\".\"orders\" WHERE \"status\" = $1", "params": ["active"], "method": "get" }' ``` --- ## Auth Endpoints "Authentication API routes." URL: /docs/reference/http-api/auth-endpoints The auth endpoints handle user authentication, session management, and account recovery. They are served under the `/auth` path when an auth provider is configured. All auth endpoints accept and return JSON. No JWT is required for authentication endpoints (they are how you obtain one). 
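The usual flow is to obtain a JWT from these endpoints and pass it as `token` to the data client. As a sketch, signing in without the SDK is a single JSON POST (`signInForToken` is a hypothetical helper based on the response shape documented below):

```typescript
// Hypothetical helper: sign in with email/password and return the session JWT,
// which can then be passed as `token` to drizzle().
async function signInForToken(baseUrl: string, email: string, password: string) {
  const res = await fetch(`${baseUrl}/auth/sign-in`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ email, password }),
  })
  if (!res.ok) throw new Error(`Sign-in failed with status ${res.status}`)
  // Response shape: { user: {...}, session: { token, expiresAt } }
  const { session } = await res.json()
  return session.token as string
}
```

In application code, prefer the `@superapp/auth` SDK, which also handles session refresh.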
## Endpoints Overview | Method | Path | Description | |---|---|---| | `POST` | `/auth/sign-in` | Authenticate with email and password | | `POST` | `/auth/sign-up` | Create a new user account | | `POST` | `/auth/sign-out` | Invalidate the current session | | `GET` | `/auth/session` | Get the current session and user info | | `POST` | `/auth/forgot-password` | Request a password reset email | | `POST` | `/auth/reset-password` | Reset password with a token | | `POST` | `/auth/verify-email` | Verify an email address with a token | ## POST /auth/sign-in Authenticate a user with email and password. Returns a session token (JWT) on success. ### Request ```json { "email": "alice@example.com", "password": "s3cure-passw0rd" } ``` | Field | Type | Required | Description | |---|---|---|---| | `email` | `string` | Yes | The user's email address. | | `password` | `string` | Yes | The user's password. | ### Response (200 OK) ```json { "user": { "id": "usr_42", "email": "alice@example.com", "name": "Alice", "emailVerified": true, "createdAt": "2025-01-15T10:00:00Z" }, "session": { "token": "eyJhbGciOiJSUzI1NiIs...", "expiresAt": "2025-03-23T10:00:00Z" } } ``` ### Errors | Status | Error Code | Condition | |---|---|---| | `401` | `INVALID_CREDENTIALS` | Email or password is incorrect. | | `403` | `EMAIL_NOT_VERIFIED` | Email verification is required but not completed (when `emailVerification: true`). | | `429` | `RATE_LIMITED` | Too many sign-in attempts. | ## POST /auth/sign-up Create a new user account. Returns the created user and a session token. ### Request ```json { "email": "bob@example.com", "password": "s3cure-passw0rd", "name": "Bob" } ``` | Field | Type | Required | Description | |---|---|---|---| | `email` | `string` | Yes | Email address for the new account. Must be unique. | | `password` | `string` | Yes | Password. Minimum 8 characters. | | `name` | `string` | No | Display name for the user. 
| ### Response (201 Created) ```json { "user": { "id": "usr_43", "email": "bob@example.com", "name": "Bob", "emailVerified": false, "createdAt": "2025-03-16T12:00:00Z" }, "session": { "token": "eyJhbGciOiJSUzI1NiIs...", "expiresAt": "2025-03-23T12:00:00Z" } } ``` When `emailVerification` is enabled, the session token is still returned but data access may be restricted until the email is verified. ### Errors | Status | Error Code | Condition | |---|---|---| | `400` | `VALIDATION_ERROR` | Missing required fields or password too short. | | `409` | `EMAIL_EXISTS` | An account with this email already exists. | | `429` | `RATE_LIMITED` | Too many sign-up attempts. | ## POST /auth/sign-out Invalidate the current session. Requires an active JWT. ### Request ``` POST /auth/sign-out HTTP/1.1 Authorization: Bearer <token> ``` No request body is needed. ### Response (200 OK) ```json { "success": true } ``` After sign-out, the JWT is invalidated server-side. Subsequent requests using the same token receive `401 Unauthorized`. ### Errors | Status | Error Code | Condition | |---|---|---| | `401` | `UNAUTHORIZED` | No valid session token provided. | ## GET /auth/session Retrieve the current session and user information. Requires an active JWT. ### Request ``` GET /auth/session HTTP/1.1 Authorization: Bearer <token> ``` ### Response (200 OK) ```json { "user": { "id": "usr_42", "email": "alice@example.com", "name": "Alice", "emailVerified": true, "createdAt": "2025-01-15T10:00:00Z" }, "session": { "token": "eyJhbGciOiJSUzI1NiIs...", "expiresAt": "2025-03-23T10:00:00Z" } } ``` If the session is within the `refreshWindow`, the response includes a refreshed token with an extended expiration. ### Errors | Status | Error Code | Condition | |---|---|---| | `401` | `UNAUTHORIZED` | Token is missing, invalid, or expired. | ## POST /auth/forgot-password Request a password reset. Sends an email with a reset token. Always returns `200 OK` regardless of whether the email exists (to prevent email enumeration). 
### Request ```json { "email": "alice@example.com" } ``` | Field | Type | Required | Description | |---|---|---|---| | `email` | `string` | Yes | Email address of the account to reset. | ### Response (200 OK) ```json { "success": true, "message": "If an account with that email exists, a reset link has been sent." } ``` ### Errors | Status | Error Code | Condition | |---|---|---| | `429` | `RATE_LIMITED` | Too many reset requests. | ## POST /auth/reset-password Reset a password using the token received via email. ### Request ```json { "token": "rst_abc123def456", "password": "new-s3cure-passw0rd" } ``` | Field | Type | Required | Description | |---|---|---|---| | `token` | `string` | Yes | The reset token from the email link. | | `password` | `string` | Yes | The new password. Minimum 8 characters. | ### Response (200 OK) ```json { "success": true, "message": "Password has been reset. Please sign in with your new password." } ``` ### Errors | Status | Error Code | Condition | |---|---|---| | `400` | `INVALID_TOKEN` | The reset token is invalid or expired. | | `400` | `VALIDATION_ERROR` | New password does not meet requirements. | ## POST /auth/verify-email Verify an email address using the token sent during sign-up. ### Request ```json { "token": "vrf_abc123def456" } ``` | Field | Type | Required | Description | |---|---|---|---| | `token` | `string` | Yes | The verification token from the email link. | ### Response (200 OK) ```json { "success": true, "message": "Email verified successfully." } ``` ### Errors | Status | Error Code | Condition | |---|---|---| | `400` | `INVALID_TOKEN` | The verification token is invalid or expired. 
| ## Client SDK Usage The `@superapp/auth` client SDK wraps these endpoints for convenience: ```typescript const auth = createAuth('http://localhost:3001') // Sign up const { user, session } = await auth.signUp({ email: 'bob@example.com', password: 's3cure-passw0rd', name: 'Bob', }) // Sign in const { user, session } = await auth.signIn({ email: 'alice@example.com', password: 's3cure-passw0rd', }) // Get current session const { user, session } = await auth.getSession() // Sign out await auth.signOut() // Forgot password await auth.forgotPassword({ email: 'alice@example.com' }) // Reset password await auth.resetPassword({ token: 'rst_abc123def456', password: 'new-s3cure-passw0rd', }) // Verify email await auth.verifyEmail({ token: 'vrf_abc123def456' }) ``` ### React Hook ```tsx function MyComponent() { const { data: session, isPending, error } = useSession() if (isPending) return <p>Loading...</p> if (!session) return <p>Not signed in</p> return <p>Welcome, {session.user.name}</p> } ``` ## cURL Examples ```bash # Sign in curl -X POST http://localhost:3001/auth/sign-in \ -H "Content-Type: application/json" \ -d '{ "email": "alice@example.com", "password": "s3cure-passw0rd" }' # Sign up curl -X POST http://localhost:3001/auth/sign-up \ -H "Content-Type: application/json" \ -d '{ "email": "bob@example.com", "password": "s3cure-passw0rd", "name": "Bob" }' # Get session curl http://localhost:3001/auth/session \ -H "Authorization: Bearer $TOKEN" # Sign out curl -X POST http://localhost:3001/auth/sign-out \ -H "Authorization: Bearer $TOKEN" # Forgot password curl -X POST http://localhost:3001/auth/forgot-password \ -H "Content-Type: application/json" \ -d '{ "email": "alice@example.com" }' ``` --- ## "GET /schema" "Schema introspection endpoint." URL: /docs/reference/http-api/schema-endpoint The `/schema` endpoint returns a complete description of all connected databases, tables, columns, and relationships. 
It is used by the CLI to generate TypeScript types and can be consumed by any tool that needs schema information. ## Enabling the Endpoint The schema endpoint is disabled by default. Enable it in your `createEngine` configuration: ```typescript const engine = createEngine({ connections: { main: process.env.PG_URL!, }, schemaEndpoint: true, // enable with defaults }) ``` To require authentication: ```typescript const engine = createEngine({ connections: { main: process.env.PG_URL!, }, schemaEndpoint: { enabled: true, token: process.env.SCHEMA_API_TOKEN, // require this token for access }, }) ``` ### Configuration Options | Option | Type | Default | Description | |---|---|---|---| | `enabled` | `boolean` | `false` | Enable the `/schema` endpoint. | | `token` | `string` | `undefined` | If set, requests must include this token in the `Authorization` header. | ## Authentication When a `token` is configured, the request must include it as a Bearer token: ``` GET /schema HTTP/1.1 Authorization: Bearer <token> ``` This token is separate from user JWTs. It is a static secret used for schema introspection only. Store it as an environment variable. When no `token` is configured, the endpoint is publicly accessible. This is acceptable for development but not recommended for production. ## Request ```bash curl http://localhost:3001/schema \ -H "Authorization: Bearer $SCHEMA_API_TOKEN" ``` No request body or query parameters are needed. ## Response Format The response is a JSON object describing all connections, their tables, columns, relationships, and actions. 
```typescript interface SchemaResponse { /** All configured database connections */ connections: { [connectionName: string]: { /** Database type */ type: 'postgres' | 'mysql' | 'sqlite' | 'csv' /** Tables in this connection */ tables: { [tableName: string]: { /** Table columns */ columns: { [columnName: string]: { /** Column data type */ type: string /** Whether the column is nullable */ nullable: boolean /** Whether the column has a default value */ hasDefault: boolean /** Whether the column is a primary key */ primaryKey: boolean } } /** Relationships to other tables */ relationships: { [relationName: string]: { /** Related table in connection.table format */ table: string /** Relationship type */ type: 'one-to-one' | 'one-to-many' | 'many-to-one' | 'many-to-many' /** Local column(s) for the join */ from: string | string[] /** Remote column(s) for the join */ to: string | string[] } } } } } } /** Typed server-side actions (derived from Zod schemas) */ actions?: { [actionName: string]: { /** JSON Schema for the action's input */ input: JSONSchema /** JSON Schema for the action's output (if defined) */ output?: JSONSchema } } } ``` ### Example Response ```json { "connections": { "main": { "type": "postgres", "tables": { "users": { "columns": { "id": { "type": "text", "nullable": false, "hasDefault": true, "primaryKey": true }, "email": { "type": "text", "nullable": false, "hasDefault": false, "primaryKey": false }, "name": { "type": "text", "nullable": true, "hasDefault": false, "primaryKey": false }, "created_at": { "type": "timestamp", "nullable": false, "hasDefault": true, "primaryKey": false } }, "relationships": {} }, "orders": { "columns": { "id": { "type": "text", "nullable": false, "hasDefault": true, "primaryKey": true }, "amount": { "type": "numeric", "nullable": false, "hasDefault": false, "primaryKey": false }, "status": { "type": "text", "nullable": false, "hasDefault": true, "primaryKey": false }, "customer_id": { "type": "text", "nullable": false, 
"hasDefault": false, "primaryKey": false }, "created_at": { "type": "timestamp", "nullable": false, "hasDefault": true, "primaryKey": false } }, "relationships": { "customer": { "table": "main.users", "type": "many-to-one", "from": "customer_id", "to": "id" } } } } }, "warehouse": { "type": "mysql", "tables": { "events": { "columns": { "id": { "type": "bigint", "nullable": false, "hasDefault": true, "primaryKey": true }, "event_type": { "type": "varchar", "nullable": false, "hasDefault": false, "primaryKey": false }, "order_id": { "type": "varchar", "nullable": true, "hasDefault": false, "primaryKey": false }, "created_at": { "type": "datetime", "nullable": false, "hasDefault": true, "primaryKey": false } }, "relationships": { "order": { "table": "main.orders", "type": "many-to-one", "from": "order_id", "to": "id" } } } } } }, "actions": { "incrementStock": { "input": { "type": "object", "properties": { "productId": { "type": "string" }, "amount": { "type": "number" } }, "required": ["productId", "amount"] }, "output": { "type": "object", "properties": { "id": { "type": "string" }, "stock": { "type": "number" } }, "required": ["id", "stock"] } } } } ``` ## Type Generation The primary use case for `/schema` is generating TypeScript types for the client SDK: ```bash npx @superapp/backend generate \ --url http://localhost:3001 \ --output ./generated/schema.ts ``` This command: 1. Sends a `GET /schema` request to your running server. 2. Parses the response into a TypeScript type definition. 3. Writes the type file to the specified output path. 
The generated file includes both table types and action types: ```typescript // generated/schema.ts — auto-generated, do not edit export interface SuperAppSchema { main: { users: { id: string email: string name: string | null created_at: string } orders: { id: string amount: number status: string customer_id: string created_at: string } } warehouse: { events: { id: number event_type: string order_id: string | null created_at: string } } } export interface SuperAppActions { incrementStock: { input: { productId: string; amount: number } output: { id: string; stock: number } } } ``` Pass these types for full autocomplete on queries and actions: ```typescript const db = createClient({ url, userToken }) // Table queries — autocomplete on columns db.main.orders.findMany({ select: ['id', 'amount'] }) // Actions — autocomplete on name, input, and output const result = await db.action('incrementStock', { productId: 'prod_123', amount: 5 }) // result: { id: string; stock: number } ``` ## Errors | Status | Error Code | Condition | |---|---|---| | `401` | `UNAUTHORIZED` | A `token` is configured but the request does not include it or the token is wrong. | | `404` | `NOT_FOUND` | The schema endpoint is not enabled (`schemaEndpoint: false` or omitted). | ## Security Considerations - In production, always set a `token` to prevent unauthorized schema introspection. - The schema endpoint exposes table names, column names, and column types. This is useful for tooling but should not be public. - The schema endpoint does not expose data -- only structure. But column names alone can reveal sensitive information about your data model. - Consider restricting access to CI/CD environments and developer machines only. --- ## create-app Scaffold a new project. URL: /docs/reference/cli/create-app Generate a complete project with a configured backend, Next.js frontend, and example permissions in one command. 
```bash
npx @superapp/backend create-app my-app
```

## What It Creates

```
my-app/
├── apps/
│   ├── backend/
│   │   ├── src/
│   │   │   └── server.ts          # Engine config with example connections
│   │   ├── package.json
│   │   └── tsconfig.json
│   └── web/
│       ├── app/
│       │   ├── layout.tsx         # AuthProvider wrapper
│       │   ├── page.tsx           # Landing page
│       │   ├── auth/
│       │   │   └── [[...slug]]/
│       │   │       └── page.tsx   # Sign in / sign up
│       │   └── dashboard/
│       │       └── page.tsx       # Example data page
│       ├── hooks/
│       │   └── use-db.ts          # Typed database hook
│       ├── lib/
│       │   └── superapp.ts        # Client SDK setup
│       ├── generated/
│       │   └── schema.ts          # Auto-generated types
│       ├── package.json
│       └── tsconfig.json
├── .env.example                   # Environment variables template
├── package.json                   # Workspace root
└── turbo.json                     # Turborepo config
```

## Generated Backend

The scaffolded `server.ts` includes a working engine with SQLite for local development:

```typescript
const engine = createEngine({
  connections: {
    main: './data/dev.db',
  },
  auth: betterAuthProvider({
    secret: process.env.AUTH_SECRET!,
    userTable: {
      table: 'main.users',
      matchOn: { column: 'id', jwtField: 'id' },
      columns: ['id', 'email', 'name'],
    },
  }),
  permissions: {
    read_orders: {
      table: 'main.orders',
      roles: ['viewer'],
      select: {
        columns: ['id', 'amount', 'status', 'customer_id', 'created_at'],
      },
    },
  },
})

const app = new Hono()
app.route('/', createHonoMiddleware(engine))
serve({ fetch: app.fetch, port: 3001 })
```

## Running the Project

```bash
cd my-app
cp .env.example .env
npm install
npm run dev
```

This starts both the backend (port 3001) and the Next.js frontend (port 3000) via Turborepo. Open `http://localhost:3000` to see the app. Sign up, sign in, and visit the dashboard page, which queries your database immediately.

## Switching to Postgres

When you are ready to move off SQLite, update the connection string:

```typescript
const engine = createEngine({
  connections: {
    main: process.env.PG_URL!,
  },
  // ...
  // rest of config
})
```

Add `PG_URL` to your `.env` file and regenerate types:

```bash
npx @superapp/backend generate --url http://localhost:3001 --token $SCHEMA_TOKEN
```

## Environment Variables

The `.env.example` file includes all required variables:

```bash
# Backend
AUTH_SECRET=your-auth-secret-here
SUPERAPP_MASTER_KEY=your-master-key-here

# Frontend
NEXT_PUBLIC_SUPERAPP_URL=http://localhost:3001

# Database (uncomment when switching to Postgres)
# PG_URL=postgres://user:password@localhost:5432/mydb
```

---

## generate

Generate TypeScript types from your engine.

URL: /docs/reference/cli/generate

Introspect your running engine and generate a fully typed schema file for the client SDK.

```bash
npx @superapp/backend generate --url http://localhost:3001 --token $SCHEMA_TOKEN
```

## What It Generates

The command connects to your engine's `/schema` endpoint, reads every connection, table, column, and action, and writes a `schema.ts` file:

```typescript
// generated/schema.ts — auto-generated, do not edit
export interface SuperAppSchema {
  main: {
    orders: {
      id: number
      amount: number
      status: string
      customer_id: number
      created_at: string
    }
    customers: {
      id: number
      name: string
      email: string
      organization_id: number
      created_at: string
    }
  }
}

export interface SuperAppActions {
  incrementStock: {
    input: { productId: string; amount: number }
    output: { id: string; stock: number }
  }
  revenueReport: {
    input: { startDate: string; endDate: string }
    output: { month: string; totalRevenue: number; orderCount: number; avgOrderValue: number }[]
  }
}
```

These types are passed to `createClient()` for full autocomplete on table names, column names, query results, action names, and action input/output.

## Options

| Flag | Default | Description |
|---|---|---|
| `--url` | `http://localhost:3001` | Engine URL |
| `--token` | — | Schema token for authentication |
| `--output` | `./generated/schema.ts` | Output file path |

## Schema Tokens

Schema tokens are separate from user JWTs.
They grant read-only access to the `/schema` endpoint and nothing else.

Create a schema token in the admin UI:

1. Open `http://localhost:3001/admin`
2. Navigate to **Settings > Schema Tokens**
3. Click **Create Token**
4. Copy the token -- it is displayed once and cannot be retrieved later

Schema tokens do not expire by default. Rotate them periodically and revoke unused tokens from the admin UI.

## CI/CD Usage

Add the generate step to your CI pipeline so types stay in sync with your database:

```bash
# In your CI script
npx @superapp/backend generate \
  --url $SUPERAPP_URL \
  --token $SCHEMA_TOKEN \
  --output ./generated/schema.ts
```

## Committing to Git

The generated file should be committed to your repository. It is the source of truth for your frontend types and enables type checking without a running backend:

```bash
npx @superapp/backend generate --url http://localhost:3001 --token $SCHEMA_TOKEN
git add generated/schema.ts
git commit -m "chore: update superapp schema types"
```

Add `generated/schema.ts` to your CI checks so pull requests that change the database schema also update the types.

## Example Workflow

```bash
# 1. Start the engine
npm run dev:backend

# 2. Generate types
npx @superapp/backend generate --url http://localhost:3001 --token sk_schema_abc123

# 3. Use in your frontend
```

```typescript
const db = createClient({
  url: 'http://localhost:3001/data',
  userToken: token,
})

// Full autocomplete on queries and actions
db.main.orders.findMany(...)
const result = await db.action('incrementStock', { productId: 'prod_123', amount: 5 })
```

---

## llms.txt

Machine-readable documentation for AI agents and LLMs.

URL: /docs/llms-txt

Feed these files to AI agents so they understand the superapp documentation without crawling the site.
## Files | File | What it contains | |---|---| | [`/llms.txt`](/llms.txt) | Index of all documentation pages with titles, descriptions, and URLs | | [`/llms-full.txt`](/llms-full.txt) | Full content of every page — titles, descriptions, and body text | ## Usage Pass the URL to any AI agent or LLM that accepts context: ```bash # Quick overview (page index only) curl https://typescript-superapp.bunnytech.app/llms.txt # Full documentation content curl https://typescript-superapp.bunnytech.app/llms-full.txt ``` Both files are generated at runtime from the MDX source files, so they're always in sync with the docs.