
Multi-Database Querying

Connect to Postgres, MySQL, SQLite, and CSV simultaneously.


superapp connects to multiple databases simultaneously using native drivers for each database type. Each connection gets its own driver instance and connection pool, and all queries are routed directly to the target database.

How Connections Work

When the engine starts, each configured connection is initialized with its native driver (e.g., pg for Postgres, mysql2 for MySQL, better-sqlite3 for SQLite). The engine manages connection pools per database and routes queries based on the table namespace.

createEngine({
  connections: {
    main:      'postgres://...',
    warehouse: 'mysql://...',
    legacy:    './legacy.db',
    imports:   { directory: './data/' },
  },
})
  Query: db.select().from(schema.orders)


  Engine routes by namespace
    ├── main.*      → Postgres (pg driver)
    ├── warehouse.* → MySQL (mysql2 driver)
    ├── legacy.*    → SQLite (better-sqlite3)
    └── imports.*   → CSV (in-memory)

After initialization, every table is accessible through its namespace: main.orders, warehouse.events, legacy.customers, imports.products_2024.
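The routing step can be sketched as a lookup on the table's namespace prefix. This is a minimal illustration of the idea, not superapp's actual internals; `routeByNamespace` and the driver map are assumptions for the sketch:

```typescript
// Sketch: the connection name is the part of the table reference before
// the first dot, and each connection maps to one native driver.
type DriverKind = 'pg' | 'mysql2' | 'better-sqlite3' | 'csv'

// Hypothetical driver map mirroring the createEngine config above.
const connectionDrivers: Record<string, DriverKind> = {
  main: 'pg',
  warehouse: 'mysql2',
  legacy: 'better-sqlite3',
  imports: 'csv',
}

function routeByNamespace(table: string): { connection: string; driver: DriverKind } {
  const dot = table.indexOf('.')
  if (dot === -1) throw new Error(`table "${table}" has no connection namespace`)
  const connection = table.slice(0, dot)
  const driver = connectionDrivers[connection]
  if (!driver) throw new Error(`unknown connection "${connection}"`)
  return { connection, driver }
}

routeByNamespace('warehouse.events')
// → { connection: 'warehouse', driver: 'mysql2' }
```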

Connection Namespacing

Every table reference in superapp follows the pattern {connection}.{table}:

main.orders          → Postgres table "orders"
main.customers       → Postgres table "customers"
warehouse.events     → MySQL table "events"
warehouse.sessions   → MySQL table "sessions"
legacy.users         → SQLite table "users"
imports.products     → CSV file "products.csv"

This namespacing is consistent across the entire system:

// Client queries
db.select().from(schema.orders)           // main connection
db.select().from(schema.events)           // warehouse connection

// Permission definitions
permissions: {
  view_orders: {
    table: 'main.orders',          // same namespace
    // ...
  },
  view_events: {
    table: 'warehouse.events',     // same namespace
    // ...
  },
}

Same Permission Pipeline Regardless of Source

The permission engine does not care which database type backs a table. Every query -- whether it hits Postgres, MySQL, SQLite, or a CSV file -- passes through the same pipeline:

1. Client request (Drizzle Proxy: SQL + params)
2. JWT verification
3. Permission evaluation (validate SQL, inject WHERE clauses, restrict columns, validate writes)
4. Query execution → routed to the correct database driver
5. Response
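The pipeline can be sketched as a chain of stages. All names here are illustrative stand-ins, not the real superapp implementation:

```typescript
// Illustrative pipeline skeleton with stubbed stages.
interface QueryRequest {
  sql: string
  params: unknown[]
  token: string
}

// Stage 2: stand-in for real JWT verification.
function verifyJwt(req: QueryRequest): { sub: string } {
  if (!req.token) throw new Error('unauthorized')
  return { sub: req.token }
}

// Stage 3: stand-in — a real engine validates the SQL, injects WHERE
// clauses, and restricts columns based on the user's permissions.
function evaluatePermissions(req: QueryRequest, user: { sub: string }): QueryRequest {
  return { ...req, sql: `${req.sql} /* scoped for ${user.sub} */` }
}

// Stage 4: stand-in for routing to the correct driver and executing.
function execute(_req: QueryRequest): unknown[] {
  return []
}

// Stages 1–5 composed: requests that fail a stage never reach the driver.
function handle(req: QueryRequest): unknown[] {
  const user = verifyJwt(req)
  const scoped = evaluatePermissions(req, user)
  return execute(scoped)
}
```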

A permission on a CSV table works identically to a permission on a Postgres table:

permissions: {
  view_imported_products: {
    table: 'imports.products',       // CSV source
    roles: ['viewer'],
    select: {
      columns: ['sku', 'name', 'price', 'category'],
      where: {
        category: { $in: '$user.allowed_categories' },
      },
    },
  },
  view_orders: {
    table: 'main.orders',           // Postgres source
    roles: ['viewer'],
    select: {
      columns: ['id', 'amount', 'status'],
      where: {
        customer_id: { $eq: '$user.customer_id' },
      },
    },
  },
}

Both permissions compile to the same CASL ability structure and produce the same kind of SQL WHERE clause. The engine handles routing transparently.
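The WHERE-clause side of that compilation can be sketched as follows. This is an assumed simplification, not superapp's actual compiler, and real engines would use bound parameters rather than inlined literals:

```typescript
// Sketch: resolve $user.* placeholders against the JWT claims, then
// render a WHERE clause. Only $eq and $in are shown.
type Filter = Record<string, { $eq?: unknown; $in?: unknown }>

function compileWhere(where: Filter, user: Record<string, unknown>): string {
  // A '$user.foo' string resolves to the user's 'foo' claim.
  const resolve = (v: unknown): unknown =>
    typeof v === 'string' && v.startsWith('$user.')
      ? user[v.slice('$user.'.length)]
      : v

  const clauses = Object.entries(where).map(([column, ops]) => {
    if (ops.$eq !== undefined) return `${column} = ${JSON.stringify(resolve(ops.$eq))}`
    if (ops.$in !== undefined) {
      const values = resolve(ops.$in) as unknown[]
      return `${column} IN (${values.map(v => JSON.stringify(v)).join(', ')})`
    }
    throw new Error(`unsupported operator on ${column}`)
  })
  return clauses.join(' AND ')
}

// The CSV and Postgres permissions above yield the same kind of clause:
compileWhere({ category: { $in: '$user.allowed_categories' } },
             { allowed_categories: ['tools', 'parts'] })
// → category IN ("tools", "parts")
```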

Provider Types

The engine infers the database type from the connection URL protocol. No explicit provider imports are needed for built-in databases.

Built-in Providers

Database   Connection Config                                 Driver
Postgres   'postgres://...' or process.env.PG_URL!           pg
MySQL      'mysql://...' or process.env.MYSQL_URL!           mysql2
SQLite     './path/to/db.db' or process.env.SQLITE_PATH!     better-sqlite3
CSV        { directory: string }                             built-in

Connecting Multiple Databases

Just pass the connection URLs. The engine loads the correct driver automatically based on the protocol:

import { createEngine } from '@superapp/backend'

const engine = createEngine({
  connections: {
    main: process.env.PG_URL!,
    warehouse: process.env.MYSQL_URL!,
  },
})
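The protocol-based inference can be sketched like this. The exact rules are an assumption; the real engine's detection logic may differ:

```typescript
// Sketch of provider inference from the connection config shape and
// URL protocol (assumed logic, not the actual superapp implementation).
type Provider = 'pg' | 'mysql2' | 'better-sqlite3' | 'csv'

function inferProvider(conn: string | { directory: string }): Provider {
  if (typeof conn === 'object') return 'csv'            // { directory: './data/' }
  if (conn.startsWith('postgres://') || conn.startsWith('postgresql://')) return 'pg'
  if (conn.startsWith('mysql://')) return 'mysql2'
  return 'better-sqlite3'                               // file paths like './legacy.db'
}
```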

Performance Considerations

  • Native drivers connect directly to the database with no intermediary, minimizing latency.
  • Connection pooling is managed per provider. Each database connection maintains its own pool.
  • CSV sources are loaded into memory at startup. Keep CSV files small or use a database for large datasets.
  • Queries are scoped to a single connection. Each query targets one database.
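Because each query targets one connection, cross-database joins happen in application code: fetch from each connection separately, then combine. A hedged sketch using the client API shown earlier (`schema.events.order_id` is an assumed column; `inArray` is Drizzle's standard filter helper):

```typescript
// Two scoped queries instead of one cross-database join.
const orders = await db.select().from(schema.orders)             // main (Postgres)
const events = await db.select()
  .from(schema.events)                                           // warehouse (MySQL)
  .where(inArray(schema.events.order_id, orders.map(o => o.id)))
```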
