# Hyperlight API — Technical Architecture

> Last updated: 2026-03-14

## Performance

- **1,000+** accounts per query
- **43** REST endpoints
- **1s** orderbook snapshots
- **24/7** real-time streaming

## Tech Stack

- **Language:** Rust
- **HTTP Framework:** Axum 0.7+ with Tokio async runtime
- **WebSocket:** tokio-tungstenite
- **Database:** PostgreSQL 16 with time-partitioned tables
- **Cache & Pubsub:** Redis 7
- **Object Storage:** S3-compatible (for Parquet exports)
- **Serialization:** serde + rust_decimal (precise decimal arithmetic)
- **Logging:** tracing with JSON output
- **Metrics:** Prometheus

## Architecture

Hyperlight sits between client applications and the Lighter DEX as a transparent proxy and enhancement layer.

```
Your App  -->  Hyperlight  -->  Lighter DEX
(REST/WS)     (Rust/Axum)      (ZK-proof L2)
```

### Your Application

- Connect via REST or WebSocket — use the same endpoints you already know from Lighter.
- Existing Lighter API calls work unchanged. Just swap the base URL.
- Add batch queries, enriched streams, and historical exports on top of what you already have.
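
The base-URL swap is literal string replacement: the path and query string of any existing Lighter call carry over unchanged. A minimal sketch (both host names below are placeholders, not real endpoints):

```rust
// Placeholder base URLs -- substitute your actual Lighter and Hyperlight hosts.
const LIGHTER_BASE: &str = "https://api.lighter.example";
const HYPERLIGHT_BASE: &str = "https://hyperlight.example.com";

/// Rewrite a Lighter API URL to point at the Hyperlight proxy.
/// The path and query string are preserved unchanged; returns None
/// if the URL is not a Lighter API URL.
fn to_hyperlight(url: &str) -> Option<String> {
    url.strip_prefix(LIGHTER_BASE)
        .map(|rest| format!("{HYPERLIGHT_BASE}{rest}"))
}

fn main() {
    let native = format!("{LIGHTER_BASE}/api/v1/orderBookDetails?market_id=0");
    if let Some(proxied) = to_hyperlight(&native) {
        println!("{proxied}");
    }
}
```

Because only the prefix changes, the swap can live in a single config value in your client.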

### The Hyperlight Layer

- Built in Rust with Axum and Tokio for low-latency, high-throughput performance.
- PostgreSQL for time-partitioned historical data, Redis for caching and real-time pubsub.
- Enriches every orderbook and trade message with account attribution and whale flags.
- Manages API keys, tiered rate limits, and Prometheus metrics out of the box.
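
The enrichment step in the third bullet can be sketched roughly as follows. The field names and the `WHALE_THRESHOLD_USD` cutoff are illustrative assumptions, not Hyperlight's actual schema or threshold:

```rust
// Illustrative notional cutoff for the whale flag -- not Hyperlight's real value.
const WHALE_THRESHOLD_USD: u64 = 100_000;

/// A raw trade as it arrives from the upstream feed (simplified).
struct RawTrade {
    account_index: i64,
    notional_usd: u64,
}

/// The same trade after enrichment: account attribution plus a whale flag.
struct EnrichedTrade {
    account_index: i64,
    notional_usd: u64,
    is_whale: bool,
}

/// Attach the whale flag before the message is fanned out to subscribers.
fn enrich(t: RawTrade) -> EnrichedTrade {
    EnrichedTrade {
        account_index: t.account_index,
        notional_usd: t.notional_usd,
        is_whale: t.notional_usd >= WHALE_THRESHOLD_USD,
    }
}

fn main() {
    let t = enrich(RawTrade { account_index: 42, notional_usd: 250_000 });
    println!("account {} whale = {}", t.account_index, t.is_whale);
}
```

In the real service the amounts would be `rust_decimal::Decimal` (see Numeric Precision below); plain integers keep the sketch dependency-free.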

### Lighter DEX

- A ZK-proof perpetual DEX running as an Ethereum L2 with ~50 native API endpoints.
- Hyperlight proxies and caches native calls — you get faster responses with no extra load on the chain.
- Native endpoints remain available as a fallback; Hyperlight adds capabilities, never removes them.

## Services

Hyperlight runs as four independent services:

1. **API Server** (port 8080) — REST endpoints with auth, rate limiting, usage tracking
2. **WebSocket Proxy** (port 8081) — Real-time stream multiplexing, enrichment, and fan-out via Redis pubsub
3. **Ingestion Service** — Continuously ingests Lighter WebSocket data (orderbook, trades) and REST data (candles, funding, liquidations), publishes to Redis, writes to PostgreSQL
4. **Data Lake Service** — Scheduled Parquet exports to S3

## Technical Specifications

### Built in Rust

Axum + Tokio async runtime for low-latency, high-throughput performance.

### PostgreSQL + Redis

Time-partitioned tables for history, Redis for caching and pubsub.

### Parquet Exports

Columnar format for Python, DuckDB, Spark, and modern analytics tools.

### 1s Snapshots

Orderbook snapshots every second for high-resolution backtesting.

### SHA-256 Keys

Keys hashed before storage. Plaintext shown once, never retained.

### Production Monitoring

Prometheus metrics, health monitoring, and automated alerting.

## Data Storage

- **PostgreSQL:** Trades, orderbook snapshots, candles, funding rates, liquidation events, account snapshots, API keys, users, market alerts — all in time-partitioned tables for efficient range queries
- **Redis:** Rate limit tracking (token bucket via Lua scripts), orderbook cache, ticker cache, funding cache, API usage counters, alert cooldowns, pubsub for real-time data distribution
- **S3/MinIO:** Parquet exports for backtesting (trades, orderbook, candles, funding, liquidations)

## Security

- API keys prefixed with `hlt_`, SHA-256 hashed before database storage
- Plaintext shown exactly once at creation, never retained
- Optional IP allowlisting per key
- Tiered rate limiting enforced per-key via Redis token bucket with atomic Lua scripts
- HMAC-SHA256 webhook signing for alert notifications
- JWT tokens with 24-hour expiration for dashboard endpoints
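
The per-key token bucket works roughly as below. In production the state lives in Redis and is updated atomically inside a Lua script; this in-process sketch shows the same refill arithmetic (the capacity and refill rate are illustrative, not a real tier's limits):

```rust
use std::time::Instant;

/// One API key's token bucket. Production state lives in Redis and is
/// mutated atomically by a Lua script; this is an in-process sketch.
struct TokenBucket {
    capacity: f64,       // maximum burst size
    refill_per_sec: f64, // sustained request rate for the key's tier
    tokens: f64,
    last: Instant,
}

impl TokenBucket {
    fn new(capacity: f64, refill_per_sec: f64) -> Self {
        Self { capacity, refill_per_sec, tokens: capacity, last: Instant::now() }
    }

    /// Try to spend one token; returns false when the key is rate-limited.
    fn try_acquire(&mut self) -> bool {
        let now = Instant::now();
        let elapsed = now.duration_since(self.last).as_secs_f64();
        self.last = now;
        // Refill proportionally to elapsed time, capped at capacity.
        self.tokens = (self.tokens + elapsed * self.refill_per_sec).min(self.capacity);
        if self.tokens >= 1.0 {
            self.tokens -= 1.0;
            true
        } else {
            false
        }
    }
}

fn main() {
    // Burst of 3, refilling 1 token/sec: a tight loop exhausts the burst.
    let mut bucket = TokenBucket::new(3.0, 1.0);
    for i in 1..=5 {
        println!("request {i}: allowed = {}", bucket.try_acquire());
    }
}
```

Moving this logic into a Lua script makes the read-refill-spend sequence atomic per key, which is why concurrent API servers can share one Redis-backed limiter without races.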

## Numeric Precision

All prices, sizes, and balances use `rust_decimal::Decimal` stored as PostgreSQL `NUMERIC`. No floating-point approximation. Lighter indices (`account_index`, `market_id`) are `i64` / `BIGINT`. All timestamps normalized to `DateTime<Utc>` / `TIMESTAMPTZ`.
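
The case for `Decimal` over `f64` is easy to demonstrate. Since `rust_decimal` is an external crate, the sketch below uses integer micro-units as a stand-in for exact decimal arithmetic:

```rust
/// Sum 0.1 ten times in binary floating point -- the result drifts,
/// because 0.1 has no exact base-2 representation.
fn sum_tenths_f64() -> f64 {
    (0..10).map(|_| 0.1_f64).sum()
}

/// The same sum in fixed-point integer micro-units (1 unit = 1e-6).
/// rust_decimal::Decimal plays this role in Hyperlight, with scale tracking.
fn sum_tenths_micro() -> i64 {
    (0..10).map(|_| 100_000_i64).sum()
}

fn main() {
    println!("f64:   {}", sum_tenths_f64());   // lands just below 1.0
    println!("micro: {}", sum_tenths_micro()); // exactly 1_000_000
}
```

Storing the same values as PostgreSQL `NUMERIC` keeps this exactness end to end, so balances round-trip through the database without drift.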