Step-by-Step Guide to Using Dwellir's Hyperliquid Order Book Server

3rd November 2025 · 12 min read

Hyperliquid's onchain order book delivers performance comparable to centralized exchanges: transactions finalize in about 0.2 seconds, and the platform handles around 200,000 orders per second. However, the public WebSocket endpoint limits you to 100 simultaneous connections and 1,000 subscriptions, which becomes restrictive for serious trading operations. Dwellir's hosted order book server removes these limits, provides access to detailed Level 4 (L4) order data, and reduces median latency by 24.1% compared to the public feed.

Quick answer: Use authenticated WebSocket connections to access Dwellir's order book server. Subscribe to trade feeds, L2 order book depth, and L4 detailed order data using the open-source Python examples. Add reconnection handling, analytics, and data storage as needed. Combine this with Dwellir's HyperEVM RPC, gRPC snapshots, and dedicated nodes for a complete Hyperliquid data solution.

This tutorial walks through using Dwellir's order book server. We'll cover the prerequisites, work through each Python example, and share tips for building production-ready systems on Hyperliquid.

Why Use Dwellir's Order Book Server

  • Lower latency: Benchmarks across 2,662 trades show a median latency improvement of 51 ms (24.1%) compared to the public API, with fewer latency spikes.
  • Access to L4 data: Public feeds only provide L2 aggregated depth. Dwellir's server includes authenticated L4 streams that show individual orders and user addresses for detailed market analysis.
  • Reliable infrastructure: We handle monitoring, connection management, and data compression so your feeds stay up even during node maintenance.
  • Complete data access: Use the same credentials for WebSockets, HyperEVM RPC, gRPC snapshots, and historical data instead of managing multiple providers.

Why Dwellir? One account gives you access to the entire Hyperliquid data stack: WebSocket feeds, HyperEVM RPC, gRPC APIs, historical datasets, and dedicated node infrastructure.

Prerequisites

  1. Request access - Contact our team to get access to Dwellir's Hyperliquid services, including the order book server.
  2. Workspace setup - Clone the official hyperliquid-orderbook-server-code-examples repository.
  3. Environment variables - Copy .env.example to .env, then add the WebSocket URL provided by Dwellir (wss://<your-instance>.dwellir.com/ws).
  4. Python toolchain - Python 3.8+, pip, and optionally virtualenv.
  5. API key security - Store credentials outside source control and use separate keys for each service.

Understanding the Example Repository

In this article, we'll walk through seven Python examples that take you from basic WebSocket connections to running your own order book monitoring system. Here's how they're organized:

| Stage | Example Directory | Concept | Production Takeaway |
| --- | --- | --- | --- |
| 1 | 01_websocket_basics | Single-feed trade subscription | Baseline connection, message decoding |
| 2 | 02_l2_orderbook_basics | Aggregated depth with spreads | Validate parity with UI, tune nLevels/nSigFigs |
| 3 | 03_multiple_subscriptions | Multi-coin routing | Message dispatch, concurrency patterns |
| 4 | 04_multi_coin_tracker | Advanced multi-coin tracking | Track metrics across multiple coins, VWAP calculations |
| 5 | 05_reconnection_handling | Exponential backoff + resubscription | Stay live through upstream restarts |
| 6 | 06_data_analysis | Market metrics and analytics | Real-time VWAP, spreads, buy/sell ratios |
| 7 | 07_l4_orderbook | Individual order visibility with L4 data | Wallet-level liquidity, order tracking, market microstructure |

We’ll expand each stage, highlight the key code paths, and layer in Dwellir-specific guidance.

Step 1 – WebSocket Basics

Goal: connect, subscribe to BTC trades, and stream payloads.
Key files: python-examples/01_websocket_basics/websocket_basics.py

Focus on three lines:

  1. ws_url = os.getenv("WEBSOCKET_URL") – keep environment injection external so secrets never hit commits.
  2. await websocket.send(json.dumps(subscription)) – Dwellir follows Hyperliquid’s JSON schema ({ "method": "subscribe", "subscription": { ... } }). The payloads match the official WebSocket docs.
  3. async for message in websocket: – messages arrive as text frames; decode with json.loads and dispatch by channel.

import asyncio
import json
import os
import websockets
from dotenv import load_dotenv

load_dotenv()

async def main():
    ws_url = os.getenv("WEBSOCKET_URL")

    # Connect to WebSocket
    websocket = await websockets.connect(ws_url)

    # Subscribe to BTC trades
    subscribe_message = {
        "method": "subscribe",
        "subscription": {
            "type": "trades",
            "coin": "BTC"
        }
    }
    await websocket.send(json.dumps(subscribe_message))

    # Listen for messages
    async for message in websocket:
        data = json.loads(message)
        print(f"Received: {json.dumps(data, indent=2)}")

if __name__ == "__main__":
    asyncio.run(main())

Production tips

  • Add structured logging (timestamp, coin, latency from exchange timestamp to receipt); a minimal sketch follows this list.
  • Persist raw frames for replay if you need deterministic testing.
  • Use separate API wallets per process to avoid nonce collisions when you later send actions.
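
As a sketch of the first tip, the handler below logs each trade together with the delay between the exchange timestamp and local receipt. It assumes the trades channel delivers {"channel": "trades", "data": [{"coin", "px", "sz", "time", ...}]} with time in milliseconds; verify the shape against the frames you actually receive.

import json
import logging
import time

logging.basicConfig(format="%(asctime)s %(levelname)s %(message)s", level=logging.INFO)
logger = logging.getLogger("trade_feed")

def log_trade_frame(raw_frame: str) -> None:
    """Log one trades frame with per-trade receive latency (payload shape assumed)."""
    received_ms = time.time() * 1000
    message = json.loads(raw_frame)
    if message.get("channel") != "trades":
        return
    for trade in message.get("data", []):
        latency_ms = received_ms - float(trade["time"])
        logger.info("coin=%s px=%s sz=%s latency_ms=%.1f",
                    trade["coin"], trade["px"], trade["sz"], latency_ms)

Call it from the async for loop in Step 1 in place of the print statement.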

Step 2 – L2 Order Book Basics

Goal: reconstruct aggregated depth and calculate spreads.
Key files: python-examples/02_l2_orderbook_basics/l2_orderbook.py

Important subscription arguments:

{
  "type": "l2Book",
  "coin": "ETH",
  "nLevels": 20,
  "nSigFigs": 5
}

  • nLevels controls depth. Start with 20 and scale once memory profiling is done.
  • nSigFigs governs rounding granularity; match UI expectations (5 retains enough precision for liquid coins).

Validation: Log the best bid and ask prices, then compare them with app.hyperliquid.com to verify they match.
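
To make that check concrete, here is a small sketch that pulls top-of-book prices out of an l2Book frame. It assumes data.levels is a pair of lists (bids first, asks second) of {"px", "sz", "n"} entries, matching Hyperliquid's public schema; confirm against your own frames.

def top_of_book(message: dict):
    """Return (best_bid, best_ask, spread) for an l2Book frame, or None for other channels."""
    if message.get("channel") != "l2Book":
        return None
    bids, asks = message["data"]["levels"]  # assumed order: [bids, asks]
    best_bid = float(bids[0]["px"]) if bids else None
    best_ask = float(asks[0]["px"]) if asks else None
    spread = best_ask - best_bid if best_bid is not None and best_ask is not None else None
    return best_bid, best_ask, spread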

Production tips

  • Emit telemetry: time between message receipt and internal queueing, number of levels changed per update.
  • Pre-allocate arrays for depth ladders so GC pauses never block processing.

Step 3 – Multiple Subscriptions

Goal: subscribe to trades and books for multiple coins concurrently.
Key files: python-examples/03_multiple_subscriptions/multiple_subscriptions.py

Patterns to copy:

  • Dispatcher: route by channel to specialized handlers (a minimal sketch follows this list).
  • Backpressure: throttle print/log statements; use bounded queues for CPU-intensive consumers.
  • Heartbeat: reply to server ping events to maintain connection health (the examples show websocket.ping() scaffolding).
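
A minimal dispatcher sketch that combines those patterns, using the channel names from the earlier steps; queue sizing and handler wiring are illustrative, not the repository's exact structure.

import asyncio
import json
import logging

QUEUE_SIZE = 1000  # bounded queues apply backpressure to slow consumers

queues = {
    "trades": asyncio.Queue(maxsize=QUEUE_SIZE),
    "l2Book": asyncio.Queue(maxsize=QUEUE_SIZE),
}

async def dispatch(websocket):
    """Read frames and route them by channel; unknown channels are ignored."""
    async for raw in websocket:
        message = json.loads(raw)
        queue = queues.get(message.get("channel"))
        if queue is None:
            continue  # e.g. subscriptionResponse acknowledgements
        try:
            queue.put_nowait(message)
        except asyncio.QueueFull:
            pass  # decide whether to drop, block, or alert; this sketch drops

async def consume(channel, handler):
    """Drain one channel with an isolated handler so a bad message cannot kill the loop."""
    queue = queues[channel]
    while True:
        message = await queue.get()
        try:
            handler(message)
        except Exception:
            logging.getLogger(__name__).exception("handler failed for %s", channel)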

Production tips

  • Separate CPU-bound analytics (e.g., pandas) from IO loops; push heavy operations to worker threads or async tasks.
  • Wrap handlers with try/except and metrics so one malformed message doesn’t kill the entire process.

Step 4 – Multi-Coin Tracker

Goal: track trades from multiple coins simultaneously and compare metrics across markets.
Key files: python-examples/04_multi_coin_tracker/multi_coin_tracker.py

This example introduces a class-based architecture for tracking multiple coins:

class CoinTracker:
    """Track trading metrics for a specific coin"""

    def __init__(self, coin):
        self.coin = coin
        self.trades = deque(maxlen=50)  # Last 50 trades
        self.total_volume = 0
        self.buy_volume = 0
        self.sell_volume = 0

The multi-coin tracker calculates real-time metrics for each coin (a sketch of the update logic follows the list):

  • VWAP: Volume-weighted average price across recent trades
  • Buy/Sell Ratio: Volume ratio to gauge market sentiment
  • Total Volume: Cumulative trading activity
  • Price Tracking: Latest execution prices
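
A hedged sketch of how those metrics might be maintained on the tracker above, assuming trade payloads carry px, sz, and side with "B" marking buyer-initiated trades; the repository's implementation may differ in detail.

from collections import deque

class CoinTracker:
    def __init__(self, coin):
        self.coin = coin
        self.trades = deque(maxlen=50)
        self.total_volume = 0.0
        self.buy_volume = 0.0
        self.sell_volume = 0.0

    def add_trade(self, trade):
        """Update rolling metrics from one trade dict (assumed keys: px, sz, side)."""
        price, size = float(trade["px"]), float(trade["sz"])
        self.trades.append({"price": price, "size": size})
        self.total_volume += size
        if trade.get("side") == "B":  # assumption: "B" = buy, anything else = sell
            self.buy_volume += size
        else:
            self.sell_volume += size

    def vwap(self):
        """Volume-weighted average price over the rolling trade window."""
        total_value = sum(t["price"] * t["size"] for t in self.trades)
        total_size = sum(t["size"] for t in self.trades)
        return total_value / total_size if total_size else 0.0

    def buy_sell_ratio(self):
        """Buy volume divided by sell volume; infinity when no sells have been seen."""
        return self.buy_volume / self.sell_volume if self.sell_volume else float("inf")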

Production tips

  • Scale the tracking to dozens of markets by managing memory with fixed-size deques
  • Aggregate metrics periodically and emit them to monitoring dashboards
  • Use separate subscription handlers to isolate failures per coin

Step 5 – Reconnection Handling

Goal: survive node restarts, network hiccups, and intentional server exits.
Key files: python-examples/05_reconnection_handling/robust_client.py

Essentials:

  • Exponential backoff capped at 60 seconds.
  • Replay subscriptions after reconnecting.
  • Persist the set of active subscriptions so you never miss a channel when the server restarts to reconcile L4/L2 state (a documented behavior).

async def resilient_stream(subscriptions):
    backoff = 1
    while True:
        try:
            async with websockets.connect(WS_URL, extra_headers=HEADERS, open_timeout=10) as websocket:
                backoff = 1  # reset once the connection is established
                # Replay every subscription after (re)connecting
                for subscription in subscriptions:
                    await websocket.send(json.dumps(subscription))
                async for raw in websocket:
                    handle_message(json.loads(raw))
        except (websockets.ConnectionClosed, OSError) as err:
            logger.warning("socket closed (%s), retrying in %ss", err, backoff)
            await asyncio.sleep(backoff)
            backoff = min(backoff * 2, 60)

Add metrics (success counts, failure reasons) and surface them via Prometheus or CloudWatch to track reliability.

Step 6 – Market Metrics and Data Analysis

Goal: calculate comprehensive market statistics from real-time order book and trade data.
Key files: python-examples/06_data_analysis/market_metrics.py

The MarketAnalyzer class provides a complete analytics framework:

class MarketAnalyzer:
    """Analyze market data and calculate metrics"""

    def __init__(self, history_size=100):
        self.trades = deque(maxlen=history_size)
        self.spreads = deque(maxlen=history_size)
        self.prices = deque(maxlen=history_size)
        self.volumes = deque(maxlen=history_size)

    def get_vwap(self):
        """Calculate Volume-Weighted Average Price"""
        total_value = sum(t['price'] * t['size'] for t in self.trades)
        total_volume = sum(t['size'] for t in self.trades)
        return total_value / total_volume if total_volume > 0 else 0

The analyzer tracks multiple metrics simultaneously:

  • VWAP: Volume-weighted average price over a rolling window
  • Buy/Sell Ratio: Gauge market pressure direction
  • Average Spread: Monitor liquidity conditions
  • Price Change: Track absolute and percentage price movements
  • Total Volume: Aggregate trading activity

Production tips

  • Emit analytics to Kafka or Redpanda for downstream services
  • Annotate trades with HyperEVM events (funding, settlement) retrieved via Dwellir's RPC to contextualize signals
  • Set configurable display intervals to balance between real-time updates and information overload

Step 7 – L4 Order Book (Individual Orders)

Goal: unlock per-order visibility with individual order IDs and user addresses.
Key files: python-examples/07_l4_orderbook/l4_orderbook.py

The L4 subscription provides the most detailed market microstructure data available:

{
  "method": "subscribe",
  "subscription": {
    "type": "l4Book",
    "coin": "BTC"
  }
}

The L4 order book tracks individual orders with full details:

class L4OrderBook:
    def __init__(self):
        # Store orders by order ID: {oid: {user, limitPx, sz, side}}
        self.orders = {}
        # Store bids and asks by price: {price: [oid1, oid2, ...]}
        self.bids = defaultdict(list)
        self.asks = defaultdict(list)
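
As a hedged sketch only, the methods below show how such a book might apply individual order additions and removals, reusing the field names from the comment above (oid, user, limitPx, sz, side). The actual l4Book update schema is not reproduced here, so treat the payload keys as assumptions to verify against live frames.

from collections import defaultdict

class L4OrderBook:
    def __init__(self):
        self.orders = {}               # oid -> {user, limitPx, sz, side}
        self.bids = defaultdict(list)  # price -> [oid, ...]
        self.asks = defaultdict(list)

    def add_order(self, order):
        """Insert one resting order keyed by its order ID."""
        oid = order["oid"]
        price = float(order["limitPx"])
        self.orders[oid] = order
        side = self.bids if order["side"] == "B" else self.asks  # assumption: "B" = bid
        side[price].append(oid)

    def remove_order(self, oid):
        """Drop an order on cancel or full fill; ignore unknown IDs."""
        order = self.orders.pop(oid, None)
        if order is None:
            return
        price = float(order["limitPx"])
        side = self.bids if order["side"] == "B" else self.asks
        side[price].remove(oid)
        if not side[price]:
            del side[price]

    def best_bid(self):
        """Highest resting bid price, or None when the book is empty."""
        return max(self.bids) if self.bids else None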

L4 data enables advanced market analysis:

  • Order Flow Analysis: Track aggressive vs passive flow by wallet address
  • Spoofing Detection: Monitor order lifetimes and cancellation patterns
  • Liquidity Mapping: Build heatmaps keyed by participant
  • Market Microstructure: Understand how individual orders form market depth

Dwellir's server provides authenticated L4 access, bypassing the public API's 100 connection limit and unlocking analytics that would otherwise require self-hosting infrastructure.

Production tips

  • Increase the WebSocket max_size parameter to handle large L4 messages (10 MB recommended; see the snippet after this list)
  • Store snapshots periodically to enable deterministic replay for audits
  • Compress historical logs with LZ4 or zstd to match Hyperliquid's archival format
  • Process book diffs incrementally to maintain low-latency order book state
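
For the first tip, note that the websockets library caps inbound frames at 1 MiB by default; a quick sketch of raising that limit at connect time (the 10 MB figure mirrors the recommendation above):

import os
import websockets

async def connect_l4():
    # Raise max_size so large L4 snapshots are not rejected by the client library.
    return await websockets.connect(
        os.getenv("WEBSOCKET_URL"),
        max_size=10 * 1024 * 1024,
    )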

Extending Beyond the Examples

With all seven examples mastered, integrate Dwellir's complementary services:

  • Historical trade indices: hydrate backtests with normalized node_trades, node_fills, and node_fills_by_block data sets sourced from the managed trade archive.
  • gRPC snapshots: call GetOrderBookSnapshot to reconcile WebSocket state or recover after long outages.
  • HyperEVM RPC: deploy smart-order routing or risk management contracts against the same credentials, guided by the Hyperliquid RPC overview.

Hardening for Production

  1. Rate-limit awareness: even with dedicated infrastructure, respect Hyperliquid’s per-IP and per-user rules (100 connections, 1,000 subscriptions) to avoid unexpected throttles.
  2. State checkpoints: snapshot L4 ladders and trade sequences so you can reconstruct any trading incident.
  3. Latency dashboards: track round-trip and publish-subscribe delays to verify the 51 ms median advantage persists in your deployment.
  4. Security hygiene: rotate API keys quarterly, segment environment variables by service, and store secrets in Vault or AWS Secrets Manager.
  5. Disaster drills: simulate node restarts and network partitions; confirm reconnection logic resubscribes and replays without manual intervention.

Troubleshooting & Monitoring Playbook

  • Missed heartbeats: If the server closes the socket after idle periods, confirm your client responds to ping frames or sends noop keepalives every 15 seconds (a keepalive sketch follows this list).
  • Sequencing gaps: Persist last sequence numbers for every channel; on mismatch, request a fresh snapshot via WebSocket or gRPC before resuming processing.
  • Rate-limit warnings: Track HTTP response codes and WebSocket system messages. Public endpoints clamp at 100 connections and 1,000 subscriptions; choose Dwellir’s dedicated channels to eliminate those limits.
  • Latency drift: Log message timestamps end-to-end and compare against benchmark medians (51 ms improvement target). Spikes often stem from downstream processing, not the feed; move heavy computations off the main event loop.
  • Credential rotation: Stage new API keys in parallel, then redeploy clients with blue/green rollout so you never drop connections mid-trade.
  • Unexpected payload changes: Version your message parsers and write schema assertions. Dwellir broadcasts release notes; subscribe so you can redeploy parsers ahead of protocol updates.
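
For the heartbeat item, a small keepalive sketch. It assumes the server accepts Hyperliquid-style {"method": "ping"} messages; adjust the payload, or rely on protocol-level pings, if Dwellir documents a different convention.

import asyncio
import json

async def keepalive(websocket, interval=15):
    """Send an application-level ping on a fixed interval (payload is an assumption)."""
    while True:
        await asyncio.sleep(interval)
        await websocket.send(json.dumps({"method": "ping"}))

Run it next to your reader task, e.g. asyncio.create_task(keepalive(websocket)).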

Comparing to Public Endpoints

| Feature | Public Hyperliquid API | Dwellir Order Book Server |
| --- | --- | --- |
| Connections | 100 WebSockets per IP | Dedicated capacity with managed routing |
| Data depth | Trades + L2 | Trades + L2 + private L4 |
| Latency | 0.26 s median (example benchmark) | 0.21 s median (51 ms faster) |
| Rate limits | 1,200 REST weight/min, 1,000 subscriptions | Tuned per client; SLA-backed |
| Reliability | Self-managed | 24/7 monitored infrastructure |

Public limits hamper multi-market bots and advanced analytics; Dwellir clears the path by hosting co-located infrastructure and layering operational support.

Dwellir’s Hyperliquid Infrastructure Options

  • Dedicated Hyperliquid nodes: Exclusive validator and archive capacity with Hypercore + HyperEVM access, backed by SLAs and proactive monitoring. Start the conversation via the Hyperliquid hub.
  • Co-located execution environments: Single-tenant VPS instances racked alongside your dedicated node so trading bots and analytics stay within the same rack latency budget.
  • Custom Hypercore gRPC: Deterministic block fills, order deltas, and liquidation streams over managed TLS channels, documented alongside the GetOrderBookSnapshot entry.
  • HyperEVM RPC & REST suite: Contract calls, settlement operations, and funding checks using the same credential domain that powers your order book feeds.
  • Historical data delivery: Curated trade, book, and funding archives synchronized from Hyperliquid’s S3 buckets into query-ready formats (docs).

FAQ

How do I know my feed is in sync with Hyperliquid?
Track a reference market (e.g., BTC-PERP) and compare top-of-book quotes against app.hyperliquid.com. Any deviation that persists for more than a few milliseconds signals desynchronization; refresh your snapshot or reconnect.

Can I run multiple client instances with one API key?
Yes, but keep separate environment variables per service so a compromised key doesn’t cascade across bots. Use tags in your observability stack to attribute traffic per process.

What if I need REST access alongside WebSockets?
Point your REST clients at Dwellir’s Hyperliquid RPC endpoints. They share auth with the order book server, which simplifies credential management for order placement, funding checks, or HyperEVM contract calls.

How do I store L4 data efficiently?
Append updates to columnar formats such as Parquet or Apache Iceberg; partition by market and hour so analytics jobs can prune unnecessary files. Compress with LZ4 or zstd for 5–10x size reductions without sacrificing replay speed.
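
A sketch of that storage pattern with pyarrow, assuming each captured L4 event carries at least a coin and a millisecond time field; the column names are illustrative, not a fixed schema.

from datetime import datetime, timezone

import pyarrow as pa
import pyarrow.parquet as pq

def flush_l4_batch(records, base_dir="l4_archive"):
    """Append a batch of L4 events to a Parquet dataset partitioned by coin and hour.
    Batch writes every few seconds rather than writing one file per message."""
    for record in records:
        ts = datetime.fromtimestamp(record["time"] / 1000, tz=timezone.utc)
        record["hour"] = ts.strftime("%Y-%m-%dT%H:00")
    table = pa.Table.from_pylist(records)
    pq.write_to_dataset(
        table,
        root_path=base_dir,
        partition_cols=["coin", "hour"],
        compression="zstd",
    )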

Putting It All Together

By now you have practiced seven core skills: opening a secure WebSocket connection, reading live trades, understanding depth ladders, juggling multiple markets, recovering from dropped links, turning raw data into simple metrics, and inspecting individual orders. That toolkit is enough to stand up a dependable feed for the coins you care about.

With access to Dwellir’s order book server you can turn those skills into real products—a live market dashboard, alerting for sudden spread changes, or a historical replay that powers post-trade reviews—all without running your own Hyperliquid node.

Want credentials for the order book server? Reach out to our team and we’ll help you get started.


Ready to get started? For HyperEVM RPC access, sign up at our dashboard. For access to the order book server with L4 data, contact our team. Check out the Hyperliquid documentation for implementation details.
