Guides

Stream Order Book Data with the Dwellir WebSocket

Connect to Dwellir's Hyperliquid order book server for 100-level depth, multi-coin tracking, and production-ready reconnection.

The public Hyperliquid WebSocket API caps order book depth at 20 levels and limits you to 100 simultaneous connections. If you are building a trading bot, market-making system, or analytics dashboard that needs deeper visibility into the order book, those limits become a bottleneck fast. Dwellir's hosted order book server provides up to 100 levels of depth, access to Level 4 individual-order data, and edge servers in Singapore and Tokyo that bring median latency down to sub-millisecond for regional traders. This guide walks you through connecting, parsing L2 and L4 data, tracking multiple coins, and building a production-ready reconnection layer.

What you will build

By the end of this guide you will have a Python application that can:

  • Connect to Dwellir's Hyperliquid order book server over WebSocket and subscribe to live data feeds
  • Parse L2 order book depth with configurable levels and precision, including spread calculation
  • Track multiple coins simultaneously with per-coin metrics such as VWAP and buy/sell ratio
  • Reconnect gracefully using exponential backoff so your application survives network interruptions and server restarts

Each section builds on the previous one. The complete, runnable code for every stage is available in the companion repository.

Prerequisites

  • Python 3.8+ with pip installed. A virtual environment is recommended.
  • Dwellir API key with order book server access. Contact the Dwellir team to request credentials if you do not already have them.
  • Basic asyncio understanding. The examples use async/await throughout. If you are new to Python's async model, the asyncio documentation covers the essentials.

Install the two required packages:

Shell
pip install "websockets>=11.0.0" "python-dotenv>=1.0.0"

The quotes keep your shell from interpreting the `>=` version specifiers as redirections.

Create a .env file in your project root with the WebSocket URL provided by Dwellir:

Text
WEBSOCKET_URL=wss://your-instance.dwellir.com/ws

Keep this file out of version control. Store credentials separately from source code, and use distinct keys for each service or environment.

How the order book server works

Hyperliquid's order book exposes two depth levels through WebSocket subscriptions: L2 and L4.

L2 (aggregated depth) groups orders by price level. Each level shows the total size and order count at that price. L2 is the standard view you see on most exchange interfaces and is suitable for spread monitoring, general market analysis, and display purposes.
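Concretely, the parser later in this guide consumes l2Book messages shaped like the sketch below. The px/sz/n field names match what that code reads; the specific prices and sizes here are invented for illustration.

```python
# Illustrative shape of an l2Book message (values are made up).
# levels[0] holds bids and levels[1] holds asks, best price first;
# "px" and "sz" arrive as strings, "n" is the order count at that level.
sample = {
    "channel": "l2Book",
    "data": {
        "coin": "ETH",
        "levels": [
            [{"px": "3412.4", "sz": "8.1200", "n": 14}],  # bids
            [{"px": "3412.9", "sz": "5.0300", "n": 9}],   # asks
        ],
    },
}

best_bid = float(sample["data"]["levels"][0][0]["px"])
best_ask = float(sample["data"]["levels"][1][0]["px"])
spread = best_ask - best_bid  # ~0.5 in this example
```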

L4 (individual orders) goes deeper. Each entry includes the order ID, the wallet address that placed it, the limit price, the size, and the side (bid or ask). L4 data enables order flow analysis, spoofing detection, and per-participant liquidity mapping.

When subscribing to L2 data, two parameters control what you receive:

  • nLevels sets the number of price levels returned. The public API caps this at 20. Through Dwellir's server, you can request up to 100.
  • nSigFigs controls price aggregation precision. A value of 5 retains enough granularity for liquid coins like BTC and ETH. Lower values group prices more aggressively, which can be useful for less liquid markets.

L4 subscriptions do not take these parameters. You receive the full set of individual orders.
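To build intuition for nSigFigs, here is a rough sketch of significant-figure rounding in Python. Hyperliquid's exact aggregation rule is not spelled out in this guide, so treat this as an approximation of how neighboring prices collapse into a single level:

```python
import math


def round_to_sig_figs(price, sig_figs):
    """Round a price to `sig_figs` significant figures."""
    if price == 0:
        return 0.0
    exponent = math.floor(math.log10(abs(price)))
    return round(price, sig_figs - 1 - exponent)


# With nSigFigs=5, prices at this magnitude are grouped to the nearest 0.1:
round_to_sig_figs(3412.37, 5)  # 3412.4
round_to_sig_figs(3412.41, 5)  # 3412.4  (same level)
# With nSigFigs=3, grouping is far coarser:
round_to_sig_figs(3412.37, 3)  # 3410.0
```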

Dwellir operates edge servers in Singapore and Tokyo, placing infrastructure close to Hyperliquid's own validators. The result is lower latency and fewer missed updates compared to routing through distant data centers.

Feature               | Public Hyperliquid API              | Dwellir order book server
----------------------|-------------------------------------|------------------------------
Order book depth      | 20 levels (L2 only)                 | Up to 100 levels (L2 + L4)
Update delivery       | Periodic (~100 ms batches)          | Real-time streaming
Edge locations        | None (single origin)                | Singapore, Tokyo
Connection limits     | 100 WebSockets, 1,000 subscriptions | Dedicated capacity per client
Individual order data | Not available                       | Full L4 with wallet addresses

Connect and parse L2 data

Start with a single subscription to L2 order book data for one coin. This establishes the connection pattern you will reuse in every later section.

The subscription message follows Hyperliquid's JSON schema. You send a method: "subscribe" request with a subscription object that specifies the data type, coin, depth, and precision.

Python
import asyncio
import json
import os
from dotenv import load_dotenv
import websockets

load_dotenv()

COIN = "ETH"
N_LEVELS = 20
N_SIG_FIGS = 5


def display_orderbook(data):
    """Parse and display L2 order book data with spread calculation."""
    book = data["data"]
    levels = book.get("levels", [[], []])
    bids = levels[0] if len(levels) > 0 else []
    asks = levels[1] if len(levels) > 1 else []

    if not bids or not asks:
        print("Waiting for order book data...")
        return

    best_bid = float(bids[0]["px"])
    best_ask = float(asks[0]["px"])
    spread = best_ask - best_bid
    spread_pct = (spread / best_ask) * 100

    print(f"\n{'=' * 60}")
    print(f"  {COIN} Order Book")
    print(f"  Best Bid: ${best_bid:,.2f}  |  Best Ask: ${best_ask:,.2f}")
    print(f"  Spread: ${spread:,.2f} ({spread_pct:.4f}%)")
    print(f"{'=' * 60}")

    print(f"\n  {'Price':>12}  {'Size':>12}  {'Orders':>8}")
    print(f"  {'-' * 36}")

    # Display top 5 ask levels (reversed so lowest ask is closest to spread)
    for level in reversed(asks[:5]):
        px = float(level["px"])
        sz = float(level["sz"])
        n = level.get("n", "")
        print(f"  ${px:>11,.2f}  {sz:>12.4f}  {n:>8}  ASK")

    print(f"  {'--- spread ---':^36}")

    # Display top 5 bid levels
    for level in bids[:5]:
        px = float(level["px"])
        sz = float(level["sz"])
        n = level.get("n", "")
        print(f"  ${px:>11,.2f}  {sz:>12.4f}  {n:>8}  BID")


async def main():
    ws_url = os.getenv("WEBSOCKET_URL")
    if not ws_url:
        print("Set WEBSOCKET_URL in your .env file")
        return

    async with websockets.connect(ws_url) as websocket:
        # Subscribe to L2 order book
        subscribe_message = {
            "method": "subscribe",
            "subscription": {
                "type": "l2Book",
                "coin": COIN,
                "nLevels": N_LEVELS,
                "nSigFigs": N_SIG_FIGS,
            },
        }
        await websocket.send(json.dumps(subscribe_message))
        print(f"Subscribed to {COIN} L2 order book ({N_LEVELS} levels)")

        async for message in websocket:
            data = json.loads(message)
            channel = data.get("channel")

            if channel == "l2Book":
                display_orderbook(data)
            elif channel == "subscriptionResponse":
                print(f"Subscription confirmed: {data}")

if __name__ == "__main__":
    asyncio.run(main())

Run the script and you should see a live order book updating in your terminal. Compare the best bid and best ask against app.hyperliquid.xyz to verify the data matches.

A few things to note about this pattern:

  • Environment injection keeps secrets out of source code. The WEBSOCKET_URL is loaded from .env and never hardcoded.
  • Message routing by channel is essential once you add more subscription types. The channel field in each response tells you whether you received l2Book, trades, subscriptionResponse, or another type.
  • nLevels and nSigFigs are tuneable. Start with 20 levels for development and increase to 100 once you have profiled memory usage in your application.
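Once several channels are in play, an explicit dispatch table keeps the routing readable. A minimal sketch follows; the handler bodies are stand-ins for your own logic:

```python
def handle_l2_book(data):
    """Stand-in handler -- swap in display_orderbook or your own logic."""
    return ("l2Book", data["data"]["coin"])


def handle_trades(data):
    return ("trades", data["data"]["coin"])


HANDLERS = {
    "l2Book": handle_l2_book,
    "trades": handle_trades,
}


def dispatch(message):
    """Route a decoded message to its channel handler; ignore unknown channels."""
    handler = HANDLERS.get(message.get("channel"))
    return handler(message) if handler is not None else None
```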

This pattern is expanded in examples 01 and 02 of the companion repo, which cover trade subscriptions and more detailed order book visualization.

Track multiple coins

A single WebSocket connection can carry subscriptions for multiple coins and data types simultaneously. Rather than opening one connection per coin, you send multiple subscribe messages over the same socket and route incoming messages to per-coin handlers.

The CoinTracker class below maintains a rolling window of recent trades for each coin. It calculates VWAP (volume-weighted average price) and buy/sell ratio to give you a quick read on market conditions across several instruments at once.

Python
import asyncio
import json
import os
from collections import deque
from datetime import datetime, timezone
from dotenv import load_dotenv
import websockets

load_dotenv()

COINS = ["BTC", "ETH", "SOL"]


class CoinTracker:
    """Track trading metrics for a specific coin."""

    def __init__(self, coin):
        self.coin = coin
        self.trades = deque(maxlen=50)
        self.total_volume = 0.0
        self.buy_volume = 0.0
        self.sell_volume = 0.0
        self.trade_count = 0
        self.latest_price = 0.0

    def add_trade(self, price, size, side):
        self.trades.append({"price": price, "size": size, "side": side})
        self.total_volume += size
        self.trade_count += 1
        self.latest_price = price

        if side == "B":
            self.buy_volume += size
        else:
            self.sell_volume += size

    def get_vwap(self):
        if not self.trades:
            return 0.0
        total_value = sum(t["price"] * t["size"] for t in self.trades)
        total_size = sum(t["size"] for t in self.trades)
        return total_value / total_size if total_size > 0 else 0.0

    def get_buy_sell_ratio(self):
        if self.sell_volume == 0:
            return float("inf") if self.buy_volume > 0 else 0.0
        return self.buy_volume / self.sell_volume


class MultiCoinTracker:
    """Aggregate trackers for multiple coins and display a dashboard."""

    def __init__(self, coins):
        self.trackers = {coin: CoinTracker(coin) for coin in coins}
        self.update_count = 0
        self.display_interval = 10  # Show dashboard every N trades

    def process_trade(self, coin, trades_data):
        if coin not in self.trackers:
            return

        tracker = self.trackers[coin]
        for trade in trades_data:
            price = float(trade["px"])
            size = float(trade["sz"])
            side = trade.get("side", "?")
            tracker.add_trade(price, size, side)

        self.update_count += 1
        if self.update_count % self.display_interval == 0:
            self.display_dashboard()

    def display_dashboard(self):
        now = datetime.now(timezone.utc).strftime("%H:%M:%S")
        print(f"\n{'=' * 70}")
        print(f"  Multi-Coin Dashboard  |  {now} UTC")
        print(f"{'=' * 70}")
        print(f"  {'Coin':<6} {'Price':>12} {'VWAP':>12} {'Volume':>10} {'B/S':>8} {'Trades':>8}")
        print(f"  {'-' * 62}")

        for coin in sorted(
            self.trackers, key=lambda c: self.trackers[c].total_volume, reverse=True
        ):
            t = self.trackers[coin]
            if t.trade_count == 0:
                continue
            ratio = t.get_buy_sell_ratio()
            ratio_str = f"{ratio:.2f}" if ratio != float("inf") else "inf"
            print(
                f"  {coin:<6} ${t.latest_price:>11,.2f} ${t.get_vwap():>11,.2f}"
                f" {t.total_volume:>10.4f} {ratio_str:>8} {t.trade_count:>8}"
            )


async def main():
    ws_url = os.getenv("WEBSOCKET_URL")
    if not ws_url:
        print("Set WEBSOCKET_URL in your .env file")
        return

    tracker = MultiCoinTracker(COINS)

    async with websockets.connect(ws_url) as websocket:
        # Subscribe to trades for each coin
        for coin in COINS:
            sub = {
                "method": "subscribe",
                "subscription": {"type": "trades", "coin": coin},
            }
            await websocket.send(json.dumps(sub))
            print(f"Subscribed to {coin} trades")

        async for message in websocket:
            data = json.loads(message)
            channel = data.get("channel")

            if channel == "trades":
                coin = data["data"]["coin"]
                tracker.process_trade(coin, data["data"]["trades"])
            elif channel == "subscriptionResponse":
                print(f"Confirmed: {data['data']['subscription']['coin']}")


if __name__ == "__main__":
    try:
        asyncio.run(main())
    except KeyboardInterrupt:
        print("\nShutting down.")

The MultiCoinTracker prints a dashboard every 10 trades, ranked by volume. You can adjust display_interval or replace the print logic with a push to your monitoring system.

Key design decisions in this pattern:

  • Fixed-size deques (maxlen=50) bound memory usage. Even if you track dozens of coins, each tracker stores only the 50 most recent trades.
  • One connection, many subscriptions. Sending multiple subscribe messages over a single WebSocket is more efficient than opening parallel connections. It also keeps you well within Hyperliquid's subscription limits.
  • Routing by coin field. The data["data"]["coin"] field tells you which tracker should receive each message. This dispatcher pattern scales cleanly to 20 or more instruments.

This pattern is expanded in examples 03 and 04 of the companion repo, which add L2 book subscriptions alongside trades and build a richer per-coin analytics view.

Access L4 deep order book data

L2 data tells you the aggregate size at each price level. L4 data tells you who placed each order, at what price, and with what size. This level of visibility enables market microstructure analysis that is not possible with aggregated depth alone.

To subscribe to L4 data, change the subscription type to l4Book. No nLevels or nSigFigs parameters are needed.

Python
import asyncio
import json
import os
from collections import defaultdict
from dotenv import load_dotenv
import websockets

load_dotenv()

COIN = "BTC"


class L4OrderBook:
    """Maintain an order book from L4 individual-order data."""

    def __init__(self):
        self.orders = {}  # oid -> {user, limitPx, sz, side}
        self.bids = defaultdict(list)  # price -> [oid, ...]
        self.asks = defaultdict(list)  # price -> [oid, ...]

    def process_snapshot(self, levels):
        """Load a full snapshot, replacing any existing state."""
        self.orders.clear()
        self.bids.clear()
        self.asks.clear()

        for side_idx, side_name in enumerate(["bids", "asks"]):
            if side_idx >= len(levels):
                continue
            book_side = self.bids if side_name == "bids" else self.asks
            for order in levels[side_idx]:
                oid = order.get("oid", order.get("tid", ""))
                px = order["px"]
                sz = order["sz"]
                user = order.get("user", "unknown")
                self.orders[oid] = {
                    "user": user,
                    "limitPx": px,
                    "sz": sz,
                    "side": side_name,
                }
                book_side[px].append(oid)

    def get_stats(self):
        """Calculate market microstructure statistics."""
        bid_prices = sorted(
            [float(p) for p in self.bids.keys() if self.bids[p]], reverse=True
        )
        ask_prices = sorted(
            [float(p) for p in self.asks.keys() if self.asks[p]]
        )

        stats = {
            "total_orders": len(self.orders),
            "bid_levels": len(bid_prices),
            "ask_levels": len(ask_prices),
            "unique_users": len(set(o["user"] for o in self.orders.values())),
        }

        if bid_prices and ask_prices:
            best_bid = bid_prices[0]
            best_ask = ask_prices[0]
            stats["best_bid"] = best_bid
            stats["best_ask"] = best_ask
            stats["spread"] = best_ask - best_bid
            stats["spread_bps"] = ((best_ask - best_bid) / best_ask) * 10000

        # Buy/sell size ratio
        bid_size = sum(
            float(self.orders[oid]["sz"])
            for oids in self.bids.values()
            for oid in oids
            if oid in self.orders
        )
        ask_size = sum(
            float(self.orders[oid]["sz"])
            for oids in self.asks.values()
            for oid in oids
            if oid in self.orders
        )
        stats["bid_total_size"] = bid_size
        stats["ask_total_size"] = ask_size
        if ask_size > 0:
            stats["bid_ask_ratio"] = bid_size / ask_size

        return stats


def display_stats(stats):
    """Print a summary of order book microstructure."""
    print(f"\n{'=' * 60}")
    print(f"  {COIN} L4 Order Book Summary")
    print(f"{'=' * 60}")
    print(f"  Total orders:    {stats['total_orders']}")
    print(f"  Unique wallets:  {stats['unique_users']}")
    print(f"  Bid levels:      {stats['bid_levels']}")
    print(f"  Ask levels:      {stats['ask_levels']}")

    if "spread" in stats:
        print(f"  Best bid:        ${stats['best_bid']:,.2f}")
        print(f"  Best ask:        ${stats['best_ask']:,.2f}")
        print(f"  Spread:          ${stats['spread']:,.2f} ({stats['spread_bps']:.1f} bps)")

    if "bid_ask_ratio" in stats:
        print(f"  Bid size total:  {stats['bid_total_size']:.4f}")
        print(f"  Ask size total:  {stats['ask_total_size']:.4f}")
        print(f"  Bid/Ask ratio:   {stats['bid_ask_ratio']:.3f}")


async def main():
    ws_url = os.getenv("WEBSOCKET_URL")
    if not ws_url:
        print("Set WEBSOCKET_URL in your .env file")
        return

    # L4 messages can be large; increase the receive limit
    async with websockets.connect(ws_url, max_size=10 * 1024 * 1024) as websocket:
        subscribe_message = {
            "method": "subscribe",
            "subscription": {
                "type": "l4Book",
                "coin": COIN,
            },
        }
        await websocket.send(json.dumps(subscribe_message))
        print(f"Subscribed to {COIN} L4 order book")

        book = L4OrderBook()
        msg_count = 0

        async for message in websocket:
            data = json.loads(message)
            channel = data.get("channel")

            if channel == "l4Book":
                book_data = data.get("data", {})
                levels = book_data.get("levels", [])
                if levels:
                    book.process_snapshot(levels)

                msg_count += 1
                if msg_count % 5 == 0:
                    display_stats(book.get_stats())

            elif channel == "subscriptionResponse":
                print(f"Subscription confirmed: {data}")


if __name__ == "__main__":
    try:
        asyncio.run(main())
    except KeyboardInterrupt:
        print("\nShutting down.")

A few important differences from L2:

  • Message size. L4 snapshots for popular coins like BTC can be several megabytes. The max_size=10 * 1024 * 1024 argument raises the websockets library's default 1 MiB message limit, which would otherwise close the connection when an oversized snapshot arrives.
  • Snapshot vs. diff processing. The initial message contains a full snapshot of all orders. Subsequent messages may contain diffs (book_diffs) with new, modified, and removed orders. The example above processes snapshots. For production use, implement incremental diff handling to maintain state without re-parsing the entire book on each update.
  • Unique wallet addresses. The user field on each order lets you track which participants are providing liquidity at which price levels. This is data that is simply not available through the public API.
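The incremental diff handling mentioned above can be sketched roughly as follows. The book_diffs schema is not shown in this guide, so the "action", "order", and "oid" field names below are assumptions to adapt to whatever your server actually emits:

```python
from collections import defaultdict


def make_book():
    """Minimal L4 state: order details keyed by oid, plus per-price oid lists."""
    return {"orders": {}, "bids": defaultdict(list), "asks": defaultdict(list)}


def remove_order(book, oid):
    """Drop an order from both the oid index and its price level, if present."""
    order = book["orders"].pop(oid, None)
    if order is not None:
        book[order["side"]][order["limitPx"]].remove(oid)


def apply_diff(book, diff):
    """Apply a single (hypothetical) diff entry to the book state."""
    action = diff.get("action")
    if action == "remove":
        remove_order(book, diff["oid"])
    elif action in ("new", "modify"):
        order = diff["order"]
        oid = order["oid"]
        remove_order(book, oid)  # A modify replaces any stale copy first
        side = "bids" if order.get("side") == "B" else "asks"
        book["orders"][oid] = {
            "user": order.get("user", "unknown"),
            "limitPx": order["px"],
            "sz": order["sz"],
            "side": side,
        }
        book[side][order["px"]].append(oid)
```

The point of the sketch is the bookkeeping: every mutation must update both the oid index and the per-price lists, or the stats drift out of sync with reality.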

When to use L4 vs. L2:

  • Use L2 when you need aggregate depth for spread monitoring, display, or general price analysis. L2 messages are smaller and arrive faster.
  • Use L4 when you need to track individual orders, analyze order flow by wallet, detect spoofing or layering patterns, or build per-participant liquidity maps.

This pattern is expanded in example 07 of the companion repo, which adds incremental diff processing and richer per-order display.

Handle reconnection gracefully

WebSocket connections drop. Servers restart for maintenance, networks have transient failures, and cloud infrastructure occasionally hiccups. A production application must detect disconnections, wait an appropriate interval, reconnect, and resubscribe to all feeds without manual intervention.

The pattern below wraps the connection lifecycle in a retry loop with exponential backoff. The backoff starts at 1 second and doubles on each consecutive failure, capping at 60 seconds. A successful connection resets the backoff timer.

Python
import asyncio
import json
import logging
import os
from datetime import datetime, timezone
from dotenv import load_dotenv
import websockets

load_dotenv()

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s [%(levelname)s] %(message)s",
)
logger = logging.getLogger(__name__)


class RobustOrderBookClient:
    """WebSocket client with automatic reconnection and resubscription."""

    def __init__(self, ws_url):
        self.ws_url = ws_url
        self.subscriptions = []
        self.running = True
        self.reconnect_delay = 1
        self.max_delay = 60
        self.connection_count = 0

    def add_subscription(self, subscription):
        """Register a subscription to be sent on every connection."""
        self.subscriptions.append(
            {"method": "subscribe", "subscription": subscription}
        )

    async def _handle_message(self, data):
        """Process an incoming message. Replace with your own logic."""
        channel = data.get("channel")
        if channel == "trades":
            coin = data["data"]["coin"]
            for trade in data["data"]["trades"]:
                side = trade.get("side", "?")
                px = float(trade["px"])
                sz = float(trade["sz"])
                logger.info(f"Trade: {coin} {side} {sz:.4f} @ ${px:,.2f}")
        elif channel == "l2Book":
            levels = data["data"].get("levels", [[], []])
            if len(levels) >= 2 and levels[0] and levels[1]:
                bid = float(levels[0][0]["px"])
                ask = float(levels[1][0]["px"])
                spread = ask - bid
                logger.info(f"Book: bid=${bid:,.2f} ask=${ask:,.2f} spread=${spread:,.2f}")
        elif channel == "subscriptionResponse":
            logger.info(f"Subscription confirmed: {data.get('data', {})}")

    async def run(self):
        """Main loop: connect, subscribe, listen, reconnect on failure."""
        while self.running:
            try:
                logger.info(
                    f"Connecting to {self.ws_url} "
                    f"(attempt {self.connection_count + 1})"
                )
                async with websockets.connect(
                    self.ws_url, open_timeout=10, close_timeout=5
                ) as websocket:
                    self.connection_count += 1
                    self.reconnect_delay = 1  # Reset backoff on success
                    logger.info("Connected. Sending subscriptions...")

                    for sub in self.subscriptions:
                        await websocket.send(json.dumps(sub))

                    async for message in websocket:
                        if not self.running:
                            break
                        data = json.loads(message)
                        await self._handle_message(data)

            except websockets.ConnectionClosedError as e:
                logger.warning(
                    f"Connection closed: {e}. "
                    f"Reconnecting in {self.reconnect_delay}s..."
                )
            except websockets.InvalidHandshake as e:
                # Covers rejected handshakes (e.g. bad credentials) across
                # both old and new versions of the websockets library.
                logger.error(
                    f"Server rejected connection: {e}. "
                    f"Retrying in {self.reconnect_delay}s..."
                )
            except OSError as e:
                logger.error(
                    f"Network error: {e}. Retrying in {self.reconnect_delay}s..."
                )
            except Exception as e:
                logger.error(
                    f"Unexpected error: {e}. Retrying in {self.reconnect_delay}s..."
                )

            if self.running:
                await asyncio.sleep(self.reconnect_delay)
                self.reconnect_delay = min(
                    self.reconnect_delay * 2, self.max_delay
                )

    def stop(self):
        self.running = False


async def main():
    ws_url = os.getenv("WEBSOCKET_URL")
    if not ws_url:
        print("Set WEBSOCKET_URL in your .env file")
        return

    client = RobustOrderBookClient(ws_url)

    # Register all subscriptions. These are replayed on every reconnect.
    client.add_subscription({"type": "trades", "coin": "BTC"})
    client.add_subscription({
        "type": "l2Book",
        "coin": "ETH",
        "nLevels": 20,
        "nSigFigs": 5,
    })

    try:
        await client.run()
    finally:
        client.stop()


if __name__ == "__main__":
    try:
        asyncio.run(main())
    except KeyboardInterrupt:
        logger.info("Shut down cleanly.")

The critical details in this pattern:

  • Subscription replay. The subscriptions list is stored on the client instance, not on the connection. After every reconnect, each subscription message is re-sent automatically. You never need to track which subscriptions are active.
  • Backoff cap at 60 seconds. Doubling without a cap would eventually produce reconnection delays measured in hours. The 60 second ceiling keeps your application responsive after extended outages.
  • Broad exception handling. Network errors (OSError), rejected handshakes (including authentication failures), and unexpected exceptions all funnel into the same retry path. Log the error type so you can distinguish transient network issues from credential problems during post-incident review.
  • Clean shutdown. Setting self.running = False before breaking the message loop prevents the retry logic from immediately reconnecting when you intentionally stop the client.
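The backoff schedule those rules produce is easy to sanity-check in isolation:

```python
def backoff_delays(initial=1, cap=60, attempts=8):
    """Yield successive reconnect delays: double after each failure, capped at `cap`."""
    delay = initial
    for _ in range(attempts):
        yield delay
        delay = min(delay * 2, cap)


print(list(backoff_delays()))  # [1, 2, 4, 8, 16, 32, 60, 60]
```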

This pattern is expanded in example 05 of the companion repo, which adds heartbeat monitoring and more granular connection state tracking.

Production tips

Once the core connection, parsing, and reconnection logic is working, a few operational considerations will help you run reliably at scale.

Subscription limits. Hyperliquid enforces a per-user limit of 1,000 active subscriptions. If you need to track more than a few hundred coins across L2, L4, and trade feeds, split the load across multiple connections. Each connection can carry a distinct subset of subscriptions.
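One way to sketch that split, assuming you simply shard coins evenly across connections:

```python
def partition(coins, per_connection):
    """Split a coin list into chunks, one chunk per WebSocket connection."""
    return [coins[i:i + per_connection] for i in range(0, len(coins), per_connection)]


# With three feed types per coin (trades, l2Book, l4Book) and a
# 1,000-subscription cap, around 300 coins per connection leaves headroom.
groups = partition(["BTC", "ETH", "SOL", "AVAX", "DOGE"], 2)
# groups -> [["BTC", "ETH"], ["SOL", "AVAX"], ["DOGE"]]
```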

Message rate handling. High-volume coins like BTC generate frequent updates. Move CPU-intensive processing (analytics, storage, serialization) off the main event loop. Use asyncio.Queue to decouple message ingestion from downstream consumers, and set a bounded queue size to apply backpressure when consumers fall behind.
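A minimal sketch of that decoupling with a bounded asyncio.Queue; the message contents here are simulated rather than read from a socket:

```python
import asyncio


async def ingest(queue, n_messages):
    """Producer: stands in for the WebSocket read loop pushing decoded messages."""
    for i in range(n_messages):
        await queue.put({"seq": i})  # Blocks when the queue is full -> backpressure
    await queue.put(None)  # Sentinel: stream finished


async def process(queue, out):
    """Consumer: stand-in for analytics/storage work kept off the read loop."""
    while True:
        msg = await queue.get()
        queue.task_done()
        if msg is None:
            return
        out.append(msg["seq"])


async def pipeline(n_messages=100):
    queue = asyncio.Queue(maxsize=10)  # Bounded: the backpressure point
    out = []
    await asyncio.gather(ingest(queue, n_messages), process(queue, out))
    return out


processed = asyncio.run(pipeline())
```

Because the queue is bounded, a slow consumer makes the producer's `put` wait instead of letting memory grow without limit.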

VWAP calculation. The CoinTracker class from the multi-coin section demonstrates a rolling VWAP over the most recent 50 trades. For production use, consider time-windowed VWAP (for example, 1-minute or 5-minute windows) instead of a fixed trade count. This gives more consistent results across coins with different trade frequencies.
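A time-windowed variant might look like the sketch below; the 60-second default and the trade fields are illustrative:

```python
import time
from collections import deque


class WindowedVWAP:
    """VWAP over a sliding time window rather than a fixed trade count."""

    def __init__(self, window_seconds=60.0):
        self.window = window_seconds
        self.trades = deque()  # (timestamp, price, size) in arrival order

    def add_trade(self, price, size, ts=None):
        self.trades.append((time.time() if ts is None else ts, price, size))

    def vwap(self, now=None):
        now = time.time() if now is None else now
        # Evict trades that have aged out of the window
        while self.trades and self.trades[0][0] < now - self.window:
            self.trades.popleft()
        total_size = sum(sz for _, _, sz in self.trades)
        if total_size == 0:
            return 0.0
        return sum(px * sz for _, px, sz in self.trades) / total_size
```

Swapping something like this into CoinTracker in place of the fixed-length deque gives comparable readings across coins with very different trade rates.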

Monitoring. Track three metrics at minimum: messages received per second (throughput), time between message receipt and processing completion (consumer lag), and reconnection events with their error codes. Emit these to Prometheus, Datadog, or your preferred observability platform. Reconnection spikes often correlate with upstream maintenance windows that Dwellir communicates in advance.

Multiple connections. For latency-sensitive applications, separate your L4 subscription from L2 and trade subscriptions. L4 snapshots are large and can momentarily delay processing of smaller messages on the same connection. Dedicated connections per data type keep your fastest feeds responsive.

Message size for L4. L4 order book snapshots for popular coins can reach several megabytes. Set max_size on your WebSocket connection to at least 10 MB. If you are storing L4 data for replay or audit, compress with LZ4 or zstd for 5-10x size reduction without meaningful decompression overhead.

Next steps

You now have the building blocks for a production order book application on Hyperliquid: L2 and L4 parsing, multi-coin tracking, and resilient reconnection.

To go further:

  • Builder codes guide covers referral and fee-sharing integration for Hyperliquid applications.
  • Hyperliquid gRPC documentation describes the GetOrderBookSnapshot RPC, useful for reconciling WebSocket state after long outages or initializing a fresh order book.
  • The companion repository contains all 7 progressive Python examples, including trade subscriptions, multiple subscription routing, data analysis with market metrics, and incremental L4 diff processing. Work through the full set for deeper coverage of each topic.

For HyperEVM RPC access, sign up at the Dwellir dashboard. For order book server credentials with L4 data, contact the Dwellir team.