
StreamOrderbookSnapshots

Stream continuous order book snapshots starting from a position, providing real-time access to every individual open order across all markets on Hyperliquid. Each snapshot contains complete order metadata including order IDs, timestamps, trigger conditions, and child orders.

Dedicated Node Required

This is a premium endpoint available only with dedicated nodes. It is not available on shared infrastructure. Contact support to provision a dedicated node.

Full Code Examples

Clone our gRPC Code Examples Repository for complete, runnable implementations in Go, Python, and Node.js.

When to Use This Method#

StreamOrderbookSnapshots is essential for:

  • Market Making - Monitor individual order changes and adjust quotes in real-time
  • Trading Algorithms - Access live order-level data for execution strategies
  • Order Flow Analysis - Track individual orders appearing and disappearing across snapshots
  • Whale Watching - Detect large orders and clusters of trigger orders in real time
  • Risk Management - Monitor market conditions with full order granularity

Method Signature#

```protobuf
rpc StreamOrderbookSnapshots(Position) returns (stream OrderBookSnapshot) {}
```

Request Message#

```protobuf
message Position {
  // Leave all fields unset or zero to target the latest data.
  oneof position {
    int64 timestamp_ms = 1; // ms since Unix epoch, inclusive
    int64 block_height = 2; // block height, inclusive
  }
}
```

The Position message allows flexible stream starting points:

  • timestamp_ms: Start streaming from a specific time (milliseconds since Unix epoch)
  • block_height: Start streaming from a specific block height
  • Empty/zero: Start streaming from the latest order book snapshot
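
When starting from a specific time, `timestamp_ms` is plain milliseconds since the Unix epoch. A small Python helper can compute it (the `to_timestamp_ms` name is ours for illustration, not part of the generated API):

```python
from datetime import datetime, timezone

def to_timestamp_ms(dt: datetime) -> int:
    """Convert a datetime to milliseconds since the Unix epoch,
    suitable for the Position.timestamp_ms field."""
    return int(dt.timestamp() * 1000)

# Start streaming from 2025-01-01 00:00:00 UTC
start = to_timestamp_ms(datetime(2025, 1, 1, tzinfo=timezone.utc))
```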

Response Stream#

```protobuf
message OrderBookSnapshot {
  // JSON-encoded Hyperliquid order book snapshot.
  bytes data = 1;
}
```

Each streamed message contains a full order book snapshot. These are large responses (typically ~129 MB each) containing every individual open order across all markets.

Large Message Size

Configure your gRPC client with a max receive message length of at least 150 MB (recommended 500 MB) to handle each snapshot message.

Top-Level Structure#

```json
{
  "block": "908730000",
  "timestamp": 1772276651839,
  "data": [ ... ]
}
```
| Field | Type | Description |
| --- | --- | --- |
| `block` | string | Block height at which this snapshot was taken |
| `timestamp` | integer | Unix timestamp in milliseconds when the snapshot was taken |
| `data` | array | Array of market entries (one per trading pair); ~656 markets in a typical snapshot |
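
Decoding a snapshot's `data` bytes is ordinary JSON parsing. A minimal Python sketch against the top-level shape above (the sample payload is abbreviated and illustrative, not real market data):

```python
import json

# Abbreviated sample payload in the documented top-level shape
raw = b'{"block": "908730000", "timestamp": 1772276651839, "data": [["BTC", [[], []]]]}'

snapshot = json.loads(raw)
block = int(snapshot["block"])   # block height arrives as a string
ts_ms = snapshot["timestamp"]    # integer milliseconds since the Unix epoch
markets = snapshot["data"]       # one entry per trading pair

print(block, ts_ms, len(markets))
```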

Market Entry Structure#

Each element in data is a 2-element array (tuple):

```json
[
  "BTC",
  [
    [ ...bid orders... ],
    [ ...ask orders... ]
  ]
]
```
| Index | Type | Description |
| --- | --- | --- |
| `[0]` | string | Coin/asset symbol. Perp markets use ticker names (e.g. `"BTC"`, `"ETH"`). Spot markets use `@`-prefixed numeric IDs (e.g. `"@1"`, `"@105"`). Pre-market stocks use an `xyz:` prefix (e.g. `"xyz:TSLA"`) |
| `[1]` | array[2] | Two sub-arrays: `[0]` = bid orders (sorted descending by price), `[1]` = ask orders (sorted ascending by price) |
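
Because each market entry is a plain 2-element array, it unpacks naturally in Python. A sketch with a hand-built sample entry (abbreviated order objects, for illustration only):

```python
# Sample market entry: [coin, [bids, asks]], orders abbreviated to two fields
market = ["BTC", [
    [{"limitPx": "84500.0", "sz": "0.5"}],   # bids, sorted descending by price
    [{"limitPx": "84600.0", "sz": "1.2"}],   # asks, sorted ascending by price
]]

coin, (bids, asks) = market
best_bid = bids[0]["limitPx"] if bids else None  # first bid = highest price
best_ask = asks[0]["limitPx"] if asks else None  # first ask = lowest price
```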

Order Object#

Each order in the bid/ask arrays is an individual order with full metadata. See the GetOrderBookSnapshot Field Reference for the complete field-by-field breakdown.

```json
{
  "coin": "BTC",
  "side": "B",
  "limitPx": "84500.0",
  "sz": "0.5",
  "oid": 333003526755,
  "timestamp": 1772276628506,
  "triggerCondition": "N/A",
  "isTrigger": false,
  "triggerPx": "0.0",
  "children": [],
  "isPositionTpsl": false,
  "reduceOnly": false,
  "orderType": "Limit",
  "origSz": "0.5",
  "tif": "Alo",
  "cloid": "0x00000000000000000000000000000318"
}
```
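
Note that prices and sizes arrive as decimal strings, not numbers. A sketch of deriving numerics from an order like the sample above (`float` is used for brevity; `decimal.Decimal` avoids rounding drift in production):

```python
order = {"limitPx": "84500.0", "sz": "0.5", "origSz": "0.5"}  # abbreviated sample

px = float(order["limitPx"])
sz = float(order["sz"])                 # remaining size
notional = px * sz                      # remaining notional value
filled = float(order["origSz"]) - sz    # size already filled
```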

Coin Symbol Conventions#

| Pattern | Type | Examples |
| --- | --- | --- |
| Plain ticker | Perpetual futures | `"BTC"`, `"ETH"`, `"AAVE"`, `"ARB"` |
| `@` + number | Spot markets | `"@1"`, `"@10"`, `"@105"` |
| `xyz:` + ticker | Pre-market stocks | `"xyz:TSLA"`, `"xyz:TSM"`, `"xyz:SOFTBANK"` |
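
A simple Python classifier for these conventions (the `classify_symbol` name and return labels are ours, chosen for illustration):

```python
def classify_symbol(symbol: str) -> str:
    """Map a coin symbol to its market type using the documented prefixes."""
    if symbol.startswith("@"):
        return "spot"
    if symbol.startswith("xyz:"):
        return "pre-market stock"
    return "perp"
```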

Implementation Examples#

```go
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"log"
	"os"
	"strings"
	"time"

	pb "hyperliquid-grpc-client/api/v2"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials"
	"google.golang.org/grpc/metadata"
)

type Snapshot struct {
	Block     string          `json:"block"`
	Timestamp int64           `json:"timestamp"`
	Data      json.RawMessage `json:"data"`
}

func streamOrderbookSnapshots() {
	endpoint := os.Getenv("HYPERLIQUID_ENDPOINT")
	apiKey := os.Getenv("API_KEY")

	if endpoint == "" {
		log.Fatal("Error: HYPERLIQUID_ENDPOINT environment variable is required.")
	}

	if apiKey == "" {
		log.Fatal("Error: API_KEY environment variable is required.")
	}

	fmt.Println("Hyperliquid Go gRPC Client - Stream Order Book Snapshots")
	fmt.Println("========================================================")
	fmt.Printf("Endpoint: %s\n\n", endpoint)

	// Set up TLS connection
	creds := credentials.NewTLS(nil)

	opts := []grpc.DialOption{
		grpc.WithTransportCredentials(creds),
		grpc.WithDefaultCallOptions(
			grpc.MaxCallRecvMsgSize(500 * 1024 * 1024), // 500 MB max
		),
	}

	fmt.Println("Connecting to gRPC server...")
	conn, err := grpc.NewClient(endpoint, opts...)
	if err != nil {
		log.Fatalf("Failed to connect: %v", err)
	}
	defer conn.Close()

	client := pb.NewHyperliquidL1GatewayClient(conn)
	fmt.Printf("Connected successfully!\n\n")

	// Attach the API key to the outgoing request metadata
	ctx := metadata.AppendToOutgoingContext(context.Background(), "x-api-key", apiKey)

	// An empty Position means "start from the latest snapshot"
	request := &pb.Position{}

	fmt.Println("Starting order book snapshots stream...")
	fmt.Printf("Press Ctrl+C to stop streaming\n\n")

	stream, err := client.StreamOrderbookSnapshots(ctx, request)
	if err != nil {
		log.Fatalf("Failed to start stream: %v", err)
	}

	snapshotCount := 0
	for {
		response, err := stream.Recv()
		if err != nil {
			fmt.Printf("Stream ended: %v\n", err)
			break
		}

		snapshotCount++
		fmt.Printf("\n===== SNAPSHOT #%d =====\n", snapshotCount)
		fmt.Printf("Response size: %d bytes\n", len(response.Data))

		processSnapshot(response.Data, snapshotCount)

		fmt.Println("\n" + strings.Repeat("-", 50))
	}

	fmt.Printf("\nTotal snapshots received: %d\n", snapshotCount)
}

func processSnapshot(data []byte, snapshotNum int) {
	var snapshot Snapshot
	if err := json.Unmarshal(data, &snapshot); err != nil {
		fmt.Printf("Failed to parse JSON: %v\n", err)
		return
	}

	fmt.Printf("Block: %s\n", snapshot.Block)

	timestamp := time.UnixMilli(snapshot.Timestamp)
	fmt.Printf("Time: %s\n", timestamp.UTC().Format("2006-01-02 15:04:05 UTC"))

	// Parse the market data
	var markets []json.RawMessage
	if err := json.Unmarshal(snapshot.Data, &markets); err != nil {
		fmt.Printf("Failed to parse markets: %v\n", err)
		return
	}

	fmt.Printf("Total markets: %d\n", len(markets))
}

func main() {
	streamOrderbookSnapshots()
}
```

Common Use Cases#

1. Order Flow Tracking#

Compare consecutive snapshots to detect new, modified, and cancelled orders:

```python
class OrderFlowTracker:
    def __init__(self):
        self.previous_orders = {}  # oid -> order

    def track(self, snapshot):
        """Compare snapshots to detect order changes"""
        current_orders = {}

        for market in snapshot['data']:
            bids = market[1][0]
            asks = market[1][1]

            for order in bids + asks:
                current_orders[order['oid']] = order

        # Detect changes
        new_oids = set(current_orders) - set(self.previous_orders)
        removed_oids = set(self.previous_orders) - set(current_orders)
        # Same oid in both snapshots but with a different size: partial
        # fill or modification
        modified_oids = {
            oid for oid in set(current_orders) & set(self.previous_orders)
            if current_orders[oid]['sz'] != self.previous_orders[oid]['sz']
        }

        for oid in new_oids:
            order = current_orders[oid]
            print(f'NEW: {order["coin"]} {order["side"]} '
                  f'{order["sz"]} @ {order["limitPx"]} '
                  f'({order["orderType"]})')

        for oid in removed_oids:
            order = self.previous_orders[oid]
            print(f'REMOVED: {order["coin"]} {order["side"]} '
                  f'{order["sz"]} @ {order["limitPx"]}')

        for oid in modified_oids:
            order = current_orders[oid]
            print(f'MODIFIED: {order["coin"]} {order["side"]} '
                  f'{self.previous_orders[oid]["sz"]} -> {order["sz"]} '
                  f'@ {order["limitPx"]}')

        self.previous_orders = current_orders
```

2. Trigger Order Monitor#

Track stop-loss and take-profit orders clustering around price levels:

```javascript
function analyzeTriggerOrders(snapshot, targetCoin) {
  for (const market of snapshot.data) {
    if (market[0] !== targetCoin) continue;

    const allOrders = [...market[1][0], ...market[1][1]];
    const triggerOrders = allOrders.filter(o => o.isTrigger);

    // Group by trigger price
    const triggerLevels = {};
    for (const order of triggerOrders) {
      const px = order.triggerPx;
      if (!triggerLevels[px]) {
        triggerLevels[px] = { count: 0, totalSz: 0, types: [] };
      }
      triggerLevels[px].count++;
      triggerLevels[px].totalSz += parseFloat(order.sz);
      triggerLevels[px].types.push(order.orderType);
    }

    // Report the largest trigger clusters by total size
    const sorted = Object.entries(triggerLevels)
      .sort((a, b) => b[1].totalSz - a[1].totalSz);

    console.log(`\n${targetCoin} Trigger Order Clusters:`);
    for (const [px, data] of sorted.slice(0, 5)) {
      console.log(`  ${px}: ${data.count} orders, ` +
        `size ${data.totalSz.toFixed(2)}`);
    }
  }
}
```

3. Large Order Detection#

Monitor for whale-sized orders appearing in the stream:

```go
import (
	"encoding/json"
	"log"
	"strconv"
)

func detectLargeOrders(data []byte, threshold float64) {
	var snapshot struct {
		Block     string          `json:"block"`
		Timestamp int64           `json:"timestamp"`
		Data      [][]interface{} `json:"data"`
	}

	if err := json.Unmarshal(data, &snapshot); err != nil {
		return
	}

	for _, market := range snapshot.Data {
		coin := market[0].(string)
		sides := market[1].([]interface{})

		for _, side := range sides {
			orders := side.([]interface{})
			for _, o := range orders {
				order := o.(map[string]interface{})
				sz, _ := strconv.ParseFloat(order["sz"].(string), 64)
				if sz >= threshold {
					log.Printf("LARGE ORDER: %s %s %.2f @ %s (%s)",
						coin, order["side"], sz,
						order["limitPx"], order["orderType"])
				}
			}
		}
	}
}
```

Error Handling and Reconnection#

```javascript
class RobustOrderbookStreamer {
  constructor(endpoint, apiKey) {
    this.endpoint = endpoint;
    this.apiKey = apiKey;
    this.maxRetries = 5;
    this.retryDelay = 1000;
    this.currentRetries = 0;
  }

  async startStreamWithRetry() {
    while (this.currentRetries < this.maxRetries) {
      try {
        // startStream() opens the gRPC stream and resolves when it ends
        await this.startStream();
        this.currentRetries = 0;
        this.retryDelay = 1000;
      } catch (error) {
        this.currentRetries++;
        console.error(`Stream attempt ${this.currentRetries} failed:`, error.message);

        if (this.currentRetries >= this.maxRetries) {
          throw new Error('Max retry attempts exceeded');
        }

        // Exponential backoff
        await this.sleep(this.retryDelay);
        this.retryDelay *= 2;
      }
    }
  }

  sleep(ms) {
    return new Promise(resolve => setTimeout(resolve, ms));
  }
}
```

Important Notes#

  • Individual orders, not aggregated levels. Each entry is an individual order with full metadata (order ID, timestamp, trigger info, children, etc.).
  • Bids are sorted by price descending (highest price first). Asks are sorted by price ascending (lowest price first).
  • Children are never nested. Child orders always have an empty children array.
  • Trigger orders (isTrigger: true) have tif: null and meaningful triggerCondition / triggerPx values.
  • Once triggered, the order becomes isTrigger: false, triggerCondition: "Triggered", triggerPx: "0.0", and receives a tif value (typically "Gtc").
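
One way to encode the trigger lifecycle above is a small Python helper (the `trigger_state` name and return labels are ours, not API values):

```python
def trigger_state(order: dict) -> str:
    """Classify an order's trigger lifecycle per the field semantics above."""
    if order.get("isTrigger"):
        return "resting-trigger"   # tif is null; triggerCondition/triggerPx meaningful
    if order.get("triggerCondition") == "Triggered":
        return "triggered"         # now a regular order with a tif (typically "Gtc")
    return "regular"
```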

For detailed response examples including orders with TP/SL children, triggered orders, and all field descriptions, see the GetOrderBookSnapshot documentation.

Best Practices#

  1. Message Size Configuration: Set the gRPC max receive message length to at least 150 MB (500 MB recommended) so every snapshot fits in a single message
  2. Connection Management: Implement robust reconnection logic with exponential backoff
  3. Memory Management: Use bounded collections for storing historical snapshots; avoid keeping many full snapshots in memory simultaneously
  4. Performance: Process snapshots asynchronously to avoid blocking the stream
  5. Monitoring: Track stream health and snapshot rates
  6. Resource Cleanup: Properly close streams and connections on shutdown
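
Practices 3 and 4 can be combined in a bounded producer/consumer sketch: the receive loop never blocks on processing, and when the queue is full the oldest snapshot is dropped rather than letting memory grow. The queue size and the `process` placeholder are illustrative assumptions, not prescribed values:

```python
import queue
import threading

snapshots: "queue.Queue[bytes]" = queue.Queue(maxsize=4)  # bounded: caps memory use

def process(data: bytes) -> None:
    """Placeholder: replace with your own parsing/analysis."""

def producer(stream) -> None:
    """Receive loop: enqueue each snapshot; drop the oldest when the queue is full."""
    for message in stream:
        try:
            snapshots.put_nowait(message.data)
        except queue.Full:
            snapshots.get_nowait()             # discard the oldest snapshot
            snapshots.put_nowait(message.data)

def start_consumer() -> threading.Thread:
    """Process snapshots on a separate thread so the receive loop is never blocked."""
    def loop():
        while True:
            process(snapshots.get())
            snapshots.task_done()
    t = threading.Thread(target=loop, daemon=True)
    t.start()
    return t
```

The single-threaded drop-oldest step shown here is a sketch; under concurrent producers it would need a lock around the full-queue branch.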

Current Limitations#

  • Data Retention: Node maintains only 24 hours of historical order book data
  • Backpressure: High-volume periods may require careful handling to avoid overwhelming downstream systems
  • Availability: This endpoint requires a dedicated node and is not available on shared infrastructure

Resources#

Need help? Contact our support team or check the Hyperliquid gRPC documentation.