StreamOrderbookSnapshots
Stream continuous order book snapshots starting from a position, providing real-time access to every individual open order across all markets on Hyperliquid. Each snapshot contains complete order metadata including order IDs, timestamps, trigger conditions, and child orders.
This is a premium endpoint available only with dedicated nodes. It is not available on shared infrastructure. Contact support to provision a dedicated node.
Clone our gRPC Code Examples Repository for complete, runnable implementations in Go, Python, and Node.js.
When to Use This Method
StreamOrderbookSnapshots is essential for:
- Market Making - Monitor individual order changes and adjust quotes in real-time
- Trading Algorithms - Access live order-level data for execution strategies
- Order Flow Analysis - Track individual orders appearing and disappearing across snapshots
- Whale Watching - Detect large orders and trigger-order clusters in real-time
- Risk Management - Monitor market conditions with full order granularity
Method Signature
rpc StreamOrderbookSnapshots(Position) returns (stream OrderBookSnapshot) {}
Request Message
message Position {
// Leave all fields unset or zero to target the latest data.
oneof position {
int64 timestamp_ms = 1; // ms since Unix epoch, inclusive
int64 block_height = 2; // block height, inclusive
}
}
The Position message allows flexible stream starting points:
- timestamp_ms: Start streaming from a specific time (milliseconds since Unix epoch)
- block_height: Start streaming from a specific block height
- Empty/zero: Start streaming from the latest order book snapshot
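The three starting points map to keyword arguments on the generated Position message. A minimal helper sketch (pure Python; the field names come from the proto above):

```python
import time

def start_position(minutes_ago=None, block=None):
    """Kwargs for the Position message; set at most one oneof field."""
    if minutes_ago is not None:
        # Start streaming from a wall-clock offset (ms since Unix epoch)
        return {'timestamp_ms': int(time.time() * 1000) - minutes_ago * 60 * 1000}
    if block is not None:
        # Start streaming from a specific block height (inclusive)
        return {'block_height': block}
    # Empty Position: stream from the latest snapshot
    return {}
```

With the generated bindings this becomes, e.g., `hyperliquid_pb2.Position(**start_position(block=908730000))`.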
Response Stream
message OrderBookSnapshot {
// JSON-encoded Hyperliquid order book snapshot.
bytes data = 1;
}
Each streamed message contains a full order book snapshot. These are large responses (typically ~129 MB each) containing every individual open order across all markets.
Configure your gRPC client with a max receive message length of at least 150 MB (recommended 500 MB) to handle each snapshot message.
Top-Level Structure
{
"block": "908730000",
"timestamp": 1772276651839,
"data": [ ... ]
}
| Field | Type | Description |
|---|---|---|
| block | string | Block height at which this snapshot was taken |
| timestamp | integer | Unix timestamp in milliseconds when the snapshot was taken |
| data | array | Array of market entries (one per trading pair). ~656 markets in a typical snapshot |
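Decoding the envelope is plain JSON work. A short sketch over a hand-made miniature payload (real snapshots carry hundreds of markets):

```python
import json

# Miniature stand-in for the real ~129 MB snapshot payload
raw = b'{"block": "908730000", "timestamp": 1772276651839, "data": [["BTC", [[], []]]]}'

snapshot = json.loads(raw)
block_height = int(snapshot['block'])  # block arrives JSON-encoded as a string
taken_at_ms = snapshot['timestamp']    # integer milliseconds since the epoch
markets = snapshot['data']             # one [coin, [bids, asks]] entry per market
```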
Market Entry Structure
Each element in data is a 2-element array (tuple):
[
"BTC",
[
[ ...bid orders... ],
[ ...ask orders... ]
]
]
| Index | Type | Description |
|---|---|---|
| [0] | string | Coin/asset symbol. Perp markets use ticker names (e.g. "BTC", "ETH"). Spot markets use @-prefixed numeric IDs (e.g. "@1", "@105"). Pre-market stocks use xyz: prefix (e.g. "xyz:TSLA") |
| [1] | array[2] | Two sub-arrays: [0] = bid orders (sorted descending by price), [1] = ask orders (sorted ascending by price) |
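Because the documented sort order puts the best price first on each side, the top of book for one entry is simply the head of each list. A small helper sketch, assuming the tuple layout described above:

```python
def best_bid_ask(market_entry):
    """Top-of-book orders for one [coin, [bids, asks]] tuple, or None if a side is empty."""
    coin, (bids, asks) = market_entry
    best_bid = bids[0] if bids else None   # bids sorted descending by price
    best_ask = asks[0] if asks else None   # asks sorted ascending by price
    return coin, best_bid, best_ask
```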
Order Object
Each order in the bid/ask arrays is an individual order with full metadata. See the GetOrderBookSnapshot Field Reference for the complete field-by-field breakdown.
{
"coin": "BTC",
"side": "B",
"limitPx": "84500.0",
"sz": "0.5",
"oid": 333003526755,
"timestamp": 1772276628506,
"triggerCondition": "N/A",
"isTrigger": false,
"triggerPx": "0.0",
"children": [],
"isPositionTpsl": false,
"reduceOnly": false,
"orderType": "Limit",
"origSz": "0.5",
"tif": "Alo",
"cloid": "0x00000000000000000000000000000318"
}
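Numeric fields arrive as strings, so consumers convert before doing arithmetic. A sketch that estimates fill progress, under the assumption (suggested by the field names, not confirmed here) that sz is the remaining size and origSz the original size:

```python
def fill_progress(order):
    """Fraction filled, assuming sz = remaining size and origSz = original size."""
    orig = float(order['origSz'])       # string-encoded decimal, as in the sample
    remaining = float(order['sz'])
    return 0.0 if orig == 0 else (orig - remaining) / orig
```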
Coin Symbol Conventions
| Pattern | Type | Examples |
|---|---|---|
| Plain ticker | Perpetual futures | "BTC", "ETH", "AAVE", "ARB" |
| @ + number | Spot markets | "@1", "@10", "@105" |
| xyz: + ticker | Pre-market stocks | "xyz:TSLA", "xyz:TSM", "xyz:SOFTBANK" |
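The prefix rules make market classification a one-liner per case; a minimal sketch:

```python
def classify_market(symbol):
    """Market type from the coin symbol, per the conventions above."""
    if symbol.startswith('@'):
        return 'spot'
    if symbol.startswith('xyz:'):
        return 'pre-market stock'
    return 'perp'
```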
Implementation Examples
- Go
- Python
- Node.js
package main
import (
"context"
"encoding/json"
"fmt"
"log"
"os"
"strings"
"time"
pb "hyperliquid-grpc-client/api/v2"
"google.golang.org/grpc"
"google.golang.org/grpc/credentials"
"google.golang.org/grpc/metadata"
)
type Snapshot struct {
Block string `json:"block"`
Timestamp int64 `json:"timestamp"`
Data json.RawMessage `json:"data"`
}
func streamOrderbookSnapshots() {
endpoint := os.Getenv("HYPERLIQUID_ENDPOINT")
apiKey := os.Getenv("API_KEY")
if endpoint == "" {
log.Fatal("Error: HYPERLIQUID_ENDPOINT environment variable is required.")
}
if apiKey == "" {
log.Fatal("Error: API_KEY environment variable is required.")
}
fmt.Println("Hyperliquid Go gRPC Client - Stream Order Book Snapshots")
fmt.Println("========================================================")
fmt.Printf("Endpoint: %s\n\n", endpoint)
// Set up TLS connection
creds := credentials.NewTLS(nil)
opts := []grpc.DialOption{
grpc.WithTransportCredentials(creds),
grpc.WithDefaultCallOptions(
grpc.MaxCallRecvMsgSize(500 * 1024 * 1024), // 500MB max
),
}
fmt.Println("Connecting to gRPC server...")
conn, err := grpc.NewClient(endpoint, opts...)
if err != nil {
log.Fatalf("Failed to connect: %v", err)
}
defer conn.Close()
client := pb.NewHyperliquidL1GatewayClient(conn)
fmt.Println("Connected successfully!\n")
// Create context with API key
ctx := metadata.AppendToOutgoingContext(context.Background(), "x-api-key", apiKey)
// Create request - empty Position means latest/current snapshots
request := &pb.Position{}
fmt.Println("Starting order book snapshots stream...")
fmt.Println("Press Ctrl+C to stop streaming\n")
stream, err := client.StreamOrderbookSnapshots(ctx, request)
if err != nil {
log.Fatalf("Failed to start stream: %v", err)
}
snapshotCount := 0
for {
response, err := stream.Recv()
if err != nil {
fmt.Printf("Stream ended: %v\n", err)
break
}
snapshotCount++
fmt.Printf("\n===== SNAPSHOT #%d =====\n", snapshotCount)
fmt.Printf("Response size: %d bytes\n", len(response.Data))
processSnapshot(response.Data, snapshotCount)
fmt.Println("\n" + strings.Repeat("-", 50))
}
fmt.Printf("\nTotal snapshots received: %d\n", snapshotCount)
}
func processSnapshot(data []byte, snapshotNum int) {
var snapshot Snapshot
if err := json.Unmarshal(data, &snapshot); err != nil {
fmt.Printf("Failed to parse JSON: %v\n", err)
return
}
fmt.Printf("Block: %s\n", snapshot.Block)
// Convert to UTC so the printed time matches the label
timestamp := time.UnixMilli(snapshot.Timestamp).UTC()
fmt.Printf("Time: %s\n", timestamp.Format("2006-01-02 15:04:05 UTC"))
// Parse the market data
var markets []json.RawMessage
if err := json.Unmarshal(snapshot.Data, &markets); err != nil {
fmt.Printf("Failed to parse markets: %v\n", err)
return
}
fmt.Printf("Total markets: %d\n", len(markets))
}
func main() {
streamOrderbookSnapshots()
}
import grpc
import json
import signal
import sys
import os
from datetime import datetime, timezone
from dotenv import load_dotenv
import hyperliquid_pb2
import hyperliquid_pb2_grpc
load_dotenv()
def stream_orderbook_snapshots():
endpoint = os.getenv('HYPERLIQUID_ENDPOINT')
api_key = os.getenv('API_KEY')
if not endpoint:
print("Error: HYPERLIQUID_ENDPOINT environment variable is required.")
sys.exit(1)
if not api_key:
print("Error: API_KEY environment variable is required.")
sys.exit(1)
print('Hyperliquid Python gRPC Client - Stream Order Book Snapshots')
print('============================================================')
print(f'Endpoint: {endpoint}\n')
credentials = grpc.ssl_channel_credentials()
options = [
('grpc.max_receive_message_length', 500 * 1024 * 1024), # 500MB max
]
metadata = [('x-api-key', api_key)]
print('Connecting to gRPC server...')
with grpc.secure_channel(endpoint, credentials, options=options) as channel:
client = hyperliquid_pb2_grpc.HyperliquidL1GatewayStub(channel)
print('Connected successfully!\n')
# Create request - empty Position means latest/current snapshots
request = hyperliquid_pb2.Position()
print('Starting order book snapshots stream...')
print('Press Ctrl+C to stop streaming\n')
snapshot_count = 0
def signal_handler(sig, frame):
print('\nStopping stream...')
print(f'Total snapshots received: {snapshot_count}')
sys.exit(0)
signal.signal(signal.SIGINT, signal_handler)
try:
for response in client.StreamOrderbookSnapshots(request, metadata=metadata):
snapshot_count += 1
print(f'\n===== SNAPSHOT #{snapshot_count} =====')
print(f'Response size: {len(response.data)} bytes')
process_snapshot(response.data, snapshot_count)
print('\n' + '-' * 50)
except grpc.RpcError as e:
print(f'Stream error: {e}')
except KeyboardInterrupt:
print('\nStopping stream...')
print(f'\nTotal snapshots received: {snapshot_count}')
def process_snapshot(data, snapshot_num):
try:
snapshot = json.loads(data.decode('utf-8'))
print(f'Block: {snapshot["block"]}')
# Convert with an explicit UTC timezone so the printed time matches the label
dt = datetime.fromtimestamp(snapshot['timestamp'] / 1000, tz=timezone.utc)
print(f'Time: {dt.strftime("%Y-%m-%d %H:%M:%S")} UTC')
markets = snapshot['data']
print(f'Total markets: {len(markets)}')
# Count orders across all markets
total_bids = 0
total_asks = 0
for market in markets:
orders = market[1]
total_bids += len(orders[0])
total_asks += len(orders[1])
print(f'Total bid orders: {total_bids:,}')
print(f'Total ask orders: {total_asks:,}')
print(f'Total orders: {total_bids + total_asks:,}')
except json.JSONDecodeError as e:
print(f'Failed to parse JSON: {e}')
except Exception as e:
print(f'Error processing snapshot: {e}')
if __name__ == '__main__':
stream_orderbook_snapshots()
const grpc = require('@grpc/grpc-js');
const protoLoader = require('@grpc/proto-loader');
const path = require('path');
require('dotenv').config();
const PROTO_PATH = path.join(__dirname, 'v2.proto');
const packageDefinition = protoLoader.loadSync(PROTO_PATH, {
keepCase: true,
longs: String,
enums: String,
defaults: true,
oneofs: true
});
const proto = grpc.loadPackageDefinition(packageDefinition);
async function streamOrderbookSnapshots() {
const endpoint = process.env.HYPERLIQUID_ENDPOINT;
const apiKey = process.env.API_KEY;
if (!endpoint) {
console.error('Error: HYPERLIQUID_ENDPOINT environment variable is required.');
process.exit(1);
}
if (!apiKey) {
console.error('Error: API_KEY environment variable is required.');
process.exit(1);
}
console.log('Hyperliquid Node.js gRPC Client - Stream Order Book Snapshots');
console.log('=============================================================');
console.log(`Endpoint: ${endpoint}\n`);
const metadata = new grpc.Metadata();
metadata.add('x-api-key', apiKey);
// 500MB max to handle the large snapshots
const client = new proto.hyperliquid_l1_gateway.v2.HyperliquidL1Gateway(
endpoint,
grpc.credentials.createSsl(),
{
'grpc.max_receive_message_length': 500 * 1024 * 1024
}
);
console.log('Starting order book snapshots stream...');
console.log('Press Ctrl+C to stop streaming\n');
// Make the gRPC call - empty Position for latest
const stream = client.StreamOrderbookSnapshots({}, metadata);
let snapshotCount = 0;
stream.on('data', (data) => {
snapshotCount++;
try {
const snapshot = JSON.parse(data.data);
console.log(`\n===== SNAPSHOT #${snapshotCount} =====`);
console.log(`Response size: ${data.data.length} bytes`);
console.log(`Block: ${snapshot.block}`);
const date = new Date(snapshot.timestamp);
console.log(`Time: ${date.toISOString()}`);
console.log(`Total markets: ${snapshot.data.length}`);
// Count orders across all markets
let totalBids = 0;
let totalAsks = 0;
for (const market of snapshot.data) {
totalBids += market[1][0].length;
totalAsks += market[1][1].length;
}
console.log(`Total orders: ${(totalBids + totalAsks).toLocaleString()}`);
console.log('\n' + '-'.repeat(50));
} catch (error) {
console.error(`Failed to parse message #${snapshotCount}:`, error.message);
}
});
stream.on('error', (error) => {
console.error('Stream error:', error.message);
});
stream.on('end', () => {
console.log('Stream ended');
console.log(`\nTotal snapshots received: ${snapshotCount}`);
});
}
streamOrderbookSnapshots();
Common Use Cases
1. Order Flow Tracking
Compare consecutive snapshots to detect new, modified, and cancelled orders:
class OrderFlowTracker:
def __init__(self):
self.previous_orders = {} # oid -> order
def track(self, snapshot):
"""Compare snapshots to detect order changes"""
current_orders = {}
for market in snapshot['data']:
coin = market[0]
bids = market[1][0]
asks = market[1][1]
for order in bids + asks:
current_orders[order['oid']] = order
# Detect changes
new_oids = set(current_orders) - set(self.previous_orders)
removed_oids = set(self.previous_orders) - set(current_orders)
for oid in new_oids:
order = current_orders[oid]
print(f'NEW: {order["coin"]} {order["side"]} '
f'{order["sz"]} @ {order["limitPx"]} '
f'({order["orderType"]})')
for oid in removed_oids:
order = self.previous_orders[oid]
print(f'REMOVED: {order["coin"]} {order["side"]} '
f'{order["sz"]} @ {order["limitPx"]}')
self.previous_orders = current_orders
2. Trigger Order Monitor
Track stop-loss and take-profit orders clustering around price levels:
function analyzeTriggerOrders(snapshot, targetCoin) {
for (const market of snapshot.data) {
if (market[0] !== targetCoin) continue;
const allOrders = [...market[1][0], ...market[1][1]];
const triggerOrders = allOrders.filter(o => o.isTrigger);
// Group by trigger price
const triggerLevels = {};
for (const order of triggerOrders) {
const px = order.triggerPx;
if (!triggerLevels[px]) {
triggerLevels[px] = { count: 0, totalSz: 0, types: [] };
}
triggerLevels[px].count++;
triggerLevels[px].totalSz += parseFloat(order.sz);
triggerLevels[px].types.push(order.orderType);
}
// Report significant trigger clusters
const sorted = Object.entries(triggerLevels)
.sort((a, b) => b[1].totalSz - a[1].totalSz);
console.log(`\n${targetCoin} Trigger Order Clusters:`);
for (const [px, data] of sorted.slice(0, 5)) {
console.log(` ${px}: ${data.count} orders, ` +
`size ${data.totalSz.toFixed(2)}`);
}
}
}
3. Large Order Detection
Monitor for whale-sized orders appearing in the stream:
// Note: requires the "encoding/json", "log", and "strconv" imports.
func detectLargeOrders(data []byte, threshold float64) {
var snapshot struct {
Block string `json:"block"`
Timestamp int64 `json:"timestamp"`
Data [][]interface{} `json:"data"`
}
if err := json.Unmarshal(data, &snapshot); err != nil {
return
}
for _, market := range snapshot.Data {
coin := market[0].(string)
sides := market[1].([]interface{})
for _, side := range sides {
orders := side.([]interface{})
for _, o := range orders {
order := o.(map[string]interface{})
sz, _ := strconv.ParseFloat(order["sz"].(string), 64)
if sz >= threshold {
log.Printf("LARGE ORDER: %s %s %.2f @ %s (%s)",
coin, order["side"], sz,
order["limitPx"], order["orderType"])
}
}
}
}
}
Error Handling and Reconnection
class RobustOrderbookStreamer {
constructor(endpoint, apiKey) {
this.endpoint = endpoint;
this.apiKey = apiKey;
this.maxRetries = 5;
this.retryDelay = 1000;
this.currentRetries = 0;
}
// startStream() (not shown) should open the gRPC stream and return
// a promise that rejects on stream error.
async startStreamWithRetry() {
while (this.currentRetries < this.maxRetries) {
try {
await this.startStream();
this.currentRetries = 0;
this.retryDelay = 1000;
} catch (error) {
this.currentRetries++;
console.error(`Stream attempt ${this.currentRetries} failed:`, error.message);
if (this.currentRetries >= this.maxRetries) {
throw new Error('Max retry attempts exceeded');
}
// Exponential backoff
await this.sleep(this.retryDelay);
this.retryDelay *= 2;
}
}
}
sleep(ms) {
return new Promise(resolve => setTimeout(resolve, ms));
}
}
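Retries pair naturally with the block_height start position: remember the last block seen and resume just past it, so a dropped connection neither skips nor replays snapshots. A small helper sketch for computing the resume point (Position field names as in the proto above; the +1 step is one reasonable convention given that block_height is inclusive):

```python
def resume_position_kwargs(last_block=None):
    """Kwargs for the Position message when (re)opening the stream."""
    if last_block is None:
        return {}  # no prior snapshot: start from the latest data
    # block_height is inclusive, so resume one block past the last one seen
    return {'block_height': last_block + 1}
```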
Important Notes
- Individual orders, not aggregated levels. Each entry is an individual order with full metadata (order ID, timestamp, trigger info, children, etc.).
- Bids are sorted by price descending (highest price first). Asks are sorted by price ascending (lowest price first).
- Children are never nested. Child orders always have an empty children array.
- Trigger orders (isTrigger: true) have tif: null and meaningful triggerCondition/triggerPx values.
- Once triggered, the order becomes isTrigger: false, triggerCondition: "Triggered", triggerPx: "0.0", and receives a tif value (typically "Gtc").
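The trigger lifecycle can be checked directly from the order fields; a sketch, with field semantics as documented in the notes above:

```python
def is_resting_trigger(order):
    # A trigger order that has not fired yet: isTrigger true, tif null
    return order['isTrigger'] and order['tif'] is None

def has_fired(order):
    # After firing, the order converts to a regular resting order
    return not order['isTrigger'] and order['triggerCondition'] == 'Triggered'
```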
For detailed response examples including orders with TP/SL children, triggered orders, and all field descriptions, see the GetOrderBookSnapshot documentation.
Best Practices
- Message Size Configuration: Set the gRPC max receive message length to at least 150 MB (500 MB recommended)
- Connection Management: Implement robust reconnection logic with exponential backoff
- Memory Management: Use bounded collections for storing historical snapshots; avoid keeping many full snapshots in memory simultaneously
- Performance: Process snapshots asynchronously to avoid blocking the stream
- Monitoring: Track stream health and snapshot rates
- Resource Cleanup: Properly close streams and connections on shutdown
Current Limitations
- Data Retention: Nodes retain only 24 hours of historical order book data
- Backpressure: High-volume periods may require careful handling to avoid overwhelming downstream systems
- Availability: This endpoint requires a dedicated node and is not available on shared infrastructure
Resources
- GitHub: gRPC Code Examples - Complete working examples
- Copy Trading Bot - Production-ready trading bot example
- Pricing - Dedicated node pricing details
Need help? Contact our support team or check the Hyperliquid gRPC documentation.