StreamOrderbookSnapshots - Real-time Order Book Streaming
Stream continuous order book snapshots with individual order visibility from Hyperliquid L1 Gateway via gRPC. Premium endpoint available on dedicated nodes.
This method streams continuous order book snapshots starting from a position, giving real-time access to every individual open order across all markets on Hyperliquid. Each snapshot contains complete order metadata, including order IDs, timestamps, trigger conditions, and child orders.
Dedicated Node Required
This is a premium endpoint available only with dedicated nodes. It is not available on shared infrastructure. Contact support to provision a dedicated node.
Positioning Note
Start this stream from a timestamp cursor. block_height is not supported for order book snapshots.
Full Code Examples
Clone our gRPC Code Examples Repository for complete, runnable implementations in Go, Python, and Node.js.
When to Use This Method
StreamOrderbookSnapshots is essential for:
- Market Making - Monitor individual order changes and adjust quotes in real-time
- Trading Algorithms - Access live order-level data for execution strategies
- Order Flow Analysis - Track individual orders appearing and disappearing across snapshots
- Whale Watching - Detect large orders and trigger-order clustering in real-time
- Risk Management - Monitor market conditions with full order granularity
Method Signature
rpc StreamOrderbookSnapshots(Position) returns (stream OrderBookSnapshot) {}

Request Parameters
- One of `position`: a timestamp, in milliseconds since the Unix epoch, inclusive
- One of `position`: a block height, inclusive. Not supported for this endpoint; use a timestamp cursor instead (see the Positioning Note above)
Response Body
- `block` - Block height at which this snapshot was taken
- `timestamp` - Unix timestamp in milliseconds when the snapshot was taken
- `data` - Array of market entries (one per trading pair); ~656 markets in a typical snapshot
Response Stream
message OrderBookSnapshot {
// JSON-encoded Hyperliquid order book snapshot.
bytes data = 1;
}

Each streamed message contains a full order book snapshot. These are large responses (typically ~129 MB each) containing every individual open order across all markets.
Large Message Size
Snapshots regularly exceed default gRPC limits. Set your client's max receive message length well above the typical snapshot size (~129 MB); 256 MB is a reasonable starting point. Increase it further if you still see message-size errors while streaming.
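Once a message arrives, its `data` field is just JSON bytes. As a minimal sketch (assuming your client hands you the raw `data` bytes of each `OrderBookSnapshot`; `decode_snapshot` is a hypothetical helper, not part of the API), decoding could look like:

```python
import json

def decode_snapshot(raw: bytes) -> dict:
    """Parse the JSON payload carried in OrderBookSnapshot.data."""
    snap = json.loads(raw)
    # block arrives as a string; convert it for numeric comparisons
    snap["block"] = int(snap["block"])
    return snap
```

The `block` field is delivered as a string (see the Top-Level Structure below), so converting it once at decode time avoids repeated casts downstream.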
Top-Level Structure
{
"block": "908730000",
"timestamp": 1772276651839,
"data": [ ... ]
}

| Field | Type | Description |
|---|---|---|
| block | string | Block height at which this snapshot was taken |
| timestamp | integer | Unix timestamp in milliseconds when the snapshot was taken |
| data | array | Array of market entries (one per trading pair). ~656 markets in a typical snapshot |
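A quick sanity check on a decoded snapshot is to count markets and orders. A sketch, assuming the decoded JSON dict and the `[coin, [bids, asks]]` entry shape described under Market Entry Structure below (`summarize` is a hypothetical helper):

```python
def summarize(snapshot: dict) -> dict:
    """Count markets and open orders in a decoded snapshot dict."""
    n_markets = len(snapshot["data"])
    n_orders = sum(
        len(side)                       # side is the bid or ask order list
        for _, sides in snapshot["data"]
        for side in sides
    )
    return {"block": snapshot["block"], "markets": n_markets, "orders": n_orders}
```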
Market Entry Structure
Each element in data is a 2-element array (tuple):
[
"BTC",
[
[ ...bid orders... ],
[ ...ask orders... ]
]
]

| Index | Type | Description |
|---|---|---|
| [0] | string | Coin/asset symbol. Perp markets use ticker names (e.g. "BTC", "ETH"). Spot markets use @-prefixed numeric IDs (e.g. "@1", "@105"). Pre-market stocks use the xyz: prefix (e.g. "xyz:TSLA") |
| [1] | array[2] | Two sub-arrays: [0] = bid orders (sorted descending by price), [1] = ask orders (sorted ascending by price) |
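Because bids are sorted descending and asks ascending, top of book for any market is simply the first element of each side. A sketch (assuming the `limitPx` field shown in the Order Object below; `top_of_book` is a hypothetical helper):

```python
def top_of_book(snapshot: dict, coin: str):
    """Return (best_bid, best_ask, spread) for one market, or None if absent."""
    for symbol, (bids, asks) in snapshot["data"]:
        if symbol != coin:
            continue
        # First entry on each side is best, given the documented sort order
        best_bid = float(bids[0]["limitPx"]) if bids else None
        best_ask = float(asks[0]["limitPx"]) if asks else None
        spread = (best_ask - best_bid) if bids and asks else None
        return best_bid, best_ask, spread
    return None
```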
Order Object
Each order in the bid/ask arrays is an individual order with full metadata. See the GetOrderBookSnapshot Field Reference for the complete field-by-field breakdown.
{
"coin": "BTC",
"side": "B",
"limitPx": "84500.0",
"sz": "0.5",
"oid": 333003526755,
"timestamp": 1772276628506,
"triggerCondition": "N/A",
"isTrigger": false,
"triggerPx": "0.0",
"children": [],
"isPositionTpsl": false,
"reduceOnly": false,
"orderType": "Limit",
"origSz": "0.5",
"tif": "Alo",
"cloid": "0x00000000000000000000000000000318"
}

Coin Symbol Conventions
| Pattern | Type | Examples |
|---|---|---|
| Plain ticker | Perpetual futures | "BTC", "ETH", "AAVE", "ARB" |
| @ + number | Spot markets | "@1", "@10", "@105" |
| xyz: + ticker | Pre-market stocks | "xyz:TSLA", "xyz:TSM", "xyz:SOFTBANK" |
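These conventions are easy to encode in a small classifier. A sketch based on the table above (`classify_symbol` is a hypothetical helper):

```python
def classify_symbol(symbol: str) -> str:
    """Map a coin symbol to its market type per the naming conventions."""
    if symbol.startswith("xyz:"):
        return "premarket_stock"  # e.g. "xyz:TSLA"
    if symbol.startswith("@"):
        return "spot"             # e.g. "@105"
    return "perp"                 # plain tickers like "BTC"
```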
Common Use Cases
1. Order Flow Tracking
Compare consecutive snapshots to detect new, modified, and cancelled orders:
class OrderFlowTracker:
    def __init__(self):
        self.previous_orders = {}  # oid -> order

    def track(self, snapshot):
        """Compare snapshots to detect order changes"""
        current_orders = {}
        for market in snapshot['data']:
            coin = market[0]
            bids = market[1][0]
            asks = market[1][1]
            for order in bids + asks:
                current_orders[order['oid']] = order

        # Detect changes
        new_oids = set(current_orders) - set(self.previous_orders)
        removed_oids = set(self.previous_orders) - set(current_orders)

        for oid in new_oids:
            order = current_orders[oid]
            print(f'NEW: {order["coin"]} {order["side"]} '
                  f'{order["sz"]} @ {order["limitPx"]} '
                  f'({order["orderType"]})')

        for oid in removed_oids:
            order = self.previous_orders[oid]
            print(f'REMOVED: {order["coin"]} {order["side"]} '
                  f'{order["sz"]} @ {order["limitPx"]}')

        self.previous_orders = current_orders

2. Trigger Order Monitor
Track stop-loss and take-profit orders clustering around price levels:
function analyzeTriggerOrders(snapshot, targetCoin) {
  for (const market of snapshot.data) {
    if (market[0] !== targetCoin) continue;
    const allOrders = [...market[1][0], ...market[1][1]];
    const triggerOrders = allOrders.filter(o => o.isTrigger);

    // Group by trigger price
    const triggerLevels = {};
    for (const order of triggerOrders) {
      const px = order.triggerPx;
      if (!triggerLevels[px]) {
        triggerLevels[px] = { count: 0, totalSz: 0, types: [] };
      }
      triggerLevels[px].count++;
      triggerLevels[px].totalSz += parseFloat(order.sz);
      triggerLevels[px].types.push(order.orderType);
    }

    // Report significant trigger clusters
    const sorted = Object.entries(triggerLevels)
      .sort((a, b) => b[1].totalSz - a[1].totalSz);

    console.log(`\n${targetCoin} Trigger Order Clusters:`);
    for (const [px, data] of sorted.slice(0, 5)) {
      console.log(`  ${px}: ${data.count} orders, ` +
                  `size ${data.totalSz.toFixed(2)}`);
    }
  }
}

3. Large Order Detection
Monitor for whale-sized orders appearing in the stream:
import (
    "encoding/json"
    "log"
    "strconv"
)

func detectLargeOrders(data []byte, threshold float64) {
    var snapshot struct {
        Block     string          `json:"block"`
        Timestamp int64           `json:"timestamp"`
        Data      [][]interface{} `json:"data"`
    }
    if err := json.Unmarshal(data, &snapshot); err != nil {
        return
    }
    for _, market := range snapshot.Data {
        // Each market entry is [coin, [bids, asks]]
        coin := market[0].(string)
        sides := market[1].([]interface{})
        for _, side := range sides {
            orders := side.([]interface{})
            for _, o := range orders {
                order := o.(map[string]interface{})
                sz, _ := strconv.ParseFloat(order["sz"].(string), 64)
                if sz >= threshold {
                    log.Printf("LARGE ORDER: %s %s %.2f @ %s (%s)",
                        coin, order["side"], sz,
                        order["limitPx"], order["orderType"])
                }
            }
        }
    }
}

Error Handling and Reconnection
class RobustOrderbookStreamer {
  constructor(endpoint, apiKey) {
    this.endpoint = endpoint;
    this.apiKey = apiKey;
    this.maxRetries = 5;
    this.retryDelay = 1000;
    this.currentRetries = 0;
  }

  async startStreamWithRetry() {
    while (this.currentRetries < this.maxRetries) {
      try {
        // startStream() (not shown) opens the gRPC stream and resolves when it ends
        await this.startStream();
        this.currentRetries = 0;
        this.retryDelay = 1000;
      } catch (error) {
        this.currentRetries++;
        console.error(`Stream attempt ${this.currentRetries} failed:`, error.message);
        if (this.currentRetries >= this.maxRetries) {
          throw new Error('Max retry attempts exceeded');
        }
        // Exponential backoff
        await this.sleep(this.retryDelay);
        this.retryDelay *= 2;
      }
    }
  }

  sleep(ms) {
    return new Promise(resolve => setTimeout(resolve, ms));
  }
}

Important Notes
- Individual orders, not aggregated levels. Each entry is an individual order with full metadata (order ID, timestamp, trigger info, children, etc.).
- Bids are sorted by price descending (highest price first). Asks are sorted by price ascending (lowest price first).
- Children are never nested. Child orders always have an empty `children` array.
- Trigger orders (`isTrigger: true`) have `tif: null` and meaningful `triggerCondition`/`triggerPx` values.
- Once triggered, the order becomes `isTrigger: false`, `triggerCondition: "Triggered"`, `triggerPx: "0.0"`, and receives a `tif` value (typically `"Gtc"`).
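The trigger lifecycle above can be checked directly from snapshot fields. A sketch (`trigger_state` is a hypothetical helper; it assumes the field semantics described in the notes above):

```python
def trigger_state(order: dict) -> str:
    """Classify an order's trigger lifecycle stage from its snapshot fields."""
    if order.get("isTrigger"):
        return "pending"    # tif is null; triggerCondition/triggerPx are meaningful
    if order.get("triggerCondition") == "Triggered":
        return "triggered"  # now carries a tif (typically "Gtc"); triggerPx reset
    return "regular"
```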
For detailed response examples including orders with TP/SL children, triggered orders, and all field descriptions, see the GetOrderBookSnapshot documentation.
Best Practices
- Message Size Configuration: Set the gRPC max receive message length well above the typical snapshot size (~129 MB); 256 MB is a reasonable starting point
- Connection Management: Implement robust reconnection logic with exponential backoff
- Memory Management: Use bounded collections for storing historical snapshots; avoid keeping many full snapshots in memory simultaneously
- Performance: Process snapshots asynchronously to avoid blocking the stream
- Monitoring: Track stream health and snapshot rates
- Resource Cleanup: Properly close streams and connections on shutdown
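For the memory-management point, a bounded ring buffer keeps only the most recent snapshots; with ~129 MB per snapshot, retaining even a handful unbounded exhausts memory quickly. A sketch using the standard library (`SnapshotHistory` is a hypothetical helper):

```python
from collections import deque

class SnapshotHistory:
    """Keep only the N most recent snapshots to bound memory use."""

    def __init__(self, maxlen: int = 3):
        self._buf = deque(maxlen=maxlen)

    def push(self, snapshot: dict) -> None:
        self._buf.append(snapshot)  # oldest snapshot is evicted automatically

    def latest(self) -> dict:
        return self._buf[-1]

    def __len__(self) -> int:
        return len(self._buf)
```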
Current Limitations
- Data Retention: Node maintains only 24 hours of historical order book data
- Backpressure: High-volume periods may require careful handling to avoid overwhelming downstream systems
- Availability: This endpoint requires a dedicated node and is not available on shared infrastructure
Resources
- GitHub: gRPC Code Examples - Complete working examples
- Copy Trading Bot - Production-ready trading bot example
- Pricing - Dedicated node pricing details
Need help? Contact our support team or check the Hyperliquid gRPC documentation.