userRateLimit - HyperCore Info Endpoint
Get rate limit information for a user on Hyperliquid, including cumulative volume and request-cap usage. Monitor API usage, track limits, and optimize request patterns.
Authenticate HyperCore Info requests by sending your Dwellir API key in the x-api-key header to https://api-hyperliquid-mainnet-info.n.dwellir.com/info.
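The code samples on this page assume a small `getUserRateLimit` helper. A minimal sketch using `fetch` against the Dwellir endpoint might look like this; `DWELLIR_API_KEY` is a placeholder environment variable, and the response fields (`cumVlm`, `nRequestsUsed`, `nRequestsCap`, `nRequestsSurplus`) follow the `userRateLimit` schema used below:

```javascript
const INFO_URL = 'https://api-hyperliquid-mainnet-info.n.dwellir.com/info';

// Build the POST options for a userRateLimit query.
function buildUserRateLimitRequest(userAddress) {
  return {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'x-api-key': process.env.DWELLIR_API_KEY // your Dwellir API key
    },
    body: JSON.stringify({ type: 'userRateLimit', user: userAddress })
  };
}

async function getUserRateLimit(userAddress) {
  const res = await fetch(INFO_URL, buildUserRateLimitRequest(userAddress));
  if (!res.ok) throw new Error(`Request failed: HTTP ${res.status}`);
  // e.g. { cumVlm, nRequestsUsed, nRequestsCap, nRequestsSurplus }
  return res.json();
}
```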
When to Use This Endpoint
The userRateLimit endpoint is essential for:
- Usage Monitoring — Track API usage against rate limits
- Optimization — Identify when to throttle requests
- Compliance — Ensure requests stay within allowed limits
- Planning — Understand user-specific rate limit allocations
Common Use Cases
1. Check Rate Limit Status
Monitor current rate limit status:
```javascript
async function checkRateLimitStatus(userAddress) {
  const rateLimit = await getUserRateLimit(userAddress);
  const percentUsed = (rateLimit.nRequestsUsed / rateLimit.nRequestsCap) * 100;

  console.log('=== Rate Limit Status ===\n');
  console.log(`Cumulative volume: ${rateLimit.cumVlm}`);
  console.log(`Requests used: ${rateLimit.nRequestsUsed}/${rateLimit.nRequestsCap}`);
  console.log(`Used: ${percentUsed.toFixed(1)}%`);

  if (percentUsed > 80) {
    console.warn('WARNING: Approaching rate limit!');
  }
  // A positive surplus means the cap has already been exceeded.
  if (rateLimit.nRequestsSurplus > 0) {
    console.error(`Request cap exceeded by ${rateLimit.nRequestsSurplus}`);
  }
}
```
2. Implement Smart Throttling
Automatically throttle requests based on remaining capacity:
```javascript
class RateLimitedClient {
  constructor(userAddress) {
    this.userAddress = userAddress;
    this.lastCheck = null;
    this.lastUsedPercent = 0;
  }

  async shouldThrottle() {
    // Re-check at most every 15 seconds so the status poll itself
    // doesn't eat into the request cap.
    if (this.lastCheck === null || Date.now() - this.lastCheck > 15000) {
      const rateLimit = await getUserRateLimit(this.userAddress);
      this.lastCheck = Date.now();
      this.lastUsedPercent =
        (rateLimit.nRequestsUsed / rateLimit.nRequestsCap) * 100;
    }
    // Throttle if usage is approaching the cap
    return this.lastUsedPercent > 80;
  }

  async makeRequest(requestFn) {
    if (await this.shouldThrottle()) {
      console.log('Throttling: rate-cap usage is high');
      await new Promise(r => setTimeout(r, 1000));
    }
    return await requestFn();
  }
}

// Usage
const client = new RateLimitedClient('0x63E8c7C149556D5f34F833419A287bb9Ef81487f');
const result = await client.makeRequest(() => someApiCall());
```
3. Build Rate Limit Monitor
Create a monitoring system:
```javascript
async function monitorRateLimit(userAddress, alertThreshold = 10) {
  const rateLimit = await getUserRateLimit(userAddress);
  const status = {
    cumulativeVolume: rateLimit.cumVlm,
    requestsUsed: rateLimit.nRequestsUsed,
    requestsCap: rateLimit.nRequestsCap,
    requestsSurplus: rateLimit.nRequestsSurplus,
    percentUsed: (rateLimit.nRequestsUsed / rateLimit.nRequestsCap) * 100
  };

  if (status.requestsSurplus > 0) {
    console.error(`ALERT: Request cap exceeded by ${status.requestsSurplus}`);
  } else if (100 - status.percentUsed < alertThreshold) {
    // alertThreshold is the percentage of remaining capacity that triggers a warning
    console.warn(`ALERT: Less than ${alertThreshold}% of the request cap remaining`);
  }

  return status;
}

// Usage
const status = await monitorRateLimit('0x63E8c7C149556D5f34F833419A287bb9Ef81487f');
console.log('Rate limit status:', status);
```
4. Calculate Optimal Request Rate
Calculate safe request rate:
```javascript
async function calculateOptimalRate(userAddress) {
  const rateLimit = await getUserRateLimit(userAddress);
  // Spread the remaining request budget over a 60-second window.
  const remaining = Math.max(0, rateLimit.nRequestsCap - rateLimit.nRequestsUsed);
  const safeRPS = Math.max(1, Math.floor(remaining / 60));
  return {
    safeRPS,
    delayBetweenRequests: 1000 / safeRPS
  };
}

// Usage
const optimal = await calculateOptimalRate('0x63E8c7C149556D5f34F833419A287bb9Ef81487f');
console.log(`Safe request rate: ${optimal.safeRPS} req/s`);
console.log(`Delay between requests: ${optimal.delayBetweenRequests.toFixed(0)}ms`);
```
5. Rate Limit Dashboard
Create a comprehensive rate limit dashboard:
```javascript
async function getRateLimitDashboard(userAddress) {
  try {
    const rateLimit = await getUserRateLimit(userAddress);
    const percentUsed = (rateLimit.nRequestsUsed / rateLimit.nRequestsCap) * 100;
    return {
      status: 'success',
      metrics: {
        cumulativeVolume: rateLimit.cumVlm,
        used: rateLimit.nRequestsUsed,
        cap: rateLimit.nRequestsCap,
        surplus: rateLimit.nRequestsSurplus,
        percentUsed: percentUsed.toFixed(1) + '%'
      },
      health: percentUsed < 80 ? 'healthy' : percentUsed < 95 ? 'warning' : 'critical'
    };
  } catch (error) {
    return {
      status: 'error',
      error: error.message
    };
  }
}
```
Best Practices
- Check regularly — Monitor rate limits before making bulk requests
- Respect limits — Implement throttling when approaching limits
- Cache strategically — Cache rate limit data for 10-30 seconds
- Plan ahead — Check limits before starting long-running operations
- Implement backoff — Use exponential backoff when limits are reached
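The backoff practice above can be sketched as a small retry wrapper. This is illustrative only; `backoffDelay`, `withBackoff`, and the base/cap values are assumptions, not part of the API:

```javascript
// Delay grows as 500ms, 1s, 2s, 4s, ... capped at maxMs.
function backoffDelay(attempt, baseMs = 500, maxMs = 30000) {
  return Math.min(maxMs, baseMs * 2 ** attempt);
}

// Retry a request with exponential backoff on failure.
async function withBackoff(requestFn, maxRetries = 5) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await requestFn();
    } catch (err) {
      if (attempt >= maxRetries) throw err;
      await new Promise(r => setTimeout(r, backoffDelay(attempt)));
    }
  }
}
```

In practice you would only retry on rate-limit errors (e.g. HTTP 429), and pass other failures straight through.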
Related Endpoints
- clearinghouseState — Get account state
- userFees — Get user fee information
- openOrders — Get user open orders
Monitor and optimize your Hyperliquid API usage with Dwellir's HyperCore Info Endpoint. Get your API key →