# eth_getFilterLogs - Polygon RPC Method
Returns all logs matching a previously created filter on Polygon. Essential for initial log retrieval, backfilling event data, and one-time historical queries for enterprise solutions (Starbucks, Disney, Reddit), gaming, DeFi, and stablecoin payments.
Returns an array of all logs matching the filter that was previously created with eth_newFilter on Polygon. Unlike eth_getFilterChanges which returns only new logs since the last poll, this method returns the complete set of matching logs for the filter's block range.
Why Polygon? Build on the most adopted Ethereum scaling solution with 45,000+ dApps, enterprise partnerships, $4B+ TVL, sub-$0.01 transactions, 8M+ daily transactions, and zkEVM for enhanced security.
## When to Use This Method
eth_getFilterLogs is essential for enterprise developers, gaming studios, and teams building high-throughput applications:
- Initial Log Retrieval — Fetch the complete set of matching logs when you first create a filter on Polygon
- Backfilling Event Data — Recover historical events after an indexer restart or gap in polling
- One-Time Queries — Retrieve all logs for a specific block range without incremental polling
- Data Reconciliation — Compare against incrementally collected data from eth_getFilterChanges to detect missed events
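All of these use cases pass block ranges to eth_newFilter as 0x-prefixed hex quantity strings. A minimal sketch of that encoding (the helper names toHexBlock and makeRangeFilter are ours, not part of any library):

```javascript
// Filter params expect block numbers as hex "quantity" strings:
// 0x-prefixed, no leading zeros (so block 0 encodes as "0x0").
function toHexBlock(n) {
  if (!Number.isInteger(n) || n < 0) {
    throw new RangeError('block number must be a non-negative integer');
  }
  return '0x' + n.toString(16);
}

// Build the params object passed to eth_newFilter for a block range.
function makeRangeFilter(address, fromBlock, toBlock, topics = []) {
  return {
    fromBlock: toHexBlock(fromBlock),
    toBlock: toHexBlock(toBlock),
    address,
    topics
  };
}
```

Named tags such as 'latest' are also accepted for fromBlock/toBlock; this sketch covers only explicit block numbers.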
## Code Examples
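At the JSON-RPC level, the method takes a single filter ID and returns an array of log objects. A representative exchange (the filter ID, address, and log values below are illustrative, not real chain data):

Request:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "eth_getFilterLogs",
  "params": ["0x16"]
}
```

Response:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": [
    {
      "address": "0x7ceb23fd6bc0add59e62ac25578270cff1b9f619",
      "topics": [
        "0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef",
        "0x000000000000000000000000aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
        "0x000000000000000000000000bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb"
      ],
      "data": "0x00000000000000000000000000000000000000000000000000000000000003e8",
      "blockNumber": "0x2faf080",
      "blockHash": "0x...",
      "transactionHash": "0x...",
      "transactionIndex": "0x0",
      "logIndex": "0x0",
      "removed": false
    }
  ]
}
```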
### Common Use Cases
#### 1. Backfill Event Data After Indexer Restart
Recover missed events when your indexer goes down and comes back:
```javascript
async function backfillEvents(provider, contractAddress, topics, lastProcessedBlock) {
  const currentBlock = await provider.getBlockNumber();

  // Create a filter covering the gap
  const filterId = await provider.send('eth_newFilter', [{
    fromBlock: '0x' + (lastProcessedBlock + 1).toString(16),
    toBlock: '0x' + currentBlock.toString(16),
    address: contractAddress,
    topics: topics
  }]);

  // Retrieve all logs in the gap
  const logs = await provider.send('eth_getFilterLogs', [filterId]);
  console.log(`Backfilling ${logs.length} events from blocks ${lastProcessedBlock + 1} to ${currentBlock}`);
  for (const log of logs) {
    await processEvent(log);
  }

  // Clean up and switch to incremental polling
  await provider.send('eth_uninstallFilter', [filterId]);
  return currentBlock;
}
```

#### 2. Compare Filter Results with Direct Query
Verify data consistency between filter-based and direct log queries:
```javascript
async function verifyFilterResults(provider, filterParams) {
  // Create filter and get logs via eth_getFilterLogs
  const filterId = await provider.send('eth_newFilter', [filterParams]);
  const filterLogs = await provider.send('eth_getFilterLogs', [filterId]);

  // Get logs directly via eth_getLogs
  const directLogs = await provider.send('eth_getLogs', [filterParams]);

  console.log(`Filter logs: ${filterLogs.length}, Direct logs: ${directLogs.length}`);
  if (filterLogs.length !== directLogs.length) {
    console.warn('Mismatch detected — investigate missing events');
  }

  await provider.send('eth_uninstallFilter', [filterId]);
  return { filterLogs, directLogs };
}
```

#### 3. Paginated Historical Event Loader
Load large volumes of historical events in manageable chunks:
```javascript
async function loadHistoricalEvents(provider, contractAddress, startBlock, endBlock, chunkSize = 2000) {
  const allLogs = [];
  for (let from = startBlock; from <= endBlock; from += chunkSize) {
    const to = Math.min(from + chunkSize - 1, endBlock);
    const filterId = await provider.send('eth_newFilter', [{
      fromBlock: '0x' + from.toString(16),
      toBlock: '0x' + to.toString(16),
      address: contractAddress
    }]);
    const logs = await provider.send('eth_getFilterLogs', [filterId]);
    allLogs.push(...logs);
    console.log(`Blocks ${from}-${to}: ${logs.length} events (total: ${allLogs.length})`);
    await provider.send('eth_uninstallFilter', [filterId]);
  }
  return allLogs;
}
```

## Error Handling
Common errors and solutions:
| Error Code | Description | Solution |
|---|---|---|
| -32000 | Filter not found | Filter expired or was uninstalled — recreate with eth_newFilter |
| -32000 | Query returned more than 10000 results | Narrow the block range in your filter — use smaller fromBlock/toBlock windows |
| -32600 | Invalid request | Verify the filter ID is a valid hex string |
| -32603 | Internal error | Node may be overloaded — retry with exponential backoff |
| -32005 | Rate limit exceeded | Reduce request frequency or implement client-side rate limiting |
```javascript
async function safeGetFilterLogs(provider, filterId, maxRetries = 3) {
  for (let i = 0; i < maxRetries; i++) {
    try {
      return await provider.send('eth_getFilterLogs', [filterId]);
    } catch (error) {
      if (error.message.includes('filter not found')) {
        throw new Error('Filter expired — must recreate before retrying');
      }
      if (error.message.includes('more than 10000 results')) {
        throw new Error('Too many results — narrow your block range');
      }
      if (i < maxRetries - 1) {
        const delay = Math.pow(2, i) * 1000; // exponential backoff: 1s, 2s, 4s, ...
        await new Promise(r => setTimeout(r, delay));
      } else {
        throw error;
      }
    }
  }
}
```

## Related Methods
- eth_newFilter — Create the log filter whose results this method returns
- eth_getFilterChanges — Poll for only new logs since the last call (incremental)
- eth_getLogs — Query logs directly without creating a filter first
- eth_uninstallFilter — Remove a filter when no longer needed
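The processEvent step in the backfill example above depends on your application, but it typically works directly off the returned log fields (address, topics, data, blockNumber). As an illustration, a minimal decoder for ERC-20 Transfer logs, assuming well-formed input (the name decodeTransferLog is ours):

```javascript
// For an ERC-20 Transfer(address,address,uint256) event:
//   topics[0] = keccak256 hash of the event signature
//   topics[1] = `from` address, left-padded to 32 bytes
//   topics[2] = `to` address, left-padded to 32 bytes
//   data      = transfer amount as a 32-byte hex value
function decodeTransferLog(log) {
  const topicToAddress = (topic) => '0x' + topic.slice(-40); // keep last 20 bytes
  return {
    from: topicToAddress(log.topics[1]),
    to: topicToAddress(log.topics[2]),
    value: BigInt(log.data),             // raw token units, not adjusted for decimals
    block: parseInt(log.blockNumber, 16) // log fields are hex strings
  };
}
```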