eth_getFilterChanges - Avalanche RPC Method
Poll a filter for new results since the last poll on Avalanche. Essential for event streaming, real-time monitoring, and efficient log indexing for institutional RWA tokenization ($18B+ transfer volume), gaming subnets, and enterprise blockchains.
Polls a filter on Avalanche and returns an array of changes (logs, block hashes, or transaction hashes) that have occurred since the last poll. The return type depends on the filter that was created — log filters return log objects, block filters return block hashes, and pending transaction filters return transaction hashes.
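Because the return shape depends on the filter type, callers typically dispatch on the element type before decoding. The sketch below illustrates that; `classifyFilterChanges` is a hypothetical helper, not part of any client library:

```javascript
// eth_getFilterChanges returns different shapes depending on the filter:
// log filters yield log objects; block and pending-transaction filters
// yield 32-byte hex hashes. Normalize a poll result before processing.
function classifyFilterChanges(changes) {
  if (changes.length === 0) return { kind: 'empty', items: [] };
  if (typeof changes[0] === 'string') {
    // Block filters and pending-transaction filters both return hashes
    return { kind: 'hashes', items: changes };
  }
  // Log filters return full log objects (address, topics, data, ...)
  return { kind: 'logs', items: changes };
}
```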
Why Avalanche? Build on a fast smart contract platform with sub-second finality, customizable L1 subnets, Evergreen subnets for institutions, and partnerships with Franklin Templeton, VanEck, and Bergen County.
When to Use This Method
eth_getFilterChanges is essential for enterprise developers, RWA tokenizers, and teams building custom blockchain networks:
- Event Streaming — Incrementally consume new contract events without re-fetching the entire log history on Avalanche
- Real-Time Monitoring — Track contract activity, token transfers, or governance votes for institutional RWA tokenization ($18B+ transfer volume), gaming subnets, and enterprise blockchains
- Efficient Log Indexing — Process only new events since your last poll, minimizing bandwidth and compute overhead
- Block & Transaction Tracking — When used with block or pending-transaction filters, detect new blocks or mempool activity in real time
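For the block-tracking case, a minimal polling loop might look like the following sketch. It assumes `provider.send(method, params)` is a raw JSON-RPC helper (as exposed by ethers' JsonRpcProvider); the interval is illustrative:

```javascript
// Sketch: poll a block filter for new block hashes. Each poll returns
// only the hashes produced since the previous poll.
async function watchNewBlocks(provider, onBlock, intervalMs = 2000) {
  const filterId = await provider.send('eth_newBlockFilter', []);
  const timer = setInterval(async () => {
    const hashes = await provider.send('eth_getFilterChanges', [filterId]);
    for (const hash of hashes) onBlock(hash);
  }, intervalMs);
  // Caller invokes the returned function to stop polling
  return () => clearInterval(timer);
}
```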
Code Examples
Common Use Cases
1. Real-Time Token Transfer Monitor
Stream ERC-20 transfer events on Avalanche:
async function monitorTransfers(provider, tokenAddress) {
  // Transfer event topic: keccak256('Transfer(address,address,uint256)')
  const transferTopic = '0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef';

  const createFilter = () => provider.send('eth_newFilter', [{
    fromBlock: 'latest',
    address: tokenAddress,
    topics: [transferTopic]
  }]);

  let filterId = await createFilter();
  console.log(`Monitoring transfers on ${tokenAddress}...`);

  setInterval(async () => {
    try {
      const logs = await provider.send('eth_getFilterChanges', [filterId]);
      for (const log of logs) {
        // Indexed address topics are 32 bytes; the address is the last 20
        const from = '0x' + log.topics[1].slice(26);
        const to = '0x' + log.topics[2].slice(26);
        const value = BigInt(log.data);
        console.log(`Transfer: ${from} -> ${to} (${value})`);
      }
    } catch (error) {
      if (error.message.includes('filter not found')) {
        // Filters expire after ~5 minutes of inactivity on most nodes
        console.log('Filter expired — recreating...');
        filterId = await createFilter();
      } else {
        console.error('Poll failed:', error.message);
      }
    }
  }, 3000);
}

2. Multi-Contract Event Aggregator
Monitor events across multiple contracts simultaneously:
async function aggregateEvents(provider, contracts) {
  const filterId = await provider.send('eth_newFilter', [{
    fromBlock: 'latest',
    address: contracts // Array of contract addresses
  }]);

  const eventBuffer = [];
  let pollCount = 0;

  const interval = setInterval(async () => {
    const changes = await provider.send('eth_getFilterChanges', [filterId]);
    pollCount++;
    if (changes.length > 0) {
      eventBuffer.push(...changes);
      console.log(`Poll #${pollCount}: ${changes.length} new events (total: ${eventBuffer.length})`);
      // Process in batches
      if (eventBuffer.length >= 50) {
        await processBatch(eventBuffer.splice(0, 50));
      }
    }
  }, 2000);

  return { filterId, stop: () => clearInterval(interval) };
}

3. Block-Aware Event Processor with Reorg Handling
Detect chain reorganizations by checking the removed flag:
async function safeEventProcessor(provider, filterParams) {
  const filterId = await provider.send('eth_newFilter', [filterParams]);
  const processedEvents = new Map();

  setInterval(async () => {
    const changes = await provider.send('eth_getFilterChanges', [filterId]);
    for (const log of changes) {
      const eventKey = `${log.transactionHash}-${log.logIndex}`;
      if (log.removed) {
        // Chain reorganization — undo previously processed event
        console.warn(`Reorg detected: removing event ${eventKey}`);
        processedEvents.delete(eventKey);
        await rollbackEvent(log);
      } else {
        processedEvents.set(eventKey, log);
        await processEvent(log);
      }
    }
  }, 2000);
}

Error Handling
Common errors and solutions:
| Error Code | Description | Solution |
|---|---|---|
| -32000 | Filter not found | Filter expired or was uninstalled — recreate with eth_newFilter |
| -32600 | Invalid request | Verify the filter ID is correctly formatted as a hex string |
| -32603 | Internal error | Node may be overloaded — retry with exponential backoff |
| -32005 | Rate limit exceeded | Reduce polling frequency or implement client-side rate limiting |
A resilient polling loop that combines these recovery strategies:
async function resilientPoll(provider, filterId, createFilter, interval = 2000) {
  let currentFilterId = filterId;
  let retries = 0;

  while (true) {
    try {
      const changes = await provider.send('eth_getFilterChanges', [currentFilterId]);
      retries = 0;
      if (changes.length > 0) {
        return changes;
      }
    } catch (error) {
      if (error.message.includes('filter not found')) {
        console.log('Filter expired — recreating...');
        currentFilterId = await createFilter();
      } else if (error.code === -32005) {
        const delay = Math.pow(2, retries) * 1000;
        retries++;
        await new Promise(r => setTimeout(r, delay));
      } else {
        throw error;
      }
    }
    await new Promise(r => setTimeout(r, interval));
  }
}

Related Methods
- eth_newFilter — Create a log/event filter to poll with this method
- eth_newBlockFilter — Create a block filter for new block notifications
- eth_getFilterLogs — Get all logs matching a filter (full history, not incremental)
- eth_uninstallFilter — Remove a filter when no longer needed
- eth_getLogs — Query logs directly without creating a filter
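Since filters a node is no longer polling linger until the idle timeout, it helps to pair filter creation with eth_uninstallFilter. The `withFilter` wrapper below is a hypothetical sketch of that pattern, again assuming a raw `provider.send(method, params)` helper:

```javascript
// Sketch: create a filter, hand it to a polling callback, and guarantee
// cleanup via eth_uninstallFilter even if polling throws.
async function withFilter(provider, filterParams, poll) {
  const filterId = await provider.send('eth_newFilter', [filterParams]);
  try {
    return await poll(filterId);
  } finally {
    // eth_uninstallFilter returns true if the filter existed and was removed
    await provider.send('eth_uninstallFilter', [filterId]);
  }
}
```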
eth_newFilter
Create an event log filter on Avalanche. Essential for event monitoring, contract activity tracking, and DeFi event streaming for institutional RWA tokenization ($18B+ transfer volume), gaming subnets, and enterprise blockchains.
eth_getFilterLogs
Returns all logs matching a previously created filter on Avalanche. Essential for initial log retrieval, backfilling event data, and one-time historical queries for institutional RWA tokenization ($18B+ transfer volume), gaming subnets, and enterprise blockchains.