
state_getKeysPaged

Description

Retrieves storage keys matching a given prefix with pagination support, allowing efficient traversal of large storage maps without overwhelming the client or server. Unlike state_getKeys, which returns all matching keys in a single response, this method returns a limited batch of keys starting from a specified position, enabling controlled iteration through potentially massive key sets. This is the recommended approach for enumerating storage maps with hundreds, thousands, or millions of entries. The pagination mechanism uses the last key from the previous page as the starting point for the next request, ensuring complete coverage without gaps or duplicates.

Request Example

curl -s https://api-acala.n.dwellir.com/YOUR_API_KEY \
  -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","id":1,"method":"state_getKeysPaged","params":["0x26aa394e...6371da9", 100, "0x", null]}'

Parameters

Parameter    Type    Required  Description
prefix       string  Yes       Hexadecimal storage key prefix to match
count        number  Yes       Maximum number of keys to return in this page (recommended: 100-1000)
start_key    string  No        Key to resume after (exclusive). Use "0x" for the first page, then the last key from the previous page
block_hash   string  No        Block hash to query at. Omit to query the latest state
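
As a concrete illustration, here is a minimal Python sketch of a single call using the requests library. The endpoint URL is the placeholder from the request example above, and get_keys_paged is a hypothetical helper name, not part of any official SDK.

import requests

# Placeholder endpoint; substitute your own API key.
RPC_URL = "https://api-acala.n.dwellir.com/YOUR_API_KEY"

def get_keys_paged(prefix, count, start_key="0x", block_hash=None):
    """Perform one state_getKeysPaged call; return a list of hex key strings."""
    payload = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "state_getKeysPaged",
        "params": [prefix, count, start_key, block_hash],
    }
    response = requests.post(RPC_URL, json=payload, timeout=30)
    response.raise_for_status()
    body = response.json()
    if "error" in body:
        raise RuntimeError(body["error"])
    return body["result"]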

Response Format

Returns an array of storage key strings in hexadecimal format, containing at most count entries. If the array has fewer entries than requested, you've reached the end of the storage map. If it has exactly count entries, more keys may remain; make another request using the last returned key as start_key.

Pagination Pattern

Implement a loop to retrieve all keys: start with start_key = "0x", collect the returned keys, then use the last key from the response as the start_key for the next request. Continue until a response contains fewer keys than requested, indicating the end of the storage map. This pattern ensures complete enumeration while maintaining manageable response sizes.
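
A minimal sketch of that loop in Python, reusing the get_keys_paged helper from the Parameters section; the default page size of 500 is an arbitrary choice within the recommended range.

def get_all_keys(prefix, page_size=500, block_hash=None):
    """Enumerate every key under prefix, one page at a time."""
    all_keys = []
    start_key = "0x"  # "0x" requests the first page
    while True:
        page = get_keys_paged(prefix, page_size, start_key, block_hash)
        all_keys.extend(page)
        if len(page) < page_size:
            break  # short page: end of the storage map
        start_key = page[-1]  # resume after the last key returned
    return all_keys

Note that a map whose size is an exact multiple of the page size costs one extra request that returns an empty page; the loop still terminates correctly.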

Use Cases

  • Large Account Lists: Enumerate all accounts on chains with thousands or millions of users
  • Token Holder Analysis: Iterate through all holders of specific tokens for analytics
  • Validator Sets: Process complete validator and nominator lists for staking analysis
  • Storage Audits: Systematically scan entire storage maps for compliance or debugging
  • Data Migration: Export complete storage state for migration or backup purposes
  • Historical Analysis: Build complete datasets from historical chain states for research

Best Practices

  • Choose page sizes to fit the use case: smaller pages (100-500) for real-time processing, larger pages (1000-5000) for batch jobs over reliable connections.
  • When consistency is required, use the same block hash across all pagination requests; otherwise storage modifications between pages can produce inconsistent results.
  • Implement retry logic for failed requests (sketched below); network issues are common during large scans.
  • Monitor memory usage when accumulating large numbers of keys.
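
To illustrate the retry advice, here is a minimal exponential-backoff wrapper around the earlier get_keys_paged sketch; the attempt count and delays are arbitrary assumptions, not values mandated by the node.

import time
import requests

def get_keys_paged_with_retry(prefix, count, start_key="0x",
                              block_hash=None, max_attempts=5):
    """Retry transient failures with exponential backoff."""
    for attempt in range(max_attempts):
        try:
            return get_keys_paged(prefix, count, start_key, block_hash)
        except requests.RequestException:
            if attempt == max_attempts - 1:
                raise  # give up after the final attempt
            time.sleep(2 ** attempt)  # wait 1s, 2s, 4s, ...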

Performance Optimization

Tune the count parameter to balance request overhead against response size. Values that are too small cause excessive round trips, while values that are too large may hit node or network limits. For production applications processing millions of keys, consider parallel pagination using different start keys, though this requires careful coordination to avoid overlaps or gaps.
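
One possible parallel scheme, sketched below: partition the key space into 256 disjoint sub-prefixes by appending each possible next byte to the map prefix, then paginate each sub-prefix independently. This assumes every entry in the map extends the prefix with at least one hashed byte (true for standard Substrate storage maps), so the partitions cannot overlap or leave gaps. It reuses the get_all_keys sketch above; the worker count is arbitrary.

from concurrent.futures import ThreadPoolExecutor

def get_all_keys_parallel(prefix, page_size=500, block_hash=None, workers=8):
    """Fan out over 256 disjoint sub-prefixes, then merge the results."""
    sub_prefixes = [f"{prefix}{b:02x}" for b in range(256)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        pages = list(pool.map(
            lambda p: get_all_keys(p, page_size, block_hash), sub_prefixes
        ))
    return [key for page in pages for key in page]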

Storage Consistency

When paginating without specifying a block hash, the underlying storage may change between requests if new blocks are produced. For applications requiring consistent snapshots, always specify a block hash and ensure the node is an archive node that retains historical states.
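
A sketch of a pinned snapshot: fetch the finalized head once via the standard chain_getFinalizedHead RPC, then pass that hash to every page request. It reuses RPC_URL and get_all_keys from the sketches above, and the prefix is the truncated placeholder from the request example.

import requests

def get_finalized_head():
    """Fetch the hash of the latest finalized block."""
    payload = {"jsonrpc": "2.0", "id": 1,
               "method": "chain_getFinalizedHead", "params": []}
    response = requests.post(RPC_URL, json=payload, timeout=30)
    response.raise_for_status()
    return response.json()["result"]

prefix = "0x26aa394e...6371da9"  # truncated placeholder; substitute the full hex key
snapshot_hash = get_finalized_head()
keys = get_all_keys(prefix, block_hash=snapshot_hash)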