BatchGetObjects - Efficient Bulk Object Retrieval
Retrieve multiple Sui objects in a single gRPC request for maximum efficiency. Learn how to use BatchGetObjects to minimize latency and optimize blockchain data fetching with Dwellir's Sui infrastructure.
Bulk Object Retrieval with Maximum Efficiency
The BatchGetObjects method enables retrieval of multiple Sui objects in a single gRPC call, dramatically reducing network overhead and improving application performance. Instead of making individual requests for each object, batch operations consolidate multiple queries into one efficient round-trip, making it essential for applications that need to fetch multiple objects simultaneously.
Overview
When building blockchain applications, you often need to retrieve multiple objects at once—whether loading a user's NFT collection, fetching related objects, or displaying portfolio data. Making individual requests for each object creates unnecessary network overhead and increases latency. The BatchGetObjects method solves this by batching multiple object queries into a single request.
Performance Impact
Individual Requests vs Batch:
- 10 individual requests: ~150ms total latency (10 × 15ms)
- 1 batch request: ~18ms total latency
- Performance gain: 8.3x faster
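As a quick sanity check, the arithmetic behind those figures (ten sequential ~15 ms round-trips versus a single ~18 ms batch call) works out as follows:

```typescript
// Back-of-the-envelope check of the figures above: ten sequential requests at
// ~15 ms each versus a single ~18 ms batch round-trip.
const individualMs = 10 * 15;            // 150 ms total for individual calls
const batchMs = 18;                      // one batch request
const speedup = individualMs / batchMs;  // ≈ 8.3x

console.log(`${individualMs}ms vs ${batchMs}ms -> ${speedup.toFixed(1)}x faster`);
```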
Key Benefits
- Reduced Latency: Single round-trip instead of multiple sequential requests
- Lower Overhead: One connection, one authentication, one response
- Network Efficiency: Reduced bandwidth consumption through HTTP/2 multiplexing
- Simplified Code: One request handler instead of managing multiple concurrent calls
- Cost Effective: Fewer API calls with bundled queries
Method Signature
Service: sui.rpc.v2.LedgerService
Method: BatchGetObjects
Type: Unary RPC (single request, single response)
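As a concrete sketch of the request shape, the helper below builds the body used throughout the examples on this page: one entry per object ID plus an optional read mask. The field names (`requests`, `object_id`, `read_mask`) mirror the snippets below but are an assumption here, not the generated client — check them against the sui.rpc.v2 proto definitions before use.

```typescript
// Hypothetical request builder mirroring the request shapes used on this page.
// A real client would use the types generated from the sui.rpc.v2 protos.
interface BatchGetObjectsRequest {
  requests: { object_id: string }[];
  read_mask?: { paths: string[] };
}

function buildBatchRequest(
  objectIds: string[],
  fields?: string[]
): BatchGetObjectsRequest {
  const request: BatchGetObjectsRequest = {
    // One sub-request per object ID
    requests: objectIds.map(id => ({ object_id: id }))
  };
  // Attach a field mask only when specific fields were requested
  if (fields && fields.length > 0) {
    request.read_mask = { paths: fields };
  }
  return request;
}

const req = buildBatchRequest(['0x123...', '0x456...'], ['object_id', 'owner']);
console.log(JSON.stringify(req));
```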
Use Cases
1. NFT Portfolio Display
Load an entire NFT collection efficiently:
```typescript
interface NFTPortfolio {
  id: string;
  name: string;
  image: string;
  collection: string;
  owner: string;
}

async function loadUserNFTs(nftObjectIds: string[]): Promise<NFTPortfolio[]> {
  // Fetch all NFTs in one batch request
  const objects = await batchGetObjects(nftObjectIds);
  return objects.map(obj => ({
    id: obj.object_id,
    name: extractName(obj.contents),
    image: extractImageUrl(obj.contents),
    collection: obj.object_type,
    owner: obj.owner?.address || ''
  }));
}

function extractName(contents: any): string {
  // Parse object contents for the NFT name
  return contents?.fields?.name || 'Unnamed NFT';
}

function extractImageUrl(contents: any): string {
  // Parse object contents for the image URL
  return contents?.fields?.url || '';
}
```

2. Multi-Object Transaction Preparation
Fetch all objects needed for a complex transaction:
```typescript
interface TransactionObjects {
  coins: any[];
  nfts: any[];
  packages: any[];
}

async function prepareTransactionObjects(
  coinIds: string[],
  nftIds: string[],
  packageIds: string[]
): Promise<TransactionObjects> {
  // Combine all object IDs into a single batch request
  const allIds = [...coinIds, ...nftIds, ...packageIds];
  const objects = await batchGetObjects(allIds);

  // Categorize the results by type
  const coins = objects.filter(obj =>
    obj.object_type.includes('::coin::Coin')
  );
  const nfts = objects.filter(obj =>
    !obj.object_type.includes('::coin::Coin') &&
    obj.has_public_transfer
  );
  const packages = objects.filter(obj =>
    obj.object_type.includes('::package::Package')
  );

  return { coins, nfts, packages };
}
```

3. Object Dependency Resolution
Resolve object dependencies for complex operations:
```typescript
async function resolveObjectDependencies(
  rootObjectId: string
): Promise<Map<string, any>> {
  const resolved = new Map<string, any>();
  const toFetch = new Set<string>([rootObjectId]);
  const fetched = new Set<string>();

  while (toFetch.size > 0) {
    // Collect IDs that have not been fetched yet
    const batchIds = Array.from(toFetch).filter(id => !fetched.has(id));
    if (batchIds.length === 0) break;

    // Fetch the batch
    const objects = await batchGetObjects(batchIds);

    // Record each object and queue its dependencies
    objects.forEach(obj => {
      resolved.set(obj.object_id, obj);
      fetched.add(obj.object_id);
      toFetch.delete(obj.object_id);

      const deps = extractDependencies(obj);
      deps.forEach(depId => {
        if (!fetched.has(depId)) {
          toFetch.add(depId);
        }
      });
    });
  }

  return resolved;
}

function extractDependencies(obj: any): string[] {
  // Parse object contents for referenced object IDs;
  // the implementation depends on the object's structure
  const deps: string[] = [];
  return deps;
}
```

4. Batch Validation
Validate multiple objects before operation:
```typescript
interface ValidationResult {
  valid: boolean;
  object_id: string;
  reason?: string;
}

async function validateObjectsForTransfer(
  objectIds: string[]
): Promise<ValidationResult[]> {
  const objects = await batchGetObjects(objectIds);
  return objects.map((obj, index) => {
    // Check that the object was found; fall back to the requested ID,
    // assuming the response preserves request order
    if (!obj.object_id) {
      return {
        valid: false,
        object_id: objectIds[index],
        reason: 'Object not found'
      };
    }
    // Check transfer permission
    if (!obj.has_public_transfer) {
      return {
        valid: false,
        object_id: obj.object_id,
        reason: 'Object not transferable'
      };
    }
    // Check ownership
    const ownerType = obj.owner?.kind;
    if (ownerType !== 0) { // 0 = ADDRESS_OWNER
      return {
        valid: false,
        object_id: obj.object_id,
        reason: 'Object has non-address owner'
      };
    }
    return {
      valid: true,
      object_id: obj.object_id
    };
  });
}
```

Performance Optimization
Optimal Batch Sizes
Different batch sizes have different performance characteristics:
| Batch Size | Latency | Throughput | Best For |
|---|---|---|---|
| 1-10 | ~18ms | High | Small collections |
| 11-50 | ~35ms | Optimal | Most use cases |
| 51-100 | ~65ms | Good | Large batches |
| 100+ | ~120ms | Lower | Split into multiple batches |
Recommendation: Use batch sizes of 25-50 for optimal performance.
Chunking Large Requests
For very large collections, split into chunks:
```typescript
async function fetchLargeCollection(
  objectIds: string[],
  chunkSize: number = 50
): Promise<any[]> {
  // Split the IDs into chunks of chunkSize
  const chunks: string[][] = [];
  for (let i = 0; i < objectIds.length; i += chunkSize) {
    chunks.push(objectIds.slice(i, i + chunkSize));
  }
  console.log(`Fetching ${objectIds.length} objects in ${chunks.length} batches`);

  // Fetch chunks sequentially to avoid rate limits
  const results: any[] = [];
  for (const chunk of chunks) {
    const objects = await batchGetObjects(chunk);
    results.push(...objects);
    // Small delay between batches to respect rate limits
    await new Promise(resolve => setTimeout(resolve, 100));
  }
  return results;
}
```

Parallel Batch Processing
For independent batches, process in parallel:
```typescript
async function parallelBatchFetch(
  objectIdGroups: string[][]
): Promise<any[][]> {
  // Issue one batch request per group, all in flight at once
  const promises = objectIdGroups.map(group =>
    batchGetObjects(group)
  );
  return await Promise.all(promises);
}

// Usage
const userNFTs = ['0x123...', '0x456...'];
const userCoins = ['0xabc...', '0xdef...'];
const userPackages = ['0x789...', '0x135...'];

const [nfts, coins, packages] = await parallelBatchFetch([
  userNFTs,
  userCoins,
  userPackages
]);
```

Best Practices
1. Use Field Masking
Request only necessary fields to minimize bandwidth:
```typescript
// ✅ Good: request only the fields you need
const request = {
  requests: objectIds.map(id => ({ object_id: id })),
  read_mask: {
    paths: ['object_id', 'owner', 'object_type']
  }
};
```

```typescript
// ❌ Bad: omitting the read mask returns all fields
const request = {
  requests: objectIds.map(id => ({ object_id: id }))
};
```

2. Implement Caching
Cache frequently accessed objects:
```typescript
const objectCache = new Map<string, { data: any; timestamp: number }>();
const CACHE_TTL = 60000; // 1 minute

async function getCachedObjects(objectIds: string[]): Promise<any[]> {
  const now = Date.now();
  const uncached: string[] = [];
  const results: any[] = new Array(objectIds.length);

  // Serve fresh entries from the cache
  objectIds.forEach((id, index) => {
    const cached = objectCache.get(id);
    if (cached && now - cached.timestamp < CACHE_TTL) {
      results[index] = cached.data;
    } else {
      uncached.push(id);
    }
  });

  // Fetch the remaining objects in one batch and cache them
  if (uncached.length > 0) {
    const fetched = await batchGetObjects(uncached);
    fetched.forEach(obj => {
      objectCache.set(obj.object_id, {
        data: obj,
        timestamp: now
      });
      const index = objectIds.indexOf(obj.object_id);
      results[index] = obj;
    });
  }

  return results.filter(Boolean);
}
```

3. Handle Rate Limits
Implement exponential backoff:
```typescript
import * as grpc from '@grpc/grpc-js';

async function batchWithBackoff(
  objectIds: string[],
  maxRetries: number = 3
): Promise<any[]> {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      return await batchGetObjects(objectIds);
    } catch (error: any) {
      if (error.code === grpc.status.RESOURCE_EXHAUSTED && attempt < maxRetries - 1) {
        // Exponential backoff: 1s, 2s, 4s, ...
        const delay = Math.pow(2, attempt) * 1000;
        console.log(`Rate limited. Retrying in ${delay}ms...`);
        await new Promise(resolve => setTimeout(resolve, delay));
      } else {
        throw error;
      }
    }
  }
  throw new Error('Max retries exceeded');
}
```

Related Methods
- GetObject - Single object retrieval
- ListOwnedObjects - Discover objects owned by address
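A common pattern is to combine the two related methods: discover object IDs with ListOwnedObjects, then hydrate them in a single BatchGetObjects call. The sketch below assumes both calls are wrapped in simple helpers; the fetchers are injected as parameters so the composition can be exercised without a live endpoint.

```typescript
// Illustrative discover-then-hydrate pipeline (helper shapes are assumptions):
// ListOwnedObjects yields object IDs, BatchGetObjects hydrates them at once.
type IdLister = (owner: string) => Promise<string[]>;
type BatchFetcher = (ids: string[]) => Promise<any[]>;

async function loadOwnedObjects(
  owner: string,
  listOwned: IdLister,
  batchGet: BatchFetcher
): Promise<any[]> {
  const ids = await listOwned(owner);
  if (ids.length === 0) return []; // nothing owned, skip the batch call
  return batchGet(ids);            // one round-trip instead of ids.length calls
}
```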
Performance Metrics
Batch vs Individual Requests:
| Objects | Individual Requests | Batch Request | Improvement |
|---|---|---|---|
| 10 | 150ms | 18ms | 8.3x faster |
| 50 | 750ms | 35ms | 21x faster |
| 100 | 1,500ms | 65ms | 23x faster |
Need help with batch operations? Contact our support team or check the gRPC overview.