BatchGetObjects
Efficient Bulk Object Retrieval#
The BatchGetObjects method enables retrieval of multiple Sui objects in a single gRPC call, dramatically reducing network overhead and improving application performance. Instead of making individual requests for each object, batch operations consolidate multiple queries into one efficient round-trip, making it essential for applications that need to fetch multiple objects simultaneously.
Overview#
When building blockchain applications, you often need to retrieve multiple objects at once—whether loading a user's NFT collection, fetching related objects, or displaying portfolio data. Making individual requests for each object creates unnecessary network overhead and increases latency. The BatchGetObjects method solves this by batching multiple object queries into a single request.
Performance Impact#
Individual Requests vs Batch:
- 10 individual requests: ~150ms total latency (10 × 15ms)
- 1 batch request: ~18ms total latency
- Performance gain: 8.3x faster
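These figures follow from a simple latency model, sketched below. The constants are assumptions for illustration: a ~15ms network round trip (stated above), and a small per-object serving cost on the batch path inferred from the batch numbers, not measured separately.

```typescript
// Back-of-envelope model behind the figures above (assumed constants).
const RTT_MS = 15;         // assumed network round-trip time
const PER_OBJECT_MS = 0.3; // assumed marginal cost per object in a batch
const sequentialMs = (n: number) => n * RTT_MS;              // n round trips
const batchedMs = (n: number) => RTT_MS + PER_OBJECT_MS * n; // one round trip
console.log(sequentialMs(10)); // 150
console.log(batchedMs(10));    // 18
```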
Key Benefits#
- Reduced Latency: Single round-trip instead of multiple sequential requests
- Lower Overhead: One connection, one authentication, one response
- Network Efficiency: Reduced bandwidth consumption through HTTP/2 multiplexing
- Simplified Code: One request handler instead of managing multiple concurrent calls
- Cost Effective: Fewer API calls with bundled queries
Method Signature#
Service: sui.rpc.v2beta2.LedgerService
Method: BatchGetObjects
Type: Unary RPC (single request, single response)
Parameters#
| Parameter | Type | Required | Description |
|---|---|---|---|
| requests | repeated GetObjectRequest | Yes | Array of object requests to fetch |
| read_mask | FieldMask | No | Fields to include in all responses (applies to every object) |
GetObjectRequest Structure#
Each request in the batch contains:
| Field | Type | Description |
|---|---|---|
| object_id | string | Unique identifier of the object to retrieve |
Field Mask Options#
Apply a single field mask to all objects in the batch:
| Path | Description |
|---|---|
| object_id | Object identifier |
| version | Version number |
| digest | Object digest hash |
| owner | Ownership information |
| object_type | Type classification |
| has_public_transfer | Transfer permissions |
| contents | Object data |
| previous_transaction | Last transaction that modified the object |
| storage_rebate | Storage rebate amount |
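Putting the parameters together, a minimal request body might look like the sketch below. It uses the plain-object shape accepted by @grpc/proto-loader (as in the TypeScript example later on this page); the IDs are the well-known system objects from the Go example.

```typescript
// A minimal sketch: one GetObjectRequest per ID, a single mask for all of them.
const request = {
  requests: [
    { object_id: '0x5' }, // Sui system state object
    { object_id: '0x6' }  // Clock object
  ],
  read_mask: {
    paths: ['object_id', 'version', 'owner', 'object_type']
  }
};
```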
Response Structure#
Returns one result per request:

```protobuf
message BatchGetObjectsResponse {
  // One entry per request, in request order; each entry holds either
  // the object or a per-object error status.
  repeated GetObjectResult objects = 1;
}
```

Each successful entry contains an Object with the same fields as GetObject, matching the order of the request array. Per-object failures (for example, an unknown ID) are reported in the corresponding entry rather than failing the whole batch, as the Go example below shows.
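Because results preserve request order, you can pair each entry with the ID that produced it. A small sketch over the raw response entries, assuming the plain-object shape decoded by @grpc/proto-loader in the TypeScript example below:

```typescript
// Pair each raw response entry with its request ID; order is preserved.
// Each entry carries either `object` (success) or `error` (per-object failure).
function pairResults(objectIds: string[], entries: any[]) {
  return objectIds.map((id, i) => {
    const entry = entries[i] ?? {};
    return entry.object
      ? { id, ok: true as const, object: entry.object }
      : { id, ok: false as const, error: entry.error?.message ?? 'missing result' };
  });
}
```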
Code Examples#
TypeScript#
```typescript
import * as grpc from '@grpc/grpc-js';
import * as protoLoader from '@grpc/proto-loader';

// Configuration
const ENDPOINT = 'api-sui-mainnet-full.n.dwellir.com';
const API_TOKEN = 'your_api_token_here';

// Load proto
const packageDefinition = protoLoader.loadSync(
  './protos/ledger.proto',
  {
    keepCase: true,
    longs: String,
    enums: String,
    defaults: true,
    oneofs: true,
    includeDirs: ['./protos']
  }
);

const protoDescriptor = grpc.loadPackageDefinition(packageDefinition) as any;
const credentials = grpc.credentials.createSsl();

const client = new protoDescriptor.sui.rpc.v2beta2.LedgerService(
  ENDPOINT,
  credentials
);

const metadata = new grpc.Metadata();
metadata.add('x-api-key', API_TOKEN);

// Batch fetch multiple objects
async function batchGetObjects(objectIds: string[]): Promise<any[]> {
  return new Promise((resolve, reject) => {
    const request = {
      requests: objectIds.map(id => ({ object_id: id })),
      read_mask: {
        paths: [
          'object_id',
          'version',
          'digest',
          'owner',
          'object_type',
          'has_public_transfer'
        ]
      }
    };
    client.BatchGetObjects(request, metadata, (error: any, response: any) => {
      if (error) {
        console.error('BatchGetObjects error:', error.message);
        reject(error);
        return;
      }
      // Each entry holds either `object` or `error`; unwrap the objects and
      // fall back to the raw entry so downstream filters can detect misses.
      resolve((response.objects || []).map((r: any) => r.object ?? r));
    });
  });
}

// Example: Fetch NFT collection
async function fetchNFTCollection(collectionObjects: string[]): Promise<void> {
  console.log(`Fetching ${collectionObjects.length} NFTs...`);
  const startTime = Date.now();
  const objects = await batchGetObjects(collectionObjects);
  const elapsed = Date.now() - startTime;

  console.log(`\n✓ Fetched ${objects.length} objects in ${elapsed}ms`);
  console.log(`Average: ${(elapsed / objects.length).toFixed(2)}ms per object\n`);

  objects.forEach((obj, index) => {
    console.log(`NFT ${index + 1}:`);
    console.log(`  ID: ${obj.object_id}`);
    console.log(`  Type: ${obj.object_type}`);
    console.log(`  Owner: ${obj.owner?.address || 'N/A'}`);
    console.log(`  Transferable: ${obj.has_public_transfer ? 'Yes' : 'No'}`);
    console.log('');
  });
}

// Example: Fetch with error handling
async function safeBatchGetObjects(objectIds: string[]) {
  try {
    const objects = await batchGetObjects(objectIds);
    return {
      success: true,
      objects: objects,
      found: objects.filter(obj => obj.object_id).length,
      missing: objects.filter(obj => !obj.object_id).length
    };
  } catch (error: any) {
    return {
      success: false,
      error: error.message,
      objects: []
    };
  }
}

// Usage
const nftIds = [
  '0x3bf5e563d2e5d59bb8c22eaf91830be3e297929fb0c496f7afb90d03ff7edc23',
  '0x5a8d8c4f7e6b9a2c3d1e0f8b7a6c5d4e3f2a1b0c9d8e7f6a5b4c3d2e1f0a9b8c',
  '0x7b8c9d0e1f2a3b4c5d6e7f8a9b0c1d2e3f4a5b6c7d8e9f0a1b2c3d4e5f6a7b8'
];

fetchNFTCollection(nftIds).catch(console.error);
```
Python#

```python
import grpc
import time
from typing import Dict, List, Optional

from google.protobuf import field_mask_pb2

import ledger_service_pb2
import ledger_service_pb2_grpc

# Configuration
ENDPOINT = 'api-sui-mainnet-full.n.dwellir.com'
API_TOKEN = 'your_api_token_here'


class BatchObjectFetcher:
    def __init__(self, endpoint: str, api_token: str):
        self.endpoint = endpoint
        self.api_token = api_token
        self.channel = None
        self.client = None

    def connect(self):
        """Establish secure gRPC connection"""
        credentials = grpc.ssl_channel_credentials()
        self.channel = grpc.secure_channel(self.endpoint, credentials)
        self.client = ledger_service_pb2_grpc.LedgerServiceStub(self.channel)

    def batch_get_objects(
        self,
        object_ids: List[str],
        fields: Optional[List[str]] = None
    ) -> List[Dict]:
        """
        Fetch multiple objects in a single request

        Args:
            object_ids: List of object IDs to retrieve
            fields: Optional list of fields to include

        Returns:
            List of object messages
        """
        if not self.client:
            self.connect()

        # Setup authentication
        metadata = [('x-api-key', self.api_token)]

        # Default fields if none specified
        if fields is None:
            fields = [
                'object_id',
                'version',
                'digest',
                'owner',
                'object_type',
                'has_public_transfer'
            ]

        # Build batch request
        requests = [
            ledger_service_pb2.GetObjectRequest(object_id=obj_id)
            for obj_id in object_ids
        ]
        request = ledger_service_pb2.BatchGetObjectsRequest(
            requests=requests,
            read_mask=field_mask_pb2.FieldMask(paths=fields)
        )

        try:
            start_time = time.time()
            response = self.client.BatchGetObjects(
                request,
                metadata=metadata,
                timeout=30.0
            )
            elapsed = time.time() - start_time

            # Each result holds either `object` or `error`; keep the objects
            objects = [r.object for r in response.objects if r.HasField('object')]
            print(f'✓ Fetched {len(objects)} objects in {elapsed*1000:.2f}ms')
            if objects:
                print(f'  Average: {(elapsed*1000)/len(objects):.2f}ms per object')
            return objects
        except grpc.RpcError as e:
            print(f'gRPC Error: {e.code()}: {e.details()}')
            raise

    def fetch_with_retry(
        self,
        object_ids: List[str],
        max_retries: int = 3
    ) -> List[Dict]:
        """Fetch with automatic retry on failure"""
        for attempt in range(max_retries):
            try:
                return self.batch_get_objects(object_ids)
            except Exception:
                if attempt == max_retries - 1:
                    raise
                wait_time = 2 ** attempt
                print(f'Retry {attempt + 1}/{max_retries} after {wait_time}s...')
                time.sleep(wait_time)

    def close(self):
        """Close gRPC channel"""
        if self.channel:
            self.channel.close()


# Example usage
def main():
    fetcher = BatchObjectFetcher(ENDPOINT, API_TOKEN)
    try:
        # Fetch multiple NFTs
        nft_ids = [
            '0x3bf5e563d2e5d59bb8c22eaf91830be3e297929fb0c496f7afb90d03ff7edc23',
            '0x5a8d8c4f7e6b9a2c3d1e0f8b7a6c5d4e3f2a1b0c9d8e7f6a5b4c3d2e1f0a9b8c',
            '0x7b8c9d0e1f2a3b4c5d6e7f8a9b0c1d2e3f4a5b6c7d8e9f0a1b2c3d4e5f6a7b8'
        ]
        objects = fetcher.batch_get_objects(nft_ids)

        print('\nFetched Objects:')
        print('=' * 60)
        for i, obj in enumerate(objects, 1):
            print(f'\nObject {i}:')
            print(f'  ID: {obj.object_id}')
            print(f'  Type: {obj.object_type}')
            print(f'  Version: {obj.version}')
            print(f'  Owner: {obj.owner.address if obj.owner.address else "N/A"}')
    except Exception as e:
        print(f'Error: {e}')
    finally:
        fetcher.close()


if __name__ == '__main__':
    main()
```
Go#

```go
package main

import (
    "context"
    "fmt"
    "log"
    "strings"
    "time"

    "google.golang.org/grpc"
    "google.golang.org/grpc/credentials"
    "google.golang.org/grpc/metadata"
    "google.golang.org/protobuf/types/known/fieldmaskpb"

    pb "sui-grpc-client/sui/rpc/v2"
)

const (
    batchEndpoint = "api-sui-mainnet-full.n.dwellir.com:443"
    batchAPIKey   = "API_KEY"
)

type BatchFetcher struct {
    conn   *grpc.ClientConn
    client pb.LedgerServiceClient
}

func NewBatchFetcher(endpoint, token string) (*BatchFetcher, error) {
    creds := credentials.NewClientTLSFromCert(nil, "")
    conn, err := grpc.NewClient(
        endpoint,
        grpc.WithTransportCredentials(creds),
    )
    if err != nil {
        return nil, fmt.Errorf("failed to connect: %w", err)
    }
    client := pb.NewLedgerServiceClient(conn)
    return &BatchFetcher{
        conn:   conn,
        client: client,
    }, nil
}

func (bf *BatchFetcher) BatchGetObjects(
    ctx context.Context,
    objectIDs []string,
    fields []string,
) ([]*pb.GetObjectResult, error) {
    // Add authentication
    ctx = metadata.AppendToOutgoingContext(ctx, "x-api-key", batchAPIKey)

    // Default fields if none provided
    if len(fields) == 0 {
        fields = []string{
            "object_id",
            "version",
            "digest",
            "owner",
            "object_type",
            "has_public_transfer",
        }
    }

    // Build request array
    requests := make([]*pb.GetObjectRequest, len(objectIDs))
    for i, id := range objectIDs {
        objID := id // Create a copy for the pointer
        requests[i] = &pb.GetObjectRequest{
            ObjectId: &objID,
        }
    }

    // Create batch request
    request := &pb.BatchGetObjectsRequest{
        Requests: requests,
        ReadMask: &fieldmaskpb.FieldMask{
            Paths: fields,
        },
    }

    // Execute batch request
    start := time.Now()
    response, err := bf.client.BatchGetObjects(ctx, request)
    elapsed := time.Since(start)
    if err != nil {
        return nil, fmt.Errorf("batch request failed: %w", err)
    }

    fmt.Printf("✓ Fetched %d objects in %v\n", len(response.Objects), elapsed)
    fmt.Printf("  Average: %.2fms per object\n",
        float64(elapsed.Milliseconds())/float64(len(response.Objects)))
    return response.Objects, nil
}

func (bf *BatchFetcher) Close() error {
    return bf.conn.Close()
}

func main() {
    fetcher, err := NewBatchFetcher(batchEndpoint, batchAPIKey)
    if err != nil {
        log.Fatalf("Failed to create fetcher: %v", err)
    }
    defer fetcher.Close()

    ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
    defer cancel()

    // Fetch multiple objects - using well-known Sui system objects
    objectIDs := []string{
        "0x5",   // Sui system state object
        "0x6",   // Clock object
        "0x403", // Random object
    }

    results, err := fetcher.BatchGetObjects(ctx, objectIDs, nil)
    if err != nil {
        log.Fatalf("Failed to fetch objects: %v", err)
    }

    fmt.Println("\nFetched Objects:")
    fmt.Println(strings.Repeat("=", 60))
    for i, result := range results {
        fmt.Printf("\nObject %d:\n", i+1)
        // Check if result is an object or an error
        if obj := result.GetObject(); obj != nil {
            fmt.Printf("  ID: %s\n", obj.GetObjectId())
            fmt.Printf("  Version: %d\n", obj.GetVersion())
            fmt.Printf("  Digest: %s\n", obj.GetDigest())
            if owner := obj.GetOwner(); owner != nil {
                switch owner.GetKind() {
                case pb.Owner_ADDRESS:
                    fmt.Printf("  Owner: Address (%s)\n", owner.GetAddress())
                case pb.Owner_OBJECT:
                    fmt.Printf("  Owner: Object (%s)\n", owner.GetAddress())
                case pb.Owner_SHARED:
                    fmt.Printf("  Owner: Shared (version: %d)\n", owner.GetVersion())
                case pb.Owner_IMMUTABLE:
                    fmt.Printf("  Owner: Immutable\n")
                case pb.Owner_CONSENSUS_ADDRESS:
                    fmt.Printf("  Owner: Consensus Address (%s, version: %d)\n", owner.GetAddress(), owner.GetVersion())
                }
            }
            if objType := obj.GetObjectType(); objType != "" {
                fmt.Printf("  Type: %s\n", objType)
            }
        } else if errStatus := result.GetError(); errStatus != nil {
            fmt.Printf("  Error: %s (code: %d)\n", errStatus.GetMessage(), errStatus.GetCode())
        }
    }
}
```
Use Cases#
1. NFT Portfolio Display#
Load an entire NFT collection efficiently:
```typescript
interface NFTPortfolio {
  id: string;
  name: string;
  image: string;
  collection: string;
  owner: string;
}

async function loadUserNFTs(nftObjectIds: string[]): Promise<NFTPortfolio[]> {
  // Fetch all NFTs in one batch request
  // (the read_mask must include 'contents' for the extractors below)
  const objects = await batchGetObjects(nftObjectIds);
  return objects.map(obj => ({
    id: obj.object_id,
    name: extractName(obj.contents),
    image: extractImageUrl(obj.contents),
    collection: obj.object_type,
    owner: obj.owner?.address || ''
  }));
}

function extractName(contents: any): string {
  // Parse object contents for NFT name
  return contents?.fields?.name || 'Unnamed NFT';
}

function extractImageUrl(contents: any): string {
  // Parse object contents for image URL
  return contents?.fields?.url || '';
}
```
2. Multi-Object Transaction Preparation#
Fetch all objects needed for a complex transaction:
```typescript
interface TransactionObjects {
  coins: any[];
  nfts: any[];
  packages: any[];
}

async function prepareTransactionObjects(
  coinIds: string[],
  nftIds: string[],
  packageIds: string[]
): Promise<TransactionObjects> {
  // Combine all object IDs
  const allIds = [...coinIds, ...nftIds, ...packageIds];

  // Single batch request for all objects
  const objects = await batchGetObjects(allIds);

  // Categorize by type (object_type may be absent for not-found entries,
  // hence the optional chaining)
  const coins = objects.filter(obj =>
    obj.object_type?.includes('::coin::Coin')
  );
  const nfts = objects.filter(obj =>
    !obj.object_type?.includes('::coin::Coin') &&
    obj.has_public_transfer
  );
  const packages = objects.filter(obj =>
    obj.object_type?.includes('::package::Package')
  );
  return { coins, nfts, packages };
}
```
3. Object Dependency Resolution#
Resolve object dependencies for complex operations:
```typescript
async function resolveObjectDependencies(
  rootObjectId: string
): Promise<Map<string, any>> {
  const resolved = new Map<string, any>();
  const toFetch = new Set<string>([rootObjectId]);
  const fetched = new Set<string>();

  while (toFetch.size > 0) {
    // Get unfetched IDs
    const batchIds = Array.from(toFetch).filter(id => !fetched.has(id));
    if (batchIds.length === 0) break;

    // Fetch batch
    const objects = await batchGetObjects(batchIds);

    // Mark the whole batch as attempted so unresolvable IDs cannot loop forever
    batchIds.forEach(id => {
      fetched.add(id);
      toFetch.delete(id);
    });

    // Process each object
    objects.forEach(obj => {
      resolved.set(obj.object_id, obj);

      // Extract dependencies from object contents
      const deps = extractDependencies(obj);
      deps.forEach(depId => {
        if (!fetched.has(depId)) {
          toFetch.add(depId);
        }
      });
    });
  }

  return resolved;
}

function extractDependencies(obj: any): string[] {
  // Parse object contents for referenced object IDs
  const deps: string[] = [];
  // Implementation depends on object structure
  return deps;
}
```
4. Batch Validation#
Validate multiple objects before performing an operation:
```typescript
interface ValidationResult {
  valid: boolean;
  object_id: string;
  reason?: string;
}

async function validateObjectsForTransfer(
  objectIds: string[]
): Promise<ValidationResult[]> {
  const objects = await batchGetObjects(objectIds);

  return objects.map((obj, index) => {
    // Check if object exists (report the requested ID, not the empty result)
    if (!obj.object_id) {
      return {
        valid: false,
        object_id: objectIds[index],
        reason: 'Object not found'
      };
    }

    // Check transfer permission
    if (!obj.has_public_transfer) {
      return {
        valid: false,
        object_id: obj.object_id,
        reason: 'Object not transferable'
      };
    }

    // Check ownership (enums are decoded as strings because the loader
    // is configured with `enums: String`)
    if (obj.owner?.kind !== 'ADDRESS') {
      return {
        valid: false,
        object_id: obj.object_id,
        reason: 'Object has non-address owner'
      };
    }

    return {
      valid: true,
      object_id: obj.object_id
    };
  });
}
```
Performance Optimization#
Optimal Batch Sizes#
Different batch sizes have different performance characteristics:
| Batch Size | Latency | Throughput | Best For |
|---|---|---|---|
| 1-10 | ~18ms | High | Small collections |
| 11-50 | ~35ms | Optimal | Most use cases |
| 51-100 | ~65ms | Good | Large batches |
| 100+ | ~120ms | Lower | Split into multiple batches |
Recommendation: Use batch sizes of 25-50 for optimal performance.
Chunking Large Requests#
For very large collections, split into chunks:
```typescript
async function fetchLargeCollection(
  objectIds: string[],
  chunkSize: number = 50
): Promise<any[]> {
  const chunks: string[][] = [];
  for (let i = 0; i < objectIds.length; i += chunkSize) {
    chunks.push(objectIds.slice(i, i + chunkSize));
  }

  console.log(`Fetching ${objectIds.length} objects in ${chunks.length} batches`);

  // Fetch chunks sequentially to avoid rate limits
  const results: any[] = [];
  for (const chunk of chunks) {
    const objects = await batchGetObjects(chunk);
    results.push(...objects);
    // Small delay to respect rate limits
    await new Promise(resolve => setTimeout(resolve, 100));
  }

  return results;
}
```
Parallel Batch Processing#
For independent batches, process in parallel:
```typescript
async function parallelBatchFetch(
  objectIdGroups: string[][]
): Promise<any[][]> {
  const promises = objectIdGroups.map(group =>
    batchGetObjects(group)
  );
  return await Promise.all(promises);
}

// Usage
const userNFTs = ['0x123...', '0x456...'];
const userCoins = ['0xabc...', '0xdef...'];
const userPackages = ['0x789...', '0xghi...'];

const [nfts, coins, packages] = await parallelBatchFetch([
  userNFTs,
  userCoins,
  userPackages
]);
```
Error Handling#
Handle partial failures gracefully:
```typescript
interface BatchResult {
  success: any[];
  failed: Array<{ id: string; error: string }>;
}

async function safeBatchGetObjects(objectIds: string[]): Promise<BatchResult> {
  try {
    const objects = await batchGetObjects(objectIds);
    const success = objects.filter(obj => obj.object_id);
    const failed = objectIds
      .filter(id => !objects.find(obj => obj.object_id === id))
      .map(id => ({
        id,
        error: 'Object not found or inaccessible'
      }));
    return { success, failed };
  } catch (error: any) {
    // All requests failed
    return {
      success: [],
      failed: objectIds.map(id => ({
        id,
        error: error.message
      }))
    };
  }
}
```
Best Practices#
1. Use Field Masking#
Request only necessary fields to minimize bandwidth:
```typescript
// ✅ Good: Request minimal fields
const request = {
  requests: objectIds.map(id => ({ object_id: id })),
  read_mask: {
    paths: ['object_id', 'owner', 'object_type']
  }
};

// ❌ Bad: Request all fields
const request = {
  requests: objectIds.map(id => ({ object_id: id }))
};
```
2. Implement Caching#
Cache frequently accessed objects:
```typescript
const objectCache = new Map<string, { data: any; timestamp: number }>();
const CACHE_TTL = 60000; // 1 minute

async function getCachedObjects(objectIds: string[]): Promise<any[]> {
  const now = Date.now();
  const uncached: string[] = [];
  const results: any[] = new Array(objectIds.length);

  // Check cache
  objectIds.forEach((id, index) => {
    const cached = objectCache.get(id);
    if (cached && now - cached.timestamp < CACHE_TTL) {
      results[index] = cached.data;
    } else {
      uncached.push(id);
    }
  });

  // Fetch uncached objects
  if (uncached.length > 0) {
    const fetched = await batchGetObjects(uncached);
    fetched.forEach(obj => {
      objectCache.set(obj.object_id, {
        data: obj,
        timestamp: now
      });
      const index = objectIds.indexOf(obj.object_id);
      results[index] = obj;
    });
  }

  return results.filter(Boolean);
}
```
3. Handle Rate Limits#
Implement exponential backoff:
```typescript
async function batchWithBackoff(
  objectIds: string[],
  maxRetries: number = 3
): Promise<any[]> {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      return await batchGetObjects(objectIds);
    } catch (error: any) {
      if (error.code === grpc.status.RESOURCE_EXHAUSTED && attempt < maxRetries - 1) {
        const delay = Math.pow(2, attempt) * 1000;
        console.log(`Rate limited. Retrying in ${delay}ms...`);
        await new Promise(resolve => setTimeout(resolve, delay));
      } else {
        throw error;
      }
    }
  }
  throw new Error('Max retries exceeded');
}
```
Related Methods#
- GetObject - Single object retrieval
- ListOwnedObjects - Discover objects owned by address
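A common pattern chains the two: discover object IDs with ListOwnedObjects, then hydrate them with a batch call. A sketch, assuming a hypothetical `listOwnedObjects(address)` helper that wraps the discovery method and returns just the IDs:

```typescript
// `listOwnedObjects` is a hypothetical wrapper around ListOwnedObjects.
declare function listOwnedObjects(address: string): Promise<string[]>;

async function loadPortfolio(address: string): Promise<any[]> {
  const ids = await listOwnedObjects(address); // 1. discover owned object IDs
  return fetchLargeCollection(ids);            // 2. hydrate in chunked batches
}
```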
Performance Metrics#
Batch vs Individual Requests:
| Objects | Individual Requests | Batch Request | Improvement |
|---|---|---|---|
| 10 | 150ms | 18ms | 8.3x faster |
| 50 | 750ms | 35ms | 21x faster |
| 100 | 1,500ms | 65ms | 23x faster |
Need help with batch operations? Contact our support team or check the gRPC overview.