Transaction Streaming Overview
What the Aptos gRPC transaction stream is for and how to operate it safely
Coming soon. Email support@dwellir.com if you want early access.
The Aptos streaming service delivers ordered, finalized transactions over gRPC with support for both historical replay and real-time tails. It is aimed at indexers, analytics backends, bots, and monitoring systems that need a continuous feed instead of repeated REST polling.
What This Gives You
- real-time tails for new finalized transactions
- replay from a known starting point when you need to backfill or recover
- one long-lived connection instead of many short polling requests
- ordered delivery so downstream processors can checkpoint progress cleanly
Compared with REST and GraphQL, streaming is the better fit when your system needs low-latency ingestion or durable catch-up after restarts.
Connection Model
The stream is authenticated with bearer-token metadata and then kept open as a long-lived gRPC subscription. The examples in this section use a TLS endpoint and standard gRPC metadata:
import { credentials, Metadata } from "@grpc/grpc-js";
const metadata = new Metadata();
metadata.add("authorization", `Bearer ${process.env.DWELLIR_API_KEY}`);
// The concrete endpoint is provisioned during streaming onboarding.
// See the linked authentication and real-time guides for the full client setup.
// TransactionStreamClient is the client stub generated from the service's
// protobuf definitions.
const client = new TransactionStreamClient(
"stream.aptos.dwellir.com:443",
credentials.createSsl()
);

Operating Patterns
Real-Time Consumers
Start from the head of chain when you want live notifications, live dashboards, or alerting systems that only care about new transactions.
Replay and Recovery
Start from a previously stored version when a worker restarts or when you need to backfill a gap. This is the safer model for production because it lets you prove that no committed transactions were skipped.
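As a minimal sketch of this resume logic, assuming a checkpoint store of your own (the `saveCheckpoint`/`loadCheckpoint` names and the in-memory variable are illustrative; production code would use a durable database or KV store):

```typescript
// Hypothetical checkpoint store. In production this would be a database row
// or a key in a durable KV store, not an in-memory variable.
let lastProcessedVersion: bigint | null = null;

function saveCheckpoint(version: bigint): void {
  lastProcessedVersion = version;
}

function loadCheckpoint(): bigint | null {
  return lastProcessedVersion;
}

// Resume from the version after the last fully processed one, or from a
// fixed backfill starting point when no checkpoint exists yet.
function resumeFrom(backfillStart: bigint): bigint {
  const checkpoint = loadCheckpoint();
  return checkpoint === null ? backfillStart : checkpoint + 1n;
}
```

The key invariant is that the checkpoint is only advanced after a transaction is fully processed, so a restart can never skip a committed transaction.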
Hybrid Pipelines
Many production systems replay to a checkpoint first and then transition into a live tail on the same service. That keeps the ingestion model consistent whether the worker is catching up or operating at the head of chain.
Implementation Checklist
- Persist the last fully processed version so reconnects can resume without gaps.
- Make downstream writes idempotent. Network retries can cause the same transaction to be observed again.
- Handle backpressure explicitly; do not let slow consumers block the stream reader.
- Monitor reconnect frequency, stream lag, and message throughput so degraded consumers are visible before they fall behind.
- Keep REST or GraphQL available for targeted lookups, retries, and operator debugging.
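Two of the checklist items, idempotent writes and explicit backpressure, can be sketched together as a bounded ingest buffer that deduplicates by transaction version and refuses new items when full, so the stream reader pauses instead of buffering unboundedly. The class and its API are illustrative, not part of the streaming client:

```typescript
type PushResult = "accepted" | "duplicate" | "full";

// Sketch of a bounded, deduplicating buffer between the stream reader and a
// slow consumer. A production version would bound the `seen` set as well,
// e.g. by only tracking versions above the persisted checkpoint.
class BoundedIngestBuffer {
  private seen = new Set<string>();
  private queue: { version: bigint }[] = [];

  constructor(private capacity: number) {}

  push(tx: { version: bigint }): PushResult {
    const key = tx.version.toString();
    if (this.seen.has(key)) return "duplicate"; // retried delivery; safe to drop
    if (this.queue.length >= this.capacity) return "full"; // signal backpressure
    this.seen.add(key);
    this.queue.push(tx);
    return "accepted";
  }

  size(): number {
    return this.queue.length;
  }
}
```

On `"full"` the reader should stop pulling from the stream until the consumer drains the queue; on `"duplicate"` it can simply continue, which is what makes downstream writes safe across reconnects.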
Related Guides
- Streaming Authentication for bearer-token setup
- Real-Time Streaming for tailing the head of chain
- Historical Replay for backfills and recovery
- Custom Processors for worker architecture patterns