
Solana node types: validator, RPC, and trader nodes

Running Solana infrastructure in 2026 means choosing between three operationally distinct node types — each built for a different job. Whether you’re securing the network as a validator, serving application traffic as an RPC node, or competing for block inclusion as a trader node, the wrong infrastructure choice produces performance problems that look like software bugs. This guide breaks down how each Solana node type works, what it costs to run, and which workloads it’s built for.


What are Solana nodes?

The Solana network is composed of independent computers, called nodes, that collectively maintain the blockchain’s state. Each node runs software that participates in one or more roles: validating transactions and producing blocks, serving data queries from applications, or submitting transactions on behalf of programs that require low latency.

Unlike some blockchains, where every node performs every role, Solana’s architecture distinguishes these responsibilities clearly. Understanding which node does what is the foundation for making correct infrastructure decisions.

Types of Solana nodes

There are two protocol-level node roles and one common low-latency operating pattern, each with different setup requirements:

- Validator node: participates in consensus, votes on blocks, and produces blocks when scheduled as leader.
- RPC node: serves the JSON-RPC API to applications; does not vote.
- Trader node: configured purely for low-latency transaction submission.

📖 Voting vs non-voting: An RPC node is technically a non-voting validator. It runs the same software and maintains the same ledger state, but it does not have a funded vote account and does not participate in consensus. A trader node is similar: it is validator software configured purely for transaction forwarding and submission, not for voting.

Let’s go through each of them in detail.

Solana Validator Node

A validator is a full participant in Solana’s Tower BFT consensus. It maintains the ledger state required for validation (full historical retention is optional and requires additional storage configuration), verifies and processes every transaction, votes on every block produced by the current leader, and periodically takes its own turn as leader, producing and broadcasting new blocks. Every validator has a publicly registered identity keypair and a vote account on-chain to which stake can be delegated.

Voting is not free. Each vote is a transaction broadcast to the network, historically costing around 1.1 SOL per day in vote transaction fees, though this varies with network conditions. This is why a validator with little or no delegated stake operates at a net loss: the staking rewards earned must exceed the vote costs to break even.
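A quick back-of-the-envelope sketch of the break-even point described above. Every input here (commission rate, staking yield) is an illustrative assumption, not a current network value:

```javascript
// Illustrative break-even estimate; every input below is an assumption
const VOTE_COST_SOL_PER_DAY = 1.1;  // historical ballpark from above
const COMMISSION = 0.10;            // assume the validator keeps 10% of staking rewards
const STAKER_APY = 0.07;            // assumed network staking yield

// Daily commission income per SOL of delegated stake
const incomePerSolPerDay = (STAKER_APY / 365) * COMMISSION;

// Delegated stake needed for commission income to cover vote costs
const breakEvenStake = VOTE_COST_SOL_PER_DAY / incomePerSolPerDay;
console.log(`Break-even stake: ~${Math.round(breakEvenStake)} SOL`);
```

Under these assumed numbers, a validator needs tens of thousands of SOL delegated before commission income covers its vote costs, which is why attracting stake is existential for small operators.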

Validator Pipeline

➡️ If you are already familiar with the Solana validator architecture, you can skip to the “Validator Setup” section below.

The best way to understand a validator’s internals is to follow a single transaction from the moment it leaves a client’s wallet all the way to the point where the block it belongs to is finalized by the network. Two pipelines handle this: the TPU (Transaction Processing Unit) runs on the leader that produces the block, and the TVU (Transaction Validation Unit) runs on every other validator that validates it.

Source: anza.xyz

Gossip and Gulf Stream

Before a transaction can be processed, every validator in the network needs to know two things: who the other validators are and who the current leader is. This is the job of the Gossip Service, which runs continuously on every validator, independent of whether it is currently a leader or not.

Each validator broadcasts its ContactInfo (its public key plus the socket addresses of all its ports: TPU, TVU, repair, and so on) into the gossip network. This data is stored in a shared structure called the CRDS (Cluster Replicated Data Store). Every validator continuously exchanges CRDS entries with its gossip peers, so within a few seconds every node has a complete picture of the cluster. The Gossip Service also propagates the leader schedule: a deterministic rotation of which validator will produce blocks in each upcoming slot, computed from stake weights two epochs in advance.
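The CRDS merge rule can be sketched as last-write-wins per node pubkey, keyed on the entry’s wallclock timestamp. This is a simplified toy model (the real store versions many value types per node, not just ContactInfo):

```javascript
// Simplified CRDS: keep only the newest ContactInfo per pubkey, by wallclock
function mergeCrds(store, entry) {
  const current = store.get(entry.pubkey);
  if (!current || entry.wallclock > current.wallclock) {
    store.set(entry.pubkey, entry); // newer gossip data wins
  }
  return store;
}

const crds = new Map();
mergeCrds(crds, { pubkey: "NodeA", wallclock: 100, tpu: "10.0.0.1:8003" });
mergeCrds(crds, { pubkey: "NodeA", wallclock: 90, tpu: "stale-address" }); // ignored: older
console.log(crds.get("NodeA").tpu); // "10.0.0.1:8003"
```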

This leader schedule is what makes Gulf Stream possible. Solana has no global mempool where transactions wait. Instead, because every node knows, directly from gossip data, who the next leader will be, RPC nodes and validators forward incoming transactions straight to that leader’s TPU port over QUIC. Transactions skip the network-wide broadcast step entirely and arrive at the leader before its slot even begins. This is how Solana achieves sub-second pre-confirmation times.
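The forwarding decision reduces to a lookup into the known schedule. A toy model of that routing step (real clients fetch the schedule via RPC methods like `getSlotLeaders` and forward over QUIC; the lookahead of 2 slots is an illustrative assumption):

```javascript
// Toy Gulf Stream router: find the TPU address of an upcoming leader
function nextLeaderTpu(currentSlot, leaderSchedule, contactInfo, lookahead = 2) {
  const leader = leaderSchedule[currentSlot + lookahead]; // leader is known in advance
  return contactInfo[leader];                             // ContactInfo learned via gossip
}

const schedule = { 100: "ValA", 101: "ValA", 102: "ValB", 103: "ValB" };
const contacts = { ValA: "10.0.0.1:8003", ValB: "10.0.0.2:8003" };
console.log(nextLeaderTpu(100, schedule, contacts)); // "10.0.0.2:8003"
```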

Source: Solana Beach

TPU

When a validator’s slot arrives, it becomes the leader and switches into TPU mode. Its job for that ~400 ms slot is to receive the incoming transactions forwarded to it, verify them, execute them, and package the results into a block for the rest of the network. This happens across four sequential stages.

Source: anza.xyz

Fetch Stage: The leader’s TPU receives packets through three QUIC sockets: tpu (transactions), tpu_vote (votes), and tpu_forwards (forwarded packets). Packets are grouped into batches of ~128 before moving forward. Stake-weighted QoS prioritizes connections from staked validators, helping protect the network from spam.
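Stake-weighted QoS can be sketched as splitting a fixed packet budget proportionally to stake, so heavily staked peers get more capacity and unstaked spammers get very little. A toy model (the real mechanism operates on QUIC connection limits, not a simple budget split):

```javascript
// Toy stake-weighted QoS: split a packet budget proportionally to stake
function allocateQuota(stakes, totalBudget) {
  const totalStake = Object.values(stakes).reduce((a, b) => a + b, 0);
  const quota = {};
  for (const [node, stake] of Object.entries(stakes)) {
    quota[node] = Math.floor((stake / totalStake) * totalBudget);
  }
  return quota;
}

const quota = allocateQuota({ bigValidator: 900, smallValidator: 100 }, 1000);
console.log(quota); // { bigValidator: 900, smallValidator: 100 }
```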

SigVerify Stage: Transactions are deduplicated and their ed25519 signatures are verified. Invalid ones are flagged (discard=true) rather than immediately dropped so batches can move efficiently through the pipeline. When available, GPUs verify signatures in parallel for higher throughput.

Banking Stage: Transactions are sorted by priority fee and grouped into non-conflicting batches based on account access. The batch is recorded into the PoH hash chain, then executed by the SVM where transactions touching different accounts run in parallel. Successful state changes update the in-memory bank for the current slot.
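The Banking Stage’s conflict grouping can be sketched as greedy account-lock scheduling: a transaction joins an existing batch only if it touches none of that batch’s locked accounts. This is a simplification of the real scheduler, but it captures why transactions on disjoint accounts parallelize:

```javascript
// Greedy grouping: txs touching disjoint account sets can execute in parallel
function groupNonConflicting(txs) {
  const batches = [];
  for (const tx of txs) {
    // Find a batch whose locked accounts don't overlap this tx's accounts
    const batch = batches.find(b => !tx.accounts.some(a => b.locked.has(a)));
    if (batch) {
      tx.accounts.forEach(a => batch.locked.add(a));
      batch.txs.push(tx.id);
    } else {
      batches.push({ locked: new Set(tx.accounts), txs: [tx.id] });
    }
  }
  return batches.map(b => b.txs);
}

console.log(groupNonConflicting([
  { id: "t1", accounts: ["A", "B"] },
  { id: "t2", accounts: ["C"] },      // disjoint from t1: same batch
  { id: "t3", accounts: ["B", "D"] }, // conflicts on B: new batch
]));
// [["t1", "t2"], ["t3"]]
```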

Forwarding Stage: In parallel, the banking stage runs a forwarding side path: when the validator is not the current leader, it forwards incoming transactions (sorted by priority fee) to the upcoming leader, ensuring transactions are not dropped when the local buffer is full.

Broadcast Stage: Processed entries are split into small network packets called shreds. The leader signs them, applies erasure coding, and distributes them through the Turbine protocol. This stake-weighted tree rapidly propagates the block across the validator network.

TVU

While the leader’s TPU is building the block, every other validator is running its TVU. The TVU’s job is to receive the shreds broadcast by the leader, verify them, persist them to disk, retransmit them to the next layer of the Turbine tree, reconstruct the full block, re-execute every transaction independently, and then cast a vote. This is how consensus is formed.

Source: anza.xyz

Shred Fetch: Shreds arrive from the Turbine network over the TVU UDP port and are distributed across parallel sockets. The Window Service tracks which shreds for each slot have arrived and detects gaps. If shreds are missing, the Repair Service requests them from peer validators.
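The Window Service’s gap detection can be sketched as a set difference over expected shred indices; whatever is missing becomes a repair request. A toy model, assuming the last index for the slot is known:

```javascript
// Find missing shred indices for a slot, to request from the Repair Service
function missingShreds(receivedIndices, lastIndex) {
  const have = new Set(receivedIndices);
  const missing = [];
  for (let i = 0; i <= lastIndex; i++) {
    if (!have.has(i)) missing.push(i); // gap detected: ask peers for this shred
  }
  return missing;
}

console.log(missingShreds([0, 1, 3, 4, 7], 7)); // [2, 5, 6]
```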

Shred Verify & Retransmit: Each shred’s signature is verified against the expected slot leader. Valid shreds are written to the Blockstore (the validator’s on-disk ledger) while being immediately retransmitted to neighbours in the Turbine tree. Fast retransmission helps propagate blocks quickly across the network.

Replay Stage: Once enough shreds are collected to reconstruct a block, the validator verifies the PoH hash chain and re-executes all transactions using the SVM. It then compares the resulting state hash with the leader’s and uses Tower BFT to determine the canonical fork.

Vote Transaction: If the block is valid, the validator submits a vote transaction for the slot hash via gossip. When votes from more than two-thirds of stake accumulate, the slot becomes confirmed and eventually finalized.
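The confirmation rule above reduces to a supermajority check over stake weight:

```javascript
// A slot is confirmed once voted stake strictly exceeds 2/3 of total stake
function isConfirmed(votedStake, totalStake) {
  return votedStake > (2 / 3) * totalStake;
}

console.log(isConfirmed(67, 100)); // true
console.log(isConfirmed(66, 100)); // false
```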

Validator Clients

Multiple teams maintain independent implementations of the Solana protocol:

| Client | Maintainer | Description |
| --- | --- | --- |
| Agave | Anza | Reference implementation, forked from the original Solana Labs client in 2024 |
| Jito-Solana | Jito Foundation | Adds a block engine for MEV bundle submission and tip payment infrastructure |
| Firedancer | Jump Crypto | Independent implementation; the Frankendancer hybrid (C networking + Agave runtime) runs on mainnet; demonstrated 1M+ TPS in controlled testnet benchmarks |
| Sig | Syndica | New independent implementation; gossip, TVU, and SVM components in active development |

ℹ️ The Solana Foundation and major stakeholders actively encourage running multiple validator clients to reduce the risk of a single client bug causing a network-wide outage. The Solana HCL (Hardware Compatibility List) community tracks optimal hardware configurations across all clients.

Validator Setup

This guide covers testnet setup, which is the recommended starting point. The same steps apply to mainnet with different cluster URLs and the need for real SOL.

Before getting started, make sure that you have all the requirements listed below:

Solana Validator Node Hardware Requirements

| Component | Minimum | Recommended |
| --- | --- | --- |
| CPU | 12 cores / 24 threads, 2.8 GHz base | AMD EPYC 7543 or Intel Xeon Gold (32 cores, 3.0+ GHz) |
| RAM | 256 GB DDR4 ECC | 512 GB DDR4 ECC |
| Accounts NVMe | PCIe Gen3 x4, 1 TB+, high TBW | Samsung 970/980 Pro 2 TB |
| Ledger NVMe | 1 TB+, separate from accounts disk | 2 TB enterprise NVMe (separate drive) |
| Snapshots | 500 GB+ NVMe or SATA SSD | Can share with OS disk |
| Network | 1 Gbps symmetric | 10 Gbps unmetered |
| OS | Ubuntu 24.04 LTS | Ubuntu 24.04 LTS (bare metal; no Docker/VMs) |
| IP | Static public IP | Dedicated public IP, no NAT |

Step-by-step validator setup

Configure system limits

Solana validators open a very large number of file descriptors. The OS default limit must be raised before the validator will function correctly.

# Raise file descriptor and memory lock limits
sudo bash -c "cat > /etc/security/limits.d/90-solana-nofiles.conf <<EOF
* - nofile 1000000
* - memlock 2000000
EOF"

# Tune kernel networking for UDP throughput
sudo bash -c "cat >> /etc/sysctl.d/20-solana-udp-buffers.conf <<EOF
net.core.rmem_default = 134217728
net.core.rmem_max = 134217728
net.core.wmem_default = 134217728
net.core.wmem_max = 134217728
EOF"
sudo sysctl -p /etc/sysctl.d/20-solana-udp-buffers.conf

Log out and back in for the file descriptor limits to take effect.

Install Agave CLI

The official installer fetches the latest stable release. Check github.com/anza-xyz/agave/releases for the current stable version before running.

# Install the Agave release
sh -c "$(curl -sSfL https://release.anza.xyz/stable/install)"

# Add to PATH (add this to ~/.bashrc for persistence)
export PATH="$HOME/.local/share/solana/install/active_release/bin:$PATH"

# Verify install
solana --version
agave-validator --version

Generate keypairs

A validator requires three keypairs: an identity keypair (your validator’s public ID), a vote account keypair, and an authorized withdrawer keypair. The withdrawer keypair should be generated on an air-gapped machine and stored offline: if you lose it, you permanently lose control of your vote account.

# 1. Validator identity : used for gossip, TPU, and TVU identification
solana-keygen new -o ~/validator-keypair.json

# 2. Vote account : tracks your validator's votes on-chain
solana-keygen new -o ~/vote-account-keypair.json

# 3. Authorized withdrawer : GENERATE ON AIR-GAPPED MACHINE, STORE OFFLINE
solana-keygen new -o ~/authorized-withdrawer-keypair.json

# Set default CLI keypair and cluster
solana config set --keypair ~/validator-keypair.json
solana config set --url https://api.testnet.solana.com

Create the vote account on-chain

The vote account is an on-chain account that records your validator’s votes and accumulates staking rewards. Creating it requires a small SOL deposit for rent-exemption (0.02685864 SOL). On testnet you can airdrop SOL; on mainnet you must acquire and transfer SOL.

# Airdrop SOL for testnet
solana airdrop 1

# Create the vote account (-ut flag = use testnet cluster)
solana create-vote-account -ut \
  --fee-payer ~/validator-keypair.json \
  ~/vote-account-keypair.json \
  ~/validator-keypair.json \
  ~/authorized-withdrawer-keypair.json

# CRITICAL : Store authorized-withdrawer-keypair.json securely NOW
# Move or encrypt it. Never leave it on the server unprotected

Mount and prepare storage

It is recommended to use separate NVMe drives for the accounts database, ledger, and snapshots; combining them degrades performance under Solana’s write intensity.

# Create mount points (adjust to your actual device names)
sudo mkdir -p /mnt/ledger /mnt/accounts /mnt/snapshots

# Format (one-time, destroys data on device)
sudo mkfs.ext4 /dev/nvme1n1   # ledger disk
sudo mkfs.ext4 /dev/nvme2n1   # accounts disk

# Add to /etc/fstab for persistent mount on reboot
# UUID=$(blkid -s UUID -o value /dev/nvme1n1)

# Set permissions
sudo chown -R solana:solana /mnt/ledger /mnt/accounts /mnt/snapshots

Start the validator

The startup command passes all configuration as flags. For a production system, wrap this in a systemd service file.

# Entrypoints: bootstrap peers for gossip
# Known validators: only trust snapshots from these nodes at startup
# Dynamic port range: gossip, turbine, repair, TPU
# RPC bound to localhost: don't expose public RPC on a voting validator
# Snapshot intervals are in slots (1 slot ≈ 400 ms)
# Expected genesis hash: prevents connecting to the wrong cluster
agave-validator \
  --identity ~/validator-keypair.json \
  --vote-account ~/vote-account-keypair.json \
  --ledger /mnt/ledger \
  --accounts /mnt/accounts \
  --snapshots /mnt/snapshots \
  --entrypoint entrypoint.testnet.solana.com:8001 \
  --entrypoint entrypoint2.testnet.solana.com:8001 \
  --entrypoint entrypoint3.testnet.solana.com:8001 \
  --known-validator 5D1fNXzvv5NjV1ysLjirC4WY92RNsVH18vjmcszZd8on \
  --known-validator dDzy5SR3AXdYWVqbDEkVFdvSPCtS9ihF5kJkHCtXoFs \
  --only-known-rpc \
  --dynamic-port-range 8000-8020 \
  --rpc-port 8899 \
  --rpc-bind-address 127.0.0.1 \
  --full-snapshot-interval-slots 25000 \
  --incremental-snapshot-interval-slots 100 \
  --maximum-full-snapshots-to-retain 2 \
  --maximum-incremental-snapshots-to-retain 4 \
  --log ~/validator.log \
  --expected-genesis-hash 4uhcVJyU9pJkvQyS88uRDiswHXSCkY3zQawwpjk2NsNY
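For a production deployment, the startup command above is typically wrapped in a systemd unit so the validator restarts automatically. A minimal sketch, assuming a dedicated `solana` user and that the full command lives in a wrapper script at `/home/solana/bin/validator.sh` (both paths are assumptions):

```ini
[Unit]
Description=Agave validator
After=network-online.target
Wants=network-online.target

[Service]
User=solana
ExecStart=/home/solana/bin/validator.sh
Restart=always
RestartSec=1
LimitNOFILE=1000000

[Install]
WantedBy=multi-user.target
```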
Verify the validator is connected

Check that your validator has registered with the gossip network.

# Get your validator's pubkey
solana-keygen pubkey ~/validator-keypair.json

# Confirm it appears in the gossip network
solana gossip | grep <YOUR_PUBKEY>

# Check current sync status
solana catchup <YOUR_PUBKEY> --url https://api.testnet.solana.com

# Monitor vote account and skip rate
solana vote-account ~/vote-account-keypair.json

Monitor your validator’s health on community dashboards.

Solana RPC Node

An RPC node is a non-voting validator. It runs the same Agave software as a validator, maintains a full or partial copy of the ledger, and tracks the live cluster state, but it does not hold a funded vote account and therefore does not participate in consensus. It is the interface layer between your application and the Solana network. It exposes the JSON-RPC API that dApps, wallets, indexers, and bots use to read chain state, submit transactions, and subscribe to real-time events.

The JSON-RPC surface covers everything an application needs: account reads (getAccountInfo, getBalance, getMultipleAccounts), program account scans (getProgramAccounts), transaction history (getTransaction, getSignaturesForAddress), block and slot data, and transaction submission via sendTransaction, which does not execute the transaction locally but forwards it toward the current leader via Gulf Stream. WebSocket subscriptions (accountSubscribe, logsSubscribe) open a persistent connection and push state changes as they happen, with no polling and no reliance on the notoriously resource-intensive getProgramAccounts.

Commitment levels: Every Solana RPC call accepts an optional commitment parameter: processed (the node has seen the block), confirmed (a supermajority of the cluster has voted on the block), or finalized (the block is rooted and irreversible).

Unlike a validator, an RPC node has to serve external traffic, potentially thousands of concurrent requests while simultaneously keeping pace with the network. That combination drives hardware requirements significantly higher than a standard validator.

| Component | Minimum | Recommended |
| --- | --- | --- |
| CPU | 12 cores / 24 threads | 16–24 cores (AMD EPYC 7xx3) |
| RAM | 256 GB DDR4 | 512 GB DDR4 ECC |
| Accounts NVMe | 1 TB+ PCIe Gen3 | 2 TB+ enterprise NVMe |
| Ledger NVMe | 1 TB+ | 10 TB+ for archival history |
| Network | 1 Gbps | 5–10 Gbps (high outbound) |
| OS | Ubuntu 24.04 LTS | Ubuntu 24.04 LTS bare metal |

📖 The official Agave docs specify 512 GB RAM for RPC nodes with all account indexes enabled, compared to 256 GB for a standard validator. This is because accounts and index structures must remain memory-resident for fast getAccountInfo responses.

Beyond hardware, operating an RPC node means managing snapshot syncing from genesis (a multi-hour process), coordinating Agave version upgrades before each network restart, monitoring disk growth on the ledger volume, and tuning OS-level parameters for UDP and file descriptor limits. For a team whose core competency is building products, this is substantial ongoing operational cost before a single API call is served.

Chainstack’s managed RPC infrastructure removes that operational layer entirely. Instead of provisioning bare-metal servers, syncing from snapshots, and maintaining uptime, you get a production-ready endpoint immediately with the performance characteristics of dedicated infrastructure.

Pricing is based on Request Units (RU), a standardised measure of compute cost per RPC method. This means you pay for actual compute consumed rather than a flat request count. See current pricing at chainstack.com/pricing.

To get an endpoint, sign up at Chainstack, create a project, deploy a Solana mainnet node, and copy the HTTPS and WSS endpoint URLs from the dashboard.

The example below uses those endpoints directly. No other configuration needed.

Using the RPC API with Chainstack

import { createSolanaRpc, createSolanaRpcSubscriptions, address } from "@solana/web3.js";

// Get your endpoint at https://chainstack.com/build-better-with-solana
const rpc = createSolanaRpc("https://solana-mainnet.core.chainstack.com/YOUR_KEY");
const ws  = createSolanaRpcSubscriptions("wss://solana-mainnet.core.chainstack.com/YOUR_KEY");

// Read : account balance at confirmed commitment
// Use "confirmed" for display
const { value: lamports } = await rpc
  .getBalance(address("YourPubkey"), { commitment: "confirmed" })
  .send();
console.log(`Balance: ${Number(lamports) / 1e9} SOL`); // lamports is a bigint

// Read: full transaction history
const sigs = await rpc
  .getSignaturesForAddress(address("YourPubkey"), { limit: 10 })
  .send();

// Subscribe : real-time account changes via WebSocket
// Fires on every lamport or data change. No polling required
const abortController = new AbortController();
const sub = await ws
  .accountNotifications(address("ProgramStateAccount"))
  .subscribe({ abortSignal: abortController.signal });
for await (const update of sub) {
  console.log("Account changed", update.value.lamports);
}

// Submit : sendTransaction forwards to the current leader
// Use "finalized" commitment when you need to confirm the tx is irreversible
const sig = await rpc.sendTransaction(signedTx, { encoding: "base64" }).send();

Geyser plugin system

JSON-RPC works well for request-response patterns: query an account, fetch a transaction. But some applications need to react to every state change as it happens: a DEX bot tracking every tick of a price feed, a liquidation engine watching collateral ratios, a block explorer indexing every transaction. Polling JSON-RPC at high frequency to approximate this is inefficient and puts heavy load on the RPC node itself. getProgramAccounts scans gigabytes of account data on every call.

The solution is the Geyser plugin interface, a Rust plugin system built into the Agave validator. Instead of polling, Geyser pushes every state change directly to an external data store the moment it happens inside the validator process. The plugin implements four callbacks:

| Callback | When it fires | What it enables |
| --- | --- | --- |
| update_account | Every account lamport, data, or owner change | Real-time price feeds, position monitors, liquidation triggers |
| notify_transaction | Every processed transaction | Transaction indexing, analytics pipelines, audit trails |
| update_slot_status | Every Processed → Confirmed → Finalized transition | Commitment tracking, confirmed-state event systems |
| notify_block_metadata | Each new block | Block-level indexing, explorer backends |

The most widely deployed Geyser plugin is Yellowstone gRPC, which exposes all four event streams over a gRPC connection with Protobuf schemas. Clients subscribe to exactly the data they need (a specific account, all transactions matching a program, all slot status changes) and receive a filtered stream with far lower overhead than polling.

Chainstack’s Yellowstone gRPC Geyser Plugin runs this as a managed service, with Jito ShredStream integration so blocks arrive at the plugin before they are fully propagated to the rest of the network for ultra-low block latency. For teams who need raw Geyser access without provisioning and operating their own Geyser-enabled RPC node, this is the direct path.

Solana Trader Node

A trader node is validator software (Agave or Jito-Solana) configured and deployed specifically to minimize the time between deciding to submit a transaction and that transaction landing in a confirmed block. It does not vote, does not serve RPC queries, and typically does not maintain full ledger history. Its entire purpose is transaction submission latency.

This matters in environments where multiple competing actors want to land the same transaction in the same block: liquidations, arbitrage, NFT mints, and any strategy where being one block ahead of a competitor is worth money.

Solana Trader Node Hardware Requirements

| Component | Minimum | Recommended |
| --- | --- | --- |
| CPU | 8 cores, high single-thread clock | AMD Ryzen 9 7950X or Intel Core i9-14900K |
| RAM | 128 GB DDR4 | 256 GB+ ECC |
| NVMe Storage | 1 TB enterprise NVMe | 2 TB Samsung PM9A3 or Micron 7450 Pro |
| Network | 1 Gbps | 10 Gbps+ with direct IX peering |
| Clock sync | Standard NTP | PTP or GPS-disciplined NTP |
| Colocation | Regional datacenter | Equinix DA11, AM3, or TY2 |

Self-hosting gives you full control over software versions, configuration, and hardware selection. This matters if you are running custom validator software (e.g., Jito-Solana), need specific kernel or network stack tuning, or have regulatory requirements around data residency.

Managed infrastructure reduces time-to-deployment significantly. For teams whose core competency is not infrastructure operations, the DevOps overhead of self-hosting a validator or RPC node on mainnet-beta is substantial. Snapshot management alone (downloading large state snapshots during node restarts, a process that can take hours) is a frequently underestimated operational burden.

Running a trader node requires extensive setup and consistent service: your deployment must be resilient to outages caused by power failures, flooding, theft, and similar events, in addition to meeting performance requirements. An easier solution is to run a trader node using Chainstack.

Chainstack’s Solana Trader Node service combines direct TPU submission with Warp transactions, a propagation layer powered by bloXroute, a specialist in high-speed blockchain transaction propagation. Warp transactions bypass standard gossip propagation and reach validator nodes through bloXroute’s private relay network, achieving up to 99% landing rates and confirming 40% of transactions within 2 blocks at normal priority fees, based on internal benchmark conditions. Switching from a standard RPC endpoint to a Chainstack Trader Node endpoint requires changing one URL in your configuration. No code changes, no custom infrastructure.

Choosing the right Solana node for your workload

Different Solana workloads require different RPC capabilities. Below are common application types and the infrastructure setup that fits them best.

MVP / Startup

Early-stage projects can start with Chainstack’s free Solana RPC endpoint. It provides a geo-balanced endpoint with no credit card required. The managed infrastructure handles rate limits, geographic routing, and uptime so teams can focus on building their product instead of running nodes.

const { Connection } = require("@solana/web3.js");

const connection = new Connection(
  "YOUR_CHAINSTACK_ENDPOINT",
  { commitment: "confirmed" }
);

const slot = await connection.getSlot();
console.log("Current slot:", slot);

Wallet Backend

Wallet backends (similar to apps like Phantom Wallet or Backpack Wallet) require fast balance reads, token account lookups, and transaction history queries. Using a dedicated Solana RPC endpoint with WebSocket support ensures reliable performance for high-frequency requests and real-time updates.

For individual account queries, use the getAccountInfo RPC method:

const { Connection, PublicKey } = require('@solana/web3.js');

async function fetchSingleAccount() {
  const connection = new Connection('YOUR_CHAINSTACK_ENDPOINT');
  const publicKey = new PublicKey('Hr5GK3f5WqqLsr4TJ7cgVCnDaM5gY8QrD2GTPZ7K3Kpz');
  // getParsedAccountInfo returns jsonParsed-encoded account data
  const accountInfo = await connection.getParsedAccountInfo(publicKey);
  console.log(accountInfo);
}

fetchSingleAccount().catch(console.error);

When your backend needs data from multiple accounts at once, for example wallet dashboards, leaderboards, or batch processing, getMultipleAccounts is more efficient. It reduces network overhead by fetching multiple accounts in a single request:

const { Connection, PublicKey } = require('@solana/web3.js');

async function fetchMultipleAccounts() {
  const connection = new Connection('YOUR_CHAINSTACK_ENDPOINT');
  const publicKeys = [
    new PublicKey('Hr5GK3f5WqqLsr4TJ7cgVCnDaM5gY8QrD2GTPZ7K3Kpz'),
    new PublicKey('EAaijviraKWCWsVZtiZ5thhXoyoB5RP3HH1ZiLeLDcuv')
  ];
  const accountsInfo = await connection.getMultipleParsedAccounts(publicKeys);
  console.log(accountsInfo);
}

fetchMultipleAccounts().catch(console.error);

To learn more: https://docs.chainstack.com/docs/solana-getaccountinfo-getmultipleaccounts

NFT Marketplace

NFT marketplaces such as Magic Eden or Tensor run heavy ownership queries and real-time mint monitoring. Dedicated Solana nodes can be provisioned quickly using Chainstack’s infrastructure, allowing applications to handle high traffic without shared-resource rate limits.

Transfer an SPL token (or NFT) with automatic ATA creation for the recipient:

import { Connection, Keypair, PublicKey } from "@solana/web3.js";
import {
  getOrCreateAssociatedTokenAccount,
  transfer
} from "@solana/spl-token";

const connection = new Connection(process.env.SOLANA_RPC!);
const sender = Keypair.fromSecretKey(/* your key */);
const mint  = new PublicKey("MINT_ADDRESS");
const dest  = new PublicKey("RECIPIENT_WALLET");

// Source ATA: the sender's token account for this mint
const senderATA = await getOrCreateAssociatedTokenAccount(
  connection, sender, mint, sender.publicKey
);

// Creates the recipient's ATA if it doesn't exist yet
const destATA = await getOrCreateAssociatedTokenAccount(
  connection, sender, mint, dest
);

await transfer(
  connection,
  sender,
  senderATA.address,
  destATA.address,
  sender,
  1   // amount in smallest unit (1 for NFTs)
);

DeFi Protocol

Protocols like Jupiter Exchange and Orca require high-availability state queries and reliable transaction relay. Deploying RPC nodes across multiple regions reduces latency for global users. Chainstack’s infrastructure spans 12 data centers across North America, Europe, and Asia-Pacific, enabling region-optimized deployments.

Fetch recent priority fees scoped to your program accounts and apply a multiplier for a competitive bid:

import requests, statistics

RPC             = "YOUR_CHAINSTACK_ENDPOINT"
JUPITER_PROGRAM = "JUP6LkbZbjS1jKKwapdHNy74zcZ3tLUZoi5QNyVTaV4"

# Fetch priority fee data from the last 150 blocks
response = requests.post(RPC, json={
    "jsonrpc": "2.0", "id": 1,
    "method": "getRecentPrioritizationFees",
    "params": [[JUPITER_PROGRAM]]
})
fees = [f["prioritizationFee"] for f in response.json()["result"]]

# Multiply median by 1.25 to stay ahead in the block queue
multiplier   = 1.25
priority_fee = int(statistics.median(fees) * multiplier)
print(f"Priority fee: {priority_fee} micro-lamports")

For maximum throughput, separate read and write connections:

const writeConn = new Connection("YOUR_CHAINSTACK_TRADER_ENDPOINT");
const readConn  = new Connection("YOUR_CHAINSTACK_RPC_ENDPOINT");

// Reads
const balance = await readConn.getBalance(wallet.publicKey);

// Writes
const sig = await writeConn.sendRawTransaction(tx.serialize());

DeFi protocols often interact with DEXs like Raydium. Using their SDKs, you can perform token swaps programmatically while leveraging your optimized RPC setup for reliability. See Raydium SDK token swaps for detailed guidance.

Real-Time Data Pipeline

Applications that depend on live blockchain data, such as trading systems, analytics platforms, and monitoring tools, benefit from streaming data directly from validator events instead of polling JSON-RPC endpoints.

With the Yellowstone gRPC Geyser Plugin, account updates, transactions, and slot status changes are streamed to your infrastructure in real time using gRPC with Protobuf schemas. This allows systems to react to on-chain activity the moment it occurs inside the validator.

When combined with Jito ShredStream, block data can arrive before full network propagation, giving latency-sensitive applications an additional edge.

const { default: Client } = require("@triton-one/yellowstone-grpc");

const ENDPOINT = "CHAINSTACK_GEYSER_URL";
const TOKEN    = "CHAINSTACK_GEYSER_TOKEN";

const DEX_PROGRAMS = [
  "6EF8rrecthR5Dkzon8Nwu78hRvfCKubJ14M5uBEwF6P",
  "675kPX9MHTjS2zt1qfr1NYHuzeLXfQM9H24wFSUt1Mp8",
  "whirLbMiicVdio4qvUfM5KAg6Ct8VwpYzGff3uctyCc",
];

async function main() {
  const client = new Client(ENDPOINT, TOKEN);
  const stream = await client.subscribe();

  stream.write({
    transactions: {
      dexFilter: {
        accountInclude: DEX_PROGRAMS,
        vote: false,
        failed: false,
      }
    },
    commitment: 1 // CONFIRMED
  });

  stream.on("data", (update) => {
    if (update.transaction) handleTx(update.transaction);
  });
}
main();

This setup allows you to monitor programs in real time. For a complete guide to implementing this in Node.js, see Program listener via Geyser + Yellowstone gRPC.

Liquidation Bot

Liquidation bots prioritize ultra-low latency. Typical setups pair a trader node endpoint for transaction submission with a Geyser stream for real-time monitoring of collateral accounts and dynamic priority fee calculation.

This configuration maximizes the chance of winning liquidation opportunities.

Calculate competitive priority fees by analysing transactions in the same block slot:

const connection = new Connection("YOUR_CHAINSTACK_ENDPOINT");

const slot  = await connection.getSlot("confirmed");
const block = await connection.getBlock(slot, {
  maxSupportedTransactionVersion: 0,
  rewards: false,
});

const fees = block.transactions
  .map(tx => {
    const cu  = tx.meta?.computeUnitsConsumed ?? 0;
    const fee = tx.meta?.fee ?? 0;
    return cu > 0 ? Math.floor((fee * 1_000_000) / cu) : 0;
  })
  .filter(f => f > 0)
  .sort((a, b) => a - b);

// Use the 75th percentile as your bid
const p75 = fees[Math.floor(fees.length * 0.75)];
console.log("Competitive priority fee:", p75, "micro-lamports/CU");

For more details on this methodology, see Analyzing adjacent transactions for priority fees and Priority fees for faster transactions.

Arbitrage / MEV

Arbitrage strategies on Solana typically involve multi-leg atomic execution — if one leg fails, the entire bundle reverts. This requires Jito’s block engine for sealed-bid bundle submission, combined with leader-schedule-aware routing so transactions arrive at the correct TPU port before the slot begins.
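Before paying for a bundle, bots typically verify that the legs are jointly profitable net of the tip and fees. A toy pre-flight check (all field names and numbers below are illustrative assumptions, not a Jito API):

```javascript
// Only submit the bundle if the combined legs beat the tip plus fees
function bundleIsProfitable(legs, tipLamports, feeLamports) {
  const gross = legs.reduce((sum, leg) => sum + leg.expectedProfitLamports, 0);
  return gross - tipLamports - feeLamports > 0;
}

const legs = [
  { expectedProfitLamports: 120000 },  // buy leg on DEX A
  { expectedProfitLamports: -40000 },  // sell leg on DEX B (cost leg)
];
console.log(bundleIsProfitable(legs, 50000, 10000)); // true: 80000 - 60000 > 0
```

Because the bundle is atomic, a negative pre-flight result costs nothing to skip, while submitting it would still burn the tip if it landed.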

Chainstack Solana Trader Node ships with Jito bundle support: switch with a single URL change, no custom integration required. Archive to Genesis is included for backtesting strategies.

For example, you can detect new Pump.fun token mints via Geyser, the fastest path to new token discovery:

PUMP_PROGRAM         = "6EF8rrecthR5Dkzon8Nwu78hRvfCKubJ14M5uBEwF6P"
CREATE_DISCRIMINATOR = b"\x18\xad\x99\x4e\xa1\x07\x00\x00"  # first 8 bytes of the instruction data

subscribe_request = {
    "transactions": {
        "pump": {
            "accountInclude": [PUMP_PROGRAM],
            "vote": False,
            "failed": False,
        }
    },
    "commitment": "CONFIRMED",
}

async for update in stub.Subscribe([subscribe_request]):
    tx = update.transaction
    for ix in tx.transaction.message.instructions:
        if ix.data[:8] == CREATE_DISCRIMINATOR:
            token_info = decode_token_info(ix.data[8:])
            print("New mint:", token_info.mint, token_info.symbol)

For the full guides, see Creating a Pump.fun trading bot and Pump.fun mint listener via Geyser to implement a real-time arbitrage/MEV pipeline.

Agentic Micro-payments

Autonomous agents can pay for API access per request using on-chain micro transactions. With Solana’s low fees and fast finality, an agent can attach a payment to each call instead of relying on API keys or subscriptions. RPC latency and reliability directly affect the request → payment → retry cycle. Using a dedicated Chainstack endpoint ensures consistent settlement for automated clients such as trading bots, schedulers, or data collectors.

Using the x402 Express middleware with a Chainstack Solana RPC endpoint lets servers accept on-chain payments seamlessly, while AI agents pay per request automatically using the x402-fetch helper:

import { Keypair } from "@solana/web3.js";
import { wrapFetchWithPayment } from "x402-fetch";

const wallet = Keypair.fromSecretKey(
  Buffer.from(process.env.AGENT_PRIVATE_KEY, "base64")
);

const fetchWithPayment = wrapFetchWithPayment(fetch, wallet, {
  network: "solana-devnet",
  rpcUrl: "YOUR_CHAINSTACK_ENDPOINT"
});

const res = await fetchWithPayment("https://api.example.com/paid-endpoint");
const data = await res.json();
console.log("Agent received:", data);

Combine x402 with a Telegram bot or scheduled tasks to fetch premium data automatically. Each payment step is settled through Chainstack’s RPC endpoint, so the reliability and low latency of the node directly impact how fast the agent receives data. Refer to the x402 guides for full guidance and implementation details.

Migrating from Another Provider

Switching RPC providers only takes minutes. Chainstack endpoints are fully compatible with standard Solana JSON-RPC and WebSocket interfaces, meaning most migrations require only a URL change.

For example, teams moving from Helius’s proprietary getTokenAccounts can switch to the standard getProgramAccounts method with equivalent filters.

import requests

url             = "YOUR_CHAINSTACK_RPC"
TOKEN_PROGRAM   = "TokenkegQfeZyiNwAJbNbGKPFXCWuBvf9Ss623VQ5DA"
MINT_ADDRESS    = "ATLASXmbPQxBUYbxPsV97usA3fPQYEqzQBUHgiFCUsXx"

# Standard replacement for Helius getTokenAccounts
response = requests.post(url, json={
    "jsonrpc": "2.0", "id": 1,
    "method": "getProgramAccounts",
    "params": [
        TOKEN_PROGRAM,
        {
            "dataSlice": { "offset": 0, "length": 0 },
            "filters": [
                { "dataSize": 165 },
                { "memcmp": { "offset": 0, "bytes": MINT_ADDRESS } }
            ]
        }
    ]
})
holders = response.json()["result"]
print(f"Total holders: {len(holders)}")

For teams migrating from Syndica, the swap is a single line:

// Before: Syndica
const connection = new Connection("https://solana-api.syndica.io/...");

// After: Chainstack, no other changes required
const connection = new Connection("YOUR_CHAINSTACK_ENDPOINT");

Refer to the step-by-step migration guides for a smooth transition with minimal disruption.

Staking Provider

Staking platforms such as SolBlaze and Marinade Finance operate validators as core infrastructure. Many validators run enhanced clients such as the Jito-modified Solana client to capture additional MEV tip revenue alongside staking rewards. See the validator setup guide above and the hardware requirements section.

Enterprise / Institutional

Institutional integrations typically require dedicated capacity, predictable latency, and contractual uptime guarantees.

Chainstack’s enterprise plans provide dedicated nodes, global deployments, and uptime commitments above 99.99%. Learn more: Chainstack Enterprise.

Not sure which tier you need? Start on the free Global Node. It covers most development and early-production workloads. When you hit rate limits or need dedicated capacity, Bolt provisions a dedicated node the same day. No migration required, same endpoint format.

Developer tools for Solana node infrastructure

If you’re building on Solana, these are the core infrastructure tools you’ll likely need to run and scale your application.

SDKs & Frameworks

MEV & Transaction Infrastructure

Managed RPC Providers

Monitoring & Observability

Conclusion

Solana’s performance model creates genuine infrastructure differentiation that has direct business consequences. The two protocol roles (validator and RPC) plus the trader-optimized deployment pattern are purpose-built for different workload needs. Running the wrong node type for your workload produces results that look like software problems but are actually infrastructure mismatches.

Validators secure the network and earn staking rewards, but they require significant hardware, uptime, and operational discipline. RPC nodes act as the application serving layer, handling high query throughput and keeping account state memory-resident for fast responses. For latency-sensitive use cases, trader nodes focus on the fastest possible transaction submission to the current leader.

The ecosystem around these node types is now mature and production-ready. Tools like Firedancer improve client diversity and performance, while Jito enables programmable MEV workflows. Managed infrastructure providers such as Chainstack also reduce deployment complexity, allowing teams to select the right node architecture based on workload requirements and scale reliably.

Reliable Solana RPC node infrastructure

Getting started with Solana on Chainstack is fast and straightforward. Developers can deploy a reliable Solana node within seconds through an intuitive Console — no complex setup or hardware management required. 

Chainstack provides low-latency Solana RPC access and real-time gRPC data streaming via Yellowstone Geyser Plugin, ensuring seamless connectivity for building, testing, and scaling DeFi, analytics, and trading applications. With Solana low-latency endpoints powered by global infrastructure, you can achieve lightning-fast response times and consistent performance across regions. 

Start for free, connect your app to a reliable Solana RPC endpoint, and experience how easy it is to build and scale on Solana with Chainstack – one of the best RPC providers.

FAQ

What Solana node roles should I know?

Solana has two protocol-level roles: voting validators and non-voting RPC nodes. “Trader node” is a common low-latency deployment pattern built for faster transaction routing and submission. Validators secure the network and produce blocks, RPC nodes serve blockchain data to applications, and trader nodes optimize transaction submission for ultra-low latency.

What does a Solana validator node do?

A Solana validator node participates in consensus by validating transactions, producing blocks, and voting on the state of the network. Validators maintain a full ledger, run Agave or compatible clients, and earn staking rewards from delegated SOL.

What is a Solana RPC node?

A Solana RPC node provides an interface between applications and the Solana network. It exposes the JSON-RPC API used by wallets, dApps, and analytics systems to read blockchain data, submit transactions, and subscribe to real-time events.

What is a Solana trader node?

A trader node is a Solana node optimized for the fastest possible transaction submission. It is typically used by trading bots, arbitrage systems, and liquidation engines that require minimal latency to land transactions in the current block.

Do I need to run my own Solana node?

Most developers do not need to run their own infrastructure. Managed RPC providers offer production-ready endpoints that remove the operational overhead of maintaining hardware, syncing the ledger, and handling network upgrades.

How much hardware does a Solana node require?

Hardware requirements depend on the node type. Validators typically require high-performance CPUs, large NVMe storage, and hundreds of gigabytes of RAM. RPC nodes often require even more memory to keep account indexes in RAM for fast queries.

Learn more about Solana from our articles
