Chainstack Dedicated Nodes cross-protocol and cloud environment benchmark test

At Chainstack, we believe in delivering superior performance that sets us apart in the blockchain space. Our dedicated nodes form the backbone of our network, each replicating the entire blockchain data structure and keeping the information up-to-date. By verifying and validating new information broadcasts, these nodes enhance the network’s security, providing robust and reliable service.
In this comprehensive blog post, we’re peeling back the layers to disclose an in-depth benchmark analysis of Chainstack Dedicated Nodes, part of our latest study. Focusing on key protocols such as Ethereum, Polygon, BNB Smart Chain, and Solana, our nodes underwent a rigorous assessment to measure their performance, highlighting their exemplary efficiency and capacity. Let’s get to it!
Understanding the load test methodology
To ensure that our blockchain node services stand up to the test of real-world demands, we at Chainstack embarked on an extensive performance assessment journey. Our methodology involved emulating realistic user interactivity patterns by employing specialized load test profiles.
The purpose of load test profiles is to recreate the rigorous conditions that nodes often face in real-life situations. These profiles enable us to simulate diverse user interactions and request loads that our nodes might encounter, thereby providing us with data that truly reflects the capacity and potential of our dedicated nodes.
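A load test profile of this kind can be thought of as a weighted mix of JSON-RPC methods replayed at a target request rate. The sketch below is a minimal, hypothetical illustration in Python; the method names and weights are placeholders, not the actual profiles used in our study:

```python
import json
import random

# Hypothetical request mix for an EVM node; methods and weights are
# illustrative only, not the profiles used in the benchmark.
REQUEST_MIX = [
    ("eth_blockNumber", 0.4),
    ("eth_getBalance", 0.3),
    ("eth_call", 0.2),
    ("eth_getLogs", 0.1),
]

def pick_method(rng):
    """Sample one JSON-RPC method according to the profile weights."""
    r, acc = rng.random(), 0.0
    for method, weight in REQUEST_MIX:
        acc += weight
        if r < acc:
            return method
    return REQUEST_MIX[-1][0]

def build_request(method, req_id):
    """Serialize a single JSON-RPC 2.0 request body."""
    return json.dumps(
        {"jsonrpc": "2.0", "id": req_id, "method": method, "params": []}
    )

# A load profile is then a stream of such requests, paced so that each
# virtual user issues one request every 1 / target_rps seconds.
requests = [build_request(pick_method(random.Random(i)), i) for i in range(5)]
```

Pacing the generated requests at progressively higher target rates, while recording each response's latency, is what produces RPS and percentile figures of the kind discussed below.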
Our study stretches beyond just load tests. We also considered various configurations and cloud environment settings to better grasp how these factors might affect the performance of each node. This multipronged approach assured broader coverage of potential scenarios, thereby adding depth and perspective to our analysis.
The cloud environments included in our study were Chainstack Cloud Latitude, Virtuozzo (VZO), Amazon Web Services (AWS), and the Chainstack Cloud AMS and NYC environments, each with unique characteristics carefully considered during testing.
Our mission was twofold: to maximize the realism of our tests to ensure the results had practical applicability and to explore a broad spectrum of cloud environments to help our clients understand how different scenarios could impact node performance.
Key findings across protocols
Our thorough examination encompassed several blockchain protocols, each demonstrating unique characteristics and performances under various testing conditions. Here’s a summary of our vital findings:
Ethereum
In the Chainstack Cloud Latitude environment, our dedicated Ethereum nodes excelled, showcasing a remarkable transaction-handling capacity with a maximum RPS of 1670, significantly outperforming other environments while delivering superior response times and efficient resource consumption.

On the Virtuozzo (VZO) platform, these nodes exhibited robustness under load, managing a maximum RPS of 740. This performance underlines their dependability with appreciable response times and peak resource utilization.

When tested on Amazon Web Services (AWS), the Ethereum nodes achieved compelling results, reaching a maximum RPS of 677.6 and recording the best average response times among all tested environments.

Polygon
Our dedicated Polygon nodes showcased exceptional transaction throughput in the Chainstack Cloud Latitude environment, achieving the highest rate of 800 RPS. They maintained optimal response times, effectively balancing efficiency with substantial resource utilization.

In the VZO environment, these nodes consistently handled a moderate throughput of 550 RPS with minimal fluctuations in response time, emphasizing stability.

In the AWS environment, Polygon nodes demonstrated endurance, supporting a throughput of 460 RPS, closely matching the VZO’s upper performance limits and indicating robustness under various operational conditions.

BNB Smart Chain
In the Chainstack Cloud Latitude environment, the dedicated nodes for BNB Smart Chain registered an exceptional maximum RPS of 910, the highest among environments, showcasing efficient handling of peak loads while keeping resource consumption relatively low.

When tested in the VZO environment, they delivered reliable performance with a throughput of 460 RPS, demonstrating resilience under varying transactional demands.

In the AWS environment, the BNB Smart Chain nodes highlighted consistent performance with a robust throughput of 710 RPS, maintaining stability across operational stresses.

Solana
In the Chainstack Cloud AMS environment without GPA (the getProgramAccounts method, covered in detail below), transaction processing was tested at an average of 1811 RPS. This measurement gives insight into the base performance of the nodes in handling transactions without the additional work that GPA introduces.

Conversely, when GPA was enabled within the AMS setting, the average RPS was slightly lower at 1389. This reflects the computational overhead that GPA introduces, trading some transaction throughput for the ability to serve the heavier account-scanning queries that GPA-dependent applications require.

Turning to the NYC environment, the impact of not utilizing GPA was significant, allowing the nodes to achieve a much higher average RPS of 5906.84. This showcases the environment’s robust capacity to handle transactions at a high velocity, emphasizing the efficiency of the nodes under less complex operational demands.

With GPA activated in the NYC environment, the average RPS was recorded at 2203.62. Although this is a decrease compared to the non-GPA scenario, it still demonstrates considerable processing power, underscoring the trade-offs between system stability and transaction speed when GPA is utilized.

Response times and resource consumption
Exploring the correlation between response times and resource consumption gives us insightful data relating to system efficiency and transaction handling capacity. Here’s an overview of our findings:
Ethereum
Our dedicated Ethereum nodes in the Chainstack Cloud Latitude environment demonstrated superior performance, with response times recorded at 320 ms for the 95th percentile and 800 ms for the 99th percentile of requests. Resource consumption in this environment peaked at 20 GB of RAM and 6.92 CPU cores, showcasing efficient scaling capabilities and an excellent balance between response times and resource usage.
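As a side note, percentile figures like these can be computed from raw latency samples with the nearest-rank method. The snippet below is a generic sketch with made-up sample values, not our measurement pipeline:

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile: the smallest recorded latency such that
    at least p percent of all samples are less than or equal to it."""
    if not samples:
        raise ValueError("no samples")
    ordered = sorted(samples)
    rank = math.ceil(p / 100 * len(ordered))  # 1-based nearest rank
    return ordered[rank - 1]

# Made-up response times in milliseconds for illustration.
latencies_ms = [120, 95, 310, 180, 220, 140, 800, 160, 130, 150]
p95 = percentile(latencies_ms, 95)  # with only 10 samples, p95 and p99
p99 = percentile(latencies_ms, 99)  # both land on the slowest request
```

As the comment hints, tail percentiles are only meaningful with a large sample count, which is why sustained load runs rather than single spot checks underpin the figures in this section.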

On the Virtuozzo (VZO) platform, the nodes recorded response times of 240 ms for the 95th percentile and 820 ms for the 99th percentile of requests. Peak resource utilization was measured at 9.86 GB of RAM and 6.61 CPU cores, indicating robustness under load and optimally balanced load handling.

In the AWS environment, these nodes achieved the best average response times among the environments tested, clocking at 180 ms for the 95th percentile and 590 ms for the 99th percentile of requests. With resource usage peaking at 15.9 GB of RAM and 5.4 CPU cores, these nodes are an excellent choice for applications sensitive to response times.

Polygon
Our dedicated Polygon nodes in the Chainstack Cloud Latitude environment demonstrated substantial throughput while maintaining optimal response times of 160 ms for the 95th percentile and 960 ms for the 99th percentile of requests. These nodes sustained high transaction throughput without sacrificing response times, showing that the usual trade-off between the two can be kept in check. Resource utilization in this environment included 11.7 CPU cores and 37.6 GB of memory.

In the VZO and AWS environments, Polygon nodes displayed a well-balanced performance between response times and throughput. The VZO environment recorded response times of 270 ms for the 95th percentile and 1400 ms for the 99th percentile, with resource use peaking at 12.7 CPU cores and 89.9 GB of memory, emphasizing a high-capacity operation mode.

Similarly, the AWS environment achieved response times of 310 ms for the 95th percentile and 1400 ms for the 99th percentile, with peak resource consumption at 14.1 CPU cores and 39.1 GB of memory. This consistency makes them a reliable choice for applications requiring dependable performance.

BNB Smart Chain
Our BNB Smart Chain nodes, particularly in the Chainstack Cloud Latitude environment, showcased exceptional handling of peak loads. The nodes achieved a maximum response time of only 230 ms for the 99th percentile of requests, while keeping resource consumption low at 21.4 GB of memory and 13.5 CPU cores.

In the VZO and AWS environments, these nodes displayed robust performance tailored to diverse operational demands. In the VZO environment, nodes showed a response time of 300 ms for the 95th percentile and 640 ms for the 99th percentile, with resource consumption peaking at 82.3 GB of memory and 13.9 CPU cores.

Meanwhile, the AWS environment maintained consistent performance, with response times of 320 ms for the 95th percentile and 860 ms for the 99th percentile, and peak resource usage at 88.4 GB of memory and 12.4 CPU cores. These attributes make them an optimal solution for a variety of operational needs.

Solana
In the Chainstack Cloud AMS environment without GPA, the Solana nodes demonstrated a median response time of 67 ms. Despite this quick median, 99th percentile response times reached up to 3400 ms, showing how the system copes with intense load conditions without the complexity of GPA. Resource utilization in this setup peaked at 1070 GB of memory, which, combined with an average CPU usage of 18.0 cores, suggests a focused distribution of computational resources aimed at maintaining performance stability under varied load conditions.

With GPA enabled in the AMS environment, the nodes showed a slightly slower median response time of 69 ms, with 99th percentile response times of 3300 ms. Memory usage was comparable, peaking at 1060 GB, indicating that supporting GPA's more complex queries affects throughput more than it does memory consumption, while the node remains stable.

In the NYC environment, the performance of the Solana nodes without GPA was notably more efficient, exhibiting a median response time of 66 ms and a much lower 99th percentile response time of 2000 ms. Resource usage in this scenario was also lower, with a maximum memory usage of 860 GB. These figures highlight the enhanced processing capabilities and resource efficiency achieved when GPA is not active, allowing for faster transaction processing and reduced system strain.

Conversely, when GPA was active in the NYC environment, the nodes managed a median response time of 55 ms, demonstrating rapid initial responses to queries. Under heavy load conditions, however, the 99th percentile response times soared to 5000 ms. Resource consumption was substantially higher in this setting, with CPU usage peaking at 42 cores and memory usage reaching up to 880 GB. This increased resource demand underlines the significant impact of GPA on the node's ability to manage complex operations, ensuring stability at the cost of higher resource consumption and potential delays during peak loads.

Implications of GPAs on efficiency and transaction processing
The getProgramAccounts method, or GPA, is an integral part of Solana's protocol and plays a crucial role in stability and peak transaction-load handling. But how does GPA impact a node's efficiency and transaction processing capabilities? We've laid out our key findings here:
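For context, GPA is invoked as a regular Solana JSON-RPC call that returns every account owned by a given program, optionally narrowed by filters. The sketch below builds such a request body; the program ID is a placeholder, and the dataSize filter (165 bytes, the size of a standard SPL token account) is just a common example:

```python
import json

def gpa_request(program_id, req_id=1):
    """Build a getProgramAccounts (GPA) JSON-RPC request body.

    Serving this call means scanning or indexing every account owned by
    the program, which is why GPA support adds resource overhead on the
    node side."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": req_id,
        "method": "getProgramAccounts",
        "params": [
            program_id,
            # Return only accounts whose data is exactly 165 bytes,
            # base64-encoded; both are documented GPA options.
            {"encoding": "base64", "filters": [{"dataSize": 165}]},
        ],
    })

# Placeholder program ID for illustration purposes only.
payload = gpa_request("YourProgram1111111111111111111111111111111")
```

Because a single GPA call can touch a very large number of accounts, enabling it changes a node's resource profile in the ways the findings below describe.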
Efficiency with GPA enabled
In the Chainstack Cloud NYC environment with GPA enabled, we observed rapid median response times of 55 ms. Although the 99th percentile response times peaked at 5000 ms under maximum loads, overall CPU utilization peaked at 42 cores and memory usage reached 880 GB, demonstrating a significant resource commitment to maintain stability even under heavy loads.
Efficiency without GPA enabled
Without GPA, the Solana nodes in the NYC environment displayed a remarkable average RPS of 5906.84 while maintaining excellent median response times of 66 ms. This configuration demonstrated superior raw performance and efficiency, highlighting the trade-off between transaction speed and support for the heavier, more complex queries that GPA entails.
Comparison across environments
The NYC and AMS environments offered distinct performance metrics. NYC consistently outperformed AMS in terms of RPS, achieving faster transaction speeds regardless of whether GPA was enabled. This underscores the location-specific advantages of the NYC setting.
Conversely, AMS, while demonstrating substantial memory usage with an average of 1005 GB with GPA and 1040 GB without, had lower CPU utilization, indicating different resource management strategies. This contrast highlights the nuanced approaches in managing resources between the two locations.
The inclusion or exclusion of GPA, along with the choice of cloud environment, plays a crucial role in determining node efficiency and transaction processing capabilities. This insight enables us to strategically tailor our solutions to optimize both stability and performance, offering more effective and customized solutions to our clients.
Strategic implications for node infrastructure planning
Our comprehensive research offers crucial insights that pave the way for strategic decisions regarding node deployment and infrastructure planning. Here’s what we’ve learned:
Optimal environment selection
Our testing across various environments—Virtuozzo (VZO), AWS, and Chainstack Cloud—highlighted distinct advantages and constraints. For instance, Ethereum nodes performed best in the Chainstack Cloud Latitude environment with the highest RPS and efficient resource utilization, suggesting that environment selection should be tailored to optimize specific blockchain protocols.
Resource management
Insights into resource utilization across different settings assist in efficient resource allocation. For example, the BNB Smart Chain’s low resource use combined with high transaction rates in the Chainstack Cloud Latitude environment suggests that computational efficiency can significantly enhance performance. Such data help balance robust transaction handling capacity against resource demands to ensure optimal performance under peak loads.
Efficiency vs. stability
Our exploration into the effects of enabling or disabling getProgramAccounts (GPA) provides a framework for managing the trade-off between operational efficiency and stability. The choice to use GPA will depend on the specific needs of clients, influenced by how different load profiles affect performance outcomes.
Location-specific deployment
Our findings underline the significance of location-specific advantages. The NYC and AMS environments demonstrated notable differences in handling transactions, with NYC consistently outperforming AMS in terms of RPS. This can inform strategic decisions on where to deploy nodes to maximize transaction processing efficiency.
Our commitment to understanding every aspect of our dedicated blockchain nodes’ performance enables us to deliver unparalleled service. By leveraging this detailed analysis, Chainstack is well-positioned to offer optimized, efficient, and robust blockchain solutions, precisely tailored to meet diverse client needs.
Bringing it all together
Our journey into the depths of Chainstack Dedicated Nodes' performance is a testament to our commitment to delivering superior services. By adopting a meticulous approach through specialized load tests and simulated real-world interactions, we've uncovered invaluable insights into performance, resource efficiency, and scalability. Our exploration stretched across multiple protocols—Ethereum, Polygon, BNB Smart Chain, and Solana—and diverse cloud environments.
We've unpacked how varying environments influence node performance and how the inclusion or exclusion of getProgramAccounts (GPA) affects node operation and transaction load management. In doing so, we've paved the way for optimized infrastructure planning and precise node deployment.
Moreover, our findings underline the flexibility, resilience, and remarkable efficiency of Chainstack Dedicated Nodes. They also shine a light on the importance of understanding response times, resource consumption, and transaction-handling efficiency.
At Chainstack, these findings represent more than just data. They demonstrate our dedication to offering efficient, secure, and robust solutions tailored specifically to our clients' needs. This extensive study lays a strong foundation for the future, enabling us to continue advancing and offering unmatched blockchain services.
We work tirelessly in our pursuit of maximizing the potential of blockchain technology because we believe in the capacity of our dedicated nodes. We believe in delivering solutions that revolutionize how business is done. And as we continue to innovate and drive forward, we invite you to join us on this exciting journey.
Power-boost your project on Chainstack
- Discover how you can save thousands in infra costs every month with our unbeatable pricing on the most complete Web3 development platform.
- Input your workload and see how affordable Chainstack is compared to other RPC providers.
- Connect to Ethereum, Solana, BNB Smart Chain, Polygon, Arbitrum, Base, Optimism, Avalanche, TON, Ronin, zkSync Era, Starknet, Scroll, Aptos, Fantom, Cronos, Gnosis Chain, Klaytn, Moonbeam, Celo, Aurora, Oasis Sapphire, Polygon zkEVM, Bitcoin and Harmony mainnet or testnets through an interface designed to help you get the job done.
- To learn more about Chainstack, visit our Developer Portal or join our Discord server and Telegram group.
- Are you in need of testnet tokens? Request some from our faucets. Multi-chain faucet, Sepolia faucet, Holesky faucet, BNB faucet, zkSync faucet, Scroll faucet.
Have you already explored what you can achieve with Chainstack? Get started for free today.