Chainstack provides reliable, spec-compliant blockchain RPC nodes for your AI agents—standardized endpoints that your agent can easily integrate with.
Chainstack supplies two kinds of MCP servers: an RPC MCP server for the chains supported by Chainstack, and a Developer Portal MCP server, so your agents can autonomously ingest best practices and apply them to on-chain operations.
Plug-and-play native and spec-compliant RPC APIs.
Built-in health checks — both node native and platform health APIs.
Easy usage control — cap or uncap your request usage in a click; works for both RPS and request volume.
Debug & Trace APIs — full API support for deep analysis and replication.
Transparent pricing — each standard request is 1 RU; each archive or Debug & Trace request is 2 RUs. That’s it.
Thanks to full adherence to each protocol’s native RPC specs (and comprehensive docs with examples), any LLM can connect to our nodes and interact with blockchain networks out-of-the-box. We truly mean any model—from the largest cloud AI to your local lightweight LM.
Compatible with any Large Language Model (LLM)—from cutting-edge giants (OpenAI GPT, Anthropic Claude, Google Gemini, DeepSeek) to compact local models at 1.5B or even less. If it’s an LLM, it works with Chainstack.
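Because the nodes speak standard JSON-RPC, the glue code an agent needs is minimal. A sketch in Python using only the standard library; the endpoint URL below is a placeholder, so swap in your own Chainstack endpoint:

```python
import json
import urllib.request

ENDPOINT = "https://nd-123-456-789.p2pify.com/your-key"  # placeholder endpoint

def build_rpc_request(method: str, params: list, request_id: int = 1) -> bytes:
    """Build a standard JSON-RPC 2.0 request body."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": method,
        "params": params,
    }).encode()

def parse_block_number(response: dict) -> int:
    """eth_blockNumber returns a hex-quantity string; convert it to an int."""
    return int(response["result"], 16)

def latest_block(endpoint: str = ENDPOINT) -> int:
    """POST an eth_blockNumber call and return the current block height."""
    req = urllib.request.Request(
        endpoint,
        data=build_rpc_request("eth_blockNumber", []),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return parse_block_number(json.loads(resp.read()))
```

The same two helpers cover any other RPC method—only the method name and params change.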
We track blockchain activity and TVL closely, ensuring we support all the chains your AI agent might need and continuously adding new ones. EVMs and non-EVMs alike.
Our platform API is well-documented and lets your AI agents programmatically deploy new node endpoints for the networks they require on the fly.
We also provide pre-deployed subgraphs for popular networks and on-chain DeFi protocols such as Uniswap, Curve, PancakeSwap, QuickSwap, Lido, and Aave — on all the chains with the biggest TVL.
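As an illustration, querying one of these subgraphs is a single GraphQL POST. The entity and field names below follow a common Uniswap-v3-style schema and are assumptions; check the schema of the specific subgraph you use:

```python
import json

# Top pools by TVL, Uniswap-v3-style schema (field names are assumptions).
TOP_POOLS_QUERY = """
{
  pools(first: 5, orderBy: totalValueLockedUSD, orderDirection: desc) {
    id
    totalValueLockedUSD
  }
}
"""

def build_graphql_payload(query: str) -> str:
    """Wrap a GraphQL query in the JSON body a subgraph endpoint expects."""
    return json.dumps({"query": query})
```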
We expose all standard health check endpoints for easy use by your AI agents. For example, an agent can call node-specific APIs like `eth_syncing` on any supported protocol, or even check Chainstack’s overall platform status via our live status API. This gives your agent real-time awareness of node health and sync status.
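An agent can interpret the `eth_syncing` result directly: the call returns `false` once the node is fully synced, or a status object with hex-encoded `currentBlock`/`highestBlock` fields while it is catching up. A small helper sketch:

```python
def is_synced(syncing_result) -> bool:
    """eth_syncing returns False when fully synced, else a status object."""
    return syncing_result is False

def sync_progress(syncing_result) -> float:
    """Rough sync progress in [0, 1] from an eth_syncing status object."""
    if syncing_result is False:
        return 1.0
    current = int(syncing_result["currentBlock"], 16)
    highest = int(syncing_result["highestBlock"], 16)
    return current / highest if highest else 0.0
```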
Your AI agents can query node software versions at any time (e.g. via the `web3_clientVersion` call), ensuring they’re always aware of the exact client and version they’re interacting with. This helps keep agents in sync with the underlying infrastructure. We handle all the updates and forks.
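The `web3_clientVersion` response is a single slash-delimited string (for example, `Geth/v1.13.14-stable/linux-amd64/go1.21`), so a tiny parser gives the agent structured client info:

```python
def parse_client_version(version_string: str) -> dict:
    """Split a web3_clientVersion string into client name and version."""
    parts = version_string.split("/")
    return {
        "client": parts[0],
        "version": parts[1] if len(parts) > 1 else None,
    }
```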
All responses from our APIs are standardized and easy to parse—even error messages include helpful details and links to documentation. This consistency is especially useful when an agent hits a usage limit or needs to adjust its behavior (for example, if it must upgrade plans or reduce its requests per second). Your AI agent will know exactly what happened and what to do next.
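A sketch of how an agent might branch on such responses. `-32601` is the standard JSON-RPC "method not found" code; `-32005` is commonly used for rate limiting (per EIP-1474), but verify the exact codes against the docs of the endpoint you use:

```python
def classify_rpc_response(response: dict) -> str:
    """Classify a JSON-RPC response so an agent can decide its next step.
    Error codes here are illustrative; check your provider's documentation."""
    if "error" not in response:
        return "ok"
    code = response["error"].get("code")
    if code == -32005:
        return "rate_limited"      # back off, or raise the RPS cap
    if code == -32601:
        return "method_not_found"  # e.g. the call needs a different node type
    return "error"
```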
Monitor your agent’s RPC usage in real-time via our dashboard. You can set quotas to cap usage to your plan’s limits (to control costs) and get email alerts before each threshold is reached. This ensures your agent doesn’t unexpectedly run up usage without you knowing.
With forking and call simulation available, AI agents can spend as much inference time as they need to reach the best decision. They have all the tools.
Your AI agents can also leverage archive nodes and use Debug & Trace APIs. This means they have access to full historical chain data and low-level execution traces—enabling advanced analysis or extended reasoning on on-chain events when needed.
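For instance, getting a structured call tree for a transaction is one `debug_traceTransaction` call with the `callTracer` option. A sketch of building that request (the transaction hash in the test is a placeholder):

```python
import json

def build_trace_request(tx_hash: str) -> str:
    """debug_traceTransaction with the callTracer, which returns a
    structured call tree rather than a raw opcode-level trace."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": 1,
        "method": "debug_traceTransaction",
        "params": [tx_hash, {"tracer": "callTracer"}],
    })
```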
Once your AI agent is fine-tuned and generating value, you can switch to pay-as-you-go to lift any preset request quotas. If the agent is printing money, don't stop the presses.
Usage is measured in Request Units (RUs): a standard full-node call consumes 1 RU, while archive or Debug & Trace calls count as 2 RUs each. This simple metric makes it easy to predict costs as your agent scales.
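With only two price points, cost forecasting reduces to simple arithmetic. A minimal sketch:

```python
# RU cost per request type, per the pricing above:
# standard full-node call = 1 RU; archive or Debug & Trace call = 2 RUs.
RU_COST = {"standard": 1, "archive": 2, "trace": 2}

def estimate_rus(request_counts: dict) -> int:
    """Total RUs for a batch of requests, keyed by request type."""
    return sum(RU_COST[kind] * count for kind, count in request_counts.items())
```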