For most teams, the first chain is easy. You pick Ethereum, you ship a contract, you scrape logs, you call it done. The second chain — usually Base, Arbitrum, Optimism, or zkSync — feels like it should be easier still. Same EVM. Same ABI. Redeploy, repoint your RPC URL, move on.
This is where the industry has been lying to itself for two years.
## “Same architecture” is not the same as “same data”
An L2 is not a cheaper L1. It is a separate execution environment with a different finality model, a different reorg depth, a different fee market, and in the case of zk-rollups, a delay between the sequencer accepting your transaction and the L1 proving it. Your data pipeline has to understand all of that or it will, at some point, lie to a user about money.
The failure modes stack up quickly:
- Optimistic rollups (Arbitrum, Optimism, Base) — you have soft finality from the sequencer in < 1s, but actual finality (fraud-proof window closes) is ~7 days. An indexer that treats sequencer confirmations as final will happily emit events that later get rewritten during a sequencer reorg.
- ZK rollups (zkSync, Starknet, Linea) — the sequencer is effectively final, but the L1 state root lags by minutes to hours. An AI agent trading a position based on zkSync logs might be trading against state that won't land on L1 for 40 minutes.
- L1 reorgs (even on Ethereum) — 1–2 block reorgs are routine. Your indexer has to unwind and re-apply events, not just append them.
- L2 sequencer outages — Arbitrum, Optimism, and Base have all had multi-hour sequencer pauses. If your transport layer doesn't detect the pause and fail over, your product silently stops.
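The reorg case is the one most indexers get wrong: they append events and never unwind. A minimal sketch of reorg-safe ingestion, with hypothetical names (`BlockRef`, `ReorgSafeStore` are illustrative, not `chainindex` APIs): unwind to the fork point before applying a replacement block, and report the unwound events instead of silently dropping them.

```ts
// Illustrative sketch, not a real chainindex API.
interface BlockRef {
  number: number;
  hash: string;
  parentHash: string;
}

interface IndexedEvent {
  blockHash: string;
  payload: string;
}

class ReorgSafeStore {
  private canonical: BlockRef[] = []; // ordered by height
  private eventsByBlock = new Map<string, IndexedEvent[]>();

  // Returns the events that were unwound, so callers can emit
  // "removed" notifications instead of silently rewriting state.
  applyBlock(block: BlockRef, events: IndexedEvent[]): IndexedEvent[] {
    const unwound: IndexedEvent[] = [];
    // Pop blocks until the incoming block extends our chain tip.
    while (
      this.canonical.length > 0 &&
      this.tip()!.hash !== block.parentHash &&
      this.tip()!.number >= block.number
    ) {
      const dropped = this.canonical.pop()!;
      unwound.push(...(this.eventsByBlock.get(dropped.hash) ?? []));
      this.eventsByBlock.delete(dropped.hash);
    }
    this.canonical.push(block);
    this.eventsByBlock.set(block.hash, events);
    return unwound;
  }

  tip(): BlockRef | undefined {
    return this.canonical[this.canonical.length - 1];
  }
}
```

The design choice that matters is the return value: downstream consumers need an explicit "these events no longer exist" signal, not a silent rewrite.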
## What “just deploy the same contracts” leaves out
Deploying the same Solidity bytecode to 5 EVM chains gives you 5 copies of the same contract. It does not give you:
- A decoder that understands the per-chain variants of `eth_getLogs` and each provider's CU pricing
- A transport layer that can fail over between Alchemy, QuickNode, and Infura without leaking state
- A reorg-safe storage layer that knows Optimism's finality guarantees differ from Ethereum's
- A unified view across L1 and L2 where a bridge deposit on L1 correlates with a mint on L2
This is exactly what chainrpc and chainindex are for. Both crates ship today, across EVM, Solana, Cosmos, Substrate, Bitcoin, Aptos, and Sui. The EVM family alone covers Ethereum plus 200+ L2s and sidechains behind the same API surface.
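The last bullet, correlating an L1 bridge deposit with its L2 mint, can be sketched in a few lines. This is a hedged illustration, not ChainFoundry code: the field names (`nonce`, `amount`) and the idea of matching on a shared deposit nonce are assumptions for the example.

```ts
// Hypothetical shapes for a bridge deposit (L1) and its mint (L2).
interface DepositL1 {
  nonce: number;
  amount: number;
}

interface MintL2 {
  nonce: number;
  amount: number;
}

// Returns deposits with no matching mint yet — the "in flight" set a
// unified L1/L2 view has to surface to the user, instead of showing
// the funds as simply missing.
function inFlight(deposits: DepositL1[], mints: MintL2[]): DepositL1[] {
  const minted = new Set(mints.map((m) => m.nonce));
  return deposits.filter((d) => !minted.has(d.nonce));
}
```

The point is that neither chain's indexer alone can answer "where is my money right now"; only a view that joins both sides can.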
## A concrete example
Say you're building an analytics dashboard that shows a user's positions across Ethereum, Base, and Arbitrum. The naive version:
```ts
import { createPublicClient, http } from "viem";

// naive — three providers, three error-handling paths, zero reorg awareness
const eth = createPublicClient({ transport: http(ETH_RPC) });
const base = createPublicClient({ transport: http(BASE_RPC) });
const arb = createPublicClient({ transport: http(ARB_RPC) });

const [ethBal, baseBal, arbBal] = await Promise.all([
  eth.getBalance({ address }),
  base.getBalance({ address }),
  arb.getBalance({ address }),
]);
```

This works on the happy path. It fails the day Base has a 30-minute sequencer pause, or Arbitrum rolls back 2 blocks, or your RPC provider silently rate-limits you. With ChainFoundry:
```rust
use chainrpc::ChainClient;
use chainindex::Indexer;

let eth = ChainClient::evm(ETH_RPC).with_failover(&[ETH_RPC_2, ETH_RPC_3]);
let base = ChainClient::evm(BASE_RPC).with_failover(&[BASE_RPC_2]);
let arb = ChainClient::evm(ARB_RPC).with_failover(&[ARB_RPC_2]);

// Reorg-safe, finality-aware indexer per chain
let indexer = Indexer::sqlite("./portfolio.db")
    .chain(eth).chain(base).chain(arb)
    .build()?;

// Same canonical event shape across all three — L1 and L2
for evt in indexer.events_for(address).await? {
    // evt.finality is Finality::Confirmed | Soft | Pending
    // evt.chain_id tells you which L1/L2 it landed on
    match evt.finality {
        Finality::Pending => show_as_unconfirmed(&evt),
        _ => update_portfolio(&evt),
    }
}
```

## Why this matters for AI agents specifically
An AI agent that trades, monitors, or alerts on blockchain state has no way to tell a user “wait, this transaction might unwind” unless the underlying data pipeline surfaces finality. Today most AI agents call a single RPC, get a response, and treat it as ground truth. On L2s that's wrong; on L1s during a reorg it's wrong; during a sequencer outage it's wrong. The agent's reasoning is only as honest as the data layer underneath.
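Gating the agent on finality is mostly a bookkeeping problem. A minimal sketch of the idea (the names `Finality`, `ChainEvent`, and `agentView` are illustrative, not ChainFoundry's API): split every balance into what has settled and what could still unwind, so the agent can hedge its claims instead of reporting provisional state as fact.

```ts
// Illustrative finality tiers, mirroring the Confirmed/Soft/Pending
// distinction described above.
type Finality = "pending" | "soft" | "confirmed";

interface ChainEvent {
  chainId: number;
  finality: Finality;
  value: number;
}

// Only "confirmed" state is safe to treat as ground truth; everything
// else is surfaced separately so the agent can say "this may unwind".
function agentView(events: ChainEvent[]): { settled: number; provisional: number } {
  let settled = 0;
  let provisional = 0;
  for (const e of events) {
    if (e.finality === "confirmed") settled += e.value;
    else provisional += e.value;
  }
  return { settled, provisional };
}
```

An agent fed this split can at least be honest: "100 settled, 45 still subject to reorg or proof delay" is a very different statement from "145".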
ChainFoundry's job is to make that data layer honest — across 7 architectures, including the one that most people think is “solved”: Ethereum and its L2s.
TL;DR — adding “L2 support” is not a contract deployment problem. It's a data infrastructure problem, and the teams that treat it as the former will ship bugs that the teams that treat it as the latter will not. chainrpc and chainindex are the primitives we built so your team can be in the second group.