πŸ“Š Data Availability: Celestia & EigenDA

Learn how rollups prove data is available without executing it

Separate consensus, data availability, and execution

Data Availability

**Data Availability (DA)** ensures that all transaction data is published and accessible so anyone can verify the blockchain state. In modular architecture, DA is separated from executionβ€”rollups publish data to specialized DA layers like Celestia instead of expensive L1 calldata.

This separation is the **key to scaling**: a rollup can process 100,000 TPS but only needs to publish compressed data (not full state) to the DA layer. Light clients use **data availability sampling** to verify data without downloading everything.
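As a concrete (if simplified) sketch of what gets published, here is a TypeScript snippet for Node.js; the `Tx` shape is made up for illustration, and real rollups use tightly packed binary encodings rather than JSON:

```typescript
import { gzipSync } from "node:zlib";

// Hypothetical transaction shape, purely for illustration.
interface Tx {
  to: string;
  value: number;
  nonce: number;
}

// What a rollup posts to the DA layer: the compressed transaction batch.
// The resulting state (balances, storage) is never published; anyone can
// recompute it by replaying the batch.
function buildDaBlob(txs: Tx[]): Buffer {
  const raw = Buffer.from(JSON.stringify(txs), "utf8");
  return gzipSync(raw);
}

const batch: Tx[] = Array.from({ length: 1000 }, (_, i) => ({
  to: "0xabc...",
  value: 1,
  nonce: i,
}));

console.log(`compressed blob: ${buildDaBlob(batch).length} bytes`);
```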

πŸ’Ύ Why Data Availability Matters

1. **State Reconstruction**: anyone can rebuild the current state from published data, with no trust required (see the sketch after this list)
2. **Fraud Proof Validity**: invalid state transitions can be challenged using the published data
3. **Censorship Resistance**: public data prevents the sequencer from hiding or manipulating transactions
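A minimal sketch of point 1, assuming a toy balance-only state model (not any specific rollup's transaction format): replaying the published data deterministically recovers the state.

```typescript
// Minimal illustrative state model: account balances only.
type State = Map<string, bigint>;

interface Tx {
  from: string;
  to: string;
  value: bigint;
}

// Deterministic state transition: apply one transaction in place.
function applyTx(state: State, tx: Tx): State {
  const fromBal = state.get(tx.from) ?? 0n;
  if (fromBal < tx.value) return state; // invalid tx: skip (rollup rules vary)
  state.set(tx.from, fromBal - tx.value);
  state.set(tx.to, (state.get(tx.to) ?? 0n) + tx.value);
  return state;
}

// Anyone holding the published data can rebuild state from genesis:
// no trust in the sequencer is required, only the data itself.
function reconstructState(genesis: State, publishedTxs: Tx[]): State {
  return publishedTxs.reduce(applyTx, new Map(genesis));
}
```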

DA Provider Comparison

Compare different data availability providers and their tradeoffs.

Ethereum DA (Calldata / Blobs)

- **Cost per KB**: High ($0.01-$1 per KB)
- **Throughput**: ~1.4 MB/block (calldata)
- **Max Block Size**: ~1.5 MB/block (EIP-4844)
- **Security Level**: Maximum (50M+ ETH staked)

βœ“ Advantages
- Highest security
- Native Ethereum integration
- Battle-tested

βœ— Disadvantages
- Most expensive
- Limited throughput
- No DA sampling

Cost Calculator

Calculate DA costs for different providers from the rollup block size (0.1 MB to 10 MB). Example output:

- **Cost per Block**: $5.00
- **Daily Cost (7,200 blocks)**: $36,000.00
- **Monthly Cost**: $1,080,000.00
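The arithmetic behind the calculator is straightforward. A sketch, where the per-MB rate and the 1 MB block size are placeholder inputs chosen to reproduce the figures above, not quoted prices:

```typescript
// DA cost model behind the calculator above.
function daCosts(blockSizeMb: number, costPerMb: number) {
  const BLOCKS_PER_DAY = 7200; // ~12 s slots on Ethereum
  const DAYS_PER_MONTH = 30;
  const perBlock = blockSizeMb * costPerMb;
  return {
    perBlock,
    perDay: perBlock * BLOCKS_PER_DAY,
    perMonth: perBlock * BLOCKS_PER_DAY * DAYS_PER_MONTH,
  };
}

// Reproduces the example figures: $5.00/block, $36,000/day, $1,080,000/month.
console.log(daCosts(1, 5));
```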

Cost Comparison

- Ethereum is the most expensive option but offers maximum security from ETH staking

πŸ” Data Availability Sampling

DA sampling lets light clients verify data availability without downloading full blocks. Blocks are expanded with **erasure coding** before publication, so a block producer must withhold more than half of the coded chunks to make the data unrecoverable; a light client that randomly samples even a handful of chunks will therefore detect withholding with high probability.
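To see why any 50% of chunks can suffice, here is a toy Reed-Solomon-style encoder over a small prime field; the field size, chunk count, and evaluation points are illustrative choices, not any DA layer's actual parameters:

```typescript
// Toy Reed-Solomon-style erasure coding over a small prime field.
// Illustrative only: real DA layers use optimized codes and 2D extensions,
// but the "any k of 2k chunks reconstruct" property is the same.
const P = 65537n; // prime modulus for the demo field

const mod = (a: bigint): bigint => ((a % P) + P) % P;

// Modular inverse via Fermat's little theorem: a^(P-2) mod P.
function inv(a: bigint): bigint {
  let result = 1n, base = mod(a), e = P - 2n;
  while (e > 0n) {
    if (e & 1n) result = mod(result * base);
    base = mod(base * base);
    e >>= 1n;
  }
  return result;
}

// Horner evaluation of the data polynomial at x.
function evalPoly(coeffs: bigint[], x: bigint): bigint {
  let y = 0n;
  for (let i = coeffs.length - 1; i >= 0; i--) y = mod(y * x + coeffs[i]);
  return y;
}

// 2x expansion: k data chunks (polynomial coefficients) become 2k coded
// chunks (evaluations at x = 1..2k).
function encode(data: bigint[]): Array<[bigint, bigint]> {
  return Array.from({ length: 2 * data.length }, (_, i) => {
    const x = BigInt(i + 1);
    return [x, evalPoly(data, x)] as [bigint, bigint];
  });
}

// Lagrange interpolation: any k coded chunks determine the degree < k
// polynomial, so every other chunk (and hence the data) is recoverable.
function recoverAt(points: Array<[bigint, bigint]>, target: bigint): bigint {
  let acc = 0n;
  for (const [xi, yi] of points) {
    let num = 1n, den = 1n;
    for (const [xj] of points) {
      if (xj !== xi) {
        num = mod(num * (target - xj));
        den = mod(den * (xi - xj));
      }
    }
    acc = mod(acc + yi * num * inv(den));
  }
  return acc;
}

// Demo: 4 data chunks -> 8 coded chunks; drop half, recover a lost one.
const chunks = encode([3n, 1n, 4n, 1n]);
const anyHalf = chunks.slice(4); // keep only the chunks at x = 5..8
console.log(recoverAt(anyHalf, 2n) === chunks[1][1]); // true
```

Encoding 4 data chunks yields 8 coded chunks, and any 4 of them pin down the degree-3 polynomial, so every missing chunk is recoverable; this is the property sampling relies on.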

- **Light Client**: downloads ~100 KB of samples instead of a full 10 MB block
- **Erasure Coding**: 2x data expansion; any 50% of chunks suffice to reconstruct
- **Probability**: 99.99%+ confidence with just 30 random samples
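A sketch of the confidence math, assuming the standard DAS model in which an unrecoverable block must have at least half of its coded chunks withheld, so each uniformly random sample hits a withheld chunk with probability at least 0.5:

```typescript
// Probability that k independent uniform samples all miss the withheld chunks,
// given that a fraction `withheld` (>= 0.5 for unrecoverable data under
// 2x erasure coding) of chunks is unavailable.
function failureProbability(k: number, withheld = 0.5): number {
  return Math.pow(1 - withheld, k);
}

// Confidence in availability after k successful samples.
const confidence = (k: number): number => 1 - failureProbability(k);

console.log(confidence(15)); // ~0.99997: already past 99.99%
console.log(confidence(30)); // ~0.9999999991: failure odds below one in a billion
```

Under this model, 30 samples drive the failure probability below 10^-9, so the 99.99% figure above is a conservative bound.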