Why block capacity feels confusing at first
On normal apps, providers can add servers when traffic spikes. On blockchains, capacity is shared and fixed per block, so spikes show up as queues and higher fees. That feels broken if you expect web-style scaling, but the hard limit is a deliberate part of the protocol design.
- We expect apps to scale invisibly.
- We expect identical actions to have predictable latency.
- We expect capacity to be controlled by one operator.
- Blockchains trade that away for open verification and decentralization.

Blocks are limited on purpose
Block limits are protocol parameters chosen to keep verification and propagation feasible for many participants — not just data centers. That decentralization trade-off caps how much activity fits in a block, and therefore caps throughput over time. Raising limits is not a simple switch: it shifts bandwidth, storage, and verification costs onto every node.
Pro Tip: Block limits are a trade-off parameter: increasing them can reduce congestion short-term, but raises bandwidth, storage, and verification costs — pushing the network toward fewer, larger operators.
Key facts
Decentralization
Keeping resource requirements moderate so many different people and organizations can run full nodes, not just a few large data centers.
Security
Ensuring each block can be fully checked for validity, which becomes harder and slower if blocks are too large and complex.
Propagation
Blocks must reach the network fast enough to avoid frequent disagreements and forks.
What block capacity actually means
Block size is the protocol limit on data included in a block (bytes or weight). Block capacity is how many transactions, or how much valid work, fits under that limit — which depends on transaction types. Throughput is the confirmed processing rate over time (for example, transactions per second). Different transactions consume different bytes/weight and different verification work, so “transactions per block” varies even when block size limits stay the same.
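The distinction between size, capacity, and throughput can be made concrete with a few lines of arithmetic. This is an illustrative sketch: the weight limit, transaction weights, and block interval below are made-up numbers, not any real chain's parameters.

```python
# Hypothetical protocol parameters (illustrative, not from a real chain).
BLOCK_WEIGHT_LIMIT = 1_000_000  # block size: max weight units per block
BLOCK_INTERVAL_S = 600          # seconds between blocks

def txs_per_block(tx_weight: int) -> int:
    """Block capacity in transactions, if every transaction had this weight."""
    return BLOCK_WEIGHT_LIMIT // tx_weight

def throughput_tps(tx_weight: int) -> float:
    """Confirmed transactions per second at full blocks: capacity / interval."""
    return txs_per_block(tx_weight) / BLOCK_INTERVAL_S

# Heavier transactions mean fewer fit per block, so "transactions per block"
# varies even though the size limit never changes.
for weight in (250, 500, 2_000):  # e.g. simple payment vs. complex transaction
    print(f"weight={weight}: {txs_per_block(weight)} tx/block, "
          f"{throughput_tps(weight):.2f} tx/s")
```

Note how throughput in transactions per second changes with transaction weight while the protocol's byte/weight limit stays fixed.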

Why unlimited blocks would break the system
At first glance, it can seem obvious: if blocks are too small, why not make them huge and fit everything in? But each block is produced by one proposer at a time, then must be propagated and verified by many independent nodes. Removing limits would slow propagation, raise verification costs, and push the network toward fewer operators.
Key facts
Network propagation
Extremely large blocks move slowly over real‑world internet connections, so different parts of the network see new information at very different times.
Verification costs
The bigger each block is, the more computing and bandwidth are required to fully check it, raising the bar for honest participation.
Centralization pressure
As costs rise, it becomes economical for fewer, well-resourced operators to keep up, concentrating power and reducing diversity.
Pluses
More transactions can fit into each block, increasing short‑term capacity.
May reduce queues during short-term congestion, but shifts costs to nodes.
Minuses
Very large blocks take longer to spread across the network, slowing down synchronization.
Participants need stronger hardware and more bandwidth to stay fully up to date and verify everything.
Higher requirements gradually push out smaller operators, leaving a more centralized set of powerful participants.

How limited capacity affects throughput
Throughput is capped by (1) block capacity and (2) the block interval. In other words, it is how much confirmed work fits per block, multiplied by how often blocks are produced. When demand spikes, throughput does not instantly increase; the system keeps producing blocks at the protocol’s fixed rate.
- More demand than capacity creates a backlog of pending transactions.
- That backlog clears only at the protocol’s fixed processing rate (capacity per block divided by the block interval).
- Spikes increase waiting time; they do not increase throughput.
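The bullets above can be sketched as a toy queue model. The capacity and arrival numbers are illustrative assumptions; the point is that confirmed work per block never exceeds the cap, so a spike shows up as a growing backlog, not higher throughput.

```python
# Toy congestion model (illustrative numbers, not a real chain's parameters).
CAPACITY = 100  # max transactions confirmed per block

def simulate(arrivals_per_block):
    """Return (confirmed, pending) after each block for a demand pattern."""
    backlog = 0
    history = []
    for arriving in arrivals_per_block:
        backlog += arriving
        confirmed = min(backlog, CAPACITY)  # throughput is capped per block
        backlog -= confirmed                # the rest waits for later blocks
        history.append((confirmed, backlog))
    return history

# Quiet demand, a sudden spike, then quiet demand again:
for confirmed, pending in simulate([80, 80, 300, 80, 80, 80]):
    print(f"confirmed={confirmed:3d}  pending={pending:3d}")
```

Even though 300 transactions arrive in one interval, only 100 confirm per block; the backlog drains over the following blocks at the same fixed rate.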

How capacity limits lead to congestion and fees
When demand exceeds capacity, inclusion becomes competitive. Fees (or other priority signals) help decide which valid transactions get included sooner. This is a coordination mechanism under scarcity, not a sign the network stopped working.
- Scarcity: more valid demand than block capacity means not everyone can be included immediately.
- Priority: higher fees can get transactions included sooner when blocks are full.
- Outcome: spikes increase waiting times and typically raise fees until demand eases.
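One common way (among several) that proposers turn fees into a priority signal is greedy selection by fee rate. The mempool entries, field names, and numbers below are hypothetical; this sketch just shows scarcity plus priority in action.

```python
# Hypothetical mempool: each pending transaction has a weight and an offered
# fee. Fee rate (fee per weight unit) decides priority in this sketch.
BLOCK_WEIGHT_LIMIT = 1_000

mempool = [
    {"id": "a", "weight": 400, "fee": 800},  # fee rate 2.0
    {"id": "b", "weight": 500, "fee": 500},  # fee rate 1.0
    {"id": "c", "weight": 300, "fee": 900},  # fee rate 3.0
    {"id": "d", "weight": 400, "fee": 200},  # fee rate 0.5
]

def select_block(txs, limit):
    """Fill the block greedily by descending fee rate until the limit is hit."""
    chosen, used = [], 0
    for tx in sorted(txs, key=lambda t: t["fee"] / t["weight"], reverse=True):
        if used + tx["weight"] <= limit:
            chosen.append(tx["id"])
            used += tx["weight"]
    return chosen

print(select_block(mempool, BLOCK_WEIGHT_LIMIT))
```

With total demand (1,600 weight) above capacity (1,000), only the highest fee-rate transactions make this block; the rest remain valid but wait, which is exactly the congestion-and-fees dynamic described above.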
Pro Tip: Fee spikes reflect demand pressing against limited block capacity, not a sudden rule change. The same limits exist during quiet periods; they are just less visible when there is plenty of room.
What block limits do NOT mean
Seeing full blocks, queues, or high fees doesn’t imply protocol failure — it usually implies demand is above capacity. These limits are not simple throttles that a single company can turn up or down at will. Adjusting them typically requires broad agreement and careful consideration of the side effects. Because block limits are tightly connected to who can participate, how quickly information spreads, and how thoroughly blocks are checked, changing them is a serious protocol decision rather than a casual configuration tweak.
- Block limits are not a simple on/off switch that can be flipped without consequences.
- Raising capacity cannot be done instantly without affecting verification costs and participation requirements.
- A full block or temporary congestion does not mean the system is collapsing; it means demand is high relative to capacity.
- No single party is supposed to unilaterally dictate these limits in a healthy, decentralized network.
A simple mental model to remember
Imagine a building elevator with a posted weight limit. When more people arrive than fit in one trip, a line forms. The elevator keeps moving at a steady pace, carrying only what it can safely handle each trip. Blocks work similarly: each block is one “trip” with a fixed limit, and new trips happen on a schedule.
Calm closing and TL;DR
Intentional block limits keep verification and propagation within reach of many independent participants, not just a few large operators. That broad participation supports decentralization and long-term resilience. If you are wondering why wait times and fees spike while throughput stays flat, block limits are the root constraint.
- Block limits exist to keep verification and propagation feasible.
- Throughput is capped by capacity per block ÷ block interval.
- Higher limits can reduce queues short-term but raise node costs.
- Raising limits is a protocol trade-off, not a simple switch.