Gallus and Simo Debate Whether the Bitcoin Block Size Limit Should Be Increased

Gallus: Bitcoin needs an increase in the maximum block size to avert disaster. If the block size limit isn’t raised soon, blocks will hit it and disaster will ensue.

Simo: Current blocks are nowhere near the size limit.

Gallus: At the current rate of growth of transactions, we’ll get there soon.

Simo: Lightning Network can handle it!

Gallus: Lightning Network isn’t working yet, it forces big settlement transactions onto the chain when its timeouts expire, and it requires a lot of complexity and diligence from endpoints.

Simo: Just wait for the new paper! With a relative timelock opcode everything nets out, no matter how long the relationship between two counterparties has gone on, unless their settlement exceeds the amount they initially deposited, and the diligence can be outsourced to third parties.
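A toy model of the netting property Simo is describing, as a minimal sketch (the names and structure here are illustrative, not any real Lightning implementation): arbitrarily many payments net out off-chain, and only a settlement that would exceed the original deposits forces anything onto the chain.

```python
# Toy two-party payment channel, tracked entirely off-chain.
class Channel:
    def __init__(self, deposit_a: int, deposit_b: int):
        self.balance_a = deposit_a  # satoshis currently credited to A
        self.balance_b = deposit_b  # satoshis currently credited to B

    def pay(self, a_to_b: int) -> None:
        """Apply one off-chain payment; negative means B pays A."""
        new_a = self.balance_a - a_to_b
        new_b = self.balance_b + a_to_b
        if new_a < 0 or new_b < 0:
            # Only when settlement would exceed what was deposited does
            # an on-chain transaction become unavoidable.
            raise ValueError("payment exceeds channel capacity")
        self.balance_a, self.balance_b = new_a, new_b

chan = Channel(deposit_a=100_000, deposit_b=100_000)
for _ in range(100_000):  # a long-lived relationship, zero on-chain txs
    chan.pay(7)
    chan.pay(-7)          # flows in both directions net out indefinitely
print(chan.balance_a, chan.balance_b)  # 100000 100000
```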

Besides, most of the current transactions might be garbage anyway, and the right way to handle everything is with transaction fee increases.

Gallus: You’re just speculating about how much is garbage, and transaction fees destroy zeroconf.

Simo: If you’re more conservative about what sort of malleability you accept then zeroconf works, uh, about as well as it does now. Specifically, if you disallow changes to the outputs other than decreasing payouts, thus implicitly increasing the transaction fee, then there’s little opportunity to defraud via the usual channels.

Zeroconf is still a profoundly bad idea though. If it ever became widespread, that would inevitably lead to the creation of a darknet where alternative transactions could get posted which included kickbacks to the targets of previous mining rewards. There is no good counter to this. Zeroconf advocates should get over it.
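A minimal sketch of that conservative replacement rule, with hypothetical types rather than anything from a real node implementation: a double-spend is tolerated for zeroconf purposes only if every output keeps its destination and its amount doesn’t increase, so the only possible change is a larger implicit fee.

```python
from typing import List, Tuple

Output = Tuple[str, int]  # (destination script, amount in satoshis)

def is_fee_only_replacement(original: List[Output],
                            replacement: List[Output]) -> bool:
    if len(replacement) != len(original):
        return False
    for (old_script, old_amt), (new_script, new_amt) in zip(original, replacement):
        if new_script != old_script:  # payouts must keep their destinations
            return False
        if new_amt > old_amt:         # payouts may only decrease
            return False
    return True

orig = [("pay-to-merchant", 50_000), ("change-to-buyer", 10_000)]
# Shrinking the change output just raises the fee: acceptable.
print(is_fee_only_replacement(orig, [("pay-to-merchant", 50_000),
                                     ("change-to-buyer", 8_000)]))   # True
# Redirecting the merchant's payout is the classic zeroconf fraud: rejected.
print(is_fee_only_replacement(orig, [("pay-to-attacker", 50_000),
                                     ("change-to-buyer", 10_000)]))  # False
```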
Reiterating that transaction fees are the right way to handle everything: the current technique for avoiding denial of service via floods of garbage transactions boils down to letting larger transactions through first, so anyone trying to make lots of transactions with a de minimis amount of cash fronted will have their amounts spread so thin that legitimate transactions take priority. Since there are an average of 600 seconds between blocks, and blocks can handle about 4 transactions per second, and adding a factor of 10 on the assumption that the attacker wants to keep transactions from going through even when blocks happen to be bigger, an attacker who wanted to prevent any transactions of less than $10 from going through would have to front $10 * 600 * 4 * 10, or about $240,000. And that’s fronted, not spent: the attacker can always sell their coins off later (although their value would likely have been badly damaged in the interim). Even adding significantly to this wouldn’t make the security margin particularly good. Transaction fees really are needed.
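The same arithmetic, spelled out (the numbers are the dialogue’s rough assumptions, not measurements):

```python
block_interval_s = 600  # average seconds between blocks
tx_per_second = 4       # approximate current throughput
safety_factor = 10      # cover occasional larger-than-average blocks
threshold_usd = 10      # attacker wants to crowd out txs below this value

capital_fronted = threshold_usd * block_interval_s * tx_per_second * safety_factor
print(f"${capital_fronted:,}")  # $240,000 fronted, not spent
```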

Gallus: The wallet codebases are poorly written and poorly maintained, and can’t realistically be expected to handle real transaction fees.

Simo: If they really needed to then wallets would fix their shit. This is Bitcoin, where the whole point is supposed to be that all the endpoints mutually enforce security against everybody else. If you’re concerned about supporting code which is so shitty it shouldn’t have existed in the first place you should go work for Microsoft. Besides, even a busted wallet can have its keys extracted and put into a wallet which isn’t busted.

Gallus: There isn’t any good way to handle transaction fees.

Simo: Receiver pays (where a new transaction is created spending the output of an old one so it can pay the fee for both of them to go through) works well, even when some wallets are busted, as does the previously mentioned conservative approach to allowing malleability.
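A sketch of receiver pays under those assumptions (the Tx class is a toy stand-in for illustration, not a real wallet API, and the flat 1,000-satoshi fee is assumed):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Tx:
    inputs: List[str]   # ids of the outputs being spent
    output_value: int   # satoshis paid forward
    fee: int

# A fee-less parent pays the receiver 50,000 satoshis.
parent = Tx(inputs=["sender:0"], output_value=50_000, fee=0)

# The receiver spends that output, deducting enough fee for both
# transactions; a miner evaluating the pair sees the combined fee and
# pulls the zero-fee parent in alongside its paying child.
fee_for_both = 2 * 1_000
child = Tx(inputs=["parent:0"],
           output_value=parent.output_value - fee_for_both,
           fee=fee_for_both)

print(f"child pays {child.fee} satoshis to confirm both transactions")
```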

Gallus: Receiver pays doubles the number of transactions, making the block size limit problem worse.

Simo: If a new opcode were added requiring that a particular thing *not* be in the current utxo set, then that would allow multiple receiver pays to be bundled together with each of them using only a few bits, would be very robust against history reorgs, and would require only a single lookup in the utxo database miners already have to maintain.
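A sketch of how validation with such an opcode might look; the opcode is hypothetical (nothing like it exists in Bitcoin Script today) and the semantics here are one reading of Simo’s proposal. A confirmed parent consumes its inputs, removing those outpoints from the utxo set, so a fee-paying bundle asserting those outpoints are *absent* is valid only once every parent it pays for has confirmed, and a reorg that resurrects an outpoint simply invalidates the bundle.

```python
def validate_absence_assertions(asserted_absent: set[str],
                                utxo_set: set[str]) -> bool:
    """Accept the bundle only if no asserted-absent outpoint still exists."""
    return not (asserted_absent & utxo_set)

utxos = {"a:0", "b:1", "c:0"}  # toy snapshot of the utxo database

# Both parents' inputs are spent, so the bundled fee payment is valid.
print(validate_absence_assertions({"d:0", "e:1"}, utxos))  # True
# One parent hasn't confirmed (its input a:0 is still unspent): invalid.
print(validate_absence_assertions({"a:0", "e:1"}, utxos))  # False
```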

Gallus: This is all very complicated to handle.

Simo: It’s just software. See my earlier comment about working for Microsoft.

We need to find out how much those transactions are really worth in order to use good judgement, and hitting the limit is the only way to find out.

Gallus: You’re proposing invoking disaster just to gather some academic data!

Simo: If anything we should be going the other way, artificially forcing transaction fees up before it’s necessary by limiting block sizes to below what demand requires. Then if there were problems we could let the limit go back to normal and spend some time fixing them without creating a compatibility problem. Besides, we don’t even know whether any real damage would be done by hitting limits, because without a demonstrated willingness to pay transaction fees, even temporarily, we have little evidence that bitcoin transactions are creating any real value. Core developers favor running an experiment like this more than they favor increasing the block size limit.

Getting back to the main point, increasing the maximum block size would be ruinous to the bitcoin ecosystem, with vastly fewer full nodes being run.

Gallus: The rate of bandwidth increase is exponential, and there will be plenty.

Simo: As of today, the amount of bandwidth needed to run a full node is a significant disincentive to running one. The start-up cost of downloading the whole blockchain history when bringing up a new node makes the problem much worse than the ongoing rate of download does. The rate of growth of bandwidth is much slower than Moore’s law for computational power, and if you assume that everybody has mass quantities of bandwidth, that bandwidth would be much better spent having wallets run full nodes and retiring SPV.

Besides, increasing the block size is a hard fork, which is unlikely to even happen. At best it would result in two different chains. The miners, who hardly even respond to developers’ entreaties about urgent issues, have little reason to go for it: the whole goal is to avoid or reduce transaction fees, which cuts directly into their bottom line, and demonstrating an ability to make a backwards incompatible change undermines the claim that Bitcoin is a truly decentralized system.

Gallus: The new fork can be merge-mined along with the classic fork. Miners will do whatever is of marginal value to them, and if they can mine both at once at no extra cost they will.

Simo: With only partial miner cooperation the new fork would have substantially less security, and the two chains coexisting would be a disaster, leaving coins which were spent on one fork but not the other in an indeterminate state and causing far worse problems for wallets than transaction fees ever would.

Eventually the only mining incentive left will be transaction fees. If transaction fees aren’t made significant by then, disaster will ensue.

Gallus: Mining rewards could be changed as well.

Simo: Increasing mining rewards would be an even more outrageous hard fork than increasing the block size limit. It would cause extraordinary amounts of real-world waste for no proven value. Our goal is eventually to make Bitcoin more than a cryptographic curiosity and an exercise in the Platonic ideal of Marxist value creation; it should provide some service of value. If it can’t do that, it deserves to fail and be abandoned.
