14. On Block Time, an Automated Test, and the Libertarian Viewpoint
IN THIS POST, Satoshi explains why a single pending-transaction pool suffices and how pending transactions are tracked when parallel branches of blocks exist, referencing a few functions within the code. Recall the discussion of proof-of-work in Chapter 2: not all miners will have assembled the same transactions, some of which may have arrived too late to be included in the block a given miner is working on. Transactions that arrive while a miner is searching for the hash of its current block are stored in a transaction pool.
Then he touches again on transaction propagation and the 10 minutes allocated for the creation of each block, discussing whether that might be too short a period of time.
Lastly, he notes how Bitcoin could be attractive to libertarians, people who advocate individual liberty.
Re: Bitcoin P2P e-cash paper
Satoshi Nakamoto Fri, 14 Nov 2008 14:29:22 -0800
Hal Finney wrote:
I think it is necessary that nodes keep a separate pending-transaction list associated with each candidate chain.
. . . One might also ask . . . how many candidate chains must a given node keep track of at one time, on average?
Fortunately, it’s only necessary to keep a pending-transaction pool for the current best branch. When a new block arrives for the best branch, ConnectBlock removes the block’s transactions from the pending-tx pool. If a different branch becomes longer, it calls DisconnectBlock on the main branch down to the fork, returning the block transactions to the pending-tx pool, and calls ConnectBlock on the new branch, sopping back up any transactions that were in both branches. It’s expected that reorgs like this would be rare and shallow.
With this optimisation, candidate branches are not really any burden. They just sit on the disk and don’t require attention unless they ever become the main chain.
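A minimal sketch of the bookkeeping Satoshi describes, in the spirit of the ConnectBlock and DisconnectBlock functions he names. The types, the pool representation, and the Reorganize helper here are hypothetical simplifications, not the actual Bitcoin code:

// Editorial sketch (not Satoshi's actual code): a single pending-tx pool
// tracking the current best branch. Tx, Block, and Reorganize are
// hypothetical; only ConnectBlock/DisconnectBlock are named in the email.
#include <map>
#include <string>
#include <vector>

struct Tx { std::string id; };
struct Block { std::vector<Tx> txs; };

// Pending pool, keyed by transaction id.
std::map<std::string, Tx> pendingPool;

// A new block joins the best branch: its transactions are no longer pending.
void ConnectBlock(const Block& block) {
    for (const Tx& tx : block.txs)
        pendingPool.erase(tx.id);
}

// A block leaves the best branch during a reorg: return its transactions
// to the pool, so ConnectBlock on the new branch can "sop them back up".
void DisconnectBlock(const Block& block) {
    for (const Tx& tx : block.txs)
        pendingPool[tx.id] = tx;
}

// Reorganize from the old best branch to a longer one: walk back to the
// fork point, then forward along the new branch.
void Reorganize(const std::vector<Block>& oldBranch,  // tip -> fork
                const std::vector<Block>& newBranch)  // fork -> tip
{
    for (const Block& b : oldBranch) DisconnectBlock(b);
    for (const Block& b : newBranch) ConnectBlock(b);
}

Note how the pool ends up holding exactly the transactions that were in the abandoned branch but not in the new one, which is why candidate branches on disk need no ongoing attention.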
Or as James raised earlier, if the network broadcast is reliable but depends on a potentially slow flooding algorithm, how does that impact performance?
Broadcasts will probably be almost completely reliable. TCP transmissions are rarely ever dropped these days, and the broadcast protocol has a retry mechanism to get the data from other nodes after a while. If broadcasts turn out to be slower in practice than expected, the target time between blocks may have to be increased to avoid wasting resources. We want blocks to usually propagate in much less time than it takes to generate them, otherwise nodes would spend too much time working on obsolete blocks.
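A back-of-the-envelope way to see Satoshi's point about propagation versus generation time. This calculation is editorial, not from the email: if blocks arrive as a Poisson process at one block per T seconds and a new block takes d seconds to propagate, the chance that some other node finds a competing block in that window is roughly 1 - e^(-d/T). The delay values below are assumptions for illustration:

// Editorial back-of-the-envelope: fraction of blocks made obsolete by
// slow propagation, assuming Poisson block arrivals at one block per T
// seconds and a fixed propagation delay d.
#include <cmath>
#include <cstdio>
#include <initializer_list>

int main() {
    const double T = 600.0;                 // target seconds per block
    for (double d : {5.0, 30.0, 120.0}) {   // assumed propagation delays
        double wasted = 1.0 - std::exp(-d / T);
        std::printf("delay %5.0fs -> ~%.1f%% of blocks obsoleted\n",
                    d, 100.0 * wasted);
    }
    return 0;
}

At a 30-second delay the loss is only about 5 percent, but at two minutes it approaches 18 percent, which is the kind of waste that would justify increasing the target time between blocks.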
I’m planning to run an automated test with computers randomly sending payments to each other and randomly dropping packets.
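The email does not show the test harness itself, but a test of this shape can be sketched as follows. Everything here, from the node count to the drop rate and the single retry round, is a hypothetical stand-in for whatever Satoshi actually ran:

// Editorial sketch of the kind of test described: simulated nodes randomly
// send payments to each other while each message is dropped with some
// probability, relying on one retry round (standing in for the broadcast
// protocol's retry mechanism) to deliver it.
#include <cstdio>
#include <random>

int main() {
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> coin(0.0, 1.0);
    std::uniform_int_distribution<int> pick(0, 9);   // 10 simulated nodes

    const double dropRate = 0.2;   // assumed per-message drop probability
    int sent = 0, firstTry = 0, onRetry = 0;

    for (int i = 0; i < 1000; ++i) {
        int from = pick(rng), to = pick(rng);
        if (from == to) continue;              // no self-payments
        ++sent;
        if (coin(rng) >= dropRate)
            ++firstTry;                        // broadcast got through
        else if (coin(rng) >= dropRate)
            ++onRetry;                         // retry fetched it from a peer
    }
    std::printf("sent=%d first-try=%d retry=%d still-missing=%d\n",
                sent, firstTry, onRetry, sent - firstTry - onRetry);
    return 0;
}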
3. The bitcoin system turns out to be socially useful and valuable, so that node operators feel that they are making a beneficial contribution to the world by their efforts (similar to the various “@Home” compute projects where people volunteer their compute resources for good causes).
In this case it seems to me that simple altruism can suffice to keep the network running properly.
It’s very attractive to the libertarian viewpoint if we can explain it properly. I’m better with code than with words though.
Satoshi Nakamoto
The Cryptography Mailing List