Bitcoin: Virtual money created by CPU cycles
Posted Nov 11, 2010 16:04 UTC (Thu) by nybble41 (subscriber, #55106)
Worse, those who generate the most blocks within a single private network, with their peer connections most likely routed through a single server, will tend to get updated most quickly when new blocks are found. This lets them start work on the new chain sooner, and thus use their CPU time more effectively. Speeding up block generation would tend to give them even more of an edge in that regard.
Posted Nov 19, 2010 22:18 UTC (Fri) by creighto (guest, #71377)
In a presumed future with a market as large as PayPal(TM), the difficulty would presumably be high enough to render such an attack technically infeasible in addition to economically infeasible.
Posted Nov 22, 2010 22:16 UTC (Mon) by nybble41 (subscriber, #55106)
It is the size of the network which protects against "brute force" attacks, not the difficulty of each individual block.
Posted Nov 23, 2010 0:33 UTC (Tue) by creighto (guest, #71377)
Those two variables are on opposite sides of an equation. When the network grows, the difficulty automatically increases to compensate and maintain a relatively consistent block interval.
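[For readers unfamiliar with the mechanism: Bitcoin retargets difficulty every 2016 blocks, scaling it by how far the actual elapsed time missed the two-week goal. A simplified sketch of that feedback loop (the real implementation works on compact 256-bit targets rather than a floating-point difficulty, but the arithmetic is the same shape):]

```python
def retarget_difficulty(old_difficulty, actual_seconds,
                        target_seconds=2016 * 600):
    """Simplified Bitcoin-style retarget: scale difficulty so that,
    at a constant hash rate, the next 2016 blocks take ~target_seconds
    (2016 blocks at one block per 600 seconds = two weeks)."""
    ratio = target_seconds / actual_seconds
    # Bitcoin clamps each adjustment to a factor of 4 in either direction,
    # so a sudden hash-rate spike can't swing the difficulty wildly.
    ratio = max(0.25, min(4.0, ratio))
    return old_difficulty * ratio

# If the network doubled in size and mined 2016 blocks in one week
# instead of two, difficulty doubles to restore the 10-minute interval:
print(retarget_difficulty(1000.0, 7 * 24 * 3600))  # -> 2000.0
```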
Posted Nov 23, 2010 8:21 UTC (Tue) by nybble41 (subscriber, #55106)
Obviously you don't want the rate at which blocks are generated to increase sharply, since that would make it relatively easy to invalidate some or all of the work that went into creating the existing block chain. If it takes a year to get to block 1000, but only another week to get to block 2000, then you have a problem--a concerted effort could supplant the original block chain, or at least a significant suffix of it, with a new one of the attacker's choice, causing the system to break down (double-spending, etc.). For that to happen to the last day's transactions is one thing, but with a large rate of change a few days' effort could invalidate a much longer period of the chain history.
To avoid that the network has to regulate the rate at which new blocks are generated such that acceleration of that rate, if any, is extremely gradual; currently that means no average change at all, although there is some variation above and below the goal rate between difficulty adjustments.
What I'm saying, however, is that this goal rate doesn't have to be six blocks per hour to prevent so-called "brute force" attacks; provided the clients remain synchronized, it could just as easily be six blocks per *second*. I refer, of course, to a constant rate of six blocks per second since the network was formed, not a sudden 3600x drop in the difficulty from an existing six-blocks-per-hour chain. If there were to be a transition from one rate to the other, it would have to be extremely gradual.
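[The point generalizes to a simple identity: since difficulty self-adjusts to the network's hash rate, the total work an attacker must redo to rewrite a stretch of history depends only on the hash rate and the duration, not on how that work is sliced into blocks. A sketch, with an assumed illustrative hash rate:]

```python
HASHES_PER_SECOND = 1e12  # assumed total network hash rate (illustrative)

def work_to_rewrite(block_interval_s, duration_s):
    """Expected hashes needed to redo duration_s worth of chain when the
    network tunes per-block difficulty to hit block_interval_s."""
    # Difficulty adjusts so one block costs ~interval * hashrate hashes.
    work_per_block = block_interval_s * HASHES_PER_SECOND
    n_blocks = duration_s / block_interval_s
    return n_blocks * work_per_block  # interval cancels out entirely

day = 24 * 3600
slow = work_to_rewrite(600, day)    # six blocks per hour
fast = work_to_rewrite(1 / 6, day)  # six blocks per second
# Same total work either way: per-block difficulty shrinks exactly
# as fast as the block count grows.
```

The interval cancels algebraically, which is why the goal rate is a free parameter with respect to brute-force resistance; the real constraints on it are the propagation-latency and chain-size issues raised below.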
Posted Nov 23, 2010 23:01 UTC (Tue) by creighto (guest, #71377)
The clients could not stay synchronized at anywhere near such a rate, and the rate of blocks does directly affect the growth of the total size of the block chain. The target interval is an arbitrary decision, and one that could be changed with consensus in the future; but if it does change, I would guess that it would be adjusted from 6 blocks an hour to 4 or 3 per hour. The p2p network is currently very small, but in a future with Bitcoin as large a part of the online economy as PayPal, the network would likely be bogged down with latency at anything faster than 10 per minute.
Posted Nov 23, 2010 23:22 UTC (Tue) by creighto (guest, #71377)
Posted Nov 24, 2010 3:21 UTC (Wed) by nybble41 (subscriber, #55106)
Posted Nov 24, 2010 20:00 UTC (Wed) by creighto (guest, #71377)
Copyright © 2013, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds