The current proposals for increasing the maximum possible Blocksize have been 'good'. A number of the proposals discussed will actually work well, including simply bumping the maximum Blocksize up to 20MB. But working well and working optimally are two very different things.

First off, increasing the maximum Blocksize to 20MB (or 10MB, 8MB, 4MB, etc.), which has been proposed in the past and tested by Gavin, is a reasonable solution and does solve our overarching scalability issue of increasing Bitcoin's maximum number of possible transactions per second. Setting aside the discussion regarding centralization of full nodes, my biggest problem with this solution is that it solves the problem in such a way that humans will have to revisit the same problem again.

This solution is just too simple. The debate over whether a fully packed 20MB block is too large will not matter in roughly two years' time. Computer hardware has been improving exponentially, and eventually 20MB blocks will seem tiny by most standards; for some, 20MB blocks are already easily manageable today.

If we assume every block is full @ 20MB we have the following calculation:

20MB * 144 Blocks * 365 Days = Max possible blockchain growth per year

This is a gross overestimation, since it assumes every block is entirely packed with 20MB worth of transaction data.

Python:

# Maximum possible blockchain growth per year with consistently full 20MB blocks:
# 20 MB per block * 144 blocks per day * 365 days, converted to GB
print('~' + str((20 * 144 * 365) // 1024) + 'GB')
# ~1026GB

With full 20MB blocks, each year a user would need ~1TB of additional disk space to maintain a full Bitcoin node. Today this is not an unreasonable request, with 1TB of storage costing roughly 70 USD. At the time of writing, one Bitcoin is ~225 USD, so it would cost roughly one third of a Bitcoin per year to maintain a node at today's prices. In approximately one year, 1TB will likely cost nearly half as much, putting it close to 35 USD.
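As a quick sanity check on that one-third figure, here is the arithmetic using the prices quoted above (the prices are of course snapshots at the time of writing, not constants):

Python:

# Yearly storage cost of a full node, priced in BTC
storage_usd_per_tb = 70.0   # approximate cost of 1TB of disk space
btc_usd = 225.0             # approximate price of one Bitcoin
print(round(storage_usd_per_tb / btc_usd, 2), 'BTC per year')
# 0.31 BTC per year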

There is much more to this discussion than storage space alone. I could talk about how block propagation time increases with block size; larger blocks make it harder for those with sub-par internet to compete. With Gavin's solution the size of blocks will not balloon to 20MB immediately, since we still need the transaction data to fill up the blocks. The flaw I see with Gavin's solution is that eventually someone will need to manually raise the maximum block size again to whatever we feel is good. At some point in the future we will most likely not care how large a block's maximum size is, and we can debate removing the size limit if hardware advances allow it.
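To give a rough sense of the propagation concern, here is a back-of-the-envelope estimate of how long it takes just to send one full block to a single peer; the upload speeds are assumed values for illustration, not measurements:

Python:

# Time to transmit one full 20MB block to a single peer at various upload speeds
block_mb = 20
for mbps in (1, 10, 100):            # upload speed in megabits per second
    seconds = block_mb * 8 / mbps    # 8 bits per byte, ignoring protocol overhead
    print('%3d Mbit/s: ~%.0f s per peer' % (mbps, seconds))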

getBlocksize()

An optimal solution, until hardware far exceeds our need for an imposed maximum block size, is to have the maximum size of Bitcoin blocks scale with the network itself: every 2016 blocks, recalculate the maximum as 115% of the average number of transactions included in the previous 2016 blocks. This is similar to how we currently re-evaluate difficulty roughly every 14 days. With this solution the maximum Blocksize can go up or down based on the average number of transactions included in blocks over the previous 14 days.
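A minimal sketch of what that recalculation could look like is below. The 2016-block window and the 115% factor come from the proposal above; using the average block size in bytes (rather than a raw transaction count) as the quantity being averaged, along with the argument and variable names, are my own illustrative assumptions:

Python:

BLOCKS_PER_PERIOD = 2016   # same window as the difficulty retarget, ~14 days
SCALE_FACTOR = 1.15        # new maximum is 115% of the recent average

def getBlocksize(previous_block_sizes):
    """Return the maximum block size for the next 2016-block period.

    previous_block_sizes: sizes (in bytes) of the last 2016 blocks.
    The result can be larger or smaller than the current maximum,
    so the limit scales up or down with actual network usage.
    """
    assert len(previous_block_sizes) == BLOCKS_PER_PERIOD
    average = sum(previous_block_sizes) / float(BLOCKS_PER_PERIOD)
    return int(average * SCALE_FACTOR)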

My exact percentage of increase and block evaluation period are up for debate, but the idea is to implement a solution that scales up or down with the network. The maximum possible block size would in essence solve itself, averaging out over time to be what the network really needs.

In a scenario where an attacker tries to consistently inflate the maximum possible Blocksize, they would have to solve and create blocks fully loaded with their own transactions. Doing this is not sustainable and would cost the attacker too much to maintain that average over the last 2016 blocks. After an attack like this, the Blocksize would eventually scale back down once the attacker had to stop due to lack of funds.
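To illustrate that last point, here is a toy simulation of the 115% rule during and after such an attack; the organic demand, attack size, and number of attack periods are made-up numbers chosen only to show the shape of the behaviour:

Python:

# Toy model: maximum Blocksize under the 115% rule during and after an attack
organic_avg = 0.5    # MB of genuine transaction demand per block
attack_avg = 19.0    # MB per block while the attacker pads blocks
limit = 1.0          # starting maximum block size in MB

for period in range(8):
    under_attack = period < 3                 # attacker fills blocks for 3 periods
    demand = attack_avg if under_attack else organic_avg
    avg_block = min(demand, limit)            # blocks cannot exceed the current limit
    limit = avg_block * 1.15                  # next limit: 115% of the recent average
    print('period %d: limit -> %.2f MB' % (period, limit))

The limit grows while the attacker sustains full blocks, then falls back toward organic demand within a couple of 2016-block periods once the attack stops.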