Commit graph

71 commits

Author SHA1 Message Date
moneromooo-monero
ecbb732faa
Fix leak on real output when using a very recent output
The wallet and the daemon applied different height considerations
when selecting outputs to use. This can leak information on which
input in a ring signature is the real one.

Found and originally fixed by smooth on Aeon.
2015-10-25 16:34:57 +00:00
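
A minimal sketch of the underlying idea, assuming a shared helper (the name and the unlock-window constant are illustrative, not the actual wallet/daemon code): both sides must apply the same height cutoff when deciding which outputs may be used as ring members, otherwise the mismatch can single out the real input:

    // Illustrative only: one shared rule for "is this output old enough to use?",
    // so the wallet and the daemon cannot disagree.
    #include <cstdint>

    constexpr uint64_t UNLOCK_WINDOW = 10; // hypothetical number of confirmations

    inline bool output_is_usable(uint64_t output_height, uint64_t chain_height)
    {
        // Usable only once enough blocks have been mined on top of it.
        return output_height + UNLOCK_WINDOW <= chain_height;
    }
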
moneromooo-monero
b13e7f284b
blockchain_export can now export to a blocks.dat format
Also make the number of blocks endian independent, and add
support for testnet
2015-10-17 00:11:26 +01:00
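
A small sketch of the endian-independence point, under the assumption that counts are serialized in a fixed byte order (the function below is illustrative, not the exporter's actual code):

    // Write a 32-bit value in little-endian byte order, so the exported file
    // reads back identically on big- and little-endian hosts.
    #include <cstdint>
    #include <ostream>

    void write_u32_le(std::ostream &out, uint32_t value)
    {
        unsigned char bytes[4];
        for (int i = 0; i < 4; ++i)
            bytes[i] = static_cast<unsigned char>((value >> (8 * i)) & 0xff);
        out.write(reinterpret_cast<const char *>(bytes), sizeof(bytes));
    }
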
moneromooo-monero
ac90d488e7
from hard fork 2, all outputs must be decomposed
The wallet decomposes fully as of now too.
2015-10-11 13:02:55 +01:00
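
A minimal sketch of what "decomposed" means here (it mirrors the idea, not necessarily the exact cryptonote helper): an amount is split so each output is a single non-zero digit times a power of ten, e.g. 12340 becomes 10000 + 2000 + 300 + 40:

    #include <cstdint>
    #include <vector>

    // Split an amount into single-digit denominations (illustrative sketch).
    std::vector<uint64_t> decompose_amount(uint64_t amount)
    {
        std::vector<uint64_t> chunks;
        for (uint64_t place = 1; amount > 0; amount /= 10, place *= 10)
        {
            const uint64_t digit = amount % 10;
            if (digit != 0)
                chunks.push_back(digit * place);
        }
        return chunks;
    }
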
moneromooo-monero
90ccad1236
from hard fork 2, claim a quantized reward in coinbase
The small leftover is carried forward
2015-10-10 12:28:44 +01:00
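
A hedged sketch of the quantization rule described above (the quantum value is a placeholder, not the consensus constant): the claimable reward is rounded down to a coarser unit and the remainder stays in the emission to be claimed by later blocks:

    #include <cstdint>

    // Round the base reward down to a multiple of `quantum`; the leftover
    // (base_reward % quantum) is simply not emitted yet and carries forward.
    uint64_t quantize_reward(uint64_t base_reward, uint64_t quantum)
    {
        if (quantum == 0)
            return base_reward;
        return (base_reward / quantum) * quantum;
    }
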
moneromooo-monero
33affd2d17
blockchain: on hardfork 2, require mixin 2 at least if possible 2015-09-27 22:46:53 +01:00
moneromooo-monero
0a7421b607
hardfork: rescan speedup
Add a block height before which version 1 is assumed
Use DB transactions
2015-09-27 22:46:41 +01:00
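
A sketch of the "assume version 1 below a threshold" shortcut (the threshold and names are assumptions for illustration): during a rescan, blocks below the known all-v1 height skip the per-block DB read entirely:

    #include <cstdint>
    #include <functional>

    constexpr uint64_t ASSUME_V1_BELOW_HEIGHT = 100000; // hypothetical threshold

    uint8_t block_version_for_rescan(uint64_t height,
                                     const std::function<uint8_t(uint64_t)> &read_version_from_db)
    {
        if (height < ASSUME_V1_BELOW_HEIGHT)
            return 1;                        // below the threshold: no DB access needed
        return read_version_from_db(height); // otherwise read the stored major version
    }
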
moneromooo-monero
4bbf944df0
blockchain: on hardfork 2, allow miners to claim less money than allowed
So they can avoid dust if they so wish
2015-09-27 22:46:30 +01:00
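
A sketch of the relaxed coinbase check (names and the version constant are illustrative): before v2 the generated amount had to equal base reward plus fees exactly; from v2 it may be lower, so miners can drop dust outputs:

    #include <cstdint>

    bool coinbase_amount_ok(uint64_t claimed, uint64_t base_reward, uint64_t fees,
                            uint8_t hf_version)
    {
        const uint64_t max_allowed = base_reward + fees;
        if (hf_version >= 2)
            return claimed <= max_allowed; // claiming less than allowed is fine
        return claimed == max_allowed;     // v1 rule: exact amount required
    }
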
moneromooo-monero
198f557d38
blockchain: use different hard fork settings for testnet and mainnet 2015-09-27 22:46:13 +01:00
moneromooo-monero
5b11a89a76
hardfork: most state now saved to the DB
There will be a delay on first load of an existing blockchain
as it gets reparsed for this state data.
2015-09-20 18:42:52 +01:00
moneromooo-monero
e546f3724a
Add an RPC call and daemon command to get info on hard fork voting 2015-09-19 16:47:48 +01:00
moneromooo-monero
d06713199e
blockchain: force a hardfork recalculation at load time
The state isn't actually saved anywhere, since the archive code isn't
called in the new DB version.
2015-09-19 16:47:42 +01:00
moneromooo-monero
a7177610b3
core: add consts where appropriate 2015-09-19 16:47:35 +01:00
moneromooo-monero
8ffc508cef
core: moan when we think an update is needed to get latest hard fork info 2015-09-13 18:09:57 +01:00
moneromooo-monero
f85498422d
blockchain: use the new hardfork class 2015-09-12 11:15:53 +01:00
moneromooo-monero
813e758b62
blockchain: remove obsolete call to libc srand
crypto::rand is now used for output selection
2015-08-24 21:58:19 +01:00
roman
59cc92b388 removed some gcc warnings. mainly unused variables. 2015-08-23 17:59:24 +02:00
moneromooo-monero
378d004b37
blockchain: mark two places where the new code differs from the old
And I'd like a comment from tewinget or someone else
2015-08-15 18:46:19 +01:00
moneromooo-monero
73d42a75d4
blockchain: update cumulative size after block addition
Block addition can fail, and the old code would not update the
cumulative size in that case.
2015-08-15 18:44:56 +01:00
moneromooo-monero
4a443775e8
blockchain: remove dead code 2015-08-15 18:44:31 +01:00
moneromooo-monero
3f9089a767
blockchain: do not try to add a tx to the pool when it was not taken out
This is an unintended difference from the old code. Though I don't
think it can actually happen in practice with the current take_tx
implementation.
2015-08-15 18:42:29 +01:00
moneromooo-monero
769d5ef0e6
blockchain: fix off by 1 in timestamp median calculations
The height function apparently used to return the index of
the last block, rather than the height of the chain. Judging by the
code, that no longer seems to be the case, so we remove the now-wrong
comment, as well as a couple of +/- 1 adjustments which now cause the
median calculation to differ from the original blockchain_storage
version.
2015-08-15 12:37:23 +01:00
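
For reference, a sketch of the kind of median helper involved (illustrative, not the exact epee/cryptonote routine); the off-by-one issue was in how many timestamps get fed in and from which heights, not in the median itself:

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    // Median of the given block timestamps; averages the two middle values
    // when the count is even.
    uint64_t median_timestamp(std::vector<uint64_t> timestamps)
    {
        if (timestamps.empty())
            return 0;
        std::sort(timestamps.begin(), timestamps.end());
        const size_t mid = timestamps.size() / 2;
        if (timestamps.size() % 2)
            return timestamps[mid];
        return (timestamps[mid - 1] + timestamps[mid]) / 2;
    }
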
moneromooo-monero
35abef1b92
blockchain: remove dead code 2015-08-11 10:48:51 +01:00
moneromooo-monero
4f19e68476
blockchain: factor get_num_outputs(amount) calls
The value cannot change while we hold the blockchain lock for the
entire function, so this avoids some unnecessary DB accesses.
2015-08-09 18:14:30 +01:00
moneromooo-monero
275894cdef
blockchain: always select random outs using triangular distribution
The triangular distribution was previously only used by the older blockchain_storage.
We also move the code to the calling blockchain level, to avoid
replicating the code in every DB implementation. This also makes
the get_random_out method obsolete, and we delete it.
2015-08-09 18:07:44 +01:00
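
A hedged sketch of triangular selection (illustrative, not the actual daemon code): indices are drawn with probability increasing towards the most recent outputs by taking i = n * sqrt(u) for uniform u in [0, 1):

    #include <cmath>
    #include <cstdint>
    #include <random>

    // Pick an index in [0, num_outputs) biased towards larger (more recent) indices.
    uint64_t pick_triangular(uint64_t num_outputs, std::mt19937_64 &rng)
    {
        if (num_outputs == 0)
            return 0;
        std::uniform_real_distribution<double> uniform(0.0, 1.0);
        const double frac = std::sqrt(uniform(rng));
        uint64_t i = static_cast<uint64_t>(static_cast<double>(num_outputs) * frac);
        return i < num_outputs ? i : num_outputs - 1; // guard against edge rounding
    }
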
Riccardo Spagni
5a26676932
Merge pull request #343
e20a4dd blockchain: fix testnet syncing (to not use blocks.dat) (moneromooo-monero)
2015-07-18 22:59:02 +02:00
moneromooo-monero
e20a4ddc76
blockchain: fix testnet syncing (to not use blocks.dat)
These are mainnet blocks, and would cause syncing on testnet to
reject all incoming blocks.
2015-07-18 10:25:22 +01:00
NoodleDoodleNoodleDoodleNoodleDoodleNoo
70ae2ee711 Fixed threadpool bug when running on single core systems.
*Thanks to freshman for reporting the bug.
2015-07-17 20:02:29 -07:00
NoodleDoodleNoodleDoodleNoodleDoodleNoo
94ea3e8ed2 Removed on_idle() calls to Blockchain::store_blockchain() for lmdb.
Added option to cache tx-input verification results.
2015-07-15 23:20:25 -07:00
NoodleDoodleNoodleDoodleNoodleDoodleNoo
e5d2680094 ** CHANGES ARE EXPERIMENTAL (FOR TESTING ONLY)
Blockchain:
1. Optim: Multi-thread long-hash computation when encountering groups of blocks (a minimal threading sketch follows this entry).
2. Optim: Cache verified txs and return result from cache instead of re-checking whenever possible.
3. Optim: Preload output-keys when encountering groups of blocks. Sort by amount and global-index before bulk querying database and multi-thread when possible.
4. Optim: Disable double spend check on block verification, double spend is already detected when trying to add blocks.
5. Optim: Multi-thread signature computation whenever possible.
6. Patch: Disable locking (recursive mutex) on called functions from check_tx_inputs which causes slowdowns (only seems to happen on ubuntu/VMs??? Reason: TBD)
7. Optim: Removed looped full-tx hash computation when retrieving transactions from pool (???).
8. Optim: Cache difficulty/timestamps (735 blocks) for next-difficulty calculations so that only 2 db reads are needed when a new block arrives (instead of 1470 reads).

Berkeley-DB:
1. Fix: 32-bit data errors causing wrong output global indices and failure to send blocks to peers (etc).
2. Fix: Unable to pop blocks on reorganize due to transaction errors.
3. Patch: Large number of transaction aborts when running multi-threaded bulk queries.
4. Patch: Insufficient locks error when running full sync.
5. Patch: Incorrect db stats when returning from an immediate exit from "pop block" operation.
6. Optim: Add bulk queries to get output global indices.
7. Optim: Modified output_keys table to store public_key+unlock_time+height for single transaction lookup (vs 3)
8. Optim: Used output_keys table to retrieve public_keys instead of going through output_amounts->output_txs+output_indices->txs->output:public_key
9. Optim: Added thread-safe buffers used when multi-threading bulk queries.
10. Optim: Added support for nosync/write_nosync options for improved performance (*see --db-sync-mode option for details)
11. Mod: Added checkpoint thread and auto-remove-logs option.
12. *Now usable on 32-bit systems like RPI2.

LMDB:
1. Optim: Added custom comparison for 256-bit key tables (minor speed-up, TBD: get actual effect)
2. Optim: Modified output_keys table to store public_key+unlock_time+height for single transaction lookup (vs 3)
3. Optim: Used output_keys table to retrieve public_keys instead of going through output_amounts->output_txs+output_indices->txs->output:public_key
4. Optim: Added support for sync/writemap options for improved performance (*see --db-sync-mode option for details)
5. Mod: Auto resize to +1GB instead of multiplier x1.5

ETC:
1. Minor optimizations for slow-hash for ARM (RPI2). Incomplete.
2. Fix: 32-bit saturation bug when computing next difficulty on large blocks.

[PENDING ISSUES]
1. Berkeley db has a very slow "pop-block" operation. This is very noticeable on the RPI2 as it sometimes takes > 10 MINUTES to pop a block during reorganization.
   This does not happen very often, however; most reorgs seem to take a few seconds, but it possibly depends on the number of outputs present. TBD.
2. Berkeley db, possible bug "unable to allocate memory". TBD.

[NEW OPTIONS] (*Currently all enabled for testing purposes)
1. --fast-block-sync arg=[0:1] (default: 1)
	a. 0 = Compute long hash per block (may take a while depending on CPU)
	b. 1 = Skip long-hash and verify blocks based on embedded known good block hashes (faster, minimal CPU dependence)
2. --db-sync-mode arg=[[safe|fast|fastest]:[sync|async]:[nblocks_per_sync]] (default: fastest:async:1000)
	a. safe = fdatasync/fsync (or equivalent) per stored block. Very slow, but safest option to protect against power-out/crash conditions.
	b. fast/fastest = Enables asynchronous fdatasync/fsync (or equivalent). Useful for battery operated devices or STABLE systems with UPS and/or systems with battery backed write cache/solid state cache.
	Fast    - Write meta-data but defer data flush.
	Fastest - Defer meta-data and data flush.
	Sync    - Flush data after nblocks_per_sync and wait.
	Async   - Flush data after nblocks_per_sync but do not wait for the operation to finish.
3. --prep-blocks-threads arg=[n] (default: 4 or system max threads, whichever is lower)
        Max number of threads to use when computing long-hash in groups.
4. --show-time-stats arg=[0:1] (default: 1)
	Show benchmark related time stats.
5. --db-auto-remove-logs arg=[0:1] (default: 1)
	For berkeley-db only. Auto remove logs if enabled.

**Note: lmdb and berkeley-db have changes to the tables and are not compatible with official git head version.
	At the moment, you need a full resync to use this optimized version.

[PERFORMANCE COMPARISON]
**Some figures are approximations only.
Using a baseline machine of an i7-2600K+SSD (with full pow computation):
1. The optimized lmdb/blockchain core can process blocks up to height 585K in ~1.25 hours + download time, so it usually takes 2.5 hours to sync the full chain.
2. The current head with the in-memory blockchain can process blocks up to height 585K in ~4.2 hours + download time, so it usually takes 5.5 hours to sync the full chain.
3. The current head with lmdb can process blocks up to height 585K in ~32 hours + download time, and usually takes 36 hours to sync the full chain.

Average processing times (with full pow computation):
lmdb-optimized:
1. tx_ave = 2.5 ms / tx
2. block_ave = 5.87 ms / block
memory-official-repo:
1. tx_ave = 8.85 ms / tx
2. block_ave = 19.68 ms / block
lmdb-official-repo (0f4a036437):
1. tx_ave = 47.8 ms / tx
2. block_ave = 64.2 ms / block

**Note: The following data denotes processing times only (does not include p2p download time)
lmdb-optimized processing times (with full pow computation):
1. Desktop,  Quad-core / 8-threads 2600k  (8Mb) - 1.25 hours processing time (--db-sync-mode=fastest:async:1000).
2. Laptop,   Dual-core / 4-threads U4200  (3Mb) - 4.90 hours processing time (--db-sync-mode=fastest:async:1000).
3. Embedded, Quad-core / 4-threads Z3735F (2x1Mb) - 12.0 hours processing time (--db-sync-mode=fastest:async:1000).

lmdb-optimized processing times (with per-block-checkpoint)
1. Desktop,  Quad-core / 8-threads 2600k  (8Mb) - 10 minutes processing time (--db-sync-mode=fastest:async:1000).

berkeley-db optimized processing times (with full pow computation)
1. Desktop, Quad-core / 8-threads 2600k  (8Mb) - 1.8 hours processing time (--db-sync-mode=fastest:async:1000).
2. RPI2. Improved from an estimated 3 months (???) to 2.5 days (*Need 2A supply + Clock:1GHz + [usb+ssd] to achieve this speed) (--db-sync-mode=fastest:async:1000).

berkeley-db optimized processing times (with per-block-checkpoint)
1. RPI2. 12-15 hours (*Need 2A supply + Clock:1GHz + [usb+ssd] to achieve this speed) (--db-sync-mode=fastest:async:1000).
2015-07-15 23:20:16 -07:00
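
A minimal sketch of the multi-threaded long-hash idea from item 1 under "Blockchain" above (the hash routine is passed in as a stand-in; names are illustrative): workers stride over the batch so each block's slow hash is computed exactly once:

    #include <array>
    #include <cstdint>
    #include <functional>
    #include <thread>
    #include <vector>

    using block_blob = std::vector<uint8_t>;
    using block_hash = std::array<uint8_t, 32>;

    void compute_long_hashes(const std::vector<block_blob> &blobs,
                             std::vector<block_hash> &hashes,
                             const std::function<block_hash(const block_blob &)> &long_hash_fn,
                             unsigned num_threads)
    {
        hashes.resize(blobs.size());
        if (num_threads == 0)
            num_threads = 1;
        std::vector<std::thread> workers;
        for (unsigned t = 0; t < num_threads; ++t)
        {
            workers.emplace_back([&, t]() {
                // Each worker handles indices t, t+num_threads, t+2*num_threads, ...
                for (size_t i = t; i < blobs.size(); i += num_threads)
                    hashes[i] = long_hash_fn(blobs[i]);
            });
        }
        for (auto &w : workers)
            w.join();
    }
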
NoodleDoodleNoodleDoodleNoodleDoodleNoo
1f83444d3d Update blockchain.cpp
Fix compilation error
2015-07-15 23:20:15 -07:00
NoodleDoodleNoodleDoodleNoodleDoodleNoo
da1d3c01de
Experimental BDB workaround optimizations 2015-07-15 21:13:42 -07:00
Riccardo Spagni
e01d32e52d
cleaning up, removing redundant files, renaming, fixing incorrect licenses 2015-05-31 13:40:18 +02:00
Thomas Winget
7b14d4a17f
Steps toward multiple dbs available -- working
There will need to be some more refactoring for these changes to be
considered complete/correct, but for now it's working.

New daemon CLI argument "--db-type"; works for LMDB and BerkeleyDB.

A good deal of refactoring is also present in this commit, namely
Blockchain no longer instantiates BlockchainDB, but rather is passed a
pointer to an already-instantiated BlockchainDB on init().
2015-03-25 12:09:44 -04:00
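
A sketch of the shape this takes (class names here are stand-ins and the string values are illustrative, not the exact code): a factory maps the --db-type string to a backend, and Blockchain::init() receives the already-constructed BlockchainDB pointer instead of building one itself:

    #include <memory>
    #include <stdexcept>
    #include <string>

    // Illustrative stand-ins for the real classes.
    struct BlockchainDB { virtual ~BlockchainDB() = default; };
    struct BlockchainLMDB : BlockchainDB {};
    struct BlockchainBDB  : BlockchainDB {};

    std::unique_ptr<BlockchainDB> make_db(const std::string &db_type)
    {
        if (db_type == "lmdb")
            return std::unique_ptr<BlockchainDB>(new BlockchainLMDB());
        if (db_type == "berkeley")
            return std::unique_ptr<BlockchainDB>(new BlockchainBDB());
        throw std::runtime_error("unknown --db-type: " + db_type);
    }
    // The daemon would then hand the pointer to Blockchain on init().
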
warptangent
275cbd4348
Add support for database open with flags
Add support to:
  - BlockchainDB, BlockchainLMDB
  - blockchain_import utility to open LMDB database with one or more
    LMDB flags.

Sample use:
  $ blockchain_import --database lmdb#nosync
  $ blockchain_import --database lmdb#nosync,nometasync
2015-03-16 00:26:59 -07:00
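
A sketch of parsing the "lmdb#flag1,flag2" argument syntax shown in the samples above (parsing only; mapping flag names to actual LMDB open flags is left out):

    #include <sstream>
    #include <string>
    #include <vector>

    // Split "lmdb#nosync,nometasync" into a db type and a list of flag names.
    void parse_db_arg(const std::string &arg, std::string &db_type,
                      std::vector<std::string> &flag_names)
    {
        const std::string::size_type hash = arg.find('#');
        db_type = arg.substr(0, hash);
        if (hash == std::string::npos)
            return;
        std::istringstream flags(arg.substr(hash + 1));
        std::string name;
        while (std::getline(flags, name, ','))
            if (!name.empty())
                flag_names.push_back(name);
    }
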
Thomas Winget
eee3ee7073
BlockchainDB implementations have names now
In order to make things more general, BlockchainDB now has get_db_name()
which should return a string with the "name" of that type of db.
This "name" will be the subfolder name that holds that db type's files
within the monero folder.

Small bugfix: blockchain_converter was not correctly appending this in
the prior hard-coded-string implementation of the subfolder data
directory concept.
2015-03-13 21:39:27 -04:00
Thomas Winget
5eab480cb1
Moved BlockchainDB into its own src/ subfolder
Ostensibly janitorial work, but should be more relevant later down the
line. Things that depend on core cryptonote things (i.e. cryptonote_core)
don't necessarily depend on BlockchainDB and thus have no need to have
BlockchainDB baked in with them.
2015-03-06 15:20:45 -05:00
warptangent
ce71abd0fe
Move LMDB storage to subfolder 2015-02-23 00:33:37 -08:00
warptangent
59305d3137
Blockchain: match original function declaration from blockchain_storage 2015-02-23 00:33:35 -08:00
warptangent
b88ab643ca
Fix Blockchain::get_tail_id() to set parameter to last block number instead of height
This reflects the behavior of blockchain_storage::get_tail_id().

Fixes #27 so that RPC method getlastblockheader works.
2015-02-22 10:41:41 -08:00
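
The height-versus-index convention behind this and several later fixes, as a tiny sketch (illustrative): a chain of height H stores blocks at indices 0 through H-1, so the tail is H-1, not H:

    #include <cstdint>

    // Index of the top block for a chain with `height` blocks.
    // Callers are expected to handle the empty-chain case separately.
    uint64_t top_block_index(uint64_t height)
    {
        return height > 0 ? height - 1 : 0;
    }
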
warptangent
8bd1983cdc
Blockchain: reflect log updates from blockchain_storage
See commit 4ba680f294
2015-02-01 19:30:20 -08:00
warptangent
7f9b070165
Blockchain: reflect log and assert updates from blockchain_storage
See commit cf5a8b1d6c
2015-02-01 19:30:14 -08:00
warptangent
70342ecada
Blockchain: reflect log level of blockchain_storage
Update to match LOG_PRINT_RED_Lx statements.
See commit cf5a8b1d6c
2015-02-01 19:29:18 -08:00
warptangent
c8d27fb38d
Blockchain: reflect assert behavior of blockchain_storage for get_tx_outputs_gindexs() 2015-02-01 19:29:03 -08:00
warptangent
d00ee784db
Update recently added log statement to fix possible null dereference
This would have been triggered if the function was called without the
fourth parameter and the ring signature check failed.
2015-02-01 19:28:58 -08:00
Thomas Winget
acd4c369e4
Should fix std::min issues related to size_t 2015-01-19 17:39:38 -05:00
warptangent
800d9b9247
Remove code previously made unused and marked unused 2015-01-14 13:41:57 -08:00
warptangent
0840c2fd7e
Fix height assertion in Blockchain::handle_alternative_block()
It expects the total number of blocks in the main chain, not the last block
index (an off-by-one error).

This again behaves like the height assertion done in the original
implementation, blockchain_storage::handle_alternative_block().

This allows a reorganization to proceed after an alternative block has
been added.
2015-01-11 21:23:02 -08:00
warptangent
63051bea1c
Fix comparison between main and alternate chain's cumulative
difficulty.

This fixes the continual reorganization between a main and alternate
chain, using the same two latest blocks from each.

The check that the cumulative difficulty of the alternate chain is bigger
than the main chain's was not using the main chain's last block, but was
incorrectly using the passed-in block's previous block.
main_chain_cumulative_difficulty was being used in two different ways; this
has been split up to keep its use consistent.
2015-01-11 21:23:02 -08:00
warptangent
909ea81067
Remove a have_block() check so alternate block can be processed
Remove have_block() check from Blockchain::handle_block_to_main_chain().
Add logging to have_block().

This allows blockchain reorganization to proceed further.

The have_block() check here causes an error after a blockchain reorganization
begins: "Attempting to add block to main chain, but it's already either there
or in an alternate chain."

While reorganizing to become the main chain, a block in the
alternative chain would be refused due to have_block() rightfully
finding it in the alternative chain. The reorganization would end in
rollback, restoring the previous blockchain.

The original implementation didn't call it here, and it doesn't appear
necessary to call it from here in this implementation either. When needed,
it appears to be called prior to handle_block_to_main_chain().
2015-01-11 21:23:02 -08:00
warptangent
1701c26750
Use block index when obtaining block's difficulty for log statement
Use the last block index, not the number of blocks (off-by-one error).

Fixes error at start of blockchain reorganization: "Attempt to get
cumulative difficulty from height <XXXXXX> failed -- difficulty not in
db"
2015-01-11 19:57:46 -08:00