Commit graph

128 commits

Author SHA1 Message Date
warptangent
525bf5811f
Fix estimation of batch storage size when no blocks exist
If there are no blocks in the database (m_height == 0):
  Don't assign incorrect block range to check.
  Skip average block size check.

Test:

Run blockchain_converter with an existing source blockchain.bin and
a non-existent LMDB destination database.

The converter creates a BlockchainLMDB instance with zero height, due to
not being initialized with a genesis block, normally done by
Blockchain::init(). While different from the behavior of bitmonerod,
blockchain_import, and blockchain_export, the initialization hasn't been
strictly necessary.

The db batch size estimation normally uses an average block size, or a
default minimum block size, whichever is greater. In this case, as
there are no existing blocks to check for an average block size, the
default should be used.
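
A minimal sketch of the fixed estimator; the helper and constant names are
hypothetical (only m_height, the 4k default, and the 500-block window come
from these commits):

    #include <algorithm>
    #include <cstdint>

    // Hypothetical stand-in for a DB query averaging block sizes over
    // [start, end); the real code reads block sizes from the database.
    uint64_t average_block_size_in_range(uint64_t start, uint64_t end);

    uint64_t estimate_batch_bytes(uint64_t m_height, uint64_t batch_num_blocks)
    {
        const uint64_t min_block_size = 4 * 1024; // default minimum block size

        // The fix: with no blocks in the DB (m_height == 0) there is no
        // valid block range to average over, so skip the average block
        // size check and use the default.
        if (m_height == 0)
            return min_block_size * batch_num_blocks;

        const uint64_t start = m_height >= 500 ? m_height - 500 : 0;
        const uint64_t avg = average_block_size_in_range(start, m_height);
        return std::max(avg, min_block_size) * batch_num_blocks;
    }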
2015-08-04 17:11:30 -07:00
warptangent
71793ef43f Add batch support to BlockchainLMDB::get_output_key
This allows blockchain_import to work with batch and verify modes enabled
(the default).
2015-07-16 00:36:26 -07:00
NoodleDoodleNoodleDoodleNoodleDoodleNoo
94ea3e8ed2 Removed on_idle() calls to Blockchain::store_blockchain() for lmdb.
Added option to cache tx-input verification results.
2015-07-15 23:20:25 -07:00
NoodleDoodleNoodleDoodleNoodleDoodleNoo
2e293a563e Fixed binary size issue due to embedded checkpoint data.
Fixed OSX compilation issues due to random lmdb resize points.
Fixed infinite loop bug when calling core::get_block_template(..).
2015-07-15 23:20:20 -07:00
NoodleDoodleNoodleDoodleNoodleDoodleNoo
e5d2680094 ** CHANGES ARE EXPERIMENTAL (FOR TESTING ONLY)
Blockchain:
1. Optim: Multi-thread long-hash computation when encountering groups of blocks (see the sketch after this list).
2. Optim: Cache verified txs and return result from cache instead of re-checking whenever possible.
3. Optim: Preload output-keys when encountering groups of blocks. Sort by amount and global-index before bulk querying the database and multi-thread when possible.
4. Optim: Disable double spend check on block verification; double spends are already detected when trying to add blocks.
5. Optim: Multi-thread signature computation whenever possible.
6. Patch: Disable locking (recursive mutex) on called functions from check_tx_inputs, which causes slowdowns (only seems to happen on Ubuntu/VMs??? Reason: TBD)
7. Optim: Removed looped full-tx hash computation when retrieving transactions from pool (???).
8. Optim: Cache difficulty/timestamps (735 blocks) for next-difficulty calculations so that only 2 db reads per new block are needed when a new block arrives (instead of 1470 reads).
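
A hedged sketch of item 1, the grouped long-hash computation; the block and
hash types and get_block_longhash() here are stand-ins, not the real
cryptonote definitions:

    #include <cstddef>
    #include <cstdint>
    #include <thread>
    #include <vector>

    struct block {};                            // stand-in block type
    struct hash256 { unsigned char data[32]; }; // stand-in hash type
    hash256 get_block_longhash(const block& b, uint64_t height); // assumed helper

    // Compute each block's long (PoW) hash in parallel: thread t handles
    // blocks t, t + n_threads, t + 2*n_threads, ...
    std::vector<hash256> longhashes_parallel(const std::vector<block>& blocks,
                                             uint64_t first_height,
                                             unsigned n_threads)
    {
        std::vector<hash256> out(blocks.size());
        std::vector<std::thread> workers;
        for (unsigned t = 0; t < n_threads; ++t)
            workers.emplace_back([&, t] {
                for (std::size_t i = t; i < blocks.size(); i += n_threads)
                    out[i] = get_block_longhash(blocks[i], first_height + i);
            });
        for (auto& w : workers)
            w.join();
        return out;
    }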

Berkeley-DB:
1. Fix: 32-bit data errors causing wrong output global indices and failure to send blocks to peers (etc).
2. Fix: Unable to pop blocks on reorganize due to transaction errors.
3. Patch: Large number of transaction aborts when running multi-threaded bulk queries.
4. Patch: Insufficient locks error when running full sync.
5. Patch: Incorrect db stats when returning from an immediate exit from "pop block" operation.
6. Optim: Add bulk queries to get output global indices.
7. Optim: Modified output_keys table to store public_key+unlock_time+height for single-transaction lookup (vs 3) (see the sketch after this list)
8. Optim: Used output_keys table to retrieve public_keys instead of going through output_amounts->output_txs+output_indices->txs->output:public_key
9. Optim: Added thread-safe buffers used when multi-threading bulk queries.
10. Optim: Added support for nosync/write_nosync options for improved performance (*see --db-sync-mode option for details)
11. Mod: Added checkpoint thread and auto-remove-logs option.
12. *Now usable on 32-bit systems like RPI2.
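
For items 7/8, a sketch of the consolidated record; field names are
illustrative, chosen to match the commit text:

    #include <cstdint>

    // One output_keys record now answers a lookup that previously needed
    // three hops (output_amounts -> output_txs+output_indices -> txs).
    #pragma pack(push, 1)
    struct output_key_record
    {
        unsigned char pubkey[32]; // output public key
        uint64_t unlock_time;     // when the output becomes spendable
        uint64_t height;          // height of the block holding the output
    };
    #pragma pack(pop)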

LMDB:
1. Optim: Added custom comparison for 256-bit key tables (minor speed-up, TBD: get actual effect; see the sketch after this list)
2. Optim: Modified output_keys table to store public_key+unlock_time+height for single-transaction lookup (vs 3)
3. Optim: Used output_keys table to retrieve public_keys instead of going through output_amounts->output_txs+output_indices->txs->output:public_key
4. Optim: Added support for sync/writemap options for improved performance (*see --db-sync-mode option for details)
5. Mod: Auto resize by +1GB instead of multiplier x1.5
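
A sketch of LMDB item 1, the fixed-width 256-bit key comparator, using the
real mdb_set_compare API (the call site is illustrative):

    #include <cstring>
    #include <lmdb.h>

    // Keys in hash-keyed tables are exactly 32 bytes, so a straight
    // memcmp beats LMDB's generic length-aware comparison.
    static int compare_hash32(const MDB_val* a, const MDB_val* b)
    {
        return std::memcmp(a->mv_data, b->mv_data, 32);
    }

    // Registered once per dbi, inside an open transaction:
    //   mdb_set_compare(txn, dbi, compare_hash32);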

ETC:
1. Minor optimizations for slow-hash for ARM (RPI2). Incomplete.
2. Fix: 32-bit saturation bug when computing next difficulty on large blocks.

[PENDING ISSUES]
1. Berkely db has a very slow "pop-block" operation. This is very noticeable on the RPI2 as it sometimes takes > 10 MINUTES to pop a block during reorganization.
   This does not happen very often however, most reorgs seem to take a few seconds but it possibly depends on the number of outputs present. TBD.
2. Berkeley db, possible bug "unable to allocate memory". TBD.

[NEW OPTIONS] (*Currently all enabled for testing purposes)
1. --fast-block-sync arg=[0:1] (default: 1)
	a. 0 = Compute long hash per block (may take a while depending on CPU)
	b. 1 = Skip long-hash and verify blocks based on embedded known good block hashes (faster, minimal CPU dependence)
2. --db-sync-mode arg=[[safe|fast|fastest]:[sync|async]:[nblocks_per_sync]] (default: fastest:async:1000) (see the flag sketch after this list)
	a. safe = fdatasync/fsync (or equivalent) per stored block. Very slow, but the safest option to protect against power-out/crash conditions.
	b. fast/fastest = Enables asynchronous fdatasync/fsync (or equivalent). Useful for battery-operated devices or STABLE systems with UPS and/or systems with battery-backed write cache/solid-state cache.
	Fast    - Write meta-data but defer data flush.
	Fastest - Defer meta-data and data flush.
	Sync    - Flush data after nblocks_per_sync and wait.
	Async   - Flush data after nblocks_per_sync but do not wait for the operation to finish.
3. --prep-blocks-threads arg=[n] (default: 4 or system max threads, whichever is lower)
	Max number of threads to use when computing long-hash in groups.
4. --show-time-stats arg=[0:1] (default: 1)
	Show benchmark related time stats.
5. --db-auto-remove-logs arg=[0:1] (default: 1)
	For berkeley-db only. Auto remove logs if enabled.
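
One plausible mapping from the --db-sync-mode presets to LMDB environment
flags; the flags are real LMDB flags, but this exact composition is an
assumption, not the shipped parser:

    #include <stdexcept>
    #include <string>
    #include <lmdb.h>

    unsigned int lmdb_flags_for_sync_mode(const std::string& mode)
    {
        if (mode == "safe")    return 0;                           // fsync per commit
        if (mode == "fast")    return MDB_NOMETASYNC;              // skip meta-page fsync
        if (mode == "fastest") return MDB_WRITEMAP | MDB_MAPASYNC; // async msync via writemap
        throw std::invalid_argument("unknown --db-sync-mode: " + mode);
    }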

**Note: lmdb and berkeley-db have changes to the tables and are not compatible with the official git HEAD version.
	At the moment, you need a full resync to use this optimized version.

[PERFORMANCE COMPARISON]
**Some figures are approximations only.
Using a baseline machine of an i7-2600K + SSD (with full PoW computation):
1. The optimized lmdb/blockchain core can process blocks up to 585K in ~1.25 hours + download time, so it usually takes 2.5 hours to sync the full chain.
2. The current head with memory can process blocks up to 585K in ~4.2 hours + download time, so it usually takes 5.5 hours to sync the full chain.
3. The current head with lmdb can process blocks up to 585K in ~32 hours + download time and usually takes 36 hours to sync the full chain.

Average processing times (with full PoW computation):
lmdb-optimized:
1. tx_ave = 2.5 ms / tx
2. block_ave = 5.87 ms / block
memory-official-repo:
1. tx_ave = 8.85 ms / tx
2. block_ave = 19.68 ms / block
lmdb-official-repo (0f4a036437):
1. tx_ave = 47.8 ms / tx
2. block_ave = 64.2 ms / block

**Note: The following data denotes processing times only (does not include p2p download time).
lmdb-optimized processing times (with full PoW computation):
1. Desktop,  Quad-core / 8-threads 2600k  (8Mb) - 1.25 hours processing time (--db-sync-mode=fastest:async:1000).
2. Laptop,   Dual-core / 4-threads U4200  (3Mb) - 4.90 hours processing time (--db-sync-mode=fastest:async:1000).
3. Embedded, Quad-core / 4-threads Z3735F (2x1Mb) - 12.0 hours processing time (--db-sync-mode=fastest:async:1000).

lmdb-optimized processing times (with per-block checkpoint):
1. Desktop,  Quad-core / 8-threads 2600k  (8Mb) - 10 minutes processing time (--db-sync-mode=fastest:async:1000).

berkeley-db optimized processing times (with full PoW computation):
1. Desktop, Quad-core / 8-threads 2600k  (8Mb) - 1.8 hours processing time (--db-sync-mode=fastest:async:1000).
2. RPI2. Improved from an estimated 3 months (???) to 2.5 days (*needs a 2A supply + 1GHz clock + [usb+ssd] to achieve this speed) (--db-sync-mode=fastest:async:1000).

berkeley-db optimized processing times (with per-block checkpoint):
1. RPI2. 12-15 hours (*needs a 2A supply + 1GHz clock + [usb+ssd] to achieve this speed) (--db-sync-mode=fastest:async:1000).
2015-07-15 23:20:16 -07:00
NoodleDoodleNoodleDoodleNoodleDoodleNoo
da1d3c01de
Experimental BDB workaround optimizations 2015-07-15 21:13:42 -07:00
warptangent
fd73d9cc3a
Check and resize if needed at batch transaction start
This currently only affects blockchain_import and blockchain_converter.

When the number of blocks expected for the batch transaction is
provided, make an estimate of the DB space needed. If not enough free
space remains, resize the DB.

The estimate is made based on:
- the average size of the last 500 blocks, or if larger, a min. block
  size of 4k
- a factor for the expanded size a block occupies in the DB across the
  sub-dbs/tables
- a safety factor (1.7) to allow for a "reasonable" average block size
  increase over the batch

Increase the DB size by whichever is greater: the estimated size needed
or a minimum increase size, currently 128 MB.
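
A sketch of the estimate under the stated constants; the sub-db expansion
factor value and the averaging helper are assumptions:

    #include <algorithm>
    #include <cstdint>

    uint64_t average_block_size_last_500(); // assumed DB query

    uint64_t batch_bytes_needed(uint64_t batch_num_blocks)
    {
        const uint64_t min_block_size   = 4 * 1024; // 4k floor, per the commit
        const double   db_expand_factor = 4.5;      // hypothetical sub-db expansion factor
        const double   safety_factor    = 1.7;      // per the commit

        const uint64_t avg = std::max(average_block_size_last_500(), min_block_size);
        return static_cast<uint64_t>(avg * batch_num_blocks * db_expand_factor * safety_factor);
    }

    // If free space is short, grow by the larger of this estimate and the
    // 128 MB minimum: std::max(batch_bytes_needed(n), uint64_t(128) << 20).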

The conservative factors in the estimate help in testing that the resize
occurs when needed, and without gratuitous size increases. For common
use, the safety factor and minimum increase size could reasonably be
increased.

For testing, setting DEFAULT_MAPSIZE (blockchain_db/lmdb/db_lmdb.h) to 1
<< 27 (128 MB) and recompiling will ensure DB resizes take place sooner
and more frequently.
2015-07-12 00:51:39 -07:00
warptangent
6e170c8b78
Optionally allow DB to know expected number of blocks at batch transaction start
This will assist in a DB resize check.
2015-07-11 23:54:12 -07:00
Riccardo Spagni
6aee052001
Merge pull request #297
5680604 Replace hardcoded value with existing constant of same value (warptangent)
f37ee2f Update database resize behavior (warptangent)
f85cd8e Include database error in more error messages (warptangent)
2015-05-30 22:32:23 +02:00
warptangent
5680604437
Replace hardcoded value with existing constant of same value
This was likely the intent.
2015-05-30 09:27:54 -07:00
warptangent
f37ee2f304
Update database resize behavior
On an existing database, don't set LMDB map size to be the initial size
for a new database.

Check if resize is needed at startup.
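
A sketch of such a startup check using real LMDB calls; the 25%-free
threshold is illustrative:

    #include <cstdint>
    #include <lmdb.h>

    bool need_resize(MDB_env* env)
    {
        MDB_envinfo ei;
        MDB_stat st;
        mdb_env_info(env, &ei);
        mdb_env_stat(env, &st);

        // Pages in use vs. the configured map size.
        const uint64_t size_used = uint64_t(st.ms_psize) * (ei.me_last_pgno + 1);
        return size_used > ei.me_mapsize - ei.me_mapsize / 4; // < 25% free
    }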
2015-05-30 09:27:49 -07:00
warptangent
f85cd8e10b
Include database error in more error messages 2015-05-30 09:27:44 -07:00
moneromooo-monero
8069b3ba7f
blockchain_db: add a few const 2015-05-27 19:16:37 +01:00
Riccardo Spagni
634e367ff5
Merge pull request #289
01076ae Check if LMDB needs resize every 1000 blocks (Thomas Winget)
b0d849e null out batch txn pointer as needed (BlockchainLMDB) (Thomas Winget)
7b7ef73 LMDB should now dynamically resize the mapsize (Thomas Winget)
ac79502 Move mdb_txn_safe implementation to cpp file (Thomas Winget)
2015-05-26 10:44:48 +02:00
Thomas Winget
01076ae700
Check if LMDB needs resize every 1000 blocks
(this was 10 for testing purposes)
2015-05-18 06:18:31 -04:00
Thomas Winget
b0d849e0a4
null out batch txn pointer as needed (BlockchainLMDB) 2015-05-18 06:12:54 -04:00
Thomas Winget
7b7ef73c15
LMDB should now dynamically resize the mapsize
Some filesystems (*cough* NTFS *cough*) aren't good with sparse files,
so this makes LMDB dynamically resize its mapsize as needed.  Note: the
check interval is currently every 10 blocks (for testing) and will
probably need to change to 1000 or something.  Default mapsize set to
1GiB.

Blockchain conversion tools using batching will probably segfault, I'll
fix that in the next commit.
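
A sketch of the grow step; mdb_env_set_mapsize is the real LMDB call, and
its requirement that no transactions be active is exactly why tools holding
a long-lived batch txn can crash here:

    #include <cstddef>
    #include <lmdb.h>

    void grow_map(MDB_env* env, std::size_t add_bytes)
    {
        MDB_envinfo ei;
        mdb_env_info(env, &ei);
        // All transactions must be finished before resizing the map.
        mdb_env_set_mapsize(env, ei.me_mapsize + add_bytes);
    }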
2015-05-16 22:05:54 -04:00
warptangent
d35bffb950
Allow BlockchainLMDB to be opened in read-only mode
Have blockchain_export use read-only mode when source is BlockchainLMDB.
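
A sketch of the read-only open using the real MDB_RDONLY flag; error
handling is elided and the wrapper is illustrative:

    #include <lmdb.h>

    MDB_env* open_readonly(const char* path)
    {
        MDB_env* env = nullptr;
        mdb_env_create(&env);
        mdb_env_open(env, path, MDB_RDONLY, 0664); // no write access needed
        return env;
    }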
2015-05-16 01:34:58 -07:00
Thomas Winget
ac79502308
Move mdb_txn_safe implementation to cpp file 2015-05-15 20:42:47 -04:00
warptangent
71af04669c
Update log statements
Use filesystem path conversion to string() instead of c_str().
Windows may otherwise output an address.
2015-05-08 14:12:06 -07:00
Thomas Winget
7b14d4a17f
Steps toward multiple dbs available -- working
There will need to be some more refactoring for these changes to be
considered complete/correct, but for now it's working.

New daemon CLI argument "--db-type" works for LMDB and BerkeleyDB.

A good deal of refactoring is also present in this commit, namely
Blockchain no longer instantiates BlockchainDB, but rather is passed a
pointer to an already-instantiated BlockchainDB on init().
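
The shape this refactor implies, as a hedged sketch (class names match the
commit; the factory and type strings are illustrative):

    #include <memory>
    #include <stdexcept>
    #include <string>

    struct BlockchainDB { virtual ~BlockchainDB() = default; };
    struct BlockchainLMDB : BlockchainDB {};
    struct BlockchainBDB  : BlockchainDB {};

    // The daemon picks a backend from --db-type and hands the instance
    // to Blockchain::init() instead of Blockchain constructing its own.
    std::unique_ptr<BlockchainDB> make_db(const std::string& db_type)
    {
        if (db_type == "lmdb")     return std::make_unique<BlockchainLMDB>();
        if (db_type == "berkeley") return std::make_unique<BlockchainBDB>();
        throw std::invalid_argument("unknown --db-type: " + db_type);
    }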
2015-03-25 12:09:44 -04:00
Thomas Winget
8e3347f310
Pull blockchain changes into berkeleydb branch 2015-03-17 19:52:53 -04:00
Thomas Winget
5112dc37d7
Try to not pollute cryptonote namespace 2015-03-16 04:17:44 -04:00
warptangent
275cbd4348
Add support for database open with flags
Add support to:
  - BlockchainDB, BlockchainLMDB
  - the blockchain_import utility
to open an LMDB database with one or more LMDB flags.

Sample use:
  $ blockchain_import --database lmdb#nosync
  $ blockchain_import --database lmdb#nosync,nometasync
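
A sketch of how the part after '#' might be parsed into LMDB flags (the
flags are real; the parsing code is illustrative, not the shipped parser):

    #include <sstream>
    #include <string>
    #include <lmdb.h>

    unsigned int parse_db_flags(const std::string& arg) // e.g. "lmdb#nosync,nometasync"
    {
        unsigned int flags = 0;
        const std::string::size_type hash = arg.find('#');
        if (hash == std::string::npos)
            return flags; // no flags requested

        std::istringstream ss(arg.substr(hash + 1));
        std::string f;
        while (std::getline(ss, f, ','))
        {
            if (f == "nosync")          flags |= MDB_NOSYNC;
            else if (f == "nometasync") flags |= MDB_NOMETASYNC;
        }
        return flags;
    }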
2015-03-16 00:26:59 -07:00
warptangent
cb862cb81a
Add mdb_flags variable to LMDB database open 2015-03-16 00:26:58 -07:00
warptangent
acb5d291b8
Update and relocate comment that applies class wide 2015-03-15 13:22:32 -07:00
Thomas Winget
eee3ee7073
BlockchainDB implementations have names now
In order to make things more general, BlockchainDB now has get_db_name()
which should return a string with the "name" of that type of db.
This "name" will be the subfolder name that holds that db type's files
within the monero folder.
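
A minimal sketch of the interface as described (the "lmdb" name is an
assumption for illustration):

    #include <string>

    struct BlockchainDB
    {
        virtual ~BlockchainDB() = default;
        // Returns the subfolder name for this backend's files
        // within the monero data directory.
        virtual std::string get_db_name() const = 0;
    };

    struct BlockchainLMDB : BlockchainDB
    {
        std::string get_db_name() const override { return "lmdb"; }
    };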

Small bugfix: blockchain_converter was not correctly appending this in
the prior hard-coded-string implementation of the subfolder data
directory concept.
2015-03-13 21:39:27 -04:00
Thomas Winget
5eab480cb1
Moved BlockchainDB into its own src/ subfolder
Ostensibly janitorial work, but should be more relevant later down the
line. Things that depend on core cryptonote things (i.e. cryptonote_core)
don't necessarily depend on BlockchainDB and thus have no need to have
BlockchainDB baked in with them.
2015-03-06 15:20:45 -05:00