Commit graph

56 commits

moneromooo-monero
cca95c1c7a
blockchain_db: do not throw on expected partial results getting keys
When scanning for outputs used in a set of incoming blocks,
we expect that some of the inputs in their transactions will
not be found in the blockchain, as they could be in previous
blocks in that set. Those outputs will be scanned there at
a later point. In this case, we add a flag to control whether
an output not being found is expected or not.
2017-02-13 19:05:30 +00:00
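A minimal sketch of the flag described above, assuming a hypothetical lookup interface (the function name, the stand-in output table and the allow_partial_results parameter are illustrative, not the actual BlockchainDB API):

```cpp
#include <cstdint>
#include <map>
#include <stdexcept>
#include <utility>
#include <vector>

struct output_data_t { uint64_t unlock_time = 0; /* pubkey, commitment, ... */ };

// Stand-in for the on-disk outputs table, keyed by (amount, output index).
using output_table = std::map<std::pair<uint64_t, uint64_t>, output_data_t>;

void get_output_keys(const output_table &db, uint64_t amount,
                     const std::vector<uint64_t> &indices,
                     std::vector<output_data_t> &outputs,
                     bool allow_partial_results)
{
  outputs.clear();
  for (uint64_t idx : indices)
  {
    auto it = db.find({amount, idx});
    if (it != db.end())
      outputs.push_back(it->second);
    else if (allow_partial_results)
      continue;  // expected: the output may be created by an earlier block of the same batch
    else
      throw std::runtime_error("output not found");  // a genuine inconsistency
  }
}
```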
Riccardo Spagni
65e33b1bc5
Merge pull request #1506
3ff54bdd Check for correct thread before ending batch transaction (Howard Chu)
eaf8470b Must wait for previous batch to finish before starting new one (Howard Chu)
c903c554 Don't cache block height, always get from DB (Howard Chu)
eb1fb601 Tweak default db-sync-mode to fast:async:1 (Howard Chu)
0693cff9 Use batch transactions when syncing (Howard Chu)
2017-01-15 14:43:12 -05:00
Howard Chu
c903c5541e
Don't cache block height, always get from DB 2017-01-14 22:43:06 +00:00
Howard Chu
0693cff925
Use batch transactions when syncing
Faster throughput while avoiding corruption, i.e., it makes
running with --db-sync-mode=safe more tolerable.
2017-01-14 22:43:06 +00:00
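A minimal sketch of the batching pattern, assuming hypothetical batch_start/batch_stop/add_block hooks (illustrative, not the exact Blockchain/BlockchainDB signatures):

```cpp
#include <cstddef>
#include <vector>

struct block { /* header, miner tx, tx hashes, ... */ };

struct batching_db
{
  void batch_start() { /* begin one LMDB write txn covering many blocks */ }
  void batch_stop()  { /* commit the write txn and (optionally) sync */ }
  void add_block(const block &) { /* write block, txs, outputs, ... */ }
};

void sync_blocks(batching_db &db, const std::vector<block> &incoming, size_t blocks_per_batch)
{
  size_t in_batch = 0;
  db.batch_start();
  for (const block &b : incoming)
  {
    db.add_block(b);
    if (++in_batch >= blocks_per_batch)
    {
      db.batch_stop();   // one commit per batch instead of one per block
      db.batch_start();
      in_batch = 0;
    }
  }
  db.batch_stop();
}
```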
moneromooo-monero
0478ac6848
blockchain: allow marking "tx not found" without an exception
This is a normal occurrence in many cases, and there is no need
to spam the log when it is.
2017-01-07 20:52:17 +00:00
moneromooo-monero
88faec75fe
wallet: select part of the fake outs from recent outputs
25% of the outputs are selected from the last 5 days (if possible),
in order to avoid the common case of sending recently received
outputs again. The 25% and 5 days parameters are subject to review later, since
it's just a wallet-level change.
2016-10-15 18:17:16 +01:00
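A rough sketch of the recent-outputs split under the stated parameters (25% of fake outs, 5-day window); the constant names and the 120-second block target are assumptions for illustration, not the wallet2 code:

```cpp
#include <cstddef>
#include <cstdint>

constexpr uint64_t BLOCK_TIME_SECONDS   = 120;        // assumed block target
constexpr uint64_t RECENT_OUTPUT_WINDOW = 5 * 86400;  // 5 days, in seconds
constexpr double   RECENT_OUTPUT_RATIO  = 0.25;       // 25% of the fake outs

// How many of the requested decoys should come from the recent window.
size_t recent_fake_outs(size_t fake_outputs_count)
{
  return static_cast<size_t>(fake_outputs_count * RECENT_OUTPUT_RATIO + 0.5);
}

// First height of the recent zone, i.e. roughly 5 days' worth of blocks back.
uint64_t recent_zone_start_height(uint64_t current_height)
{
  const uint64_t blocks_in_window = RECENT_OUTPUT_WINDOW / BLOCK_TIME_SECONDS;
  return current_height > blocks_in_window ? current_height - blocks_in_window : 0;
}
```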
moneromooo-monero
6cf8ca2a7f
core: faster find_blockchain_supplement
Since this queries block heights for blocks that may or may not
exist, queries for non-existent blocks would throw an exception,
and that would slow down the loop a lot: 7 seconds to go through
a 30-hash list.

Fix this by adding an optional return block height to block_exists
and using this instead. Actual errors will still throw an
exception.

This also cuts down on log exception spam.
2016-08-31 10:03:32 +01:00
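A minimal sketch of the non-throwing existence check with an optional height out-parameter; the signature and the stand-in height table are illustrative, not the actual BlockchainDB method:

```cpp
#include <array>
#include <cstdint>
#include <map>

using block_hash = std::array<unsigned char, 32>;
using height_table = std::map<block_hash, uint64_t>;  // stand-in for the block heights table

// Returns whether the block exists; when it does and `height` is non-null, the
// height is reported too, so the caller avoids a second (throwing) DB query.
bool block_exists(const height_table &heights, const block_hash &h, uint64_t *height = nullptr)
{
  auto it = heights.find(h);
  if (it == heights.end())
    return false;
  if (height)
    *height = it->second;
  return true;
}
```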
moneromooo-monero
59a66e209a
move the rct commitments to the output_amounts database
Since these are needed at the same time as the output pubkeys,
this is a whole lot faster, and takes less space. Only outputs
of 0 amount store the commitment. When reading other outputs,
a fake commitment is regenerated on the fly. This avoids having
to rewrite the database to add space for fake commitments for
existing outputs.

This code relies on two things:

- LMDB must support fixed-size records per key, rather than
per database (i.e., all records on key 0 are the same size, all
records for non-0 keys are the same size, but records for key 0
and non-0 keys do have different sizes).

- the commitment must be directly after the rest of the data
in outkey and output_data_t.
2016-08-28 21:29:02 +01:00
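A sketch of the two record shapes and the read path this implies; the struct names, layouts and the commitment stub are illustrative only (the real code derives a Pedersen commitment to the cleartext amount with a fixed, publicly known mask):

```cpp
#include <cstdint>
#include <cstring>

struct rct_key { unsigned char bytes[32]; };

#pragma pack(push, 1)
struct pre_rct_outkey {      // records under non-0 amount keys: no commitment stored
  uint64_t amount_index;
  uint64_t output_id;
  unsigned char pubkey[32];
  uint64_t unlock_time;
  uint64_t height;
};
struct outkey {              // records under the 0-amount key: commitment appended at the end
  pre_rct_outkey data;
  rct_key commitment;        // must sit directly after the rest, as the commit notes
};
#pragma pack(pop)

// Stand-in only: the real implementation computes a Pedersen commitment to `amount`.
static rct_key fake_commitment_for(uint64_t amount)
{
  rct_key k{};
  std::memcpy(k.bytes, &amount, sizeof(amount));
  return k;
}

outkey read_output_record(const void *record, uint64_t amount)
{
  outkey ok{};
  if (amount == 0)
    std::memcpy(&ok, record, sizeof(outkey));                // commitment stored on disk
  else
  {
    std::memcpy(&ok.data, record, sizeof(pre_rct_outkey));   // shorter fixed-size record
    ok.commitment = fake_commitment_for(amount);             // regenerated on the fly
  }
  return ok;
}
```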
moneromooo-monero
eb56d0f994
blockchain_db: add functions for adding/removing/getting rct commitments 2016-08-28 21:28:11 +01:00
moneromooo-monero
1593553e03
new unlocked parameter to output_histogram
This constrains the number of instances of any amount
to the unlocked ones (as defined by the default unlock time
setting: outputs with a non-default unlock time are not
considered, so they may be counted as unlocked even if they are
not actually unlocked).
2016-08-01 22:16:00 +01:00
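A minimal sketch of the "unlocked" filter under the default unlock-time rule, assuming a 10-block spendable age; names are illustrative, and outputs carrying a custom unlock time are not examined, as the commit notes:

```cpp
#include <cstdint>
#include <vector>

constexpr uint64_t DEFAULT_TX_SPENDABLE_AGE = 10;  // assumed default unlock window, in blocks

// Count instances of an amount whose outputs are already spendable under the default rule.
uint64_t count_unlocked(const std::vector<uint64_t> &output_heights, uint64_t chain_height)
{
  uint64_t n = 0;
  for (uint64_t h : output_heights)
    if (h + DEFAULT_TX_SPENDABLE_AGE <= chain_height)
      ++n;
  return n;
}
```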
moneromooo-monero
d7b681cd65
remove hf_starting_height db
It's not really needed; it used to be an optimization for when
that code was not using the db and needed to recalculate things
fast on startup.
2016-07-13 21:38:34 +01:00
Howard Chu
c14f9efd52 Migration
Migrate from DB version 0 to version 1 on startup
2016-04-08 03:11:05 +01:00
Howard Chu
d7ea7d9a23 Merge branch 'performance' into master 2016-04-05 21:13:16 +01:00
Howard Chu
372acee723 Cleanup
drop obsolete remove_output()
fix get_output_key(global), fix crash in blockchain_dump
2016-04-05 21:05:24 +01:00
Howard Chu
591e421875 Cleanup and clarify
Try to rationalize the variable names, document usage.
2016-04-05 20:57:45 +01:00
Howard Chu
118dd69dd5 Use DUPFIXED for block_info and output_txs
Saves another ~150MB or so on the full blockchain
2016-04-05 20:55:16 +01:00
Howard Chu
6225716f3c More outputs consolidation
Also bumped DB VERSION to 1
Another significant speedup and space savings:
Get rid of global_output_indices, remove indirection from output to keys

This is the change warptangent described on irc but never got to finish.
2016-04-05 20:55:12 +01:00
warptangent
9aadedb1d0 Schema update: tx_indices - consolidate the tx subdbs from 5 to 3 2016-04-05 20:54:06 +01:00
warptangent
a2f518aa01 Schema update: tx_indices - yet less indirection 2016-04-05 20:54:06 +01:00
warptangent
8d12a8df2c Schema update: tx_indices - improve further with less indirection 2016-04-05 20:54:06 +01:00
warptangent
ae0854a431 Schema update: tx_indices 2016-04-05 20:54:06 +01:00
Howard Chu
8d252a4214 Consolidated block info 2016-04-05 20:53:59 +01:00
warptangent
132c666f67 Update schema for "tx_outputs" to use array containing amount output indices
This speeds up wallet refresh by directly retrieving a tx's amount output indices.

It removes the indirection and walking the amount output duplicate list
for every amount in each requested tx.

"tx_outputs" is used by:
Amount output indices are needed for wallet refresh.
Global output indices are needed for removing a tx.

Both amount output indices and global output indices are now stored in
an array of 64-bit unsigned ints:

tx_outputs[<tx_hash>] -> [ <a1_oi, a1_gi, a2_oi, a2_gi, ...> ]

Previously it was:
tx_outputs[<tx_hash>] -> duplicate list of <a1_gi, a2_gi, a3_gi, ...>

The amount output list had to be walked for every amount in order to
find each amount's output index, by comparing the amount's global output
index with each one in the duplicate list until a match was found.

See also d045dfa7ce
2016-04-05 20:30:50 +01:00
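A minimal sketch of the interleaved layout described above (one flat array of 64-bit values per tx, one <amount index, global index> pair per output); the helper names are illustrative:

```cpp
#include <cstddef>
#include <cstdint>
#include <utility>
#include <vector>

// Flatten <a_oi, a_gi> pairs into the single array stored under tx_outputs[<tx_hash>].
std::vector<uint64_t> pack_tx_output_indices(const std::vector<std::pair<uint64_t, uint64_t>> &pairs)
{
  std::vector<uint64_t> flat;
  flat.reserve(pairs.size() * 2);
  for (const auto &p : pairs) {
    flat.push_back(p.first);   // amount output index (a_oi)
    flat.push_back(p.second);  // global output index (a_gi)
  }
  return flat;
}

// Recover the pairs on read; no walk over per-amount duplicate lists is needed.
std::vector<std::pair<uint64_t, uint64_t>> unpack_tx_output_indices(const std::vector<uint64_t> &flat)
{
  std::vector<std::pair<uint64_t, uint64_t>> pairs;
  for (size_t i = 0; i + 1 < flat.size(); i += 2)
    pairs.emplace_back(flat[i], flat[i + 1]);
  return pairs;
}
```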
moneromooo-monero
600a3cf0c0
New RPC and daemon command to get output histogram
This is a list of existing output amounts along with the number
of outputs of that amount in the blockchain.

The daemon command takes:
- no parameters: all outputs with at least 3 instances
- one parameter: all outputs with at least that many instances
- two parameters: all outputs with an instance count within that range (min, max)

The default starts at 3 to avoid massive spamming of all dust
outputs in the blockchain, and is the current minimum mixin
requirement.

An optional vector of amounts may be passed, to request
histogram only for those outputs.
2016-03-26 21:10:43 +00:00
Howard Chu
a74348e115 Add destructor for readtxns
Only if we created the readtxn. Cleanup on exceptions was missing before.
2016-03-16 11:34:13 +00:00
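A minimal RAII sketch of the idea, using the real LMDB calls mdb_txn_begin/mdb_txn_abort but an illustrative class name (not the actual mdb_txn_safe); error handling is trimmed:

```cpp
#include <lmdb.h>

struct read_txn_guard
{
  MDB_txn *txn = nullptr;
  bool owns = false;

  explicit read_txn_guard(MDB_env *env, MDB_txn *existing = nullptr)
  {
    if (existing)
      txn = existing;                                      // reuse the caller's txn, don't own it
    else
    {
      mdb_txn_begin(env, nullptr, MDB_RDONLY, &txn);       // open our own read txn
      owns = true;
    }
  }
  ~read_txn_guard() { if (owns && txn) mdb_txn_abort(txn); }  // runs on exceptions too

  read_txn_guard(const read_txn_guard &) = delete;
  read_txn_guard &operator=(const read_txn_guard &) = delete;
};
```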
Riccardo Spagni
240a50f3fb
Merge pull request #723
2abdb2c avoid some val copies (Howard Chu)
2016-03-14 22:20:42 +02:00
Howard Chu
92dd4ec6d6 Hack for read/write txn mixup
save the thread ID of the writer thread so we don't try to use
the writetxn from reader threads
2016-03-14 20:19:46 +00:00
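A minimal sketch of the guard described above: remember which thread opened the write txn and make every other thread fall back to its own read txn. Names are illustrative, not the actual bookkeeping in db_lmdb:

```cpp
#include <thread>

struct txn_tracker
{
  std::thread::id writer;
  bool have_write_txn = false;

  void on_write_txn_begin() { writer = std::this_thread::get_id(); have_write_txn = true; }
  void on_write_txn_end()   { have_write_txn = false; }

  // Only the thread that opened the write txn may reuse it; any other thread
  // must open (or reuse) a read txn instead.
  bool may_use_write_txn() const
  {
    return have_write_txn && std::this_thread::get_id() == writer;
  }
};
```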
Howard Chu
2abdb2c9fd avoid some val copies 2016-03-14 09:40:49 +00:00
Howard Chu
8cc7a36f0b read txn/cursor stuff
Could wrap more later.
2016-02-23 20:47:15 +00:00
Howard Chu
090b548c3b Use cursors in write txns
Saves a bit of seek overhead. LMDB frees them automatically
in txn_(commit|abort) so they need no cleanup.
2016-02-17 04:05:29 +00:00
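A sketch of the pattern with the real LMDB cursor API (mdb_cursor_open/mdb_cursor_put); the helper name is illustrative and error handling is trimmed:

```cpp
#include <cstddef>
#include <lmdb.h>

void bulk_put(MDB_env *env, MDB_dbi dbi, MDB_val *keys, MDB_val *vals, size_t n)
{
  MDB_txn *txn = nullptr;
  mdb_txn_begin(env, nullptr, 0, &txn);

  MDB_cursor *cur = nullptr;
  mdb_cursor_open(txn, dbi, &cur);               // one cursor, reused for every record

  for (size_t i = 0; i < n; ++i)
    mdb_cursor_put(cur, &keys[i], &vals[i], 0);  // avoids a fresh b-tree seek per mdb_put

  mdb_txn_commit(txn);                           // write-txn cursors are cleaned up here
}
```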
Howard Chu
ed08d2152e Keep a running blocksize count
Used in batch size estimation, avoids rereading already processed
blocks during import
2016-02-17 04:05:28 +00:00
warptangent
f3a6000094
BlockchainDB/LMDB/BDB: Extract DB txn functions for block add/remove 2016-02-08 09:28:14 -08:00
warptangent
c657e772c4
blockchain_import: Add --drop-hard-fork command 2016-02-08 08:50:47 -08:00
Howard Chu
30f92f5630 Fix hf when import with verify off
Delete the hf tables, so the next open will rescan and regenerate
2016-01-15 17:26:19 +00:00
Riccardo Spagni
810a11267c
fixed copyrights with bad year references 2015-12-31 08:37:27 +02:00
warptangent
ee9d71e9f9
BlockchainDB: skip fixup check if read-only database 2015-12-26 14:30:20 -08:00
warptangent
725acc7f17
Replace tabs with two spaces for consistency with rest of codebase
Remove trailing whitespace in same files.
2015-12-15 06:22:06 -08:00
moneromooo-monero
a98e976f9e
blockchain_db: fixup missing key images in early DB version
Early DB versions did not store key images for inputs if the
transaction spending them had no outputs (ie, all fee). This
is not correct, as this would allow these outputs to be double
spent. This was fixed in 533acc30ed
a few months ago, but databases having synced blocks 2021612 and
685498 with a faulty version will be missing those key images
in the spent keys database. This code checks for this, and adds
those key images if they are missing.
2015-12-06 21:55:05 +00:00
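A minimal sketch of such a fixup pass, with stand-in containers in place of the transaction and spent_keys tables (illustrative, not the BlockchainDB API):

```cpp
#include <array>
#include <cstddef>
#include <set>
#include <vector>

using key_image = std::array<unsigned char, 32>;

struct tx_record
{
  std::vector<key_image> input_key_images;
  size_t num_outputs = 0;
};

// Re-add key images that early DB versions dropped for all-fee (no-output) txs.
size_t fixup_missing_key_images(const std::vector<tx_record> &txs, std::set<key_image> &spent_keys)
{
  size_t added = 0;
  for (const auto &tx : txs)
  {
    if (tx.num_outputs != 0)
      continue;                          // only all-fee txs were affected
    for (const auto &ki : tx.input_key_images)
      if (spent_keys.insert(ki).second)  // insert only if it was missing
        ++added;
  }
  return added;
}
```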
moneromooo-monero
4f873bcbaa
Remove some old/obsolete/unused code
git history's here if needed to get any of this back
2015-10-27 10:01:20 +00:00
moneromooo-monero
f7e99047e4
db_lmdb: add versioning, to detect incompatible format changes 2015-10-26 18:09:46 +00:00
moneromooo-monero
5f397e4412
Add functions to iterate through blocks, txes, outputs, key images 2015-10-25 12:36:11 +00:00
moneromooo-monero
5b11a89a76
hardfork: most state now saved to the DB
There will be a delay on first load of an existing blockchain
as it gets reparsed for this state data.
2015-09-20 18:42:52 +01:00
moneromooo-monero
275894cdef
blockchain: always select random outs using triangular distribution
It was only used by the older blockchain_storage.
We also move the code to the calling blockchain level, to avoid
replicating the code in every DB implementation. This also makes
the get_random_out method obsolete, and we delete it.
2015-08-09 18:07:44 +01:00
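A minimal sketch of a triangular pick over [0, n): taking the square root of a uniform variate gives a density that rises linearly toward the top of the range, i.e. toward more recent global output indices. Illustrative only, not the exact daemon code:

```cpp
#include <cmath>
#include <cstdint>
#include <random>

uint64_t triangular_pick(uint64_t n, std::mt19937_64 &rng)
{
  std::uniform_real_distribution<double> uni(0.0, 1.0);
  const double frac = std::sqrt(uni(rng));        // sqrt(U) has density f(x) = 2x on [0,1]
  const uint64_t i = static_cast<uint64_t>(frac * n);
  return i < n ? i : n - 1;                       // clamp the rare frac == 1.0 case
}
```

With this skew, indices near the chain tip are selected more often, so decoys resemble typical recent spends.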
NoodleDoodleNoodleDoodleNoodleDoodleNoo
e5d2680094 ** CHANGES ARE EXPERIMENTAL (FOR TESTING ONLY)
Blockchain:
1. Optim: Multi-thread long-hash computation when encountering groups of blocks (see the sketch after this entry).
2. Optim: Cache verified txs and return result from cache instead of re-checking whenever possible.
3. Optim: Preload output-keys when encountering groups of blocks. Sort by amount and global-index before bulk querying database and multi-thread when possible.
4. Optim: Disable double spend check on block verification, double spend is already detected when trying to add blocks.
5. Optim: Multi-thread signature computation whenever possible.
6. Patch: Disable locking (recursive mutex) on called functions from check_tx_inputs which causes slowdowns (only seems to happen on ubuntu/VMs??? Reason: TBD)
7. Optim: Removed looped full-tx hash computation when retrieving transactions from pool (???).
8. Optim: Cache difficulty/timestamps (735 blocks) for next-difficulty calculations so that only 2 db reads per new block are needed when a new block arrives (instead of 1470 reads).

Berkeley-DB:
1. Fix: 32-bit data errors causing wrong output global indices and failure to send blocks to peers (etc).
2. Fix: Unable to pop blocks on reorganize due to transaction errors.
3. Patch: Large number of transaction aborts when running multi-threaded bulk queries.
4. Patch: Insufficient locks error when running full sync.
5. Patch: Incorrect db stats when returning from an immediate exit from "pop block" operation.
6. Optim: Add bulk queries to get output global indices.
7. Optim: Modified output_keys table to store public_key+unlock_time+height for single transaction lookup (vs 3)
8. Optim: Used output_keys table to retrieve public_keys instead of going through output_amounts->output_txs+output_indices->txs->output:public_key
9. Optim: Added thread-safe buffers used when multi-threading bulk queries.
10. Optim: Added support for nosync/write_nosync options for improved performance (*see --db-sync-mode option for details)
11. Mod: Added checkpoint thread and auto-remove-logs option.
12. *Now usable on 32-bit systems like RPI2.

LMDB:
1. Optim: Added custom comparison for 256-bit key tables (minor speed-up, TBD: get actual effect)
2. Optim: Modified output_keys table to store public_key+unlock_time+height for single transaction lookup (vs 3)
3. Optim: Used output_keys table to retrieve public_keys instead of going through output_amounts->output_txs+output_indices->txs->output:public_key
4. Optim: Added support for sync/writemap options for improved performance (*see --db-sync-mode option for details)
5. Mod: Auto resize to +1GB instead of multiplier x1.5

ETC:
1. Minor optimizations for slow-hash for ARM (RPI2). Incomplete.
2. Fix: 32-bit saturation bug when computing next difficulty on large blocks.

[PENDING ISSUES]
1. Berkeley DB has a very slow "pop-block" operation. This is very noticeable on the RPI2 as it sometimes takes > 10 MINUTES to pop a block during reorganization.
   This does not happen very often however, most reorgs seem to take a few seconds but it possibly depends on the number of outputs present. TBD.
2. Berkeley db, possible bug "unable to allocate memory". TBD.

[NEW OPTIONS] (*Currently all enabled for testing purposes)
1. --fast-block-sync arg=[0:1] (default: 1)
	a. 0 = Compute long hash per block (may take a while depending on CPU)
	b. 1 = Skip long-hash and verify blocks based on embedded known good block hashes (faster, minimal CPU dependence)
2. --db-sync-mode arg=[[safe|fast|fastest]:[sync|async]:[nblocks_per_sync]] (default: fastest:async:1000)
	a. safe = fdatasync/fsync (or equivalent) per stored block. Very slow, but safest option to protect against power-out/crash conditions.
	b. fast/fastest = Enables asynchronous fdatasync/fsync (or equivalent). Useful for battery operated devices or STABLE systems with UPS and/or systems with battery backed write cache/solid state cache.
	Fast    - Write meta-data but defer data flush.
	Fastest - Defer meta-data and data flush.
	Sync    - Flush data after nblocks_per_sync and wait.
	Async   - Flush data after nblocks_per_sync but do not wait for the operation to finish.
3. --prep-blocks-threads arg=[n] (default: 4 or system max threads, whichever is lower)
        Max number of threads to use when computing long-hash in groups.
4. --show-time-stats arg=[0:1] (default: 1)
	Show benchmark related time stats.
5. --db-auto-remove-logs arg=[0:1] (default: 1)
	For berkeley-db only. Auto remove logs if enabled.

**Note: lmdb and berkeley-db have changes to the tables and are not compatible with official git head version.
	At the moment, you need a full resync to use this optimized version.

[PERFORMANCE COMPARISON]
**Some figures are approximations only.
Using a baseline machine of an i7-2600K+SSD+(with full pow computation):
1. The optimized lmdb/blockchain core can process blocks up to 585K in ~1.25 hours + download time, so it usually takes 2.5 hours to sync the full chain.
2. The current head with the in-memory blockchain can process blocks up to 585K in ~4.2 hours + download time, so it usually takes 5.5 hours to sync the full chain.
3. The current head with lmdb can process blocks up to 585K in ~32 hours + download time and usually takes 36 hours to sync the full chain.

Average processing times (with full pow computation):
lmdb-optimized:
1. tx_ave = 2.5 ms / tx
2. block_ave = 5.87 ms / block
memory-official-repo:
1. tx_ave = 8.85 ms / tx
2. block_ave = 19.68 ms / block
lmdb-official-repo (0f4a036437)
1. tx_ave = 47.8 ms / tx
2. block_ave = 64.2 ms / block

**Note: The following data denotes processing times only (does not include p2p download time)
lmdb-optimized processing times (with full pow computation):
1. Desktop,  Quad-core / 8-threads 2600k  (8Mb) - 1.25 hours processing time (--db-sync-mode=fastest:async:1000).
2. Laptop,   Dual-core / 4-threads U4200  (3Mb) - 4.90 hours processing time (--db-sync-mode=fastest:async:1000).
3. Embedded, Quad-core / 4-threads Z3735F (2x1Mb) - 12.0 hours processing time (--db-sync-mode=fastest:async:1000).

lmdb-optimized processing times (with per-block-checkpoint)
1. Desktop,  Quad-core / 8-threads 2600k  (8Mb) - 10 minutes processing time (--db-sync-mode=fastest:async:1000).

berkeley-db optimized processing times (with full pow computation)
1. Desktop, Quad-core / 8-threads 2600k  (8Mb) - 1.8 hours processing time (--db-sync-mode=fastest:async:1000).
2. RPI2. Improved from an estimated 3 months(???) to 2.5 days (*Need 2AMP supply + Clock:1Ghz + [usb+ssd] to achieve this speed) (--db-sync-mode=fastest:async:1000).

berkeley-db optimized processing times (with per-block-checkpoint)
1. RPI2. 12-15 hours (*Need 2AMP supply + Clock:1Ghz + [usb+ssd] to achieve this speed) (--db-sync-mode=fastest:async:1000).
2015-07-15 23:20:16 -07:00
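A minimal sketch of the multi-threaded long-hash idea referenced in Blockchain item 1 above: hash a group of blocks in parallel, then verify serially. The types and the dummy hash are stand-ins, not the real cn_slow_hash pipeline:

```cpp
#include <array>
#include <cstdint>
#include <future>
#include <vector>

struct block { std::vector<uint8_t> blob; };
using pow_hash = std::array<uint8_t, 32>;

// Stand-in for the real proof-of-work hash, so the sketch is self-contained.
static pow_hash slow_hash(const std::vector<uint8_t> &blob)
{
  pow_hash h{};
  for (size_t i = 0; i < blob.size(); ++i)
    h[i % h.size()] ^= blob[i];
  return h;
}

std::vector<pow_hash> hash_block_group(const std::vector<block> &blocks, unsigned threads)
{
  if (threads == 0)
    threads = 1;
  std::vector<pow_hash> hashes(blocks.size());
  std::vector<std::future<void>> jobs;
  for (unsigned t = 0; t < threads; ++t)
    jobs.push_back(std::async(std::launch::async, [&, t] {
      for (size_t i = t; i < blocks.size(); i += threads)  // strided split of the group
        hashes[i] = slow_hash(blocks[i].blob);
    }));
  for (auto &j : jobs)
    j.get();
  return hashes;  // verified against the expected hashes by the (serial) caller
}
```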
NoodleDoodleNoodleDoodleNoodleDoodleNoo
da1d3c01de
Experimental BDB workaround optimizations 2015-07-15 21:13:42 -07:00
warptangent
fd73d9cc3a
Check and resize if needed at batch transaction start
This currently only affects blockchain_import and blockchain_converter.

When the number of blocks expected for the batch transaction is
provided, make an estimate of the DB space needed. If not enough free
space remains, resize the DB.

The estimate is made based on:
- the average size of the last 500 blocks, or if larger, a min. block
  size of 4k
- a factor for the expanded size a block occupies in the DB across the
  sub-dbs/tables
- a safety factor (1.7) to allow for a "reasonable" average block size
  increase over the batch

Increase the DB size by whichever is greater: the estimated size needed
or a minimum increase size, currently 128 MB.

The conservative factors in the estimate help in testing that the resize
occurs when needed, and without gratuitous size increases. For common
use, the safety factor and minimum increase size could reasonably be
increased.

For testing, setting DEFAULT_MAPSIZE (blockchain_db/lmdb/db_lmdb.h) to 1
<< 27 (128 MB) and recompiling will ensure DB resizes take place sooner
and more frequently.
2015-07-12 00:51:39 -07:00
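A minimal sketch of the resize estimate described above, using the stated constants (4k minimum block size, average of the last 500 blocks, 1.7 safety factor, 128 MB minimum increase); the cross-table expansion factor value and all names are illustrative:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

uint64_t estimate_batch_bytes(const std::vector<uint64_t> &recent_block_sizes,  // last ~500 blocks
                              uint64_t batch_num_blocks)
{
  const uint64_t min_block_size = 4 * 1024;
  uint64_t avg = min_block_size;
  if (!recent_block_sizes.empty())
  {
    uint64_t sum = 0;
    for (uint64_t s : recent_block_sizes) sum += s;
    avg = std::max<uint64_t>(sum / recent_block_sizes.size(), min_block_size);
  }
  const double db_expand_factor = 4.5;  // hypothetical factor for expansion across sub-dbs
  const double safety_factor = 1.7;     // allows for block size growth over the batch
  return static_cast<uint64_t>(batch_num_blocks * avg * db_expand_factor * safety_factor);
}

uint64_t next_mapsize(uint64_t current_mapsize, uint64_t estimated_need)
{
  const uint64_t min_increase = 128ull * 1024 * 1024;  // 128 MB
  return current_mapsize + std::max(estimated_need, min_increase);
}
```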
warptangent
6e170c8b78
Optionally allow DB to know expected number of blocks at batch transaction start
This will assist in a DB resize check.
2015-07-11 23:54:12 -07:00
moneromooo-monero
8069b3ba7f
blockchain_db: add a few const 2015-05-27 19:16:37 +01:00
Thomas Winget
7b7ef73c15
LMDB should now dynamically resize the mapsize
Some filesystems (*cough* NTFS *cough*) aren't good with sparse files,
so this makes LMDB dynamically resize its mapsize as needed.  Note: the
check interval is currently every 10 blocks (for testing) and will
probably need to change to 1000 or something.  Default mapsize set to
1GiB.

Blockchain conversion tools using batching will probably segfault, I'll
fix that in the next commit.
2015-05-16 22:05:54 -04:00
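A minimal sketch of a grow-when-nearly-full check using the real LMDB calls mdb_env_info/mdb_env_stat/mdb_env_set_mapsize; the threshold and increment are illustrative (the commit sets a 1GiB default mapsize and checks periodically):

```cpp
#include <cstdint>
#include <lmdb.h>

void resize_if_needed(MDB_env *env)
{
  MDB_envinfo ei;
  MDB_stat st;
  mdb_env_info(env, &ei);
  mdb_env_stat(env, &st);

  const uint64_t size_used = (uint64_t)st.ms_psize * ei.me_last_pgno;  // pages in use * page size
  const uint64_t mapsize   = ei.me_mapsize;

  // Grow by 1 GiB once less than ~10% of the map remains free.
  if (size_used + mapsize / 10 > mapsize)
  {
    // All transactions must be finished before the map can be resized.
    mdb_env_set_mapsize(env, mapsize + (1ull << 30));
  }
}
```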
Thomas Winget
ac79502308
Move mdb_txn_safe implementation to cpp file 2015-05-15 20:42:47 -04:00