3c10239 unbound: use the mini event fallback implementation (moneromooo-monero)
4e138a0 dns_utils: remove unnecessary string conversion (moneromooo-monero)
f928468 dns_utils: factor the fetching code for different DNS record types (moneromooo-monero)
4ef0da1 dns_utils: simplify string handling and fix leak (moneromooo-monero)
ae5f28c dns_utils: add a const where possible (moneromooo-monero)
f43d465 dns_utils: lock access to the singleton (moneromooo-monero)
5990344 dns: make ctor private (moneromooo-monero)
This ensures one can't instantiate a DNSResolver object by
mistake, and instead uses the singleton. A separate create static
function is added for cases where a new object is explicitly
needed.
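A minimal sketch of the pattern (names approximate, not the exact DNSResolver interface):

    class DNSResolver
    {
    public:
      // The process-wide instance; construction is thread-safe in C++11.
      static DNSResolver &instance()
      {
        static DNSResolver inst;
        return inst;
      }

      // Explicit escape hatch for callers that really need a fresh object.
      static DNSResolver *create() { return new DNSResolver(); }

    private:
      DNSResolver() {}  // private: no accidental instantiation
    };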
0a4bc84 Added ref10 shen_ed25519_ref code, which includes code that can replace crypto-ops with a version straight from Bernstein's ref 10 (ShenNoether)
0d70fdc revert to 776b4fc91a (ShenNoether)
b01f286 Added shen_ed25519_ref to crypto ops subfolder, the point is to directly have bitmonero's crypto code come from Bernstein et al's ref10 code (ShenNoether)
f197599 wallet: encrypt the cache file (moneromooo-monero)
98c76a3 chacha8: add a key generation variant that takes a pointer and size (moneromooo-monero)
It contains private data, such as a record of transactions.
The key is derived from the view and spend secret keys.
The encryption is currently done in one shot, so it may require
a lot of memory for large wallet caches.
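Roughly, the key derivation looks like this (a sketch only; helper names follow the commit above but the exact wallet2 code may differ):

    // Derive a symmetric key from the two wallet secret keys, then encrypt
    // the serialized cache in one pass (hence the memory cost noted above).
    crypto::chacha8_key derive_cache_key(const crypto::secret_key &view_sec,
                                         const crypto::secret_key &spend_sec)
    {
      const crypto::secret_key keys[2] = { view_sec, spend_sec };
      crypto::chacha8_key key;
      // the new pointer+size key generation variant
      crypto::generate_chacha8_key(&keys, sizeof(keys), key);
      return key;
    }

The IV would be generated at save time and stored alongside the ciphertext so the cache can be decrypted on load.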
7c4d6f1 simplewallet: Use default log file name when executable's file path is unknown (warptangent)
b5b0f08 epee: Don't set log file name when process path name isn't found (warptangent)
Default to "simplewallet.log" in the current directory when the file path isn't
obtained from epee.
Previously in this situation, it defaulted to the file name of ".log"
("" + ".log") in the current directory.
(Thanks to @sammy007 for reporting the bug.)
An even earlier version used "" + "/" + ".log" = "/.log", which resulted
in silently not logging in most cases, due to lack of permission.
Test:
PATH=$PATH:</path/to/simplewallet/folder> && simplewallet --wallet-file /dev/null
This results in epee not finding the executable's file path, so
simplewallet will now use a default log filename.
The height function apparently used to return the index of
the last block, rather than the height of the chain. Judging
by the code, this no longer seems to be the case, so we remove
the now incorrect comment, as well as a couple of +/- 1
adjustments which now cause the median calculation to differ
from the original blockchain_storage version.
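The distinction, illustratively (bc being a Blockchain instance):

    // If height counts blocks, a chain with blocks 0 .. N-1 reports
    // get_current_blockchain_height() == N, and the top block sits at
    // index N - 1; a median over the last K block sizes then covers
    // indices [N - K, N - 1] with no further +/- 1 adjustment.
    const uint64_t top_index = bc.get_current_blockchain_height() - 1;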
This removes the need for a lengthy blockchain rescan when
a transaction doesn't end up in the chain after being accepted
by the daemon, or for any other reason the wallet's idea of
spent and unspent outputs gets out of sync with the blockchain's.
The original code removed a tx's key images from the blockchain
when that tx contained an input which was neither of the to-key
nor the gen type. Additionally, the remainder of the tx data was
added to the blockchain only after the double spend check passed.
2634307 daemon: omit extra set of <> in error message (moneromooo-monero)
0822933 daemon: print a decoded tx in print_tx (moneromooo-monero)
1d678b1 daemon: fix print_tx not finding transactions (moneromooo-monero)
It was only used by the older blockchain_storage.
We also move the code to the calling blockchain level, to avoid
replicating the code in every DB implementation. This also makes
the get_random_out method obsolete, and we delete it.
Pros:
- smaller on the blockchain
- shorter integrated addresses
Cons:
- less sparseness
- less ability to embed actual information
The boolean argument to encrypt payment ids is now gone from the
RPC calls, since the decision is made based on the length of the
payment id passed.
A payment ID may be encrypted using the tx secret key and the
receiver's public view key. The receiver can decrypt it with
the tx public key and the receiver's secret view key.
Using integrated addresses now causes the payment IDs to be
encrypted. Payment IDs used manually are not encrypted by default,
but can be encrypted using the new 'encrypt_payment_id' field
in the transfer and transfer_split RPC calls. It is not possible
to use an encrypted payment ID with a manual simplewallet
transfer/transfer_new command, though this is just a limitation
due to input parsing.
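A hedged sketch of the scheme (helper names and the tag byte are assumptions, not the exact implementation; crypto types come from the tree's crypto headers):

    #include <cstring>

    // Sender side: derive a shared secret from the receiver's public view key
    // and the tx secret key, then XOR the 8-byte payment ID with a hash of it.
    // The receiver computes the same derivation from the tx public key and
    // their secret view key; XOR being its own inverse, the same code decrypts.
    void xor_payment_id(crypto::hash8 &payment_id,
                        const crypto::public_key &view_pub,  // receiver's view key
                        const crypto::secret_key &tx_key)    // tx secret key
    {
      crypto::key_derivation derivation;
      crypto::generate_key_derivation(view_pub, tx_key, derivation);

      char buf[sizeof(derivation) + 1];
      memcpy(buf, &derivation, sizeof(derivation));
      buf[sizeof(derivation)] = 0x8d;  // domain-separation tag (assumed value)

      crypto::hash h;
      crypto::cn_fast_hash(buf, sizeof(buf), h);
      for (size_t i = 0; i < sizeof(payment_id); ++i)
        payment_id.data[i] ^= h.data[i];
    }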
If there are no blocks in the database (m_height == 0):
Don't assign an incorrect block range to check.
Skip the average block size check.
Test:
Run blockchain_converter with an existing source blockchain.bin and
a non-existent LMDB destination database.
The converter creates a BlockchainLMDB instance with zero height, due to
not being initialized with a genesis block, normally done by
Blockchain::init(). While different from the behavior of bitmonerod,
blockchain_import, and blockchain_export, the initialization hasn't been
strictly necessary.
The db batch size estimation normally uses an average block size, or a
default minimum block size, whichever is greater. In this case, as
there's no existing blocks to check for an average block size, the
default should be used.
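The guard itself is simple (simplified sketch; names approximate, get_block_size_from_height is a hypothetical accessor):

    #include <algorithm>
    #include <cstdint>

    // Estimate an average block size for batch sizing; with an empty chain
    // there is nothing to average, so fall back to the default minimum.
    uint64_t estimate_batch_block_size(uint64_t m_height, uint64_t num_prev_blocks,
                                       uint64_t min_block_size)
    {
      if (m_height == 0)
        return min_block_size;                        // no blocks to average
      const uint64_t low = (m_height > num_prev_blocks) ? m_height - num_prev_blocks : 0;
      uint64_t total = 0;
      for (uint64_t i = low; i < m_height; ++i)       // valid range only
        total += get_block_size_from_height(i);       // hypothetical accessor
      return std::max(total / (m_height - low), min_block_size);
    }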
It should avoid a lot of the issues caused by change when sending
more than half the wallet's contents.
Actual output selection is still random. Changing this would
improve the matching of transaction amounts to output sizes,
but may have non-obvious effects on blockchain analysis.
It is mapped to the new transfer_new command in simplewallet,
while transfer uses the existing algorithm.
To use it in RPC, add "new_algorithm": true to the transfer_split
JSON command. It is not used in the transfer command.
boost doesn't support %zu for size_t, and the previous change
to %u could technically lose bits (though it would require splitting
a transfer into 4 billion transactions, which seems unlikely).
Blockchain:
1. Optim: Multi-thread long-hash computation when encountering groups of blocks (see the sketch after this list).
2. Optim: Cache verified txs and return the result from the cache instead of re-checking whenever possible.
3. Optim: Preload output-keys when encountering groups of blocks. Sort by amount and global-index before bulk querying the database, and multi-thread when possible.
4. Optim: Disable double spend check on block verification; double spends are already detected when trying to add blocks.
5. Optim: Multi-thread signature computation whenever possible.
6. Patch: Disable locking (recursive mutex) on functions called from check_tx_inputs which causes slowdowns (only seems to happen on ubuntu/VMs??? Reason: TBD)
7. Optim: Removed looped full-tx hash computation when retrieving transactions from the pool (???).
8. Optim: Cache difficulty/timestamps (735 blocks) for next-difficulty calculations so that only 2 db reads per new block are needed when a new block arrives (instead of 1470 reads).
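A hedged sketch of item 1 (partitioning and names approximate; the real code passes each block's actual height to the hash function):

    #include <thread>
    #include <vector>

    void prepare_block_longhashes(const std::vector<cryptonote::block> &blocks,
                                  std::vector<crypto::hash> &hashes,
                                  unsigned n_threads)
    {
      hashes.resize(blocks.size());
      std::vector<std::thread> workers;
      for (unsigned t = 0; t < n_threads; ++t)
        workers.emplace_back([&, t] {
          // stride partitioning: thread t handles blocks t, t+n, t+2n, ...
          for (size_t i = t; i < blocks.size(); i += n_threads)
            cryptonote::get_block_longhash(blocks[i], hashes[i], 0 /*height*/);
        });
      for (auto &w : workers)
        w.join();
    }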
Berkeley-DB:
1. Fix: 32-bit data errors causing wrong output global indices and failure to send blocks to peers (etc).
2. Fix: Unable to pop blocks on reorganize due to transaction errors.
3. Patch: Large number of transaction aborts when running multi-threaded bulk queries.
4. Patch: Insufficient locks error when running full sync.
5. Patch: Incorrect db stats when returning from an immediate exit from "pop block" operation.
6. Optim: Add bulk queries to get output global indices.
7. Optim: Modified output_keys table to store public_key+unlock_time+height, for a single lookup per transaction instead of 3 (see the struct sketch after this list).
8. Optim: Used the output_keys table to retrieve public_keys instead of going through output_amounts->output_txs+output_indices->txs->output:public_key.
9. Optim: Added thread-safe buffers used when multi-threading bulk queries.
10. Optim: Added support for nosync/write_nosync options for improved performance (*see --db-sync-mode option for details)
11. Mod: Added checkpoint thread and auto-remove-logs option.
12. *Now usable on 32-bit systems like RPI2.
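The combined record from item 7 above (and LMDB item 2 below), roughly (field layout illustrative):

    // One record per output, so a single lookup returns everything needed
    // to check a spend; previously this took three separate queries.
    struct output_data_t
    {
      crypto::public_key pubkey;       // the output's one-time public key
      uint64_t           unlock_time;  // when the output becomes spendable
      uint64_t           height;       // block height the output was created at
    };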
LMDB:
1. Optim: Added custom comparison for 256-bit key tables (minor speed-up, TBD: get actual effect; see the comparator sketch after this list).
2. Optim: Modified output_keys table to store public_key+unlock_time+height, for a single lookup per transaction instead of 3.
3. Optim: Used the output_keys table to retrieve public_keys instead of going through output_amounts->output_txs+output_indices->txs->output:public_key.
4. Optim: Added support for sync/writemap options for improved performance (*see --db-sync-mode option for details)
5. Mod: Auto resize to +1GB instead of multiplier x1.5
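For LMDB item 1, the library lets a table override key ordering via mdb_set_compare; with fixed 32-byte keys a plain memcmp suffices (sketch, assuming raw 32-byte keys):

    #include <lmdb.h>
    #include <string.h>

    // Compare two 256-bit keys byte-wise; any consistent ordering works.
    static int compare_hash32(const MDB_val *a, const MDB_val *b)
    {
      return memcmp(a->mv_data, b->mv_data, 32);
    }

    // Installed per-table after opening the dbi:
    //   mdb_set_compare(txn, dbi, compare_hash32);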
ETC:
1. Minor optimizations for slow-hash for ARM (RPI2). Incomplete.
2. Fix: 32-bit saturation bug when computing next difficulty on large blocks.
[PENDING ISSUES]
1. Berkeley db has a very slow "pop-block" operation. This is very noticeable on the RPI2, as it sometimes takes > 10 MINUTES to pop a block during reorganization.
This does not happen very often, however; most reorgs seem to take a few seconds, but it possibly depends on the number of outputs present. TBD.
2. Berkeley db, possible bug "unable to allocate memory". TBD.
[NEW OPTIONS] (*Currently all enabled for testing purposes)
1. --fast-block-sync arg=[0:1] (default: 1)
a. 0 = Compute long hash per block (may take a while depending on CPU)
b. 1 = Skip long-hash and verify blocks based on embedded known good block hashes (faster, minimal CPU dependence)
2. --db-sync-mode arg=[[safe|fast|fastest]:[sync|async]:[nblocks_per_sync]] (default: fastest:async:1000)
a. safe = fdatasync/fsync (or equivalent) per stored block. Very slow, but safest option to protect against power-out/crash conditions.
b. fast/fastest = Enables asynchronous fdatasync/fsync (or equivalent). Useful for battery-operated devices, or STABLE systems with a UPS and/or battery-backed write cache/solid state cache.
Fast - Write meta-data but defer data flush.
Fastest - Defer meta-data and data flush.
Sync - Flush data after nblocks_per_sync and wait.
Async - Flush data after nblocks_per_sync but do not wait for the operation to finish.
3. --prep-blocks-threads arg=[n] (default: 4 or system max threads, whichever is lower)
Max number of threads to use when computing long-hash in groups.
4. --show-time-stats arg=[0:1] (default: 1)
Show benchmark related time stats.
5. --db-auto-remove-logs arg=[0:1] (default: 1)
For berkeley-db only. Auto remove logs if enabled.
**Note: lmdb and berkeley-db have changes to the tables and are not compatible with the official git head version.
At the moment, you need a full resync to use this optimized version.
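For example, a daemon invocation combining the options above might look like:

    bitmonerod --db-sync-mode=fastest:async:1000 --fast-block-sync=1 \
               --prep-blocks-threads=4 --show-time-stats=1 --db-auto-remove-logs=1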
[PERFORMANCE COMPARISON]
**Some figures are approximations only.
Using a baseline machine of an i7-2600K+SSD (with full pow computation):
1. The optimized lmdb/blockchain core can process blocks up to 585K in ~1.25 hours + download time, so it usually takes 2.5 hours to sync the full chain.
2. The current head with the in-memory db can process blocks up to 585K in ~4.2 hours + download time, so it usually takes 5.5 hours to sync the full chain.
3. The current head with lmdb can process blocks up to 585K in ~32 hours + download time, and usually takes 36 hours to sync the full chain.
Average processing times (with full pow computation):
lmdb-optimized:
1. tx_ave = 2.5 ms / tx
2. block_ave = 5.87 ms / block
memory-official-repo:
1. tx_ave = 8.85 ms / tx
2. block_ave = 19.68 ms / block
lmdb-official-repo (0f4a036437)
1. tx_ave = 47.8 ms / tx
2. block_ave = 64.2 ms / block
**Note: The following data denotes processing times only (does not include p2p download time)
lmdb-optimized processing times (with full pow computation):
1. Desktop, Quad-core / 8-threads 2600k (8Mb) - 1.25 hours processing time (--db-sync-mode=fastest:async:1000).
2. Laptop, Dual-core / 4-threads U4200 (3Mb) - 4.90 hours processing time (--db-sync-mode=fastest:async:1000).
3. Embedded, Quad-core / 4-threads Z3735F (2x1Mb) - 12.0 hours processing time (--db-sync-mode=fastest:async:1000).
lmdb-optimized processing times (with per-block-checkpoint)
1. Desktop, Quad-core / 8-threads 2600k (8Mb) - 10 minutes processing time (--db-sync-mode=fastest:async:1000).
berkeley-db optimized processing times (with full pow computation)
1. Desktop, Quad-core / 8-threads 2600k (8Mb) - 1.8 hours processing time (--db-sync-mode=fastest:async:1000).
2. RPI2. Improved from an estimated 3 months(???) to 2.5 days (*Needs 2A supply + Clock:1GHz + [usb+ssd] to achieve this speed) (--db-sync-mode=fastest:async:1000).
berkeley-db optimized processing times (with per-block-checkpoint)
1. RPI2. 12-15 hours (*Needs 2A supply + Clock:1GHz + [usb+ssd] to achieve this speed) (--db-sync-mode=fastest:async:1000).
fd73d9c Check and resize if needed at batch transaction start (warptangent)
f9e4afd blockchain_utilities: Increase debug statement's log level (warptangent)
699e4b3 blockchain_utilities: Pass expected number of blocks when starting batch (warptangent)
6e170c8 Optionally allow DB to know expected number of blocks at batch transaction start (warptangent)