// Copyright (c) 2014-2016, The Monero Project
//
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without modification, are
// permitted provided that the following conditions are met:
//
// 1. Redistributions of source code must retain the above copyright notice, this list of
//    conditions and the following disclaimer.
//
// 2. Redistributions in binary form must reproduce the above copyright notice, this list
//    of conditions and the following disclaimer in the documentation and/or other
//    materials provided with the distribution.
//
// 3. Neither the name of the copyright holder nor the names of its contributors may be
//    used to endorse or promote products derived from this software without specific
//    prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
//
// Parts of this file are originally copyright (c) 2012-2013 The Cryptonote developers
#include "include_base_utils.h"
using namespace epee;

#include <boost/foreach.hpp>
#include <unordered_set>
#include "cryptonote_core.h"
#include "common/command_line.h"
#include "common/util.h"
#include "warnings.h"
#include "crypto/crypto.h"
#include "cryptonote_config.h"
#include "cryptonote_format_utils.h"
#include "misc_language.h"
#include <csignal>
#include "cryptonote_core/checkpoints_create.h"
#include "blockchain_db/blockchain_db.h"
#include "blockchain_db/lmdb/db_lmdb.h"
** CHANGES ARE EXPERIMENTAL (FOR TESTING ONLY)
Blockchain:
1. Optim: Multi-thread long-hash computation when encountering groups of blocks.
2. Optim: Cache verified txs and return the result from the cache instead of re-checking whenever possible.
3. Optim: Preload output-keys when encountering groups of blocks. Sort by amount and global-index before bulk-querying the database, and multi-thread when possible.
4. Optim: Disable the double-spend check on block verification; double spends are already detected when trying to add blocks.
5. Optim: Multi-thread signature computation whenever possible.
6. Patch: Disable locking (recursive mutex) on functions called from check_tx_inputs, which causes slowdowns (only seems to happen on Ubuntu/VMs; reason TBD).
7. Optim: Removed looped full-tx hash computation when retrieving transactions from the pool.
8. Optim: Cache difficulty/timestamps (735 blocks) for next-difficulty calculations so that only 2 db reads per new block are needed when a new block arrives (instead of 1470 reads). A sketch follows this list.
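A rough idea of the cache in item 8, as a hedged sketch only (the class name and the plain std::deque window are hypothetical, not the actual implementation):

#include <cstddef>
#include <cstdint>
#include <deque>

// Keeps the last 735 (timestamp, cumulative difficulty) pairs in memory, so a new
// block contributes its own 2 db reads instead of forcing a re-read of 2 x 735 rows.
class difficulty_cache
{
public:
  static const std::size_t WINDOW = 735;

  // called once per accepted block with the values just read from the db
  void push(uint64_t timestamp, uint64_t cumulative_difficulty)
  {
    m_timestamps.push_back(timestamp);
    m_cumulative_difficulties.push_back(cumulative_difficulty);
    if (m_timestamps.size() > WINDOW)
    {
      m_timestamps.pop_front();              // slide the window forward by one block
      m_cumulative_difficulties.pop_front();
    }
  }

  // consumed by the next-difficulty calculation in place of bulk db queries
  const std::deque<uint64_t>& timestamps() const { return m_timestamps; }
  const std::deque<uint64_t>& cumulative_difficulties() const { return m_cumulative_difficulties; }

private:
  std::deque<uint64_t> m_timestamps;             // oldest ... newest
  std::deque<uint64_t> m_cumulative_difficulties;
};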
Berkeley-DB:
1. Fix: 32-bit data errors causing wrong output global indices and failures to send blocks to peers, etc.
2. Fix: Unable to pop blocks on reorganize due to transaction errors.
3. Patch: Large number of transaction aborts when running multi-threaded bulk queries.
4. Patch: Insufficient-locks error when running a full sync.
5. Patch: Incorrect db stats when returning from an immediate exit from the "pop block" operation.
6. Optim: Added bulk queries to get output global indices.
7. Optim: Modified the output_keys table to store public_key+unlock_time+height for a single-transaction lookup (vs 3).
8. Optim: Used the output_keys table to retrieve public_keys instead of going through output_amounts->output_txs+output_indices->txs->output:public_key.
9. Optim: Added thread-safe buffers used when multi-threading bulk queries.
10. Optim: Added support for nosync/write_nosync options for improved performance (see the --db-sync-mode option for details).
11. Mod: Added a checkpoint thread and an auto-remove-logs option.
12. *Now usable on 32-bit systems like the RPI2.
LMDB:
1. Optim: Added custom comparison for 256-bit key tables (minor speed-up; actual effect TBD).
2. Optim: Modified the output_keys table to store public_key+unlock_time+height for a single-transaction lookup (vs 3).
3. Optim: Used the output_keys table to retrieve public_keys instead of going through output_amounts->output_txs+output_indices->txs->output:public_key.
4. Optim: Added support for sync/writemap options for improved performance (see the --db-sync-mode option for details).
5. Mod: Auto-resize the map by +1GB instead of by a x1.5 multiplier.
ETC:
1. Minor optimizations of slow-hash for ARM (RPI2). Incomplete.
2. Fix: 32-bit saturation bug when computing the next difficulty on large blocks.
[PENDING ISSUES]
1. Berkeley db has a very slow "pop-block" operation. This is very noticeable on the RPI2, where it sometimes takes more than 10 MINUTES to pop a block during reorganization.
   This does not happen very often, however; most reorgs seem to take a few seconds, but it possibly depends on the number of outputs present. TBD.
2. Berkeley db: possible bug, "unable to allocate memory". TBD.
[NEW OPTIONS] (*Currently all enabled for testing purposes)
1. --fast-block-sync arg=[0:1] (default: 1)
   a. 0 = Compute the long hash per block (may take a while depending on CPU).
   b. 1 = Skip the long-hash and verify blocks based on embedded known-good block hashes (faster, minimal CPU dependence).
2. --db-sync-mode arg=[[safe|fast|fastest]:[sync|async]:[nblocks_per_sync]] (default: fastest:async:1000) (a parsing sketch follows this section)
   a. safe = fdatasync/fsync (or equivalent) per stored block. Very slow, but the safest option to protect against power-outage/crash conditions.
   b. fast/fastest = Enables asynchronous fdatasync/fsync (or equivalent). Useful for battery-operated devices, stable systems with a UPS, or systems with a battery-backed write cache/solid-state cache.
      Fast    - Write meta-data but defer the data flush.
      Fastest - Defer both the meta-data and data flush.
      Sync    - Flush data after nblocks_per_sync and wait.
      Async   - Flush data after nblocks_per_sync but do not wait for the operation to finish.
3. --prep-blocks-threads arg=[n] (default: 4 or system max threads, whichever is lower)
   Maximum number of threads to use when computing long-hashes in groups.
4. --show-time-stats arg=[0:1] (default: 1)
   Show benchmark-related time stats.
5. --db-auto-remove-logs arg=[0:1] (default: 1)
   For berkeley-db only. Auto-remove logs if enabled.
**Note: lmdb and berkeley-db have changes to the tables and are not compatible with the official git head version.
  At the moment, you need a full resync to use this optimized version.
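As referenced in option 2 above, a minimal sketch of how such a --db-sync-mode string could be parsed, using a hypothetical helper (not the daemon's actual parsing code):

#include <cstdint>
#include <sstream>
#include <string>

struct db_sync_settings
{
  std::string speed = "fastest"; // safe | fast | fastest
  std::string wait  = "async";   // sync | async
  uint64_t blocks_per_sync = 1000;
};

// Splits "speed:wait:nblocks_per_sync" on ':'; missing fields keep their defaults.
inline bool parse_db_sync_mode(const std::string& arg, db_sync_settings& out)
{
  std::istringstream ss(arg);
  std::string token;
  if (std::getline(ss, token, ':') && !token.empty())
    out.speed = token;
  if (out.speed != "safe" && out.speed != "fast" && out.speed != "fastest")
    return false;
  if (std::getline(ss, token, ':') && !token.empty())
    out.wait = token;
  if (out.wait != "sync" && out.wait != "async")
    return false;
  if (std::getline(ss, token, ':') && !token.empty())
    out.blocks_per_sync = std::stoull(token); // throws on non-numeric input
  return out.blocks_per_sync > 0;
}

For example, parse_db_sync_mode("fastest:async:1000", s) leaves s at the defaults shown above.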
[PERFORMANCE COMPARISON]
**Some figures are approximations only.
Using a baseline machine of an i7-2600K + SSD (with full PoW computation):
1. The optimized lmdb/blockchain core can process blocks up to 585K in ~1.25 hours plus download time, so it usually takes 2.5 hours to sync the full chain.
2. The current head with the in-memory db can process blocks up to 585K in ~4.2 hours plus download time, so it usually takes 5.5 hours to sync the full chain.
3. The current head with lmdb can process blocks up to 585K in ~32 hours plus download time, and usually takes 36 hours to sync the full chain.
Average processing times (with full PoW computation):
lmdb-optimized:
1. tx_ave    = 2.5 ms / tx
2. block_ave = 5.87 ms / block
memory-official-repo:
1. tx_ave    = 8.85 ms / tx
2. block_ave = 19.68 ms / block
lmdb-official-repo (0f4a036437fd41a5498ee5e74e2422ea6177aa3e):
1. tx_ave    = 47.8 ms / tx
2. block_ave = 64.2 ms / block
**Note: The following data denotes processing times only (does not include p2p download time).
lmdb-optimized processing times (with full PoW computation):
1. Desktop, quad-core / 8 threads, 2600K (8MB) - 1.25 hours processing time (--db-sync-mode=fastest:async:1000).
2. Laptop, dual-core / 4 threads, U4200 (3MB) - 4.90 hours processing time (--db-sync-mode=fastest:async:1000).
3. Embedded, quad-core / 4 threads, Z3735F (2x1MB) - 12.0 hours processing time (--db-sync-mode=fastest:async:1000).
lmdb-optimized processing times (with per-block checkpoint):
1. Desktop, quad-core / 8 threads, 2600K (8MB) - 10 minutes processing time (--db-sync-mode=fastest:async:1000).
berkeley-db optimized processing times (with full PoW computation):
1. Desktop, quad-core / 8 threads, 2600K (8MB) - 1.8 hours processing time (--db-sync-mode=fastest:async:1000).
2. RPI2: improved from an estimated 3 months to 2.5 days (*needs a 2A supply, a 1GHz clock, and USB+SSD storage to achieve this speed) (--db-sync-mode=fastest:async:1000).
berkeley-db optimized processing times (with per-block checkpoint):
1. RPI2: 12-15 hours (*needs a 2A supply, a 1GHz clock, and USB+SSD storage to achieve this speed) (--db-sync-mode=fastest:async:1000).
#if defined(BERKELEY_DB)
#include "blockchain_db/berkeleydb/db_bdb.h"
#endif

DISABLE_VS_WARNINGS(4355)

namespace cryptonote
{
  //-----------------------------------------------------------------------------------------------
  core::core(i_cryptonote_protocol* pprotocol):
      m_mempool(m_blockchain_storage),
#if BLOCKCHAIN_DB == DB_LMDB
      m_blockchain_storage(m_mempool),
#else
      m_blockchain_storage(&m_mempool),
#endif
      m_miner(this),
      m_miner_address(boost::value_initialized<account_public_address>()),
      m_starter_message_showed(false),
      m_target_blockchain_height(0),
      m_checkpoints_path(""),
      m_last_dns_checkpoints_update(0),
      m_last_json_checkpoints_update(0)
  {
    set_cryptonote_protocol(pprotocol);
  }
  void core::set_cryptonote_protocol(i_cryptonote_protocol* pprotocol)
  {
    if (pprotocol)
      m_pprotocol = pprotocol;
    else
      m_pprotocol = &m_protocol_stub;
  }
  //-----------------------------------------------------------------------------------
  void core::set_checkpoints(checkpoints&& chk_pts)
  {
    m_blockchain_storage.set_checkpoints(std::move(chk_pts));
  }
  //-----------------------------------------------------------------------------------
  void core::set_checkpoints_file_path(const std::string& path)
  {
    m_checkpoints_path = path;
  }
  //-----------------------------------------------------------------------------------
  void core::set_enforce_dns_checkpoints(bool enforce_dns)
  {
    m_blockchain_storage.set_enforce_dns_checkpoints(enforce_dns);
  }
  //-----------------------------------------------------------------------------------------------
  bool core::update_checkpoints()
  {
    if (m_testnet || m_fakechain) return true;

    if (m_checkpoints_updating.test_and_set()) return true;

    bool res = true;
    // refresh DNS checkpoints at most once an hour, JSON checkpoints at most every ten minutes
    if (time(NULL) - m_last_dns_checkpoints_update >= 3600)
    {
      res = m_blockchain_storage.update_checkpoints(m_checkpoints_path, true);
      m_last_dns_checkpoints_update = time(NULL);
      m_last_json_checkpoints_update = time(NULL);
    }
    else if (time(NULL) - m_last_json_checkpoints_update >= 600)
    {
      res = m_blockchain_storage.update_checkpoints(m_checkpoints_path, false);
      m_last_json_checkpoints_update = time(NULL);
    }

    m_checkpoints_updating.clear();

    // if anything fishy happened getting new checkpoints, bring down the house
    if (!res)
    {
      graceful_exit();
    }
    return res;
  }
  //-----------------------------------------------------------------------------------
  void core::stop()
  {
    graceful_exit();
  }
  //-----------------------------------------------------------------------------------
  void core::init_options(boost::program_options::options_description& desc)
  {
    command_line::add_arg(desc, command_line::arg_data_dir, tools::get_default_data_dir());
    command_line::add_arg(desc, command_line::arg_testnet_data_dir, (boost::filesystem::path(tools::get_default_data_dir()) / "testnet").string());
    command_line::add_arg(desc, command_line::arg_test_drop_download);
    command_line::add_arg(desc, command_line::arg_test_drop_download_height);
    command_line::add_arg(desc, command_line::arg_testnet_on);
    command_line::add_arg(desc, command_line::arg_dns_checkpoints);
    command_line::add_arg(desc, command_line::arg_db_type);
    command_line::add_arg(desc, command_line::arg_prep_blocks_threads);
    command_line::add_arg(desc, command_line::arg_fast_block_sync);
    command_line::add_arg(desc, command_line::arg_db_sync_mode);
    command_line::add_arg(desc, command_line::arg_show_time_stats);
    command_line::add_arg(desc, command_line::arg_db_auto_remove_logs);
    command_line::add_arg(desc, command_line::arg_fakechain);
  }
  //-----------------------------------------------------------------------------------------------
  bool core::handle_command_line(const boost::program_options::variables_map& vm)
  {
    m_testnet = command_line::get_arg(vm, command_line::arg_testnet_on);
    m_fakechain = command_line::get_arg(vm, command_line::arg_fakechain);

    auto data_dir_arg = m_testnet ? command_line::arg_testnet_data_dir : command_line::arg_data_dir;
    m_config_folder = command_line::get_arg(vm, data_dir_arg);

    auto data_dir = boost::filesystem::path(m_config_folder);

    if (!m_testnet && !m_fakechain)
    {
      cryptonote::checkpoints checkpoints;
      if (!cryptonote::create_checkpoints(checkpoints))
      {
        throw std::runtime_error("Failed to initialize checkpoints");
      }
      set_checkpoints(std::move(checkpoints));

      boost::filesystem::path json(JSON_HASH_FILE_NAME);
      boost::filesystem::path checkpoint_json_hashfile_fullpath = data_dir / json;
      set_checkpoints_file_path(checkpoint_json_hashfile_fullpath.string());
    }

    set_enforce_dns_checkpoints(command_line::get_arg(vm, command_line::arg_dns_checkpoints));
    test_drop_download_height(command_line::get_arg(vm, command_line::arg_test_drop_download_height));

    if (command_line::get_arg(vm, command_line::arg_test_drop_download))
      test_drop_download();

    return true;
  }
  //-----------------------------------------------------------------------------------------------
  uint64_t core::get_current_blockchain_height() const
  {
    return m_blockchain_storage.get_current_blockchain_height();
  }
  //-----------------------------------------------------------------------------------------------
  bool core::get_blockchain_top(uint64_t& height, crypto::hash& top_id) const
  {
    top_id = m_blockchain_storage.get_tail_id(height);
    return true;
  }
  //-----------------------------------------------------------------------------------------------
  bool core::get_blocks(uint64_t start_offset, size_t count, std::list<block>& blocks, std::list<transaction>& txs) const
  {
    return m_blockchain_storage.get_blocks(start_offset, count, blocks, txs);
  }
  //-----------------------------------------------------------------------------------------------
  bool core::get_blocks(uint64_t start_offset, size_t count, std::list<block>& blocks) const
  {
    return m_blockchain_storage.get_blocks(start_offset, count, blocks);
  }
  //-----------------------------------------------------------------------------------------------
  bool core::get_transactions(const std::vector<crypto::hash>& txs_ids, std::list<transaction>& txs, std::list<crypto::hash>& missed_txs) const
  {
    return m_blockchain_storage.get_transactions(txs_ids, txs, missed_txs);
  }
  //-----------------------------------------------------------------------------------------------
  bool core::get_alternative_blocks(std::list<block>& blocks) const
  {
    return m_blockchain_storage.get_alternative_blocks(blocks);
  }
  //-----------------------------------------------------------------------------------------------
  size_t core::get_alternative_blocks_count() const
  {
    return m_blockchain_storage.get_alternative_blocks_count();
  }
  //-----------------------------------------------------------------------------------------------
  bool core::lock_db_directory(const boost::filesystem::path& path)
  {
    // boost doesn't like locking directories...
    const boost::filesystem::path lock_path = path / ".daemon_lock";
    try
    {
      // ensure the file exists
      std::ofstream(lock_path.string(), std::ios::out).close();

      db_lock = boost::interprocess::file_lock(lock_path.string().c_str());
      LOG_PRINT_L1("Locking " << lock_path.string());
      if (!db_lock.try_lock())
      {
        LOG_PRINT_L0("Failed to lock " << lock_path.string());
        return false;
      }
      return true;
    }
    catch (const std::exception& e)
    {
      LOG_PRINT_L0("Error trying to lock " << lock_path.string() << ": " << e.what());
      return false;
    }
  }
  //-----------------------------------------------------------------------------------------------
  bool core::unlock_db_directory()
  {
    db_lock.unlock();
    db_lock = boost::interprocess::file_lock();
    return true;
  }
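  //-----------------------------------------------------------------------------------------------
  // Hedged usage sketch for the two methods above (illustrative only; assumes they are
  // reachable from the calling code and that data_dir names an existing directory):
  //
  //   core c(nullptr);                    // nullptr -> internal protocol stub is used
  //   if (c.lock_db_directory(data_dir))  // creates and locks <data_dir>/.daemon_lock
  //   {
  //     // safe to open the database; a second daemon on the same dir fails to lock
  //     c.unlock_db_directory();
  //   }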
//-----------------------------------------------------------------------------------------------
  bool core::init(const boost::program_options::variables_map& vm)
  {
    bool r = handle_command_line(vm);

    r = m_mempool.init(m_fakechain ? std::string() : m_config_folder);
    CHECK_AND_ASSERT_MES(r, false, "Failed to initialize memory pool");

#if BLOCKCHAIN_DB == DB_LMDB
    std::string db_type = command_line::get_arg(vm, command_line::arg_db_type);
    std::string db_sync_mode = command_line::get_arg(vm, command_line::arg_db_sync_mode);
    bool fast_sync = command_line::get_arg(vm, command_line::arg_fast_block_sync) != 0;
    uint64_t blocks_threads = command_line::get_arg(vm, command_line::arg_prep_blocks_threads);

    boost::filesystem::path folder(m_config_folder);
    if (m_fakechain)
      folder /= "fake";

    // make sure the data directory exists, and try to lock it
    CHECK_AND_ASSERT_MES(boost::filesystem::exists(folder) || boost::filesystem::create_directories(folder), false,
      std::string("Failed to create directory ").append(folder.string()).c_str());
    if (!lock_db_directory(folder))
    {
      LOG_ERROR("Failed to lock " << folder);
      return false;
    }

    // check for blockchain.bin
    try
    {
      const boost::filesystem::path old_files = folder;
      if (boost::filesystem::exists(old_files / "blockchain.bin"))
      {
        LOG_PRINT_RED_L0("Found old-style blockchain.bin in " << old_files.string());
        LOG_PRINT_RED_L0("Monero now uses a new format. You can either remove blockchain.bin to start syncing");
        LOG_PRINT_RED_L0("the blockchain anew, or use blockchain_export and blockchain_import to convert your");
        LOG_PRINT_RED_L0("existing blockchain.bin to the new format. See README.md for instructions.");
        return false;
      }
    }
    // folder might not be a directory, etc, etc
    catch (...) { }

    BlockchainDB* db = nullptr;
    uint64_t BDB_FAST_MODE = 0;
    uint64_t BDB_FASTEST_MODE = 0;
    uint64_t BDB_SAFE_MODE = 0;

    if (db_type == "lmdb")
    {
      db = new BlockchainLMDB();
    }
    else if (db_type == "berkeley")
    {
#if defined(BERKELEY_DB)
      db = new BlockchainBDB();
      // map the named sync modes onto BerkeleyDB transaction flags
      BDB_FAST_MODE = DB_TXN_WRITE_NOSYNC;
      BDB_FASTEST_MODE = DB_TXN_NOSYNC;
      BDB_SAFE_MODE = DB_TXN_SYNC;
#else
      LOG_ERROR("BerkeleyDB support disabled.");
      return false;
#endif
    }
    else
    {
      LOG_ERROR("Attempted to use non-existent database type");
      return false;
    }

    folder /= db->get_db_name();

    LOG_PRINT_L0("Loading blockchain from folder " << folder.string() << " ...");

    const std::string filename = folder.string();
    // temporarily default to fastest:async:1000
    blockchain_db_sync_mode sync_mode = db_async;
    uint64_t blocks_per_sync = 1000;

    try
    {
2015-07-10 20:09:32 +00:00
  uint64_t db_flags = 0;
  bool islmdb = db_type == "lmdb";

  std::vector<std::string> options;
  boost::trim(db_sync_mode);
  boost::split(options, db_sync_mode, boost::is_any_of(":"));

  for (const auto& option : options)
    LOG_PRINT_L0("option: " << option);

  // temporarily default to fastest:async:1000
  uint64_t DEFAULT_FLAGS = islmdb ? MDB_WRITEMAP | MDB_MAPASYNC | MDB_NORDAHEAD | MDB_NOMETASYNC | MDB_NOSYNC : BDB_FASTEST_MODE;

  if (options.size() == 0)
  {
    // temporarily default to fastest:async:1000
    db_flags = DEFAULT_FLAGS;
  }

  bool safemode = false;
  if (options.size() >= 1)
  {
    if (options[0] == "safe")
    {
      safemode = true;
      db_flags = islmdb ? MDB_NORDAHEAD : BDB_SAFE_MODE;
      sync_mode = db_nosync;
    }
    else if (options[0] == "fast")
      db_flags = islmdb ? MDB_NOMETASYNC | MDB_NOSYNC | MDB_NORDAHEAD : BDB_FAST_MODE;
    else if (options[0] == "fastest")
      db_flags = islmdb ? MDB_WRITEMAP | MDB_MAPASYNC | MDB_NORDAHEAD | MDB_NOMETASYNC | MDB_NOSYNC : BDB_FASTEST_MODE;
    else
      db_flags = DEFAULT_FLAGS;
  }

  if (options.size() >= 2 && !safemode)
  {
    if (options[1] == "sync")
      sync_mode = db_sync;
    else if (options[1] == "async")
      sync_mode = db_async;
  }

  if (options.size() >= 3 && !safemode)
  {
    blocks_per_sync = atoll(options[2].c_str());
    if (blocks_per_sync > 5000)
      blocks_per_sync = 5000;
    if (blocks_per_sync == 0)
      blocks_per_sync = 1;
  }
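For reference, the splitting step above can be exercised in isolation. A minimal standalone sketch, assuming only Boost.StringAlgo ("fast:sync:250" is an arbitrary example value, not a recommendation):

// Standalone sketch of the option-splitting step; not part of core.cpp.
#include <boost/algorithm/string.hpp>
#include <iostream>
#include <string>
#include <vector>

int main()
{
  std::string db_sync_mode = "  fast:sync:250  ";
  boost::trim(db_sync_mode);                           // -> "fast:sync:250"
  std::vector<std::string> options;
  boost::split(options, db_sync_mode, boost::is_any_of(":"));
  for (const auto& option : options)
    std::cout << "option: " << option << '\n';         // fast / sync / 250
  // fed into the logic above, this would select db_sync, blocks_per_sync = 250,
  // and (for lmdb) MDB_NOMETASYNC | MDB_NOSYNC | MDB_NORDAHEAD
  return 0;
}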
2015-12-08 23:06:29 +00:00
  bool auto_remove_logs = command_line::get_arg(vm, command_line::arg_db_auto_remove_logs) != 0;
2015-07-10 20:09:32 +00:00
  db->set_auto_remove_logs(auto_remove_logs);
  db->open(filename, db_flags);
  if (!db->m_open)
2015-12-14 04:54:39 +00:00
    return false;
2015-03-25 15:41:30 +00:00
}
catch (const DB_ERROR& e)
{
  LOG_PRINT_L0("Error opening database: " << e.what());
  return false;
}
2015-07-10 20:09:32 +00:00
  m_blockchain_storage.set_user_options(blocks_threads,
      blocks_per_sync, sync_mode, fast_sync);
2015-12-13 11:38:37 +00:00
  r = m_blockchain_storage.init(db, m_testnet, m_fakechain);
2015-07-10 20:09:32 +00:00
2016-01-29 15:09:17 +00:00
// now that we have a valid m_blockchain_storage, we can clean out any
// transactions in the pool that do not conform to the current fork
  m_mempool.validate(m_blockchain_storage.get_current_hard_fork_version());
2015-12-08 23:06:29 +00:00
  bool show_time_stats = command_line::get_arg(vm, command_line::arg_show_time_stats) != 0;
2015-07-10 20:09:32 +00:00
  m_blockchain_storage.set_show_time_stats(show_time_stats);
2015-03-25 15:41:30 +00:00
#else
2015-01-29 22:10:53 +00:00
  r = m_blockchain_storage.init(m_config_folder, m_testnet);
2015-03-25 15:41:30 +00:00
#endif
2014-03-03 22:07:58 +00:00
  CHECK_AND_ASSERT_MES(r, false, "Failed to initialize blockchain storage");
2014-09-27 21:46:12 +00:00
// load json & DNS checkpoints, and verify them
// with respect to what blocks we already have
2014-09-29 20:30:47 +00:00
  CHECK_AND_ASSERT_MES(update_checkpoints(), false, "One or more checkpoints loaded from json or dns conflicted with existing checkpoints.");
2014-09-27 21:46:12 +00:00
2015-01-29 22:10:53 +00:00
  r = m_miner.init(vm, m_testnet);
2014-03-03 22:07:58 +00:00
  CHECK_AND_ASSERT_MES(r, false, "Failed to initialize miner instance");
  return load_state_data();
}
//-----------------------------------------------------------------------------------------------
bool core::set_genesis_block(const block& b)
{
  return m_blockchain_storage.reset_and_set_genesis_block(b);
}
//-----------------------------------------------------------------------------------------------
bool core::load_state_data()
{
  // may be some code later
  return true;
}
//-----------------------------------------------------------------------------------------------
bool core::deinit()
{
2015-12-14 04:54:39 +00:00
  m_miner.stop();
  m_mempool.deinit();
  if (!m_fast_exit)
  {
    m_blockchain_storage.deinit();
  }
2016-01-31 12:58:08 +00:00
  unlock_db_directory();
2014-03-03 22:07:58 +00:00
  return true;
}
2015-02-12 19:59:39 +00:00
//-----------------------------------------------------------------------------------------------
void core::set_fast_exit()
{
  m_fast_exit = true;
}
//-----------------------------------------------------------------------------------------------
bool core::get_fast_exit()
{
  return m_fast_exit;
}
//-----------------------------------------------------------------------------------------------
2015-02-20 21:28:03 +00:00
void core::test_drop_download()
2015-02-12 19:59:39 +00:00
{
2015-12-14 04:54:39 +00:00
  m_test_drop_download = false;
2015-02-12 19:59:39 +00:00
}
//-----------------------------------------------------------------------------------------------
2015-02-20 21:28:03 +00:00
void core::test_drop_download_height(uint64_t height)
2015-02-12 19:59:39 +00:00
{
2015-12-14 04:54:39 +00:00
  m_test_drop_download_height = height;
2015-02-20 21:28:03 +00:00
}
//-----------------------------------------------------------------------------------------------
2015-09-19 10:25:57 +00:00
bool core::get_test_drop_download() const
2015-02-20 21:28:03 +00:00
{
2015-12-14 04:54:39 +00:00
  return m_test_drop_download;
2015-02-20 21:28:03 +00:00
}
//-----------------------------------------------------------------------------------------------
2015-09-19 10:25:57 +00:00
bool core::get_test_drop_download_height() const
2015-02-20 21:28:03 +00:00
{
2015-12-14 04:54:39 +00:00
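  // (presumably: return true while downloaded blocks should still be processed;
  //  a configured drop height of 0 disables the test limit)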
  if (m_test_drop_download_height == 0)
    return true;
  if (get_blockchain_storage().get_current_blockchain_height() <= m_test_drop_download_height)
    return true;
2015-02-20 21:28:03 +00:00
2015-12-14 04:54:39 +00:00
  return false;
2015-02-12 19:59:39 +00:00
}
2014-03-03 22:07:58 +00:00
//-----------------------------------------------------------------------------------------------
2015-11-21 00:26:48 +00:00
bool core::handle_incoming_tx(const blobdata& tx_blob, tx_verification_context& tvc, bool keeped_by_block, bool relayed)
2014-03-03 22:07:58 +00:00
{
  tvc = boost::value_initialized<tx_verification_context>();
  // want to process all transactions sequentially
  CRITICAL_REGION_LOCAL(m_incoming_tx_lock);
  if (tx_blob.size() > get_max_tx_size())
  {
2014-09-09 09:32:00 +00:00
    LOG_PRINT_L1("WRONG TRANSACTION BLOB, too big size " << tx_blob.size() << ", rejected");
2014-03-03 22:07:58 +00:00
    tvc.m_verifivation_failed = true;
    return false;
  }
  crypto::hash tx_hash = null_hash;
  crypto::hash tx_prefixt_hash = null_hash;
  transaction tx;
  if (!parse_tx_from_blob(tx, tx_hash, tx_prefixt_hash, tx_blob))
  {
2014-09-09 09:32:00 +00:00
    LOG_PRINT_L1("WRONG TRANSACTION BLOB, Failed to parse, rejected");
2014-03-03 22:07:58 +00:00
    tvc.m_verifivation_failed = true;
    return false;
  }
  //std::cout << "!"<< tx.vin.size() << std::endl;
  if (!check_tx_syntax(tx))
  {
2014-09-09 09:32:00 +00:00
    LOG_PRINT_L1("WRONG TRANSACTION BLOB, Failed to check tx " << tx_hash << " syntax, rejected");
2014-03-03 22:07:58 +00:00
    tvc.m_verifivation_failed = true;
    return false;
  }
  if (!check_tx_semantic(tx, keeped_by_block))
  {
2014-09-09 09:32:00 +00:00
    LOG_PRINT_L1("WRONG TRANSACTION BLOB, Failed to check tx " << tx_hash << " semantic, rejected");
2014-03-03 22:07:58 +00:00
    tvc.m_verifivation_failed = true;
    return false;
  }
2015-11-21 00:26:48 +00:00
  bool r = add_new_tx(tx, tx_hash, tx_prefixt_hash, tx_blob.size(), tvc, keeped_by_block, relayed);
2014-03-03 22:07:58 +00:00
  if (tvc.m_verifivation_failed)
2014-09-09 09:32:00 +00:00
  { LOG_PRINT_RED_L1("Transaction verification failed: " << tx_hash); }
2014-03-03 22:07:58 +00:00
  else if (tvc.m_verifivation_impossible)
2014-09-09 09:32:00 +00:00
  { LOG_PRINT_RED_L1("Transaction verification impossible: " << tx_hash); }
2014-03-03 22:07:58 +00:00
  if (tvc.m_added_to_pool)
    LOG_PRINT_L1("tx added: " << tx_hash);
  return r;
}
//-----------------------------------------------------------------------------------------------
2015-09-19 10:25:57 +00:00
bool core::get_stat_info(core_stat_info& st_inf) const
2014-03-03 22:07:58 +00:00
{
  st_inf.mining_speed = m_miner.get_speed();
  st_inf.alternative_blocks = m_blockchain_storage.get_alternative_blocks_count();
  st_inf.blockchain_height = m_blockchain_storage.get_current_blockchain_height();
  st_inf.tx_pool_size = m_mempool.get_transactions_count();
  st_inf.top_block_id_str = epee::string_tools::pod_to_hex(m_blockchain_storage.get_tail_id());
  return true;
}
//-----------------------------------------------------------------------------------------------
2015-09-19 10:25:57 +00:00
bool core::check_tx_semantic(const transaction& tx, bool keeped_by_block) const
2014-03-03 22:07:58 +00:00
{
  if (!tx.vin.size())
  {
2014-09-09 09:32:00 +00:00
    LOG_PRINT_RED_L1("tx with empty inputs, rejected for tx id= " << get_transaction_hash(tx));
2014-03-03 22:07:58 +00:00
    return false;
  }
  if (!check_inputs_types_supported(tx))
  {
2014-09-09 09:32:00 +00:00
    LOG_PRINT_RED_L1("unsupported input types for tx id= " << get_transaction_hash(tx));
2014-03-03 22:07:58 +00:00
    return false;
  }
  if (!check_outs_valid(tx))
  {
2014-09-09 09:32:00 +00:00
    LOG_PRINT_RED_L1("tx with invalid outputs, rejected for tx id= " << get_transaction_hash(tx));
2014-03-03 22:07:58 +00:00
    return false;
  }
  if (!check_money_overflow(tx))
  {
2014-09-09 09:32:00 +00:00
    LOG_PRINT_RED_L1("tx has money overflow, rejected for tx id= " << get_transaction_hash(tx));
2014-03-03 22:07:58 +00:00
    return false;
  }
2014-03-20 11:46:11 +00:00
  uint64_t amount_in = 0;
2014-03-03 22:07:58 +00:00
  get_inputs_money_amount(tx, amount_in);
2014-03-20 11:46:11 +00:00
  uint64_t amount_out = get_outs_money_amount(tx);
2014-03-03 22:07:58 +00:00
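  // note: inputs must strictly exceed outputs; the difference is the implicit tx fee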
  if (amount_in <= amount_out)
  {
2014-09-09 09:32:00 +00:00
    LOG_PRINT_RED_L1("tx with wrong amounts: ins " << amount_in << ", outs " << amount_out << ", rejected for tx id= " << get_transaction_hash(tx));
2014-03-03 22:07:58 +00:00
    return false;
  }
2014-10-28 03:44:45 +00:00
  if (!keeped_by_block && get_object_blobsize(tx) >= m_blockchain_storage.get_current_cumulative_blocksize_limit() - CRYPTONOTE_COINBASE_BLOB_RESERVED_SIZE)
2014-03-03 22:07:58 +00:00
  {
2014-10-28 03:44:45 +00:00
    LOG_PRINT_RED_L1("tx is too large " << get_object_blobsize(tx) << ", expected not bigger than " << m_blockchain_storage.get_current_cumulative_blocksize_limit() - CRYPTONOTE_COINBASE_BLOB_RESERVED_SIZE);
2014-03-03 22:07:58 +00:00
    return false;
  }
  // check that the tx uses distinct key images
  if (!check_tx_inputs_keyimages_diff(tx))
  {
2014-09-13 08:04:05 +00:00
    LOG_PRINT_RED_L1("tx uses a single key image more than once");
2014-03-03 22:07:58 +00:00
    return false;
  }
  return true;
}
//-----------------------------------------------------------------------------------------------
2015-09-19 10:25:57 +00:00
bool core::is_key_image_spent(const crypto::key_image& key_image) const
2015-08-11 09:49:15 +00:00
{
  return m_blockchain_storage.have_tx_keyimg_as_spent(key_image);
}
//-----------------------------------------------------------------------------------------------
2015-09-19 10:25:57 +00:00
bool core::are_key_images_spent(const std::vector<crypto::key_image>& key_im, std::vector<bool>& spent) const
2015-08-11 09:49:15 +00:00
{
  spent.clear();
  BOOST_FOREACH(auto& ki, key_im)
  {
    spent.push_back(m_blockchain_storage.have_tx_keyimg_as_spent(ki));
  }
  return true;
}
//-----------------------------------------------------------------------------------------------
2015-09-19 10:25:57 +00:00
bool core::check_tx_inputs_keyimages_diff(const transaction& tx) const
2014-03-03 22:07:58 +00:00
{
  std::unordered_set<crypto::key_image> ki;
  BOOST_FOREACH(const auto& in, tx.vin)
  {
    CHECKED_GET_SPECIFIC_VARIANT(in, const txin_to_key, tokey_in, false);
    if (!ki.insert(tokey_in.k_image).second)
      return false;
  }
  return true;
}
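The uniqueness test above relies on std::unordered_set::insert() returning a pair whose .second member is false when the key was already present. A minimal standalone sketch of that idiom (illustrative only, using int rather than key images):

// insert().second is true only for the first occurrence of a value.
#include <cassert>
#include <unordered_set>

int main()
{
  std::unordered_set<int> seen;
  assert(seen.insert(42).second);    // newly inserted
  assert(!seen.insert(42).second);   // duplicate detected
  return 0;
}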
//-----------------------------------------------------------------------------------------------
2015-11-21 00:26:48 +00:00
bool core::add_new_tx(const transaction& tx, tx_verification_context& tvc, bool keeped_by_block, bool relayed)
2014-03-03 22:07:58 +00:00
{
  crypto::hash tx_hash = get_transaction_hash(tx);
  crypto::hash tx_prefix_hash = get_transaction_prefix_hash(tx);
  blobdata bl;
  t_serializable_object_to_blob(tx, bl);
2015-11-21 00:26:48 +00:00
  return add_new_tx(tx, tx_hash, tx_prefix_hash, bl.size(), tvc, keeped_by_block, relayed);
2014-03-03 22:07:58 +00:00
}
//-----------------------------------------------------------------------------------------------
2015-09-19 10:25:57 +00:00
size_t core::get_blockchain_total_transactions() const
2014-03-03 22:07:58 +00:00
{
  return m_blockchain_storage.get_total_transactions();
}
//-----------------------------------------------------------------------------------------------
2014-06-27 22:06:21 +00:00
//bool core::get_outs(uint64_t amount, std::list<crypto::public_key>& pkeys)
//{
// return m_blockchain_storage.get_outs(amount, pkeys);
//}
2014-03-03 22:07:58 +00:00
//-----------------------------------------------------------------------------------------------
2015-11-21 00:26:48 +00:00
bool core::add_new_tx(const transaction& tx, const crypto::hash& tx_hash, const crypto::hash& tx_prefix_hash, size_t blob_size, tx_verification_context& tvc, bool keeped_by_block, bool relayed)
2014-03-03 22:07:58 +00:00
{
  if (m_mempool.have_tx(tx_hash))
  {
    LOG_PRINT_L2("tx " << tx_hash << " is already in the tx_pool");
    return true;
  }
  if (m_blockchain_storage.have_tx(tx_hash))
  {
    LOG_PRINT_L2("tx " << tx_hash << " is already in the blockchain");
    return true;
  }
2016-01-29 15:09:17 +00:00
  uint8_t version = m_blockchain_storage.get_current_hard_fork_version();
  return m_mempool.add_tx(tx, tx_hash, blob_size, tvc, keeped_by_block, relayed, version);
2015-11-21 00:26:48 +00:00
}
//-----------------------------------------------------------------------------------------------
bool core::relay_txpool_transactions()
{
  // we attempt to relay txes that should be relayed, but were not
  std::list<std::pair<crypto::hash, cryptonote::transaction>> txs;
  if (m_mempool.get_relayable_transactions(txs))
  {
    cryptonote_connection_context fake_context = AUTO_VAL_INIT(fake_context);
    tx_verification_context tvc = AUTO_VAL_INIT(tvc);
    NOTIFY_NEW_TRANSACTIONS::request r;
    blobdata bl;
    for (auto it = txs.begin(); it != txs.end(); ++it)
    {
      t_serializable_object_to_blob(it->second, bl);
      r.txs.push_back(bl);
    }
    get_protocol()->relay_transactions(r, fake_context);
    m_mempool.set_relayed(txs);
  }
  return true;
2014-03-03 22:07:58 +00:00
}
//-----------------------------------------------------------------------------------------------
bool core::get_block_template(block& b, const account_public_address& adr, difficulty_type& diffic, uint64_t& height, const blobdata& ex_nonce)
{
  return m_blockchain_storage.create_block_template(b, adr, diffic, height, ex_nonce);
}
//-----------------------------------------------------------------------------------------------
2015-09-19 10:25:57 +00:00
bool core::find_blockchain_supplement(const std::list<crypto::hash>& qblock_ids, NOTIFY_RESPONSE_CHAIN_ENTRY::request& resp) const
2014-03-03 22:07:58 +00:00
{
  return m_blockchain_storage.find_blockchain_supplement(qblock_ids, resp);
}
//-----------------------------------------------------------------------------------------------
2015-09-19 10:25:57 +00:00
bool core::find_blockchain_supplement(const uint64_t req_start_block, const std::list<crypto::hash>& qblock_ids, std::list<std::pair<block, std::list<transaction>>>& blocks, uint64_t& total_height, uint64_t& start_height, size_t max_count) const
2014-03-03 22:07:58 +00:00
{
2014-08-01 08:17:50 +00:00
  return m_blockchain_storage.find_blockchain_supplement(req_start_block, qblock_ids, blocks, total_height, start_height, max_count);
2014-03-03 22:07:58 +00:00
}
//-----------------------------------------------------------------------------------------------
2015-09-19 10:25:57 +00:00
void core::print_blockchain(uint64_t start_index, uint64_t end_index) const
2014-03-03 22:07:58 +00:00
{
  m_blockchain_storage.print_blockchain(start_index, end_index);
}
//-----------------------------------------------------------------------------------------------
2015-09-19 10:25:57 +00:00
void core::print_blockchain_index() const
2014-03-03 22:07:58 +00:00
{
  m_blockchain_storage.print_blockchain_index();
}
//-----------------------------------------------------------------------------------------------
void core::print_blockchain_outs(const std::string& file)
{
  m_blockchain_storage.print_blockchain_outs(file);
}
//-----------------------------------------------------------------------------------------------
2015-09-19 10:25:57 +00:00
bool core::get_random_outs_for_amounts(const COMMAND_RPC_GET_RANDOM_OUTPUTS_FOR_AMOUNTS::request& req, COMMAND_RPC_GET_RANDOM_OUTPUTS_FOR_AMOUNTS::response& res) const
2014-03-03 22:07:58 +00:00
{
  return m_blockchain_storage.get_random_outs_for_amounts(req, res);
}
//-----------------------------------------------------------------------------------------------
2015-09-19 10:25:57 +00:00
bool core::get_tx_outputs_gindexs(const crypto::hash& tx_id, std::vector<uint64_t>& indexs) const
2014-03-03 22:07:58 +00:00
{
  return m_blockchain_storage.get_tx_outputs_gindexs(tx_id, indexs);
}
//-----------------------------------------------------------------------------------------------
void core::pause_mine()
{
  m_miner.pause();
}
//-----------------------------------------------------------------------------------------------
void core::resume_mine()
{
  m_miner.resume();
}
//-----------------------------------------------------------------------------------------------
bool core::handle_block_found(block& b)
{
  block_verification_context bvc = boost::value_initialized<block_verification_context>();
  m_miner.pause();
  m_blockchain_storage.add_new_block(b, bvc);
  // anyway - update miner template
  update_miner_block_template();
  m_miner.resume();
  CHECK_AND_ASSERT_MES(!bvc.m_verifivation_failed, false, "mined block failed verification");
  if (bvc.m_added_to_main_chain)
  {
    cryptonote_connection_context exclude_context = boost::value_initialized<cryptonote_connection_context>();
    NOTIFY_NEW_BLOCK::request arg = AUTO_VAL_INIT(arg);
    arg.hop = 0;
    arg.current_blockchain_height = m_blockchain_storage.get_current_blockchain_height();
    std::list<crypto::hash> missed_txs;
    std::list<transaction> txs;
    m_blockchain_storage.get_transactions(b.tx_hashes, txs, missed_txs);
    if (missed_txs.size() && m_blockchain_storage.get_block_id_by_height(get_block_height(b)) != get_block_hash(b))
    {
2014-09-09 09:32:00 +00:00
      LOG_PRINT_L1("Block found, but it seems a reorganize just happened after that; do not relay this block");
2014-03-03 22:07:58 +00:00
      return true;
    }
    CHECK_AND_ASSERT_MES(txs.size() == b.tx_hashes.size() && !missed_txs.size(), false, "can't find some transactions in found block: " << get_block_hash(b) << " txs.size()=" << txs.size()
      << ", b.tx_hashes.size()=" << b.tx_hashes.size() << ", missed_txs.size() " << missed_txs.size());
    block_to_blob(b, arg.b.block);
    // pack transactions
    BOOST_FOREACH(auto& tx, txs)
      arg.b.txs.push_back(t_serializable_object_to_blob(tx));
    m_pprotocol->relay_block(arg, exclude_context);
  }
  return bvc.m_added_to_main_chain;
}
//-----------------------------------------------------------------------------------------------
void core::on_synchronized()
{
  m_miner.on_synchronized();
}
2014-06-27 22:06:21 +00:00
//bool core::get_backward_blocks_sizes(uint64_t from_height, std::vector<size_t>& sizes, size_t count)
//{
// return m_blockchain_storage.get_backward_blocks_sizes(from_height, sizes, count);
//}
2014-03-03 22:07:58 +00:00
//-----------------------------------------------------------------------------------------------
bool core::add_new_block(const block& b, block_verification_context& bvc)
{
  return m_blockchain_storage.add_new_block(b, bvc);
}
** CHANGES ARE EXPERIMENTAL (FOR TESTING ONLY)
Bockchain:
1. Optim: Multi-thread long-hash computation when encountering groups of blocks.
2. Optim: Cache verified txs and return result from cache instead of re-checking whenever possible.
3. Optim: Preload output-keys when encoutering groups of blocks. Sort by amount and global-index before bulk querying database and multi-thread when possible.
4. Optim: Disable double spend check on block verification, double spend is already detected when trying to add blocks.
5. Optim: Multi-thread signature computation whenever possible.
6. Patch: Disable locking (recursive mutex) on called functions from check_tx_inputs which causes slowdowns (only seems to happen on ubuntu/VMs??? Reason: TBD)
7. Optim: Removed looped full-tx hash computation when retrieving transactions from pool (???).
8. Optim: Cache difficulty/timestamps (735 blocks) for next-difficulty calculations so that only 2 db reads per new block is needed when a new block arrives (instead of 1470 reads).
Berkeley-DB:
1. Fix: 32-bit data errors causing wrong output global indices and failure to send blocks to peers (etc).
2. Fix: Unable to pop blocks on reorganize due to transaction errors.
3. Patch: Large number of transaction aborts when running multi-threaded bulk queries.
4. Patch: Insufficient locks error when running full sync.
5. Patch: Incorrect db stats when returning from an immediate exit from "pop block" operation.
6. Optim: Add bulk queries to get output global indices.
7. Optim: Modified output_keys table to store public_key+unlock_time+height for single transaction lookup (vs 3)
8. Optim: Used output_keys table retrieve public_keys instead of going through output_amounts->output_txs+output_indices->txs->output:public_key
9. Optim: Added thread-safe buffers used when multi-threading bulk queries.
10. Optim: Added support for nosync/write_nosync options for improved performance (*see --db-sync-mode option for details)
11. Mod: Added checkpoint thread and auto-remove-logs option.
12. *Now usable on 32-bit systems like RPI2.
LMDB:
1. Optim: Added custom comparison for 256-bit key tables (minor speed-up, TBD: get actual effect)
2. Optim: Modified output_keys table to store public_key+unlock_time+height for single transaction lookup (vs 3)
3. Optim: Used output_keys table retrieve public_keys instead of going through output_amounts->output_txs+output_indices->txs->output:public_key
4. Optim: Added support for sync/writemap options for improved performance (*see --db-sync-mode option for details)
5. Mod: Auto resize to +1GB instead of multiplier x1.5
// ETC:
// 1. Minor optimizations of slow-hash for ARM (RPI2). Incomplete.
// 2. Fix: 32-bit saturation bug when computing the next difficulty on large blocks.
// [PENDING ISSUES]
// 1. Berkeley DB has a very slow "pop block" operation. This is very noticeable on the RPI2, where it sometimes takes > 10 MINUTES to pop a block during reorganization.
//    This does not happen very often, however; most reorgs seem to take a few seconds, but it possibly depends on the number of outputs present. TBD.
// 2. Berkeley DB: possible "unable to allocate memory" bug. TBD.
// [NEW OPTIONS] (*Currently all enabled for testing purposes)
// 1. --fast-block-sync arg=[0:1] (default: 1)
//    a. 0 = Compute the long hash per block (may take a while depending on CPU).
//    b. 1 = Skip the long hash and verify blocks based on embedded known-good block hashes (faster, minimal CPU dependence).
// 2. --db-sync-mode arg=[[safe|fast|fastest]:[sync|async]:[nblocks_per_sync]] (default: fastest:async:1000); a parsing sketch follows this list.
//    a. safe = fdatasync/fsync (or equivalent) per stored block. Very slow, but the safest option to protect against power-out/crash conditions.
//    b. fast/fastest = Enables asynchronous fdatasync/fsync (or equivalent). Useful for battery-operated devices, or STABLE systems with a UPS and/or a battery-backed write cache/solid-state cache.
//       Fast    - Write meta-data but defer the data flush.
//       Fastest - Defer both the meta-data and data flush.
//       Sync    - Flush data after nblocks_per_sync and wait.
//       Async   - Flush data after nblocks_per_sync but do not wait for the operation to finish.
// 3. --prep-blocks-threads arg=[n] (default: 4 or the system max threads, whichever is lower)
//    Max number of threads to use when computing long hashes in groups.
// 4. --show-time-stats arg=[0:1] (default: 1)
//    Show benchmark-related time stats.
// 5. --db-auto-remove-logs arg=[0:1] (default: 1)
//    For Berkeley DB only. Auto-remove logs if enabled.
// **Note: lmdb and berkeley-db have changes to the tables and are not compatible with the official git head version.
//   At the moment, you need a full resync to use this optimized version.
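// Illustrative only: a rough sketch of how the colon-separated --db-sync-mode
// string above could be decomposed. This is not the daemon's actual parser;
// the struct, field names, and defaults are assumptions chosen to match the
// option description ([safe|fast|fastest]:[sync|async]:[nblocks_per_sync]).
#if 0
#include <cstdint>
#include <sstream>
#include <string>

struct db_sync_settings
{
  std::string durability = "fastest"; // safe | fast | fastest
  std::string wait_mode  = "async";   // sync | async
  uint64_t    nblocks_per_sync = 1000;
};

db_sync_settings parse_db_sync_mode(const std::string& arg)
{
  db_sync_settings s; // start from the documented defaults
  std::istringstream in(arg);
  std::string tok;
  if (std::getline(in, tok, ':') && !tok.empty()) s.durability = tok;
  if (std::getline(in, tok, ':') && !tok.empty()) s.wait_mode  = tok;
  if (std::getline(in, tok, ':') && !tok.empty()) s.nblocks_per_sync = std::stoull(tok);
  return s;
}
// parse_db_sync_mode("fastest:async:1000") -> { "fastest", "async", 1000 }
#endif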
// [PERFORMANCE COMPARISON]
// **Some figures are approximations only.
// Using a baseline machine of an i7-2600K + SSD (with full PoW computation):
// 1. The optimized lmdb/blockchain core can process blocks up to 585K in ~1.25 hours + download time, so it usually takes 2.5 hours to sync the full chain.
// 2. The current head with the in-memory db can process blocks up to 585K in ~4.2 hours + download time, so it usually takes 5.5 hours to sync the full chain.
// 3. The current head with lmdb can process blocks up to 585K in ~32 hours + download time, and usually takes 36 hours to sync the full chain.
// Average processing times (with full PoW computation):
// lmdb-optimized:
// 1. tx_ave    = 2.5 ms / tx
// 2. block_ave = 5.87 ms / block
// memory-official-repo:
// 1. tx_ave    = 8.85 ms / tx
// 2. block_ave = 19.68 ms / block
// lmdb-official-repo (0f4a036437fd41a5498ee5e74e2422ea6177aa3e):
// 1. tx_ave    = 47.8 ms / tx
// 2. block_ave = 64.2 ms / block
// **Note: The following data denotes processing times only (does not include p2p download time).
// lmdb-optimized processing times (with full PoW computation):
// 1. Desktop,  Quad-core / 8-threads 2600K (8 MB cache)    - 1.25 hours processing time (--db-sync-mode=fastest:async:1000).
// 2. Laptop,   Dual-core / 4-threads U4200 (3 MB cache)    - 4.90 hours processing time (--db-sync-mode=fastest:async:1000).
// 3. Embedded, Quad-core / 4-threads Z3735F (2x1 MB cache) - 12.0 hours processing time (--db-sync-mode=fastest:async:1000).
// lmdb-optimized processing times (with per-block checkpoint):
// 1. Desktop, Quad-core / 8-threads 2600K (8 MB cache) - 10 minutes processing time (--db-sync-mode=fastest:async:1000).
// berkeley-db optimized processing times (with full PoW computation):
// 1. Desktop, Quad-core / 8-threads 2600K (8 MB cache) - 1.8 hours processing time (--db-sync-mode=fastest:async:1000).
// 2. RPI2: improved from an estimated 3 months (???) to 2.5 days (*needs a 2A supply + 1GHz clock + [usb+ssd] to achieve this speed) (--db-sync-mode=fastest:async:1000).
// berkeley-db optimized processing times (with per-block checkpoint):
// 1. RPI2: 12-15 hours (*needs a 2A supply + 1GHz clock + [usb+ssd] to achieve this speed) (--db-sync-mode=fastest:async:1000).
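// Illustrative only: the worked sketch promised in Blockchain item 8 above.
// Next-difficulty needs the last 735 timestamps and cumulative difficulties
// (735 * 2 = 1470 db reads if fetched fresh each time). Keeping both in a
// rolling window means a new top block costs just 2 reads: its own timestamp
// and cumulative difficulty. The names below are assumptions, not the real cache.
#if 0
#include <cstdint>
#include <deque>

struct difficulty_cache
{
  static const size_t WINDOW = 735; // difficulty window + lag

  std::deque<uint64_t> timestamps;
  std::deque<uint64_t> cumulative_difficulties;

  // Called once per accepted block: 2 values read from the db, 1470 avoided.
  void push(uint64_t timestamp, uint64_t cumulative_difficulty)
  {
    timestamps.push_back(timestamp);
    cumulative_difficulties.push_back(cumulative_difficulty);
    if (timestamps.size() > WINDOW)
    {
      timestamps.pop_front();
      cumulative_difficulties.pop_front();
    }
  }
};
#endif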
//-----------------------------------------------------------------------------------------------
bool core::prepare_handle_incoming_blocks(const std::list<block_complete_entry>& blocks)
{
#if BLOCKCHAIN_DB == DB_LMDB
  m_blockchain_storage.prepare_handle_incoming_blocks(blocks);
#endif
  return true;
}
//-----------------------------------------------------------------------------------------------
bool core::cleanup_handle_incoming_blocks(bool force_sync)
{
#if BLOCKCHAIN_DB == DB_LMDB
  m_blockchain_storage.cleanup_handle_incoming_blocks(force_sync);
#endif
  return true;
}
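// Illustrative only: a minimal sketch (not part of the build) of how a caller
// such as the protocol handler is expected to drive the batch interface above --
// prepare_handle_incoming_blocks() once per group, handle_incoming_block() per
// block, then cleanup_handle_incoming_blocks(). The helper name and the core
// reference `c` are hypothetical stand-ins for the caller's state.
#if 0
void process_block_group(cryptonote::core& c, const std::list<cryptonote::block_complete_entry>& incoming)
{
  // Precompute long hashes / preload output keys for the whole group.
  c.prepare_handle_incoming_blocks(incoming);
  for (const auto& entry : incoming)
  {
    cryptonote::block_verification_context bvc = boost::value_initialized<cryptonote::block_verification_context>();
    // Don't refresh the miner template until the whole group is in.
    c.handle_incoming_block(entry.block, bvc, false);
    if (bvc.m_verifivation_failed)
      break; // reject the rest of the group
  }
  // Flush/sync according to --db-sync-mode; no forced sync here.
  c.cleanup_handle_incoming_blocks(false);
}
#endif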
//-----------------------------------------------------------------------------------------------
bool core::handle_incoming_block(const blobdata& block_blob, block_verification_context& bvc, bool update_miner_blocktemplate)
{
  // load json & DNS checkpoints every 10min/hour respectively,
  // and verify them with respect to what blocks we already have
  CHECK_AND_ASSERT_MES(update_checkpoints(), false, "One or more checkpoints loaded from json or dns conflicted with existing checkpoints.");

  bvc = boost::value_initialized<block_verification_context>();
  if(block_blob.size() > get_max_block_size())
  {
    LOG_PRINT_L1("WRONG BLOCK BLOB, too big size " << block_blob.size() << ", rejected");
    bvc.m_verifivation_failed = true;
    return false;
  }
  block b = AUTO_VAL_INIT(b);
  if(!parse_and_validate_block_from_blob(block_blob, b))
  {
    LOG_PRINT_L1("Failed to parse and validate new block");
    bvc.m_verifivation_failed = true;
    return false;
  }
  add_new_block(b, bvc);
  if(update_miner_blocktemplate && bvc.m_added_to_main_chain)
    update_miner_block_template();
  return true;
}
//-----------------------------------------------------------------------------------------------
// Used by the RPC server to check the size of an incoming block_blob
bool core::check_incoming_block_size(const blobdata& block_blob) const
{
  if(block_blob.size() > get_max_block_size())
  {
    LOG_PRINT_L1("WRONG BLOCK BLOB, too big size " << block_blob.size() << ", rejected");
    return false;
  }
  return true;
}
//-----------------------------------------------------------------------------------------------
crypto::hash core::get_tail_id() const
{
  return m_blockchain_storage.get_tail_id();
}
//-----------------------------------------------------------------------------------------------
size_t core::get_pool_transactions_count() const
{
  return m_mempool.get_transactions_count();
}
//-----------------------------------------------------------------------------------------------
bool core::have_block(const crypto::hash& id) const
{
  return m_blockchain_storage.have_block(id);
}
//-----------------------------------------------------------------------------------------------
bool core::parse_tx_from_blob(transaction& tx, crypto::hash& tx_hash, crypto::hash& tx_prefix_hash, const blobdata& blob) const
{
  return parse_and_validate_tx_from_blob(blob, tx, tx_hash, tx_prefix_hash);
}
//-----------------------------------------------------------------------------------------------
bool core::check_tx_syntax(const transaction& tx) const
{
  return true;
}
//-----------------------------------------------------------------------------------------------
bool core::get_pool_transactions(std::list<transaction>& txs) const
{
  m_mempool.get_transactions(txs);
  return true;
}
//-----------------------------------------------------------------------------------------------
bool core::get_pool_transactions_and_spent_keys_info(std::vector<tx_info>& tx_infos, std::vector<spent_key_image_info>& key_image_infos) const
{
  return m_mempool.get_transactions_and_spent_keys_info(tx_infos, key_image_infos);
}
//-----------------------------------------------------------------------------------------------
bool core::get_short_chain_history(std::list<crypto::hash>& ids) const
{
  return m_blockchain_storage.get_short_chain_history(ids);
}
//-----------------------------------------------------------------------------------------------
bool core::handle_get_objects(NOTIFY_REQUEST_GET_OBJECTS::request& arg, NOTIFY_RESPONSE_GET_OBJECTS::request& rsp, cryptonote_connection_context& context)
{
  return m_blockchain_storage.handle_get_objects(arg, rsp);
}
//-----------------------------------------------------------------------------------------------
crypto::hash core::get_block_id_by_height(uint64_t height) const
{
  return m_blockchain_storage.get_block_id_by_height(height);
}
//-----------------------------------------------------------------------------------------------
bool core::get_block_by_hash(const crypto::hash& h, block& blk) const
{
  return m_blockchain_storage.get_block_by_hash(h, blk);
}
//-----------------------------------------------------------------------------------------------
//void core::get_all_known_block_ids(std::list<crypto::hash> &main, std::list<crypto::hash> &alt, std::list<crypto::hash> &invalid) {
//  m_blockchain_storage.get_all_known_block_ids(main, alt, invalid);
//}
//-----------------------------------------------------------------------------------------------
std::string core::print_pool(bool short_format) const
{
  return m_mempool.print_pool(short_format);
}
//-----------------------------------------------------------------------------------------------
bool core::update_miner_block_template()
{
  m_miner.on_block_chain_update();
  return true;
}
//-----------------------------------------------------------------------------------------------
bool core::on_idle()
{
  if(!m_starter_message_showed)
  {
    LOG_PRINT_L0(ENDL << "**********************************************************************" << ENDL
      << "The daemon will start synchronizing with the network. It may take up to several hours." << ENDL
      << ENDL
      << "You can set the level of process detailization through the \"set_log <level>\" command, where <level> is between 0 (no details) and 4 (very verbose)." << ENDL
      << ENDL
      << "Use the \"help\" command to see the list of available commands." << ENDL
      << ENDL
      << "Note: in case you need to interrupt the process, use the \"exit\" command. Otherwise, the current progress won't be saved." << ENDL
      << "**********************************************************************");
    m_starter_message_showed = true;
  }

#if BLOCKCHAIN_DB == DB_LMDB
  // m_store_blockchain_interval.do_call(boost::bind(&Blockchain::store_blockchain, &m_blockchain_storage));
#else
  m_store_blockchain_interval.do_call(boost::bind(&blockchain_storage::store_blockchain, &m_blockchain_storage));
#endif

  m_fork_moaner.do_call(boost::bind(&core::check_fork_time, this));
  m_txpool_auto_relayer.do_call(boost::bind(&core::relay_txpool_transactions, this));
  m_miner.on_idle();
  m_mempool.on_idle();
  return true;
}
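// Illustrative only: a minimal sketch of the interval-call pattern used in
// on_idle() above -- each do_call() runs its functor at most once per period,
// so frequent on_idle() ticks stay cheap. This is a simplified stand-in for
// epee's math_helper interval helper, not its actual implementation.
#if 0
#include <chrono>
#include <functional>

class once_per_interval
{
  std::chrono::steady_clock::time_point m_last;
  std::chrono::seconds m_period;
public:
  explicit once_per_interval(std::chrono::seconds period)
    : m_last(std::chrono::steady_clock::now()), m_period(period) {}

  // Invoke f only if the period has elapsed since the last invocation.
  void do_call(const std::function<bool()>& f)
  {
    const auto now = std::chrono::steady_clock::now();
    if (now - m_last >= m_period)
    {
      m_last = now;
      f();
    }
  }
};
#endif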
//-----------------------------------------------------------------------------------------------
bool core::check_fork_time()
{
#if BLOCKCHAIN_DB == DB_LMDB
  HardFork::State state = m_blockchain_storage.get_hard_fork_state();
  switch (state) {
    case HardFork::LikelyForked:
      LOG_PRINT_L0(ENDL
        << "**********************************************************************" << ENDL
        << "Last scheduled hard fork is too far in the past." << ENDL
        << "We are most likely forked from the network. Daemon update needed now." << ENDL
        << "**********************************************************************" << ENDL);
      break;
    case HardFork::UpdateNeeded:
      LOG_PRINT_L0(ENDL
        << "**********************************************************************" << ENDL
        << "Last scheduled hard fork time shows a daemon update is needed now." << ENDL
        << "**********************************************************************" << ENDL);
      break;
    default:
      break;
  }
#endif
  return true;
}
//-----------------------------------------------------------------------------------------------
void core::set_target_blockchain_height(uint64_t target_blockchain_height)
{
  m_target_blockchain_height = target_blockchain_height;
}
//-----------------------------------------------------------------------------------------------
uint64_t core::get_target_blockchain_height() const
{
  return m_target_blockchain_height;
}
//-----------------------------------------------------------------------------------------------
void core::graceful_exit()
{
  raise(SIGTERM);
}

std::atomic<bool> core::m_fast_exit(false);
}