// Copyright (c) 2014-2016, The Monero Project
//
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without modification, are
// permitted provided that the following conditions are met:
//
// 1. Redistributions of source code must retain the above copyright notice, this list of
//    conditions and the following disclaimer.
//
// 2. Redistributions in binary form must reproduce the above copyright notice, this list
//    of conditions and the following disclaimer in the documentation and/or other
//    materials provided with the distribution.
//
// 3. Neither the name of the copyright holder nor the names of its contributors may be
//    used to endorse or promote products derived from this software without specific
//    prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
//
// Parts of this file are originally copyright (c) 2012-2013 The Cryptonote developers
#include "include_base_utils.h"
using namespace epee;

#include <boost/foreach.hpp>
#include <unordered_set>
#include "cryptonote_core.h"
#include "common/command_line.h"
#include "common/util.h"
#include "warnings.h"
#include "crypto/crypto.h"
#include "cryptonote_config.h"
#include "cryptonote_format_utils.h"
#include "misc_language.h"
#include <csignal>
#include "cryptonote_core/checkpoints.h"
#include "ringct/rctTypes.h"
#include "blockchain_db/blockchain_db.h"
#include "blockchain_db/lmdb/db_lmdb.h"
// ** CHANGES ARE EXPERIMENTAL (FOR TESTING ONLY) **
//
// Blockchain:
// 1. Optim: Multi-thread long-hash computation when encountering groups of blocks.
// 2. Optim: Cache verified txs and return the cached result instead of re-checking whenever possible.
// 3. Optim: Preload output keys when encountering groups of blocks. Sort by amount and global index
//    before bulk-querying the database, and multi-thread when possible.
// 4. Optim: Disable the double-spend check during block verification; double spends are already
//    detected when trying to add blocks.
// 5. Optim: Multi-thread signature computation whenever possible.
// 6. Patch: Disable locking (recursive mutex) on functions called from check_tx_inputs, which causes
//    slowdowns (only seems to happen on Ubuntu/VMs; reason TBD).
// 7. Optim: Removed looped full-tx hash computation when retrieving transactions from the pool
//    (exact benefit still uncertain).
// 8. Optim: Cache difficulty/timestamps (735 blocks) for next-difficulty calculations so that only
//    2 db reads are needed per new block (instead of 1470).
//
// Berkeley-DB:
// 1. Fix: 32-bit data errors causing wrong output global indices and failure to send blocks to peers.
// 2. Fix: Unable to pop blocks on reorganize due to transaction errors.
// 3. Patch: Large number of transaction aborts when running multi-threaded bulk queries.
// 4. Patch: "Insufficient locks" error when running a full sync.
// 5. Patch: Incorrect db stats when returning from an immediate exit of the "pop block" operation.
// 6. Optim: Added bulk queries to fetch output global indices.
// 7. Optim: Modified the output_keys table to store public_key+unlock_time+height, so a single
//    lookup per transaction suffices (instead of 3).
// 8. Optim: Use the output_keys table to retrieve public keys instead of going through
//    output_amounts -> output_txs+output_indices -> txs -> output:public_key.
// 9. Optim: Added thread-safe buffers used when multi-threading bulk queries.
// 10. Optim: Added support for nosync/write_nosync options for improved performance
//     (see the --db-sync-mode option for details).
// 11. Mod: Added a checkpoint thread and an auto-remove-logs option.
// 12. Now usable on 32-bit systems like the RPi 2.
//
// LMDB:
// 1. Optim: Added custom comparison for 256-bit key tables (minor speed-up; actual effect TBD).
// 2. Optim: Modified the output_keys table to store public_key+unlock_time+height, so a single
//    lookup per transaction suffices (instead of 3).
// 3. Optim: Use the output_keys table to retrieve public keys instead of going through
//    output_amounts -> output_txs+output_indices -> txs -> output:public_key.
// 4. Optim: Added support for sync/writemap options for improved performance
//    (see the --db-sync-mode option for details).
// 5. Mod: Auto-resize the map by +1GB increments instead of by a 1.5x multiplier.
//
// ETC:
// 1. Minor optimizations of slow-hash for ARM (RPi 2). Incomplete.
// 2. Fix: 32-bit saturation bug when computing the next difficulty on large blocks.
//
// [PENDING ISSUES]
// 1. Berkeley db has a very slow "pop block" operation. This is very noticeable on the RPi 2, where
//    it sometimes takes more than 10 minutes to pop a block during reorganization. This does not
//    happen very often; most reorgs seem to take a few seconds, but it possibly depends on the
//    number of outputs present. TBD.
// 2. Berkeley db: possible "unable to allocate memory" bug. TBD.
//
// [NEW OPTIONS] (currently all enabled for testing purposes)
// 1. --fast-block-sync arg=[0:1] (default: 1)
//    0 = Compute the long hash per block (may take a while depending on CPU).
//    1 = Skip the long hash and verify blocks based on embedded known-good block hashes
//        (faster, minimal CPU dependence).
// 2. --db-sync-mode arg=[[safe|fast|fastest]:[sync|async]:[nblocks_per_sync]] (default: fastest:async:1000)
//    safe         = fdatasync/fsync (or equivalent) per stored block. Very slow, but the safest
//                   option to protect against power-out/crash conditions.
//    fast/fastest = Enable asynchronous fdatasync/fsync (or equivalent). Useful for battery-operated
//                   devices or stable systems with a UPS, and/or systems with a battery-backed
//                   write cache or solid-state cache.
//    fast         = Write metadata but defer the data flush.
//    fastest      = Defer both the metadata and data flush.
//    sync         = Flush data after nblocks_per_sync and wait.
//    async        = Flush data after nblocks_per_sync but do not wait for the operation to finish.
// 3. --prep-blocks-threads arg=[n] (default: 4 or the system max threads, whichever is lower)
//    Maximum number of threads to use when computing long hashes in groups.
// 4. --show-time-stats arg=[0:1] (default: 1)
//    Show benchmark-related time stats.
// 5. --db-auto-remove-logs arg=[0:1] (default: 1)
//    For berkeley-db only. Auto-remove logs if enabled.
//
// Note: lmdb and berkeley-db have changes to the tables and are not compatible with the official
// git HEAD version. At the moment, you need a full resync to use this optimized version.
//
// [PERFORMANCE COMPARISON]
// Some figures are approximations only.
// Using a baseline machine of an i7-2600K + SSD (with full PoW computation):
// 1. The optimized lmdb/blockchain core can process blocks up to 585K in ~1.25 hours plus download
//    time, so it usually takes about 2.5 hours to sync the full chain.
// 2. The current HEAD with the in-memory db can process blocks up to 585K in ~4.2 hours plus
//    download time, so it usually takes about 5.5 hours to sync the full chain.
// 3. The current HEAD with lmdb can process blocks up to 585K in ~32 hours plus download time, and
//    usually takes about 36 hours to sync the full chain.
//
// Average processing times (with full PoW computation):
// lmdb-optimized:
//   tx_ave    = 2.5 ms / tx
//   block_ave = 5.87 ms / block
// memory-official-repo:
//   tx_ave    = 8.85 ms / tx
//   block_ave = 19.68 ms / block
// lmdb-official-repo (0f4a036437fd41a5498ee5e74e2422ea6177aa3e):
//   tx_ave    = 47.8 ms / tx
//   block_ave = 64.2 ms / block
//
// Note: the following data denotes processing times only (it does not include p2p download time).
// lmdb-optimized processing times (with full PoW computation):
// 1. Desktop, quad-core / 8 threads, 2600K (8MB cache): 1.25 hours (--db-sync-mode=fastest:async:1000).
// 2. Laptop, dual-core / 4 threads, U4200 (3MB cache): 4.90 hours (--db-sync-mode=fastest:async:1000).
// 3. Embedded, quad-core / 4 threads, Z3735F (2x1MB cache): 12.0 hours (--db-sync-mode=fastest:async:1000).
// lmdb-optimized processing times (with per-block checkpoint):
// 1. Desktop, quad-core / 8 threads, 2600K (8MB cache): 10 minutes (--db-sync-mode=fastest:async:1000).
// berkeley-db optimized processing times (with full PoW computation):
// 1. Desktop, quad-core / 8 threads, 2600K (8MB cache): 1.8 hours (--db-sync-mode=fastest:async:1000).
// 2. RPi 2: improved from a rough estimate of 3 months down to 2.5 days (needs a 2A supply, a 1GHz
//    clock, and USB-attached SSD storage to achieve this speed) (--db-sync-mode=fastest:async:1000).
// berkeley-db optimized processing times (with per-block checkpoint):
// 1. RPi 2: 12-15 hours (needs a 2A supply, a 1GHz clock, and USB-attached SSD storage)
//    (--db-sync-mode=fastest:async:1000).
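//
// To make the --db-sync-mode triplet above concrete, here is a minimal,
// hypothetical sketch (kept out of the build with #if 0) of how a string such
// as "fastest:async:1000" decomposes. The real parsing lives in core::init()
// below and splits on spaces and colons in the same way; the struct and
// function names here are illustrative only.
#if 0
#include <cstdint>
#include <string>
#include <vector>
#include <boost/algorithm/string.hpp>

struct db_sync_options
{
  std::string durability;    // "safe", "fast" or "fastest"
  std::string flush_style;   // "sync" or "async"
  uint64_t blocks_per_sync;  // flush every N stored blocks
};

static db_sync_options parse_db_sync_mode(std::string mode)
{
  db_sync_options out{"fast", "async", 1}; // defaults, mirroring DEFAULT_FLAGS below
  boost::trim(mode);
  std::vector<std::string> tokens;
  boost::split(tokens, mode, boost::is_any_of(" :"));
  if (tokens.size() >= 1) out.durability = tokens[0];
  if (tokens.size() >= 2) out.flush_style = tokens[1];
  if (tokens.size() >= 3) out.blocks_per_sync = std::stoull(tokens[2]);
  return out;
}
// e.g. parse_db_sync_mode("fastest:async:1000") yields {"fastest", "async", 1000}.
#endif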
#if defined(BERKELEY_DB)
#include "blockchain_db/berkeleydb/db_bdb.h"
#endif

#include "ringct/rctSigs.h"

DISABLE_VS_WARNINGS(4355)

namespace cryptonote
{
  //-----------------------------------------------------------------------------------------------
  core::core(i_cryptonote_protocol* pprotocol):
              m_mempool(m_blockchain_storage),
              m_blockchain_storage(m_mempool),
              m_miner(this),
              m_miner_address(boost::value_initialized<account_public_address>()),
              m_starter_message_showed(false),
              m_target_blockchain_height(0),
              m_checkpoints_path(""),
              m_last_dns_checkpoints_update(0),
              m_last_json_checkpoints_update(0)
  {
    set_cryptonote_protocol(pprotocol);
  }
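  // Falls back to the internal protocol stub when given a null pointer, so calls
  // through m_pprotocol never need a null check.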
  void core::set_cryptonote_protocol(i_cryptonote_protocol* pprotocol)
  {
    if (pprotocol)
      m_pprotocol = pprotocol;
    else
      m_pprotocol = &m_protocol_stub;
  }
  //-----------------------------------------------------------------------------------
  void core::set_checkpoints(checkpoints&& chk_pts)
  {
    m_blockchain_storage.set_checkpoints(std::move(chk_pts));
  }
  //-----------------------------------------------------------------------------------
  void core::set_checkpoints_file_path(const std::string& path)
  {
    m_checkpoints_path = path;
  }
  //-----------------------------------------------------------------------------------
  void core::set_enforce_dns_checkpoints(bool enforce_dns)
  {
    m_blockchain_storage.set_enforce_dns_checkpoints(enforce_dns);
  }
  //-----------------------------------------------------------------------------------------------
  bool core::update_checkpoints()
  {
    if (m_testnet || m_fakechain) return true;

    if (m_checkpoints_updating.test_and_set()) return true;

    bool res = true;
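    // The test_and_set() above is a non-blocking guard: concurrent callers bail
    // out early instead of waiting. DNS checkpoints are refreshed at most once
    // per hour, the JSON checkpoint file at most every ten minutes, and a DNS
    // refresh resets both timers.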
    if (time(NULL) - m_last_dns_checkpoints_update >= 3600)
    {
      res = m_blockchain_storage.update_checkpoints(m_checkpoints_path, true);
      m_last_dns_checkpoints_update = time(NULL);
      m_last_json_checkpoints_update = time(NULL);
    }
    else if (time(NULL) - m_last_json_checkpoints_update >= 600)
    {
      res = m_blockchain_storage.update_checkpoints(m_checkpoints_path, false);
      m_last_json_checkpoints_update = time(NULL);
    }

    m_checkpoints_updating.clear();

    // if anything fishy happened getting new checkpoints, bring down the house
    if (!res)
    {
      graceful_exit();
    }
    return res;
  }
  //-----------------------------------------------------------------------------------
  void core::stop()
  {
    m_blockchain_storage.cancel();
  }
  //-----------------------------------------------------------------------------------
  void core::init_options(boost::program_options::options_description& desc)
  {
    command_line::add_arg(desc, command_line::arg_data_dir, tools::get_default_data_dir());
    command_line::add_arg(desc, command_line::arg_testnet_data_dir, (boost::filesystem::path(tools::get_default_data_dir()) / "testnet").string());
    command_line::add_arg(desc, command_line::arg_test_drop_download);
    command_line::add_arg(desc, command_line::arg_test_drop_download_height);
    command_line::add_arg(desc, command_line::arg_testnet_on);
    command_line::add_arg(desc, command_line::arg_dns_checkpoints);
    command_line::add_arg(desc, command_line::arg_db_type);
    command_line::add_arg(desc, command_line::arg_prep_blocks_threads);
    command_line::add_arg(desc, command_line::arg_fast_block_sync);
    command_line::add_arg(desc, command_line::arg_db_sync_mode);
    command_line::add_arg(desc, command_line::arg_show_time_stats);
    command_line::add_arg(desc, command_line::arg_block_sync_size);
  }
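  // A hypothetical caller-side sketch (kept out of the build with #if 0) of how
  // these options might be wired into a boost::program_options parse; argc,
  // argv and protocol are assumptions, not names from this file.
#if 0
  namespace po = boost::program_options;
  po::options_description desc("core options");
  cryptonote::core::init_options(desc);
  po::variables_map vm;
  po::store(po::parse_command_line(argc, argv, desc), vm);
  po::notify(vm);
  cryptonote::core c(&protocol); // protocol: some i_cryptonote_protocol implementation
  c.init(vm, nullptr);           // nullptr: no test options, i.e. not a fake chain
#endif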
//-----------------------------------------------------------------------------------------------
  bool core::handle_command_line(const boost::program_options::variables_map& vm)
  {
    m_testnet = command_line::get_arg(vm, command_line::arg_testnet_on);

    auto data_dir_arg = m_testnet ? command_line::arg_testnet_data_dir : command_line::arg_data_dir;
    m_config_folder = command_line::get_arg(vm, data_dir_arg);

    auto data_dir = boost::filesystem::path(m_config_folder);

    if (!m_testnet && !m_fakechain)
    {
      cryptonote::checkpoints checkpoints;
      if (!checkpoints.init_default_checkpoints())
      {
        throw std::runtime_error("Failed to initialize checkpoints");
      }
      set_checkpoints(std::move(checkpoints));

      boost::filesystem::path json(JSON_HASH_FILE_NAME);
      boost::filesystem::path checkpoint_json_hashfile_fullpath = data_dir / json;

      set_checkpoints_file_path(checkpoint_json_hashfile_fullpath.string());
    }

    set_enforce_dns_checkpoints(command_line::get_arg(vm, command_line::arg_dns_checkpoints));
    test_drop_download_height(command_line::get_arg(vm, command_line::arg_test_drop_download_height));

    if (command_line::get_arg(vm, command_line::arg_test_drop_download) == true)
      test_drop_download();

    return true;
  }
  //-----------------------------------------------------------------------------------------------
  uint64_t core::get_current_blockchain_height() const
  {
    return m_blockchain_storage.get_current_blockchain_height();
  }
  //-----------------------------------------------------------------------------------------------
  bool core::get_blockchain_top(uint64_t& height, crypto::hash& top_id) const
  {
    top_id = m_blockchain_storage.get_tail_id(height);
    return true;
  }
  //-----------------------------------------------------------------------------------------------
  bool core::get_blocks(uint64_t start_offset, size_t count, std::list<block>& blocks, std::list<transaction>& txs) const
  {
    return m_blockchain_storage.get_blocks(start_offset, count, blocks, txs);
  }
  //-----------------------------------------------------------------------------------------------
  bool core::get_blocks(uint64_t start_offset, size_t count, std::list<block>& blocks) const
  {
    return m_blockchain_storage.get_blocks(start_offset, count, blocks);
  }
  //-----------------------------------------------------------------------------------------------
  bool core::get_transactions(const std::vector<crypto::hash>& txs_ids, std::list<transaction>& txs, std::list<crypto::hash>& missed_txs) const
  {
    return m_blockchain_storage.get_transactions(txs_ids, txs, missed_txs);
  }
  //-----------------------------------------------------------------------------------------------
  bool core::get_alternative_blocks(std::list<block>& blocks) const
  {
    return m_blockchain_storage.get_alternative_blocks(blocks);
  }
  //-----------------------------------------------------------------------------------------------
  size_t core::get_alternative_blocks_count() const
  {
    return m_blockchain_storage.get_alternative_blocks_count();
  }
  //-----------------------------------------------------------------------------------------------
  bool core::lock_db_directory(const boost::filesystem::path &path)
  {
    // boost doesn't like locking directories...
    const boost::filesystem::path lock_path = path / ".daemon_lock";
    try
    {
      // ensure the file exists
      std::ofstream(lock_path.string(), std::ios::out).close();

      db_lock = boost::interprocess::file_lock(lock_path.string().c_str());
      LOG_PRINT_L1("Locking " << lock_path.string());
      if (!db_lock.try_lock())
      {
        LOG_ERROR("Failed to lock " << lock_path.string());
        return false;
      }
      return true;
    }
    catch (const std::exception &e)
    {
      LOG_ERROR("Error trying to lock " << lock_path.string() << ": " << e.what());
      return false;
    }
  }
  //-----------------------------------------------------------------------------------------------
  bool core::unlock_db_directory()
  {
    db_lock.unlock();
    db_lock = boost::interprocess::file_lock();
    LOG_PRINT_L1("Blockchain directory successfully unlocked");
    return true;
  }
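  // Note: boost::interprocess::file_lock is advisory and can only lock an
  // existing regular file, which is why lock_db_directory() creates
  // ".daemon_lock" before locking it; the OS releases the lock automatically
  // if the process dies.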
//-----------------------------------------------------------------------------------------------
  bool core::init(const boost::program_options::variables_map& vm, const cryptonote::test_options *test_options)
  {
    start_time = std::time(nullptr);

    m_fakechain = test_options != NULL;
    bool r = handle_command_line(vm);

    r = m_mempool.init(m_fakechain ? std::string() : m_config_folder);
    CHECK_AND_ASSERT_MES(r, false, "Failed to initialize memory pool");

    std::string db_type = command_line::get_arg(vm, command_line::arg_db_type);
    std::string db_sync_mode = command_line::get_arg(vm, command_line::arg_db_sync_mode);
    bool fast_sync = command_line::get_arg(vm, command_line::arg_fast_block_sync) != 0;
    uint64_t blocks_threads = command_line::get_arg(vm, command_line::arg_prep_blocks_threads);

    boost::filesystem::path folder(m_config_folder);
    if (m_fakechain)
      folder /= "fake";

    // make sure the data directory exists, and try to lock it
    CHECK_AND_ASSERT_MES(boost::filesystem::exists(folder) || boost::filesystem::create_directories(folder), false,
      std::string("Failed to create directory ").append(folder.string()).c_str());
    if (!lock_db_directory(folder))
    {
      LOG_ERROR("Failed to lock " << folder);
      return false;
    }

    // check for blockchain.bin
    try
    {
      const boost::filesystem::path old_files = folder;
      if (boost::filesystem::exists(old_files / "blockchain.bin"))
      {
        LOG_PRINT_RED_L0("Found old-style blockchain.bin in " << old_files.string());
        LOG_PRINT_RED_L0("Monero now uses a new format. You can either remove blockchain.bin to start syncing");
        LOG_PRINT_RED_L0("the blockchain anew, or use monero-blockchain-export and monero-blockchain-import to");
        LOG_PRINT_RED_L0("convert your existing blockchain.bin to the new format. See README.md for instructions.");
        return false;
      }
    }
    // folder might not be a directory, etc, etc
    catch (...) { }
    BlockchainDB* db = nullptr;
    uint64_t DBS_FAST_MODE = 0;
    uint64_t DBS_FASTEST_MODE = 0;
    uint64_t DBS_SAFE_MODE = 0;
    if (db_type == "lmdb")
    {
      db = new BlockchainLMDB();
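      // For reference: MDB_NORDAHEAD turns off OS readahead, MDB_NOSYNC skips
      // the fsync on commit, and MDB_WRITEMAP with MDB_MAPASYNC uses a writable
      // memory map with asynchronous flushes; each step trades crash durability
      // for write throughput.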
      DBS_SAFE_MODE = MDB_NORDAHEAD;
      DBS_FAST_MODE = MDB_NORDAHEAD | MDB_NOSYNC;
      DBS_FASTEST_MODE = MDB_NORDAHEAD | MDB_NOSYNC | MDB_WRITEMAP | MDB_MAPASYNC;
    }
    else
    {
      LOG_ERROR("Attempted to use non-existent database type");
      return false;
    }

    folder /= db->get_db_name();
    LOG_PRINT_L0("Loading blockchain from folder " << folder.string() << " ...");

    const std::string filename = folder.string();
    // default to fast:async:1
    blockchain_db_sync_mode sync_mode = db_async;
    uint64_t blocks_per_sync = 1;

    try
    {
      uint64_t db_flags = 0;

      std::vector<std::string> options;
      boost::trim(db_sync_mode);
      boost::split(options, db_sync_mode, boost::is_any_of(" :"));

      for (const auto &option : options)
        LOG_PRINT_L0("option: " << option);

      // default to fast:async:1
      uint64_t DEFAULT_FLAGS = DBS_FAST_MODE;
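      // The triplet is consumed positionally: options[0] selects the durability
      // mode (safe/fast/fastest), options[1] sync vs async, and options[2] the
      // number of blocks per sync; splitting on " :" accepts either separator.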
** CHANGES ARE EXPERIMENTAL (FOR TESTING ONLY)
Bockchain:
1. Optim: Multi-thread long-hash computation when encountering groups of blocks.
2. Optim: Cache verified txs and return result from cache instead of re-checking whenever possible.
3. Optim: Preload output-keys when encoutering groups of blocks. Sort by amount and global-index before bulk querying database and multi-thread when possible.
4. Optim: Disable double spend check on block verification, double spend is already detected when trying to add blocks.
5. Optim: Multi-thread signature computation whenever possible.
6. Patch: Disable locking (recursive mutex) on called functions from check_tx_inputs which causes slowdowns (only seems to happen on ubuntu/VMs??? Reason: TBD)
7. Optim: Removed looped full-tx hash computation when retrieving transactions from pool (???).
8. Optim: Cache difficulty/timestamps (735 blocks) for next-difficulty calculations so that only 2 db reads per new block is needed when a new block arrives (instead of 1470 reads).
Berkeley-DB:
1. Fix: 32-bit data errors causing wrong output global indices and failure to send blocks to peers (etc).
2. Fix: Unable to pop blocks on reorganize due to transaction errors.
3. Patch: Large number of transaction aborts when running multi-threaded bulk queries.
4. Patch: Insufficient locks error when running full sync.
5. Patch: Incorrect db stats when returning from an immediate exit from "pop block" operation.
6. Optim: Add bulk queries to get output global indices.
7. Optim: Modified output_keys table to store public_key+unlock_time+height for single transaction lookup (vs 3)
8. Optim: Used output_keys table retrieve public_keys instead of going through output_amounts->output_txs+output_indices->txs->output:public_key
9. Optim: Added thread-safe buffers used when multi-threading bulk queries.
10. Optim: Added support for nosync/write_nosync options for improved performance (*see --db-sync-mode option for details)
11. Mod: Added checkpoint thread and auto-remove-logs option.
12. *Now usable on 32-bit systems like RPI2.
LMDB:
1. Optim: Added custom comparison for 256-bit key tables (minor speed-up, TBD: get actual effect)
2. Optim: Modified output_keys table to store public_key+unlock_time+height for single transaction lookup (vs 3)
3. Optim: Used output_keys table retrieve public_keys instead of going through output_amounts->output_txs+output_indices->txs->output:public_key
4. Optim: Added support for sync/writemap options for improved performance (*see --db-sync-mode option for details)
5. Mod: Auto resize to +1GB instead of multiplier x1.5
ETC:
1. Minor optimizations for slow-hash for ARM (RPI2). Incomplete.
2. Fix: 32-bit saturation bug when computing next difficulty on large blocks.
[PENDING ISSUES]
1. Berkely db has a very slow "pop-block" operation. This is very noticeable on the RPI2 as it sometimes takes > 10 MINUTES to pop a block during reorganization.
This does not happen very often however, most reorgs seem to take a few seconds but it possibly depends on the number of outputs present. TBD.
2. Berkeley db, possible bug "unable to allocate memory". TBD.
[NEW OPTIONS] (*Currently all enabled for testing purposes)
1. --fast-block-sync arg=[0:1] (default: 1)
a. 0 = Compute long hash per block (may take a while depending on CPU)
b. 1 = Skip long-hash and verify blocks based on embedded known good block hashes (faster, minimal CPU dependence)
2. --db-sync-mode arg=[[safe|fast|fastest]:[sync|async]:[nblocks_per_sync]] (default: fastest:async:1000)
a. safe = fdatasync/fsync (or equivalent) per stored block. Very slow, but safest option to protect against power-out/crash conditions.
b. fast/fastest = Enables asynchronous fdatasync/fsync (or equivalent). Useful for battery operated devices or STABLE systems with UPS and/or systems with battery backed write cache/solid state cache.
Fast - Write meta-data but defer data flush.
Fastest - Defer meta-data and data flush.
Sync - Flush data after nblocks_per_sync and wait.
Async - Flush data after nblocks_per_sync but do not wait for the operation to finish.
3. --prep-blocks-threads arg=[n] (default: 4 or system max threads, whichever is lower)
Max number of threads to use when computing long-hash in groups.
4. --show-time-stats arg=[0:1] (default: 1)
Show benchmark related time stats.
5. --db-auto-remove-logs arg=[0:1] (default: 1)
For berkeley-db only. Auto remove logs if enabled.
**Note: lmdb and berkeley-db have changes to the tables and are not compatible with official git head version.
At the moment, you need a full resync to use this optimized version.
[PERFORMANCE COMPARISON]
**Some figures are approximations only.
Using a baseline machine of an i7-2600K+SSD+(with full pow computation):
1. The optimized lmdb/blockhain core can process blocks up to 585K for ~1.25 hours + download time, so it usually takes 2.5 hours to sync the full chain.
2. The current head with memory can process blocks up to 585K for ~4.2 hours + download time, so it usually takes 5.5 hours to sync the full chain.
3. The current head with lmdb can process blocks up to 585K for ~32 hours + download time and usually takes 36 hours to sync the full chain.
Averate procesing times (with full pow computation):
lmdb-optimized:
1. tx_ave = 2.5 ms / tx
2. block_ave = 5.87 ms / block
memory-official-repo:
1. tx_ave = 8.85 ms / tx
2. block_ave = 19.68 ms / block
lmdb-official-repo (0f4a036437fd41a5498ee5e74e2422ea6177aa3e)
1. tx_ave = 47.8 ms / tx
2. block_ave = 64.2 ms / block
**Note: The following data denotes processing times only (does not include p2p download time)
lmdb-optimized processing times (with full pow computation):
1. Desktop, Quad-core / 8-threads 2600k (8Mb) - 1.25 hours processing time (--db-sync-mode=fastest:async:1000).
2. Laptop, Dual-core / 4-threads U4200 (3Mb) - 4.90 hours processing time (--db-sync-mode=fastest:async:1000).
3. Embedded, Quad-core / 4-threads Z3735F (2x1Mb) - 12.0 hours processing time (--db-sync-mode=fastest:async:1000).
lmdb-optimized processing times (with per-block-checkpoint)
1. Desktop, Quad-core / 8-threads 2600k (8Mb) - 10 minutes processing time (--db-sync-mode=fastest:async:1000).
berkeley-db optimized processing times (with full pow computation)
1. Desktop, Quad-core / 8-threads 2600k (8Mb) - 1.8 hours processing time (--db-sync-mode=fastest:async:1000).
2. RPI2. Improved from estimated 3 months(???) into 2.5 days (*Need 2AMP supply + Clock:1Ghz + [usb+ssd] to achieve this speed) (--db-sync-mode=fastest:async:1000).
berkeley-db optimized processing times (with per-block-checkpoint)
1. RPI2. 12-15 hours (*Need 2AMP supply + Clock:1Ghz + [usb+ssd] to achieve this speed) (--db-sync-mode=fastest:async:1000).
2015-07-10 20:09:32 +00:00
if ( options . size ( ) = = 0 )
{
2017-01-01 03:38:55 +00:00
// default to fast:async:1
** CHANGES ARE EXPERIMENTAL (FOR TESTING ONLY)
Bockchain:
1. Optim: Multi-thread long-hash computation when encountering groups of blocks.
2. Optim: Cache verified txs and return result from cache instead of re-checking whenever possible.
3. Optim: Preload output-keys when encoutering groups of blocks. Sort by amount and global-index before bulk querying database and multi-thread when possible.
4. Optim: Disable double spend check on block verification, double spend is already detected when trying to add blocks.
5. Optim: Multi-thread signature computation whenever possible.
6. Patch: Disable locking (recursive mutex) on called functions from check_tx_inputs which causes slowdowns (only seems to happen on ubuntu/VMs??? Reason: TBD)
7. Optim: Removed looped full-tx hash computation when retrieving transactions from pool (???).
8. Optim: Cache difficulty/timestamps (735 blocks) for next-difficulty calculations so that only 2 db reads per new block is needed when a new block arrives (instead of 1470 reads).
Berkeley-DB:
1. Fix: 32-bit data errors causing wrong output global indices and failure to send blocks to peers (etc).
2. Fix: Unable to pop blocks on reorganize due to transaction errors.
3. Patch: Large number of transaction aborts when running multi-threaded bulk queries.
4. Patch: Insufficient locks error when running full sync.
5. Patch: Incorrect db stats when returning from an immediate exit from "pop block" operation.
6. Optim: Add bulk queries to get output global indices.
7. Optim: Modified output_keys table to store public_key+unlock_time+height for single transaction lookup (vs 3)
8. Optim: Used output_keys table retrieve public_keys instead of going through output_amounts->output_txs+output_indices->txs->output:public_key
9. Optim: Added thread-safe buffers used when multi-threading bulk queries.
10. Optim: Added support for nosync/write_nosync options for improved performance (*see --db-sync-mode option for details)
11. Mod: Added checkpoint thread and auto-remove-logs option.
12. *Now usable on 32-bit systems like RPI2.
LMDB:
1. Optim: Added custom comparison for 256-bit key tables (minor speed-up, TBD: get actual effect)
2. Optim: Modified output_keys table to store public_key+unlock_time+height for single transaction lookup (vs 3)
3. Optim: Used output_keys table retrieve public_keys instead of going through output_amounts->output_txs+output_indices->txs->output:public_key
4. Optim: Added support for sync/writemap options for improved performance (*see --db-sync-mode option for details)
5. Mod: Auto resize to +1GB instead of multiplier x1.5
ETC:
1. Minor optimizations for slow-hash for ARM (RPI2). Incomplete.
2. Fix: 32-bit saturation bug when computing next difficulty on large blocks.
[PENDING ISSUES]
1. Berkely db has a very slow "pop-block" operation. This is very noticeable on the RPI2 as it sometimes takes > 10 MINUTES to pop a block during reorganization.
This does not happen very often however, most reorgs seem to take a few seconds but it possibly depends on the number of outputs present. TBD.
2. Berkeley db, possible bug "unable to allocate memory". TBD.
[NEW OPTIONS] (*Currently all enabled for testing purposes)
1. --fast-block-sync arg=[0:1] (default: 1)
a. 0 = Compute long hash per block (may take a while depending on CPU)
b. 1 = Skip long-hash and verify blocks based on embedded known good block hashes (faster, minimal CPU dependence)
2. --db-sync-mode arg=[[safe|fast|fastest]:[sync|async]:[nblocks_per_sync]] (default: fastest:async:1000)
a. safe = fdatasync/fsync (or equivalent) per stored block. Very slow, but safest option to protect against power-out/crash conditions.
b. fast/fastest = Enables asynchronous fdatasync/fsync (or equivalent). Useful for battery operated devices or STABLE systems with UPS and/or systems with battery backed write cache/solid state cache.
Fast - Write meta-data but defer data flush.
Fastest - Defer meta-data and data flush.
Sync - Flush data after nblocks_per_sync and wait.
Async - Flush data after nblocks_per_sync but do not wait for the operation to finish.
3. --prep-blocks-threads arg=[n] (default: 4 or system max threads, whichever is lower)
Max number of threads to use when computing long-hash in groups.
4. --show-time-stats arg=[0:1] (default: 1)
Show benchmark related time stats.
5. --db-auto-remove-logs arg=[0:1] (default: 1)
For berkeley-db only. Auto remove logs if enabled.
**Note: lmdb and berkeley-db have changes to the tables and are not compatible with official git head version.
At the moment, you need a full resync to use this optimized version.
[PERFORMANCE COMPARISON]
**Some figures are approximations only.
Using a baseline machine of an i7-2600K+SSD+(with full pow computation):
1. The optimized lmdb/blockhain core can process blocks up to 585K for ~1.25 hours + download time, so it usually takes 2.5 hours to sync the full chain.
2. The current head with memory can process blocks up to 585K for ~4.2 hours + download time, so it usually takes 5.5 hours to sync the full chain.
3. The current head with lmdb can process blocks up to 585K for ~32 hours + download time and usually takes 36 hours to sync the full chain.
Averate procesing times (with full pow computation):
lmdb-optimized:
1. tx_ave = 2.5 ms / tx
2. block_ave = 5.87 ms / block
memory-official-repo:
1. tx_ave = 8.85 ms / tx
2. block_ave = 19.68 ms / block
lmdb-official-repo (0f4a036437fd41a5498ee5e74e2422ea6177aa3e)
1. tx_ave = 47.8 ms / tx
2. block_ave = 64.2 ms / block
**Note: The following data denotes processing times only (does not include p2p download time).
lmdb-optimized processing times (with full PoW computation):
1. Desktop, quad-core / 8-thread 2600K (8 MB cache) - 1.25 hours processing time (--db-sync-mode=fastest:async:1000).
2. Laptop, dual-core / 4-thread U4200 (3 MB cache) - 4.90 hours processing time (--db-sync-mode=fastest:async:1000).
3. Embedded, quad-core / 4-thread Z3735F (2x1 MB cache) - 12.0 hours processing time (--db-sync-mode=fastest:async:1000).
lmdb-optimized processing times (with per-block checkpoints):
1. Desktop, quad-core / 8-thread 2600K (8 MB cache) - 10 minutes processing time (--db-sync-mode=fastest:async:1000).
berkeley-db optimized processing times (with full PoW computation):
1. Desktop, quad-core / 8-thread 2600K (8 MB cache) - 1.8 hours processing time (--db-sync-mode=fastest:async:1000).
2. RPI2: improved from an estimated 3 months (???) to 2.5 days (*needs a 2 A supply, a 1 GHz clock, and [usb+ssd] storage to achieve this speed) (--db-sync-mode=fastest:async:1000).
berkeley-db optimized processing times (with per-block checkpoints):
1. RPI2: 12-15 hours (*needs a 2 A supply, a 1 GHz clock, and [usb+ssd] storage to achieve this speed) (--db-sync-mode=fastest:async:1000).
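The checkpoint-lookup sketch referenced under --fast-block-sync above: a minimal illustration of trading the expensive long (PoW) hash for a lookup against embedded known-good hashes. The hash256 type, the known_good map, and fast_block_sync_check are hypothetical stand-ins for this example only; the real fast-sync path lives in the blockchain code, keyed by crypto::hash checkpoints.

#include <cstdint>
#include <cstring>
#include <map>

// stand-in for crypto::hash in this sketch
struct hash256
{
  unsigned char data[32];
  bool operator==(const hash256& o) const { return std::memcmp(data, o.data, 32) == 0; }
};

enum class fast_check { accept, reject, must_verify };

// If this height has an embedded known-good hash, decide by comparison and
// skip the long-hash computation entirely; otherwise fall back to full PoW.
fast_check fast_block_sync_check(uint64_t height, const hash256& block_hash,
                                 const std::map<uint64_t, hash256>& known_good)
{
  const auto it = known_good.find(height);
  if (it == known_good.end())
    return fast_check::must_verify;
  return (it->second == block_hash) ? fast_check::accept : fast_check::reject;
}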
2015-07-10 20:09:32 +00:00
db_flags = DEFAULT_FLAGS;
}
bool safemode = false;
if (options.size() >= 1)
{
if (options[0] == "safe")
{
safemode = true;
2016-08-28 11:07:00 +00:00
db_flags = DBS_SAFE_MODE;
2015-07-10 20:09:32 +00:00
sync_mode = db_nosync;
}
else if (options[0] == "fast")
2016-08-28 11:07:00 +00:00
db_flags = DBS_FAST_MODE;
2015-07-10 20:09:32 +00:00
else if ( options [ 0 ] = = " fastest " )
2017-01-01 03:38:55 +00:00
{
2016-08-28 11:07:00 +00:00
db_flags = DBS_FASTEST_MODE;
2017-01-01 03:38:55 +00:00
blocks_per_sync = 1000; // default to fastest:async:1000
}
2015-07-10 20:09:32 +00:00
else
db_flags = DEFAULT_FLAGS;
}
if (options.size() >= 2 && !safemode)
{
if (options[1] == "sync")
sync_mode = db_sync;
else if (options[1] == "async")
sync_mode = db_async;
}
if (options.size() >= 3 && !safemode)
{
2016-08-29 15:42:52 +00:00
char *endptr;
uint64_t bps = strtoull(options[2].c_str(), &endptr, 0);
if (*endptr == '\0')
blocks_per_sync = bps; // accept the value only if the entire string parsed as a number
2015-07-10 20:09:32 +00:00
}
db->open(filename, db_flags);
if (!db->m_open)
2015-12-14 04:54:39 +00:00
return false;
2015-03-25 15:41:30 +00:00
}
catch (const DB_ERROR& e)
{
2016-10-03 21:11:00 +00:00
LOG_ERROR ( " Error opening database: " < < e . what ( ) ) ;
2015-03-25 15:41:30 +00:00
return false ;
}
2015-07-10 20:09:32 +00:00
m_blockchain_storage.set_user_options(blocks_threads,
blocks_per_sync, sync_mode, fast_sync);
2016-02-08 18:47:56 +00:00
r = m_blockchain_storage.init(db, m_testnet, test_options);
2015-07-10 20:09:32 +00:00
2016-01-29 15:09:17 +00:00
// now that we have a valid m_blockchain_storage, we can clean out any
// transactions in the pool that do not conform to the current fork
m_mempool.validate(m_blockchain_storage.get_current_hard_fork_version());
2015-12-08 23:06:29 +00:00
bool show_time_stats = command_line::get_arg(vm, command_line::arg_show_time_stats) != 0;
2015-07-10 20:09:32 +00:00
m_blockchain_storage.set_show_time_stats(show_time_stats);
2014-03-03 22:07:58 +00:00
CHECK_AND_ASSERT_MES ( r , false , " Failed to initialize blockchain storage " ) ;
2016-09-24 16:00:19 +00:00
block_sync_size = command_line::get_arg(vm, command_line::arg_block_sync_size);
if (block_sync_size == 0)
block_sync_size = BLOCKS_SYNCHRONIZING_DEFAULT_COUNT;
2014-09-27 21:46:12 +00:00
// load json & DNS checkpoints, and verify them
// with respect to what blocks we already have
2014-09-29 20:30:47 +00:00
CHECK_AND_ASSERT_MES(update_checkpoints(), false, "One or more checkpoints loaded from json or dns conflicted with existing checkpoints.");
2014-09-27 21:46:12 +00:00
2015-01-29 22:10:53 +00:00
r = m_miner.init(vm, m_testnet);
2016-03-25 06:22:06 +00:00
CHECK_AND_ASSERT_MES ( r , false , " Failed to initialize miner instance " ) ;
2014-03-03 22:07:58 +00:00
return load_state_data();
}
//-----------------------------------------------------------------------------------------------
bool core::set_genesis_block(const block& b)
{
return m_blockchain_storage.reset_and_set_genesis_block(b);
}
//-----------------------------------------------------------------------------------------------
bool core::load_state_data()
{
// may be some code later
return true;
}
//-----------------------------------------------------------------------------------------------
bool core::deinit()
{
2015-12-14 04:54:39 +00:00
m_miner.stop();
m_mempool.deinit();
2016-10-03 01:06:55 +00:00
m_blockchain_storage.deinit();
2016-01-31 12:58:08 +00:00
unlock_db_directory();
2014-03-03 22:07:58 +00:00
return true;
}
2015-02-12 19:59:39 +00:00
//-----------------------------------------------------------------------------------------------
2015-02-20 21:28:03 +00:00
void core::test_drop_download()
2015-02-12 19:59:39 +00:00
{
2015-12-14 04:54:39 +00:00
m_test_drop_download = false;
2015-02-12 19:59:39 +00:00
}
//-----------------------------------------------------------------------------------------------
2015-02-20 21:28:03 +00:00
void core::test_drop_download_height(uint64_t height)
2015-02-12 19:59:39 +00:00
{
2015-12-14 04:54:39 +00:00
m_test_drop_download_height = height;
2015-02-20 21:28:03 +00:00
}
//-----------------------------------------------------------------------------------------------
2015-09-19 10:25:57 +00:00
bool core::get_test_drop_download() const
2015-02-20 21:28:03 +00:00
{
2015-12-14 04:54:39 +00:00
return m_test_drop_download;
2015-02-20 21:28:03 +00:00
}
//-----------------------------------------------------------------------------------------------
2015-09-19 10:25:57 +00:00
bool core::get_test_drop_download_height() const
2015-02-20 21:28:03 +00:00
{
2015-12-14 04:54:39 +00:00
if (m_test_drop_download_height == 0)
return true;
if (get_blockchain_storage().get_current_blockchain_height() <= m_test_drop_download_height)
return true;
2015-02-20 21:28:03 +00:00
2015-12-14 04:54:39 +00:00
return false;
2015-02-12 19:59:39 +00:00
}
2014-03-03 22:07:58 +00:00
//-----------------------------------------------------------------------------------------------
2017-01-14 13:01:21 +00:00
bool core::handle_incoming_tx(const blobdata& tx_blob, tx_verification_context& tvc, bool keeped_by_block, bool relayed, bool do_not_relay)
2014-03-03 22:07:58 +00:00
{
tvc = boost::value_initialized<tx_verification_context>();
// want to process all transactions sequentially
CRITICAL_REGION_LOCAL(m_incoming_tx_lock);
if (tx_blob.size() > get_max_tx_size())
{
2014-09-09 09:32:00 +00:00
LOG_PRINT_L1("WRONG TRANSACTION BLOB, too big size " << tx_blob.size() << ", rejected");
2014-03-03 22:07:58 +00:00
tvc.m_verifivation_failed = true;
2016-03-27 11:35:36 +00:00
tvc.m_too_big = true;
2014-03-03 22:07:58 +00:00
return false;
}
crypto::hash tx_hash = null_hash;
crypto::hash tx_prefixt_hash = null_hash;
transaction tx;
if (!parse_tx_from_blob(tx, tx_hash, tx_prefixt_hash, tx_blob))
{
2014-09-09 09:32:00 +00:00
LOG_PRINT_L1("WRONG TRANSACTION BLOB, Failed to parse, rejected");
2014-03-03 22:07:58 +00:00
tvc.m_verifivation_failed = true;
return false;
}
//std::cout << "!"<< tx.vin.size() << std::endl;
2016-06-29 22:00:20 +00:00
uint8_t version = m_blockchain_storage.get_current_hard_fork_version();
const size_t max_tx_version = version == 1 ? 1 : 2;
if (tx.version == 0 || tx.version > max_tx_version)
{
// v2 is the latest one we know
tvc.m_verifivation_failed = true;
return false;
}
2014-03-03 22:07:58 +00:00
if (!check_tx_syntax(tx))
{
2014-09-09 09:32:00 +00:00
LOG_PRINT_L1("WRONG TRANSACTION BLOB, Failed to check tx " << tx_hash << " syntax, rejected");
2014-03-03 22:07:58 +00:00
tvc.m_verifivation_failed = true;
return false;
}
2017-01-14 13:29:08 +00:00
// resolve outPk references in rct txes
// outPk isn't the only thing that needs resolving for a fully resolved tx,
// but outPk (1) is needed now to check range proof semantics, and
// (2) does not need access to the blockchain to find data
if (tx.version >= 2)
{
rct::rctSig& rv = tx.rct_signatures;
if (rv.outPk.size() != tx.vout.size())
{
LOG_PRINT_L1("WRONG TRANSACTION BLOB, Bad outPk size in tx " << tx_hash << ", rejected");
return false;
}
for (size_t n = 0; n < tx.rct_signatures.outPk.size(); ++n)
rv.outPk[n].dest = rct::pk2rct(boost::get<txout_to_key>(tx.vout[n].target).key);
}
2014-03-03 22:07:58 +00:00
if (!check_tx_semantic(tx, keeped_by_block))
{
2014-09-09 09:32:00 +00:00
LOG_PRINT_L1("WRONG TRANSACTION BLOB, Failed to check tx " << tx_hash << " semantic, rejected");
2014-03-03 22:07:58 +00:00
tvc.m_verifivation_failed = true;
return false;
}
2017-01-14 13:01:21 +00:00
bool r = add_new_tx(tx, tx_hash, tx_prefixt_hash, tx_blob.size(), tvc, keeped_by_block, relayed, do_not_relay);
2014-03-03 22:07:58 +00:00
if (tvc.m_verifivation_failed)
2014-09-09 09:32:00 +00:00
{ LOG_PRINT_RED_L1("Transaction verification failed: " << tx_hash); }
2014-03-03 22:07:58 +00:00
else if (tvc.m_verifivation_impossible)
2014-09-09 09:32:00 +00:00
{ LOG_PRINT_RED_L1("Transaction verification impossible: " << tx_hash); }
2014-03-03 22:07:58 +00:00
if (tvc.m_added_to_pool)
LOG_PRINT_L1("tx added: " << tx_hash);
return r;
}
//-----------------------------------------------------------------------------------------------
2015-09-19 10:25:57 +00:00
bool core::get_stat_info(core_stat_info& st_inf) const
2014-03-03 22:07:58 +00:00
{
st_inf.mining_speed = m_miner.get_speed();
st_inf.alternative_blocks = m_blockchain_storage.get_alternative_blocks_count();
st_inf.blockchain_height = m_blockchain_storage.get_current_blockchain_height();
st_inf.tx_pool_size = m_mempool.get_transactions_count();
st_inf.top_block_id_str = epee::string_tools::pod_to_hex(m_blockchain_storage.get_tail_id());
return true;
}
//-----------------------------------------------------------------------------------------------
2015-09-19 10:25:57 +00:00
  bool core::check_tx_semantic(const transaction& tx, bool keeped_by_block) const
2014-03-03 22:07:58 +00:00
  {
    if(!tx.vin.size())
    {
2014-09-09 09:32:00 +00:00
      LOG_PRINT_RED_L1("tx with empty inputs, rejected for tx id= " << get_transaction_hash(tx));
2014-03-03 22:07:58 +00:00
      return false;
    }
    if(!check_inputs_types_supported(tx))
    {
2014-09-09 09:32:00 +00:00
      LOG_PRINT_RED_L1("unsupported input types for tx id= " << get_transaction_hash(tx));
2014-03-03 22:07:58 +00:00
      return false;
    }
    if(!check_outs_valid(tx))
    {
2014-09-09 09:32:00 +00:00
      LOG_PRINT_RED_L1("tx with invalid outputs, rejected for tx id= " << get_transaction_hash(tx));
2014-03-03 22:07:58 +00:00
      return false;
    }
2016-06-15 22:37:13 +00:00
    if (tx.version > 1)
    {
      if (tx.rct_signatures.outPk.size() != tx.vout.size())
      {
        LOG_PRINT_RED_L1("tx with mismatched vout/outPk count, rejected for tx id= " << get_transaction_hash(tx));
        return false;
      }
    }
2014-03-03 22:07:58 +00:00
    if(!check_money_overflow(tx))
    {
2014-09-09 09:32:00 +00:00
      LOG_PRINT_RED_L1("tx has money overflow, rejected for tx id= " << get_transaction_hash(tx));
2014-03-03 22:07:58 +00:00
      return false;
    }
2016-06-15 22:37:13 +00:00
    if (tx.version == 1)
2014-03-03 22:07:58 +00:00
    {
2016-06-15 22:37:13 +00:00
      // for v1, inputs must strictly exceed outputs; the difference is the fee
      uint64_t amount_in = 0;
      get_inputs_money_amount(tx, amount_in);
      uint64_t amount_out = get_outs_money_amount(tx);
      if(amount_in <= amount_out)
      {
        LOG_PRINT_RED_L1("tx with wrong amounts: ins " << amount_in << ", outs " << amount_out << ", rejected for tx id= " << get_transaction_hash(tx));
        return false;
      }
2014-03-03 22:07:58 +00:00
    }
2016-06-15 22:37:13 +00:00
    // for version > 1, the ringct signature check verifies that amounts match
2014-03-03 22:07:58 +00:00
2014-10-28 03:44:45 +00:00
    if(!keeped_by_block && get_object_blobsize(tx) >= m_blockchain_storage.get_current_cumulative_blocksize_limit() - CRYPTONOTE_COINBASE_BLOB_RESERVED_SIZE)
2014-03-03 22:07:58 +00:00
    {
2014-10-28 03:44:45 +00:00
      LOG_PRINT_RED_L1("tx is too large " << get_object_blobsize(tx) << ", expected not bigger than " << m_blockchain_storage.get_current_cumulative_blocksize_limit() - CRYPTONOTE_COINBASE_BLOB_RESERVED_SIZE);
2014-03-03 22:07:58 +00:00
      return false;
    }
    // check that the tx uses distinct key images
    if(!check_tx_inputs_keyimages_diff(tx))
    {
2014-09-13 08:04:05 +00:00
      LOG_PRINT_RED_L1("tx uses a single key image more than once");
2014-03-03 22:07:58 +00:00
      return false;
    }
2017-01-14 13:29:08 +00:00
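    // RingCT semantic checks: only the parts that need no blockchain access are
    // verified here; the blockchain-dependent parts are checked later, with the inputs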
    if (tx.version >= 2)
    {
      const rct::rctSig &rv = tx.rct_signatures;
      switch (rv.type) {
        case rct::RCTTypeNull:
          // RCTTypeNull is only valid for coinbase txes, which do not come through here, so reject
          LOG_PRINT_RED_L1("Unexpected Null rctSig type");
          return false;
        case rct::RCTTypeSimple:
          if (!rct::verRctSimple(rv, true))
          {
            LOG_PRINT_RED_L1("rct signature semantics check failed");
            return false;
          }
          break;
        case rct::RCTTypeFull:
          if (!rct::verRct(rv, true))
          {
            LOG_PRINT_RED_L1("rct signature semantics check failed");
            return false;
          }
          break;
        default:
          LOG_PRINT_RED_L1("Unknown rct type: " << rv.type);
          return false;
      }
    }
2014-03-03 22:07:58 +00:00
    return true;
  }
//-----------------------------------------------------------------------------------------------
2015-09-19 10:25:57 +00:00
  bool core::is_key_image_spent(const crypto::key_image& key_image) const
2015-08-11 09:49:15 +00:00
  {
    return m_blockchain_storage.have_tx_keyimg_as_spent(key_image);
  }
  //-----------------------------------------------------------------------------------------------
2015-09-19 10:25:57 +00:00
  bool core::are_key_images_spent(const std::vector<crypto::key_image>& key_im, std::vector<bool>& spent) const
2015-08-11 09:49:15 +00:00
  {
    spent.clear();
    BOOST_FOREACH(auto& ki, key_im)
    {
      spent.push_back(m_blockchain_storage.have_tx_keyimg_as_spent(ki));
    }
    return true;
  }
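  // Usage sketch (hypothetical caller, e.g. an RPC handler; names are illustrative,
  // not from this file):
  //   std::vector<bool> spent;
  //   m_core.are_key_images_spent(key_images, spent);
  //   // spent[i] is true iff key_images[i] has already been used on the chain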
//-----------------------------------------------------------------------------------------------
2016-10-13 16:49:25 +00:00
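  // Returns {emission, fees} for the given block range: each block's coinbase output
  // total is the base reward plus the fees of the txes it includes, so subtracting
  // the per-block fee total leaves the pure emission.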
  std::pair<uint64_t, uint64_t> core::get_coinbase_tx_sum(const uint64_t start_offset, const size_t count)
2016-10-10 19:45:51 +00:00
  {
    std::list<block> blocks;
2016-10-10 23:55:18 +00:00
    uint64_t emission_amount = 0;
    uint64_t total_fee_amount = 0;
2016-10-10 21:19:36 +00:00
    this->get_blocks(start_offset, count, blocks);
2016-10-10 19:45:51 +00:00
    BOOST_FOREACH(auto& b, blocks)
    {
2016-12-21 11:18:33 +00:00
      std::list<transaction> txs;
      std::list<crypto::hash> missed_txs;
      uint64_t coinbase_amount = get_outs_money_amount(b.miner_tx);
2016-10-10 23:55:18 +00:00
      this->get_transactions(b.tx_hashes, txs, missed_txs);
2016-12-21 11:18:33 +00:00
      uint64_t tx_fee_amount = 0;
2016-10-10 23:55:18 +00:00
      BOOST_FOREACH(const auto& tx, txs)
      {
        tx_fee_amount += get_tx_fee(tx);
      }
      emission_amount += coinbase_amount - tx_fee_amount;
      total_fee_amount += tx_fee_amount;
2016-10-10 19:45:51 +00:00
    }
2016-10-10 23:55:18 +00:00
    return std::pair<uint64_t, uint64_t>(emission_amount, total_fee_amount);
2016-10-10 19:45:51 +00:00
  }
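  // Usage sketch (hypothetical caller; names are illustrative):
  //   const std::pair<uint64_t, uint64_t> sums = m_core.get_coinbase_tx_sum(0, 100);
  //   // sums.first is the emission and sums.second the total fees for blocks [0, 100)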
//-----------------------------------------------------------------------------------------------
2015-09-19 10:25:57 +00:00
  bool core::check_tx_inputs_keyimages_diff(const transaction& tx) const
2014-03-03 22:07:58 +00:00
  {
    std::unordered_set<crypto::key_image> ki;
    BOOST_FOREACH(const auto& in, tx.vin)
    {
      CHECKED_GET_SPECIFIC_VARIANT(in, const txin_to_key, tokey_in, false);
      // insert().second is false when the key image was already seen, i.e. a duplicate
      if(!ki.insert(tokey_in.k_image).second)
        return false;
    }
    return true;
  }
//-----------------------------------------------------------------------------------------------
2017-01-14 13:01:21 +00:00
  bool core::add_new_tx(const transaction& tx, tx_verification_context& tvc, bool keeped_by_block, bool relayed, bool do_not_relay)
2014-03-03 22:07:58 +00:00
  {
    crypto::hash tx_hash = get_transaction_hash(tx);
    crypto::hash tx_prefix_hash = get_transaction_prefix_hash(tx);
    blobdata bl;
    t_serializable_object_to_blob(tx, bl);
2017-01-14 13:01:21 +00:00
    return add_new_tx(tx, tx_hash, tx_prefix_hash, bl.size(), tvc, keeped_by_block, relayed, do_not_relay);
2014-03-03 22:07:58 +00:00
  }
//-----------------------------------------------------------------------------------------------
2015-09-19 10:25:57 +00:00
  size_t core::get_blockchain_total_transactions() const
2014-03-03 22:07:58 +00:00
  {
    return m_blockchain_storage.get_total_transactions();
  }
//-----------------------------------------------------------------------------------------------
2017-01-14 13:01:21 +00:00
  bool core::add_new_tx(const transaction& tx, const crypto::hash& tx_hash, const crypto::hash& tx_prefix_hash, size_t blob_size, tx_verification_context& tvc, bool keeped_by_block, bool relayed, bool do_not_relay)
2014-03-03 22:07:58 +00:00
  {
    if(m_mempool.have_tx(tx_hash))
    {
      LOG_PRINT_L2("tx " << tx_hash << " is already in the tx pool");
      return true;
    }
    if(m_blockchain_storage.have_tx(tx_hash))
    {
      LOG_PRINT_L2("tx " << tx_hash << " is already in the blockchain");
      return true;
    }
2016-01-29 15:09:17 +00:00
    uint8_t version = m_blockchain_storage.get_current_hard_fork_version();
2017-01-14 13:01:21 +00:00
    return m_mempool.add_tx(tx, tx_hash, blob_size, tvc, keeped_by_block, relayed, do_not_relay, version);
2015-11-21 00:26:48 +00:00
  }
//-----------------------------------------------------------------------------------------------
  bool core::relay_txpool_transactions()
  {
    // we attempt to relay txes that should be relayed, but were not
    std::list<std::pair<crypto::hash, cryptonote::transaction>> txs;
    if (m_mempool.get_relayable_transactions(txs))
    {
      cryptonote_connection_context fake_context = AUTO_VAL_INIT(fake_context);
      tx_verification_context tvc = AUTO_VAL_INIT(tvc);
      NOTIFY_NEW_TRANSACTIONS::request r;
      blobdata bl;
      for (auto it = txs.begin(); it != txs.end(); ++it)
      {
        t_serializable_object_to_blob(it->second, bl);
        r.txs.push_back(bl);
      }
      get_protocol()->relay_transactions(r, fake_context);
      m_mempool.set_relayed(txs);
    }
    return true;
2014-03-03 22:07:58 +00:00
  }
//-----------------------------------------------------------------------------------------------
2016-10-22 19:46:51 +00:00
  void core::on_transaction_relayed(const cryptonote::blobdata& tx_blob)
  {
    std::list<std::pair<crypto::hash, cryptonote::transaction>> txs;
    cryptonote::transaction tx;
    crypto::hash tx_hash, tx_prefix_hash;
    if (!parse_and_validate_tx_from_blob(tx_blob, tx, tx_hash, tx_prefix_hash))
    {
2016-12-04 13:13:54 +00:00
      LOG_ERROR("Failed to parse relayed transaction");
2016-10-22 19:46:51 +00:00
      return;
    }
    txs.push_back(std::make_pair(tx_hash, std::move(tx)));
    m_mempool.set_relayed(txs);
  }
//-----------------------------------------------------------------------------------------------
2014-03-03 22:07:58 +00:00
  bool core::get_block_template(block& b, const account_public_address& adr, difficulty_type& diffic, uint64_t& height, const blobdata& ex_nonce)
  {
    return m_blockchain_storage.create_block_template(b, adr, diffic, height, ex_nonce);
  }
//-----------------------------------------------------------------------------------------------
2015-09-19 10:25:57 +00:00
  bool core::find_blockchain_supplement(const std::list<crypto::hash>& qblock_ids, NOTIFY_RESPONSE_CHAIN_ENTRY::request& resp) const
2014-03-03 22:07:58 +00:00
  {
    return m_blockchain_storage.find_blockchain_supplement(qblock_ids, resp);
  }
  //-----------------------------------------------------------------------------------------------
2015-09-19 10:25:57 +00:00
  bool core::find_blockchain_supplement(const uint64_t req_start_block, const std::list<crypto::hash>& qblock_ids, std::list<std::pair<block, std::list<transaction>>>& blocks, uint64_t& total_height, uint64_t& start_height, size_t max_count) const
2014-03-03 22:07:58 +00:00
  {
2014-08-01 08:17:50 +00:00
    return m_blockchain_storage.find_blockchain_supplement(req_start_block, qblock_ids, blocks, total_height, start_height, max_count);
2014-03-03 22:07:58 +00:00
  }
//-----------------------------------------------------------------------------------------------
2015-09-19 10:25:57 +00:00
  void core::print_blockchain(uint64_t start_index, uint64_t end_index) const
2014-03-03 22:07:58 +00:00
  {
    m_blockchain_storage.print_blockchain(start_index, end_index);
  }
  //-----------------------------------------------------------------------------------------------
2015-09-19 10:25:57 +00:00
  void core::print_blockchain_index() const
2014-03-03 22:07:58 +00:00
  {
    m_blockchain_storage.print_blockchain_index();
  }
  //-----------------------------------------------------------------------------------------------
  void core::print_blockchain_outs(const std::string& file)
  {
    m_blockchain_storage.print_blockchain_outs(file);
  }
//-----------------------------------------------------------------------------------------------
2015-09-19 10:25:57 +00:00
  bool core::get_random_outs_for_amounts(const COMMAND_RPC_GET_RANDOM_OUTPUTS_FOR_AMOUNTS::request& req, COMMAND_RPC_GET_RANDOM_OUTPUTS_FOR_AMOUNTS::response& res) const
2014-03-03 22:07:58 +00:00
  {
    return m_blockchain_storage.get_random_outs_for_amounts(req, res);
  }
  //-----------------------------------------------------------------------------------------------
2016-11-22 20:00:40 +00:00
  bool core::get_outs(const COMMAND_RPC_GET_OUTPUTS_BIN::request& req, COMMAND_RPC_GET_OUTPUTS_BIN::response& res) const
2016-08-02 20:48:09 +00:00
  {
    return m_blockchain_storage.get_outs(req, res);
  }
  //-----------------------------------------------------------------------------------------------
2016-06-05 09:46:18 +00:00
  bool core::get_random_rct_outs(const COMMAND_RPC_GET_RANDOM_RCT_OUTPUTS::request& req, COMMAND_RPC_GET_RANDOM_RCT_OUTPUTS::response& res) const
  {
    return m_blockchain_storage.get_random_rct_outs(req, res);
  }
  //-----------------------------------------------------------------------------------------------
2015-09-19 10:25:57 +00:00
  bool core::get_tx_outputs_gindexs(const crypto::hash& tx_id, std::vector<uint64_t>& indexs) const
2014-03-03 22:07:58 +00:00
  {
    return m_blockchain_storage.get_tx_outputs_gindexs(tx_id, indexs);
  }
//-----------------------------------------------------------------------------------------------
  void core::pause_mine()
  {
    m_miner.pause();
  }
  //-----------------------------------------------------------------------------------------------
  void core::resume_mine()
  {
    m_miner.resume();
  }
//-----------------------------------------------------------------------------------------------
  bool core::handle_block_found(block& b)
  {
    block_verification_context bvc = boost::value_initialized<block_verification_context>();
    m_miner.pause();
    m_blockchain_storage.add_new_block(b, bvc);
    // update the miner's block template in any case
    update_miner_block_template();
    m_miner.resume();
    CHECK_AND_ASSERT_MES(!bvc.m_verifivation_failed, false, "mined block failed verification");
    if(bvc.m_added_to_main_chain)
    {
      cryptonote_connection_context exclude_context = boost::value_initialized<cryptonote_connection_context>();
      NOTIFY_NEW_BLOCK::request arg = AUTO_VAL_INIT(arg);
      arg.hop = 0;
      arg.current_blockchain_height = m_blockchain_storage.get_current_blockchain_height();
      std::list<crypto::hash> missed_txs;
      std::list<transaction> txs;
      m_blockchain_storage.get_transactions(b.tx_hashes, txs, missed_txs);
      if(missed_txs.size() && m_blockchain_storage.get_block_id_by_height(get_block_height(b)) != get_block_hash(b))
      {
2014-09-09 09:32:00 +00:00
        LOG_PRINT_L1("Block found, but it seems a reorganization happened right after it was added; not relaying this block");
2014-03-03 22:07:58 +00:00
        return true;
      }
      CHECK_AND_ASSERT_MES(txs.size() == b.tx_hashes.size() && !missed_txs.size(), false, "can't find some transactions in found block: " << get_block_hash(b) << " txs.size()=" << txs.size()
        << ", b.tx_hashes.size()=" << b.tx_hashes.size() << ", missed_txs.size() " << missed_txs.size());
      block_to_blob(b, arg.b.block);
      // pack transactions
      BOOST_FOREACH(auto& tx, txs)
        arg.b.txs.push_back(t_serializable_object_to_blob(tx));
      m_pprotocol->relay_block(arg, exclude_context);
    }
    return bvc.m_added_to_main_chain;
  }
//-----------------------------------------------------------------------------------------------
  void core::on_synchronized()
  {
    m_miner.on_synchronized();
  }
  //-----------------------------------------------------------------------------------------------
  bool core::add_new_block(const block& b, block_verification_context& bvc)
  {
    return m_blockchain_storage.add_new_block(b, bvc);
  }
2015-07-10 20:09:32 +00:00
//-----------------------------------------------------------------------------------------------
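  // prepare_handle_incoming_blocks() and cleanup_handle_incoming_blocks() bracket a
  // batch of incoming blocks so the blockchain layer can precompute per-block data
  // for the whole group and sync/flush once at the end.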
  bool core::prepare_handle_incoming_blocks(const std::list<block_complete_entry>& blocks)
  {
    m_blockchain_storage.prepare_handle_incoming_blocks(blocks);
    return true;
  }
  //-----------------------------------------------------------------------------------------------
  bool core::cleanup_handle_incoming_blocks(bool force_sync)
  {
    m_blockchain_storage.cleanup_handle_incoming_blocks(force_sync);
    return true;
  }
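  // Usage sketch (hypothetical sync-handler sequence; block_entries stands in for a
  // batch received from a peer):
  //   m_core.prepare_handle_incoming_blocks(block_entries);
  //   for (const auto& entry : block_entries)
  //   {
  //     block_verification_context bvc = boost::value_initialized<block_verification_context>();
  //     m_core.handle_incoming_block(entry.block, bvc, true);
  //   }
  //   m_core.cleanup_handle_incoming_blocks(true);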
2014-03-03 22:07:58 +00:00
//-----------------------------------------------------------------------------------------------
  bool core::handle_incoming_block(const blobdata& block_blob, block_verification_context& bvc, bool update_miner_blocktemplate)
  {
2014-09-29 20:30:47 +00:00
    // load json & DNS checkpoints every 10min/hour respectively,
    // and verify them with respect to what blocks we already have
    CHECK_AND_ASSERT_MES(update_checkpoints(), false, "One or more checkpoints loaded from json or dns conflicted with existing checkpoints.");
2014-03-03 22:07:58 +00:00
    bvc = boost::value_initialized<block_verification_context>();
    if(block_blob.size() > get_max_block_size())
    {
2014-09-09 09:32:00 +00:00
      LOG_PRINT_L1("WRONG BLOCK BLOB, too big size " << block_blob.size() << ", rejected");
2014-03-03 22:07:58 +00:00
      bvc.m_verifivation_failed = true;
      return false;
    }
    block b = AUTO_VAL_INIT(b);
    if(!parse_and_validate_block_from_blob(block_blob, b))
    {
2014-09-09 09:32:00 +00:00
      LOG_PRINT_L1("Failed to parse and validate new block");
2014-03-03 22:07:58 +00:00
      bvc.m_verifivation_failed = true;
      return false;
    }
    add_new_block(b, bvc);
    if(update_miner_blocktemplate && bvc.m_added_to_main_chain)
      update_miner_block_template();
    return true;
  }
//-----------------------------------------------------------------------------------------------
2014-06-11 21:32:53 +00:00
  // Used by the RPC server to check the size of an incoming
  // block_blob
2015-09-19 10:25:57 +00:00
  bool core::check_incoming_block_size(const blobdata& block_blob) const
2014-06-11 21:32:53 +00:00
  {
    if(block_blob.size() > get_max_block_size())
    {
2014-09-09 09:32:00 +00:00
      LOG_PRINT_L1("WRONG BLOCK BLOB, too big size " << block_blob.size() << ", rejected");
2014-06-11 21:32:53 +00:00
      return false;
    }
    return true;
  }
//-----------------------------------------------------------------------------------------------
2015-09-19 10:25:57 +00:00
  crypto::hash core::get_tail_id() const
2014-03-03 22:07:58 +00:00
  {
    return m_blockchain_storage.get_tail_id();
  }
  //-----------------------------------------------------------------------------------------------
2015-09-19 10:25:57 +00:00
  size_t core::get_pool_transactions_count() const
2014-03-03 22:07:58 +00:00
  {
    return m_mempool.get_transactions_count();
  }
  //-----------------------------------------------------------------------------------------------
2015-09-19 10:25:57 +00:00
  bool core::have_block(const crypto::hash& id) const
2014-03-03 22:07:58 +00:00
  {
    return m_blockchain_storage.have_block(id);
  }
  //-----------------------------------------------------------------------------------------------
2015-09-19 10:25:57 +00:00
  bool core::parse_tx_from_blob(transaction& tx, crypto::hash& tx_hash, crypto::hash& tx_prefix_hash, const blobdata& blob) const
2014-03-03 22:07:58 +00:00
  {
    return parse_and_validate_tx_from_blob(blob, tx, tx_hash, tx_prefix_hash);
  }
  //-----------------------------------------------------------------------------------------------
2015-09-19 10:25:57 +00:00
  bool core::check_tx_syntax(const transaction& tx) const
2014-03-03 22:07:58 +00:00
  {
    // currently a no-op: no syntax checks are performed beyond parsing
    return true;
  }
//-----------------------------------------------------------------------------------------------
2015-09-19 10:25:57 +00:00
  bool core::get_pool_transactions(std::list<transaction>& txs) const
2014-03-03 22:07:58 +00:00
  {
2014-07-17 14:31:44 +00:00
    m_mempool.get_transactions(txs);
    return true;
2014-03-03 22:07:58 +00:00
  }
2016-10-26 19:00:08 +00:00
  //-----------------------------------------------------------------------------------------------
  bool core::get_pool_transaction(const crypto::hash& id, transaction& tx) const
  {
    return m_mempool.get_transaction(id, tx);
  }
2014-03-03 22:07:58 +00:00
  //-----------------------------------------------------------------------------------------------
2015-04-23 12:13:07 +00:00
  bool core::get_pool_transactions_and_spent_keys_info(std::vector<tx_info>& tx_infos, std::vector<spent_key_image_info>& key_image_infos) const
  {
    return m_mempool.get_transactions_and_spent_keys_info(tx_infos, key_image_infos);
  }
//-----------------------------------------------------------------------------------------------
2015-09-19 10:25:57 +00:00
  bool core::get_short_chain_history(std::list<crypto::hash>& ids) const
2014-03-03 22:07:58 +00:00
  {
    return m_blockchain_storage.get_short_chain_history(ids);
  }
  //-----------------------------------------------------------------------------------------------
  bool core::handle_get_objects(NOTIFY_REQUEST_GET_OBJECTS::request& arg, NOTIFY_RESPONSE_GET_OBJECTS::request& rsp, cryptonote_connection_context& context)
  {
    return m_blockchain_storage.handle_get_objects(arg, rsp);
  }
  //-----------------------------------------------------------------------------------------------
2015-09-19 10:25:57 +00:00
  crypto::hash core::get_block_id_by_height(uint64_t height) const
2014-03-03 22:07:58 +00:00
  {
    return m_blockchain_storage.get_block_id_by_height(height);
  }
//-----------------------------------------------------------------------------------------------
2015-09-19 10:25:57 +00:00
  bool core::get_block_by_hash(const crypto::hash& h, block& blk) const
2015-07-10 20:09:32 +00:00
  {
2014-03-03 22:07:58 +00:00
    return m_blockchain_storage.get_block_by_hash(h, blk);
  }
//-----------------------------------------------------------------------------------------------
2015-09-19 10:25:57 +00:00
  std::string core::print_pool(bool short_format) const
2014-03-03 22:07:58 +00:00
  {
    return m_mempool.print_pool(short_format);
  }
  //-----------------------------------------------------------------------------------------------
  bool core::update_miner_block_template()
  {
    m_miner.on_block_chain_update();
    return true;
  }
//-----------------------------------------------------------------------------------------------
  bool core::on_idle()
  {
    if(!m_starter_message_showed)
    {
2015-12-14 04:54:39 +00:00
      LOG_PRINT_L0(ENDL << "**********************************************************************" << ENDL
        << "The daemon will start synchronizing with the network. It may take up to several hours." << ENDL
2014-03-03 22:07:58 +00:00
        << ENDL
2014-05-25 17:06:40 +00:00
        << "You can set the level of log detail through the \"set_log <level>\" command, where <level> is between 0 (no details) and 4 (very verbose)." << ENDL
2014-03-03 22:07:58 +00:00
        << ENDL
        << "Use the \"help\" command to see the list of available commands." << ENDL
        << ENDL
2015-12-14 04:54:39 +00:00
        << "Note: in case you need to interrupt the process, use the \"exit\" command. Otherwise, the current progress won't be saved." << ENDL
2014-03-03 22:07:58 +00:00
        << "**********************************************************************");
      m_starter_message_showed = true;
    }
2015-09-13 17:09:57 +00:00
    m_fork_moaner.do_call(boost::bind(&core::check_fork_time, this));
2015-11-21 00:26:48 +00:00
    m_txpool_auto_relayer.do_call(boost::bind(&core::relay_txpool_transactions, this));
2014-03-03 22:07:58 +00:00
    m_miner.on_idle();
2014-06-15 07:48:13 +00:00
    m_mempool.on_idle();
2014-03-03 22:07:58 +00:00
    return true;
  }
//-----------------------------------------------------------------------------------------------
2015-09-13 17:09:57 +00:00
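  // Called periodically from on_idle(); prints a loud warning when the hard fork
  // state indicates this daemon is out of date or already forked off the network.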
  bool core::check_fork_time()
  {
    HardFork::State state = m_blockchain_storage.get_hard_fork_state();
    switch (state) {
      case HardFork::LikelyForked:
2016-02-17 22:57:14 +00:00
        LOG_PRINT_RED_L0(ENDL
2015-09-13 17:09:57 +00:00
          << "**********************************************************************" << ENDL
          << "Last scheduled hard fork is too far in the past." << ENDL
          << "We are most likely forked from the network. Daemon update needed now." << ENDL
          << "**********************************************************************" << ENDL);
        break;
      case HardFork::UpdateNeeded:
2016-02-17 22:57:14 +00:00
        LOG_PRINT_RED_L0(ENDL
2015-09-13 17:09:57 +00:00
          << "**********************************************************************" << ENDL
          << "Last scheduled hard fork time shows a daemon update is needed now." << ENDL
          << "**********************************************************************" << ENDL);
        break;
      default:
        break;
    }
    return true;
  }
//-----------------------------------------------------------------------------------------------
2015-07-10 20:09:32 +00:00
  void core::set_target_blockchain_height(uint64_t target_blockchain_height)
  {
2016-09-25 18:08:31 +00:00
    m_target_blockchain_height = target_blockchain_height;
2014-06-04 20:50:13 +00:00
  }
//-----------------------------------------------------------------------------------------------
2015-07-10 20:09:32 +00:00
  uint64_t core::get_target_blockchain_height() const
  {
2014-06-04 20:50:13 +00:00
    return m_target_blockchain_height;
  }
2014-09-30 19:24:43 +00:00
//-----------------------------------------------------------------------------------------------
2017-01-08 23:50:29 +00:00
  std::time_t core::get_start_time() const
  {
    return start_time;
  }
  //-----------------------------------------------------------------------------------------------
2014-09-30 19:24:43 +00:00
  void core::graceful_exit()
  {
    raise(SIGTERM);
  }
2014-03-03 22:07:58 +00:00
}