The get_output_key method is commonly used when working with txs and their key images. Because the method is not const, passing the blockchain object through const& or pointers to const is not possible in this context. This is especially problematic in external projects (e.g., projects in moneroexamples) that use the Monero C++ API to operate on the blockchain and txs.
Thus, making get_output_key const will simplify moving the blockchain object around through const references and pointers to const objects.
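A minimal, self-contained sketch of the point, using stand-in types (the real Blockchain class and get_output_key signature differ): once the method is const-qualified, external code can hold the blockchain through const& or a pointer to const.

```cpp
#include <cstdint>
#include <map>
#include <utility>

// Simplified stand-in types for illustration only; the real Blockchain API
// is larger and the signature below is hypothetical.
struct public_key { unsigned char data[32]; };

class Blockchain
{
  std::map<std::pair<uint64_t, uint64_t>, public_key> m_outputs;
public:
  // const-qualified, so it is callable through const Blockchain& and
  // pointers to const Blockchain.
  public_key get_output_key(uint64_t amount, uint64_t index) const
  {
    return m_outputs.at({amount, index});
  }
};

// External code (e.g. a moneroexamples-style project) can now take the
// blockchain by const reference instead of requiring a mutable object.
public_key lookup_output(const Blockchain& bc, uint64_t amount, uint64_t index)
{
  return bc.get_output_key(amount, index);
}
```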
a48f2dab blockchain_prune_known_spent_data: blackball file is now optional (moneromooo-monero)
17b45725 Outputs where all amounts are known spent can now be pruned (moneromooo-monero)
Only for pre-RingCT outputs, for obvious reasons.
Note: DO NOT use a known spent list which includes outputs
which are not known spent. If the list includes any output
that's just strongly thought to be spent, but not provably
so, you risk finding yourself unable to sync past the point
where that output is spent.
I estimate only 200 MB saved on current mainnet though,
unless the new blackballing rule unearths a good number of
extra spent outputs from large amount sets.
149da42 db_lmdb: enable batch transactions by default (stoffu)
34cb6b4 add --regtest and --fixed-difficulty for regression testing (vicsn)
9e1403e update get_info RPC and bump RPC version (vicsn)
207b66e first new functional tests (vicsn)
This gets rid of the temporary precalc cache.
Also makes the RPC able to send data back in binary or JSON,
since there can be a lot of data.
This bumps the LMDB database format to v3, with migration.
The on_generateblocks RPC call combines functionality from the on_getblocktemplate and on_submitblock RPC calls to allow rapid block creation. Difficulty is set permanently to 1 for regtest.
Makes use of the FAKECHAIN network type, but takes hard fork heights from the mainchain.
The default reserve_size in the generate_blocks RPC call is now 1. If it is 0, the following error occurs: 'Failed to calculate offset for'.
Queries hard fork height info for other network types.
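For orientation, a hedged sketch of what the request and response for this call might look like. Only reserve_size and the difficulty/regtest behaviour come from the text above; the other field names are assumptions, and the authoritative definitions live in core_rpc_server_commands_defs.h.

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Illustrative shapes only; exact field names may differ from the real RPC.
struct generateblocks_request
{
  uint64_t    amount_of_blocks = 1;  // how many blocks to mine in one call (assumed name)
  std::string wallet_address;        // address credited with the block rewards (assumed name)
  uint64_t    reserve_size     = 1;  // keep > 0: a value of 0 triggers "Failed to calculate offset for"
};

struct generateblocks_response
{
  uint64_t                 height;   // chain height after the new blocks (assumed name)
  std::vector<std::string> blocks;   // hashes of the generated blocks (assumed name)
};
```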
This patch allows filtering out sensitive information for queries that rely on the pool state when running in restricted mode.
This filtering is only applied to data sent back to RPC queries. Results of inline commands typed locally in the daemon are not affected.
In practice, when running with `--restricted-rpc`:
* get_transaction_pool will list relayed transactions with the fields "last relayed time" and "received time" set to zero.
* get_transaction_pool will not list transactions that have do_not_relay set to true, and will not list key images that are used only for such transactions
* get_transaction_pool_hashes.bin will not list such transactions
* get_transaction_pool_stats will not count such transactions in any of the aggregated values that are computed
The implementation does not make filtering the default, so developers should be mindful of this if they add new RPC functionality.
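A simplified sketch of the filtering described above, using stand-in types and an assumed helper name rather than the daemon's actual code: do_not_relay transactions are dropped entirely, and the timing fields of the rest are zeroed.

```cpp
#include <cstdint>
#include <vector>

// Simplified stand-in for a pool entry; illustration only.
struct tx_info
{
  bool     do_not_relay;
  uint64_t receive_time;
  uint64_t last_relayed_time;
};

// Hypothetical helper showing the restricted-mode sanitisation.
std::vector<tx_info> sanitize_for_restricted_rpc(const std::vector<tx_info>& pool)
{
  std::vector<tx_info> out;
  for (const tx_info& tx : pool)
  {
    if (tx.do_not_relay)
      continue;                  // never reported through restricted RPC
    tx_info copy = tx;
    copy.receive_time      = 0;  // "received time" zeroed
    copy.last_relayed_time = 0;  // "last relayed time" zeroed
    out.push_back(copy);
  }
  return out;
}
```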
Fixes #2590.
Transactions in the txpool are marked when another transaction
is seen double spending one or more of its inputs.
This is then exposed wherever appropriate.
Note that being marked with this "double spend seen" flag does
NOT mean this transaction IS a double spend and will never be
mined: it just means that the network has seen at least one other
transaction spending at least one of the same inputs, so care
should be taken to wait for a few confirmations before acting
upon that transaction (i.e., mostly of use for merchants wanting
to accept unconfirmed transactions).
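A sketch of how a merchant might use the flag; only double_spend_seen comes from the text, the rest (type, field names, confirmation threshold) is an illustrative assumption.

```cpp
#include <cstdint>

// Simplified stand-in for the per-transaction info exposed over RPC.
struct pool_tx_info
{
  bool     double_spend_seen;
  uint64_t confirmations;
};

// Illustrative merchant-side policy: the flag alone does not prove a double
// spend, so a flagged transaction is simply held until it has a few
// confirmations instead of being accepted while unconfirmed.
bool safe_to_accept(const pool_tx_info& tx, uint64_t required_confirmations = 10)
{
  if (!tx.double_spend_seen)
    return true;  // whatever unconfirmed-acceptance policy normally applies
  return tx.confirmations >= required_confirmations;
}
```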
And optimize import startup:
Remember the start_height position during the initial count_blocks pass
to avoid having to re-read the entire file to arrive at start_height.
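A hypothetical sketch of that idea (names and the skip_one_block callback are made up): while counting blocks in the export file, capture the stream position of the block at start_height so the import pass can seek straight to it.

```cpp
#include <cstdint>
#include <istream>

// Counting pass that also records where the import should begin.
uint64_t count_blocks(std::istream& in, uint64_t start_height,
                      std::streampos& start_pos,
                      bool (*skip_one_block)(std::istream&))
{
  uint64_t count = 0;
  while (in.good())
  {
    if (count == start_height)
      start_pos = in.tellg();    // remember where the import should begin
    if (!skip_one_block(in))
      break;                     // end of file or malformed record
    ++count;
  }
  return count;
}
```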
If monerod is started with the default sync mode, set it to SAFE after
synchronization completes. Set it back to FAST if synchronization
restarts (e.g. because another peer has a longer blockchain).
If monerod is started with an explicit sync mode, none of this
automation takes effect.
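A sketch of that automation with assumed names (the real daemon logic is more involved): the mode is only adjusted when the user did not pass --db-sync-mode explicitly.

```cpp
// Illustrative stand-in for the daemon's sync-mode bookkeeping.
enum class db_sync_mode { FAST, SAFE };

struct sync_settings
{
  bool         explicitly_set;  // true if the user passed --db-sync-mode
  db_sync_mode mode;
};

void update_sync_mode(sync_settings& s, bool fully_synchronized)
{
  if (s.explicitly_set)
    return;                                          // user choice always wins
  s.mode = fully_synchronized ? db_sync_mode::SAFE   // durable writes once caught up
                              : db_sync_mode::FAST;  // fast writes while catching up
}
```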
When scanning for outputs used in a set of incoming blocks,
we expect that some of the inputs in their transactions will
not be found in the blockchain, as they could be in previous
blocks in that set. Those outputs will be scanned there at
a later point. In this case, we add a flag to control whether
an output not being found is expected or not.
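A hypothetical sketch of such a flag (types and names are illustrative, not the actual scanning code): a miss that is expected returns false for the caller to retry later, while an unexpected miss remains an error.

```cpp
#include <cstdint>
#include <map>
#include <stdexcept>
#include <utility>

// (amount, index) -> output data; stand-in for the real output index.
using output_map = std::map<std::pair<uint64_t, uint64_t>, int>;

bool find_output(const output_map& outputs, uint64_t amount, uint64_t index,
                 bool miss_is_expected)
{
  if (outputs.count({amount, index}))
    return true;
  if (miss_is_expected)
    return false;               // caller retries once the whole batch is added
  throw std::runtime_error("output not found");  // genuine error path
}
```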
3ff54bdd Check for correct thread before ending batch transaction (Howard Chu)
eaf8470b Must wait for previous batch to finish before starting new one (Howard Chu)
c903c554 Don't cache block height, always get from DB (Howard Chu)
eb1fb601 Tweak default db-sync-mode to fast:async:1 (Howard Chu)
0693cff9 Use batch transactions when syncing (Howard Chu)
25% of the outputs are selected from the last 5 days (if possible),
in order to avoid the common case of sending recently received
outputs again. The 25% and 5 days figures are subject to review later, since
it's just a wallet-level change.
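A minimal sketch of how such a split could be computed; the 25% ratio and 5-day window come from the text above, everything else (names, rounding) is illustrative.

```cpp
#include <cstddef>
#include <cstdint>

constexpr uint64_t RECENT_WINDOW_SECONDS = 5 * 86400;  // "last 5 days"
constexpr double   RECENT_RATIO          = 0.25;       // "25% of the outputs"

// How many of the requested fake outputs should be drawn from the recent window.
std::size_t recent_fake_out_count(std::size_t total_fake_outs)
{
  return static_cast<std::size_t>(total_fake_outs * RECENT_RATIO + 0.5);
}

// Whether an output counts as "recent" for that selection.
bool is_recent(uint64_t output_timestamp, uint64_t now)
{
  return now >= output_timestamp && now - output_timestamp <= RECENT_WINDOW_SECONDS;
}
```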
Since this queries block heights for blocks that may or may not
exist, queries for non-existing blocks would throw an exception,
and that would slow down the loop a lot: 7 seconds to go through
a 30-hash list.
Fix this by adding an optional return block height to block_exists
and using this instead. Actual errors will still throw an
exception.
This also cuts down on log exception spam.
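A simplified sketch of the pattern with stand-in types (the real BlockchainDB interface differs): the height comes back through an optional out-parameter, and a miss is a normal false return rather than an exception.

```cpp
#include <cstdint>
#include <cstring>
#include <map>

// Minimal stand-in for a block hash.
struct hash256
{
  unsigned char data[32];
  bool operator<(const hash256& o) const { return std::memcmp(data, o.data, 32) < 0; }
};

class block_index
{
  std::map<hash256, uint64_t> m_heights;
public:
  bool block_exists(const hash256& h, uint64_t* height = nullptr) const
  {
    auto it = m_heights.find(h);
    if (it == m_heights.end())
      return false;            // expected case for hashes we do not have
    if (height)
      *height = it->second;    // saves a second lookup, and never throws
    return true;
  }
};
```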
Since these are needed at the same time as the output pubkeys,
this is a whole lot faster, and takes less space. Only outputs
of 0 amount store the commitment. When reading other outputs,
a fake commitment is regenerated on the fly. This avoids having
to rewrite the database to add space for fake commitments for
existing outputs.
This code relies on two things:
- LMDB must support fixed size records per key, rather than
per database (i.e., all records on key 0 are the same size, all
records for non-0 keys are the same size, but records from key 0
and non-0 keys do have different sizes).
- the commitment must be directly after the rest of the data
in outkey and output_data_t.
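An illustrative layout only; the real structures live in the blockchain DB code. The key point is that the commitment is the last member, so records stored without a commitment are simply shorter by sizeof(commitment), and a deterministic "fake" commitment to the public amount is regenerated on read (the real code uses the RingCT zero commitment; the stand-in below only marks the idea).

```cpp
#include <cstdint>
#include <cstring>

struct key32 { unsigned char data[32]; };

// Simplified layout: commitment must stay the LAST member.
struct output_data_t
{
  key32    pubkey;       // output public key
  uint64_t unlock_time;
  uint64_t height;
  key32    commitment;   // stored on disk only for 0-amount (RingCT) outputs
};

// Placeholder for regenerating a commitment to a public amount on the fly.
key32 fake_commitment_to_amount(uint64_t amount)
{
  key32 c{};
  std::memcpy(c.data, &amount, sizeof(amount));  // placeholder, NOT the real math
  return c;
}
```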