// Copyright (c) 2014-2019, The Monero Project
//
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without modification, are
// permitted provided that the following conditions are met:
//
// 1. Redistributions of source code must retain the above copyright notice, this list of
//    conditions and the following disclaimer.
//
// 2. Redistributions in binary form must reproduce the above copyright notice, this list
//    of conditions and the following disclaimer in the documentation and/or other
//    materials provided with the distribution.
//
// 3. Neither the name of the copyright holder nor the names of its contributors may be
//    used to endorse or promote products derived from this software without specific
//    prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
//
// Parts of this file are originally copyright (c) 2012-2013 The Cryptonote developers

// IP blocking adapted from Boolberry

#include <algorithm>
#include <boost/bind.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>
#include <boost/filesystem/operations.hpp>
#include <boost/optional/optional.hpp>
#include <boost/thread/thread.hpp>
#include <boost/uuid/uuid_io.hpp>
#include <atomic>
#include <functional>
#include <limits>
#include <memory>
#include <tuple>
#include <vector>

#include "version.h"
#include "string_tools.h"
#include "common/util.h"
#include "common/dns_utils.h"
#include "common/pruning.h"
#include "net/error.h"
#include "net/net_helper.h"
#include "math_helper.h"
2018-12-16 17:57:44 +00:00
#include "misc_log_ex.h"
2014-03-03 22:07:58 +00:00
#include "p2p_protocol_defs.h"
#include "net/local_ip.h"
#include "crypto/crypto.h"
#include "storages/levin_abstract_invoke2.h"
2017-10-28 15:06:43 +00:00
#include "cryptonote_core/cryptonote_core.h"
2018-12-16 17:57:44 +00:00
#include "net/parse.h"
2014-09-10 18:01:30 +00:00
2018-04-21 09:30:55 +00:00
#include <miniupnp/miniupnpc/miniupnpc.h>
#include <miniupnp/miniupnpc/upnpcommands.h>
#include <miniupnp/miniupnpc/upnperrors.h>
2014-04-09 12:14:35 +00:00
#undef MONERO_DEFAULT_LOG_CATEGORY
#define MONERO_DEFAULT_LOG_CATEGORY "net.p2p"

#define NET_MAKE_IP(b1,b2,b3,b4) ((LPARAM)(((DWORD)(b1)<<24)+((DWORD)(b2)<<16)+((DWORD)(b3)<<8)+((DWORD)(b4))))
#define MIN_WANTED_SEED_NODES 12

namespace nodetool
{
template<class t_payload_net_handler>
node_server<t_payload_net_handler>::~node_server()
{
// tcp server uses io_service in destructor, and every zone uses
// io_service from public zone.
for (auto current = m_network_zones.begin(); current != m_network_zones.end(); /* below */)
{
if (current->first != epee::net_utils::zone::public_)
current = m_network_zones.erase(current);
else
++current;
}
}
//-----------------------------------------------------------------------------------
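// Resolves a host name or literal address (with an optional port) and appends every
// resolved endpoint to seed_nodes; defined further below.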
inline bool append_net_address(std::vector<epee::net_utils::network_address> & seed_nodes, std::string const & addr, uint16_t default_port);
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
void node_server<t_payload_net_handler>::init_options(boost::program_options::options_description& desc)
{
command_line::add_arg(desc, arg_p2p_bind_ip);
command_line::add_arg(desc, arg_p2p_bind_ipv6_address);
command_line::add_arg(desc, arg_p2p_bind_port, false);
command_line::add_arg(desc, arg_p2p_bind_port_ipv6, false);
command_line::add_arg(desc, arg_p2p_use_ipv6);
command_line::add_arg(desc, arg_p2p_ignore_ipv4);
command_line::add_arg(desc, arg_p2p_external_port);
command_line::add_arg(desc, arg_p2p_allow_local_ip);
command_line::add_arg(desc, arg_p2p_add_peer);
command_line::add_arg(desc, arg_p2p_add_priority_node);
command_line::add_arg(desc, arg_p2p_add_exclusive_node);
command_line::add_arg(desc, arg_p2p_seed_node);
command_line::add_arg(desc, arg_tx_proxy);
command_line::add_arg(desc, arg_anonymous_inbound);
command_line::add_arg(desc, arg_p2p_hide_my_port);
command_line::add_arg(desc, arg_no_sync);
command_line::add_arg(desc, arg_no_igd);
command_line::add_arg(desc, arg_igd);
command_line::add_arg(desc, arg_out_peers);
command_line::add_arg(desc, arg_in_peers);
command_line::add_arg(desc, arg_tos_flag);
command_line::add_arg(desc, arg_limit_rate_up);
command_line::add_arg(desc, arg_limit_rate_down);
command_line::add_arg(desc, arg_limit_rate);
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
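// Loads any previously saved peerlist from P2P_NET_DATA_FILENAME and initialises the
// public zone's support flags.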
bool node_server<t_payload_net_handler>::init_config()
{
TRY_ENTRY();
auto storage = peerlist_storage::open(m_config_folder + "/" + P2P_NET_DATA_FILENAME);
if (storage)
m_peerlist_storage = std::move(*storage);

m_network_zones[epee::net_utils::zone::public_].m_config.m_support_flags = P2P_SUPPORT_FLAGS;

m_first_connection_maker_call = true;
CATCH_ENTRY_L0("node_server::init_config", false);
return true;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
void node_server<t_payload_net_handler>::for_each_connection(std::function<bool(typename t_payload_net_handler::connection_context&, peerid_type, uint32_t)> f)
{
for(auto& zone : m_network_zones)
{
zone.second.m_net_server.get_config_object().foreach_connection([&](p2p_connection_context& cntx){
return f(cntx, cntx.peer_id, cntx.support_flags);
});
}
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::for_connection(const boost::uuids::uuid &connection_id, std::function<bool(typename t_payload_net_handler::connection_context&, peerid_type, uint32_t)> f)
{
for(auto& zone : m_network_zones)
{
const bool result = zone.second.m_net_server.get_config_object().for_connection(connection_id, [&](p2p_connection_context& cntx){
return f(cntx, cntx.peer_id, cntx.support_flags);
});
if (result)
return true;
}
return false;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
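// Returns false if the address matches a blocked host or a blocked IPv4 subnet whose
// entry has not yet expired; expired entries are purged along the way. If t is non-null
// it receives the remaining block time in seconds.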
bool node_server<t_payload_net_handler>::is_remote_host_allowed(const epee::net_utils::network_address &address, time_t *t)
{
CRITICAL_REGION_LOCAL(m_blocked_hosts_lock);

const time_t now = time(nullptr);

// look in the hosts list
auto it = m_blocked_hosts.find(address.host_str());
if (it != m_blocked_hosts.end())
{
if (now >= it->second)
{
m_blocked_hosts.erase(it);
MCLOG_CYAN(el::Level::Info, "global", "Host " << address.host_str() << " unblocked.");
it = m_blocked_hosts.end();
}
else
{
if (t)
*t = it->second - now;
return false;
}
}

// manually loop in subnets
if (address.get_type_id() == epee::net_utils::address_type::ipv4)
{
auto ipv4_address = address.template as<epee::net_utils::ipv4_network_address>();
std::map<epee::net_utils::ipv4_network_subnet, time_t>::iterator it;
for (it = m_blocked_subnets.begin(); it != m_blocked_subnets.end(); )
{
if (now >= it->second)
{
// log before erasing: erase() invalidates the iterator
MCLOG_CYAN(el::Level::Info, "global", "Subnet " << it->first.host_str() << " unblocked.");
it = m_blocked_subnets.erase(it);
continue;
}
if (it->first.matches(ipv4_address))
{
if (t)
*t = it->second - now;
return false;
}
++it;
}
}

// not found in hosts or subnets, allowed
return true;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
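// Blocks a host for the given number of seconds (clamped so the expiry time cannot
// overflow) and closes any existing connections to it across all network zones.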
bool node_server<t_payload_net_handler>::block_host(const epee::net_utils::network_address &addr, time_t seconds)
{
if(!addr.is_blockable())
return false;

const time_t now = time(nullptr);

CRITICAL_REGION_LOCAL(m_blocked_hosts_lock);
time_t limit;
if (now > std::numeric_limits<time_t>::max() - seconds)
limit = std::numeric_limits<time_t>::max();
else
limit = now + seconds;
m_blocked_hosts[addr.host_str()] = limit;

// drop any connection to that address. This should only have to look into
// the zone related to the connection, but really make sure everything is
// swept ...
std::vector<boost::uuids::uuid> conns;
for(auto& zone : m_network_zones)
{
zone.second.m_net_server.get_config_object().foreach_connection([&](const p2p_connection_context& cntxt)
{
if (cntxt.m_remote_address.is_same_host(addr))
{
conns.push_back(cntxt.m_connection_id);
}
return true;
});
for (const auto &c: conns)
zone.second.m_net_server.get_config_object().close(c);
conns.clear();
}

MCLOG_CYAN(el::Level::Info, "global", "Host " << addr.host_str() << " blocked.");
return true;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::unblock_host(const epee::net_utils::network_address &address)
{
CRITICAL_REGION_LOCAL(m_blocked_hosts_lock);
auto i = m_blocked_hosts.find(address.host_str());
if (i == m_blocked_hosts.end())
return false;
m_blocked_hosts.erase(i);
MCLOG_CYAN(el::Level::Info, "global", "Host " << address.host_str() << " unblocked.");
return true;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
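// Same as block_host, but for an IPv4 subnet: records the expiry time and drops all
// current connections whose remote address falls inside the subnet.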
bool node_server<t_payload_net_handler>::block_subnet(const epee::net_utils::ipv4_network_subnet &subnet, time_t seconds)
{
const time_t now = time(nullptr);
CRITICAL_REGION_LOCAL(m_blocked_hosts_lock);
time_t limit;
if (now > std::numeric_limits<time_t>::max() - seconds)
limit = std::numeric_limits<time_t>::max();
else
limit = now + seconds;
m_blocked_subnets[subnet] = limit;
// drop any connection to that subnet. This should only have to look into
// the zone related to the connection, but really make sure everything is
// swept ...
std::vector<boost::uuids::uuid> conns;
for(auto& zone : m_network_zones)
{
zone.second.m_net_server.get_config_object().foreach_connection([&](const p2p_connection_context& cntxt)
{
if (cntxt.m_remote_address.get_type_id() != epee::net_utils::ipv4_network_address::get_type_id())
return true;
auto ipv4_address = cntxt.m_remote_address.template as<epee::net_utils::ipv4_network_address>();
if (subnet.matches(ipv4_address))
{
conns.push_back(cntxt.m_connection_id);
}
return true;
});
for (const auto &c: conns)
zone.second.m_net_server.get_config_object().close(c);
conns.clear();
}
MCLOG_CYAN(el::Level::Info, "global", "Subnet " << subnet.host_str() << " blocked.");
return true;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::unblock_subnet(const epee::net_utils::ipv4_network_subnet &subnet)
{
CRITICAL_REGION_LOCAL(m_blocked_hosts_lock);
auto i = m_blocked_subnets.find(subnet);
if (i == m_blocked_subnets.end())
return false;
m_blocked_subnets.erase(i);
MCLOG_CYAN(el::Level::Info, "global", "Subnet " << subnet.host_str() << " unblocked.");
return true;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
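// Adds 'score' to the host's failure counter; once the counter exceeds
// P2P_IP_FAILS_BEFORE_BLOCK the host is blocked and the counter is reset to half the threshold.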
bool node_server<t_payload_net_handler>::add_host_fail(const epee::net_utils::network_address &address, unsigned int score)
{
if(!address.is_blockable())
return false;

CRITICAL_REGION_LOCAL(m_host_fails_score_lock);
uint64_t fails = m_host_fails_score[address.host_str()] += score;
MDEBUG("Host " << address.host_str() << " fail score=" << fails);
if(fails > P2P_IP_FAILS_BEFORE_BLOCK)
{
auto it = m_host_fails_score.find(address.host_str());
CHECK_AND_ASSERT_MES(it != m_host_fails_score.end(), false, "internal error");
it->second = P2P_IP_FAILS_BEFORE_BLOCK/2;
block_host(address);
}
return true;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
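// Reads the P2P command line options into the node configuration: network type, bind
// addresses and ports, UPnP/IGD mode, peer lists, rate limits, and any proxy or
// anonymous-inbound zones.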
bool node_server<t_payload_net_handler>::handle_command_line(
const boost::program_options::variables_map& vm
)
{
bool testnet = command_line::get_arg(vm, cryptonote::arg_testnet_on);
bool stagenet = command_line::get_arg(vm, cryptonote::arg_stagenet_on);
m_nettype = testnet ? cryptonote::TESTNET : stagenet ? cryptonote::STAGENET : cryptonote::MAINNET;

network_zone& public_zone = m_network_zones[epee::net_utils::zone::public_];
public_zone.m_connect = &public_connect;
public_zone.m_bind_ip = command_line::get_arg(vm, arg_p2p_bind_ip);
public_zone.m_bind_ipv6_address = command_line::get_arg(vm, arg_p2p_bind_ipv6_address);
public_zone.m_port = command_line::get_arg(vm, arg_p2p_bind_port);
public_zone.m_port_ipv6 = command_line::get_arg(vm, arg_p2p_bind_port_ipv6);
public_zone.m_can_pingback = true;
m_external_port = command_line::get_arg(vm, arg_p2p_external_port);
m_allow_local_ip = command_line::get_arg(vm, arg_p2p_allow_local_ip);
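// --igd takes enabled/disabled/delayed; combining it with the older --no-igd flag in a
// contradictory way is rejected as a fatal configuration error.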
const bool has_no_igd = command_line::get_arg(vm, arg_no_igd);
const std::string sigd = command_line::get_arg(vm, arg_igd);
if (sigd == "enabled")
{
if (has_no_igd)
{
MFATAL("Cannot have both --" << arg_no_igd.name << " and --" << arg_igd.name << " enabled");
return false;
}
m_igd = igd;
}
else if (sigd == "disabled")
{
m_igd = no_igd;
}
else if (sigd == "delayed")
{
if (has_no_igd && !command_line::is_arg_defaulted(vm, arg_igd))
{
MFATAL("Cannot have both --" << arg_no_igd.name << " and --" << arg_igd.name << " delayed");
return false;
}
m_igd = has_no_igd ? no_igd : delayed_igd;
}
else
{
MFATAL("Invalid value for --" << arg_igd.name << ", expected enabled, disabled or delayed");
return false;
}
m_offline = command_line::get_arg(vm, cryptonote::arg_offline);
m_use_ipv6 = command_line::get_arg(vm, arg_p2p_use_ipv6);
m_require_ipv4 = !command_line::get_arg(vm, arg_p2p_ignore_ipv4);
public_zone.m_notifier = cryptonote::levin::notify{
public_zone.m_net_server.get_io_service(), public_zone.m_net_server.get_config_shared(), nullptr, true
};
if (command_line::has_arg(vm, arg_p2p_add_peer))
{
std::vector<std::string> perrs = command_line::get_arg(vm, arg_p2p_add_peer);
for(const std::string& pr_str: perrs)
{
nodetool::peerlist_entry pe = AUTO_VAL_INIT(pe);
pe.id = crypto::rand<uint64_t>();
const uint16_t default_port = cryptonote::get_config(m_nettype).P2P_DEFAULT_PORT;
expect<epee::net_utils::network_address> adr = net::get_network_address(pr_str, default_port);
if (adr)
{
add_zone(adr->get_zone());
pe.adr = std::move(*adr);
m_command_line_peers.push_back(std::move(pe));
continue;
}
CHECK_AND_ASSERT_MES(
adr == net::error::unsupported_address, false, "Bad address (\"" << pr_str << "\"): " << adr.error().message()
);

std::vector<epee::net_utils::network_address> resolved_addrs;
bool r = append_net_address(resolved_addrs, pr_str, default_port);
CHECK_AND_ASSERT_MES(r, false, "Failed to parse or resolve address from string: " << pr_str);
for (const epee::net_utils::network_address& addr : resolved_addrs)
{
pe.id = crypto::rand<uint64_t>();
pe.adr = addr;
m_command_line_peers.push_back(pe);
}
}
}
if (command_line::has_arg(vm,arg_p2p_add_exclusive_node))
{
if (!parse_peers_and_add_to_container(vm, arg_p2p_add_exclusive_node, m_exclusive_peers))
return false;
}

if (command_line::has_arg(vm, arg_p2p_add_priority_node))
{
if (!parse_peers_and_add_to_container(vm, arg_p2p_add_priority_node, m_priority_peers))
return false;
}

if (command_line::has_arg(vm, arg_p2p_seed_node))
{
if (!parse_peers_and_add_to_container(vm, arg_p2p_seed_node, m_seed_nodes))
return false;
}

if(command_line::has_arg(vm, arg_p2p_hide_my_port))
m_hide_my_port = true;

if (command_line::has_arg(vm, arg_no_sync))
m_payload_handler.set_no_sync(true);

if ( !set_max_out_peers(public_zone, command_line::get_arg(vm, arg_out_peers) ) )
return false;
else
m_payload_handler.set_max_out_peers(public_zone.m_config.m_net_config.max_out_connection_count);

if ( !set_max_in_peers(public_zone, command_line::get_arg(vm, arg_in_peers) ) )
return false;

if ( !set_tos_flag(vm, command_line::get_arg(vm, arg_tos_flag) ) )
return false;

if ( !set_rate_up_limit(vm, command_line::get_arg(vm, arg_limit_rate_up) ) )
return false;

if ( !set_rate_down_limit(vm, command_line::get_arg(vm, arg_limit_rate_down) ) )
return false;
if ( !set_rate_limit(vm, command_line::get_arg(vm, arg_limit_rate) ) )
return false;
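// Set up one outbound-only zone per --tx-proxy entry; when a proxy requests noise, the
// padded notify payload is created once and each such zone receives its own clone of it.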
epee::byte_slice noise = nullptr;
auto proxies = get_proxies(vm);
if (!proxies)
return false;
for (auto& proxy : *proxies)
{
network_zone& zone = add_zone(proxy.zone);
if (zone.m_connect != nullptr)
{
MERROR("Listed --" << arg_tx_proxy.name << " twice with " << epee::net_utils::zone_to_string(proxy.zone));
return false;
}
zone.m_connect = &socks_connect;
zone.m_proxy_address = std::move(proxy.address);
if (!set_max_out_peers(zone, proxy.max_connections))
return false;
epee::byte_slice this_noise = nullptr;
if (proxy.noise)
{
static_assert(sizeof(epee::levin::bucket_head2) < CRYPTONOTE_NOISE_BYTES, "noise bytes too small");
if (noise.empty())
noise = epee::levin::make_noise_notify(CRYPTONOTE_NOISE_BYTES);
this_noise = noise.clone();
}
zone.m_notifier = cryptonote::levin::notify{
zone.m_net_server.get_io_service(), zone.m_net_server.get_config_shared(), std::move(this_noise), false
};
}
for (const auto& zone : m_network_zones)
{
if (zone.second.m_connect == nullptr)
{
MERROR("Set outgoing peer for " << epee::net_utils::zone_to_string(zone.first) << " but did not set --" << arg_tx_proxy.name);
return false;
}
}
auto inbounds = get_anonymous_inbounds(vm);
if (!inbounds)
return false;

const std::size_t tx_relay_zones = m_network_zones.size();
for (auto& inbound : *inbounds)
{
network_zone& zone = add_zone(inbound.our_address.get_zone());
if (!zone.m_bind_ip.empty())
{
MERROR("Listed --" << arg_anonymous_inbound.name << " twice with " << epee::net_utils::zone_to_string(inbound.our_address.get_zone()) << " network");
return false;
}
if (zone.m_connect == nullptr && tx_relay_zones <= 1)
{
MERROR("Listed --" << arg_anonymous_inbound.name << " without listing any --" << arg_tx_proxy.name << ". The latter is necessary for sending local txes over anonymity networks");
return false;
}
zone.m_bind_ip = std::move(inbound.local_ip);
zone.m_port = std::move(inbound.local_port);
zone.m_net_server.set_default_remote(std::move(inbound.default_remote));
zone.m_our_address = std::move(inbound.our_address);
if (!set_max_in_peers(zone, inbound.max_connections))
return false;
}
return true;
}
//-----------------------------------------------------------------------------------
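// Resolves 'addr' (hostname, IPv4, or bracketed IPv6, with an optional port) via the
// system resolver and appends every resulting endpoint to seed_nodes, using default_port
// when none is given.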
inline bool append_net_address(
std::vector<epee::net_utils::network_address> & seed_nodes
, std::string const & addr
, uint16_t default_port
)
{
using namespace boost::asio;

std::string host = addr;
std::string port = std::to_string(default_port);
size_t colon_pos = addr.find_last_of(':');
size_t dot_pos = addr.find_last_of('.');
size_t square_brace_pos = addr.find('[');
// IPv6 will have colons regardless. IPv6 and IPv4 address:port will have a colon but also either a . or a [
// as IPv6 addresses specified as address:port are to be specified as "[addr:addr:...:addr]:port"
// One may also specify an IPv6 address as simply "[addr:addr:...:addr]" without the port; in that case
// the square braces will be stripped here.
if ((std::string::npos != colon_pos && std::string::npos != dot_pos) || std::string::npos != square_brace_pos)
{
net::get_network_address_host_and_port(addr, host, port);
}
MINFO("Resolving node address: host=" << host << ", port=" << port);

io_service io_srv;
ip::tcp::resolver resolver(io_srv);
ip::tcp::resolver::query query(host, port, boost::asio::ip::tcp::resolver::query::canonical_name);
boost::system::error_code ec;
ip::tcp::resolver::iterator i = resolver.resolve(query, ec);
CHECK_AND_ASSERT_MES(!ec, false, "Failed to resolve host name '" << host << "': " << ec.message() << ':' << ec.value());

ip::tcp::resolver::iterator iend;
for (; i != iend; ++i)
{
ip::tcp::endpoint endpoint = *i;
if (endpoint.address().is_v4())
{
epee::net_utils::network_address na{epee::net_utils::ipv4_network_address{boost::asio::detail::socket_ops::host_to_network_long(endpoint.address().to_v4().to_ulong()), endpoint.port()}};
seed_nodes.push_back(na);
MINFO("Added node: " << na.str());
}
else
{
epee::net_utils::network_address na{epee::net_utils::ipv6_network_address{endpoint.address().to_v6(), endpoint.port()}};
seed_nodes.push_back(na);
MINFO("Added node: " << na.str());
}
}
return true;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
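// Returns the hardcoded seed node list for the requested network; FAKECHAIN gets an empty set.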
std::set<std::string> node_server<t_payload_net_handler>::get_seed_nodes(cryptonote::network_type nettype) const
{
std::set<std::string> full_addrs;
if (nettype == cryptonote::TESTNET)
{
full_addrs.insert("212.83.175.67:28080");
full_addrs.insert("5.9.100.248:28080");
full_addrs.insert("163.172.182.165:28080");
full_addrs.insert("195.154.123.123:28080");
full_addrs.insert("212.83.172.165:28080");
full_addrs.insert("192.110.160.146:28080");
}
else if (nettype == cryptonote::STAGENET)
{
full_addrs.insert("162.210.173.150:38080");
full_addrs.insert("162.210.173.151:38080");
full_addrs.insert("192.110.160.146:38080");
}
else if (nettype == cryptonote::FAKECHAIN)
{
}
else
{
full_addrs.insert("107.152.130.98:18080");
full_addrs.insert("212.83.175.67:18080");
full_addrs.insert("5.9.100.248:18080");
full_addrs.insert("163.172.182.165:18080");
full_addrs.insert("161.67.132.39:18080");
full_addrs.insert("198.74.231.92:18080");
full_addrs.insert("195.154.123.123:18080");
full_addrs.insert("212.83.172.165:18080");
full_addrs.insert("192.110.160.146:18080");
}
return full_addrs;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
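// Returns the zone entry if it already exists, otherwise creates it sharing the public
// zone's io_service.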
typename node_server<t_payload_net_handler>::network_zone& node_server<t_payload_net_handler>::add_zone(const epee::net_utils::zone zone)
{
const auto zone_ = m_network_zones.lower_bound(zone);
if (zone_ != m_network_zones.end() && zone_->first == zone)
return zone_->second;
network_zone& public_zone = m_network_zones[epee::net_utils::zone::public_];
return m_network_zones.emplace_hint(zone_, std::piecewise_construct, std::make_tuple(zone), std::tie(public_zone.m_net_server.get_io_service()))->second;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
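// Node initialization: processes the command line and assembles the seed node list,
// resolving DNS seed hostnames with a fallback to the hardcoded defaults when too few
// are found.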
bool node_server<t_payload_net_handler>::init(const boost::program_options::variables_map& vm)
{
std::set<std::string> full_addrs;
bool res = handle_command_line(vm);
CHECK_AND_ASSERT_MES(res, false, "Failed to handle command line");

m_fallback_seed_nodes_added = false;

if (m_nettype == cryptonote::TESTNET)
{
memcpy(&m_network_id, &::config::testnet::NETWORK_ID, 16);
full_addrs = get_seed_nodes(cryptonote::TESTNET);
}
else if (m_nettype == cryptonote::STAGENET)
{
memcpy(&m_network_id, &::config::stagenet::NETWORK_ID, 16);
full_addrs = get_seed_nodes(cryptonote::STAGENET);
}
else
{
memcpy(&m_network_id, &::config::NETWORK_ID, 16);
if (m_exclusive_peers.empty() && !m_offline)
{
// for each hostname in the seed nodes list, attempt to DNS resolve and
// add the result addresses as seed nodes
// TODO: at some point add IPv6 support, but that won't be relevant
// for some time yet.
std::vector<std::vector<std::string>> dns_results;
dns_results.resize(m_seed_nodes_list.size());
// some libc implementation provide only a very small stack
// for threads, e.g. musl only gives +- 80kb, which is not
// enough to do a resolve with unbound. we request a stack
// of 1 mb, which should be plenty
boost::thread::attributes thread_attributes;
thread_attributes.set_stack_size(1024*1024);
std::list<boost::thread> dns_threads;
uint64_t result_index = 0;
for (const std::string& addr_str : m_seed_nodes_list)
{
boost::thread th = boost::thread(thread_attributes, [=, &dns_results, &addr_str]
{
MDEBUG("dns_threads[" << result_index << "] created for: " << addr_str);
// TODO: care about dnssec avail/valid
bool avail, valid;
std::vector<std::string> addr_list;
try
{
addr_list = tools::DNSResolver::instance().get_ipv4(addr_str, avail, valid);
MDEBUG("dns_threads[" << result_index << "] DNS resolve done");
boost::this_thread::interruption_point();
}
catch(const boost::thread_interrupted&)
{
// thread interruption request
// even if we now have results, finish thread without setting
// result variables, which are now out of scope in main thread
MWARNING("dns_threads[" << result_index << "] interrupted");
return;
}
MINFO("dns_threads[" << result_index << "] addr_str: " << addr_str << " number of results: " << addr_list.size());
dns_results[result_index] = addr_list;
});
dns_threads.push_back(std::move(th));
++result_index;
}
MDEBUG("dns_threads created, now waiting for completion or timeout of " << CRYPTONOTE_DNS_TIMEOUT_MS << "ms");
boost::chrono::system_clock::time_point deadline = boost::chrono::system_clock::now() + boost::chrono::milliseconds(CRYPTONOTE_DNS_TIMEOUT_MS);
uint64_t i = 0;
for (boost::thread& th : dns_threads)
{
if (! th.try_join_until(deadline))
{
MWARNING("dns_threads[" << i << "] timed out, sending interrupt");
th.interrupt();
}
++i;
}

i = 0;
for (const auto& result : dns_results)
{
MDEBUG("DNS lookup for " << m_seed_nodes_list[i] << ": " << result.size() << " results");
// if no results for node, thread's lookup likely timed out
if (result.size())
{
for (const auto& addr_string : result)
full_addrs.insert(addr_string + ":" + std::to_string(cryptonote::get_config(m_nettype).P2P_DEFAULT_PORT));
}
++i;
}

// append the fallback nodes if we have too few seed nodes to start with
if (full_addrs.size() < MIN_WANTED_SEED_NODES)
{
if (full_addrs.empty())
MINFO("DNS seed node lookup either timed out or failed, falling back to defaults");
else
MINFO("Not enough DNS seed nodes found, using fallback defaults too");
for (const auto &peer: get_seed_nodes(cryptonote::MAINNET))
full_addrs.insert(peer);
m_fallback_seed_nodes_added = true;
}
}
}

for (const auto& full_addr : full_addrs)
{
MDEBUG("Seed node: " << full_addr);
append_net_address(m_seed_nodes, full_addr, cryptonote::get_config(m_nettype).P2P_DEFAULT_PORT);
}
MDEBUG("Number of seed nodes: " << m_seed_nodes.size());
m_config_folder = command_line::get_arg(vm, cryptonote::arg_data_dir);
network_zone& public_zone = m_network_zones.at(epee::net_utils::zone::public_);
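// when the public zone is bound to a non-default P2P port, keep a per-port data directory so several daemons can coexist on one host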
if ((m_nettype == cryptonote::MAINNET && public_zone.m_port != std::to_string(::config::P2P_DEFAULT_PORT))
|| (m_nettype == cryptonote::TESTNET && public_zone.m_port != std::to_string(::config::testnet::P2P_DEFAULT_PORT))
|| (m_nettype == cryptonote::STAGENET && public_zone.m_port != std::to_string(::config::stagenet::P2P_DEFAULT_PORT))) {
m_config_folder = m_config_folder + "/" + public_zone.m_port;
}
res = init_config();
CHECK_AND_ASSERT_MES(res, false, "Failed to init config.");
for (auto& zone : m_network_zones)
{
res = zone.second.m_peerlist.init(m_peerlist_storage.take_zone(zone.first), m_allow_local_ip);
CHECK_AND_ASSERT_MES(res, false, "Failed to init peerlist.");
}
for(const auto& p: m_command_line_peers)
m_network_zones.at(p.adr.get_zone()).m_peerlist.append_with_peer_white(p);
// all peers are now setup
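// debug aid: with CRYPTONOTE_PRUNING_DEBUG_SPOOF_SEED defined, pop every white and gray peer and re-append it, so each stored entry goes through the append path again (presumably picking up a spoofed pruning seed there)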
#ifdef CRYPTONOTE_PRUNING_DEBUG_SPOOF_SEED
for (auto& zone : m_network_zones)
{
std::list<peerlist_entry> plw;
while (zone.second.m_peerlist.get_white_peers_count())
{
plw.push_back(peerlist_entry());
zone.second.m_peerlist.get_white_peer_by_index(plw.back(), 0);
zone.second.m_peerlist.remove_from_peer_white(plw.back());
}
for (auto &e:plw)
zone.second.m_peerlist.append_with_peer_white(e);
std::list<peerlist_entry> plg;
while (zone.second.m_peerlist.get_gray_peers_count())
{
plg.push_back(peerlist_entry());
zone.second.m_peerlist.get_gray_peer_by_index(plg.back(), 0);
zone.second.m_peerlist.remove_from_peer_gray(plg.back());
}
for (auto &e:plg)
zone.second.m_peerlist.append_with_peer_gray(e);
}
#endif
// only in case we are really sure that we have an externally visible IP
m_have_address = true;
m_last_stat_request_time = 0;
//configure self
public_zone.m_net_server.set_threads_prefix("P2P"); // all zones use these threads/asio::io_service
// from here onwards, it's online stuff
if (m_offline)
return res;
//try to bind
m_ssl_support = epee::net_utils::ssl_support_t::e_ssl_support_disabled;
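// the P2P listeners created below are started with SSL explicitly disabled (e_ssl_support_disabled is passed to init_server)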
for (auto& zone : m_network_zones)
{
zone.second.m_net_server.get_config_object().set_handler(this);
zone.second.m_net_server.get_config_object().m_invoke_timeout = P2P_DEFAULT_INVOKE_TIMEOUT;
if (!zone.second.m_bind_ip.empty())
{
std::string ipv6_addr = "";
std::string ipv6_port = "";
zone.second.m_net_server.set_connection_filter(this);
MINFO("Binding (IPv4) on " << zone.second.m_bind_ip << ":" << zone.second.m_port);
if (!zone.second.m_bind_ipv6_address.empty() && m_use_ipv6)
{
ipv6_addr = zone.second.m_bind_ipv6_address;
ipv6_port = zone.second.m_port_ipv6;
MINFO("Binding (IPv6) on " << zone.second.m_bind_ipv6_address << ":" << zone.second.m_port_ipv6);
}
res = zone.second.m_net_server.init_server(zone.second.m_port, zone.second.m_bind_ip, ipv6_port, ipv6_addr, m_use_ipv6, m_require_ipv4, epee::net_utils::ssl_support_t::e_ssl_support_disabled);
CHECK_AND_ASSERT_MES(res, false, "Failed to bind server");
}
}
m_listening_port = public_zone.m_net_server.get_binded_port();
MLOG_GREEN(el::Level::Info, "Net service bound (IPv4) to " << public_zone.m_bind_ip << ":" << m_listening_port);
if (m_use_ipv6)
{
m_listening_port_ipv6 = public_zone.m_net_server.get_binded_port_ipv6();
MLOG_GREEN(el::Level::Info, "Net service bound (IPv6) to " << public_zone.m_bind_ipv6_address << ":" << m_listening_port_ipv6);
}
if(m_external_port)
MDEBUG("External port defined as " << m_external_port);
// add UPnP port mapping
if(m_igd == igd)
{
add_upnp_port_mapping_v4(m_listening_port);
if (m_use_ipv6)
{
add_upnp_port_mapping_v6(m_listening_port_ipv6);
}
}
return res;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
typename node_server<t_payload_net_handler>::payload_net_handler& node_server<t_payload_net_handler>::get_payload_object()
{
return m_payload_handler;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::run()
{
// creating thread to log number of connections
mPeersLoggerThread.reset(new boost::thread([&]()
{
_note("Thread monitor number of peers - start");
const network_zone& public_zone = m_network_zones.at(epee::net_utils::zone::public_);
while (!is_closing && !public_zone.m_net_server.is_stop_signal_sent())
{ // main loop of thread
//number_of_peers = m_net_server.get_config_object().get_connections_count();
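// walk every open connection in each zone and count incoming vs outgoing peers; the totals are published via the per-zone counters below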
for (auto& zone : m_network_zones)
{
unsigned int number_of_in_peers = 0;
unsigned int number_of_out_peers = 0;
zone.second.m_net_server.get_config_object().foreach_connection([&](const p2p_connection_context& cntxt)
{
if (cntxt.m_is_income)
{
++number_of_in_peers;
}
else
{
++number_of_out_peers;
}
return true;
}); // lambda
zone.second.m_current_number_of_in_peers = number_of_in_peers;
zone.second.m_current_number_of_out_peers = number_of_out_peers;
}
boost::this_thread::sleep_for(boost::chrono::seconds(1));
} // main loop of thread
_note("Thread monitor number of peers - done");
})); // lambda
network_zone& public_zone = m_network_zones.at(epee::net_utils::zone::public_);
public_zone.m_net_server.add_idle_handler(boost::bind(&node_server<t_payload_net_handler>::idle_worker, this), 1000);
public_zone.m_net_server.add_idle_handler(boost::bind(&t_payload_net_handler::on_idle, &m_payload_handler), 1000);
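// both the p2p idle worker and the payload handler's on_idle callback are scheduled to run roughly every 1000 ms on the public zone's server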
// here you can set the worker thread count
int thrds_count = 10;
boost::thread::attributes attrs;
attrs.set_stack_size(THREAD_STACK_SIZE);
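// give the worker threads an explicit stack size (THREAD_STACK_SIZE); the platform default may be too small for some handlers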
//go to loop
MINFO("Run net_service loop( " << thrds_count << " threads)...");
if(!public_zone.m_net_server.run_server(thrds_count, true, attrs))
{
LOG_ERROR("Failed to run net tcp server!");
}
MINFO("net_service loop stopped.");
return true;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
uint64_t node_server<t_payload_net_handler>::get_public_connections_count()
{
auto public_zone = m_network_zones.find(epee::net_utils::zone::public_);
if (public_zone == m_network_zones.end())
return 0;
return public_zone->second.m_net_server.get_config_object().get_connections_count();
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
uint64_t node_server<t_payload_net_handler>::get_connections_count()
{
std::uint64_t count = 0;
for (auto& zone : m_network_zones)
count += zone.second.m_net_server.get_config_object().get_connections_count();
return count;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::deinit()
{
kill();
if (!m_offline)
{
for(auto& zone : m_network_zones)
zone.second.m_net_server.deinit_server();
// remove UPnP port mapping
if(m_igd == igd)
delete_upnp_port_mapping(m_listening_port);
}
return store_config();
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::store_config()
{
TRY_ENTRY();
if (!tools::create_directories_if_necessary(m_config_folder))
{
MWARNING("Failed to create data directory \"" << m_config_folder);
return false;
}
peerlist_types active{};
for (auto& zone : m_network_zones)
zone.second.m_peerlist.get_peerlist(active);
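// collect the peerlists of all zones into a single container, then persist everything in one state file below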
const std::string state_file_path = m_config_folder + "/" + P2P_NET_DATA_FILENAME;
if (!m_peerlist_storage.store(state_file_path, active))
{
MWARNING("Failed to save config to file " << state_file_path);
return false;
}
CATCH_ENTRY_L0("node_server::store", false);
return true;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::send_stop_signal()
{
MDEBUG("[node] sending stop signal");
for (auto& zone : m_network_zones)
zone.second.m_net_server.send_stop_signal();
MDEBUG("[node] Stop signal sent");
for (auto& zone : m_network_zones)
{
std::list<boost::uuids::uuid> connection_ids;
zone.second.m_net_server.get_config_object().foreach_connection([&](const p2p_connection_context& cntxt) {
connection_ids.push_back(cntxt.m_connection_id);
return true;
});
for (const auto &connection_id: connection_ids)
zone.second.m_net_server.get_config_object().close(connection_id);
}
m_payload_handler.stop();
return true;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::do_handshake_with_peer(peerid_type& pi, p2p_connection_context& context_, bool just_take_peerlist)
{
network_zone& zone = m_network_zones.at(context_.m_remote_address.get_zone());
typename COMMAND_HANDSHAKE::request arg;
typename COMMAND_HANDSHAKE::response rsp;
get_local_node_data(arg.node_data, zone);
m_payload_handler.get_payload_sync_data(arg.payload_data);
epee::simple_event ev;
std::atomic<bool> hsh_result(false);
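// the handshake request is sent asynchronously; the handler below stores the outcome in hsh_result and raises ev so this function can wait for completion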
bool r = epee::net_utils::async_invoke_remote_command2<typename COMMAND_HANDSHAKE::response>(context_.m_connection_id, COMMAND_HANDSHAKE::ID, arg, zone.m_net_server.get_config_object(),
[this, &pi, &ev, &hsh_result, &just_take_peerlist, &context_](int code, const typename COMMAND_HANDSHAKE::response& rsp, p2p_connection_context& context)
{
epee::misc_utils::auto_scope_leave_caller scope_exit_handler = epee::misc_utils::create_scope_leave_handler([&](){ev.raise();});
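// raise the event on every exit path of this handler so the waiting caller is always released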
if(code < 0)
{
LOG_WARNING_CC(context, "COMMAND_HANDSHAKE invoke failed. (" << code << ", " << epee::levin::get_err_descr(code) << ")");
return;
}
if(rsp.node_data.network_id != m_network_id)
{
LOG_WARNING_CC(context, "COMMAND_HANDSHAKE Failed, wrong network! (" << rsp.node_data.network_id << "), closing connection.");
return;
}
if(!handle_remote_peerlist(rsp.local_peerlist_new, rsp.node_data.local_time, context))
{
LOG_WARNING_CC(context, "COMMAND_HANDSHAKE: failed to handle_remote_peerlist(...), closing connection.");
add_host_fail(context.m_remote_address);
return;
}
hsh_result = true;
if(!just_take_peerlist)
{
if(!m_payload_handler.process_payload_sync_data(rsp.payload_data, context, true))
{
LOG_WARNING_CC(context, "COMMAND_HANDSHAKE invoked, but process_payload_sync_data returned false, dropping connection.");
hsh_result = false;
return;
}
pi = context.peer_id = rsp.node_data.peer_id;
context.m_rpc_port = rsp.node_data.rpc_port;
context.m_rpc_credits_per_hash = rsp.node_data.rpc_credits_per_hash;
m_network_zones.at(context.m_remote_address.get_zone()).m_peerlist.set_peer_just_seen(rsp.node_data.peer_id, context.m_remote_address, context.m_pruning_seed, context.m_rpc_port, context.m_rpc_credits_per_hash);
// detect a connection to ourselves: drop it if the remote peer id matches any of our own zones
for (auto const& zone : m_network_zones)
{
if(rsp.node_data.peer_id == zone.second.m_config.m_peer_id)
{
LOG_DEBUG_CC(context, "Connection to self detected, dropping connection");
hsh_result = false;
return;
}
2014-03-03 22:07:58 +00:00
}
LOG_INFO_CC(context, "New connection handshaked, pruning seed " << epee::string_tools::to_string_hex(context.m_pruning_seed));
LOG_DEBUG_CC(context, " COMMAND_HANDSHAKE INVOKED OK");
}else
{
LOG_DEBUG_CC(context, " COMMAND_HANDSHAKE(AND CLOSE) INVOKED OK");
}
context_ = context;
}, P2P_DEFAULT_HANDSHAKE_INVOKE_TIMEOUT);
if(r)
{
ev.wait();
}
if(!hsh_result)
{
LOG_WARNING_CC(context_, "COMMAND_HANDSHAKE Failed");
m_network_zones.at(context_.m_remote_address.get_zone()).m_net_server.get_config_object().close(context_.m_connection_id);
}
else
{
try_get_support_flags(context_, [](p2p_connection_context& flags_context, const uint32_t& support_flags)
{
flags_context.support_flags = support_flags;
});
}
return hsh_result;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::do_peer_timed_sync(const epee::net_utils::connection_context_base& context_, peerid_type peer_id)
{
typename COMMAND_TIMED_SYNC::request arg = AUTO_VAL_INIT(arg);
m_payload_handler.get_payload_sync_data(arg.payload_data);
network_zone& zone = m_network_zones.at(context_.m_remote_address.get_zone());
bool r = epee::net_utils::async_invoke_remote_command2<typename COMMAND_TIMED_SYNC::response>(context_.m_connection_id, COMMAND_TIMED_SYNC::ID, arg, zone.m_net_server.get_config_object(),
[this](int code, const typename COMMAND_TIMED_SYNC::response& rsp, p2p_connection_context& context)
{
context.m_in_timedsync = false;
if(code < 0)
{
LOG_WARNING_CC(context, "COMMAND_TIMED_SYNC invoke failed. (" << code << ", " << epee::levin::get_err_descr(code) << ")");
return;
}
if(!handle_remote_peerlist(rsp.local_peerlist_new, rsp.local_time, context))
{
LOG_WARNING_CC(context, "COMMAND_TIMED_SYNC: failed to handle_remote_peerlist(...), closing connection.");
m_network_zones.at(context.m_remote_address.get_zone()).m_net_server.get_config_object().close(context.m_connection_id );
add_host_fail(context.m_remote_address);
}
if(!context.m_is_income)
m_network_zones.at(context.m_remote_address.get_zone()).m_peerlist.set_peer_just_seen(context.peer_id, context.m_remote_address, context.m_pruning_seed, context.m_rpc_port, context.m_rpc_credits_per_hash);
if (!m_payload_handler.process_payload_sync_data(rsp.payload_data, context, false))
{
m_network_zones.at(context.m_remote_address.get_zone()).m_net_server.get_config_object().close(context.m_connection_id );
}
});
if(!r)
{
LOG_WARNING_CC(context_, "COMMAND_TIMED_SYNC Failed");
return false;
}
return true;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
size_t node_server<t_payload_net_handler>::get_random_index_with_fixed_probability(size_t max_index)
{
//divide by zero workaround
if(!max_index)
return 0;
size_t x = crypto::rand<size_t>()%(max_index+1);
size_t res = (x*x*x)/(max_index*max_index); //parabola \/
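// x is drawn uniformly from [0, max_index]; x^3 / max_index^2 stays within the same range but is heavily skewed toward small values, so low indices are returned far more often than high ones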
MDEBUG("Random connection index=" << res << "(x="<< x << ", max_index=" << max_index << ")");
return res;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::is_peer_used(const peerlist_entry& peer)
{
for(const auto& zone : m_network_zones)
if(zone.second.m_config.m_peer_id == peer.id)
return true; // don't make connections to ourselves
bool used = false;
for(auto& zone : m_network_zones)
{
zone.second.m_net_server.get_config_object().foreach_connection([&](const p2p_connection_context& cntxt)
{
if(cntxt.peer_id == peer.id || (!cntxt.m_is_income && peer.adr == cntxt.m_remote_address))
{
used = true;
return false;//stop enumerating
}
return true;
});
if(used)
return true;
}
return false;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
2017-02-09 00:11:58 +00:00
bool node_server<t_payload_net_handler>::is_peer_used(const anchor_peerlist_entry& peer)
{
2018-12-16 17:57:44 +00:00
for(auto& zone : m_network_zones) {
if(zone.second.m_config.m_peer_id == peer.id) {
return true; //don't make connections to ourselves
2017-02-09 00:11:58 +00:00
}
2018-12-16 17:57:44 +00:00
bool used = false;
zone.second.m_net_server.get_config_object().foreach_connection([&](const p2p_connection_context& cntxt)
{
if(cntxt.peer_id == peer.id || (!cntxt.m_is_income && peer.adr == cntxt.m_remote_address))
{
used = true;
return false;//stop enumerating
}
return true;
});
if (used)
return true;
}
return false;
2017-02-09 00:11:58 +00:00
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
2017-05-27 10:35:54 +00:00
bool node_server<t_payload_net_handler>::is_addr_connected(const epee::net_utils::network_address& peer)
2014-03-03 22:07:58 +00:00
{
2018-12-16 17:57:44 +00:00
const auto zone = m_network_zones.find(peer.get_zone());
if (zone == m_network_zones.end())
return false;
2014-03-03 22:07:58 +00:00
bool connected = false;
2018-12-16 17:57:44 +00:00
zone->second.m_net_server.get_config_object().foreach_connection([&](const p2p_connection_context& cntxt)
2014-03-03 22:07:58 +00:00
{
2017-05-27 10:35:54 +00:00
if(!cntxt.m_is_income && peer == cntxt.m_remote_address)
2014-03-03 22:07:58 +00:00
{
connected = true;
return false;//stop enumerating
}
return true;
});
return connected;
}
2014-05-25 17:06:40 +00:00
#define LOG_PRINT_CC_PRIORITY_NODE(priority, con, msg) \
do { \
if (priority) {\
2017-01-01 16:34:23 +00:00
LOG_INFO_CC(con, "[priority]" << msg); \
2014-05-25 17:06:40 +00:00
} else {\
2017-01-01 16:34:23 +00:00
LOG_INFO_CC(con, msg); \
2014-05-25 17:06:40 +00:00
} \
} while(0)
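// The do { ... } while(0) wrapper above makes the macro usable as a single
// statement (e.g. in an if/else without braces); it forwards the connection
// context and message to LOG_INFO_CC, prefixing "[priority]" for priority nodes.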
2014-03-03 22:07:58 +00:00
template<class t_payload_net_handler>
2017-05-27 10:35:54 +00:00
bool node_server<t_payload_net_handler>::try_to_connect_and_handshake_with_new_peer(const epee::net_utils::network_address& na, bool just_take_peerlist, uint64_t last_seen_stamp, PeerType peer_type, uint64_t first_seen_stamp)
2014-03-03 22:07:58 +00:00
{
2018-12-16 17:57:44 +00:00
network_zone& zone = m_network_zones.at(na.get_zone());
if (zone.m_connect == nullptr) // outgoing connections in zone not possible
return false;
if (zone.m_current_number_of_out_peers == zone.m_config.m_net_config.max_out_connection_count) // out peers limit
2015-12-14 04:54:39 +00:00
{
return false;
}
2018-12-16 17:57:44 +00:00
else if (zone.m_current_number_of_out_peers > zone.m_config.m_net_config.max_out_connection_count)
2015-12-14 04:54:39 +00:00
{
2018-12-16 17:57:44 +00:00
zone.m_net_server.get_config_object().del_out_connections(1);
--(zone.m_current_number_of_out_peers); // atomic variable, update time = 1s
2015-12-14 04:54:39 +00:00
return false;
}
2018-12-16 17:57:44 +00:00
2017-05-27 10:35:54 +00:00
MDEBUG("Connecting to " << na.str() << "(peer_type=" << peer_type << ", last_seen: "
2014-05-25 17:06:40 +00:00
<< (last_seen_stamp ? epee::misc_utils::get_time_interval_string(time(NULL) - last_seen_stamp):"never")
<< ")...");
2014-03-03 22:07:58 +00:00
epee: add SSL support
RPC connections now have optional transparent SSL.
An optional private key and certificate file can be passed,
using the --{rpc,daemon}-ssl-private-key and
--{rpc,daemon}-ssl-certificate options. Those have as
argument a path to a PEM-format private key and
certificate, respectively.
If not given, a temporary self-signed certificate will be used.
SSL can be enabled or disabled using --{rpc,daemon}-ssl, which
accepts autodetect (default), disabled or enabled.
Access can be restricted to particular certificates using the
--rpc-ssl-allowed-certificates, which takes a list of
paths to PEM encoded certificates. This can allow a wallet to
connect to only the daemon they think they're connected to,
by forcing SSL and listing the paths to the known good
certificates.
To generate long term certificates:
openssl genrsa -out /tmp/KEY 4096
openssl req -new -key /tmp/KEY -out /tmp/REQ
openssl x509 -req -days 999999 -sha256 -in /tmp/REQ -signkey /tmp/KEY -out /tmp/CERT
/tmp/KEY is the private key, and /tmp/CERT is the certificate,
both in PEM format. /tmp/REQ can be removed. Adjust the last
command to set the expiration date, etc., as needed. This doesn't
make a whole lot of sense for monero anyway, since most servers
will run with one-time temporary self-signed certificates.
SSL support is transparent, so all communication is done on the
existing ports, with SSL autodetection. This means you can start
using an SSL daemon now, but you should not enforce SSL yet or
nothing will talk to you.
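An illustrative invocation (using only the options named above, with the
key/certificate generated by the openssl commands above) for a daemon
enforcing SSL with a long-term key pair:
  monerod --rpc-ssl enabled \
          --rpc-ssl-private-key /tmp/KEY \
          --rpc-ssl-certificate /tmp/CERT
Access can then be restricted to known certificates by listing them with
--rpc-ssl-allowed-certificates, as described above.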
2018-06-14 22:44:48 +00:00
auto con = zone.m_connect(zone, na, m_ssl_support);
2018-12-16 17:57:44 +00:00
if(!con)
2014-03-03 22:07:58 +00:00
{
2014-05-25 17:06:40 +00:00
bool is_priority = is_priority_node(na);
2018-12-16 17:57:44 +00:00
LOG_PRINT_CC_PRIORITY_NODE(is_priority, bool(con), "Connect failed to " << na.str()
2014-03-03 22:07:58 +00:00
/*<< ", try " << try_count*/);
//m_peerlist.set_peer_unreachable(pe);
return false;
}
2014-05-25 17:06:40 +00:00
2018-12-16 17:57:44 +00:00
con->m_anchor = peer_type == anchor;
2014-03-03 22:07:58 +00:00
peerid_type pi = AUTO_VAL_INIT(pi);
2018-12-16 17:57:44 +00:00
bool res = do_handshake_with_peer(pi, *con, just_take_peerlist);
2014-05-25 17:06:40 +00:00
2014-03-03 22:07:58 +00:00
if(!res)
{
2014-05-25 17:06:40 +00:00
bool is_priority = is_priority_node(na);
2018-12-16 17:57:44 +00:00
LOG_PRINT_CC_PRIORITY_NODE(is_priority, *con, "Failed to HANDSHAKE with peer "
2017-05-27 10:35:54 +00:00
<< na.str()
2014-03-03 22:07:58 +00:00
/*<< ", try " << try_count*/);
2019-07-04 21:44:28 +00:00
zone.m_net_server.get_config_object().close(con->m_connection_id);
2014-03-03 22:07:58 +00:00
return false;
}
2014-05-25 17:06:40 +00:00
2014-03-03 22:07:58 +00:00
if(just_take_peerlist)
{
2018-12-16 17:57:44 +00:00
zone.m_net_server.get_config_object().close(con->m_connection_id);
LOG_DEBUG_CC(*con, "CONNECTION HANDSHAKED OK AND CLOSED.");
2014-03-03 22:07:58 +00:00
return true;
}
peerlist_entry pe_local = AUTO_VAL_INIT(pe_local);
pe_local.adr = na;
pe_local.id = pi;
2014-08-20 15:57:29 +00:00
time_t last_seen;
time(&last_seen);
pe_local.last_seen = static_cast<int64_t>(last_seen);
2018-12-16 17:57:44 +00:00
pe_local.pruning_seed = con->m_pruning_seed;
2019-02-24 08:47:49 +00:00
pe_local.rpc_port = con->m_rpc_port;
daemon, wallet: new pay for RPC use system
Daemons intended for public use can be set up to require payment
in the form of hashes in exchange for RPC service. This enables
public daemons to receive payment for their work over a large
number of calls. This system behaves similarly to a pool, so
payment takes the form of valid blocks every so often, yielding
a large one-off payment, rather than constant micropayments.
This system can also be used by third parties as a "paywall"
layer, where users of a service can pay for use by mining Monero
to the service provider's address. An example of this for web
site access is Primo, a Monero mining based website "paywall":
https://github.com/selene-kovri/primo
This has some advantages:
- incentive to run a node providing RPC services, thereby promoting the availability of third party nodes for those who can't run their own
- incentive to run your own node instead of using a third party's, thereby promoting decentralization
- decentralized: payment is done between a client and server, with no third party needed
- private: since the system is "pay as you go", you don't need to identify yourself to claim a long lived balance
- no payment occurs on the blockchain, so there is no extra transactional load
- one may mine with a beefy server, and use those credits from a phone, by reusing the client ID (at the cost of some privacy)
- no barrier to entry: anyone may run an RPC node, and your expected revenue depends on how much work you do
- Sybil resistant: if you run 1000 idle RPC nodes, you don't magically get more revenue
- no large credit balance maintained on servers, so they have no incentive to exit scam
- you can use any/many node(s), since there's little cost in switching servers
- market based prices: competition between servers to lower costs
- incentive for a distributed third party node system: if some public nodes are overused/slow, traffic can move to others
- increases network security
- helps counteract mining pools' share of the network hash rate
- zero incentive for a payer to "double spend" since a reorg does not give any money back to the miner
And some disadvantages:
- low power clients will have difficulty mining (but one can optionally mine in advance and/or with a faster machine)
- payment is "random", so a server might go a long time without a block before getting one
- a public node's overall expected payment may be small
Public nodes are expected to compete to find a suitable level for
cost of service.
The daemon can be set up this way to require payment for RPC services:
monerod --rpc-payment-address 4xxxxxx \
--rpc-payment-credits 250 --rpc-payment-difficulty 1000
These values are an example only.
The --rpc-payment-difficulty switch selects how hard each "share" should
be, similar to a mining pool. The higher the difficulty, the fewer
shares a client will find.
The --rpc-payment-credits switch selects how many credits are awarded
for each share a client finds.
Considering both options, clients will be awarded credits/difficulty
credits for every hash they calculate. For example, in the command line
above, 0.25 credits per hash. A client mining at 100 H/s will therefore
get an average of 25 credits per second.
For reference, in the current implementation, a credit is enough to
sync 20 blocks, so a 100 H/s client that's just starting to use Monero
and uses this daemon will be able to sync 500 blocks per second.
The wallet can be set to automatically mine if connected to a daemon
which requires payment for RPC usage. It will try to keep a balance
of 50000 credits, stopping mining when it's at this level, and starting
again as credits are spent. With the example above, a new client will
mine this many credits in about half an hour, and this target is enough
to sync 500000 blocks (currently about a third of the monero blockchain).
There are three new settings in the wallet:
- credits-target: this is the amount of credits a wallet will try to
reach before stopping mining. The default of 0 means 50000 credits.
- auto-mine-for-rpc-payment-threshold: this controls the minimum
credit rate which the wallet considers worth mining for. If the
daemon credits less than this ratio, the wallet will consider mining
to be not worth it. In the example above, the rate is 0.25
- persistent-rpc-client-id: if set, this allows the wallet to reuse
a client id across runs. This means a public node can tell that a
wallet connecting now is the same one that connected previously, but
it allows the wallet to keep its credit balance from one run to the
next. Since the wallet only mines to keep a small credit balance,
this is not normally worth doing. However, someone may want to mine
on a fast server, and use that credit balance on a low power device
such as a phone. If left unset, a new client ID is generated at
each wallet start, for privacy reasons.
To mine and use a credit balance on two different devices, you can
use the --rpc-client-secret-key switch. A wallet's client secret key
can be found using the new rpc_payments command in the wallet.
Note: anyone knowing your RPC client secret key is able to use your
credit balance.
The wallet has a few new commands too:
- start_mining_for_rpc: start mining to acquire more credits,
regardless of the auto mining settings
- stop_mining_for_rpc: stop mining to acquire more credits
- rpc_payments: display information about current credits with
the currently selected daemon
The node has an extra command:
- rpc_payments: display information about clients and their
balances
The node will forget about any balance for clients which have
been inactive for 6 months. Balances carry over on node restart.
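Worked example using the figures above: each hash yields
credits / difficulty = 250 / 1000 = 0.25 credits, so a 100 H/s client earns
about 25 credits per second, and reaching the 50000-credit default target
takes roughly 50000 / 25 = 2000 seconds, i.e. about half an hour, matching
the estimate above.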
2018-02-11 15:15:56 +00:00
pe_local.rpc_credits_per_hash = con->m_rpc_credits_per_hash;
2018-12-16 17:57:44 +00:00
zone.m_peerlist.append_with_peer_white(pe_local);
2014-03-03 22:07:58 +00:00
//update last seen and push it to peerlist manager
2017-02-09 00:11:58 +00:00
anchor_peerlist_entry ape = AUTO_VAL_INIT(ape);
ape.adr = na;
ape.id = pi;
ape.first_seen = first_seen_stamp ? first_seen_stamp : time(nullptr);
2018-12-16 17:57:44 +00:00
zone.m_peerlist.append_with_peer_anchor(ape);
2019-05-16 20:34:22 +00:00
zone.m_notifier.new_out_connection();
2017-02-09 00:11:58 +00:00
2018-12-16 17:57:44 +00:00
LOG_DEBUG_CC(*con, "CONNECTION HANDSHAKED OK.");
2014-03-03 22:07:58 +00:00
return true;
}
2014-05-25 17:06:40 +00:00
2017-01-20 23:59:04 +00:00
template<class t_payload_net_handler>
2017-05-27 10:35:54 +00:00
bool node_server<t_payload_net_handler>::check_connection_and_handshake_with_peer(const epee::net_utils::network_address& na, uint64_t last_seen_stamp)
2017-01-20 23:59:04 +00:00
{
2018-12-16 17:57:44 +00:00
network_zone& zone = m_network_zones.at(na.get_zone());
if (zone.m_connect == nullptr)
return false;
2017-05-27 10:35:54 +00:00
LOG_PRINT_L1("Connecting to " << na.str() << "(last_seen: "
2017-01-20 23:59:04 +00:00
<< (last_seen_stamp ? epee::misc_utils::get_time_interval_string(time(NULL) - last_seen_stamp):"never")
<< ")...");
2018-06-14 22:44:48 +00:00
auto con = zone.m_connect(zone, na, m_ssl_support);
2018-12-16 17:57:44 +00:00
if (!con) {
2017-01-20 23:59:04 +00:00
bool is_priority = is_priority_node(na);
2018-12-16 17:57:44 +00:00
LOG_PRINT_CC_PRIORITY_NODE(is_priority, p2p_connection_context{}, "Connect failed to " << na.str());
2017-01-20 23:59:04 +00:00
return false;
}
2018-12-16 17:57:44 +00:00
con->m_anchor = false;
2017-01-20 23:59:04 +00:00
peerid_type pi = AUTO_VAL_INIT(pi);
2018-12-16 17:57:44 +00:00
const bool res = do_handshake_with_peer(pi, *con, true);
2017-01-20 23:59:04 +00:00
if (!res) {
bool is_priority = is_priority_node(na);
2018-12-16 17:57:44 +00:00
LOG_PRINT_CC_PRIORITY_NODE(is_priority, *con, "Failed to HANDSHAKE with peer " << na.str());
2019-07-04 21:44:28 +00:00
zone.m_net_server.get_config_object().close(con->m_connection_id);
2017-01-20 23:59:04 +00:00
return false;
}
2018-12-16 17:57:44 +00:00
zone.m_net_server.get_config_object().close(con->m_connection_id);
2017-01-20 23:59:04 +00:00
2018-12-16 17:57:44 +00:00
LOG_DEBUG_CC(*con, "CONNECTION HANDSHAKED OK AND CLOSED.");
2017-01-20 23:59:04 +00:00
return true;
}
2014-05-25 17:06:40 +00:00
#undef LOG_PRINT_CC_PRIORITY_NODE
2015-11-23 17:34:55 +00:00
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
2017-05-27 10:35:54 +00:00
bool node_server<t_payload_net_handler>::is_addr_recently_failed(const epee::net_utils::network_address& addr)
2015-11-23 17:34:55 +00:00
{
CRITICAL_REGION_LOCAL(m_conn_fails_cache_lock);
2019-09-16 19:20:23 +00:00
auto it = m_conn_fails_cache.find(addr.host_str());
2015-11-23 17:34:55 +00:00
if(it == m_conn_fails_cache.end())
return false;
if(time(NULL) - it->second > P2P_FAILED_ADDR_FORGET_SECONDS)
return false;
else
return true;
}
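// Note: the cache consulted above is assumed (illustrative, not shown here) to be
// populated with the failure time when an outgoing connection attempt fails, e.g.
//   m_conn_fails_cache[addr.host_str()] = time(NULL);
// so only failures within the last P2P_FAILED_ADDR_FORGET_SECONDS cause the
// address to be skipped.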
2014-03-03 22:07:58 +00:00
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
2017-02-09 00:11:58 +00:00
bool node_server<t_payload_net_handler>::make_new_connection_from_anchor_peerlist(const std::vector<anchor_peerlist_entry>& anchor_peerlist)
{
for (const auto& pe: anchor_peerlist) {
2018-04-29 22:30:51 +00:00
_note("Considering connecting (out) to anchor peer: " << peerid_type(pe.id) << " " << pe.adr.str());
2017-02-09 00:11:58 +00:00
if(is_peer_used(pe)) {
_note("Peer is used");
continue;
}
2017-05-27 10:35:54 +00:00
if(!is_remote_host_allowed(pe.adr)) {
2017-02-09 00:11:58 +00:00
continue;
}
if(is_addr_recently_failed(pe.adr)) {
continue;
}
2017-08-20 20:15:53 +00:00
MDEBUG("Selected peer: " << peerid_to_string(pe.id) << " " << pe.adr.str()
2017-02-09 00:11:58 +00:00
<< "[peer_type=" << anchor
<< "] first_seen: " << epee::misc_utils::get_time_interval_string(time(NULL) - pe.first_seen));
if(!try_to_connect_and_handshake_with_new_peer(pe.adr, false, 0, anchor, pe.first_seen)) {
_note("Handshake failed");
continue;
}
return true;
}
return false;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
2018-12-16 17:57:44 +00:00
bool node_server<t_payload_net_handler>::make_new_connection_from_peerlist(network_zone& zone, bool use_white_list)
2014-03-03 22:07:58 +00:00
{
2018-04-29 22:30:51 +00:00
size_t max_random_index = 0;
2014-03-03 22:07:58 +00:00
std::set<size_t> tried_peers;
size_t try_count = 0;
size_t rand_count = 0;
2018-12-16 17:57:44 +00:00
while(rand_count < (max_random_index+1)*3 && try_count < 10 && !zone.m_net_server.is_stop_signal_sent())
2014-03-03 22:07:58 +00:00
{
++rand_count;
2017-02-28 16:39:39 +00:00
size_t random_index;
2018-04-29 22:30:51 +00:00
const uint32_t next_needed_pruning_stripe = m_payload_handler.get_next_needed_pruning_stripe().second;
2017-02-28 16:39:39 +00:00
2019-07-05 18:19:40 +00:00
// build a set of all the /16 we're connected to, and prefer a peer that's not in that set
std::set<uint32_t> classB;
if (&zone == &m_network_zones.at(epee::net_utils::zone::public_)) // at returns reference, not copy
{
zone.m_net_server.get_config_object().foreach_connection([&](const p2p_connection_context& cntxt)
{
if (cntxt.m_remote_address.get_type_id() == epee::net_utils::ipv4_network_address::get_type_id())
{
const epee::net_utils::network_address na = cntxt.m_remote_address;
const uint32_t actual_ip = na.as<const epee::net_utils::ipv4_network_address>().ip();
classB.insert(actual_ip & 0x0000ffff);
}
return true;
});
}
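// Example: assuming epee packs the IPv4 address in network byte order, masking
// with 0x0000ffff keeps the first two octets, so peers 203.0.113.5 and 203.0.42.7
// both map to the same /16 ("class B") entry and the second would be skipped in
// the first filtering pass below.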
2018-04-29 22:30:51 +00:00
std::deque<size_t> filtered;
const size_t limit = use_white_list ? 20 : std::numeric_limits<size_t>::max();
2019-07-05 18:19:40 +00:00
size_t idx = 0, skipped = 0;
for (int step = 0; step < 2; ++step)
{
bool skip_duplicate_class_B = step == 0;
zone.m_peerlist.foreach (use_white_list, [&classB, &filtered, &idx, &skipped, skip_duplicate_class_B, limit, next_needed_pruning_stripe](const peerlist_entry &pe){
if (filtered.size() >= limit)
return false;
bool skip = false;
if (skip_duplicate_class_B && pe.adr.get_type_id() == epee::net_utils::ipv4_network_address::get_type_id())
{
const epee::net_utils::network_address na = pe.adr;
uint32_t actual_ip = na.as<const epee::net_utils::ipv4_network_address>().ip();
skip = classB.find(actual_ip & 0x0000ffff) != classB.end();
}
if (skip)
++skipped;
else if (next_needed_pruning_stripe == 0 || pe.pruning_seed == 0)
filtered.push_back(idx);
else if (next_needed_pruning_stripe == tools::get_pruning_stripe(pe.pruning_seed))
filtered.push_front(idx);
++idx;
return true;
});
if (skipped == 0 || !filtered.empty())
break;
if (skipped)
2019-08-21 14:00:43 +00:00
MINFO("Skipping " << skipped << " possible peers as they share a class B with existing peers");
2019-07-05 18:19:40 +00:00
}
2018-04-29 22:30:51 +00:00
if (filtered.empty())
{
MDEBUG("No available peer in " << (use_white_list ? "white" : "gray") << " list filtered by " << next_needed_pruning_stripe);
return false;
}
if (use_white_list)
{
// if using the white list, we first pick in the set of peers we've already been using earlier
random_index = get_random_index_with_fixed_probability(std::min<uint64_t>(filtered.size() - 1, 20));
CRITICAL_REGION_LOCAL(m_used_stripe_peers_mutex);
if (next_needed_pruning_stripe > 0 && next_needed_pruning_stripe <= (1ul << CRYPTONOTE_PRUNING_LOG_STRIPES) && !m_used_stripe_peers[next_needed_pruning_stripe-1].empty())
{
const epee::net_utils::network_address na = m_used_stripe_peers[next_needed_pruning_stripe-1].front();
m_used_stripe_peers[next_needed_pruning_stripe-1].pop_front();
for (size_t i = 0; i < filtered.size(); ++i)
{
peerlist_entry pe;
2018-12-16 17:57:44 +00:00
if (zone.m_peerlist.get_white_peer_by_index(pe, filtered[i]) && pe.adr == na)
2018-04-29 22:30:51 +00:00
{
MDEBUG("Reusing stripe " << next_needed_pruning_stripe << " peer " << pe.adr.str());
random_index = i;
break;
}
}
}
2017-02-28 16:39:39 +00:00
}
2018-04-29 22:30:51 +00:00
else
2019-04-03 05:10:24 +00:00
random_index = crypto::rand_idx(filtered.size());
2017-02-28 16:39:39 +00:00
2018-04-29 22:30:51 +00:00
CHECK_AND_ASSERT_MES(random_index < filtered.size(), false, "random_index < filtered.size() failed!!");
random_index = filtered[random_index];
2018-12-16 17:57:44 +00:00
CHECK_AND_ASSERT_MES(random_index < (use_white_list ? zone.m_peerlist.get_white_peers_count() : zone.m_peerlist.get_gray_peers_count()),
2018-04-29 22:30:51 +00:00
false, "random_index < peers size failed!!");
2014-03-03 22:07:58 +00:00
if(tried_peers.count(random_index))
continue;
tried_peers.insert(random_index);
peerlist_entry pe = AUTO_VAL_INIT(pe);
2018-12-16 17:57:44 +00:00
bool r = use_white_list ? zone.m_peerlist.get_white_peer_by_index(pe, random_index):zone.m_peerlist.get_gray_peer_by_index(pe, random_index);
2014-03-03 22:07:58 +00:00
CHECK_AND_ASSERT_MES(r, false, "Failed to get random peer from peerlist(white:" << use_white_list << ")");
++try_count;
2018-04-29 22:30:51 +00:00
_note("Considering connecting (out) to " << (use_white_list ? "white" : "gray") << " list peer: " <<
peerid_to_string(pe.id) << " " << pe.adr.str() << ", pruning seed " << epee::string_tools::to_string_hex(pe.pruning_seed) <<
" (stripe " << next_needed_pruning_stripe << " needed)");
2015-02-12 19:59:39 +00:00
if(is_peer_used(pe)) {
2015-12-14 04:54:39 +00:00
_note("Peer is used");
2014-03-03 22:07:58 +00:00
continue;
2015-12-14 04:54:39 +00:00
}
2014-03-03 22:07:58 +00:00
2017-05-27 10:35:54 +00:00
if(!is_remote_host_allowed(pe.adr))
2015-11-23 17:34:55 +00:00
continue;
if(is_addr_recently_failed(pe.adr))
continue;
2017-08-20 20:15:53 +00:00
MDEBUG("Selected peer: " << peerid_to_string(pe.id) << " " << pe.adr.str()
2018-04-29 22:30:51 +00:00
<< ", pruning seed " << epee::string_tools::to_string_hex(pe.pruning_seed) << " "
2017-02-09 00:11:58 +00:00
<< "[peer_list=" << (use_white_list ? white : gray)
2014-05-25 17:06:40 +00:00
<< "] last_seen: " << (pe.last_seen ? epee::misc_utils::get_time_interval_string(time(NULL) - pe.last_seen) : "never"));
2015-12-14 04:54:39 +00:00
2017-02-09 00:11:58 +00:00
if(!try_to_connect_and_handshake_with_new_peer(pe.adr, false, pe.last_seen, use_white_list ? white : gray)) {
2015-12-14 04:54:39 +00:00
_note("Handshake failed");
2014-03-03 22:07:58 +00:00
continue;
2015-12-14 04:54:39 +00:00
}
2014-03-03 22:07:58 +00:00
return true;
}
return false;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
2017-08-09 21:44:39 +00:00
bool node_server<t_payload_net_handler>::connect_to_seed()
2014-03-03 22:07:58 +00:00
{
2018-01-17 11:17:21 +00:00
if (m_seed_nodes.empty() || m_offline || !m_exclusive_peers.empty())
2017-08-09 21:44:39 +00:00
return true;
2014-05-25 17:06:40 +00:00
2014-03-03 22:07:58 +00:00
size_t try_count = 0;
2019-04-03 05:10:24 +00:00
size_t current_index = crypto::rand_idx(m_seed_nodes.size());
2018-12-16 17:57:44 +00:00
const net_server& server = m_network_zones.at(epee::net_utils::zone::public_).m_net_server;
2014-03-03 22:07:58 +00:00
while(true)
2015-12-14 04:54:39 +00:00
{
2018-12-16 17:57:44 +00:00
if(server.is_stop_signal_sent())
2014-03-03 22:07:58 +00:00
return false;
if(try_to_connect_and_handshake_with_new_peer(m_seed_nodes[current_index], true))
break;
if(++try_count > m_seed_nodes.size())
{
2018-04-29 13:57:08 +00:00
if (!m_fallback_seed_nodes_added)
2017-03-17 23:39:47 +00:00
{
MWARNING("Failed to connect to any of seed peers, trying fallback seeds");
2018-04-29 13:57:08 +00:00
current_index = m_seed_nodes.size();
2018-02-16 11:04:04 +00:00
for (const auto &peer: get_seed_nodes(m_nettype))
2017-03-17 23:39:47 +00:00
{
MDEBUG("Fallback seed node: " << peer);
2018-06-11 03:43:18 +00:00
append_net_address(m_seed_nodes, peer, cryptonote::get_config(m_nettype).P2P_DEFAULT_PORT);
2017-03-17 23:39:47 +00:00
}
2018-04-29 13:57:08 +00:00
m_fallback_seed_nodes_added = true;
if (current_index == m_seed_nodes.size())
{
MWARNING("No fallback seeds, continuing without seeds");
break;
}
2017-03-17 23:39:47 +00:00
// continue for another few cycles
}
else
{
MWARNING("Failed to connect to any of seed peers, continuing without seeds");
break;
}
2014-03-03 22:07:58 +00:00
}
2014-03-20 11:46:11 +00:00
if(++current_index >= m_seed_nodes.size())
2014-03-03 22:07:58 +00:00
current_index = 0;
}
2017-08-09 21:44:39 +00:00
return true;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::connections_maker()
{
2018-12-16 17:57:44 +00:00
using zone_type = epee::net_utils::zone;
2018-02-01 11:48:03 +00:00
if (m_offline) return true;
2017-08-09 21:44:39 +00:00
if (!connect_to_peerlist(m_exclusive_peers)) return false;
if (!m_exclusive_peers.empty()) return true;
2018-12-16 17:57:44 +00:00
// Only have seeds in the public zone right now.
size_t start_conn_count = get_public_outgoing_connections_count();
if(!get_public_white_peers_count() && m_seed_nodes.size())
2017-08-09 21:44:39 +00:00
{
if (!connect_to_seed())
return false;
2014-03-03 22:07:58 +00:00
}
2014-05-25 17:06:40 +00:00
if (!connect_to_peerlist(m_priority_peers)) return false;
2014-03-03 22:07:58 +00:00
2018-12-16 17:57:44 +00:00
for(auto& zone : m_network_zones)
2014-03-03 22:07:58 +00:00
{
2018-12-16 17:57:44 +00:00
size_t base_expected_white_connections = (zone.second.m_config.m_net_config.max_out_connection_count*P2P_DEFAULT_WHITELIST_CONNECTIONS_PERCENT)/100;
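// Illustrative figures: with max_out_connection_count = 12 and a whitelist
// percentage of 70, base_expected_white_connections = 12 * 70 / 100 = 8, i.e.
// roughly two thirds of the outgoing slots are reserved for white-list peers
// unless a specific pruning stripe is needed (handled just below).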
size_t conn_count = get_outgoing_connections_count(zone.second);
while(conn_count < zone.second.m_config.m_net_config.max_out_connection_count)
2014-03-03 22:07:58 +00:00
{
2018-12-16 17:57:44 +00:00
const size_t expected_white_connections = m_payload_handler.get_next_needed_pruning_stripe().second ? zone.second.m_config.m_net_config.max_out_connection_count : base_expected_white_connections;
if(conn_count < expected_white_connections)
{
//start from anchor list
while (get_outgoing_connections_count(zone.second) < P2P_DEFAULT_ANCHOR_CONNECTIONS_COUNT
&& make_expected_connections_count(zone.second, anchor, P2P_DEFAULT_ANCHOR_CONNECTIONS_COUNT));
//then do white list
while (get_outgoing_connections_count(zone.second) < expected_white_connections
&& make_expected_connections_count(zone.second, white, expected_white_connections));
//then do grey list
while (get_outgoing_connections_count(zone.second) < zone.second.m_config.m_net_config.max_out_connection_count
&& make_expected_connections_count(zone.second, gray, zone.second.m_config.m_net_config.max_out_connection_count));
} else
{
//start from grey list
while (get_outgoing_connections_count(zone.second) < zone.second.m_config.m_net_config.max_out_connection_count
&& make_expected_connections_count(zone.second, gray, zone.second.m_config.m_net_config.max_out_connection_count));
//and then do white list
while (get_outgoing_connections_count(zone.second) < zone.second.m_config.m_net_config.max_out_connection_count
&& make_expected_connections_count(zone.second, white, zone.second.m_config.m_net_config.max_out_connection_count));
}
if(zone.second.m_net_server.is_stop_signal_sent())
return false;
2018-09-16 10:48:04 +00:00
size_t new_conn_count = get_outgoing_connections_count(zone.second);
if (new_conn_count <= conn_count)
{
// we did not make any connection, sleep a bit to avoid a busy loop in case we don't have
// any peers to try, then break so we will try seeds to get more peers
boost::this_thread::sleep_for(boost::chrono::seconds(1));
break;
}
conn_count = new_conn_count;
2014-03-03 22:07:58 +00:00
}
}
2018-12-16 17:57:44 +00:00
if (start_conn_count == get_public_outgoing_connections_count() && start_conn_count < m_network_zones.at(zone_type::public_).m_config.m_net_config.max_out_connection_count)
2017-08-09 21:44:39 +00:00
{
MINFO("Failed to connect to any, trying seeds");
if (!connect_to_seed())
return false;
}
2014-03-03 22:07:58 +00:00
return true;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
2018-12-16 17:57:44 +00:00
bool node_server<t_payload_net_handler>::make_expected_connections_count(network_zone& zone, PeerType peer_type, size_t expected_connections)
2014-03-03 22:07:58 +00:00
{
2015-12-07 20:21:45 +00:00
if (m_offline)
2018-04-29 22:30:51 +00:00
return false;
2015-12-07 20:21:45 +00:00
2017-02-09 00:11:58 +00:00
std::vector<anchor_peerlist_entry> apl;
if (peer_type == anchor) {
2018-12-16 17:57:44 +00:00
zone.m_peerlist.get_and_empty_anchor_peerlist(apl);
2017-02-09 00:11:58 +00:00
}
2018-12-16 17:57:44 +00:00
size_t conn_count = get_outgoing_connections_count(zone);
2014-03-03 22:07:58 +00:00
//add new connections from white peers
2018-04-29 22:30:51 +00:00
if(conn_count < expected_connections)
2014-03-03 22:07:58 +00:00
{
2018-12-16 17:57:44 +00:00
if(zone.m_net_server.is_stop_signal_sent())
2014-03-03 22:07:58 +00:00
return false;
2018-04-29 22:30:51 +00:00
MDEBUG("Making expected connection, type " << peer_type << ", " << conn_count << "/" << expected_connections << " connections");
2017-02-09 00:11:58 +00:00
if (peer_type == anchor && !make_new_connection_from_anchor_peerlist(apl)) {
2018-04-29 22:30:51 +00:00
return false;
2017-02-09 00:11:58 +00:00
}
2018-12-16 17:57:44 +00:00
if (peer_type == white && !make_new_connection_from_peerlist(zone, true)) {
2018-04-29 22:30:51 +00:00
return false;
2017-02-09 00:11:58 +00:00
}
2018-12-16 17:57:44 +00:00
if (peer_type == gray && !make_new_connection_from_peerlist(zone, false)) {
2018-04-29 22:30:51 +00:00
return false;
2017-02-09 00:11:58 +00:00
}
2014-03-03 22:07:58 +00:00
}
return true;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
2018-12-16 17:57:44 +00:00
size_t node_server<t_payload_net_handler>::get_public_outgoing_connections_count()
{
auto public_zone = m_network_zones.find(epee::net_utils::zone::public_);
if (public_zone == m_network_zones.end())
return 0;
return get_outgoing_connections_count(public_zone->second);
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
size_t node_server<t_payload_net_handler>::get_incoming_connections_count(network_zone& zone)
2014-03-03 22:07:58 +00:00
{
size_t count = 0;
2018-12-16 17:57:44 +00:00
zone.m_net_server.get_config_object().foreach_connection([&](const p2p_connection_context& cntxt)
2014-03-03 22:07:58 +00:00
{
2018-12-16 17:57:44 +00:00
if(cntxt.m_is_income)
2014-03-03 22:07:58 +00:00
++count;
return true;
});
return count;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
2018-12-16 17:57:44 +00:00
size_t node_server<t_payload_net_handler>::get_outgoing_connections_count(network_zone& zone)
2018-01-20 21:44:23 +00:00
{
size_t count = 0;
2018-12-16 17:57:44 +00:00
zone.m_net_server.get_config_object().foreach_connection([&](const p2p_connection_context& cntxt)
2018-01-20 21:44:23 +00:00
{
2018-12-16 17:57:44 +00:00
if(!cntxt.m_is_income)
2018-01-20 21:44:23 +00:00
++count;
return true;
});
return count;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
2018-12-16 17:57:44 +00:00
size_t node_server<t_payload_net_handler>::get_outgoing_connections_count()
{
size_t count = 0;
for(auto& zone : m_network_zones)
count += get_outgoing_connections_count(zone.second);
return count;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
size_t node_server<t_payload_net_handler>::get_incoming_connections_count()
{
size_t count = 0;
for (auto& zone : m_network_zones)
{
zone.second.m_net_server.get_config_object().foreach_connection([&](const p2p_connection_context& cntxt)
{
if(cntxt.m_is_income)
++count;
return true;
});
}
return count;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
size_t node_server<t_payload_net_handler>::get_public_white_peers_count()
{
auto public_zone = m_network_zones.find(epee::net_utils::zone::public_);
if (public_zone == m_network_zones.end())
return 0;
return public_zone->second.m_peerlist.get_white_peers_count();
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
size_t node_server<t_payload_net_handler>::get_public_gray_peers_count()
{
auto public_zone = m_network_zones.find(epee::net_utils::zone::public_);
if (public_zone == m_network_zones.end())
return 0;
return public_zone->second.m_peerlist.get_gray_peers_count();
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
void node_server<t_payload_net_handler>::get_public_peerlist(std::vector<peerlist_entry>& gray, std::vector<peerlist_entry>& white)
{
auto public_zone = m_network_zones.find(epee::net_utils::zone::public_);
if (public_zone != m_network_zones.end())
public_zone->second.m_peerlist.get_peerlist(gray, white);
}
//-----------------------------------------------------------------------------------
2019-06-26 09:14:23 +00:00
template<class t_payload_net_handler>
void node_server<t_payload_net_handler>::get_peerlist(std::vector<peerlist_entry>& gray, std::vector<peerlist_entry>& white)
{
for (auto &zone: m_network_zones)
{
zone.second.m_peerlist.get_peerlist(gray, white); // appends
}
}
//-----------------------------------------------------------------------------------
2018-12-16 17:57:44 +00:00
template<class t_payload_net_handler>
2014-03-03 22:07:58 +00:00
bool node_server<t_payload_net_handler>::idle_worker()
{
m_peer_handshake_idle_maker_interval.do_call(boost::bind(&node_server<t_payload_net_handler>::peer_sync_idle_maker, this));
m_connections_maker_interval.do_call(boost::bind(&node_server<t_payload_net_handler>::connections_maker, this));
2017-01-20 23:59:04 +00:00
m_gray_peerlist_housekeeping_interval.do_call(boost::bind(&node_server<t_payload_net_handler>::gray_peerlist_housekeeping, this));
m_peerlist_store_interval.do_call(boost::bind(&node_server<t_payload_net_handler>::store_config, this));
m_incoming_connections_interval.do_call(boost::bind(&node_server<t_payload_net_handler>::check_incoming_connections, this));
return true;
}
//-----------------------------------------------------------------------------------
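  // Warn when the public zone has no incoming connections: either they are disabled on purpose,
  // or we retry UPnP port mapping once (delayed IGD), otherwise point the user at firewall/router
  // configuration.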
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::check_incoming_connections()
{
if (m_offline)
return true;
const auto public_zone = m_network_zones.find(epee::net_utils::zone::public_);
if (public_zone != m_network_zones.end() && get_incoming_connections_count(public_zone->second) == 0)
{
if (m_hide_my_port || public_zone->second.m_config.m_net_config.max_in_connection_count == 0)
{
MGINFO("Incoming connections disabled, enable them for full connectivity");
}
else
{
if (m_igd == delayed_igd)
{
MWARNING("No incoming connections, trying to setup IGD");
add_upnp_port_mapping(m_listening_port);
m_igd = igd;
}
else
{
const el::Level level = el::Level::Warning;
MCLOG_RED(level, "global", "No incoming connections - check firewalls/routers allow port " << get_this_peer_port());
}
}
}
return true;
}
//-----------------------------------------------------------------------------------
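  // Collect handshaked connections that are not already in a timed sync, mark them, and run
  // do_peer_timed_sync on each of them.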
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::peer_sync_idle_maker()
{
MDEBUG("STARTED PEERLIST IDLE HANDSHAKE");
typedef std::list<std::pair<epee::net_utils::connection_context_base, peerid_type> > local_connects_type;
local_connects_type cncts;
for(auto& zone : m_network_zones)
{
zone.second.m_net_server.get_config_object().foreach_connection([&](p2p_connection_context& cntxt)
{
if(cntxt.peer_id && !cntxt.m_in_timedsync)
{
cntxt.m_in_timedsync = true;
cncts.push_back(local_connects_type::value_type(cntxt, cntxt.peer_id));//do idle sync only with handshaked connections
}
return true;
});
}
std::for_each(cncts.begin(), cncts.end(), [&](const typename local_connects_type::value_type& vl){do_peer_timed_sync(vl.first, vl.second);});
MDEBUG("FINISHED PEERLIST IDLE HANDSHAKE");
return true;
}
//-----------------------------------------------------------------------------------
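  // Drop peerlist entries that should never be stored or gossiped: loopback/local addresses,
  // IPv4 entries with a zero address or a p2p port equal to the advertised RPC port, and
  // entries carrying an out-of-range pruning seed.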
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::sanitize_peerlist(std::vector<peerlist_entry>& local_peerlist)
{
for (size_t i = 0; i < local_peerlist.size(); ++i)
{
bool ignore = false;
peerlist_entry &be = local_peerlist[i];
epee::net_utils::network_address &na = be.adr;
if (na.is_loopback() || na.is_local())
{
ignore = true;
}
else if (be.adr.get_type_id() == epee::net_utils::ipv4_network_address::get_type_id())
{
const epee::net_utils::ipv4_network_address &ipv4 = na.as<const epee::net_utils::ipv4_network_address>();
if (ipv4.ip() == 0)
ignore = true;
else if (ipv4.port() == be.rpc_port)
ignore = true;
}
if (be.pruning_seed && (be.pruning_seed < tools::make_pruning_seed(1, CRYPTONOTE_PRUNING_LOG_STRIPES) || be.pruning_seed > tools::make_pruning_seed(1ul << CRYPTONOTE_PRUNING_LOG_STRIPES, CRYPTONOTE_PRUNING_LOG_STRIPES)))
ignore = true;
if (ignore)
{
MDEBUG("Ignoring " << be.adr.str());
std::swap(local_peerlist[i], local_peerlist[local_peerlist.size() - 1]);
local_peerlist.resize(local_peerlist.size() - 1);
--i;
continue;
}
#ifdef CRYPTONOTE_PRUNING_DEBUG_SPOOF_SEED
be.pruning_seed = tools::make_pruning_seed(1 + (be.adr.as<epee::net_utils::ipv4_network_address>().ip()) % (1ul << CRYPTONOTE_PRUNING_LOG_STRIPES), CRYPTONOTE_PRUNING_LOG_STRIPES);
#endif
}
return true;
}
//-----------------------------------------------------------------------------------
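  // Sanitize a peerlist received from a peer, reject it if any entry belongs to a different
  // network zone than the sender, then merge it into that zone's stored peerlist.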
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::handle_remote_peerlist(const std::vector<peerlist_entry>& peerlist, time_t local_time, const epee::net_utils::connection_context_base& context)
{
std::vector<peerlist_entry> peerlist_ = peerlist;
if(!sanitize_peerlist(peerlist_))
return false;
const epee::net_utils::zone zone = context.m_remote_address.get_zone();
for(const auto& peer : peerlist_)
{
if(peer.adr.get_zone() != zone)
{
MWARNING(context << " sent peerlist from another zone, dropping");
return false;
}
}
LOG_DEBUG_CC(context, "REMOTE PEERLIST: remote peerlist size=" << peerlist_.size());
LOG_DEBUG_CC(context, "REMOTE PEERLIST: " << ENDL << print_peerlist_to_string(peerlist_));
return m_network_zones.at(context.m_remote_address.get_zone()).m_peerlist.merge_peerlist(peerlist_);
}
//-----------------------------------------------------------------------------------
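  // Fill the basic_node_data advertised during handshakes for the given zone: local time,
  // peer id, p2p/RPC ports (zeroed when hidden or pingback is unsupported), RPC credit rate
  // and network id.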
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::get_local_node_data(basic_node_data& node_data, const network_zone& zone)
{
time_t local_time;
time(&local_time);
node_data.local_time = local_time; // \TODO This can be an identifying value across zones (public internet to tor/i2p) ...
node_data.peer_id = zone.m_config.m_peer_id;
if(!m_hide_my_port && zone.m_can_pingback)
node_data.my_port = m_external_port ? m_external_port : m_listening_port;
else
node_data.my_port = 0;
node_data.rpc_port = zone.m_can_pingback ? m_rpc_port : 0;
node_data.rpc_credits_per_hash = zone.m_can_pingback ? m_rpc_credits_per_hash : 0;
node_data.network_id = m_network_id;
return true;
}
//-----------------------------------------------------------------------------------
#ifdef ALLOW_DEBUG_COMMANDS
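  // Gatekeeper for debug commands: the proof of trust must be recent (within 24h), newer than
  // the last accepted request, reference our own peer id, and carry a valid signature from the
  // hard-coded P2P_REMOTE_DEBUG_TRUSTED_PUB_KEY.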
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::check_trust(const proof_of_trust& tr, const epee::net_utils::zone zone_type)
{
uint64_t local_time = time(NULL);
    uint64_t time_delta = local_time > tr.time ? local_time - tr.time : tr.time - local_time;
    if(time_delta > 24*60*60)
{
MWARNING("check_trust failed to check time conditions, local_time=" << local_time << ", proof_time=" << tr.time);
return false;
}
if(m_last_stat_request_time >= tr.time )
{
MWARNING("check_trust failed to check time conditions, last_stat_request_time=" << m_last_stat_request_time << ", proof_time=" << tr.time);
return false;
}
const network_zone& zone = m_network_zones.at(zone_type);
if(zone.m_config.m_peer_id != tr.peer_id)
{
MWARNING("check_trust failed: peer_id mismatch (passed " << tr.peer_id << ", expected " << zone.m_config.m_peer_id<< ")");
return false;
}
crypto::public_key pk = AUTO_VAL_INIT(pk);
epee::string_tools::hex_to_pod(::config::P2P_REMOTE_DEBUG_TRUSTED_PUB_KEY, pk);
crypto::hash h = get_proof_of_trust_hash(tr);
if(!crypto::check_signature(h, pk, tr.sign))
{
MWARNING("check_trust failed: sign check failed");
return false;
}
//update last request time
m_last_stat_request_time = tr.time;
return true;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
int node_server<t_payload_net_handler>::handle_get_stat_info(int command, typename COMMAND_REQUEST_STAT_INFO::request& arg, typename COMMAND_REQUEST_STAT_INFO::response& rsp, p2p_connection_context& context)
{
if(!check_trust(arg.tr, context.m_remote_address.get_zone()))
{
drop_connection(context);
return 1;
}
rsp.connections_count = get_connections_count();
rsp.incoming_connections_count = rsp.connections_count - get_outgoing_connections_count();
rsp.version = MONERO_VERSION_FULL;
rsp.os_version = tools::get_os_version_string();
m_payload_handler.get_stat_info(rsp.payload_info);
return 1;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
int node_server<t_payload_net_handler>::handle_get_network_state(int command, COMMAND_REQUEST_NETWORK_STATE::request& arg, COMMAND_REQUEST_NETWORK_STATE::response& rsp, p2p_connection_context& context)
{
if(!check_trust(arg.tr, context.m_remote_address.get_zone()))
{
drop_connection(context);
return 1;
}
m_network_zones.at(epee::net_utils::zone::public_).m_net_server.get_config_object().foreach_connection([&](const p2p_connection_context& cntxt)
{
connection_entry ce;
ce.adr = cntxt.m_remote_address;
ce.id = cntxt.peer_id;
ce.is_income = cntxt.m_is_income;
rsp.connections_list.push_back(ce);
return true;
});
network_zone& zone = m_network_zones.at(context.m_remote_address.get_zone());
zone.m_peerlist.get_peerlist(rsp.local_peerlist_gray, rsp.local_peerlist_white);
rsp.my_id = zone.m_config.m_peer_id;
rsp.local_time = time(NULL);
return 1;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
int node_server<t_payload_net_handler>::handle_get_peer_id(int command, COMMAND_REQUEST_PEER_ID::request& arg, COMMAND_REQUEST_PEER_ID::response& rsp, p2p_connection_context& context)
{
rsp.my_id = m_network_zones.at(context.m_remote_address.get_zone()).m_config.m_peer_id;
return 1;
}
#endif
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
int node_server<t_payload_net_handler>::handle_get_support_flags(int command, COMMAND_REQUEST_SUPPORT_FLAGS::request& arg, COMMAND_REQUEST_SUPPORT_FLAGS::response& rsp, p2p_connection_context& context)
{
rsp.support_flags = m_network_zones.at(context.m_remote_address.get_zone()).m_config.m_support_flags;
return 1;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
void node_server<t_payload_net_handler>::request_callback(const epee::net_utils::connection_context_base& context)
{
m_network_zones.at(context.m_remote_address.get_zone()).m_net_server.get_config_object().request_callback(context.m_connection_id);
}
//-----------------------------------------------------------------------------------
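  // Relay a levin notification to a list of (zone, connection id) pairs. The list is sorted by
  // zone so the zone map can be walked in lockstep, failing if a requested zone is unavailable.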
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::relay_notify_to_list(int command, const epee::span<const uint8_t> data_buff, std::vector<std::pair<epee::net_utils::zone, boost::uuids::uuid>> connections)
{
std::sort(connections.begin(), connections.end());
auto zone = m_network_zones.begin();
for(const auto& c_id: connections)
{
for (;;)
{
if (zone == m_network_zones.end())
{
MWARNING("Unable to relay all messages, " << epee::net_utils::zone_to_string(c_id.first) << " not available");
return false;
}
if (c_id.first <= zone->first)
break;
++zone;
}
if (zone->first == c_id.first)
zone->second.m_net_server.get_config_object().notify(command, data_buff, c_id.second);
}
return true;
}
//-----------------------------------------------------------------------------------
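  // Pick the zone used to broadcast transactions: txs that arrived over p2p are re-relayed over
  // the public zone; with at most two zones configured the highest zone (the anonymity network,
  // if enabled) is used; otherwise prefer an anonymity zone with noise and filled connections,
  // then any with outbound connect support (i2p before tor, by enum ordering).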
template<class t_payload_net_handler>
epee::net_utils::zone node_server<t_payload_net_handler>::send_txs(std::vector<cryptonote::blobdata> txs, const epee::net_utils::zone origin, const boost::uuids::uuid& source, const bool pad_txs)
{
namespace enet = epee::net_utils;
const auto send = [&txs, &source, pad_txs] (std::pair<const enet::zone, network_zone>& network)
{
if (network.second.m_notifier.send_txs(std::move(txs), source, (pad_txs || network.first != enet::zone::public_)))
return network.first;
return enet::zone::invalid;
};
if (m_network_zones.empty())
return enet::zone::invalid;
if (origin != enet::zone::invalid)
return send(*m_network_zones.begin()); // send all txs received via p2p over public network
if (m_network_zones.size() <= 2)
return send(*m_network_zones.rbegin()); // see static asserts below; sends over anonymity network iff enabled
/* These checks are to ensure that i2p is highest priority if multiple
zones are selected. Make sure to update logic if the values cannot be
in the same relative order. `m_network_zones` must be sorted map too. */
static_assert(std::is_same<std::underlying_type<enet::zone>::type, std::uint8_t>{}, "expected uint8_t zone");
static_assert(unsigned(enet::zone::invalid) == 0, "invalid expected to be 0");
static_assert(unsigned(enet::zone::public_) == 1, "public_ expected to be 1");
static_assert(unsigned(enet::zone::i2p) == 2, "i2p expected to be 2");
static_assert(unsigned(enet::zone::tor) == 3, "tor expected to be 3");
// check for anonymity networks with noise and connections
for (auto network = ++m_network_zones.begin(); network != m_network_zones.end(); ++network)
{
if (enet::zone::tor < network->first)
break; // unknown network
const auto status = network->second.m_notifier.get_status();
if (status.has_noise && status.connections_filled)
return send(*network);
}
// use the anonymity network with outbound support
for (auto network = ++m_network_zones.begin(); network != m_network_zones.end(); ++network)
{
if (enet::zone::tor < network->first)
break; // unknown network
if (network->second.m_connect)
return send(*network);
}
// configuration should not allow this scenario
return enet::zone::invalid;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
void node_server<t_payload_net_handler>::callback(p2p_connection_context& context)
{
m_payload_handler.on_callback(context);
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::invoke_notify_to_peer(int command, const epee::span<const uint8_t> req_buff, const epee::net_utils::connection_context_base& context)
{
if(is_filtered_command(context.m_remote_address, command))
return false;
network_zone& zone = m_network_zones.at(context.m_remote_address.get_zone());
int res = zone.m_net_server.get_config_object().notify(command, req_buff, context.m_connection_id);
return res > 0;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::invoke_command_to_peer(int command, const epee::span<const uint8_t> req_buff, std::string& resp_buff, const epee::net_utils::connection_context_base& context)
{
if(is_filtered_command(context.m_remote_address, command))
return false;
network_zone& zone = m_network_zones.at(context.m_remote_address.get_zone());
int res = zone.m_net_server.get_config_object().invoke(command, req_buff, resp_buff, context.m_connection_id);
return res > 0;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::drop_connection(const epee::net_utils::connection_context_base& context)
{
m_network_zones.at(context.m_remote_address.get_zone()).m_net_server.get_config_object().close(context.m_connection_id);
return true;
}
//-----------------------------------------------------------------------------------
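  // Ping the peer back on its advertised port (IPv4/IPv6 only) to verify it is reachable before
  // the caller whitelists it; cb is invoked only when the ping succeeds and the peer id matches.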
template<class t_payload_net_handler> template<class t_callback>
bool node_server<t_payload_net_handler>::try_ping(basic_node_data& node_data, p2p_connection_context& context, const t_callback &cb)
{
if(!node_data.my_port)
return false;
bool address_ok = (context.m_remote_address.get_type_id() == epee::net_utils::ipv4_network_address::get_type_id() || context.m_remote_address.get_type_id() == epee::net_utils::ipv6_network_address::get_type_id());
CHECK_AND_ASSERT_MES(address_ok, false,
"Only IPv4 or IPv6 addresses are supported here");
const epee::net_utils::network_address na = context.m_remote_address;
std::string ip;
uint32_t ipv4_addr;
boost::asio::ip::address_v6 ipv6_addr;
bool is_ipv4;
if (na.get_type_id() == epee::net_utils::ipv4_network_address::get_type_id())
{
ipv4_addr = na.as<const epee::net_utils::ipv4_network_address>().ip();
ip = epee::string_tools::get_ip_string_from_int32(ipv4_addr);
is_ipv4 = true;
}
else
{
ipv6_addr = na.as<const epee::net_utils::ipv6_network_address>().ip();
ip = ipv6_addr.to_string();
is_ipv4 = false;
}
network_zone& zone = m_network_zones.at(na.get_zone());
if(!zone.m_peerlist.is_host_allowed(context.m_remote_address))
return false;
std::string port = epee::string_tools::num_to_string_fast(node_data.my_port);
epee::net_utils::network_address address;
if (is_ipv4)
{
address = epee::net_utils::network_address{epee::net_utils::ipv4_network_address(ipv4_addr, node_data.my_port)};
}
else
{
address = epee::net_utils::network_address{epee::net_utils::ipv6_network_address(ipv6_addr, node_data.my_port)};
}
peerid_type pr = node_data.peer_id;
bool r = zone.m_net_server.connect_async(ip, port, zone.m_config.m_net_config.ping_connection_timeout, [cb, /*context,*/ address, pr, this](
const typename net_server::t_connection_context& ping_context,
const boost::system::error_code& ec)->bool
{
if(ec)
{
LOG_WARNING_CC(ping_context, "back ping connect failed to " << address.str());
return false;
}
COMMAND_PING::request req;
COMMAND_PING::response rsp;
//vc2010 workaround
/*std::string ip_ = ip;
std::string port_=port;
peerid_type pr_ = pr;
auto cb_ = cb;*/
// GCC 5.1.0 gives error with second use of uint64_t (peerid_type) variable.
peerid_type pr_ = pr;
network_zone& zone = m_network_zones.at(address.get_zone());
bool inv_call_res = epee::net_utils::async_invoke_remote_command2<COMMAND_PING::response>(ping_context.m_connection_id, COMMAND_PING::ID, req, zone.m_net_server.get_config_object(),
[=](int code, const COMMAND_PING::response& rsp, p2p_connection_context& context)
{
if(code <= 0)
{
LOG_WARNING_CC(ping_context, "Failed to invoke COMMAND_PING to " << address.str() << "(" << code << ", " << epee::levin::get_err_descr(code) << ")");
return;
}
network_zone& zone = m_network_zones.at(address.get_zone());
if(rsp.status != PING_OK_RESPONSE_STATUS_TEXT || pr != rsp.peer_id)
{
          LOG_WARNING_CC(ping_context, "back ping invoke wrong response \"" << rsp.status << "\" from " << address.str() << ", hsh_peer_id=" << pr_ << ", rsp.peer_id=" << rsp.peer_id);
zone.m_net_server.get_config_object().close(ping_context.m_connection_id);
return;
}
zone.m_net_server.get_config_object().close(ping_context.m_connection_id);
cb();
});
if(!inv_call_res)
{
LOG_WARNING_CC(ping_context, "back ping invoke failed to " << address.str());
zone.m_net_server.get_config_object().close(ping_context.m_connection_id);
return false;
}
return true;
});
if(!r)
{
LOG_WARNING_CC(context, "Failed to call connect_async, network error.");
}
return r;
}
//-----------------------------------------------------------------------------------
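  // Asynchronously request support flags from a public-zone peer and hand them to the callback.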
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::try_get_support_flags(const p2p_connection_context& context, std::function<void(p2p_connection_context&, const uint32_t&)> f)
{
if(context.m_remote_address.get_zone() != epee::net_utils::zone::public_)
return false;
COMMAND_REQUEST_SUPPORT_FLAGS::request support_flags_request;
bool r = epee::net_utils::async_invoke_remote_command2<typename COMMAND_REQUEST_SUPPORT_FLAGS::response>
(
context.m_connection_id,
COMMAND_REQUEST_SUPPORT_FLAGS::ID,
support_flags_request,
m_network_zones.at(epee::net_utils::zone::public_).m_net_server.get_config_object(),
[=](int code, const typename COMMAND_REQUEST_SUPPORT_FLAGS::response& rsp, p2p_connection_context& context_)
{
if(code < 0)
{
LOG_WARNING_CC(context_, "COMMAND_REQUEST_SUPPORT_FLAGS invoke failed. (" << code << ", " << epee::levin::get_err_descr(code) << ")");
return;
}
f(context_, rsp.support_flags);
},
P2P_DEFAULT_HANDSHAKE_INVOKE_TIMEOUT
);
return r;
}
//-----------------------------------------------------------------------------------
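  // COMMAND_TIMED_SYNC handler: process the peer's sync payload, then reply with our local time,
  // the head of the zone's peerlist and, on outgoing connections within our own zone, our inbound
  // address so hidden-service peers can learn it.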
template<class t_payload_net_handler>
int node_server<t_payload_net_handler>::handle_timed_sync(int command, typename COMMAND_TIMED_SYNC::request& arg, typename COMMAND_TIMED_SYNC::response& rsp, p2p_connection_context& context)
{
if(!m_payload_handler.process_payload_sync_data(arg.payload_data, context, false))
{
LOG_WARNING_CC(context, "Failed to process_payload_sync_data(), dropping connection");
drop_connection(context);
return 1;
}
//fill response
rsp.local_time = time(NULL);
const epee::net_utils::zone zone_type = context.m_remote_address.get_zone();
network_zone& zone = m_network_zones.at(zone_type);
zone.m_peerlist.get_peerlist_head(rsp.local_peerlist_new, true);
m_payload_handler.get_payload_sync_data(rsp.payload_data);
/* Tor/I2P nodes receiving connections via forwarding (from tor/i2p daemon)
do not know the address of the connecting peer. This is relayed to them,
iff the node has setup an inbound hidden service. The other peer will have
to use the random peer_id value to link the two. My initial thought is that
the inbound peer should leave the other side marked as `<unknown tor host>`,
etc., because someone could give faulty addresses over Tor/I2P to get the
real peer with that identity banned/blacklisted. */
if(!context.m_is_income && zone.m_our_address.get_zone() == zone_type)
rsp.local_peerlist_new.push_back(peerlist_entry{zone.m_our_address, zone.m_config.m_peer_id, std::time(nullptr)});
LOG_DEBUG_CC(context, "COMMAND_TIMED_SYNC");
return 1;
}
//-----------------------------------------------------------------------------------
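  // COMMAND_HANDSHAKE handler: reject wrong network ids, outgoing or duplicate handshakes,
  // connections to self and full incoming slots; on success record the peer id, optionally ping
  // back to whitelist the peer, query its support flags, and return our peerlist and node data.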
template<class t_payload_net_handler>
int node_server<t_payload_net_handler>::handle_handshake(int command, typename COMMAND_HANDSHAKE::request& arg, typename COMMAND_HANDSHAKE::response& rsp, p2p_connection_context& context)
{
if(arg.node_data.network_id != m_network_id)
{
LOG_INFO_CC(context, "WRONG NETWORK AGENT CONNECTED! id=" << arg.node_data.network_id);
drop_connection(context);
add_host_fail(context.m_remote_address);
return 1;
}
if(!context.m_is_income)
{
      LOG_WARNING_CC(context, "COMMAND_HANDSHAKE did not come from an incoming connection");
drop_connection(context);
add_host_fail(context.m_remote_address);
return 1;
}
if(context.peer_id)
{
      LOG_WARNING_CC(context, "COMMAND_HANDSHAKE came, but it seems this connection already has an associated peer_id (double COMMAND_HANDSHAKE?)");
drop_connection(context);
return 1;
}
network_zone& zone = m_network_zones.at(context.m_remote_address.get_zone());
// test only the remote end's zone, otherwise an attacker could connect to you on clearnet
// and pass in a tor connection's peer id, and deduce the two are the same if you reject it
if(arg.node_data.peer_id == zone.m_config.m_peer_id)
{
LOG_DEBUG_CC(context, "Connection to self detected, dropping connection");
drop_connection(context);
return 1;
}
if (zone.m_current_number_of_in_peers >= zone.m_config.m_net_config.max_in_connection_count) // in peers limit
{
LOG_WARNING_CC(context, "COMMAND_HANDSHAKE came, but already have max incoming connections, so dropping this one.");
drop_connection(context);
return 1;
}
if(!m_payload_handler.process_payload_sync_data(arg.payload_data, context, true))
{
LOG_WARNING_CC(context, "COMMAND_HANDSHAKE came, but process_payload_sync_data returned false, dropping connection.");
drop_connection(context);
return 1;
}
if(has_too_many_connections(context.m_remote_address))
{
LOG_PRINT_CCONTEXT_L1("CONNECTION FROM " << context.m_remote_address.host_str() << " REFUSED, too many connections from the same address");
drop_connection(context);
return 1;
}
//associate peer_id with this connection
context.peer_id = arg.node_data.peer_id;
context.m_in_timedsync = false;
context.m_rpc_port = arg.node_data.rpc_port;
context.m_rpc_credits_per_hash = arg.node_data.rpc_credits_per_hash;
if(arg.node_data.my_port && zone.m_can_pingback)
{
peerid_type peer_id_l = arg.node_data.peer_id;
uint32_t port_l = arg.node_data.my_port;
//try ping to be sure that we can add this peer to peer_list
try_ping(arg.node_data, context, [peer_id_l, port_l, context, this]()
{
CHECK_AND_ASSERT_MES((context.m_remote_address.get_type_id() == epee::net_utils::ipv4_network_address::get_type_id() || context.m_remote_address.get_type_id() == epee::net_utils::ipv6_network_address::get_type_id()), void(),
"Only IPv4 or IPv6 addresses are supported here");
//called only(!) if success pinged, update local peerlist
peerlist_entry pe;
const epee::net_utils::network_address na = context.m_remote_address;
if (context.m_remote_address.get_type_id() == epee::net_utils::ipv4_network_address::get_type_id())
{
pe.adr = epee::net_utils::ipv4_network_address(na.as<epee::net_utils::ipv4_network_address>().ip(), port_l);
}
else
{
pe.adr = epee::net_utils::ipv6_network_address(na.as<epee::net_utils::ipv6_network_address>().ip(), port_l);
}
time_t last_seen;
time(&last_seen);
pe.last_seen = static_cast<int64_t>(last_seen);
pe.id = peer_id_l;
pe.pruning_seed = context.m_pruning_seed;
pe.rpc_port = context.m_rpc_port;
pe.rpc_credits_per_hash = context.m_rpc_credits_per_hash;
this->m_network_zones.at(context.m_remote_address.get_zone()).m_peerlist.append_with_peer_white(pe);
LOG_DEBUG_CC(context, "PING SUCCESS " << context.m_remote_address.host_str() << ":" << port_l);
});
}
try_get_support_flags(context, [](p2p_connection_context& flags_context, const uint32_t& support_flags)
{
flags_context.support_flags = support_flags;
});
//fill response
zone.m_peerlist.get_peerlist_head(rsp.local_peerlist_new, true);
get_local_node_data(rsp.node_data, zone);
m_payload_handler.get_payload_sync_data(rsp.payload_data);
Change logging to easylogging++
This replaces the epee and data_loggers logging systems with
a single one, and also adds filename:line and explicit severity
levels. Categories may be defined, and logging severity set
by category (or set of categories). epee style 0-4 log level
maps to a sensible severity configuration. Log files now also
rotate when reaching 100 MB.
To select which logs to output, use the MONERO_LOGS environment
variable, with a comma separated list of categories (globs are
supported), with their requested severity level after a colon.
If a log matches more than one such setting, the last one in
the configuration string applies. A few examples:
This one is (mostly) silent, only outputting fatal errors:
MONERO_LOGS=*:FATAL
This one is very verbose:
MONERO_LOGS=*:TRACE
This one is totally silent (logwise):
MONERO_LOGS=""
This one outputs all errors and warnings, except for the
"verify" category, which prints just fatal errors (the verify
category is used for logs about incoming transactions and
blocks, and it is expected that some/many will fail to verify,
hence we don't want the spam):
MONERO_LOGS=*:WARNING,verify:FATAL
Log levels are, in decreasing order of priority:
FATAL, ERROR, WARNING, INFO, DEBUG, TRACE
Subcategories may be added using prefixes and globs. This
example will output net.p2p logs at the TRACE level, but all
other net* logs only at INFO:
MONERO_LOGS=*:ERROR,net*:INFO,net.p2p:TRACE
Logs which are intended for the user (which Monero was using
a lot through epee, but really isn't a nice way to do things)
should use the "global" category. There are a few helper macros
for using this category, e.g. MGINFO("this shows up by default")
or MGINFO_RED("this is red"), to try to keep a similar look
and feel for now.
Existing epee log macros still exist, and map to the new log
levels, but since they're used as a "user facing" UI element
as much as a logging system, they often don't map well to log
severities (i.e., a log level 0 log may be an error, or may be
something we want the user to see, such as an important piece of info).
In those cases, I tried to use the new macros. In other cases,
I left the existing macros in. When modifying logs, it is
probably best to switch to the new macros with explicit levels.
The --log-level options and set_log commands now also accept
category settings, in addition to the epee style log levels.
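The "last one wins" rule above can be pictured with a small standalone sketch (this is not the easylogging++/epee parser, just an assumed-equivalent resolution of a MONERO_LOGS-style string; only '*' globs are handled):

#include <iostream>
#include <sstream>
#include <string>

// naive glob matcher supporting only '*'
static bool glob_match(const std::string &pat, const std::string &s, size_t pi = 0, size_t si = 0)
{
  if (pi == pat.size())
    return si == s.size();
  if (pat[pi] == '*')
    return glob_match(pat, s, pi + 1, si) || (si < s.size() && glob_match(pat, s, pi, si + 1));
  return si < s.size() && pat[pi] == s[si] && glob_match(pat, s, pi + 1, si + 1);
}

// return the severity selected for "category", or "" if nothing matches
static std::string severity_for(const std::string &config, const std::string &category)
{
  std::string result;
  std::stringstream ss(config);
  std::string item;
  while (std::getline(ss, item, ','))
  {
    const size_t colon = item.rfind(':');
    if (colon == std::string::npos)
      continue;
    if (glob_match(item.substr(0, colon), category))
      result = item.substr(colon + 1); // later settings override earlier ones
  }
  return result;
}

int main()
{
  const std::string cfg = "*:ERROR,net*:INFO,net.p2p:TRACE";
  std::cout << "net.p2p -> " << severity_for(cfg, "net.p2p") << std::endl; // TRACE
  std::cout << "net.dns -> " << severity_for(cfg, "net.dns") << std::endl; // INFO
  std::cout << "verify  -> " << severity_for(cfg, "verify") << std::endl;  // ERROR
  return 0;
}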
2017-01-01 16:34:23 +00:00
LOG_DEBUG_CC(context, "COMMAND_HANDSHAKE");
2014-03-03 22:07:58 +00:00
return 1;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
int node_server<t_payload_net_handler>::handle_ping(int command, COMMAND_PING::request& arg, COMMAND_PING::response& rsp, p2p_connection_context& context)
{
2017-01-01 16:34:23 +00:00
LOG_DEBUG_CC(context, "COMMAND_PING");
2014-03-03 22:07:58 +00:00
rsp.status = PING_OK_RESPONSE_STATUS_TEXT;
2018-12-16 17:57:44 +00:00
rsp.peer_id = m_network_zones.at(context.m_remote_address.get_zone()).m_config.m_peer_id;
2014-03-03 22:07:58 +00:00
return 1;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::log_peerlist()
{
2018-12-05 22:25:27 +00:00
std::vector<peerlist_entry> pl_white;
std::vector<peerlist_entry> pl_gray;
2018-12-16 17:57:44 +00:00
for (auto& zone : m_network_zones)
zone.second.m_peerlist.get_peerlist(pl_gray, pl_white);
2017-01-01 16:34:23 +00:00
MINFO(ENDL << "Peerlist white:" << ENDL << print_peerlist_to_string(pl_white) << ENDL << "Peerlist gray:" << ENDL << print_peerlist_to_string(pl_gray) );
2014-03-03 22:07:58 +00:00
return true;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::log_connections()
{
2017-01-01 16:34:23 +00:00
MINFO("Connections: \r\n" << print_connections_container() );
2014-03-03 22:07:58 +00:00
return true;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
std::string node_server<t_payload_net_handler>::print_connections_container()
{
std::stringstream ss;
2018-12-16 17:57:44 +00:00
for (auto& zone : m_network_zones)
2014-03-03 22:07:58 +00:00
{
2018-12-16 17:57:44 +00:00
zone.second.m_net_server.get_config_object().foreach_connection([&](const p2p_connection_context& cntxt)
{
ss << cntxt.m_remote_address.str()
<< " \t\tpeer_id " << cntxt.peer_id
<< " \t\tconn_id " << cntxt.m_connection_id << (cntxt.m_is_income ? " INC":" OUT")
<< std::endl;
return true;
});
}
2014-03-03 22:07:58 +00:00
std::string s = ss.str();
return s;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
void node_server<t_payload_net_handler>::on_connection_new(p2p_connection_context& context)
{
2017-01-01 16:34:23 +00:00
MINFO("["<< epee::net_utils::print_connection_context(context) << "] NEW CONNECTION");
2014-03-03 22:07:58 +00:00
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
void node_server<t_payload_net_handler>::on_connection_close(p2p_connection_context& context)
{
2018-12-16 17:57:44 +00:00
network_zone& zone = m_network_zones.at(context.m_remote_address.get_zone());
if (!zone.m_net_server.is_stop_signal_sent() && !context.m_is_income) {
2017-05-27 10:35:54 +00:00
epee::net_utils::network_address na = AUTO_VAL_INIT(na);
na = context.m_remote_address;
2017-02-09 00:11:58 +00:00
2018-12-16 17:57:44 +00:00
zone.m_peerlist.remove_from_peer_anchor(na);
2017-02-09 00:11:58 +00:00
}
2017-08-18 19:14:23 +00:00
m_payload_handler.on_connection_close(context);
2017-01-01 16:34:23 +00:00
MINFO("["<< epee::net_utils::print_connection_context(context) << "] CLOSE CONNECTION");
2014-05-25 17:06:40 +00:00
}
template<class t_payload_net_handler>
2017-05-27 10:35:54 +00:00
bool node_server<t_payload_net_handler>::is_priority_node(const epee::net_utils::network_address& na)
2014-05-25 17:06:40 +00:00
{
return (std::find(m_priority_peers.begin(), m_priority_peers.end(), na) != m_priority_peers.end()) || (std::find(m_exclusive_peers.begin(), m_exclusive_peers.end(), na) != m_exclusive_peers.end());
}
template<class t_payload_net_handler> template <class Container>
bool node_server<t_payload_net_handler>::connect_to_peerlist(const Container& peers)
{
2018-12-16 17:57:44 +00:00
const network_zone& public_zone = m_network_zones.at(epee::net_utils::zone::public_);
2017-05-27 10:35:54 +00:00
for(const epee::net_utils::network_address& na: peers)
2014-05-25 17:06:40 +00:00
{
2018-12-16 17:57:44 +00:00
if(public_zone.m_net_server.is_stop_signal_sent())
2014-05-25 17:06:40 +00:00
return false;
if(is_addr_connected(na))
continue;
try_to_connect_and_handshake_with_new_peer(na);
}
return true;
}
template<class t_payload_net_handler> template <class Container>
bool node_server<t_payload_net_handler>::parse_peers_and_add_to_container(const boost::program_options::variables_map& vm, const command_line::arg_descriptor<std::vector<std::string> > & arg, Container& container)
{
std::vector<std::string> perrs = command_line::get_arg(vm, arg);
for(const std::string& pr_str: perrs)
{
2018-06-11 03:16:29 +00:00
const uint16_t default_port = cryptonote::get_config(m_nettype).P2P_DEFAULT_PORT;
2018-12-16 17:57:44 +00:00
expect<epee::net_utils::network_address> adr = net::get_network_address(pr_str, default_port);
if (adr)
2018-06-11 03:43:18 +00:00
{
2018-12-16 17:57:44 +00:00
add_zone(adr->get_zone());
container.push_back(std::move(*adr));
2018-06-11 03:43:18 +00:00
continue;
}
std::vector<epee::net_utils::network_address> resolved_addrs;
2018-12-16 17:57:44 +00:00
bool r = append_net_address(resolved_addrs, pr_str, default_port);
2018-06-11 03:43:18 +00:00
CHECK_AND_ASSERT_MES(r, false, "Failed to parse or resolve address from string: " << pr_str);
for (const epee::net_utils::network_address& addr : resolved_addrs)
{
container.push_back(addr);
}
2014-05-25 17:06:40 +00:00
}
return true;
2014-03-03 22:07:58 +00:00
}
2015-12-14 04:54:39 +00:00
2015-01-05 19:30:17 +00:00
template<class t_payload_net_handler>
2018-12-16 17:57:44 +00:00
bool node_server<t_payload_net_handler>::set_max_out_peers(network_zone& zone, int64_t max)
2015-12-14 04:54:39 +00:00
{
2018-12-16 17:57:44 +00:00
if(max == -1) {
zone.m_config.m_net_config.max_out_connection_count = P2P_DEFAULT_CONNECTIONS_COUNT;
return true;
}
zone.m_config.m_net_config.max_out_connection_count = max;
2015-12-14 04:54:39 +00:00
return true;
}
2018-01-20 21:44:23 +00:00
template<class t_payload_net_handler>
2018-12-16 17:57:44 +00:00
bool node_server<t_payload_net_handler>::set_max_in_peers(network_zone& zone, int64_t max)
2018-01-20 21:44:23 +00:00
{
2018-12-16 17:57:44 +00:00
zone.m_config.m_net_config.max_in_connection_count = max;
2018-01-20 21:44:23 +00:00
return true;
}
2015-01-05 19:30:17 +00:00
template<class t_payload_net_handler>
2018-12-16 17:57:44 +00:00
void node_server<t_payload_net_handler>::change_max_out_public_peers(size_t count)
2015-01-05 19:30:17 +00:00
{
2018-12-16 17:57:44 +00:00
auto public_zone = m_network_zones.find(epee::net_utils::zone::public_);
if (public_zone != m_network_zones.end())
{
2019-06-18 22:47:05 +00:00
const auto current = public_zone->second.m_net_server.get_config_object().get_out_connections_count();
2018-12-16 17:57:44 +00:00
public_zone->second.m_config.m_net_config.max_out_connection_count = count;
if(current > count)
public_zone->second.m_net_server.get_config_object().del_out_connections(current - count);
2019-06-18 22:27:47 +00:00
m_payload_handler.set_max_out_peers(count);
2018-12-16 17:57:44 +00:00
}
2015-01-05 19:30:17 +00:00
}
2015-12-14 04:54:39 +00:00
2019-05-28 17:54:41 +00:00
template<class t_payload_net_handler>
uint32_t node_server<t_payload_net_handler>::get_max_out_public_peers() const
{
const auto public_zone = m_network_zones.find(epee::net_utils::zone::public_);
if (public_zone == m_network_zones.end())
return 0;
return public_zone->second.m_config.m_net_config.max_out_connection_count;
}
2018-01-20 21:44:23 +00:00
template<class t_payload_net_handler>
2018-12-16 17:57:44 +00:00
void node_server<t_payload_net_handler>::change_max_in_public_peers(size_t count)
2018-01-20 21:44:23 +00:00
{
2018-12-16 17:57:44 +00:00
auto public_zone = m_network_zones.find(epee::net_utils::zone::public_);
if (public_zone != m_network_zones.end())
{
2019-06-18 22:47:05 +00:00
const auto current = public_zone->second.m_net_server.get_config_object().get_in_connections_count();
2018-12-16 17:57:44 +00:00
public_zone->second.m_config.m_net_config.max_in_connection_count = count;
if(current > count)
public_zone->second.m_net_server.get_config_object().del_in_connections(current - count);
}
2018-01-20 21:44:23 +00:00
}
2019-05-28 17:54:41 +00:00
template<class t_payload_net_handler>
uint32_t node_server<t_payload_net_handler>::get_max_in_public_peers() const
{
const auto public_zone = m_network_zones.find(epee::net_utils::zone::public_);
if (public_zone == m_network_zones.end())
return 0;
return public_zone->second.m_config.m_net_config.max_in_connection_count;
}
2015-12-14 04:54:39 +00:00
template<class t_payload_net_handler>
2015-01-05 19:30:17 +00:00
bool node_server<t_payload_net_handler>::set_tos_flag(const boost::program_options::variables_map& vm, int flag)
2015-12-14 04:54:39 +00:00
{
if(flag==-1){
return true;
}
epee::net_utils::connection<epee::levin::async_protocol_handler<p2p_connection_context> >::set_tos_flag(flag);
_dbg1("Set ToS flag " << flag);
return true;
}
2015-01-05 19:30:17 +00:00
template<class t_payload_net_handler>
2015-12-14 04:54:39 +00:00
bool node_server<t_payload_net_handler>::set_rate_up_limit(const boost::program_options::variables_map& vm, int64_t limit)
{
2019-03-20 15:40:59 +00:00
this->islimitup=(limit != -1) && (limit != default_limit_up);
2015-12-14 04:54:39 +00:00
if (limit==-1) {
limit=default_limit_up;
}
epee::net_utils::connection<epee::levin::async_protocol_handler<p2p_connection_context> >::set_rate_up_limit( limit );
2017-11-26 14:26:17 +00:00
MINFO("Set limit-up to " << limit << " kB/s");
2015-12-14 04:54:39 +00:00
return true;
}
2015-01-05 19:30:17 +00:00
template<class t_payload_net_handler>
2015-12-14 04:54:39 +00:00
bool node_server<t_payload_net_handler>::set_rate_down_limit(const boost::program_options::variables_map& vm, int64_t limit)
{
2019-03-20 15:40:59 +00:00
this->islimitdown=(limit != -1) && (limit != default_limit_down);
2015-12-14 04:54:39 +00:00
if(limit==-1) {
limit=default_limit_down;
}
epee::net_utils::connection<epee::levin::async_protocol_handler<p2p_connection_context> >::set_rate_down_limit( limit );
2017-11-26 14:26:17 +00:00
MINFO("Set limit-down to " << limit << " kB/s");
2015-12-14 04:54:39 +00:00
return true;
}
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::set_rate_limit(const boost::program_options::variables_map& vm, int64_t limit)
{
int64_t limit_up = 0;
int64_t limit_down = 0;
2015-01-05 19:30:17 +00:00
2015-12-14 04:54:39 +00:00
if(limit == -1)
{
2017-11-26 14:26:17 +00:00
limit_up = default_limit_up;
limit_down = default_limit_down;
2015-12-14 04:54:39 +00:00
}
else
{
2017-11-26 14:26:17 +00:00
limit_up = limit;
limit_down = limit;
2015-12-14 04:54:39 +00:00
}
if(!this->islimitup) {
epee::net_utils::connection<epee::levin::async_protocol_handler<p2p_connection_context> >::set_rate_up_limit(limit_up);
2017-11-26 14:26:17 +00:00
MINFO("Set limit-up to " << limit_up << " kB/s");
2015-12-14 04:54:39 +00:00
}
if(!this->islimitdown) {
epee::net_utils::connection<epee::levin::async_protocol_handler<p2p_connection_context> >::set_rate_down_limit(limit_down);
2017-11-26 14:26:17 +00:00
MINFO("Set limit-down to " << limit_down << " kB/s");
2015-12-14 04:54:39 +00:00
}
return true;
}
2017-01-14 12:21:20 +00:00
template<class t_payload_net_handler>
2017-05-27 10:35:54 +00:00
bool node_server<t_payload_net_handler>::has_too_many_connections(const epee::net_utils::network_address &address)
2017-01-14 12:21:20 +00:00
{
2018-12-16 17:57:44 +00:00
if (address.get_zone() != epee::net_utils::zone::public_)
return false; // Unable to determine how many connections from host
2017-12-07 22:44:55 +00:00
const size_t max_connections = 1;
size_t count = 0;
2017-01-14 12:21:20 +00:00
2018-12-16 17:57:44 +00:00
m_network_zones.at(epee::net_utils::zone::public_).m_net_server.get_config_object().foreach_connection([&](const p2p_connection_context& cntxt)
2017-01-14 12:21:20 +00:00
{
2017-05-27 10:35:54 +00:00
if (cntxt.m_is_income && cntxt.m_remote_address.is_same_host(address)) {
2017-01-14 12:21:20 +00:00
count++;
if (count > max_connections) {
return false;
}
}
return true;
});
return count > max_connections;
}
2017-01-20 23:59:04 +00:00
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::gray_peerlist_housekeeping()
{
2018-02-01 11:48:03 +00:00
if (m_offline) return true;
2017-09-10 12:11:42 +00:00
if (!m_exclusive_peers.empty()) return true;
2018-04-29 22:30:51 +00:00
if (m_payload_handler.needs_new_sync_connections()) return true;
2017-09-10 12:11:42 +00:00
2018-12-16 17:57:44 +00:00
for (auto& zone : m_network_zones)
{
if (zone.second.m_net_server.is_stop_signal_sent())
return false;
2017-01-20 23:59:04 +00:00
2018-12-16 17:57:44 +00:00
if (zone.second.m_connect == nullptr)
continue;
2017-01-20 23:59:04 +00:00
2018-12-16 17:57:44 +00:00
peerlist_entry pe{};
if (!zone.second.m_peerlist.get_random_gray_peer(pe))
continue;
2017-01-20 23:59:04 +00:00
2018-12-16 17:57:44 +00:00
if (!check_connection_and_handshake_with_peer(pe.adr, pe.last_seen))
{
zone.second.m_peerlist.remove_from_peer_gray(pe);
LOG_PRINT_L2("PEER EVICTED FROM GRAY PEER LIST IP address: " << pe.adr.host_str() << " Peer ID: " << peerid_type(pe.id));
}
else
{
2018-02-11 15:15:56 +00:00
zone.second.m_peerlist.set_peer_just_seen(pe.id, pe.adr, pe.pruning_seed, pe.rpc_port, pe.rpc_credits_per_hash);
2018-12-16 17:57:44 +00:00
LOG_PRINT_L2("PEER PROMOTED TO WHITE PEER LIST IP address: " << pe.adr.host_str() << " Peer ID: " << peerid_type(pe.id));
}
2017-01-20 23:59:04 +00:00
}
return true;
}
2017-08-29 21:28:23 +00:00
2018-04-29 22:30:51 +00:00
template<class t_payload_net_handler>
void node_server<t_payload_net_handler>::add_used_stripe_peer(const typename t_payload_net_handler::connection_context &context)
{
const uint32_t stripe = tools::get_pruning_stripe(context.m_pruning_seed);
if (stripe == 0 || stripe > (1ul << CRYPTONOTE_PRUNING_LOG_STRIPES))
return;
const uint32_t index = stripe - 1;
CRITICAL_REGION_LOCAL(m_used_stripe_peers_mutex);
MINFO("adding stripe " << stripe << " peer: " << context.m_remote_address.str());
// erase-remove so any existing entry for this peer is actually dropped before re-adding it
m_used_stripe_peers[index].erase(std::remove_if(m_used_stripe_peers[index].begin(), m_used_stripe_peers[index].end(),
    [&context](const epee::net_utils::network_address &na){ return context.m_remote_address == na; }), m_used_stripe_peers[index].end());
m_used_stripe_peers[index].push_back(context.m_remote_address);
}
template<class t_payload_net_handler>
void node_server<t_payload_net_handler>::remove_used_stripe_peer(const typename t_payload_net_handler::connection_context &context)
{
const uint32_t stripe = tools::get_pruning_stripe(context.m_pruning_seed);
if (stripe == 0 || stripe > (1ul << CRYPTONOTE_PRUNING_LOG_STRIPES))
return;
const uint32_t index = stripe - 1;
CRITICAL_REGION_LOCAL(m_used_stripe_peers_mutex);
MINFO("removing stripe " << stripe << " peer: " << context.m_remote_address.str());
// erase-remove so the matching entry is actually dropped from the container
m_used_stripe_peers[index].erase(std::remove_if(m_used_stripe_peers[index].begin(), m_used_stripe_peers[index].end(),
    [&context](const epee::net_utils::network_address &na){ return context.m_remote_address == na; }), m_used_stripe_peers[index].end());
}
template<class t_payload_net_handler>
void node_server<t_payload_net_handler>::clear_used_stripe_peers()
{
CRITICAL_REGION_LOCAL(m_used_stripe_peers_mutex);
MINFO("clearing used stripe peers");
for (auto &e: m_used_stripe_peers)
e.clear();
}
2017-08-29 21:28:23 +00:00
template<class t_payload_net_handler>
2019-04-10 22:34:30 +00:00
void node_server<t_payload_net_handler>::add_upnp_port_mapping_impl(uint32_t port, bool ipv6) // if ipv6 false, do ipv4
2017-08-29 21:28:23 +00:00
{
2019-04-10 22:34:30 +00:00
std::string ipversion = ipv6 ? "(IPv6)" : "(IPv4)";
MDEBUG("Attempting to add IGD port mapping " << ipversion << ".");
2017-08-29 21:28:23 +00:00
int result;
2019-04-10 22:34:30 +00:00
const int ipv6_arg = ipv6 ? 1 : 0;
2017-08-29 21:28:23 +00:00
#if MINIUPNPC_API_VERSION > 13
// default according to miniupnpc.h
unsigned char ttl = 2;
2019-04-10 22:34:30 +00:00
UPNPDev* deviceList = upnpDiscover(1000, NULL, NULL, 0, ipv6_arg, ttl, &result);
2017-08-29 21:28:23 +00:00
#else
2019-04-10 22:34:30 +00:00
UPNPDev* deviceList = upnpDiscover(1000, NULL, NULL, 0, ipv6_arg, &result);
2017-08-29 21:28:23 +00:00
#endif
UPNPUrls urls;
IGDdatas igdData;
char lanAddress[64];
result = UPNP_GetValidIGD(deviceList, &urls, &igdData, lanAddress, sizeof lanAddress);
freeUPNPDevlist(deviceList);
2018-10-15 22:39:51 +00:00
if (result > 0) {
2017-08-29 21:28:23 +00:00
if (result == 1) {
std::ostringstream portString;
portString << port;
// Delete the port mapping before we create it, just in case we have dangling port mapping from the daemon not being shut down correctly
UPNP_DeletePortMapping(urls.controlURL, igdData.first.servicetype, portString.str().c_str(), "TCP", 0);
int portMappingResult;
portMappingResult = UPNP_AddPortMapping(urls.controlURL, igdData.first.servicetype, portString.str().c_str(), portString.str().c_str(), lanAddress, CRYPTONOTE_NAME, "TCP", 0, "0");
if (portMappingResult != 0) {
LOG_ERROR("UPNP_AddPortMapping failed, error: " << strupnperror(portMappingResult));
} else {
MLOG_GREEN(el::Level::Info, "Added IGD port mapping.");
}
} else if (result == 2) {
MWARNING("IGD was found but reported as not connected.");
} else if (result == 3) {
MWARNING("UPnP device was found but not recognized as IGD.");
} else {
MWARNING("UPNP_GetValidIGD returned an unknown result code.");
}
FreeUPNPUrls(&urls);
} else {
MINFO("No IGD was found.");
}
}
template<class t_payload_net_handler>
2019-04-10 22:34:30 +00:00
void node_server<t_payload_net_handler>::add_upnp_port_mapping_v4(uint32_t port)
{
add_upnp_port_mapping_impl(port, false);
}
template<class t_payload_net_handler>
void node_server<t_payload_net_handler>::add_upnp_port_mapping_v6(uint32_t port)
{
add_upnp_port_mapping_impl(port, true);
}
template<class t_payload_net_handler>
void node_server<t_payload_net_handler>::add_upnp_port_mapping(uint32_t port, bool ipv4, bool ipv6)
{
if (ipv4) add_upnp_port_mapping_v4(port);
if (ipv6) add_upnp_port_mapping_v6(port);
}
template<class t_payload_net_handler>
void node_server<t_payload_net_handler>::delete_upnp_port_mapping_impl(uint32_t port, bool ipv6)
2017-08-29 21:28:23 +00:00
{
2019-04-10 22:34:30 +00:00
std::string ipversion = ipv6 ? "(IPv6)" : "(IPv4)";
MDEBUG("Attempting to delete IGD port mapping " << ipversion << ".");
2017-08-29 21:28:23 +00:00
int result;
2019-04-10 22:34:30 +00:00
const int ipv6_arg = ipv6 ? 1 : 0;
2017-08-29 21:28:23 +00:00
#if MINIUPNPC_API_VERSION > 13
// default according to miniupnpc.h
unsigned char ttl = 2;
2019-04-10 22:34:30 +00:00
UPNPDev* deviceList = upnpDiscover(1000, NULL, NULL, 0, ipv6_arg, ttl, &result);
2017-08-29 21:28:23 +00:00
#else
2019-04-10 22:34:30 +00:00
UPNPDev* deviceList = upnpDiscover(1000, NULL, NULL, 0, ipv6_arg, &result);
2017-08-29 21:28:23 +00:00
#endif
UPNPUrls urls;
IGDdatas igdData;
char lanAddress[64];
result = UPNP_GetValidIGD(deviceList, &urls, &igdData, lanAddress, sizeof lanAddress);
freeUPNPDevlist(deviceList);
2018-10-15 22:39:51 +00:00
if (result > 0) {
2017-08-29 21:28:23 +00:00
if (result == 1) {
std::ostringstream portString;
portString << port;
int portMappingResult;
portMappingResult = UPNP_DeletePortMapping(urls.controlURL, igdData.first.servicetype, portString.str().c_str(), "TCP", 0);
if (portMappingResult != 0) {
LOG_ERROR("UPNP_DeletePortMapping failed, error: " << strupnperror(portMappingResult));
} else {
MLOG_GREEN(el::Level::Info, "Deleted IGD port mapping.");
}
} else if (result == 2) {
MWARNING("IGD was found but reported as not connected.");
} else if (result == 3) {
MWARNING("UPnP device was found but not recognized as IGD.");
} else {
MWARNING("UPNP_GetValidIGD returned an unknown result code.");
}
FreeUPNPUrls(&urls);
} else {
MINFO("No IGD was found.");
}
}
2018-12-16 17:57:44 +00:00
2019-04-10 22:34:30 +00:00
template<class t_payload_net_handler>
void node_server<t_payload_net_handler>::delete_upnp_port_mapping_v4(uint32_t port)
{
delete_upnp_port_mapping_impl(port, false);
}
template<class t_payload_net_handler>
void node_server<t_payload_net_handler>::delete_upnp_port_mapping_v6(uint32_t port)
{
delete_upnp_port_mapping_impl(port, true);
}
template<class t_payload_net_handler>
void node_server<t_payload_net_handler>::delete_upnp_port_mapping(uint32_t port)
{
delete_upnp_port_mapping_v4(port);
delete_upnp_port_mapping_v6(port);
}
2018-12-16 17:57:44 +00:00
template<typename t_payload_net_handler>
boost::optional<p2p_connection_context_t<typename t_payload_net_handler::connection_context>>
epee: add SSL support
RPC connections now have optional transparent SSL.
An optional private key and certificate file can be passed,
using the --{rpc,daemon}-ssl-private-key and
--{rpc,daemon}-ssl-certificate options. Those take as
argument a path to a PEM-format private key and
certificate, respectively.
If not given, a temporary self signed certificate will be used.
SSL can be enabled or disabled using --{rpc}-ssl, which
accepts autodetect (default), disabled or enabled.
Access can be restricted to particular certificates using the
--rpc-ssl-allowed-certificates, which takes a list of
paths to PEM encoded certificates. This can allow a wallet to
connect to only the daemon they think they're connected to,
by forcing SSL and listing the paths to the known good
certificates.
To generate long term certificates:
openssl genrsa -out /tmp/KEY 4096
openssl req -new -key /tmp/KEY -out /tmp/REQ
openssl x509 -req -days 999999 -sha256 -in /tmp/REQ -signkey /tmp/KEY -out /tmp/CERT
/tmp/KEY is the private key, and /tmp/CERT is the certificate,
both in PEM format. /tmp/REQ can be removed. Adjust the last
command to set the expiration date, etc, as needed. Long-lived
certificates don't make a whole lot of sense for monero anyway, since
most servers will run with one-time temporary self-signed certificates.
SSL support is transparent, so all communication is done on the
existing ports, with SSL autodetection. This means you can start
using an SSL daemon now, but you should not enforce SSL yet or
nothing will talk to you.
2018-06-14 22:44:48 +00:00
node_server<t_payload_net_handler>::socks_connect(network_zone& zone, const epee::net_utils::network_address& remote, epee::net_utils::ssl_support_t ssl_support)
2018-12-16 17:57:44 +00:00
{
auto result = socks_connect_internal(zone.m_net_server.get_stop_signal(), zone.m_net_server.get_io_service(), zone.m_proxy_address, remote);
if (result) // if no error
{
p2p_connection_context context{};
2018-06-14 22:44:48 +00:00
if (zone.m_net_server.add_connection(context, std::move(*result), remote, ssl_support))
2018-12-16 17:57:44 +00:00
return {std::move(context)};
}
return boost::none;
}
template<typename t_payload_net_handler>
boost::optional<p2p_connection_context_t<typename t_payload_net_handler::connection_context>>
2018-06-14 22:44:48 +00:00
node_server<t_payload_net_handler>::public_connect(network_zone& zone, epee::net_utils::network_address const& na, epee::net_utils::ssl_support_t ssl_support)
2018-12-16 17:57:44 +00:00
{
2019-04-10 22:34:30 +00:00
bool is_ipv4 = na.get_type_id() == epee::net_utils::ipv4_network_address::get_type_id();
bool is_ipv6 = na.get_type_id() == epee::net_utils::ipv6_network_address::get_type_id();
CHECK_AND_ASSERT_MES(is_ipv4 || is_ipv6, boost::none,
"Only IPv4 or IPv6 addresses are supported here");
std::string address;
std::string port;
if (is_ipv4)
{
const epee::net_utils::ipv4_network_address &ipv4 = na.as<const epee::net_utils::ipv4_network_address>();
address = epee::string_tools::get_ip_string_from_int32(ipv4.ip());
port = epee::string_tools::num_to_string_fast(ipv4.port());
}
else if (is_ipv6)
{
const epee::net_utils::ipv6_network_address &ipv6 = na.as<const epee::net_utils::ipv6_network_address>();
address = ipv6.ip().to_string();
port = epee::string_tools::num_to_string_fast(ipv6.port());
}
else
{
LOG_ERROR("Only IPv4 or IPv6 addresses are supported here");
return boost::none;
}
2018-12-16 17:57:44 +00:00
typename net_server::t_connection_context con{};
2019-04-10 22:34:30 +00:00
const bool res = zone.m_net_server.connect(address, port,
2018-12-16 17:57:44 +00:00
zone.m_config.m_net_config.connection_timeout,
2018-06-14 22:44:48 +00:00
con, "0.0.0.0", ssl_support);
2018-12-16 17:57:44 +00:00
if (res)
return {std::move(con)};
return boost::none;
}
2014-03-03 22:07:58 +00:00
}