When receiving an answer packet, the command code was passed
to the callback instead of the error code. This hid the
"command not found" failure from the peer, and in turn
caused the code to attempt to deserialize a non-existent
reply string.
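
A minimal sketch of the fix, with hypothetical types and names (packet_header, m_callback, and return_code are illustrative, not the actual codebase identifiers): the callback must be handed the packet's return/error code rather than the command code it answers.

    #include <functional>
    #include <string>

    // Hypothetical types, for illustration only.
    struct packet_header { int command; int return_code; };

    struct connection
    {
      std::function<void(int /*error*/, const std::string&)> m_callback;

      void on_answer_packet(const packet_header& head, const std::string& body)
      {
        // Before the fix the command code was passed here, hiding a
        // "command not found" error and letting the caller try to
        // deserialize a reply string that was never sent:
        //   m_callback(head.command, body);
        m_callback(head.return_code, body);  // pass the real error code
      }
    };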
This is intended to catch traffic coming from a web browser,
so we avoid issues with a web page sending a transfer RPC to
the wallet. Requiring a particular user agent can act as a
simple password scheme, while we wait for 0MQ and proper
authentication to be merged.
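
A hedged sketch of the idea, not the actual RPC server code (the function and parameter names are assumptions): requests are rejected unless their User-Agent header matches a user-configured value, which a web page cannot normally set, so it doubles as a weak shared secret.

    #include <string>

    // Illustrative only: treat a required User-Agent value as a weak
    // shared secret until real (0MQ-based) authentication is merged.
    bool is_request_allowed(const std::string& user_agent,
                            const std::string& required_agent)
    {
      if (required_agent.empty())
        return true;                         // no restriction configured
      return user_agent == required_agent;   // browsers won't match this
    }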
The noexcept specs were added to make GCC 6.1.1 happy (#846), but this
one was missing (because GCC did not complain about it on Linux, but
does complain on OSX).
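
An illustrative reconstruction of this class of error (the names are made up, not the spec that was actually missing): when a declaration carries a noexcept spec, the out-of-line definition must repeat it, and not every compiler/platform combination diagnoses the mismatch.

    // Hypothetical example of the mismatch class being fixed here.
    struct worker
    {
      void run() noexcept;  // declaration carries the spec
    };

    // The definition must repeat the spec; leaving it off is
    // ill-formed, but some compilers/platforms let it slide.
    void worker::run() noexcept
    {
      // ...
    }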
5dc09f2 wallet_rpc_server: fix some string values being returned between <> (moneromooo-monero)
f8213c0 Require 64/16 characters for payment ids (moneromooo-monero)
The default behavior for hex string parsing would allow the
last digit to be made from a single hexadecimal character.
That is technically correct, but we typically do not want it,
as it gets confusing and makes a wrong input size easy to miss.
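
A minimal sketch of the stricter check (the helper name is an assumption): a payment id must be exactly 64 hex characters (long form) or 16 (short form), so odd-length or truncated input is rejected outright.

    #include <cctype>
    #include <string>

    // Illustrative: require the exact expected length before parsing,
    // rather than letting an odd-length string slip through.
    bool is_valid_payment_id_hex(const std::string& s)
    {
      if (s.size() != 64 && s.size() != 16)
        return false;
      for (char c : s)
        if (!std::isxdigit(static_cast<unsigned char>(c)))
          return false;
      return true;
    }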
The destructors get a noexcept(true) spec by default, but these
destructors in fact throw exceptions. An alternative fix might be to not
throw (most, if not all, of these throws are non-essential
error-reporting/logging).
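
A minimal sketch of the chosen fix, with a hypothetical type: since C++11 a destructor is implicitly noexcept(true), so one that reports errors by throwing must opt out explicitly or the program terminates.

    #include <stdexcept>

    struct flushing_guard  // hypothetical RAII type
    {
      bool m_failed = false;

      // Implicitly noexcept(true) without the spec below; throwing
      // from it would then call std::terminate.
      ~flushing_guard() noexcept(false)
      {
        if (m_failed)
          throw std::runtime_error("flush failed");  // error reporting
      }
    };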
1c0bffb Restrict also 'get_connections' and 'getbans' APIs. (osensei)
9f8bc49 Don't allow 'flush_txpool' and 'setbans' JSON_RPC methods when running in restricted mode. (osensei)
When the send queue limit is reached, the queue is unlikely to
drain any time soon. If we call close on the connection, it will
stay alive, waiting for the queue to drain before actually
closing, and will hit that check again and again. Since the queue
size limit is the reason we're closing in the first place, we call
shutdown directly.
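
A sketch of that control flow with illustrative names (this is not the epee code itself):

    #include <cstddef>

    struct connection_sketch
    {
      static constexpr std::size_t MAX_SEND_QUEUE_SIZE = 1024 * 1024;
      std::size_t m_send_queue_size = 0;

      void close()    { /* graceful: waits for the queue to drain */ }
      void shutdown() { /* immediate teardown of the socket */ }

      bool do_send(std::size_t sz)
      {
        if (m_send_queue_size + sz > MAX_SEND_QUEUE_SIZE)
        {
          // close() would wait on the very queue that is stuck and
          // re-enter this check forever; tear down immediately.
          shutdown();
          return false;
        }
        m_send_queue_size += sz;  // ... enqueue the data ...
        return true;
      }
    };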
If we reach the send queue size limit, we need to release the lock,
or we will deadlock and the queue will never drain.
If we reach that limit, though, there is likely another problem in
the first place, so in practice it will probably not drain either,
unless the cause was some kind of transient network timeout.
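
Extending the connection_sketch above with the lock (the mutex name is an assumption): the lock guarding the queue must be released before shutdown(), since the drain path acquires the same mutex.

    #include <cstddef>
    #include <mutex>

    std::mutex m_queue_mutex;  // guards the send queue (illustrative)

    bool send_locked_sketch(std::size_t sz, connection_sketch& c)
    {
      std::unique_lock<std::mutex> lock(m_queue_mutex);
      if (c.m_send_queue_size + sz > connection_sketch::MAX_SEND_QUEUE_SIZE)
      {
        lock.unlock();  // hold this across shutdown() and we deadlock
        c.shutdown();
        return false;
      }
      c.m_send_queue_size += sz;  // ... enqueue under the lock ...
      return true;
    }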
Since connections from the ::connect method are now kept in
a deque to be able to cancel them on exit, this leaks both
memory and a file descriptor. Here, we clean those up after
30 seconds to avoid this. 30 seconds is higher than the
5 second timeout used in the async code, so this should be
safe. However, this is an assumption which would break if
that async code were to start relying on longer timeouts.
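
A hedged sketch of the cleanup (the types are assumptions): entries in the deque are timestamped, and anything older than 30 seconds, well past the 5 second async timeout, is dropped so it no longer pins memory or a file descriptor.

    #include <chrono>
    #include <deque>
    #include <memory>

    struct pending_conn;  // stand-in for the real connection type

    using conn_clock = std::chrono::steady_clock;
    struct pending_entry
    {
      conn_clock::time_point added;
      std::shared_ptr<pending_conn> conn;
    };

    // Entries are appended in order, so expired ones sit at the front.
    void prune_stale_connects(std::deque<pending_entry>& conns)
    {
      const auto cutoff = conn_clock::now() - std::chrono::seconds(30);
      while (!conns.empty() && conns.front().added < cutoff)
        conns.pop_front();
    }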
When the boost ioservice is stopped, pending work notifications
will not happen. This includes deadline timers, which would
otherwise time out the now cancelled I/O operations. When this
happens just after starting a new connect operation, it can
leave that operation in a state where it receives neither
the completion notification nor a timeout, causing a hang.
This is fixed by keeping a list of connections corresponding
to the connect operations, and cancelling them before stopping
the boost ioservice.
Note that the list of these connections can grow unbounded, as
they're never cleaned up. Cleaning them up would involve
working out which connections do not have any pending work,
and it's not quite clear yet how to go about this.
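
A sketch of the shutdown ordering against Boost.Asio (the container and function are illustrative): each pending connect's socket is cancelled first, so its handler fires with operation_aborted, and only then is the io_service stopped.

    #include <boost/asio.hpp>
    #include <deque>
    #include <memory>

    using tcp_socket = boost::asio::ip::tcp::socket;

    // Cancel pending connects before stopping the io_service, so no
    // operation is left without either a completion or a timeout.
    void stop_all_connects(std::deque<std::shared_ptr<tcp_socket>>& pending,
                           boost::asio::io_service& ios)
    {
      for (auto& s : pending)
      {
        boost::system::error_code ec;
        s->cancel(ec);  // handlers now fire with operation_aborted
      }
      ios.stop();
    }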
This prevents an exception from exiting the loop without
calling the exit handler, if one is defined.
The daemon defines one, which stops the p2p layer, and will
only exit once the p2p layer is shut down. This would cause
a hang upon an exception, as the input thread would have
exited and the daemon would wait forever with no console
user input.
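
A minimal sketch of the intent (the loop and handler are passed in as assumptions rather than the real console code): the exit handler runs even when an exception escapes the read loop, so the daemon's p2p shutdown can proceed.

    #include <functional>

    // Run the console loop; make sure the exit handler still fires if
    // an exception escapes, or the daemon waits forever on input.
    void run_console(const std::function<void()>& loop,
                     const std::function<void()>& exit_handler)
    {
      try
      {
        loop();
      }
      catch (...)
      {
        // swallow: the handler below must still run
      }
      if (exit_handler)
        exit_handler();  // e.g. stops the p2p layer so exit completes
    }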
The daemon will be polled every 90 seconds for new blocks.
Auto-refresh is enabled by default, and can be turned on and off
with set auto-refresh 1 and set auto-refresh 0 in the wallet.
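
A sketch of such a polling loop (the names and structure are assumptions, not the wallet's actual thread): while auto-refresh is on, a background thread asks the daemon for new blocks every 90 seconds.

    #include <atomic>
    #include <chrono>
    #include <functional>
    #include <thread>

    // Illustrative background poller: refresh() pulls new blocks from
    // the daemon; the flag mirrors "set auto-refresh 0/1".
    void refresh_loop(std::atomic<bool>& running,
                      std::atomic<bool>& auto_refresh,
                      const std::function<void()>& refresh)
    {
      while (running)
      {
        if (auto_refresh)
          refresh();
        std::this_thread::sleep_for(std::chrono::seconds(90));
      }
    }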