When the send queue limit is reached, the queue is unlikely to
drain any time soon. If we call close on the connection, it stays
alive, waiting for the queue to drain before actually closing,
and hits that check again and again. Since the queue size limit
is the reason we're closing in the first place, we call shutdown
directly instead.
If we reach the send queue size limit, we need to release the lock,
or we will deadlock and the queue will never drain.
In practice, reaching that limit usually means there is another
problem in the first place, so the queue will probably not drain
anyway, unless the cause was some kind of transient network timeout.
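
A minimal sketch of the resulting logic (names such as connection_sketch,
m_send_que and queue_chunk are illustrative stand-ins, not the actual epee
identifiers):

    #include <deque>
    #include <mutex>
    #include <string>

    struct connection_sketch
    {
        std::deque<std::string> m_send_que; // chunks waiting to be sent
        std::mutex m_send_que_lock;

        void shutdown() {} // tears the connection down immediately
        void close() {}    // would wait for m_send_que to drain first

        bool queue_chunk(const std::string& chunk)
        {
            const std::deque<std::string>::size_type send_que_max = 1000; // assumed limit
            std::unique_lock<std::mutex> guard(m_send_que_lock);
            if (m_send_que.size() > send_que_max)
            {
                // Release the lock first, or the teardown path could try to
                // take it again and deadlock while the queue never drains.
                guard.unlock();
                // shutdown(), not close(): close() waits for the very queue
                // whose size is the reason we are giving up.
                shutdown();
                return false;
            }
            m_send_que.push_back(chunk);
            return true;
        }
    };
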
Since connections from the ::connect method are now kept in
a deque so they can be cancelled on exit, each one leaks memory
and a file descriptor. To avoid this, we clean those entries up
after 30 seconds. 30 seconds is higher than the 5 second timeout
used in the async code, so this should be safe. However, this is
an assumption which would break if that async code were to start
relying on longer timeouts.
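
A rough illustration of that cleanup, with hypothetical names (connect_entry,
m_connect_deque) and assuming each entry records when it was created:

    #include <chrono>
    #include <deque>
    #include <memory>

    struct connection { int fd; }; // stand-in for whatever ::connect tracks

    struct connect_entry
    {
        std::shared_ptr<connection> conn;
        std::chrono::steady_clock::time_point created;
    };

    std::deque<connect_entry> m_connect_deque;

    void prune_stale_connect_entries()
    {
        const auto now = std::chrono::steady_clock::now();
        // 30 seconds is comfortably above the 5 second async timeout, so
        // anything older is assumed finished and safe to drop, which frees
        // both the memory and the file descriptor it holds.
        while (!m_connect_deque.empty()
            && now - m_connect_deque.front().created > std::chrono::seconds(30))
        {
            m_connect_deque.pop_front();
        }
    }
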
When the boost ioservice is stopped, pending work notifications
will not happen. This includes deadline timers, which would
otherwise time out the now cancelled I/O operations. When this
happens just after starting a new connect operation, it can
leave that operation in a state where it receives neither the
completion notification nor a timeout, causing a hang.
This is fixed by keeping a list of connections corresponding
to the connect operations, and cancelling them before stopping
the boost ioservice.
Note that the list of these connections can grow unbounded, as
they're never cleaned up. Cleaning them up would involve
working out which connections do not have any pending work,
and it's not quite clear yet how to go about this.
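
A sketch of the shutdown ordering, assuming the tracked connections expose
their boost::asio sockets (the names below are illustrative):

    #include <boost/asio.hpp>
    #include <list>
    #include <memory>

    boost::asio::io_service io_service;
    std::list<std::shared_ptr<boost::asio::ip::tcp::socket>> connect_sockets;

    void stop_p2p()
    {
        // Cancel the outstanding connect operations first, so each one gets
        // its completion handler invoked (with operation_aborted) instead of
        // being left hanging when the io_service stops.
        for (auto& sock : connect_sockets)
        {
            boost::system::error_code ec;
            sock->close(ec); // also cancels any pending async operations
        }
        io_service.stop();
    }
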
This prevents an exception from exiting the loop without
calling the exit handler, if one is defined.
The daemon defines one, which stops the p2p layer, and will
only exit once the p2p layer is shut down. This would cause
a hang upon an exception, as the input thread would have
exited and the daemon would wait forever with no console
user input.
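
A minimal sketch of the intended control flow (run_console and exit_handler
are placeholders, not the actual epee console handler API):

    #include <functional>
    #include <iostream>
    #include <stdexcept>

    void input_thread(const std::function<void()>& run_console,
                      const std::function<void()>& exit_handler)
    {
        try
        {
            run_console();
        }
        catch (const std::exception& e)
        {
            std::cerr << "Exception in console loop: " << e.what() << std::endl;
        }
        // Run the exit handler even when the loop was left via an exception;
        // for the daemon this stops the p2p layer so shutdown can complete.
        if (exit_handler)
            exit_handler();
    }
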
The daemon will be polled every 90 seconds for new blocks.
Auto-refresh is enabled by default, and can be turned on or off
with set auto-refresh 1 and set auto-refresh 0 in the wallet.
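
For reference, at the wallet prompt:

    set auto-refresh 1
    set auto-refresh 0

The first enables the periodic refresh, the second disables it.
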
The daemon registers a custom exit command, which causes the
loop to stop. Catch this case before printing "Failed to read line",
as it is an expected case.
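
A sketch of that check (m_running is a hypothetical flag cleared by the
registered exit command, not the actual identifier):

    #include <atomic>
    #include <iostream>
    #include <string>

    std::atomic<bool> m_running(true);

    void console_loop()
    {
        std::string line;
        while (m_running)
        {
            if (!std::getline(std::cin, line))
            {
                // If the exit command already asked us to stop, the failed
                // read is expected and should not be reported as an error.
                if (!m_running)
                    break;
                std::cerr << "Failed to read line." << std::endl;
                break;
            }
            // ... dispatch the command; the exit handler clears m_running ...
        }
    }
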
If the process path name isn't found, leave the log file name blank.
This also applies if a process name is found but is blank after
removing a trailing dot extension.
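
An illustrative version of that rule (not the actual epee helper; the .log
suffix is assumed for the example):

    #include <string>

    std::string log_name_from_process_path(const std::string& process_path)
    {
        if (process_path.empty())
            return std::string(); // no process path: leave the log name blank

        // Keep only the base name of the path.
        std::string name = process_path;
        const std::string::size_type slash = name.find_last_of("/\\");
        if (slash != std::string::npos)
            name = name.substr(slash + 1);

        // Strip a trailing dot extension, if any.
        const std::string::size_type dot = name.rfind('.');
        if (dot != std::string::npos)
            name = name.substr(0, dot);

        // If nothing is left after stripping, also leave the name blank.
        if (name.empty())
            return std::string();
        return name + ".log";
    }
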
It uses the async console handler differently from simplewallet,
and wasn't running the same exit code, so it never actually
exited after breaking out of the console entry loop.
Add a fix for a compile error caused by multiple uses of a
peerid_type (uint64_t) variable in a lambda expression.
- known GCC issue: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=65843
epee: replace a return value of nullptr with false where a boolean
is expected.
Fixes #231.
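
The pattern in question, in sketch form (hypothetical function, not the
actual epee code):

    bool parse_value(const char* s)
    {
        if (!s)
            return false; // previously: return nullptr;
        // ... actual parsing ...
        return true;
    }
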
Daemon interactive mode is now working again.
RPC mapped calls in the daemon and wallet have both had
connection_context removed as an argument, as that argument was
not being used anywhere.
Many RPC functions added by the daemonize changes
(and related changes on the upstream dev branch that were not merged)
were commented out (apart from the return). Other than that, this
*should* work... at any rate, it builds, and that's something.
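
For illustration, the kind of signature change this describes (handler and
type names here are hypothetical):

    #include <string>

    struct request_t  { std::string payload; };
    struct response_t { std::string payload; };

    // Previously the mapped RPC handlers also took a connection_context
    // argument that was never used:
    //   bool on_get_info(const request_t& req, response_t& res, connection_context& ctx);
    // After the change, the handlers take only the request and response:
    bool on_get_info(const request_t& req, response_t& res)
    {
        res.payload = req.payload; // placeholder body
        return true;
    }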