Since kernel 4.10 commit 61e84623ace3 ("net: centralize net_device
min/max MTU checking"), the valid MTU range is [min_mtu, max_mtu],
which is [68, 1500] by default.
It is necessary to set max_mtu if an MTU larger than 1500 is supported.
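Purely as an illustration (not part of this patch), a minimal sketch of
how a driver typically opts in to larger MTUs since 4.10; the constant
CNS3XXX_EXAMPLE_MAX_MTU is hypothetical, the real limit depends on the
hardware:

#include <linux/if_ether.h>
#include <linux/netdevice.h>

/* Hypothetical jumbo limit, for illustration only. */
#define CNS3XXX_EXAMPLE_MAX_MTU 9600

static void example_set_mtu_range(struct net_device *dev)
{
	/* Since 4.10, dev_set_mtu() rejects values outside [min_mtu, max_mtu],
	 * which default to [68, 1500]. Raising max_mtu allows an MTU > 1500. */
	dev->min_mtu = ETH_MIN_MTU;
	dev->max_mtu = CNS3XXX_EXAMPLE_MAX_MTU;
}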
Tested-by: Koen Vandeputte <koen.vandeputte@ncentric.com>
Signed-off-by: Mathias Kresin <dev@kresin.me>
Use the same method for setting queue index pointers consistently
throughout the source file.
Signed-off-by: Koen Vandeputte <koen.vandeputte@ncentric.com>
- Remove kernel 4.9 support
- Apply specific 4.14 changes directly to source
- Refreshed all
Signed-off-by: Koen Vandeputte <koen.vandeputte@ncentric.com>
We already reschedule when one or more frames came in.
Checking for a full queue on top of that could produce a reschedule loop,
as the checked RX ring location could contain undefined values
depending on activity in previous loops.
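A hedged sketch of the shape of the change, in diff form; rx_ring_full()
is a hypothetical stand-in for the removed check (the real code probed a
descriptor in the ring directly), while the surrounding lines match the
poll tail shown in the hunk further below:

 	if (!received) {
 		napi_complete(napi);
 		enable_irq(IRQ_CNS3XXX_SW_R0RXC);
-		if (rx_ring_full(sw))
-			eth_schedule_poll(sw);
 	}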
Signed-off-by: Koen Vandeputte <koen.vandeputte@ncentric.com>
This reverts commit 0772ab938c0aedd7f4cc7127059d6ce8cf929dfa.
Trying to optimize calls to eth_complete_tx() in this fashion causes a
regression where, when only sending, the tx queue can get disabled until a
packet is received. The original call to eth_schedule_poll() is already
scheduled work, so it should not cause a performance issue.
Signed-off-by: Tim Harvey <tharvey@gateworks.com>
SVN-Revision: 40592
We already clean up tx descriptors in the NAPI eth_poll() function, so it
should rarely happen that we run out of available descriptors in eth_xmit().
Thus we can clean them up only when needed, and return busy only when we
still don't have enough.
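A rough sketch of that approach under stated assumptions: tx_ring_free()
is a hypothetical helper standing in for the driver's own descriptor
accounting, and the private-data layout is simplified; eth_complete_tx()
is only invoked when the ring actually looks short:

#include <linux/netdevice.h>
#include <linux/skbuff.h>

struct sw;					/* driver-private state (assumed) */
unsigned int tx_ring_free(struct sw *sw);	/* hypothetical: free descriptor slots */
void eth_complete_tx(struct sw *sw);		/* reclaims finished tx descriptors */

static netdev_tx_t eth_xmit(struct sk_buff *skb, struct net_device *dev)
{
	struct sw *sw = netdev_priv(dev);	/* simplified private-data layout */
	unsigned int needed = skb_shinfo(skb)->nr_frags + 1;

	if (tx_ring_free(sw) < needed) {
		/* Descriptors are normally reclaimed from eth_poll(); only
		 * clean up here when we are actually short. */
		eth_complete_tx(sw);
		if (tx_ring_free(sw) < needed)
			return NETDEV_TX_BUSY;
	}

	/* ... map the skb and fill descriptors as usual ... */
	return NETDEV_TX_OK;
}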
Signed-off-by: Tim Harvey <tharvey@gateworks.com>
SVN-Revision: 39762
The combination of r35942 and r35952 causes an issue where eth_schedule_poll()
can be called from a different CPU between the call to napi_complete() and the
setting of cur_index, which can break the rx ring accounting and cause ethernet
latency and/or ethernet stalls. The issue can easily be reproduced by adding
a couple of artificial delays such as:
@@ -715,6 +715,7 @@ static int eth_poll(struct napi_struct *napi, int budget)
 	if (!received) {
 		napi_complete(napi);
+		udelay(1000);
 		enable_irq(IRQ_CNS3XXX_SW_R0RXC);
 	}
@@ -727,6 +728,7 @@ static int eth_poll(struct napi_struct *napi, int budget)
 	rx_ring->cur_index = i;
 	wmb();
+	udelay(1000);
 	enable_rx_dma(sw);
 	return received;
This patch moves the setting of cur_index back up to where it needs to be,
and addresses the original corner case that r35942 was trying to catch in an
improved fashion: it checks whether the rx descriptor ring has become full
before interrupts were re-enabled, so that a poll can be scheduled again and
the rx stall caused by rx interrupts ceasing to fire is avoided.
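Roughly sketched, with hypothetical helper names and call signatures
(rx_ring_full() stands in for the actual descriptor-ownership check), the
reworked tail of eth_poll() publishes cur_index before completing NAPI and
kicks one more poll if the ring filled up while the interrupt was off:

	/* tail of eth_poll(), after draining 'received' frames up to index i */
	rx_ring->cur_index = i;
	wmb();

	if (!received) {
		napi_complete(napi);
		enable_irq(IRQ_CNS3XXX_SW_R0RXC);

		/* If the hardware filled the ring while the interrupt was
		 * masked, no further rx interrupt will fire; schedule another
		 * poll instead of stalling. */
		if (rx_ring_full(sw))
			eth_schedule_poll(sw);
	}

	enable_rx_dma(sw);
	return received;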
Signed-off-by: Tim Harvey <tharvey@gateworks.com>
SVN-Revision: 39761
When an rx interrupt comes in, rx interrupts are disabled and NAPI
polling is scheduled. During the NAPI poll, the driver first processes
received frames in the ring, then fills the dma descriptor slots with
new buffers and calls tx complete, before finally re-enabling rx
interrupts and completing NAPI (if below the budget).
If the hardware rx queue overflows before napi_complete() is called,
the hardware will not raise any further rx interrupts and rx processing
stops completely.
Fix this by keeping NAPI polling scheduled until it completes a poll
without receiving any packets, and by handling NAPI completion before
refilling rx or completing tx.
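Sketched under the same caveats (eth_rx_refill() is a hypothetical name for
the buffer-refill step), the fixed ordering completes NAPI only after a pass
that received nothing, and only afterwards refills rx and reclaims tx:

	if (!received) {
		/* No frames in this pass: stop polling and re-arm the rx irq. */
		napi_complete(napi);
		enable_irq(IRQ_CNS3XXX_SW_R0RXC);
	}
	/* received > 0: return without completing, so NAPI stays scheduled
	 * and another poll will run. */

	eth_rx_refill(sw);	/* hand fresh buffers back to the dma ring */
	eth_complete_tx(sw);	/* reclaim finished tx descriptors */

	return received;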
SVN-Revision: 35942
function. This removes those from the dwc_otg driver and removes the patch
that comments out the linkage of udc-core so that the dwc_otg driver can
co-exist happily with other USB Device Controllers.
Signed-off-by: Tim Harvey <tharvey@gateworks.com>
Signed-off-by: Florian Fainelli <florian@openwrt.org>
SVN-Revision: 34475