Commit Graph

217 Commits

David S. Miller
6ab33d5171 Merge branch 'master' of master.kernel.org:/pub/scm/linux/kernel/git/davem/net-2.6
Conflicts:

	drivers/net/ixgbe/ixgbe_main.c
	include/net/mac80211.h
	net/phonet/af_phonet.c
2008-11-20 16:44:00 -08:00
Lennert Buytenhek
5377152264 mv643xx_eth: calculate descriptor pointer only once in rxq_refill()
Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2008-11-20 03:59:04 -08:00
Lennert Buytenhek
f61e554776 mv643xx_eth: move receive error handling out of line
Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2008-11-20 03:58:46 -08:00
Lennert Buytenhek
66e63ffbc0 mv643xx_eth: implement ->set_rx_mode()
Currently, if multiple unicast addresses are programmed into a
mv643xx_eth interface, the core networking will resort to enabling
promiscuous mode on the interface, as mv643xx_eth does not implement
->set_rx_mode().

This patch switches mv643xx_eth over from ->set_multicast_list()
to ->set_rx_mode(), and implements support for secondary unicast
addresses.  The hardware can handle multiple unicast addresses as
long as their first 11 nibbles are the same (i.e. are of the form
xx:xx:xx:xx:xx:xy where the x part is the same for all addresses), so
if that is the case, we use that mode.  If it's not the case, we enable
unicast promiscuous mode in the hardware, which is slightly better than
enabling promiscuous mode for multicasts as well, which is what would
happen before.

While we are at it, change the programming sequence so that we no
longer clear all filter bits first, which avoids dropping incoming
packets while the filter is being reprogrammed.

Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2008-11-20 03:58:27 -08:00
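
As an illustration of the address test described above, here is a
minimal standalone sketch (the function name is hypothetical, not the
driver's): a set of unicast addresses can be covered by the hardware
filter only if the addresses are identical except for the low four
bits of the last byte.

	#include <stdbool.h>
	#include <stdint.h>
	#include <string.h>

	/*
	 * True if all addresses share their first 11 nibbles, i.e. differ
	 * only in the low 4 bits of the last byte, so the hardware unicast
	 * filter can cover them; otherwise unicast promiscuous mode is
	 * needed.
	 */
	static bool uc_addrs_filterable(const uint8_t addrs[][6], int count)
	{
		int i;

		for (i = 1; i < count; i++) {
			if (memcmp(addrs[i], addrs[0], 5) != 0)
				return false;
			if ((addrs[i][5] & 0xf0) != (addrs[0][5] & 0xf0))
				return false;
		}

		return true;
	}
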
Lennert Buytenhek
66823b928d mv643xx_eth: inline txq_alloc_desc_index()
Since txq_alloc_desc_index() is a very simple function, and since
descriptor ring index handling for transmit reclaim, receive
processing and receive refill is already handled inline as well,
inline txq_alloc_desc_index() into its two call sites.

Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2008-11-20 03:58:09 -08:00
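
For readers unfamiliar with the ring bookkeeping, this is roughly what
the inlined allocation amounts to (the structure and field names below
are stand-ins, not the driver's): hand out the current slot, advance
the ring pointer with wrap-around, and count one more in-flight
descriptor.

	struct tx_ring {
		int curr;	/* next descriptor slot to hand out */
		int size;	/* number of descriptors in the ring */
		int used;	/* descriptors handed out, not yet reclaimed */
	};

	static int tx_alloc_slot(struct tx_ring *ring)
	{
		int slot = ring->curr;

		if (++ring->curr == ring->size)
			ring->curr = 0;
		ring->used++;

		return slot;
	}
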
Lennert Buytenhek
37a6084f4b mv643xx_eth: introduce per-port register area pointer
The mv643xx_eth driver uses the rdl()/wrl() macros to read and
write hardware registers.  Per-port registers are accessed in the
following way:

	#define PORT_STATUS(p)			(0x0444 + ((p) << 10))

	[...]

	static inline u32 rdl(struct mv643xx_eth_private *mp, int offset)
	{
		return readl(mp->shared->base + offset);
	}

	[...]

	port_status = rdl(mp, PORT_STATUS(mp->port_num));

By giving the per-port 'struct mv643xx_eth_private' its own
'void __iomem *base' pointer that points to the per-port register
area, we can get rid of both the double indirection and the << 10
that is done for every per-port register access -- this patch does
that.

Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2008-11-20 03:57:36 -08:00
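
With the per-port 'base' pointer in place, the accessor can drop both
the extra pointer chase and the '<< 10'.  A sketch of the resulting
shape (names and offsets here are illustrative, assuming the per-port
pointer is set up once at probe time to point at that port's register
window):

	#define PORT_STATUS			0x0044

	[...]

	/* mp->base = mp->shared->base + 0x0400 + (mp->port_num << 10); */

	static inline u32 rdlp(struct mv643xx_eth_private *mp, int offset)
	{
		return readl(mp->base + offset);
	}

	[...]

	port_status = rdlp(mp, PORT_STATUS);
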
Lennert Buytenhek
10a9948d13 mv643xx_eth: checkpatch fixes
Fix up a couple of coding style issues caught by checkpatch.

Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2008-11-20 03:57:16 -08:00
Lennert Buytenhek
11b4aa03b2 mv643xx_eth: fix recycle check bound
When mv643xx_eth allocates skbuffs, it adds
'dma_get_cache_alignment() - 1' to the length it needs, so that it can
align the skb's ->data pointer to a cache boundary.  When checking
whether a transmitted skbuff can be reused as a receive buffer, these
bytes need to be included in the minimum bound for the recycle check.

Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2008-11-20 01:39:52 -08:00
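
In other words, the recycle bound has to account for the same
alignment slack; a standalone model of the check (the helper name is
made up):

	#include <stdbool.h>
	#include <stddef.h>

	/*
	 * A transmitted skb may be reused as a receive buffer only if it is
	 * at least as large as a receive allocation: the receive frame size
	 * plus the "dma_get_cache_alignment() - 1" bytes of alignment slack
	 * added when receive buffers are allocated.
	 */
	static bool skb_large_enough_to_recycle(size_t skb_size,
						size_t rx_frame_size,
						size_t cache_align)
	{
		return skb_size >= rx_frame_size + cache_align - 1;
	}
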
Lennert Buytenhek
bcb3336ce4 mv643xx_eth: fix the order of mdiobus_{unregister, free}() calls
Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2008-11-20 01:39:40 -08:00
David S. Miller
9eeda9abd1 Merge branch 'master' of master.kernel.org:/pub/scm/linux/kernel/git/davem/net-2.6
Conflicts:

	drivers/net/wireless/ath5k/base.c
	net/8021q/vlan_core.c
2008-11-06 22:43:03 -08:00
David S. Miller
babcda74e9 drivers/net: Kill now superfluous ->last_rx stores.
The generic packet receive code takes care of setting
netdev->last_rx when necessary, for the sake of the
bonding ARP monitor.

Drivers need not do it any more.

Some cases had to be skipped over because the drivers
were making use of the ->last_rx value themselves.

Signed-off-by: David S. Miller <davem@davemloft.net>
2008-11-03 21:11:17 -08:00
Lennert Buytenhek
ee04448d88 mv643xx_eth: fix SMI bus access timeouts
The mv643xx_eth mii bus implementation uses wait_event_timeout() to
wait for SMI completion interrupts.

If wait_event_timeout() returned zero, mv643xx_eth would conclude
that the SMI access had timed out, but this is not necessarily true --
wait_event_timeout() can also return zero when the SMI completion
interrupt did arrive in time but it took longer than the requested
timeout for the process performing the SMI access to be scheduled
again.  This led to occasional spurious SMI access timeouts when the
system was under heavy load.

The fix is to ignore the return value of wait_event_timeout(), and
to re-check the SMI done bit after wait_event_timeout() returns to
determine whether or not the SMI access timed out.

Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
2008-11-03 15:23:15 -05:00
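
The resulting pattern, modelled here in plain C with stand-in helpers
(in the driver the wait is wait_event_timeout() on the SMI completion
interrupt and the check is a register read): the wait's return value
is ignored, and the hardware condition itself decides whether the
access timed out.

	#include <stdbool.h>

	static bool smi_done_bit;	/* stand-in for the SMI done flag */

	static bool smi_is_done(void)
	{
		return smi_done_bit;
	}

	static int wait_for_smi_done(int timeout_ms)
	{
		(void)timeout_ms;
		/* May report "timed out" even though the completion arrived,
		 * if this task was simply scheduled in late. */
		return 0;
	}

	static int smi_wait_ready(int timeout_ms)
	{
		/* Ignore the wait's return value... */
		wait_for_smi_done(timeout_ms);

		/* ...and re-check the done bit itself to decide. */
		return smi_is_done() ? 0 : -1;
	}
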
Johannes Berg
e174961ca1 net: convert print_mac to %pM
This converts pretty much everything from print_mac over to the %pM
format specifier.  There were a few things that had conflicts which
I have just dropped for now, no harm done.

I've built an allyesconfig with this and looked at the files
that weren't built very carefully, but it's a huge patch.

Signed-off-by: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
2008-10-27 17:06:18 -07:00
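
For reference, the shape of such a conversion, shown as an
illustrative fragment rather than an actual hunk from this patch:

	/* before: temporary buffer plus print_mac() */
	DECLARE_MAC_BUF(mac);
	printk(KERN_INFO "%s: MAC %s\n",
	       dev->name, print_mac(mac, dev->dev_addr));

	/* after: the %pM extension formats the address directly */
	printk(KERN_INFO "%s: MAC %pM\n", dev->name, dev->dev_addr);
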
Lennert Buytenhek
c3efab8ed4 mv643xx_eth: include linux/ip.h to fix build
mv643xx_eth uses ip_hdr() (defined in linux/ip.h), but relied on
another header file to include the needed header file indirectly.
In latest net-next this indirect include chain is gone, so the
driver fails to build.  Include linux/ip.h explicitly to fix this.

Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2008-10-08 17:01:31 -07:00
Lennert Buytenhek
298cf9beb9 phylib: move to dynamic allocation of struct mii_bus
This patch introduces mdiobus_alloc() and mdiobus_free(), and
makes all mdio bus drivers use these functions to allocate their
struct mii_bus'es dynamically.

Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Acked-by: Andy Fleming <afleming@freescale.com>
2008-10-08 16:29:57 -07:00
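
A sketch of the bus-driver lifecycle this introduces (error handling
trimmed, function names hypothetical); note the teardown order --
unregister before free -- which is exactly what the mv643xx_eth fix
bcb3336ce4 earlier in this list restores:

	#include <linux/errno.h>
	#include <linux/phy.h>

	static struct mii_bus *bus;

	static int example_mdio_init(void)
	{
		bus = mdiobus_alloc();		/* dynamically allocated now */
		if (bus == NULL)
			return -ENOMEM;

		/* ... fill in bus->name, bus->read, bus->write, ... */

		return mdiobus_register(bus);
	}

	static void example_mdio_exit(void)
	{
		mdiobus_unregister(bus);	/* unregister first... */
		mdiobus_free(bus);		/* ...then free */
	}
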
Lennert Buytenhek
18ee49ddb0 phylib: rename mii_bus::dev to mii_bus::parent
In preparation for giving mii_bus objects a device tree presence of
their own, rename struct mii_bus's ->dev member to ->parent, since
having a 'struct device *dev' that points to our parent device
conflicts with introducing a 'struct device dev' representing our own
device.

Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Acked-by: Andy Fleming <afleming@freescale.com>
2008-10-08 16:27:49 -07:00
Lennert Buytenhek
2bcb4b0f11 mv643xx_eth: hook up skb recycling
This gives a nice increase in the maximum loss-free packet forwarding
rate in routing workloads.

Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2008-10-01 02:33:57 -07:00
Lennert Buytenhek
042af53c78 mv643xx_eth: bump version to 1.4
Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
2008-09-19 19:34:05 +02:00
Lennert Buytenhek
ed94493fb3 mv643xx_eth: convert to phylib
Switch mv643xx_eth from using drivers/net/mii.c to using phylib.

Since the mv643xx_eth hardware does all the link state handling and
PHY polling, the driver will use phylib in the "Doing it all yourself"
mode described in the phylib documentation.

Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
Acked-by: Andy Fleming <afleming@freescale.com>
2008-09-19 19:34:00 +02:00
Lennert Buytenhek
4ff3495a51 mv643xx_eth: enforce frequent hardware statistics polling
If we don't poll the hardware statistics counters at least once every
~34 seconds, overflow might occur without us noticing.  So, set up a
timer to poll the statistics counters at least once every 30 seconds.

Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
2008-09-19 05:13:54 +02:00
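
The ~34 second figure follows from the counter width: a 32-bit byte
counter at gigabit line rate (125,000,000 bytes/s) wraps after
2^32 / 125,000,000, which is about 34.4 seconds, so a 30 second
polling interval stays safely ahead of the wrap.  A quick standalone
check of that arithmetic:

	#include <stdio.h>

	int main(void)
	{
		/* 32-bit byte counter, worst case 1 Gbit/s = 125e6 bytes/s */
		double wrap_seconds = 4294967296.0 / 125000000.0;

		printf("counter wraps after %.1f seconds\n", wrap_seconds);
		return 0;
	}
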
Lennert Buytenhek
4df89bd5a5 mv643xx_eth: deal with unexpected ethernet header sizes
When the IP header doesn't start 14, 18, 22 or 26 bytes into the packet
(which are the only four cases that the hardware can deal with if asked
to do IP checksumming on transmit), invoke the software checksum helper
instead of letting the packet go out with a corrupt checksum inserted
into the packet in the wrong place.

Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
2008-09-19 05:13:31 +02:00
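
The constraint reduces to a simple test on where the IP header starts;
a standalone sketch (the function name is hypothetical -- 14 bytes is
a plain Ethernet header, and the larger values presumably cover the
VLAN-tagged and LLC/SNAP variants the hardware also understands):

	#include <stdbool.h>

	/*
	 * The transmit checksum engine can only be used when the IP header
	 * starts at one of these offsets; any other offset must fall back
	 * to the software checksum helper.
	 */
	static bool hw_tx_csum_possible(int ip_hdr_offset)
	{
		switch (ip_hdr_offset) {
		case 14:
		case 18:
		case 22:
		case 26:
			return true;
		default:
			return false;
		}
	}
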
Lennert Buytenhek
170e7108a3 mv643xx_eth: fix receive checksumming
We have to explicitly tell the hardware to include the pseudo-header
when doing receive checksumming, otherwise hardware checksumming will
fail for every received packet and we'll end up setting CHECKSUM_NONE
on every received packet.

While we're at it, when skb->ip_summed is set to CHECKSUM_UNNECESSARY
on received packets, skb->csum is supposed to be undefined, and thus
there is no need to set it.

Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
2008-09-19 04:47:59 +02:00
Lennert Buytenhek
457b1d5a4b mv643xx_eth: add support for chips without transmit bandwidth control
Add support for mv643xx_eth versions that have no transmit bandwidth
control registers at all, such as the ethernet block found in the
Marvell 88F6183 ARM SoC.

Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
2008-09-14 15:53:29 +02:00
Lennert Buytenhek
6b8f90c276 mv643xx_eth: avoid reading ->byte_cnt twice during receive processing
Currently, the receive processing reads ->byte_cnt twice (once to
update interface statistics and once to properly size the data area
of the received skb).  Since receive descriptors live in uncached
memory, caching this value in a local variable saves one uncached
access, and increases routing performance a tiny little bit more.

Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
2008-09-14 15:53:28 +02:00
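
The change boils down to reading the field once into a local; a
minimal model (only ->byte_cnt is taken from the driver, the other
names are stand-ins), where 'volatile' plays the role of the uncached
descriptor memory:

	#include <stdint.h>

	struct rx_desc {
		volatile uint16_t byte_cnt;	/* lives in uncached memory */
		/* ... */
	};

	static void rx_account(struct rx_desc *desc, unsigned long *rx_bytes,
			       unsigned int *skb_len)
	{
		uint16_t byte_cnt = desc->byte_cnt;	/* one uncached read */

		*rx_bytes += byte_cnt;		/* interface statistics */
		*skb_len = byte_cnt;		/* size the skb data area */
	}
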
Lennert Buytenhek
2b4a624d70 mv643xx_eth: shrink default receive and transmit queue sizes
Since the size of the receive queue is directly related to the data
cache footprint of the driver (between refilling a receive ring entry
with a fresh skb and receiving a packet in that entry, queue_size - 1
other skbs will have been touched), shrink the default receive queue
size to a saner number of entries, as 400 is definite overkill for
almost all workloads.

While we are at it, trim the default transmit queue size a bit as well.

Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
2008-09-14 14:18:10 +02:00
Lennert Buytenhek
99ab08e091 mv643xx_eth: replace array of skbs awaiting transmit completion with a queue
Get rid of the skb pointer array that we currently use for transmit
reclaim, and replace it with an skb queue, to which skbuffs are appended
when they are passed to the xmit function, and removed from the front
and freed when we do transmit queue reclaim and hit a descriptor with
the 'owned by device' bit clear and 'last descriptor' bit set.

Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
2008-09-14 14:09:06 +02:00
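
In kernel terms the bookkeeping becomes roughly the following fragment
(the tx_skb queue name is a placeholder, not necessarily the field the
patch adds): append in transmit order, pop from the front at reclaim
time.

	#include <linux/skbuff.h>

	/* on transmit, remember the skb in FIFO order: */
	__skb_queue_tail(&txq->tx_skb, skb);

	/* on reclaim, when the hardware hands back a descriptor with the
	 * 'owned by device' bit clear and the 'last descriptor' bit set,
	 * free the oldest queued skb: */
	skb = __skb_dequeue(&txq->tx_skb);
	dev_kfree_skb(skb);
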
Lennert Buytenhek
a418950c13 mv643xx_eth: avoid dropping tx lock during transmit reclaim
By moving DMA unmapping during transmit reclaim back under the netif
tx lock, we avoid the situation where we read the DMA address and buffer
length from the descriptor under the lock and then not do anything with
that data after dropping the lock on platforms where the DMA unmapping
routines are all NOPs (which is the case on all ARM platforms that
mv643xx_eth is used on at least).

This saves two uncached reads, which makes a small but measurable
performance difference in routing benchmarks.

Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
2008-09-14 14:09:06 +02:00
Lennert Buytenhek
8fd89211bf mv643xx_eth: switch to netif tx queue lock, get rid of private spinlock
Since our ->hard_start_xmit() method is already called under spinlock
protection (the netif tx queue lock), we can simply make that lock
cover the private transmit state (descriptor ring indexes et al.) as
well, which avoids having to use a private lock to protect that state.

Since this was the last user of the driver-private spinlock, it can
be killed off.

Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
2008-09-14 14:09:05 +02:00
Lennert Buytenhek
1fa38c586e mv643xx_eth: move all work to the napi poll handler
Move link status handling, transmit reclaim and TX_END handling from
the interrupt handler to the napi poll handler.  This allows switching
->lock over to a non-IRQ-safe lock and removes all explicit interrupt
disabling from the driver.

Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
2008-09-14 14:09:00 +02:00
Lennert Buytenhek
e5ef1de198 mv643xx_eth: transmit multiqueue support
As all the infrastructure for multiple transmit queues already exists
in the driver, this patch is entirely trivial.

The individual transmit queues are still serialised by the driver's
per-port private spinlock, but that will disappear (i.e. be replaced
by the per-subqueue ->_xmit_lock) in a subsequent patch.

Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
2008-09-05 06:33:59 +02:00
Lennert Buytenhek
befefe2177 mv643xx_eth: delete unused and uninteresting interrupt source mask bits
Delete a couple of unused and uninteresting interrupt source mask bits:
- The receive resource underrun interrupt sources are uninteresting
  because if we are in out-of-memory mode, we are already dealing with
  the issue, and we don't need the hardware to remind us again that we
  are out of memory.
- The LINK and PHY interrupt sources can be coalesced into one define,
  since we always use them together.
- The transmit resource underrun interrupt source can be disabled since
  we never activate the head descriptor of a paged skb until the
  fragments are all activated, so transmit underrun during a packet
  should never happen.
- The INT_EXT_TX_0 define is never used.

Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
2008-09-05 06:33:59 +02:00
Lennert Buytenhek
4fdeca3f4e mv643xx_eth: get rid of netif_{stop,wake}_queue() calls on link down/up
There is no need to call netif_{stop,wake}_queue() when the link goes
down/up, as the networking already takes care of this internally.

Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
2008-09-05 06:33:59 +02:00
Lennert Buytenhek
ac840605f3 mv643xx_eth: remove force_phy_addr field
Currently, there are two different fields in the
mv643xx_eth_platform_data struct that together describe the PHY
address -- one field (phy_addr) has the address of the PHY, but if
that address is zero, a second field (force_phy_addr) needs to be
set to distinguish the actual address zero from a zero due to not
having filled in the PHY address explicitly (which should mean
'use the default PHY address').

If we are a bit smarter about the encoding of the phy_addr field,
we can avoid the need for a second field -- this patch does that.

Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
2008-09-05 06:33:59 +02:00
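
One possible encoding, shown purely as an illustration (the flag bit
and macro names are assumptions, not necessarily what the patch uses):
reserve a bit that marks the address as explicitly specified, so that
a raw zero can still mean 'use the default'.

	/* hypothetical encoding: bit 7 means "an address was filled in" */
	#define PHY_ADDR_DEFAULT	0
	#define PHY_ADDR(x)		(0x80 | (x))

	static int phy_addr_get(int phy_addr_field, int default_addr)
	{
		if (phy_addr_field == PHY_ADDR_DEFAULT)
			return default_addr;

		return phy_addr_field & 0x1f;	/* 5-bit PHY address */
	}
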
Lennert Buytenhek
fc0eb9f226 mv643xx_eth: smi sharing is a per-unit property, not a per-port one
Which top-level unit's SMI interface to use should be a property of
the top-level unit, not of the individual ports.  This patch moves the
->shared_smi pointer from the per-port platform data to the global
platform data.

Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
2008-09-05 06:33:58 +02:00
Lennert Buytenhek
f7981c1c67 mv643xx_eth: require contiguous receive and transmit queue numbering
Simplify receive and transmit queue handling by requiring the set
of queue numbers to be contiguous starting from zero.

Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
2008-09-05 06:33:58 +02:00
Lennert Buytenhek
17cd0a59f9 mv643xx_eth: get rid of compile-time configurable transmit checksumming
Get rid of the mv643xx_eth-internal MV643XX_ETH_CHECKSUM_OFFLOAD_TX
compile-time option.  Using transmit checksumming is the sane default,
and anyone wanting to disable it should use ethtool(8) instead of
recompiling their kernels.

Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
2008-09-05 06:33:58 +02:00
Lennert Buytenhek
2257e05c17 mv643xx_eth: get rid of receive-side locking
By having the receive out-of-memory handling timer schedule the napi
poll handler and then doing oom processing from the napi poll handler,
all code that touches receive state moves to napi context, letting us
get rid of all explicit locking in the receive paths since the only
mutual exclusion we need anymore at that point is protection against
reentering ourselves, which is provided by napi synchronisation.

Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
2008-09-05 06:33:58 +02:00
Lennert Buytenhek
78fff83b03 mv643xx_eth: make napi unconditional
Make napi unconditional on the receive side, so that we can get rid
of all the locking and local interrupt disabling in the receive path.

Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
2008-09-05 06:33:58 +02:00
Lennert Buytenhek
45c5d3bc1e mv643xx_eth: use the SMI done interrupt to wait for SMI access completion
If the platform code has passed us the IRQ number of the mv643xx_eth
top-level error interrupt, use the error interrupt to wait for SMI
access completion instead of polling the SMI busy bit, since SMI bus
accesses can take up to tens of milliseconds.

Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
2008-09-05 06:33:58 +02:00
Lennert Buytenhek
2b3ba0e3ea mv643xx_eth: switch ->phy_lock from a spinlock to a mutex
Since commit 81600eea98 ("mv643xx_eth:
use auto phy polling for configuring (R)(G)MII interface"),
mv643xx_eth no longer does SMI accesses from interrupt context.  The
only other callers that do SMI accesses all do them from process
context, which means we can switch the PHY lock from a spinlock to a
mutex, and get rid of the extra locking in some ethtool methods.

Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
2008-09-05 06:33:57 +02:00
Lennert Buytenhek
9da7874575 mv643xx_eth: get rid of modulo operations
Get rid of the modulo operations that are currently used for
computing successive TX/RX descriptor ring indexes.

Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
2008-09-05 06:33:57 +02:00
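
The replacement is the usual compare-and-reset wrap; a standalone
before/after illustration (ring sizes need not be powers of two here,
so the compiler cannot turn the '%' into a mask, and integer division
is comparatively expensive on these CPUs):

	/* before: advance a ring index with a modulo operation */
	static int ring_next_mod(int index, int ring_size)
	{
		return (index + 1) % ring_size;
	}

	/* after: compare and reset, no division needed */
	static int ring_next(int index, int ring_size)
	{
		if (++index == ring_size)
			index = 0;

		return index;
	}
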
Lennert Buytenhek
2a1867a76f mv643xx_eth: get rid of IRQF_SAMPLE_RANDOM
Using IRQF_SAMPLE_RANDOM for the mv643xx_eth interrupt handler
significantly increases interrupt processing overhead, so get rid
of it.

Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
2008-09-05 06:33:57 +02:00
Lennert Buytenhek
3a499481c1 mv643xx_eth: fix receive buffer DMA unmapping
When tearing down a DMA mapping for a receive buffer, we should pass
dma_unmap_single() the exact same address that dma_map_single() gave
us when we originally set up the mapping.

Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
2008-09-05 06:33:57 +02:00
Lennert Buytenhek
b987384123 mv643xx_eth: fix 'netdev_priv(dev) == dev->priv' assumption
Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
2008-09-05 06:33:57 +02:00
Lennert Buytenhek
c4560318cf mv643xx_eth: bump version to 1.3
Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
2008-08-24 07:36:39 +02:00
Lennert Buytenhek
abe787170b mv643xx_eth: enforce multiple-of-8-bytes receive buffer size restriction
The mv643xx_eth hardware ignores the lower three bits of the buffer
size field in receive descriptors, causing the reception of full-sized
packets to fail at some MTUs.  Fix this by rounding the size of
allocated receive buffers up to a multiple of eight bytes.

While we are at it, add a bit of extra space to each receive buffer so
that we can handle multiple vlan tags on ingress.

Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
2008-08-24 03:33:44 +02:00
Lennert Buytenhek
9e1f377242 mv643xx_eth: fix NULL pointer dereference in rxq_process()
When we are low on memory, the assumption that every descriptor in the
receive ring will have an skbuff associated with it does not hold.

rxq_process() was assuming that if the receive descriptor it is working
on is not owned by the hardware, it can safely be processed and handed
to the networking stack.  But a descriptor in the receive ring not being
owned by the hardware can also happen when we are low on memory and did
not manage to refill the receive ring fully.

This patch changes rxq_process()'s bailout condition from "the first
receive descriptor to be processed is owned by the hardware" to "the
first receive descriptor to be processed is owned by the hardware OR
the number of valid receive descriptors in the ring is zero".

Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
2008-08-24 03:33:40 +02:00
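
The new bailout condition, modelled standalone (the ownership bit and
field names are placeholders): a slot is only processed if the
hardware has handed it back and the ring actually contains refilled
descriptors.

	#include <stdbool.h>
	#include <stdint.h>

	#define OWNED_BY_DMA	0x80000000u	/* placeholder ownership bit */

	struct rx_slot {
		uint32_t cmd_sts;
		/* ... descriptor fields and the associated skb pointer ... */
	};

	static bool rx_slot_ready(const struct rx_slot *slot, int refilled)
	{
		/*
		 * Under memory pressure a slot can be "not owned by DMA"
		 * simply because it never got a buffer, so the count of
		 * refilled descriptors has to be checked as well.
		 */
		return refilled > 0 && !(slot->cmd_sts & OWNED_BY_DMA);
	}
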
Lennert Buytenhek
8e0b1bf6ac mv643xx_eth: fix inconsistent lock semantics
Nicolas Pitre noted that mv643xx_eth_poll was incorrectly using
non-IRQ-safe locks while checking whether to wake up the netdevice's
transmit queue.  Convert the locking to *_irq() variants, since we
are running from softirq context where interrupts are enabled.

Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
2008-08-24 03:33:16 +02:00
Lennert Buytenhek
92c70f27d2 mv643xx_eth: fix double add_timer() on the receive oom timer
Commit 12e4ab79cd ("mv643xx_eth: be
more agressive about RX refill") changed the condition for the receive
out-of-memory timer to be scheduled from "the receive ring is empty"
to "the receive ring is not full".

This can lead to a situation where the receive out-of-memory timer
is already pending because a previous rxq_refill() call failed to
refill the receive ring entirely as a result of being out of memory.
If rxq_refill() is then called again as a side effect of a packet
receive interrupt, and that call again fails to refill the entire
receive ring with fresh empty skbuffs because we are still out of
memory, it ends up calling add_timer() on the already scheduled
out-of-memory timer.

This patch fixes this issue by changing the add_timer() call in
rxq_refill() to a mod_timer() call.  If the OOM timer was not already
scheduled, this will behave as before, whereas if it was already
scheduled, this patch will push back its firing time a bit, which is
safe because we've (unsuccessfully) attempted to refill the receive
ring just before we do this.

Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
2008-08-24 03:33:02 +02:00
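
This is the standard 'use mod_timer() to (re)arm a timer that may
already be pending' idiom; a fragment for illustration (the timer
field name and the delay shown are placeholders):

	#include <linux/jiffies.h>
	#include <linux/timer.h>

	/* add_timer() must not be called on a timer that is still pending;
	 * mod_timer() copes with both the armed and the idle case, so it is
	 * safe to call from every unsuccessful refill attempt. */
	mod_timer(&rxq->rx_oom_timer, jiffies + (HZ / 10));
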
Lennert Buytenhek
819ddcafb3 mv643xx_eth: fix NAPI 'rotting packet' issue
When a receive interrupt occurs, mv643xx_eth would first process the
receive descriptors and then ACK the receive interrupt, instead of the
other way round.

This would leave a small race window between processing the last
receive descriptor and clearing the receive interrupt status in which
a new packet could come in, which would then 'rot' in the receive
ring until the next receive interrupt would come in.

Fix this by ACKing (clearing) the receive interrupt condition before
processing the receive descriptors.

Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
2008-08-24 03:32:56 +02:00