We believe that our earlier patch for supporting multiple hardware
receive queues per enic device requires more internal testing. At this
point, we think that it's best to disable the use of multiple receive
queues. This patch does just that.
We also continue to disallow multiple hardware transmit queues per
device, but change the way this is enforced so that it stays
consistent with how receive queues are handled.
Signed-off-by: Christian Benvenuti <benve@cisco.com>
Signed-off-by: Danny Guo <dannguo@cisco.com>
Signed-off-by: Vasanthy Kolluri <vkolluri@cisco.com>
Signed-off-by: Roopa Prabhu <roprabhu@cisco.com>
Signed-off-by: David Wang <dwang2@cisco.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The only troublesome bit here is __mkroute_output(), which wants
to override res->fi and res->type; compute those in local
variables instead.
Signed-off-by: David S. Miller <davem@davemloft.net>
GCC emits all kinds of crazy zero extensions when we go from signed
int to unsigned short, etc.
This transformation has to be legal because:
1) In tkey_extract_bits() and mask_pfx(), the values are used to
perform shifts, for which negative values have undefined behavior in C.
2) In fib_table_lookup() we perform comparisons with unsigned
values, constants, and additions, none of which should
encounter negative values.
Signed-off-by: David S. Miller <davem@davemloft.net>
This allows avoiding multiple writes to the initial __refcnt.
The simplest cases of wanting an initial reference of "1"
in ipv4 and ipv6 have been converted; the rest have been left
alone and kept at the existing "0".
Signed-off-by: David S. Miller <davem@davemloft.net>
This also allows us to combine all the dst->flags settings and avoid
read/modify/write sequences to this struct member.
Signed-off-by: David S. Miller <davem@davemloft.net>
There's a lot of redundancy and unnecessary stack frames
in the output route creation path.
1) Make __mkroute_output() return error pointers.
2) Eliminate ip_mkroute_output() entirely, made possible by #1.
3) Call __mkroute_output() directly and handle the returned error
pointers in ip_route_output_slow().
Signed-off-by: David S. Miller <davem@davemloft.net>
This also enables TSOv6, TSO-ECN, and UFO as loopback clearly can handle them.
Signed-off-by: Michał Mirosław <mirq-linux@rere.qmqm.pl>
Signed-off-by: David S. Miller <davem@davemloft.net>
Introduce NETIF_F_RXCSUM to replace device-private flags for RX checksum
offload. Integrate it with ndo_fix_features.
ethtool_op_get_rx_csum() is removed altogether as nothing in-tree uses it.
Signed-off-by: Michał Mirosław <mirq-linux@rere.qmqm.pl>
Signed-off-by: David S. Miller <davem@davemloft.net>
This introduces a new framework for setting device features.
It consists of:
- new fields in struct net_device:
+ hw_features - features that hw/driver supports toggling
+ wanted_features - features that user wants enabled, when possible
- new netdev_ops:
+ feat = ndo_fix_features(dev, feat) - API checking constraints for
enabling features or their combinations
+ ndo_set_features(dev) - API updating hardware state to match
changed dev->features (see the driver-side sketch after this list)
- new ethtool commands:
+ ETHTOOL_GFEATURES/ETHTOOL_SFEATURES: get/set dev->wanted_features
and trigger device reconfiguration if resulting dev->features
changed
+ ETHTOOL_GSTRINGS(ETH_SS_FEATURES): get the names (meanings) of the feature bits
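As a hedged, driver-side sketch of how the hooks above might be wired up
(following the signatures stated in this message; the foo_* names, the
SG/TSO constraint and the foo_hw_enable_rx_csum() helper are purely
illustrative, not part of this patch):

    static u32 foo_fix_features(struct net_device *dev, u32 features)
    {
            /* Example constraint: TSO depends on scatter/gather. */
            if (!(features & NETIF_F_SG))
                    features &= ~NETIF_F_TSO;
            return features;
    }

    static int foo_set_features(struct net_device *dev)
    {
            /* Push the already fixed-up dev->features into hardware;
             * foo_hw_enable_rx_csum() is a hypothetical driver helper. */
            foo_hw_enable_rx_csum(dev, !!(dev->features & NETIF_F_RXCSUM));
            return 0;
    }

    static void foo_init_features(struct net_device *dev)
    {
            /* Probe path: advertise which feature bits may be toggled. */
            dev->hw_features = NETIF_F_SG | NETIF_F_TSO | NETIF_F_RXCSUM;
            dev->features |= dev->hw_features;
    }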
Signed-off-by: Michał Mirosław <mirq-linux@rere.qmqm.pl>
Signed-off-by: David S. Miller <davem@davemloft.net>
This allows GRO to be enabled even if RX csum is disabled. GRO will not
be used for packets without a hardware checksum anyway.
Signed-off-by: Michał Mirosław <mirq-linux@rere.qmqm.pl>
Signed-off-by: David S. Miller <davem@davemloft.net>
This is needed for the unified offloads patch.
Signed-off-by: Michał Mirosław <mirq-linux@rere.qmqm.pl>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Christian Benvenuti <benve@cisco.com>
Signed-off-by: Danny Guo <dannguo@cisco.com>
Signed-off-by: Vasanthy Kolluri <vkolluri@cisco.com>
Signed-off-by: Roopa Prabhu <roprabhu@cisco.com>
Signed-off-by: David Wang <dwang2@cisco.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
During a device reset, clear the counter for the number of registered
unicast addresses. Also, rename the routines that update the unicast and
multicast address lists.
Signed-off-by: Christian Benvenuti <benve@cisco.com>
Signed-off-by: Danny Guo <dannguo@cisco.com>
Signed-off-by: Vasanthy Kolluri <vkolluri@cisco.com>
Signed-off-by: Roopa Prabhu <roprabhu@cisco.com>
Signed-off-by: David Wang <dwang2@cisco.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Support getting and setting the RX indirection table via ethtool.
Signed-off-by: Tom Herbert <therbert@google.com>
Acked-by: Eric Dumazet <eric.dumazet@gmail.com>
Acked-by: Eilon Greenstein <eilong@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Implement the ndo_setup_tc() operation with 2 traffic classes.
Current Solarstorm controllers do not implement TX queue priority, but
they do allow queues to be 'paced' with an enforced delay between
packets. Paced and unpaced queues are scheduled in round-robin within
two separate hardware bins (paced queues with a large delay may be
placed into a third bin temporarily, but we won't use that). If there
are queues in both bins, the TX scheduler will alternate between them.
If we make high-priority queues unpaced and best-effort queues paced,
and high-priority queues are mostly empty, a single high-priority queue
can then instantly take 50% of the packet rate regardless of how many
of the best-effort queues have descriptors outstanding.
We do not actually want an enforced delay between packets on best-
effort queues, so we set the pace value to a reserved value that
actually results in a delay of 0.
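For reference, a hedged sketch of an ndo_setup_tc() implementation along
these lines, using the generic netdev TC helpers (foo_setup_tc and the
n_channels placeholder are illustrative, not the actual sfc code):

    static int foo_setup_tc(struct net_device *net_dev, u8 num_tc)
    {
            const unsigned int n_channels = 4; /* placeholder channel count */
            unsigned int tc;

            if (num_tc > 2)                 /* only two traffic classes */
                    return -EINVAL;

            if (!num_tc) {
                    netdev_reset_tc(net_dev);
                    return netif_set_real_num_tx_queues(net_dev, n_channels);
            }

            if (netdev_set_num_tc(net_dev, num_tc))
                    return -EINVAL;

            /* One group of TX queues per traffic class. */
            for (tc = 0; tc < num_tc; tc++)
                    netdev_set_tc_queue(net_dev, tc, n_channels,
                                        tc * n_channels);

            return netif_set_real_num_tx_queues(net_dev,
                                                num_tc * n_channels);
    }

The driver would then mark the high-priority class's queues unpaced and
the best-effort class's queues paced with the reserved "no delay" pace
value described above.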
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
efx_channel_get_{rx,tx}_queue() currently return NULL if the channel
isn't used for traffic in that direction. In most cases this is a
bug, but some callers rely on it as an existence test.
Add existence test functions efx_channel_has_{rx_queue,tx_queues}()
and use them as appropriate.
Change efx_channel_get_{rx,tx}_queue() to assert that the requested
queue exists.
Remove now-redundant initialisation from efx_set_channels().
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
efx_hard_start_xmit() needs to implement a mapping which is the
inverse of tx_queue::core_txq. Move the initialisation of
tx_queue::core_txq next to efx_hard_start_xmit() to make the
connection more obvious.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
If the root qdisc for a net device is mqprio, and the driver's
ndo_setup_tc() operation dynamically adds and removes TX queues,
netif_set_real_num_tx_queues() will be called during device
unregistration to remove the extra TX queues when the qdisc is
destroyed. Currently this causes the corresponding kobjects
to be leaked, and the device's reference count never drops to 0.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
This fixes warnings when CONFIG_DMA_API_DEBUG=y:
NULL NULL: DMA-API: device driver tries to free DMA memory it has not allocated [device address=0x000000004781a020] [size=64 bytes]
net eth0: DMA-API: device driver frees DMA memory with different size [device address=0x000000004781a020] [map size=2048 bytes] [unmap size=64 bytes]
Moreover, pass the platform device to dma_{,un}map_single(), which makes
more sense because the logical network device doesn't know anything
about DMA.
Passing the platform device was a suggestion by Lothar Waßmann.
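A before/after sketch of the RX buffer mapping (field and macro names as
in a typical fec-style driver; treat them as illustrative context, not a
verbatim hunk):

    /* before: the logical net_device is used as the DMA device */
    bdp->cbd_bufaddr = dma_map_single(&ndev->dev, data,
                                      FEC_ENET_RX_FRSIZE, DMA_FROM_DEVICE);

    /* after: the platform device, which actually performs the DMA */
    bdp->cbd_bufaddr = dma_map_single(&fep->pdev->dev, data,
                                      FEC_ENET_RX_FRSIZE, DMA_FROM_DEVICE);

The matching dma_unmap_single() call must use the same struct device and
the same size, which is exactly what the DMA-API debug warnings above
complain about.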
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
This undoes the effects of phy_start in fec_enet_open.
Reported-by: Lothar Waßmann <LW@KARO-electronics.de>
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Moreover, stop listing all i.MX platforms featuring a FEC, and instead
use the platform's config symbol that selects registration of a FEC
device. This might make it easier to add new platforms.
Set default = y for ARM platforms that have a FEC to reduce defconfig sizes.
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Saving it first into struct net_device->base_addr (which is an unsigned
long) is pointless and only requires more casts than necessary.
Reported-by: Lothar Waßmann <LW@KARO-electronics.de>
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
alloc_etherdev internally uses kzalloc, so the private data is already
zeroed out.
Reported-by: Lothar Waßmann <LW@KARO-electronics.de>
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
dev_set_drvdata is called unconditionally in the probe function, so the
driver data cannot be NULL.
Reported-by: Lothar Waßmann <LW@KARO-electronics.de>
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
memcpy takes a const void * as its 2nd argument, so any pointer
argument is converted to it automatically anyhow.
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Note that we do not generate the redirect netevent any longer,
because we don't create a new cached route.
Instead, once the new neighbour is bound to the cached route,
we emit a neigh update event.
Signed-off-by: David S. Miller <davem@davemloft.net>
The general idea is that if we learn new PMTU information, we
bump the peer genid.
This triggers the dst_ops->check() code to validate and if
necessary propagate the new PMTU value into the metrics.
Learned PMTU information self-expires.
This means that it is not necessary to kill a cached route
entry just because the PMTU information is too old.
As a consequence:
1) When the path appears unreachable (dst_ops->link_failure
or dst_ops->negative_advice) we unwind the PMTU state if
it is out of date, instead of killing the cached route.
A redirected route will still be invalidated in these
situations.
2) rt_check_expire(), rt_worker_func(), et al. are no longer
necessary at all.
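A generic sketch of the generation-counter idea (illustrative types and
names only, not the kernel's actual peer or route structures):

    struct peer_example {
            atomic_t        genid;
            unsigned int    pmtu;           /* self-expires in the real code */
    };

    struct route_example {
            unsigned int    peer_genid;     /* generation last synced with */
            unsigned int    mtu_metric;
    };

    static void peer_learned_pmtu(struct peer_example *peer, unsigned int pmtu)
    {
            peer->pmtu = pmtu;
            atomic_inc(&peer->genid);       /* tell cached routes to resync */
    }

    /* Called from the dst_ops->check() path: propagate new PMTU into the
     * route metrics instead of invalidating the cached route. */
    static void route_check_example(struct route_example *rt,
                                    struct peer_example *peer)
    {
            unsigned int genid = (unsigned int)atomic_read(&peer->genid);

            if (rt->peer_genid != genid) {
                    rt->mtu_metric = peer->pmtu;
                    rt->peer_genid = genid;
            }
    }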
Signed-off-by: David S. Miller <davem@davemloft.net>
Platform code can now set the MICREL_PHY_50MHZ_CLK bit of dev_flags in a fixup
routine (registered with phy_register_fixup_for_uid()) to make the KSZ8051RNL
PHY work with a 50MHz RMII reference clock.
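A hedged example of such a platform fixup (the PHY ID and mask values
below are illustrative only; the fixup and flag usage follow what is
described above):

    static int ksz8051_phy_fixup(struct phy_device *phydev)
    {
            /* The PHY is clocked from a 50MHz RMII reference clock
             * rather than a 25MHz crystal. */
            phydev->dev_flags |= MICREL_PHY_50MHZ_CLK;
            return 0;
    }

    static void __init foo_board_phy_init(void)
    {
            /* 0x00221550 / 0x00fffff0: illustrative KSZ8051 ID and mask */
            phy_register_fixup_for_uid(0x00221550, 0x00fffff0,
                                       ksz8051_phy_fixup);
    }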
Cc: David J. Choi <david.choi@micrel.com>
Signed-off-by: Baruch Siach <baruch@tkos.co.il>
Signed-off-by: David S. Miller <davem@davemloft.net>
With the previous patch, the rose_get_neigh() routine
walks the full list of neighbor nodes until it finds
an already connected node (or none), whether
it is called locally or through a level 3 transit frame.
If no route is open through an adjacent connected node,
a classical connect request is attempted.
There is therefore no longer any reason for an extra loop such
as the one removed by this patch.
Signed-off-by: Bernard Pidoux <f6bvp@free.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
The FPAC AX.25 packet application uses the Linux kernel's ROSE
routing facilities to connect or send packets to remote stations,
known by their ROSE address, via a network of interconnected nodes.
Each FPAC node has a ROSE routing table that the Linux ROSE module
consults each time a ROSE frame is relayed by the node or
a connect request to a neighbor node is received.
A previous patch improved the system's response time by looking at
already established routes each time the system was looking for a
route to relay a frame. If a neighbor node routing the destination
address was already connected, then the frame would be sent
through it. If not, a connect request would be issued.
The present patch extends the same routing capability to a connect
request issued by a user locally connected to an FPAC node.
Without this patch, a connect request was not handled properly unless it
was directed to an immediately connected neighbor of the local node.
Deployed at a number of ROSE FPAC node stations, the present patch has
dramatically improved FPAC ROSE routing response time and efficiency.
Signed-off-by: Bernard Pidoux <f6bvp@free.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
Commit 0c838ff1ad (ipv4: Consolidate all default route selection
implementations.) forgot to remove one rcu_read_unlock() from
fib_select_default().
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
All the cleanup code in mqprio_destroy() is currently conditional on
priv->qdiscs being non-null, but that condition should only apply to
the per-queue qdisc cleanup. We should always set the number of
traffic classes back to 0 here.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
The Bosch C_CAN controller is a full-CAN implementation compliant with
CAN protocol version 2.0 parts A and B. The Bosch C_CAN user manual can
be obtained from:
http://www.semiconductors.bosch.de/media/en/pdf/ipmodules_1/c_can/users_manual_c_can.pdf
This patch adds support for this controller.
The following are the design choices made while writing the controller
driver:
1. Only the interface register set IF1 is used in the current design.
2. Out of the 32 message objects available, 16 are set aside for RX
and the rest for TX (see the sketch after this list).
3. The NAPI implementation is such that both the TX and RX paths
function in polling mode.
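Illustrative constants for the RX/TX message-object split in item 2
(names are hypothetical, not necessarily the driver's own):

    #define C_CAN_MSG_OBJ_TOTAL     32
    #define C_CAN_MSG_OBJ_RX_NUM    16      /* first half receives */
    #define C_CAN_MSG_OBJ_TX_NUM    (C_CAN_MSG_OBJ_TOTAL - C_CAN_MSG_OBJ_RX_NUM)

    #define C_CAN_MSG_OBJ_RX_FIRST  1
    #define C_CAN_MSG_OBJ_RX_LAST   (C_CAN_MSG_OBJ_RX_FIRST + C_CAN_MSG_OBJ_RX_NUM - 1)
    #define C_CAN_MSG_OBJ_TX_FIRST  (C_CAN_MSG_OBJ_RX_LAST + 1)
    #define C_CAN_MSG_OBJ_TX_LAST   (C_CAN_MSG_OBJ_TX_FIRST + C_CAN_MSG_OBJ_TX_NUM - 1)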
Signed-off-by: Bhupesh Sharma <bhupesh.sharma@st.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Some JMicron chips treat a UDP checksum of 0 as a checksum error,
when it should mean "no checksum needed".
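As a generic, hedged sketch of the workaround (not the driver's actual
code): the RX path can stop flagging a hardware checksum error when the
UDP checksum field is 0, since RFC 768 defines a zero checksum as
"transmitter generated no checksum":

    /* Decide whether a hardware "bad UDP checksum" indication is real. */
    static bool udp_csum_really_bad(const struct udphdr *uh, bool hw_csum_bad)
    {
            if (hw_csum_bad && uh->check == 0)
                    return false;   /* nothing to verify, not an error */
            return hw_csum_bad;
    }

The driver would then treat such frames as CHECKSUM_NONE rather than
counting a checksum error.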
Reported-by: Adam Swift <Adam.Swift@omnitude.net>
Confirmed-by: "Aries Lee" <arieslee@jmicron.com>
Signed-off-by: Guo-Fu Tseng <cooldavid@cooldavid.org>
Signed-off-by: David S. Miller <davem@davemloft.net>