Merge tag 'net-6.12-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Paolo Abeni:
 "Including fixes from netfilter, xfrm and bluetooth.

  Oddly this includes a fix for a posix clock regression; in our
  previous PR we included a change there as a pre-requisite for a
  networking one. That fix proved to be buggy and requires the
  follow-up included here. Thomas suggested we should send it, given
  we sent the buggy patch.

  Current release - regressions:

   - posix-clock: Fix unbalanced locking in pc_clock_settime()

   - netfilter: fix typo causing some targets not to load on IPv6

  Current release - new code bugs:

   - xfrm: policy: remove last remnants of pernet inexact list

  Previous releases - regressions:

   - core: fix races in netdev_tx_sent_queue()/dev_watchdog()

   - bluetooth: fix UAF on sco_sock_timeout

   - eth: hv_netvsc: fix VF namespace also in synthetic NIC
     NETDEV_REGISTER event

   - eth: usbnet: fix name regression

   - eth: be2net: fix potential memory leak in be_xmit()

   - eth: plip: fix transmit path breakage

  Previous releases - always broken:

   - sched: deny mismatched skip_sw/skip_hw flags for actions created
     by classifiers

   - netfilter: bpf: must hold reference on net namespace

   - eth: virtio_net: fix integer overflow in stats

   - eth: bnxt_en: replace ptp_lock with irqsave variant

   - eth: octeon_ep: add SKB allocation failures handling in
     __octep_oq_process_rx()

  Misc:

   - MAINTAINERS: add Simon as an official reviewer"

Signed-off-by: Paolo Abeni <pabeni@redhat.com>

* tag 'net-6.12-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (40 commits)
  net: dsa: mv88e6xxx: support 4000ps cycle counter period
  net: dsa: mv88e6xxx: read cycle counter period from hardware
  net: dsa: mv88e6xxx: group cycle counter coefficients
  net: usb: qmi_wwan: add Fibocom FG132 0x0112 composition
  hv_netvsc: Fix VF namespace also in synthetic NIC NETDEV_REGISTER event
  net: dsa: microchip: disable EEE for KSZ879x/KSZ877x/KSZ876x
  Bluetooth: ISO: Fix UAF on iso_sock_timeout
  Bluetooth: SCO: Fix UAF on sco_sock_timeout
  Bluetooth: hci_core: Disable works on hci_unregister_dev
  posix-clock: posix-clock: Fix unbalanced locking in pc_clock_settime()
  r8169: avoid unsolicited interrupts
  net: sched: use RCU read-side critical section in taprio_dump()
  net: sched: fix use-after-free in taprio_change()
  net/sched: act_api: deny mismatched skip_sw/skip_hw flags for actions created by classifiers
  net: usb: usbnet: fix name regression
  mlxsw: spectrum_router: fix xa_store() error checking
  virtio_net: fix integer overflow in stats
  net: fix races in netdev_tx_sent_queue()/dev_watchdog()
  net: wwan: fix global oob in wwan_rtnl_policy
  netfilter: xtables: fix typo causing some targets not to load on IPv6
  ...
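Editorial aside, not part of the pull request: several bnxt_en hunks below convert ptp_lock from the _bh spinlock primitives to the irqsave variants. A minimal sketch of the general failure mode such a conversion guards against (illustrative code only, not bnxt_en's, and not necessarily the driver's exact motivation): spin_lock_bh() only disables bottom halves, so if the same lock is also taken from hard-IRQ context on the same CPU, the interrupt can preempt the _bh holder and spin forever; saving and restoring IRQ flags closes that window.

	#include <linux/spinlock.h>
	#include <linux/interrupt.h>
	#include <linux/types.h>

	static DEFINE_SPINLOCK(demo_lock);
	static u64 demo_cycles;

	/* Process/softirq context: block hard IRQs while holding the lock,
	 * because demo_irq_handler() below takes the same lock.
	 */
	static u64 demo_read(void)
	{
		unsigned long flags;
		u64 val;

		spin_lock_irqsave(&demo_lock, flags);	/* was spin_lock_bh() */
		val = demo_cycles;
		spin_unlock_irqrestore(&demo_lock, flags);

		return val;
	}

	/* Hard-IRQ context: would deadlock against a holder that had only
	 * disabled bottom halves on this CPU.
	 */
	static irqreturn_t demo_irq_handler(int irq, void *data)
	{
		spin_lock(&demo_lock);
		demo_cycles++;
		spin_unlock(&demo_lock);

		return IRQ_HANDLED;
	}
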
commit d44cd82264
@@ -306,6 +306,11 @@ Jens Axboe <axboe@kernel.dk> <axboe@fb.com>
Jens Axboe <axboe@kernel.dk> <axboe@meta.com>
Jens Osterkamp <Jens.Osterkamp@de.ibm.com>
Jernej Skrabec <jernej.skrabec@gmail.com> <jernej.skrabec@siol.net>
Jesper Dangaard Brouer <hawk@kernel.org> <brouer@redhat.com>
Jesper Dangaard Brouer <hawk@kernel.org> <hawk@comx.dk>
Jesper Dangaard Brouer <hawk@kernel.org> <jbrouer@redhat.com>
Jesper Dangaard Brouer <hawk@kernel.org> <jdb@comx.dk>
Jesper Dangaard Brouer <hawk@kernel.org> <netoptimizer@brouer.com>
Jessica Zhang <quic_jesszhan@quicinc.com> <jesszhan@codeaurora.org>
Jilai Wang <quic_jilaiw@quicinc.com> <jilaiw@codeaurora.org>
Jiri Kosina <jikos@kernel.org> <jikos@jikos.cz>
@@ -16042,6 +16042,7 @@ M:	"David S. Miller" <davem@davemloft.net>
M:	Eric Dumazet <edumazet@google.com>
M:	Jakub Kicinski <kuba@kernel.org>
M:	Paolo Abeni <pabeni@redhat.com>
R:	Simon Horman <horms@kernel.org>
L:	netdev@vger.kernel.org
S:	Maintained
P:	Documentation/process/maintainer-netdev.rst
@@ -16084,6 +16085,7 @@ F:	include/uapi/linux/rtnetlink.h
F:	lib/net_utils.c
F:	lib/random32.c
F:	net/
F:	samples/pktgen/
F:	tools/net/
F:	tools/testing/selftests/net/
X:	Documentation/networking/mac80211-injection.rst
@@ -2733,26 +2733,27 @@ static u32 ksz_get_phy_flags(struct dsa_switch *ds, int port)
			return MICREL_KSZ8_P1_ERRATA;
		break;
	case KSZ8567_CHIP_ID:
		/* KSZ8567R Errata DS80000752C Module 4 */
	case KSZ8765_CHIP_ID:
	case KSZ8794_CHIP_ID:
	case KSZ8795_CHIP_ID:
		/* KSZ879x/KSZ877x/KSZ876x Errata DS80000687C Module 2 */
	case KSZ9477_CHIP_ID:
		/* KSZ9477S Errata DS80000754A Module 4 */
	case KSZ9567_CHIP_ID:
		/* KSZ9567S Errata DS80000756A Module 4 */
	case KSZ9896_CHIP_ID:
		/* KSZ9896C Errata DS80000757A Module 3 */
	case KSZ9897_CHIP_ID:
		/* KSZ9477 Errata DS80000754C
		 *
		 * Module 4: Energy Efficient Ethernet (EEE) feature select must
		 * be manually disabled
		/* KSZ9897R Errata DS80000758C Module 4 */
		/* Energy Efficient Ethernet (EEE) feature select must be manually disabled
		 * The EEE feature is enabled by default, but it is not fully
		 * operational. It must be manually disabled through register
		 * controls. If not disabled, the PHY ports can auto-negotiate
		 * to enable EEE, and this feature can cause link drops when
		 * linked to another device supporting EEE.
		 *
		 * The same item appears in the errata for the KSZ9567, KSZ9896,
		 * and KSZ9897.
		 *
		 * A similar item appears in the errata for the KSZ8567, but
		 * provides an alternative workaround. For now, use the simple
		 * workaround of disabling the EEE feature for this device too.
		 * The same item appears in the errata for all switches above.
		 */
		return MICREL_NO_EEE;
	}
@@ -206,6 +206,7 @@ struct mv88e6xxx_gpio_ops;
struct mv88e6xxx_avb_ops;
struct mv88e6xxx_ptp_ops;
struct mv88e6xxx_pcs_ops;
struct mv88e6xxx_cc_coeffs;

struct mv88e6xxx_irq {
	u16 masked;
@@ -408,6 +409,7 @@ struct mv88e6xxx_chip {
	struct cyclecounter tstamp_cc;
	struct timecounter tstamp_tc;
	struct delayed_work overflow_work;
	const struct mv88e6xxx_cc_coeffs *cc_coeffs;

	struct ptp_clock *ptp_clock;
	struct ptp_clock_info ptp_clock_info;
@@ -731,10 +733,6 @@ struct mv88e6xxx_ptp_ops {
	int arr1_sts_reg;
	int dep_sts_reg;
	u32 rx_filters;
	u32 cc_shift;
	u32 cc_mult;
	u32 cc_mult_num;
	u32 cc_mult_dem;
};

struct mv88e6xxx_pcs_ops {
@@ -1713,6 +1713,7 @@ int mv88e6393x_port_set_policy(struct mv88e6xxx_chip *chip, int port,
	ptr = shift / 8;
	shift %= 8;
	mask >>= ptr * 8;
	ptr <<= 8;

	err = mv88e6393x_port_policy_read(chip, port, ptr, &reg);
	if (err)
@@ -18,6 +18,13 @@

#define MV88E6XXX_MAX_ADJ_PPB	1000000

struct mv88e6xxx_cc_coeffs {
	u32 cc_shift;
	u32 cc_mult;
	u32 cc_mult_num;
	u32 cc_mult_dem;
};

/* Family MV88E6250:
 * Raw timestamps are in units of 10-ns clock periods.
 *
@@ -25,22 +32,43 @@
 * simplifies to
 * clkadj = scaled_ppm * 2^7 / 5^5
 */
#define MV88E6250_CC_SHIFT	28
#define MV88E6250_CC_MULT	(10 << MV88E6250_CC_SHIFT)
#define MV88E6250_CC_MULT_NUM	(1 << 7)
#define MV88E6250_CC_MULT_DEM	3125ULL
#define MV88E6XXX_CC_10NS_SHIFT 28
static const struct mv88e6xxx_cc_coeffs mv88e6xxx_cc_10ns_coeffs = {
	.cc_shift = MV88E6XXX_CC_10NS_SHIFT,
	.cc_mult = 10 << MV88E6XXX_CC_10NS_SHIFT,
	.cc_mult_num = 1 << 7,
	.cc_mult_dem = 3125ULL,
};

/* Other families:
/* Other families except MV88E6393X in internal clock mode:
 * Raw timestamps are in units of 8-ns clock periods.
 *
 * clkadj = scaled_ppm * 8*2^28 / (10^6 * 2^16)
 * simplifies to
 * clkadj = scaled_ppm * 2^9 / 5^6
 */
#define MV88E6XXX_CC_SHIFT	28
#define MV88E6XXX_CC_MULT	(8 << MV88E6XXX_CC_SHIFT)
#define MV88E6XXX_CC_MULT_NUM	(1 << 9)
#define MV88E6XXX_CC_MULT_DEM	15625ULL
#define MV88E6XXX_CC_8NS_SHIFT 28
static const struct mv88e6xxx_cc_coeffs mv88e6xxx_cc_8ns_coeffs = {
	.cc_shift = MV88E6XXX_CC_8NS_SHIFT,
	.cc_mult = 8 << MV88E6XXX_CC_8NS_SHIFT,
	.cc_mult_num = 1 << 9,
	.cc_mult_dem = 15625ULL
};

/* Family MV88E6393X using internal clock:
 * Raw timestamps are in units of 4-ns clock periods.
 *
 * clkadj = scaled_ppm * 4*2^28 / (10^6 * 2^16)
 * simplifies to
 * clkadj = scaled_ppm * 2^8 / 5^6
 */
#define MV88E6XXX_CC_4NS_SHIFT 28
static const struct mv88e6xxx_cc_coeffs mv88e6xxx_cc_4ns_coeffs = {
	.cc_shift = MV88E6XXX_CC_4NS_SHIFT,
	.cc_mult = 4 << MV88E6XXX_CC_4NS_SHIFT,
	.cc_mult_num = 1 << 8,
	.cc_mult_dem = 15625ULL
};

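Editorial aside, not patch content: the three simplified clkadj formulas in the comments above can be checked by factoring the common denominator, 10^6 * 2^16 = 2^6 * 5^6 * 2^16 = 2^22 * 5^6:

	10 ns periods: 10 * 2^28 / (2^22 * 5^6) = (2 * 5) * 2^28 / (2^22 * 5^6) = 2^7 / 5^5   (5^5 = 3125  -> cc_mult_dem)
	 8 ns periods:  8 * 2^28 / (2^22 * 5^6) = 2^3 * 2^28 / (2^22 * 5^6)     = 2^9 / 5^6   (5^6 = 15625 -> cc_mult_dem)
	 4 ns periods:  4 * 2^28 / (2^22 * 5^6) = 2^2 * 2^28 / (2^22 * 5^6)     = 2^8 / 5^6

which matches the cc_mult_num/cc_mult_dem pairs in the three coefficient tables.
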
#define TAI_EVENT_WORK_INTERVAL msecs_to_jiffies(100)
|
||||
|
||||
@ -83,6 +111,33 @@ static int mv88e6352_set_gpio_func(struct mv88e6xxx_chip *chip, int pin,
|
||||
return chip->info->ops->gpio_ops->set_pctl(chip, pin, func);
|
||||
}
|
||||
|
||||
static const struct mv88e6xxx_cc_coeffs *
|
||||
mv88e6xxx_cc_coeff_get(struct mv88e6xxx_chip *chip)
|
||||
{
|
||||
u16 period_ps;
|
||||
int err;
|
||||
|
||||
err = mv88e6xxx_tai_read(chip, MV88E6XXX_TAI_CLOCK_PERIOD, &period_ps, 1);
|
||||
if (err) {
|
||||
dev_err(chip->dev, "failed to read cycle counter period: %d\n",
|
||||
err);
|
||||
return ERR_PTR(err);
|
||||
}
|
||||
|
||||
switch (period_ps) {
|
||||
case 4000:
|
||||
return &mv88e6xxx_cc_4ns_coeffs;
|
||||
case 8000:
|
||||
return &mv88e6xxx_cc_8ns_coeffs;
|
||||
case 10000:
|
||||
return &mv88e6xxx_cc_10ns_coeffs;
|
||||
default:
|
||||
dev_err(chip->dev, "unexpected cycle counter period of %u ps\n",
|
||||
period_ps);
|
||||
return ERR_PTR(-ENODEV);
|
||||
}
|
||||
}
|
||||
|
||||
static u64 mv88e6352_ptp_clock_read(const struct cyclecounter *cc)
|
||||
{
|
||||
struct mv88e6xxx_chip *chip = cc_to_chip(cc);
|
||||
@ -204,7 +259,6 @@ out:
|
||||
static int mv88e6xxx_ptp_adjfine(struct ptp_clock_info *ptp, long scaled_ppm)
|
||||
{
|
||||
struct mv88e6xxx_chip *chip = ptp_to_chip(ptp);
|
||||
const struct mv88e6xxx_ptp_ops *ptp_ops = chip->info->ops->ptp_ops;
|
||||
int neg_adj = 0;
|
||||
u32 diff, mult;
|
||||
u64 adj;
|
||||
@ -214,10 +268,10 @@ static int mv88e6xxx_ptp_adjfine(struct ptp_clock_info *ptp, long scaled_ppm)
|
||||
scaled_ppm = -scaled_ppm;
|
||||
}
|
||||
|
||||
mult = ptp_ops->cc_mult;
|
||||
adj = ptp_ops->cc_mult_num;
|
||||
mult = chip->cc_coeffs->cc_mult;
|
||||
adj = chip->cc_coeffs->cc_mult_num;
|
||||
adj *= scaled_ppm;
|
||||
diff = div_u64(adj, ptp_ops->cc_mult_dem);
|
||||
diff = div_u64(adj, chip->cc_coeffs->cc_mult_dem);
|
||||
|
||||
mv88e6xxx_reg_lock(chip);
|
||||
|
||||
@ -364,10 +418,6 @@ const struct mv88e6xxx_ptp_ops mv88e6165_ptp_ops = {
|
||||
(1 << HWTSTAMP_FILTER_PTP_V2_EVENT) |
|
||||
(1 << HWTSTAMP_FILTER_PTP_V2_SYNC) |
|
||||
(1 << HWTSTAMP_FILTER_PTP_V2_DELAY_REQ),
|
||||
.cc_shift = MV88E6XXX_CC_SHIFT,
|
||||
.cc_mult = MV88E6XXX_CC_MULT,
|
||||
.cc_mult_num = MV88E6XXX_CC_MULT_NUM,
|
||||
.cc_mult_dem = MV88E6XXX_CC_MULT_DEM,
|
||||
};
|
||||
|
||||
const struct mv88e6xxx_ptp_ops mv88e6250_ptp_ops = {
|
||||
@ -391,10 +441,6 @@ const struct mv88e6xxx_ptp_ops mv88e6250_ptp_ops = {
|
||||
(1 << HWTSTAMP_FILTER_PTP_V2_EVENT) |
|
||||
(1 << HWTSTAMP_FILTER_PTP_V2_SYNC) |
|
||||
(1 << HWTSTAMP_FILTER_PTP_V2_DELAY_REQ),
|
||||
.cc_shift = MV88E6250_CC_SHIFT,
|
||||
.cc_mult = MV88E6250_CC_MULT,
|
||||
.cc_mult_num = MV88E6250_CC_MULT_NUM,
|
||||
.cc_mult_dem = MV88E6250_CC_MULT_DEM,
|
||||
};
|
||||
|
||||
const struct mv88e6xxx_ptp_ops mv88e6352_ptp_ops = {
|
||||
@ -418,10 +464,6 @@ const struct mv88e6xxx_ptp_ops mv88e6352_ptp_ops = {
|
||||
(1 << HWTSTAMP_FILTER_PTP_V2_EVENT) |
|
||||
(1 << HWTSTAMP_FILTER_PTP_V2_SYNC) |
|
||||
(1 << HWTSTAMP_FILTER_PTP_V2_DELAY_REQ),
|
||||
.cc_shift = MV88E6XXX_CC_SHIFT,
|
||||
.cc_mult = MV88E6XXX_CC_MULT,
|
||||
.cc_mult_num = MV88E6XXX_CC_MULT_NUM,
|
||||
.cc_mult_dem = MV88E6XXX_CC_MULT_DEM,
|
||||
};
|
||||
|
||||
const struct mv88e6xxx_ptp_ops mv88e6390_ptp_ops = {
|
||||
@ -446,10 +488,6 @@ const struct mv88e6xxx_ptp_ops mv88e6390_ptp_ops = {
|
||||
(1 << HWTSTAMP_FILTER_PTP_V2_EVENT) |
|
||||
(1 << HWTSTAMP_FILTER_PTP_V2_SYNC) |
|
||||
(1 << HWTSTAMP_FILTER_PTP_V2_DELAY_REQ),
|
||||
.cc_shift = MV88E6XXX_CC_SHIFT,
|
||||
.cc_mult = MV88E6XXX_CC_MULT,
|
||||
.cc_mult_num = MV88E6XXX_CC_MULT_NUM,
|
||||
.cc_mult_dem = MV88E6XXX_CC_MULT_DEM,
|
||||
};
|
||||
|
||||
static u64 mv88e6xxx_ptp_clock_read(const struct cyclecounter *cc)
|
||||
@ -462,10 +500,10 @@ static u64 mv88e6xxx_ptp_clock_read(const struct cyclecounter *cc)
|
||||
return 0;
|
||||
}
|
||||
|
||||
/* With a 125MHz input clock, the 32-bit timestamp counter overflows in ~34.3
/* With a 250MHz input clock, the 32-bit timestamp counter overflows in ~17.2
 * seconds; this task forces periodic reads so that we don't miss any.
 */
#define MV88E6XXX_TAI_OVERFLOW_PERIOD	(HZ * 16)
#define MV88E6XXX_TAI_OVERFLOW_PERIOD	(HZ * 8)
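Editorial aside, not patch content: the halved polling period follows from the counter width and the fastest supported clock. A 32-bit counter wraps after 2^32 ticks:

	2^32 ticks * 8 ns/tick (125 MHz clock) = 4294967296 * 8 ns ≈ 34.36 s
	2^32 ticks * 4 ns/tick (250 MHz clock) = 4294967296 * 4 ns ≈ 17.18 s

so reading the timecounter every HZ * 8 (every 8 seconds) still guarantees at least one read per wrap, even in the new 4 ns / 250 MHz case.
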
static void mv88e6xxx_ptp_overflow_check(struct work_struct *work)
|
||||
{
|
||||
struct delayed_work *dw = to_delayed_work(work);
|
||||
@ -484,11 +522,15 @@ int mv88e6xxx_ptp_setup(struct mv88e6xxx_chip *chip)
|
||||
int i;
|
||||
|
||||
/* Set up the cycle counter */
|
||||
chip->cc_coeffs = mv88e6xxx_cc_coeff_get(chip);
|
||||
if (IS_ERR(chip->cc_coeffs))
|
||||
return PTR_ERR(chip->cc_coeffs);
|
||||
|
||||
memset(&chip->tstamp_cc, 0, sizeof(chip->tstamp_cc));
|
||||
chip->tstamp_cc.read = mv88e6xxx_ptp_clock_read;
|
||||
chip->tstamp_cc.mask = CYCLECOUNTER_MASK(32);
|
||||
chip->tstamp_cc.mult = ptp_ops->cc_mult;
|
||||
chip->tstamp_cc.shift = ptp_ops->cc_shift;
|
||||
chip->tstamp_cc.mult = chip->cc_coeffs->cc_mult;
|
||||
chip->tstamp_cc.shift = chip->cc_coeffs->cc_shift;
|
||||
|
||||
timecounter_init(&chip->tstamp_tc, &chip->tstamp_cc,
|
||||
ktime_to_ns(ktime_get_real()));
|
||||
|
@ -2254,10 +2254,11 @@ static int bnxt_rx_pkt(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,
|
||||
|
||||
if (!bnxt_get_rx_ts_p5(bp, &ts, cmpl_ts)) {
|
||||
struct bnxt_ptp_cfg *ptp = bp->ptp_cfg;
|
||||
unsigned long flags;
|
||||
|
||||
spin_lock_bh(&ptp->ptp_lock);
|
||||
spin_lock_irqsave(&ptp->ptp_lock, flags);
|
||||
ns = timecounter_cyc2time(&ptp->tc, ts);
|
||||
spin_unlock_bh(&ptp->ptp_lock);
|
||||
spin_unlock_irqrestore(&ptp->ptp_lock, flags);
|
||||
memset(skb_hwtstamps(skb), 0,
|
||||
sizeof(*skb_hwtstamps(skb)));
|
||||
skb_hwtstamps(skb)->hwtstamp = ns_to_ktime(ns);
|
||||
@ -2757,17 +2758,18 @@ static int bnxt_async_event_process(struct bnxt *bp,
|
||||
case ASYNC_EVENT_CMPL_PHC_UPDATE_EVENT_DATA1_FLAGS_PHC_RTC_UPDATE:
|
||||
if (BNXT_PTP_USE_RTC(bp)) {
|
||||
struct bnxt_ptp_cfg *ptp = bp->ptp_cfg;
|
||||
unsigned long flags;
|
||||
u64 ns;
|
||||
|
||||
if (!ptp)
|
||||
goto async_event_process_exit;
|
||||
|
||||
spin_lock_bh(&ptp->ptp_lock);
|
||||
spin_lock_irqsave(&ptp->ptp_lock, flags);
|
||||
bnxt_ptp_update_current_time(bp);
|
||||
ns = (((u64)BNXT_EVENT_PHC_RTC_UPDATE(data1) <<
|
||||
BNXT_PHC_BITS) | ptp->current_time);
|
||||
bnxt_ptp_rtc_timecounter_init(ptp, ns);
|
||||
spin_unlock_bh(&ptp->ptp_lock);
|
||||
spin_unlock_irqrestore(&ptp->ptp_lock, flags);
|
||||
}
|
||||
break;
|
||||
}
|
||||
@ -13494,9 +13496,11 @@ static void bnxt_force_fw_reset(struct bnxt *bp)
|
||||
return;
|
||||
|
||||
if (ptp) {
|
||||
spin_lock_bh(&ptp->ptp_lock);
|
||||
unsigned long flags;
|
||||
|
||||
spin_lock_irqsave(&ptp->ptp_lock, flags);
|
||||
set_bit(BNXT_STATE_IN_FW_RESET, &bp->state);
|
||||
spin_unlock_bh(&ptp->ptp_lock);
|
||||
spin_unlock_irqrestore(&ptp->ptp_lock, flags);
|
||||
} else {
|
||||
set_bit(BNXT_STATE_IN_FW_RESET, &bp->state);
|
||||
}
|
||||
@ -13561,9 +13565,11 @@ void bnxt_fw_reset(struct bnxt *bp)
|
||||
int n = 0, tmo;
|
||||
|
||||
if (ptp) {
|
||||
spin_lock_bh(&ptp->ptp_lock);
|
||||
unsigned long flags;
|
||||
|
||||
spin_lock_irqsave(&ptp->ptp_lock, flags);
|
||||
set_bit(BNXT_STATE_IN_FW_RESET, &bp->state);
|
||||
spin_unlock_bh(&ptp->ptp_lock);
|
||||
spin_unlock_irqrestore(&ptp->ptp_lock, flags);
|
||||
} else {
|
||||
set_bit(BNXT_STATE_IN_FW_RESET, &bp->state);
|
||||
}
|
||||
|
@ -62,13 +62,14 @@ static int bnxt_ptp_settime(struct ptp_clock_info *ptp_info,
|
||||
struct bnxt_ptp_cfg *ptp = container_of(ptp_info, struct bnxt_ptp_cfg,
|
||||
ptp_info);
|
||||
u64 ns = timespec64_to_ns(ts);
|
||||
unsigned long flags;
|
||||
|
||||
if (BNXT_PTP_USE_RTC(ptp->bp))
|
||||
return bnxt_ptp_cfg_settime(ptp->bp, ns);
|
||||
|
||||
spin_lock_bh(&ptp->ptp_lock);
|
||||
spin_lock_irqsave(&ptp->ptp_lock, flags);
|
||||
timecounter_init(&ptp->tc, &ptp->cc, ns);
|
||||
spin_unlock_bh(&ptp->ptp_lock);
|
||||
spin_unlock_irqrestore(&ptp->ptp_lock, flags);
|
||||
return 0;
|
||||
}
|
||||
|
||||
@ -100,13 +101,14 @@ static int bnxt_refclk_read(struct bnxt *bp, struct ptp_system_timestamp *sts,
|
||||
static void bnxt_ptp_get_current_time(struct bnxt *bp)
|
||||
{
|
||||
struct bnxt_ptp_cfg *ptp = bp->ptp_cfg;
|
||||
unsigned long flags;
|
||||
|
||||
if (!ptp)
|
||||
return;
|
||||
spin_lock_bh(&ptp->ptp_lock);
|
||||
spin_lock_irqsave(&ptp->ptp_lock, flags);
|
||||
WRITE_ONCE(ptp->old_time, ptp->current_time);
|
||||
bnxt_refclk_read(bp, NULL, &ptp->current_time);
|
||||
spin_unlock_bh(&ptp->ptp_lock);
|
||||
spin_unlock_irqrestore(&ptp->ptp_lock, flags);
|
||||
}
|
||||
|
||||
static int bnxt_hwrm_port_ts_query(struct bnxt *bp, u32 flags, u64 *ts,
|
||||
@ -149,17 +151,18 @@ static int bnxt_ptp_gettimex(struct ptp_clock_info *ptp_info,
|
||||
{
|
||||
struct bnxt_ptp_cfg *ptp = container_of(ptp_info, struct bnxt_ptp_cfg,
|
||||
ptp_info);
|
||||
unsigned long flags;
|
||||
u64 ns, cycles;
|
||||
int rc;
|
||||
|
||||
spin_lock_bh(&ptp->ptp_lock);
|
||||
spin_lock_irqsave(&ptp->ptp_lock, flags);
|
||||
rc = bnxt_refclk_read(ptp->bp, sts, &cycles);
|
||||
if (rc) {
|
||||
spin_unlock_bh(&ptp->ptp_lock);
|
||||
spin_unlock_irqrestore(&ptp->ptp_lock, flags);
|
||||
return rc;
|
||||
}
|
||||
ns = timecounter_cyc2time(&ptp->tc, cycles);
|
||||
spin_unlock_bh(&ptp->ptp_lock);
|
||||
spin_unlock_irqrestore(&ptp->ptp_lock, flags);
|
||||
*ts = ns_to_timespec64(ns);
|
||||
|
||||
return 0;
|
||||
@ -177,6 +180,7 @@ void bnxt_ptp_update_current_time(struct bnxt *bp)
|
||||
static int bnxt_ptp_adjphc(struct bnxt_ptp_cfg *ptp, s64 delta)
|
||||
{
|
||||
struct hwrm_port_mac_cfg_input *req;
|
||||
unsigned long flags;
|
||||
int rc;
|
||||
|
||||
rc = hwrm_req_init(ptp->bp, req, HWRM_PORT_MAC_CFG);
|
||||
@ -190,9 +194,9 @@ static int bnxt_ptp_adjphc(struct bnxt_ptp_cfg *ptp, s64 delta)
|
||||
if (rc) {
|
||||
netdev_err(ptp->bp->dev, "ptp adjphc failed. rc = %x\n", rc);
|
||||
} else {
|
||||
spin_lock_bh(&ptp->ptp_lock);
|
||||
spin_lock_irqsave(&ptp->ptp_lock, flags);
|
||||
bnxt_ptp_update_current_time(ptp->bp);
|
||||
spin_unlock_bh(&ptp->ptp_lock);
|
||||
spin_unlock_irqrestore(&ptp->ptp_lock, flags);
|
||||
}
|
||||
|
||||
return rc;
|
||||
@ -202,13 +206,14 @@ static int bnxt_ptp_adjtime(struct ptp_clock_info *ptp_info, s64 delta)
|
||||
{
|
||||
struct bnxt_ptp_cfg *ptp = container_of(ptp_info, struct bnxt_ptp_cfg,
|
||||
ptp_info);
|
||||
unsigned long flags;
|
||||
|
||||
if (BNXT_PTP_USE_RTC(ptp->bp))
|
||||
return bnxt_ptp_adjphc(ptp, delta);
|
||||
|
||||
spin_lock_bh(&ptp->ptp_lock);
|
||||
spin_lock_irqsave(&ptp->ptp_lock, flags);
|
||||
timecounter_adjtime(&ptp->tc, delta);
|
||||
spin_unlock_bh(&ptp->ptp_lock);
|
||||
spin_unlock_irqrestore(&ptp->ptp_lock, flags);
|
||||
return 0;
|
||||
}
|
||||
|
||||
@ -236,14 +241,15 @@ static int bnxt_ptp_adjfine(struct ptp_clock_info *ptp_info, long scaled_ppm)
|
||||
struct bnxt_ptp_cfg *ptp = container_of(ptp_info, struct bnxt_ptp_cfg,
|
||||
ptp_info);
|
||||
struct bnxt *bp = ptp->bp;
|
||||
unsigned long flags;
|
||||
|
||||
if (!BNXT_MH(bp))
|
||||
return bnxt_ptp_adjfine_rtc(bp, scaled_ppm);
|
||||
|
||||
spin_lock_bh(&ptp->ptp_lock);
|
||||
spin_lock_irqsave(&ptp->ptp_lock, flags);
|
||||
timecounter_read(&ptp->tc);
|
||||
ptp->cc.mult = adjust_by_scaled_ppm(ptp->cmult, scaled_ppm);
|
||||
spin_unlock_bh(&ptp->ptp_lock);
|
||||
spin_unlock_irqrestore(&ptp->ptp_lock, flags);
|
||||
return 0;
|
||||
}
|
||||
|
||||
@ -251,12 +257,13 @@ void bnxt_ptp_pps_event(struct bnxt *bp, u32 data1, u32 data2)
|
||||
{
|
||||
struct bnxt_ptp_cfg *ptp = bp->ptp_cfg;
|
||||
struct ptp_clock_event event;
|
||||
unsigned long flags;
|
||||
u64 ns, pps_ts;
|
||||
|
||||
pps_ts = EVENT_PPS_TS(data2, data1);
|
||||
spin_lock_bh(&ptp->ptp_lock);
|
||||
spin_lock_irqsave(&ptp->ptp_lock, flags);
|
||||
ns = timecounter_cyc2time(&ptp->tc, pps_ts);
|
||||
spin_unlock_bh(&ptp->ptp_lock);
|
||||
spin_unlock_irqrestore(&ptp->ptp_lock, flags);
|
||||
|
||||
switch (EVENT_DATA2_PPS_EVENT_TYPE(data2)) {
|
||||
case ASYNC_EVENT_CMPL_PPS_TIMESTAMP_EVENT_DATA2_EVENT_TYPE_INTERNAL:
|
||||
@ -393,16 +400,17 @@ static int bnxt_get_target_cycles(struct bnxt_ptp_cfg *ptp, u64 target_ns,
|
||||
{
|
||||
u64 cycles_now;
|
||||
u64 nsec_now, nsec_delta;
|
||||
unsigned long flags;
|
||||
int rc;
|
||||
|
||||
spin_lock_bh(&ptp->ptp_lock);
|
||||
spin_lock_irqsave(&ptp->ptp_lock, flags);
|
||||
rc = bnxt_refclk_read(ptp->bp, NULL, &cycles_now);
|
||||
if (rc) {
|
||||
spin_unlock_bh(&ptp->ptp_lock);
|
||||
spin_unlock_irqrestore(&ptp->ptp_lock, flags);
|
||||
return rc;
|
||||
}
|
||||
nsec_now = timecounter_cyc2time(&ptp->tc, cycles_now);
|
||||
spin_unlock_bh(&ptp->ptp_lock);
|
||||
spin_unlock_irqrestore(&ptp->ptp_lock, flags);
|
||||
|
||||
nsec_delta = target_ns - nsec_now;
|
||||
*cycles_delta = div64_u64(nsec_delta << ptp->cc.shift, ptp->cc.mult);
|
||||
@ -689,6 +697,7 @@ static int bnxt_stamp_tx_skb(struct bnxt *bp, int slot)
|
||||
struct skb_shared_hwtstamps timestamp;
|
||||
struct bnxt_ptp_tx_req *txts_req;
|
||||
unsigned long now = jiffies;
|
||||
unsigned long flags;
|
||||
u64 ts = 0, ns = 0;
|
||||
u32 tmo = 0;
|
||||
int rc;
|
||||
@ -702,9 +711,9 @@ static int bnxt_stamp_tx_skb(struct bnxt *bp, int slot)
|
||||
tmo, slot);
|
||||
if (!rc) {
|
||||
memset(&timestamp, 0, sizeof(timestamp));
|
||||
spin_lock_bh(&ptp->ptp_lock);
|
||||
spin_lock_irqsave(&ptp->ptp_lock, flags);
|
||||
ns = timecounter_cyc2time(&ptp->tc, ts);
|
||||
spin_unlock_bh(&ptp->ptp_lock);
|
||||
spin_unlock_irqrestore(&ptp->ptp_lock, flags);
|
||||
timestamp.hwtstamp = ns_to_ktime(ns);
|
||||
skb_tstamp_tx(txts_req->tx_skb, &timestamp);
|
||||
ptp->stats.ts_pkts++;
|
||||
@ -730,6 +739,7 @@ static long bnxt_ptp_ts_aux_work(struct ptp_clock_info *ptp_info)
|
||||
unsigned long now = jiffies;
|
||||
struct bnxt *bp = ptp->bp;
|
||||
u16 cons = ptp->txts_cons;
|
||||
unsigned long flags;
|
||||
u32 num_requests;
|
||||
int rc = 0;
|
||||
|
||||
@ -757,9 +767,9 @@ next_slot:
|
||||
bnxt_ptp_get_current_time(bp);
|
||||
ptp->next_period = now + HZ;
|
||||
if (time_after_eq(now, ptp->next_overflow_check)) {
|
||||
spin_lock_bh(&ptp->ptp_lock);
|
||||
spin_lock_irqsave(&ptp->ptp_lock, flags);
|
||||
timecounter_read(&ptp->tc);
|
||||
spin_unlock_bh(&ptp->ptp_lock);
|
||||
spin_unlock_irqrestore(&ptp->ptp_lock, flags);
|
||||
ptp->next_overflow_check = now + BNXT_PHC_OVERFLOW_PERIOD;
|
||||
}
|
||||
if (rc == -EAGAIN)
|
||||
@ -819,6 +829,7 @@ void bnxt_tx_ts_cmp(struct bnxt *bp, struct bnxt_napi *bnapi,
|
||||
u32 opaque = tscmp->tx_ts_cmp_opaque;
|
||||
struct bnxt_tx_ring_info *txr;
|
||||
struct bnxt_sw_tx_bd *tx_buf;
|
||||
unsigned long flags;
|
||||
u64 ts, ns;
|
||||
u16 cons;
|
||||
|
||||
@ -833,9 +844,9 @@ void bnxt_tx_ts_cmp(struct bnxt *bp, struct bnxt_napi *bnapi,
|
||||
le32_to_cpu(tscmp->tx_ts_cmp_flags_type),
|
||||
le32_to_cpu(tscmp->tx_ts_cmp_errors_v));
|
||||
} else {
|
||||
spin_lock_bh(&ptp->ptp_lock);
|
||||
spin_lock_irqsave(&ptp->ptp_lock, flags);
|
||||
ns = timecounter_cyc2time(&ptp->tc, ts);
|
||||
spin_unlock_bh(&ptp->ptp_lock);
|
||||
spin_unlock_irqrestore(&ptp->ptp_lock, flags);
|
||||
timestamp.hwtstamp = ns_to_ktime(ns);
|
||||
skb_tstamp_tx(tx_buf->skb, &timestamp);
|
||||
}
|
||||
@ -975,6 +986,7 @@ void bnxt_ptp_rtc_timecounter_init(struct bnxt_ptp_cfg *ptp, u64 ns)
|
||||
int bnxt_ptp_init_rtc(struct bnxt *bp, bool phc_cfg)
|
||||
{
|
||||
struct timespec64 tsp;
|
||||
unsigned long flags;
|
||||
u64 ns;
|
||||
int rc;
|
||||
|
||||
@ -993,9 +1005,9 @@ int bnxt_ptp_init_rtc(struct bnxt *bp, bool phc_cfg)
|
||||
if (rc)
|
||||
return rc;
|
||||
}
|
||||
spin_lock_bh(&bp->ptp_cfg->ptp_lock);
|
||||
spin_lock_irqsave(&bp->ptp_cfg->ptp_lock, flags);
|
||||
bnxt_ptp_rtc_timecounter_init(bp->ptp_cfg, ns);
|
||||
spin_unlock_bh(&bp->ptp_cfg->ptp_lock);
|
||||
spin_unlock_irqrestore(&bp->ptp_cfg->ptp_lock, flags);
|
||||
|
||||
return 0;
|
||||
}
|
||||
@ -1063,10 +1075,12 @@ int bnxt_ptp_init(struct bnxt *bp, bool phc_cfg)
|
||||
atomic64_set(&ptp->stats.ts_err, 0);
|
||||
|
||||
if (bp->flags & BNXT_FLAG_CHIP_P5_PLUS) {
|
||||
spin_lock_bh(&ptp->ptp_lock);
|
||||
unsigned long flags;
|
||||
|
||||
spin_lock_irqsave(&ptp->ptp_lock, flags);
|
||||
bnxt_refclk_read(bp, NULL, &ptp->current_time);
|
||||
WRITE_ONCE(ptp->old_time, ptp->current_time);
|
||||
spin_unlock_bh(&ptp->ptp_lock);
|
||||
spin_unlock_irqrestore(&ptp->ptp_lock, flags);
|
||||
ptp_schedule_worker(ptp->ptp_clock, 0);
|
||||
}
|
||||
ptp->txts_tmo = BNXT_PTP_DFLT_TX_TMO;
|
||||
|
@ -146,11 +146,13 @@ struct bnxt_ptp_cfg {
|
||||
};
|
||||
|
||||
#if BITS_PER_LONG == 32
|
||||
#define BNXT_READ_TIME64(ptp, dst, src) \
|
||||
do { \
|
||||
spin_lock_bh(&(ptp)->ptp_lock); \
|
||||
(dst) = (src); \
|
||||
spin_unlock_bh(&(ptp)->ptp_lock); \
|
||||
#define BNXT_READ_TIME64(ptp, dst, src) \
|
||||
do { \
|
||||
unsigned long flags; \
|
||||
\
|
||||
spin_lock_irqsave(&(ptp)->ptp_lock, flags); \
|
||||
(dst) = (src); \
|
||||
spin_unlock_irqrestore(&(ptp)->ptp_lock, flags); \
|
||||
} while (0)
|
||||
#else
|
||||
#define BNXT_READ_TIME64(ptp, dst, src) \
|
||||
|
@ -1381,10 +1381,8 @@ static netdev_tx_t be_xmit(struct sk_buff *skb, struct net_device *netdev)
|
||||
be_get_wrb_params_from_skb(adapter, skb, &wrb_params);
|
||||
|
||||
wrb_cnt = be_xmit_enqueue(adapter, txo, skb, &wrb_params);
|
||||
if (unlikely(!wrb_cnt)) {
|
||||
dev_kfree_skb_any(skb);
|
||||
goto drop;
|
||||
}
|
||||
if (unlikely(!wrb_cnt))
|
||||
goto drop_skb;
|
||||
|
||||
/* if os2bmc is enabled and if the pkt is destined to bmc,
|
||||
* enqueue the pkt a 2nd time with mgmt bit set.
|
||||
@ -1393,7 +1391,7 @@ static netdev_tx_t be_xmit(struct sk_buff *skb, struct net_device *netdev)
|
||||
BE_WRB_F_SET(wrb_params.features, OS2BMC, 1);
|
||||
wrb_cnt = be_xmit_enqueue(adapter, txo, skb, &wrb_params);
|
||||
if (unlikely(!wrb_cnt))
|
||||
goto drop;
|
||||
goto drop_skb;
|
||||
else
|
||||
skb_get(skb);
|
||||
}
|
||||
@ -1407,6 +1405,8 @@ static netdev_tx_t be_xmit(struct sk_buff *skb, struct net_device *netdev)
|
||||
be_xmit_flush(adapter, txo);
|
||||
|
||||
return NETDEV_TX_OK;
|
||||
drop_skb:
|
||||
dev_kfree_skb_any(skb);
|
||||
drop:
|
||||
tx_stats(txo)->tx_drv_drops++;
|
||||
/* Flush the already enqueued tx requests */
|
||||
|
@ -197,55 +197,67 @@ static int mac_probe(struct platform_device *_of_dev)
|
||||
err = -EINVAL;
|
||||
goto _return_of_node_put;
|
||||
}
|
||||
mac_dev->fman_dev = &of_dev->dev;
|
||||
|
||||
/* Get the FMan cell-index */
|
||||
err = of_property_read_u32(dev_node, "cell-index", &val);
|
||||
if (err) {
|
||||
dev_err(dev, "failed to read cell-index for %pOF\n", dev_node);
|
||||
err = -EINVAL;
|
||||
goto _return_of_node_put;
|
||||
goto _return_dev_put;
|
||||
}
|
||||
/* cell-index 0 => FMan id 1 */
|
||||
fman_id = (u8)(val + 1);
|
||||
|
||||
priv->fman = fman_bind(&of_dev->dev);
|
||||
priv->fman = fman_bind(mac_dev->fman_dev);
|
||||
if (!priv->fman) {
|
||||
dev_err(dev, "fman_bind(%pOF) failed\n", dev_node);
|
||||
err = -ENODEV;
|
||||
goto _return_of_node_put;
|
||||
goto _return_dev_put;
|
||||
}
|
||||
|
||||
/* Two references have been taken in of_find_device_by_node()
|
||||
* and fman_bind(). Release one of them here. The second one
|
||||
* will be released in mac_remove().
|
||||
*/
|
||||
put_device(mac_dev->fman_dev);
|
||||
of_node_put(dev_node);
|
||||
dev_node = NULL;
|
||||
|
||||
/* Get the address of the memory mapped registers */
|
||||
mac_dev->res = platform_get_mem_or_io(_of_dev, 0);
|
||||
if (!mac_dev->res) {
|
||||
dev_err(dev, "could not get registers\n");
|
||||
return -EINVAL;
|
||||
err = -EINVAL;
|
||||
goto _return_dev_put;
|
||||
}
|
||||
|
||||
err = devm_request_resource(dev, fman_get_mem_region(priv->fman),
|
||||
mac_dev->res);
|
||||
if (err) {
|
||||
dev_err_probe(dev, err, "could not request resource\n");
|
||||
return err;
|
||||
goto _return_dev_put;
|
||||
}
|
||||
|
||||
mac_dev->vaddr = devm_ioremap(dev, mac_dev->res->start,
|
||||
resource_size(mac_dev->res));
|
||||
if (!mac_dev->vaddr) {
|
||||
dev_err(dev, "devm_ioremap() failed\n");
|
||||
return -EIO;
|
||||
err = -EIO;
|
||||
goto _return_dev_put;
|
||||
}
|
||||
|
||||
if (!of_device_is_available(mac_node))
|
||||
return -ENODEV;
|
||||
if (!of_device_is_available(mac_node)) {
|
||||
err = -ENODEV;
|
||||
goto _return_dev_put;
|
||||
}
|
||||
|
||||
/* Get the cell-index */
|
||||
err = of_property_read_u32(mac_node, "cell-index", &val);
|
||||
if (err) {
|
||||
dev_err(dev, "failed to read cell-index for %pOF\n", mac_node);
|
||||
return -EINVAL;
|
||||
err = -EINVAL;
|
||||
goto _return_dev_put;
|
||||
}
|
||||
priv->cell_index = (u8)val;
|
||||
|
||||
@ -259,22 +271,26 @@ static int mac_probe(struct platform_device *_of_dev)
|
||||
if (unlikely(nph < 0)) {
|
||||
dev_err(dev, "of_count_phandle_with_args(%pOF, fsl,fman-ports) failed\n",
|
||||
mac_node);
|
||||
return nph;
|
||||
err = nph;
|
||||
goto _return_dev_put;
|
||||
}
|
||||
|
||||
if (nph != ARRAY_SIZE(mac_dev->port)) {
|
||||
dev_err(dev, "Not supported number of fman-ports handles of mac node %pOF from device tree\n",
|
||||
mac_node);
|
||||
return -EINVAL;
|
||||
err = -EINVAL;
|
||||
goto _return_dev_put;
|
||||
}
|
||||
|
||||
for (i = 0; i < ARRAY_SIZE(mac_dev->port); i++) {
|
||||
/* PORT_NUM determines the size of the port array */
|
||||
for (i = 0; i < PORT_NUM; i++) {
|
||||
/* Find the port node */
|
||||
dev_node = of_parse_phandle(mac_node, "fsl,fman-ports", i);
|
||||
if (!dev_node) {
|
||||
dev_err(dev, "of_parse_phandle(%pOF, fsl,fman-ports) failed\n",
|
||||
mac_node);
|
||||
return -EINVAL;
|
||||
err = -EINVAL;
|
||||
goto _return_dev_arr_put;
|
||||
}
|
||||
|
||||
of_dev = of_find_device_by_node(dev_node);
|
||||
@ -282,17 +298,24 @@ static int mac_probe(struct platform_device *_of_dev)
|
||||
dev_err(dev, "of_find_device_by_node(%pOF) failed\n",
|
||||
dev_node);
|
||||
err = -EINVAL;
|
||||
goto _return_of_node_put;
|
||||
goto _return_dev_arr_put;
|
||||
}
|
||||
mac_dev->fman_port_devs[i] = &of_dev->dev;
|
||||
|
||||
mac_dev->port[i] = fman_port_bind(&of_dev->dev);
|
||||
mac_dev->port[i] = fman_port_bind(mac_dev->fman_port_devs[i]);
|
||||
if (!mac_dev->port[i]) {
|
||||
dev_err(dev, "dev_get_drvdata(%pOF) failed\n",
|
||||
dev_node);
|
||||
err = -EINVAL;
|
||||
goto _return_of_node_put;
|
||||
goto _return_dev_arr_put;
|
||||
}
|
||||
/* Two references have been taken in of_find_device_by_node()
|
||||
* and fman_port_bind(). Release one of them here. The second
|
||||
* one will be released in mac_remove().
|
||||
*/
|
||||
put_device(mac_dev->fman_port_devs[i]);
|
||||
of_node_put(dev_node);
|
||||
dev_node = NULL;
|
||||
}
|
||||
|
||||
/* Get the PHY connection type */
|
||||
@ -312,7 +335,7 @@ static int mac_probe(struct platform_device *_of_dev)
|
||||
|
||||
err = init(mac_dev, mac_node, &params);
|
||||
if (err < 0)
|
||||
return err;
|
||||
goto _return_dev_arr_put;
|
||||
|
||||
if (!is_zero_ether_addr(mac_dev->addr))
|
||||
dev_info(dev, "FMan MAC address: %pM\n", mac_dev->addr);
|
||||
@ -327,6 +350,12 @@ static int mac_probe(struct platform_device *_of_dev)
|
||||
|
||||
return err;
|
||||
|
||||
_return_dev_arr_put:
|
||||
/* mac_dev is kzalloc'ed */
|
||||
for (i = 0; i < PORT_NUM; i++)
|
||||
put_device(mac_dev->fman_port_devs[i]);
|
||||
_return_dev_put:
|
||||
put_device(mac_dev->fman_dev);
|
||||
_return_of_node_put:
|
||||
of_node_put(dev_node);
|
||||
return err;
|
||||
@ -335,6 +364,11 @@ _return_of_node_put:
|
||||
static void mac_remove(struct platform_device *pdev)
|
||||
{
|
||||
struct mac_device *mac_dev = platform_get_drvdata(pdev);
|
||||
int i;
|
||||
|
||||
for (i = 0; i < PORT_NUM; i++)
|
||||
put_device(mac_dev->fman_port_devs[i]);
|
||||
put_device(mac_dev->fman_dev);
|
||||
|
||||
platform_device_unregister(mac_dev->priv->eth_dev);
|
||||
}
|
||||
|
@ -19,12 +19,13 @@
|
||||
struct fman_mac;
|
||||
struct mac_priv_s;
|
||||
|
||||
#define PORT_NUM 2
|
||||
struct mac_device {
|
||||
void __iomem *vaddr;
|
||||
struct device *dev;
|
||||
struct resource *res;
|
||||
u8 addr[ETH_ALEN];
|
||||
struct fman_port *port[2];
|
||||
struct fman_port *port[PORT_NUM];
|
||||
struct phylink *phylink;
|
||||
struct phylink_config phylink_config;
|
||||
phy_interface_t phy_if;
|
||||
@ -52,6 +53,9 @@ struct mac_device {
|
||||
|
||||
struct fman_mac *fman_mac;
|
||||
struct mac_priv_s *priv;
|
||||
|
||||
struct device *fman_dev;
|
||||
struct device *fman_port_devs[PORT_NUM];
|
||||
};
|
||||
|
||||
static inline struct mac_device
|
||||
|
@ -1012,6 +1012,7 @@ sun3_82586_send_packet(struct sk_buff *skb, struct net_device *dev)
|
||||
if(skb->len > XMIT_BUFF_SIZE)
|
||||
{
|
||||
printk("%s: Sorry, max. framelength is %d bytes. The length of your frame is %d bytes.\n",dev->name,XMIT_BUFF_SIZE,skb->len);
|
||||
dev_kfree_skb(skb);
|
||||
return NETDEV_TX_OK;
|
||||
}
|
||||
|
||||
|
@ -336,6 +336,51 @@ static int octep_oq_check_hw_for_pkts(struct octep_device *oct,
|
||||
return new_pkts;
|
||||
}
|
||||
|
||||
/**
|
||||
* octep_oq_next_pkt() - Move to the next packet in Rx queue.
|
||||
*
|
||||
* @oq: Octeon Rx queue data structure.
|
||||
* @buff_info: Current packet buffer info.
|
||||
* @read_idx: Current packet index in the ring.
|
||||
* @desc_used: Current packet descriptor number.
|
||||
*
|
||||
* Free the resources associated with a packet.
|
||||
* Increment packet index in the ring and packet descriptor number.
|
||||
*/
|
||||
static void octep_oq_next_pkt(struct octep_oq *oq,
|
||||
struct octep_rx_buffer *buff_info,
|
||||
u32 *read_idx, u32 *desc_used)
|
||||
{
|
||||
dma_unmap_page(oq->dev, oq->desc_ring[*read_idx].buffer_ptr,
|
||||
PAGE_SIZE, DMA_FROM_DEVICE);
|
||||
buff_info->page = NULL;
|
||||
(*read_idx)++;
|
||||
(*desc_used)++;
|
||||
if (*read_idx == oq->max_count)
|
||||
*read_idx = 0;
|
||||
}
|
||||
|
||||
/**
|
||||
* octep_oq_drop_rx() - Free the resources associated with a packet.
|
||||
*
|
||||
* @oq: Octeon Rx queue data structure.
|
||||
* @buff_info: Current packet buffer info.
|
||||
* @read_idx: Current packet index in the ring.
|
||||
* @desc_used: Current packet descriptor number.
|
||||
*
|
||||
*/
|
||||
static void octep_oq_drop_rx(struct octep_oq *oq,
|
||||
struct octep_rx_buffer *buff_info,
|
||||
u32 *read_idx, u32 *desc_used)
|
||||
{
|
||||
int data_len = buff_info->len - oq->max_single_buffer_size;
|
||||
|
||||
while (data_len > 0) {
|
||||
octep_oq_next_pkt(oq, buff_info, read_idx, desc_used);
|
||||
data_len -= oq->buffer_size;
|
||||
};
|
||||
}
|
||||
|
||||
/**
|
||||
* __octep_oq_process_rx() - Process hardware Rx queue and push to stack.
|
||||
*
|
||||
@ -367,10 +412,7 @@ static int __octep_oq_process_rx(struct octep_device *oct,
|
||||
desc_used = 0;
|
||||
for (pkt = 0; pkt < pkts_to_process; pkt++) {
|
||||
buff_info = (struct octep_rx_buffer *)&oq->buff_info[read_idx];
|
||||
dma_unmap_page(oq->dev, oq->desc_ring[read_idx].buffer_ptr,
|
||||
PAGE_SIZE, DMA_FROM_DEVICE);
|
||||
resp_hw = page_address(buff_info->page);
|
||||
buff_info->page = NULL;
|
||||
|
||||
/* Swap the length field that is in Big-Endian to CPU */
|
||||
buff_info->len = be64_to_cpu(resp_hw->length);
|
||||
@ -394,36 +436,33 @@ static int __octep_oq_process_rx(struct octep_device *oct,
|
||||
data_offset = OCTEP_OQ_RESP_HW_SIZE;
|
||||
rx_ol_flags = 0;
|
||||
}
|
||||
|
||||
octep_oq_next_pkt(oq, buff_info, &read_idx, &desc_used);
|
||||
|
||||
skb = build_skb((void *)resp_hw, PAGE_SIZE);
|
||||
if (!skb) {
|
||||
octep_oq_drop_rx(oq, buff_info,
|
||||
&read_idx, &desc_used);
|
||||
oq->stats.alloc_failures++;
|
||||
continue;
|
||||
}
|
||||
skb_reserve(skb, data_offset);
|
||||
|
||||
rx_bytes += buff_info->len;
|
||||
|
||||
if (buff_info->len <= oq->max_single_buffer_size) {
|
||||
skb = build_skb((void *)resp_hw, PAGE_SIZE);
|
||||
skb_reserve(skb, data_offset);
|
||||
skb_put(skb, buff_info->len);
|
||||
read_idx++;
|
||||
desc_used++;
|
||||
if (read_idx == oq->max_count)
|
||||
read_idx = 0;
|
||||
} else {
|
||||
struct skb_shared_info *shinfo;
|
||||
u16 data_len;
|
||||
|
||||
skb = build_skb((void *)resp_hw, PAGE_SIZE);
|
||||
skb_reserve(skb, data_offset);
|
||||
/* Head fragment includes response header(s);
|
||||
* subsequent fragments contains only data.
|
||||
*/
|
||||
skb_put(skb, oq->max_single_buffer_size);
|
||||
read_idx++;
|
||||
desc_used++;
|
||||
if (read_idx == oq->max_count)
|
||||
read_idx = 0;
|
||||
|
||||
shinfo = skb_shinfo(skb);
|
||||
data_len = buff_info->len - oq->max_single_buffer_size;
|
||||
while (data_len) {
|
||||
dma_unmap_page(oq->dev, oq->desc_ring[read_idx].buffer_ptr,
|
||||
PAGE_SIZE, DMA_FROM_DEVICE);
|
||||
buff_info = (struct octep_rx_buffer *)
|
||||
&oq->buff_info[read_idx];
|
||||
if (data_len < oq->buffer_size) {
|
||||
@ -438,11 +477,8 @@ static int __octep_oq_process_rx(struct octep_device *oct,
|
||||
buff_info->page, 0,
|
||||
buff_info->len,
|
||||
buff_info->len);
|
||||
buff_info->page = NULL;
|
||||
read_idx++;
|
||||
desc_used++;
|
||||
if (read_idx == oq->max_count)
|
||||
read_idx = 0;
|
||||
|
||||
octep_oq_next_pkt(oq, buff_info, &read_idx, &desc_used);
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -3197,7 +3197,6 @@ mlxsw_sp_nexthop_sh_counter_get(struct mlxsw_sp *mlxsw_sp,
|
||||
{
|
||||
struct mlxsw_sp_nexthop_group *nh_grp = nh->nhgi->nh_grp;
|
||||
struct mlxsw_sp_nexthop_counter *nhct;
|
||||
void *ptr;
|
||||
int err;
|
||||
|
||||
nhct = xa_load(&nh_grp->nhgi->nexthop_counters, nh->id);
|
||||
@ -3210,12 +3209,10 @@ mlxsw_sp_nexthop_sh_counter_get(struct mlxsw_sp *mlxsw_sp,
|
||||
if (IS_ERR(nhct))
|
||||
return nhct;
|
||||
|
||||
ptr = xa_store(&nh_grp->nhgi->nexthop_counters, nh->id, nhct,
|
||||
GFP_KERNEL);
|
||||
if (IS_ERR(ptr)) {
|
||||
err = PTR_ERR(ptr);
|
||||
err = xa_err(xa_store(&nh_grp->nhgi->nexthop_counters, nh->id, nhct,
|
||||
GFP_KERNEL));
|
||||
if (err)
|
||||
goto err_store;
|
||||
}
|
||||
|
||||
return nhct;
|
||||
|
||||
|
@ -4682,7 +4682,9 @@ static irqreturn_t rtl8169_interrupt(int irq, void *dev_instance)
|
||||
if ((status & 0xffff) == 0xffff || !(status & tp->irq_mask))
|
||||
return IRQ_NONE;
|
||||
|
||||
if (unlikely(status & SYSErr)) {
|
||||
/* At least RTL8168fp may unexpectedly set the SYSErr bit */
|
||||
if (unlikely(status & SYSErr &&
|
||||
tp->mac_version <= RTL_GIGA_MAC_VER_06)) {
|
||||
rtl8169_pcierr_interrupt(tp->dev);
|
||||
goto out;
|
||||
}
|
||||
|
@ -2798,6 +2798,31 @@ static struct hv_driver netvsc_drv = {
|
||||
},
|
||||
};
|
||||
|
||||
/* Set VF's namespace same as the synthetic NIC */
|
||||
static void netvsc_event_set_vf_ns(struct net_device *ndev)
|
||||
{
|
||||
struct net_device_context *ndev_ctx = netdev_priv(ndev);
|
||||
struct net_device *vf_netdev;
|
||||
int ret;
|
||||
|
||||
vf_netdev = rtnl_dereference(ndev_ctx->vf_netdev);
|
||||
if (!vf_netdev)
|
||||
return;
|
||||
|
||||
if (!net_eq(dev_net(ndev), dev_net(vf_netdev))) {
|
||||
ret = dev_change_net_namespace(vf_netdev, dev_net(ndev),
|
||||
"eth%d");
|
||||
if (ret)
|
||||
netdev_err(vf_netdev,
|
||||
"Cannot move to same namespace as %s: %d\n",
|
||||
ndev->name, ret);
|
||||
else
|
||||
netdev_info(vf_netdev,
|
||||
"Moved VF to namespace with: %s\n",
|
||||
ndev->name);
|
||||
}
|
||||
}
|
||||
|
||||
/*
|
||||
* On Hyper-V, every VF interface is matched with a corresponding
|
||||
* synthetic interface. The synthetic interface is presented first
|
||||
@ -2810,6 +2835,11 @@ static int netvsc_netdev_event(struct notifier_block *this,
|
||||
struct net_device *event_dev = netdev_notifier_info_to_dev(ptr);
|
||||
int ret = 0;
|
||||
|
||||
if (event_dev->netdev_ops == &device_ops && event == NETDEV_REGISTER) {
|
||||
netvsc_event_set_vf_ns(event_dev);
|
||||
return NOTIFY_DONE;
|
||||
}
|
||||
|
||||
ret = check_dev_is_matching_vf(event_dev);
|
||||
if (ret != 0)
|
||||
return NOTIFY_DONE;
|
||||
|
@ -45,8 +45,8 @@
|
||||
/* Control Register 2 bits */
|
||||
#define DP83822_FX_ENABLE BIT(14)
|
||||
|
||||
#define DP83822_HW_RESET BIT(15)
|
||||
#define DP83822_SW_RESET BIT(14)
|
||||
#define DP83822_SW_RESET BIT(15)
|
||||
#define DP83822_DIG_RESTART BIT(14)
|
||||
|
||||
/* PHY STS bits */
|
||||
#define DP83822_PHYSTS_DUPLEX BIT(2)
|
||||
|
@ -815,7 +815,7 @@ plip_send_packet(struct net_device *dev, struct net_local *nl,
|
||||
return HS_TIMEOUT;
|
||||
}
|
||||
}
|
||||
break;
|
||||
fallthrough;
|
||||
|
||||
case PLIP_PK_LENGTH_LSB:
|
||||
if (plip_send(nibble_timeout, dev,
|
||||
|
@ -113,7 +113,7 @@ static void pse_release_pis(struct pse_controller_dev *pcdev)
|
||||
{
|
||||
int i;
|
||||
|
||||
for (i = 0; i <= pcdev->nr_lines; i++) {
|
||||
for (i = 0; i < pcdev->nr_lines; i++) {
|
||||
of_node_put(pcdev->pi[i].pairset[0].np);
|
||||
of_node_put(pcdev->pi[i].pairset[1].np);
|
||||
of_node_put(pcdev->pi[i].np);
|
||||
@ -647,7 +647,7 @@ static int of_pse_match_pi(struct pse_controller_dev *pcdev,
|
||||
{
|
||||
int i;
|
||||
|
||||
for (i = 0; i <= pcdev->nr_lines; i++) {
|
||||
for (i = 0; i < pcdev->nr_lines; i++) {
|
||||
if (pcdev->pi[i].np == np)
|
||||
return i;
|
||||
}
|
||||
|
@ -1426,6 +1426,7 @@ static const struct usb_device_id products[] = {
|
||||
{QMI_FIXED_INTF(0x2c7c, 0x0296, 4)}, /* Quectel BG96 */
|
||||
{QMI_QUIRK_SET_DTR(0x2c7c, 0x030e, 4)}, /* Quectel EM05GV2 */
|
||||
{QMI_QUIRK_SET_DTR(0x2cb7, 0x0104, 4)}, /* Fibocom NL678 series */
|
||||
{QMI_QUIRK_SET_DTR(0x2cb7, 0x0112, 0)}, /* Fibocom FG132 */
|
||||
{QMI_FIXED_INTF(0x0489, 0xe0b4, 0)}, /* Foxconn T77W968 LTE */
|
||||
{QMI_FIXED_INTF(0x0489, 0xe0b5, 0)}, /* Foxconn T77W968 LTE with eSIM support*/
|
||||
{QMI_FIXED_INTF(0x2692, 0x9025, 4)}, /* Cellient MPL200 (rebranded Qualcomm 05c6:9025) */
|
||||
|
@ -1767,7 +1767,8 @@ usbnet_probe (struct usb_interface *udev, const struct usb_device_id *prod)
|
||||
// can rename the link if it knows better.
|
||||
if ((dev->driver_info->flags & FLAG_ETHER) != 0 &&
|
||||
((dev->driver_info->flags & FLAG_POINTTOPOINT) == 0 ||
|
||||
(net->dev_addr [0] & 0x02) == 0))
|
||||
/* somebody touched it*/
|
||||
!is_zero_ether_addr(net->dev_addr)))
|
||||
strscpy(net->name, "eth%d", sizeof(net->name));
|
||||
/* WLAN devices should always be named "wlan%d" */
|
||||
if ((dev->driver_info->flags & FLAG_WLAN) != 0)
|
||||
|
@ -4155,7 +4155,7 @@ struct virtnet_stats_ctx {
|
||||
u32 desc_num[3];
|
||||
|
||||
/* The actual supported stat types. */
|
||||
u32 bitmap[3];
|
||||
u64 bitmap[3];
|
||||
|
||||
/* Used to calculate the reply buffer size. */
|
||||
u32 size[3];
|
||||
|
@ -1038,7 +1038,7 @@ static const struct nla_policy wwan_rtnl_policy[IFLA_WWAN_MAX + 1] = {
|
||||
|
||||
static struct rtnl_link_ops wwan_rtnl_link_ops __read_mostly = {
|
||||
.kind = "wwan",
|
||||
.maxtype = __IFLA_WWAN_MAX,
|
||||
.maxtype = IFLA_WWAN_MAX,
|
||||
.alloc = wwan_rtnl_alloc,
|
||||
.validate = wwan_rtnl_validate,
|
||||
.newlink = wwan_rtnl_newlink,
|
||||
|
@ -3325,6 +3325,12 @@ static inline void netif_tx_wake_all_queues(struct net_device *dev)
|
||||
|
||||
static __always_inline void netif_tx_stop_queue(struct netdev_queue *dev_queue)
|
||||
{
|
||||
/* Paired with READ_ONCE() from dev_watchdog() */
|
||||
WRITE_ONCE(dev_queue->trans_start, jiffies);
|
||||
|
||||
/* This barrier is paired with smp_mb() from dev_watchdog() */
|
||||
smp_mb__before_atomic();
|
||||
|
||||
/* Must be an atomic op see netif_txq_try_stop() */
|
||||
set_bit(__QUEUE_STATE_DRV_XOFF, &dev_queue->state);
|
||||
}
|
||||
@ -3451,6 +3457,12 @@ static inline void netdev_tx_sent_queue(struct netdev_queue *dev_queue,
|
||||
if (likely(dql_avail(&dev_queue->dql) >= 0))
|
||||
return;
|
||||
|
||||
/* Paired with READ_ONCE() from dev_watchdog() */
|
||||
WRITE_ONCE(dev_queue->trans_start, jiffies);
|
||||
|
||||
/* This barrier is paired with smp_mb() from dev_watchdog() */
|
||||
smp_mb__before_atomic();
|
||||
|
||||
set_bit(__QUEUE_STATE_STACK_XOFF, &dev_queue->state);
|
||||
|
||||
/*
|
||||
|
@ -403,6 +403,7 @@ int bt_sock_register(int proto, const struct net_proto_family *ops);
|
||||
void bt_sock_unregister(int proto);
|
||||
void bt_sock_link(struct bt_sock_list *l, struct sock *s);
|
||||
void bt_sock_unlink(struct bt_sock_list *l, struct sock *s);
|
||||
bool bt_sock_linked(struct bt_sock_list *l, struct sock *s);
|
||||
struct sock *bt_sock_alloc(struct net *net, struct socket *sock,
|
||||
struct proto *prot, int proto, gfp_t prio, int kern);
|
||||
int bt_sock_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
|
||||
|
@ -51,7 +51,6 @@ struct netns_xfrm {
|
||||
struct hlist_head *policy_byidx;
|
||||
unsigned int policy_idx_hmask;
|
||||
unsigned int idx_generator;
|
||||
struct hlist_head policy_inexact[XFRM_POLICY_MAX];
|
||||
struct xfrm_policy_hash policy_bydst[XFRM_POLICY_MAX];
|
||||
unsigned int policy_count[XFRM_POLICY_MAX * 2];
|
||||
struct work_struct policy_hash_work;
|
||||
|
@ -349,20 +349,25 @@ struct xfrm_if_cb {
|
||||
void xfrm_if_register_cb(const struct xfrm_if_cb *ifcb);
|
||||
void xfrm_if_unregister_cb(void);
|
||||
|
||||
struct xfrm_dst_lookup_params {
|
||||
struct net *net;
|
||||
int tos;
|
||||
int oif;
|
||||
xfrm_address_t *saddr;
|
||||
xfrm_address_t *daddr;
|
||||
u32 mark;
|
||||
__u8 ipproto;
|
||||
union flowi_uli uli;
|
||||
};
|
||||
|
||||
struct net_device;
|
||||
struct xfrm_type;
|
||||
struct xfrm_dst;
|
||||
struct xfrm_policy_afinfo {
|
||||
struct dst_ops *dst_ops;
|
||||
struct dst_entry *(*dst_lookup)(struct net *net,
|
||||
int tos, int oif,
|
||||
const xfrm_address_t *saddr,
|
||||
const xfrm_address_t *daddr,
|
||||
u32 mark);
|
||||
int (*get_saddr)(struct net *net, int oif,
|
||||
xfrm_address_t *saddr,
|
||||
xfrm_address_t *daddr,
|
||||
u32 mark);
|
||||
struct dst_entry *(*dst_lookup)(const struct xfrm_dst_lookup_params *params);
|
||||
int (*get_saddr)(xfrm_address_t *saddr,
|
||||
const struct xfrm_dst_lookup_params *params);
|
||||
int (*fill_dst)(struct xfrm_dst *xdst,
|
||||
struct net_device *dev,
|
||||
const struct flowi *fl);
|
||||
@ -1764,10 +1769,7 @@ static inline int xfrm_user_policy(struct sock *sk, int optname,
|
||||
}
|
||||
#endif
|
||||
|
||||
struct dst_entry *__xfrm_dst_lookup(struct net *net, int tos, int oif,
|
||||
const xfrm_address_t *saddr,
|
||||
const xfrm_address_t *daddr,
|
||||
int family, u32 mark);
|
||||
struct dst_entry *__xfrm_dst_lookup(int family, const struct xfrm_dst_lookup_params *params);
|
||||
|
||||
struct xfrm_policy *xfrm_policy_alloc(struct net *net, gfp_t gfp);
|
||||
|
||||
|
@ -309,6 +309,9 @@ static int pc_clock_settime(clockid_t id, const struct timespec64 *ts)
|
||||
struct posix_clock_desc cd;
|
||||
int err;
|
||||
|
||||
if (!timespec64_valid_strict(ts))
|
||||
return -EINVAL;
|
||||
|
||||
err = get_clock_desc(id, &cd);
|
||||
if (err)
|
||||
return err;
|
||||
@ -318,9 +321,6 @@ static int pc_clock_settime(clockid_t id, const struct timespec64 *ts)
|
||||
goto out;
|
||||
}
|
||||
|
||||
if (!timespec64_valid_strict(ts))
|
||||
return -EINVAL;
|
||||
|
||||
if (cd.clk->ops.clock_settime)
|
||||
err = cd.clk->ops.clock_settime(cd.clk, ts);
|
||||
else
|
||||
|
@ -185,6 +185,28 @@ void bt_sock_unlink(struct bt_sock_list *l, struct sock *sk)
|
||||
}
|
||||
EXPORT_SYMBOL(bt_sock_unlink);
|
||||
|
||||
bool bt_sock_linked(struct bt_sock_list *l, struct sock *s)
|
||||
{
|
||||
struct sock *sk;
|
||||
|
||||
if (!l || !s)
|
||||
return false;
|
||||
|
||||
read_lock(&l->lock);
|
||||
|
||||
sk_for_each(sk, &l->head) {
|
||||
if (s == sk) {
|
||||
read_unlock(&l->lock);
|
||||
return true;
|
||||
}
|
||||
}
|
||||
|
||||
read_unlock(&l->lock);
|
||||
|
||||
return false;
|
||||
}
|
||||
EXPORT_SYMBOL(bt_sock_linked);
|
||||
|
||||
void bt_accept_enqueue(struct sock *parent, struct sock *sk, bool bh)
|
||||
{
|
||||
const struct cred *old_cred;
|
||||
|
@ -1644,12 +1644,12 @@ void hci_adv_instances_clear(struct hci_dev *hdev)
|
||||
struct adv_info *adv_instance, *n;
|
||||
|
||||
if (hdev->adv_instance_timeout) {
|
||||
cancel_delayed_work(&hdev->adv_instance_expire);
|
||||
disable_delayed_work(&hdev->adv_instance_expire);
|
||||
hdev->adv_instance_timeout = 0;
|
||||
}
|
||||
|
||||
list_for_each_entry_safe(adv_instance, n, &hdev->adv_instances, list) {
|
||||
cancel_delayed_work_sync(&adv_instance->rpa_expired_cb);
|
||||
disable_delayed_work_sync(&adv_instance->rpa_expired_cb);
|
||||
list_del(&adv_instance->list);
|
||||
kfree(adv_instance);
|
||||
}
|
||||
@ -2685,11 +2685,11 @@ void hci_unregister_dev(struct hci_dev *hdev)
|
||||
list_del(&hdev->list);
|
||||
write_unlock(&hci_dev_list_lock);
|
||||
|
||||
cancel_work_sync(&hdev->rx_work);
|
||||
cancel_work_sync(&hdev->cmd_work);
|
||||
cancel_work_sync(&hdev->tx_work);
|
||||
cancel_work_sync(&hdev->power_on);
|
||||
cancel_work_sync(&hdev->error_reset);
|
||||
disable_work_sync(&hdev->rx_work);
|
||||
disable_work_sync(&hdev->cmd_work);
|
||||
disable_work_sync(&hdev->tx_work);
|
||||
disable_work_sync(&hdev->power_on);
|
||||
disable_work_sync(&hdev->error_reset);
|
||||
|
||||
hci_cmd_sync_clear(hdev);
|
||||
|
||||
@ -2796,8 +2796,14 @@ static void hci_cancel_cmd_sync(struct hci_dev *hdev, int err)
|
||||
{
|
||||
bt_dev_dbg(hdev, "err 0x%2.2x", err);
|
||||
|
||||
cancel_delayed_work_sync(&hdev->cmd_timer);
|
||||
cancel_delayed_work_sync(&hdev->ncmd_timer);
|
||||
if (hci_dev_test_flag(hdev, HCI_UNREGISTER)) {
|
||||
disable_delayed_work_sync(&hdev->cmd_timer);
|
||||
disable_delayed_work_sync(&hdev->ncmd_timer);
|
||||
} else {
|
||||
cancel_delayed_work_sync(&hdev->cmd_timer);
|
||||
cancel_delayed_work_sync(&hdev->ncmd_timer);
|
||||
}
|
||||
|
||||
atomic_set(&hdev->cmd_cnt, 1);
|
||||
|
||||
hci_cmd_sync_cancel_sync(hdev, err);
|
||||
|
@ -5131,9 +5131,15 @@ int hci_dev_close_sync(struct hci_dev *hdev)
|
||||
|
||||
bt_dev_dbg(hdev, "");
|
||||
|
||||
cancel_delayed_work(&hdev->power_off);
|
||||
cancel_delayed_work(&hdev->ncmd_timer);
|
||||
cancel_delayed_work(&hdev->le_scan_disable);
|
||||
if (hci_dev_test_flag(hdev, HCI_UNREGISTER)) {
|
||||
disable_delayed_work(&hdev->power_off);
|
||||
disable_delayed_work(&hdev->ncmd_timer);
|
||||
disable_delayed_work(&hdev->le_scan_disable);
|
||||
} else {
|
||||
cancel_delayed_work(&hdev->power_off);
|
||||
cancel_delayed_work(&hdev->ncmd_timer);
|
||||
cancel_delayed_work(&hdev->le_scan_disable);
|
||||
}
|
||||
|
||||
hci_cmd_sync_cancel_sync(hdev, ENODEV);
|
||||
|
||||
|
@ -93,6 +93,16 @@ static struct sock *iso_get_sock(bdaddr_t *src, bdaddr_t *dst,
|
||||
#define ISO_CONN_TIMEOUT (HZ * 40)
|
||||
#define ISO_DISCONN_TIMEOUT (HZ * 2)
|
||||
|
||||
static struct sock *iso_sock_hold(struct iso_conn *conn)
|
||||
{
|
||||
if (!conn || !bt_sock_linked(&iso_sk_list, conn->sk))
|
||||
return NULL;
|
||||
|
||||
sock_hold(conn->sk);
|
||||
|
||||
return conn->sk;
|
||||
}
|
||||
|
||||
static void iso_sock_timeout(struct work_struct *work)
|
||||
{
|
||||
struct iso_conn *conn = container_of(work, struct iso_conn,
|
||||
@ -100,9 +110,7 @@ static void iso_sock_timeout(struct work_struct *work)
|
||||
struct sock *sk;
|
||||
|
||||
iso_conn_lock(conn);
|
||||
sk = conn->sk;
|
||||
if (sk)
|
||||
sock_hold(sk);
|
||||
sk = iso_sock_hold(conn);
|
||||
iso_conn_unlock(conn);
|
||||
|
||||
if (!sk)
|
||||
@ -209,9 +217,7 @@ static void iso_conn_del(struct hci_conn *hcon, int err)
|
||||
|
||||
/* Kill socket */
|
||||
iso_conn_lock(conn);
|
||||
sk = conn->sk;
|
||||
if (sk)
|
||||
sock_hold(sk);
|
||||
sk = iso_sock_hold(conn);
|
||||
iso_conn_unlock(conn);
|
||||
|
||||
if (sk) {
|
||||
|
@ -76,6 +76,16 @@ struct sco_pinfo {
|
||||
#define SCO_CONN_TIMEOUT (HZ * 40)
|
||||
#define SCO_DISCONN_TIMEOUT (HZ * 2)
|
||||
|
||||
static struct sock *sco_sock_hold(struct sco_conn *conn)
|
||||
{
|
||||
if (!conn || !bt_sock_linked(&sco_sk_list, conn->sk))
|
||||
return NULL;
|
||||
|
||||
sock_hold(conn->sk);
|
||||
|
||||
return conn->sk;
|
||||
}
|
||||
|
||||
static void sco_sock_timeout(struct work_struct *work)
|
||||
{
|
||||
struct sco_conn *conn = container_of(work, struct sco_conn,
|
||||
@ -87,9 +97,7 @@ static void sco_sock_timeout(struct work_struct *work)
|
||||
sco_conn_unlock(conn);
|
||||
return;
|
||||
}
|
||||
sk = conn->sk;
|
||||
if (sk)
|
||||
sock_hold(sk);
|
||||
sk = sco_sock_hold(conn);
|
||||
sco_conn_unlock(conn);
|
||||
|
||||
if (!sk)
|
||||
@ -194,9 +202,7 @@ static void sco_conn_del(struct hci_conn *hcon, int err)
|
||||
|
||||
/* Kill socket */
|
||||
sco_conn_lock(conn);
|
||||
sk = conn->sk;
|
||||
if (sk)
|
||||
sock_hold(sk);
|
||||
sk = sco_sock_hold(conn);
|
||||
sco_conn_unlock(conn);
|
||||
|
||||
if (sk) {
|
||||
|
@@ -17,47 +17,43 @@
 #include <net/ip.h>
 #include <net/l3mdev.h>
 
-static struct dst_entry *__xfrm4_dst_lookup(struct net *net, struct flowi4 *fl4,
-                                            int tos, int oif,
-                                            const xfrm_address_t *saddr,
-                                            const xfrm_address_t *daddr,
-                                            u32 mark)
+static struct dst_entry *__xfrm4_dst_lookup(struct flowi4 *fl4,
+                                            const struct xfrm_dst_lookup_params *params)
 {
     struct rtable *rt;
 
     memset(fl4, 0, sizeof(*fl4));
-    fl4->daddr = daddr->a4;
-    fl4->flowi4_tos = tos;
-    fl4->flowi4_l3mdev = l3mdev_master_ifindex_by_index(net, oif);
-    fl4->flowi4_mark = mark;
-    if (saddr)
-        fl4->saddr = saddr->a4;
+    fl4->daddr = params->daddr->a4;
+    fl4->flowi4_tos = params->tos;
+    fl4->flowi4_l3mdev = l3mdev_master_ifindex_by_index(params->net,
+                                                        params->oif);
+    fl4->flowi4_mark = params->mark;
+    if (params->saddr)
+        fl4->saddr = params->saddr->a4;
+    fl4->flowi4_proto = params->ipproto;
+    fl4->uli = params->uli;
 
-    rt = __ip_route_output_key(net, fl4);
+    rt = __ip_route_output_key(params->net, fl4);
     if (!IS_ERR(rt))
         return &rt->dst;
 
     return ERR_CAST(rt);
 }
 
-static struct dst_entry *xfrm4_dst_lookup(struct net *net, int tos, int oif,
-                                          const xfrm_address_t *saddr,
-                                          const xfrm_address_t *daddr,
-                                          u32 mark)
+static struct dst_entry *xfrm4_dst_lookup(const struct xfrm_dst_lookup_params *params)
 {
     struct flowi4 fl4;
 
-    return __xfrm4_dst_lookup(net, &fl4, tos, oif, saddr, daddr, mark);
+    return __xfrm4_dst_lookup(&fl4, params);
 }
 
-static int xfrm4_get_saddr(struct net *net, int oif,
-                           xfrm_address_t *saddr, xfrm_address_t *daddr,
-                           u32 mark)
+static int xfrm4_get_saddr(xfrm_address_t *saddr,
+                           const struct xfrm_dst_lookup_params *params)
 {
     struct dst_entry *dst;
     struct flowi4 fl4;
 
-    dst = __xfrm4_dst_lookup(net, &fl4, 0, oif, NULL, daddr, mark);
+    dst = __xfrm4_dst_lookup(&fl4, params);
     if (IS_ERR(dst))
         return -EHOSTUNREACH;
 
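The xfrm hunks in this pull convert the per-family dst lookup callbacks from long positional argument lists to a single parameter block, so that the protocol and port information needed for policy-based routing can be passed along without changing every signature again. The struct definition itself is not part of the hunks shown here; judging from the fields the converted code reads, it is roughly the following (a sketch inferred from usage, the real definition in the xfrm headers may differ in types and ordering):

    struct xfrm_dst_lookup_params {
        struct net *net;
        int tos;
        int oif;
        xfrm_address_t *saddr;
        xfrm_address_t *daddr;
        u32 mark;
        __u8 ipproto;
        union flowi_uli uli;    /* transport ports for ESP-in-UDP/TCP */
    };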
@@ -23,23 +23,24 @@
 #include <net/ip6_route.h>
 #include <net/l3mdev.h>
 
-static struct dst_entry *xfrm6_dst_lookup(struct net *net, int tos, int oif,
-                                          const xfrm_address_t *saddr,
-                                          const xfrm_address_t *daddr,
-                                          u32 mark)
+static struct dst_entry *xfrm6_dst_lookup(const struct xfrm_dst_lookup_params *params)
 {
     struct flowi6 fl6;
     struct dst_entry *dst;
     int err;
 
     memset(&fl6, 0, sizeof(fl6));
-    fl6.flowi6_l3mdev = l3mdev_master_ifindex_by_index(net, oif);
-    fl6.flowi6_mark = mark;
-    memcpy(&fl6.daddr, daddr, sizeof(fl6.daddr));
-    if (saddr)
-        memcpy(&fl6.saddr, saddr, sizeof(fl6.saddr));
+    fl6.flowi6_l3mdev = l3mdev_master_ifindex_by_index(params->net,
+                                                       params->oif);
+    fl6.flowi6_mark = params->mark;
+    memcpy(&fl6.daddr, params->daddr, sizeof(fl6.daddr));
+    if (params->saddr)
+        memcpy(&fl6.saddr, params->saddr, sizeof(fl6.saddr));
 
-    dst = ip6_route_output(net, NULL, &fl6);
+    fl6.flowi4_proto = params->ipproto;
+    fl6.uli = params->uli;
+
+    dst = ip6_route_output(params->net, NULL, &fl6);
 
     err = dst->error;
     if (dst->error) {
@@ -50,15 +51,14 @@ static struct dst_entry *xfrm6_dst_lookup(struct net *net, int tos, int oif,
     return dst;
 }
 
-static int xfrm6_get_saddr(struct net *net, int oif,
-                           xfrm_address_t *saddr, xfrm_address_t *daddr,
-                           u32 mark)
+static int xfrm6_get_saddr(xfrm_address_t *saddr,
+                           const struct xfrm_dst_lookup_params *params)
 {
     struct dst_entry *dst;
     struct net_device *dev;
     struct inet6_dev *idev;
 
-    dst = xfrm6_dst_lookup(net, 0, oif, NULL, daddr, mark);
+    dst = xfrm6_dst_lookup(params);
     if (IS_ERR(dst))
         return -EHOSTUNREACH;
 
@@ -68,7 +68,8 @@ static int xfrm6_get_saddr(struct net *net, int oif,
         return -EHOSTUNREACH;
     }
     dev = idev->dev;
-    ipv6_dev_get_saddr(dev_net(dev), dev, &daddr->in6, 0, &saddr->in6);
+    ipv6_dev_get_saddr(dev_net(dev), dev, &params->daddr->in6, 0,
+                       &saddr->in6);
     dst_release(dst);
     return 0;
 }
@@ -23,6 +23,7 @@ static unsigned int nf_hook_run_bpf(void *bpf_prog, struct sk_buff *skb,
 struct bpf_nf_link {
     struct bpf_link link;
     struct nf_hook_ops hook_ops;
+    netns_tracker ns_tracker;
     struct net *net;
     u32 dead;
     const struct nf_defrag_hook *defrag_hook;
@@ -120,6 +121,7 @@ static void bpf_nf_link_release(struct bpf_link *link)
     if (!cmpxchg(&nf_link->dead, 0, 1)) {
         nf_unregister_net_hook(nf_link->net, &nf_link->hook_ops);
         bpf_nf_disable_defrag(nf_link);
+        put_net_track(nf_link->net, &nf_link->ns_tracker);
     }
 }
 
@@ -258,6 +260,8 @@ int bpf_nf_link_attach(const union bpf_attr *attr, struct bpf_prog *prog)
         return err;
     }
 
+    get_net_track(net, &link->ns_tracker, GFP_KERNEL);
+
     return bpf_link_settle(&link_primer);
 }
 
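The netfilter/bpf change is a plain lifetime fix: bpf_nf_link_release() unregisters the hook through nf_link->net, so the link has to pin that namespace for as long as it exists. The added lines are the standard tracked-reference pairing, condensed here from the hunks above (no new names introduced):

    /* attach: pin the netns the hook was registered in */
    get_net_track(net, &link->ns_tracker, GFP_KERNEL);

    /* release: drop the pin only after the hook is unregistered */
    nf_unregister_net_hook(nf_link->net, &nf_link->hook_ops);
    put_net_track(nf_link->net, &nf_link->ns_tracker);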
@@ -79,7 +79,7 @@ static struct xt_target nflog_tg_reg[] __read_mostly = {
     {
         .name       = "NFLOG",
         .revision   = 0,
-        .family     = NFPROTO_IPV4,
+        .family     = NFPROTO_IPV6,
         .checkentry = nflog_tg_check,
         .destroy    = nflog_tg_destroy,
         .target     = nflog_tg,
@@ -49,6 +49,7 @@ static struct xt_target trace_tg_reg[] __read_mostly = {
         .target     = trace_tg,
         .checkentry = trace_tg_check,
         .destroy    = trace_tg_destroy,
+        .me         = THIS_MODULE,
     },
 #endif
 };
@@ -62,7 +62,7 @@ static struct xt_target mark_tg_reg[] __read_mostly = {
     {
         .name           = "MARK",
         .revision       = 2,
-        .family         = NFPROTO_IPV4,
+        .family         = NFPROTO_IPV6,
         .target         = mark_tg,
         .targetsize     = sizeof(struct xt_mark_tginfo2),
         .me             = THIS_MODULE,
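These xtables hunks are the "typo causing some targets not to load on IPv6" fix from the summary: a target meant to be usable from both ip_tables and ip6_tables registers one xt_target entry per family, so a copy-pasted NFPROTO_IPV4 in the IPv6 slot means the IPv6 revision is never matched at rule-load time. A sketch of that registration shape (the EXAMPLE target, example_tg and example_tg_reg names are hypothetical, not from the affected modules):

    static struct xt_target example_tg_reg[] __read_mostly = {
        {
            .name   = "EXAMPLE",
            .family = NFPROTO_IPV4,
            .target = example_tg,
            .me     = THIS_MODULE,
        },
        {
            .name   = "EXAMPLE",
            .family = NFPROTO_IPV6,   /* must not repeat NFPROTO_IPV4 */
            .target = example_tg,
            .me     = THIS_MODULE,
        },
    };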
@@ -1498,8 +1498,29 @@ int tcf_action_init(struct net *net, struct tcf_proto *tp, struct nlattr *nla,
         bool skip_sw = tc_skip_sw(fl_flags);
         bool skip_hw = tc_skip_hw(fl_flags);
 
-        if (tc_act_bind(act->tcfa_flags))
+        if (tc_act_bind(act->tcfa_flags)) {
+            /* Action is created by classifier and is not
+             * standalone. Check that the user did not set
+             * any action flags different than the
+             * classifier flags, and inherit the flags from
+             * the classifier for the compatibility case
+             * where no flags were specified at all.
+             */
+            if ((tc_act_skip_sw(act->tcfa_flags) && !skip_sw) ||
+                (tc_act_skip_hw(act->tcfa_flags) && !skip_hw)) {
+                NL_SET_ERR_MSG(extack,
+                               "Mismatch between action and filter offload flags");
+                err = -EINVAL;
+                goto err;
+            }
+            if (skip_sw)
+                act->tcfa_flags |= TCA_ACT_FLAGS_SKIP_SW;
+            if (skip_hw)
+                act->tcfa_flags |= TCA_ACT_FLAGS_SKIP_HW;
             continue;
+        }
 
         /* Action is standalone */
         if (skip_sw != tc_act_skip_sw(act->tcfa_flags) ||
             skip_hw != tc_act_skip_hw(act->tcfa_flags)) {
             NL_SET_ERR_MSG(extack,
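The rule the new check enforces for classifier-bound actions is narrow: the action may not request an offload mode the filter did not request, and it inherits the filter's flags when it set none. Reduced to a plain predicate (check_bound_action_flags is a made-up helper for illustration, not a kernel function):

    static int check_bound_action_flags(bool act_skip_sw, bool act_skip_hw,
                                        bool filter_skip_sw, bool filter_skip_hw)
    {
        /* mismatch between action and filter offload flags */
        if ((act_skip_sw && !filter_skip_sw) ||
            (act_skip_hw && !filter_skip_hw))
            return -EINVAL;
        return 0;
    }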
@@ -512,9 +512,15 @@ static void dev_watchdog(struct timer_list *t)
             struct netdev_queue *txq;
 
             txq = netdev_get_tx_queue(dev, i);
-            trans_start = READ_ONCE(txq->trans_start);
             if (!netif_xmit_stopped(txq))
                 continue;
+
+            /* Paired with WRITE_ONCE() + smp_mb...() in
+             * netdev_tx_sent_queue() and netif_tx_stop_queue().
+             */
+            smp_mb();
+            trans_start = READ_ONCE(txq->trans_start);
+
             if (time_after(jiffies, trans_start + dev->watchdog_timeo)) {
                 timedout_ms = jiffies_to_msecs(jiffies - trans_start);
                 atomic_long_inc(&txq->trans_timeout);
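Previously the watchdog could read trans_start before observing that the queue had just been stopped, pairing a stale timestamp with a freshly stopped queue and reporting a spurious TX timeout. The fix reads the timestamp only after the stopped check, with full barriers on both sides. An illustrative model of that ordering, not the real netdev code (struct txq_model, QUEUE_STOPPED and WATCHDOG_TIMEO are placeholders):

    struct txq_model {
        unsigned long trans_start;
        unsigned long state;
    };

    static void producer_stop_queue(struct txq_model *q)
    {
        WRITE_ONCE(q->trans_start, jiffies);  /* fresh timestamp first */
        smp_mb();                             /* pairs with smp_mb() below */
        set_bit(QUEUE_STOPPED, &q->state);
    }

    static bool consumer_watchdog_timed_out(struct txq_model *q)
    {
        unsigned long trans_start;

        if (!test_bit(QUEUE_STOPPED, &q->state))
            return false;
        smp_mb();                             /* pairs with smp_mb() above */
        trans_start = READ_ONCE(q->trans_start);
        return time_after(jiffies, trans_start + WATCHDOG_TIMEO);
    }

If the watchdog sees the stopped bit, the barriers guarantee it also sees a trans_start value at least as new as the one written before the queue was stopped.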
@@ -1965,7 +1965,8 @@ static int taprio_change(struct Qdisc *sch, struct nlattr *opt,
 
     taprio_start_sched(sch, start, new_admin);
 
-    rcu_assign_pointer(q->admin_sched, new_admin);
+    admin = rcu_replace_pointer(q->admin_sched, new_admin,
+                                lockdep_rtnl_is_held());
     if (admin)
         call_rcu(&admin->rcu, taprio_free_sched_cb);
 
@@ -2373,9 +2374,6 @@ static int taprio_dump(struct Qdisc *sch, struct sk_buff *skb)
     struct tc_mqprio_qopt opt = { 0 };
     struct nlattr *nest, *sched_nest;
 
-    oper = rtnl_dereference(q->oper_sched);
-    admin = rtnl_dereference(q->admin_sched);
-
     mqprio_qopt_reconstruct(dev, &opt);
 
     nest = nla_nest_start_noflag(skb, TCA_OPTIONS);
@@ -2396,18 +2394,23 @@ static int taprio_dump(struct Qdisc *sch, struct sk_buff *skb)
         nla_put_u32(skb, TCA_TAPRIO_ATTR_TXTIME_DELAY, q->txtime_delay))
         goto options_error;
 
+    rcu_read_lock();
+
+    oper = rtnl_dereference(q->oper_sched);
+    admin = rtnl_dereference(q->admin_sched);
+
     if (oper && taprio_dump_tc_entries(skb, q, oper))
-        goto options_error;
+        goto options_error_rcu;
 
     if (oper && dump_schedule(skb, oper))
-        goto options_error;
+        goto options_error_rcu;
 
     if (!admin)
         goto done;
 
     sched_nest = nla_nest_start_noflag(skb, TCA_TAPRIO_ATTR_ADMIN_SCHED);
     if (!sched_nest)
-        goto options_error;
+        goto options_error_rcu;
 
     if (dump_schedule(skb, admin))
         goto admin_error;
@@ -2415,11 +2418,15 @@ static int taprio_dump(struct Qdisc *sch, struct sk_buff *skb)
     nla_nest_end(skb, sched_nest);
 
 done:
+    rcu_read_unlock();
     return nla_nest_end(skb, nest);
 
 admin_error:
     nla_nest_cancel(skb, sched_nest);
 
+options_error_rcu:
+    rcu_read_unlock();
+
 options_error:
     nla_nest_cancel(skb, nest);
 
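The two taprio changes are both about schedule lifetime: taprio_change() now fetches the old admin schedule at the moment of replacement via rcu_replace_pointer() instead of relying on a pointer read earlier in the function, and taprio_dump() dereferences and walks the schedules inside a single RCU read-side critical section, with the new options_error_rcu label ensuring every error path drops the read lock. The dump-side shape, reduced to a skeleton (illustrative only; sched, dump_one and err_rcu are placeholders, not the qdisc code):

    rcu_read_lock();

    sched = rcu_dereference(q->oper_sched);
    if (sched && dump_one(skb, sched))
        goto err_rcu;

    rcu_read_unlock();
    return 0;

    err_rcu:
    rcu_read_unlock();
    return -EMSGSIZE;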
@@ -269,6 +269,8 @@ int xfrm_dev_state_add(struct net *net, struct xfrm_state *x,
 
     dev = dev_get_by_index(net, xuo->ifindex);
     if (!dev) {
+        struct xfrm_dst_lookup_params params;
+
         if (!(xuo->flags & XFRM_OFFLOAD_INBOUND)) {
             saddr = &x->props.saddr;
             daddr = &x->id.daddr;
@@ -277,9 +279,12 @@ int xfrm_dev_state_add(struct net *net, struct xfrm_state *x,
             daddr = &x->props.saddr;
         }
 
-        dst = __xfrm_dst_lookup(net, 0, 0, saddr, daddr,
-                                x->props.family,
-                                xfrm_smark_get(0, x));
+        memset(&params, 0, sizeof(params));
+        params.net = net;
+        params.saddr = saddr;
+        params.daddr = daddr;
+        params.mark = xfrm_smark_get(0, x);
+        dst = __xfrm_dst_lookup(x->props.family, &params);
         if (IS_ERR(dst))
             return (is_packet_offload) ? -EINVAL : 0;
 
@@ -270,10 +270,8 @@ static const struct xfrm_if_cb *xfrm_if_get_cb(void)
     return rcu_dereference(xfrm_if_cb);
 }
 
-struct dst_entry *__xfrm_dst_lookup(struct net *net, int tos, int oif,
-                                    const xfrm_address_t *saddr,
-                                    const xfrm_address_t *daddr,
-                                    int family, u32 mark)
+struct dst_entry *__xfrm_dst_lookup(int family,
+                                    const struct xfrm_dst_lookup_params *params)
 {
     const struct xfrm_policy_afinfo *afinfo;
     struct dst_entry *dst;
@@ -282,7 +280,7 @@ struct dst_entry *__xfrm_dst_lookup(struct net *net, int tos, int oif,
     if (unlikely(afinfo == NULL))
         return ERR_PTR(-EAFNOSUPPORT);
 
-    dst = afinfo->dst_lookup(net, tos, oif, saddr, daddr, mark);
+    dst = afinfo->dst_lookup(params);
 
     rcu_read_unlock();
 
@@ -296,6 +294,7 @@ static inline struct dst_entry *xfrm_dst_lookup(struct xfrm_state *x,
                                                 xfrm_address_t *prev_daddr,
                                                 int family, u32 mark)
 {
+    struct xfrm_dst_lookup_params params;
     struct net *net = xs_net(x);
     xfrm_address_t *saddr = &x->props.saddr;
     xfrm_address_t *daddr = &x->id.daddr;
@@ -310,7 +309,29 @@ static inline struct dst_entry *xfrm_dst_lookup(struct xfrm_state *x,
         daddr = x->coaddr;
     }
 
-    dst = __xfrm_dst_lookup(net, tos, oif, saddr, daddr, family, mark);
+    params.net = net;
+    params.saddr = saddr;
+    params.daddr = daddr;
+    params.tos = tos;
+    params.oif = oif;
+    params.mark = mark;
+    params.ipproto = x->id.proto;
+    if (x->encap) {
+        switch (x->encap->encap_type) {
+        case UDP_ENCAP_ESPINUDP:
+            params.ipproto = IPPROTO_UDP;
+            params.uli.ports.sport = x->encap->encap_sport;
+            params.uli.ports.dport = x->encap->encap_dport;
+            break;
+        case TCP_ENCAP_ESPINTCP:
+            params.ipproto = IPPROTO_TCP;
+            params.uli.ports.sport = x->encap->encap_sport;
+            params.uli.ports.dport = x->encap->encap_dport;
+            break;
+        }
+    }
+
+    dst = __xfrm_dst_lookup(family, &params);
 
     if (!IS_ERR(dst)) {
         if (prev_saddr != saddr)
@@ -2432,15 +2453,15 @@ int __xfrm_sk_clone_policy(struct sock *sk, const struct sock *osk)
 }
 
 static int
-xfrm_get_saddr(struct net *net, int oif, xfrm_address_t *local,
-               xfrm_address_t *remote, unsigned short family, u32 mark)
+xfrm_get_saddr(unsigned short family, xfrm_address_t *saddr,
+               const struct xfrm_dst_lookup_params *params)
 {
     int err;
     const struct xfrm_policy_afinfo *afinfo = xfrm_policy_get_afinfo(family);
 
     if (unlikely(afinfo == NULL))
         return -EINVAL;
-    err = afinfo->get_saddr(net, oif, local, remote, mark);
+    err = afinfo->get_saddr(saddr, params);
     rcu_read_unlock();
     return err;
 }
@@ -2469,9 +2490,14 @@ xfrm_tmpl_resolve_one(struct xfrm_policy *policy, const struct flowi *fl,
         remote = &tmpl->id.daddr;
         local = &tmpl->saddr;
         if (xfrm_addr_any(local, tmpl->encap_family)) {
-            error = xfrm_get_saddr(net, fl->flowi_oif,
-                                   &tmp, remote,
-                                   tmpl->encap_family, 0);
+            struct xfrm_dst_lookup_params params;
+
+            memset(&params, 0, sizeof(params));
+            params.net = net;
+            params.oif = fl->flowi_oif;
+            params.daddr = remote;
+            error = xfrm_get_saddr(tmpl->encap_family, &tmp,
+                                   &params);
             if (error)
                 goto fail;
             local = &tmp;
@@ -4180,7 +4206,6 @@ static int __net_init xfrm_policy_init(struct net *net)
 
         net->xfrm.policy_count[dir] = 0;
         net->xfrm.policy_count[XFRM_POLICY_MAX + dir] = 0;
-        INIT_HLIST_HEAD(&net->xfrm.policy_inexact[dir]);
 
         htab = &net->xfrm.policy_bydst[dir];
         htab->table = xfrm_hash_alloc(sz);
@@ -4234,8 +4259,6 @@ static void xfrm_policy_fini(struct net *net)
     for (dir = 0; dir < XFRM_POLICY_MAX; dir++) {
         struct xfrm_policy_hash *htab;
 
-        WARN_ON(!hlist_empty(&net->xfrm.policy_inexact[dir]));
-
         htab = &net->xfrm.policy_bydst[dir];
         sz = (htab->hmask + 1) * sizeof(struct hlist_head);
         WARN_ON(!hlist_empty(htab->table));
@@ -201,6 +201,7 @@ static int verify_newsa_info(struct xfrm_usersa_info *p,
 {
     int err;
     u8 sa_dir = attrs[XFRMA_SA_DIR] ? nla_get_u8(attrs[XFRMA_SA_DIR]) : 0;
+    u16 family = p->sel.family;
 
     err = -EINVAL;
     switch (p->family) {
@@ -221,7 +222,10 @@ static int verify_newsa_info(struct xfrm_usersa_info *p,
         goto out;
     }
 
-    switch (p->sel.family) {
+    if (!family && !(p->flags & XFRM_STATE_AF_UNSPEC))
+        family = p->family;
+
+    switch (family) {
     case AF_UNSPEC:
         break;
 
@@ -1098,7 +1102,9 @@ static int copy_to_user_auth(struct xfrm_algo_auth *auth, struct sk_buff *skb)
     if (!nla)
         return -EMSGSIZE;
     ap = nla_data(nla);
-    memcpy(ap, auth, sizeof(struct xfrm_algo_auth));
+    strscpy_pad(ap->alg_name, auth->alg_name, sizeof(ap->alg_name));
+    ap->alg_key_len = auth->alg_key_len;
+    ap->alg_trunc_len = auth->alg_trunc_len;
     if (redact_secret && auth->alg_key_len)
         memset(ap->alg_key, 0, (auth->alg_key_len + 7) / 8);
     else
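The copy_to_user_auth() hunk stops copying the whole xfrm_algo_auth header into the netlink attribute and instead copies the name and the two length fields individually; strscpy_pad() both bounds the string copy and zero-fills the rest of the destination buffer, so stray kernel bytes sitting after the NUL terminator are never forwarded to userspace. The same idiom in isolation (buffer size and source string here are illustrative):

    char name[64];

    /* copies at most sizeof(name) - 1 characters, NUL-terminates,
     * and zero-pads the remainder of the buffer
     */
    strscpy_pad(name, untrusted_or_shorter_string, sizeof(name));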