Merge tag 'net-6.11-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Jakub Kicinski:
 "Including fixes from bpf and netfilter.

  A lot of networking people were at a conference last week, busy
  catching COVID, so relatively short PR.

  Current release - regressions:

   - tcp: process the 3rd ACK with sk_socket for TFO and MPTCP

  Current release - new code bugs:

   - l2tp: protect session IDR and tunnel session list with one lock,
     make sure the state is coherent to avoid a warning

   - eth: bnxt_en: update xdp_rxq_info in queue restart logic

   - eth: airoha: fix location of the MBI_RX_AGE_SEL_MASK field

  Previous releases - regressions:

   - xsk: require XDP_UMEM_TX_METADATA_LEN to actuate tx_metadata_len,
     the field reuses previously un-validated pad

  Previous releases - always broken:

   - tap/tun: drop short frames to prevent crashes later in the stack

   - eth: ice: add a per-VF limit on number of FDIR filters

   - af_unix: disable MSG_OOB handling for sockets in sockmap/sockhash"

Signed-off-by: Jakub Kicinski <kuba@kernel.org>

* tag 'net-6.11-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (34 commits)
  tun: add missing verification for short frame
  tap: add missing verification for short frame
  mISDN: Fix a use after free in hfcmulti_tx()
  gve: Fix an edge case for TSO skb validity check
  bnxt_en: update xdp_rxq_info in queue restart logic
  tcp: process the 3rd ACK with sk_socket for TFO/MPTCP
  selftests/bpf: Add XDP_UMEM_TX_METADATA_LEN to XSK TX metadata test
  xsk: Require XDP_UMEM_TX_METADATA_LEN to actuate tx_metadata_len
  bpf: Fix a segment issue when downgrading gso_size
  net: mediatek: Fix potential NULL pointer dereference in dummy net_device handling
  MAINTAINERS: make Breno the netconsole maintainer
  MAINTAINERS: Update bonding entry
  net: nexthop: Initialize all fields in dumped nexthops
  net: stmmac: Correct byte order of perfect_match
  selftests: forwarding: skip if kernel not support setting bridge fdb learning limit
  tipc: Return non-zero value from tipc_udp_addr2str() on error
  netfilter: nft_set_pipapo_avx2: disable softinterrupts
  ice: Fix recipe read procedure
  ice: Add a per-VF limit on number of FDIR filters
  net: bonding: correctly annotate RCU in bond_should_notify_peers()
  ...
commit 1722389b0d

.mailmap
@@ -474,6 +474,8 @@ Nadia Yvette Chambers <nyc@holomorphy.com> William Lee Irwin III <wli@holomorphy
 Naoya Horiguchi <nao.horiguchi@gmail.com> <n-horiguchi@ah.jp.nec.com>
 Naoya Horiguchi <nao.horiguchi@gmail.com> <naoya.horiguchi@nec.com>
 Nathan Chancellor <nathan@kernel.org> <natechancellor@gmail.com>
+Naveen N Rao <naveen@kernel.org> <naveen.n.rao@linux.ibm.com>
+Naveen N Rao <naveen@kernel.org> <naveen.n.rao@linux.vnet.ibm.com>
 Neeraj Upadhyay <neeraj.upadhyay@kernel.org> <quic_neeraju@quicinc.com>
 Neeraj Upadhyay <neeraj.upadhyay@kernel.org> <neeraju@codeaurora.org>
 Neil Armstrong <neil.armstrong@linaro.org> <narmstrong@baylibre.com>
@@ -11,12 +11,16 @@ metadata on the receive side.
 General Design
 ==============
 
-The headroom for the metadata is reserved via ``tx_metadata_len`` in
-``struct xdp_umem_reg``. The metadata length is therefore the same for
-every socket that shares the same umem. The metadata layout is a fixed UAPI,
-refer to ``union xsk_tx_metadata`` in ``include/uapi/linux/if_xdp.h``.
-Thus, generally, the ``tx_metadata_len`` field above should contain
-``sizeof(union xsk_tx_metadata)``.
+The headroom for the metadata is reserved via ``tx_metadata_len`` and
+``XDP_UMEM_TX_METADATA_LEN`` flag in ``struct xdp_umem_reg``. The metadata
+length is therefore the same for every socket that shares the same umem.
+The metadata layout is a fixed UAPI, refer to ``union xsk_tx_metadata`` in
+``include/uapi/linux/if_xdp.h``. Thus, generally, the ``tx_metadata_len``
+field above should contain ``sizeof(union xsk_tx_metadata)``.
+
+Note that in the original implementation the ``XDP_UMEM_TX_METADATA_LEN``
+flag was not required. Applications might attempt to create a umem
+with a flag first and if it fails, do another attempt without a flag.
 
 The headroom and the metadata itself should be located right before
 ``xdp_desc->addr`` in the umem frame. Within a frame, the metadata
MAINTAINERS
@@ -3906,11 +3906,10 @@ F: include/net/bluetooth/
 F: net/bluetooth/
 
 BONDING DRIVER
-M: Jay Vosburgh <j.vosburgh@gmail.com>
+M: Jay Vosburgh <jv@jvosburgh.net>
 M: Andy Gospodarek <andy@greyhouse.net>
 L: netdev@vger.kernel.org
-S: Supported
-W: http://sourceforge.net/projects/bonding/
+S: Maintained
 F: Documentation/networking/bonding.rst
 F: drivers/net/bonding/
 F: include/net/bond*
@@ -3974,8 +3973,10 @@ S: Odd Fixes
 F: drivers/net/ethernet/netronome/nfp/bpf/
 
 BPF JIT for POWERPC (32-BIT AND 64-BIT)
-M: Naveen N. Rao <naveen.n.rao@linux.ibm.com>
 M: Michael Ellerman <mpe@ellerman.id.au>
+M: Hari Bathini <hbathini@linux.ibm.com>
+M: Christophe Leroy <christophe.leroy@csgroup.eu>
+R: Naveen N Rao <naveen@kernel.org>
 L: bpf@vger.kernel.org
 S: Supported
 F: arch/powerpc/net/
@@ -12529,7 +12530,7 @@ F: mm/kmsan/
 F: scripts/Makefile.kmsan
 
 KPROBES
-M: Naveen N. Rao <naveen.n.rao@linux.ibm.com>
+M: Naveen N Rao <naveen@kernel.org>
 M: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
 M: "David S. Miller" <davem@davemloft.net>
 M: Masami Hiramatsu <mhiramat@kernel.org>
@@ -12906,7 +12907,7 @@ LINUX FOR POWERPC (32-BIT AND 64-BIT)
 M: Michael Ellerman <mpe@ellerman.id.au>
 R: Nicholas Piggin <npiggin@gmail.com>
 R: Christophe Leroy <christophe.leroy@csgroup.eu>
-R: Naveen N. Rao <naveen.n.rao@linux.ibm.com>
+R: Naveen N Rao <naveen@kernel.org>
 L: linuxppc-dev@lists.ozlabs.org
 S: Supported
 W: https://github.com/linuxppc/wiki/wiki
@@ -15762,6 +15763,12 @@ S: Maintained
 F: Documentation/devicetree/bindings/hwmon/nuvoton,nct6775.yaml
 F: drivers/hwmon/nct6775-i2c.c
 
+NETCONSOLE
+M: Breno Leitao <leitao@debian.org>
+S: Maintained
+F: Documentation/networking/netconsole.rst
+F: drivers/net/netconsole.c
+
 NETDEVSIM
 M: Jakub Kicinski <kuba@kernel.org>
 S: Maintained
@@ -1901,7 +1901,7 @@ hfcmulti_dtmf(struct hfc_multi *hc)
 static void
 hfcmulti_tx(struct hfc_multi *hc, int ch)
 {
-        int i, ii, temp, len = 0;
+        int i, ii, temp, tmp_len, len = 0;
         int Zspace, z1, z2; /* must be int for calculation */
         int Fspace, f1, f2;
         u_char *d;
@@ -2122,14 +2122,15 @@ next_frame:
                 HFC_wait_nodebug(hc);
         }
 
+        tmp_len = (*sp)->len;
         dev_kfree_skb(*sp);
         /* check for next frame */
         if (bch && get_next_bframe(bch)) {
-                len = (*sp)->len;
+                len = tmp_len;
                 goto next_frame;
         }
         if (dch && get_next_dframe(dch)) {
-                len = (*sp)->len;
+                len = tmp_len;
                 goto next_frame;
         }
 
@@ -1121,13 +1121,10 @@ static struct slave *bond_find_best_slave(struct bonding *bond)
         return bestslave;
 }
 
+/* must be called in RCU critical section or with RTNL held */
 static bool bond_should_notify_peers(struct bonding *bond)
 {
-        struct slave *slave;
-
-        rcu_read_lock();
-        slave = rcu_dereference(bond->curr_active_slave);
-        rcu_read_unlock();
+        struct slave *slave = rcu_dereference_rtnl(bond->curr_active_slave);
 
         if (!slave || !bond->send_peer_notif ||
             bond->send_peer_notif %
@@ -4052,6 +4052,7 @@ static void bnxt_reset_rx_ring_struct(struct bnxt *bp,
 
         rxr->page_pool->p.napi = NULL;
         rxr->page_pool = NULL;
+        memset(&rxr->xdp_rxq, 0, sizeof(struct xdp_rxq_info));
 
         ring = &rxr->rx_ring_struct;
         rmem = &ring->ring_mem;
@@ -15018,6 +15019,16 @@ static int bnxt_queue_mem_alloc(struct net_device *dev, void *qmem, int idx)
         if (rc)
                 return rc;
 
+        rc = xdp_rxq_info_reg(&clone->xdp_rxq, bp->dev, idx, 0);
+        if (rc < 0)
+                goto err_page_pool_destroy;
+
+        rc = xdp_rxq_info_reg_mem_model(&clone->xdp_rxq,
+                                        MEM_TYPE_PAGE_POOL,
+                                        clone->page_pool);
+        if (rc)
+                goto err_rxq_info_unreg;
+
         ring = &clone->rx_ring_struct;
         rc = bnxt_alloc_ring(bp, &ring->ring_mem);
         if (rc)
@@ -15047,6 +15058,9 @@ err_free_rx_agg_ring:
         bnxt_free_ring(bp, &clone->rx_agg_ring_struct.ring_mem);
 err_free_rx_ring:
         bnxt_free_ring(bp, &clone->rx_ring_struct.ring_mem);
+err_rxq_info_unreg:
+        xdp_rxq_info_unreg(&clone->xdp_rxq);
 err_page_pool_destroy:
         clone->page_pool->p.napi = NULL;
         page_pool_destroy(clone->page_pool);
         clone->page_pool = NULL;
@@ -15062,6 +15076,8 @@ static void bnxt_queue_mem_free(struct net_device *dev, void *qmem)
         bnxt_free_one_rx_ring(bp, rxr);
         bnxt_free_one_rx_agg_ring(bp, rxr);
 
+        xdp_rxq_info_unreg(&rxr->xdp_rxq);
+
         page_pool_destroy(rxr->page_pool);
         rxr->page_pool = NULL;
 
@@ -15145,6 +15161,7 @@ static int bnxt_queue_start(struct net_device *dev, void *qmem, int idx)
         rxr->rx_sw_agg_prod = clone->rx_sw_agg_prod;
         rxr->rx_next_cons = clone->rx_next_cons;
         rxr->page_pool = clone->page_pool;
+        rxr->xdp_rxq = clone->xdp_rxq;
 
         bnxt_copy_rx_ring(bp, rxr, clone);
@@ -866,22 +866,42 @@ static bool gve_can_send_tso(const struct sk_buff *skb)
         const int header_len = skb_tcp_all_headers(skb);
         const int gso_size = shinfo->gso_size;
         int cur_seg_num_bufs;
+        int prev_frag_size;
         int cur_seg_size;
         int i;
 
         cur_seg_size = skb_headlen(skb) - header_len;
+        prev_frag_size = skb_headlen(skb);
         cur_seg_num_bufs = cur_seg_size > 0;
 
         for (i = 0; i < shinfo->nr_frags; i++) {
                 if (cur_seg_size >= gso_size) {
                         cur_seg_size %= gso_size;
                         cur_seg_num_bufs = cur_seg_size > 0;
+
+                        if (prev_frag_size > GVE_TX_MAX_BUF_SIZE_DQO) {
+                                int prev_frag_remain = prev_frag_size %
+                                        GVE_TX_MAX_BUF_SIZE_DQO;
+
+                                /* If the last descriptor of the previous frag
+                                 * is less than cur_seg_size, the segment will
+                                 * span two descriptors in the previous frag.
+                                 * Since max gso size (9728) is less than
+                                 * GVE_TX_MAX_BUF_SIZE_DQO, it is impossible
+                                 * for the segment to span more than two
+                                 * descriptors.
+                                 */
+                                if (prev_frag_remain &&
+                                    cur_seg_size > prev_frag_remain)
+                                        cur_seg_num_bufs++;
+                        }
                 }
 
                 if (unlikely(++cur_seg_num_bufs > max_bufs_per_seg))
                         return false;
 
-                cur_seg_size += skb_frag_size(&shinfo->frags[i]);
+                prev_frag_size = skb_frag_size(&shinfo->frags[i]);
+                cur_seg_size += prev_frag_size;
         }
 
         return true;
@@ -534,7 +534,7 @@ ice_parse_rx_flow_user_data(struct ethtool_rx_flow_spec *fsp,
  *
  * Returns the number of available flow director filters to this VSI
  */
-static int ice_fdir_num_avail_fltr(struct ice_hw *hw, struct ice_vsi *vsi)
+int ice_fdir_num_avail_fltr(struct ice_hw *hw, struct ice_vsi *vsi)
 {
         u16 vsi_num = ice_get_hw_vsi_num(hw, vsi->idx);
         u16 num_guar;
@@ -207,6 +207,8 @@ struct ice_fdir_base_pkt {
         const u8 *tun_pkt;
 };
 
+struct ice_vsi;
+
 int ice_alloc_fd_res_cntr(struct ice_hw *hw, u16 *cntr_id);
 int ice_free_fd_res_cntr(struct ice_hw *hw, u16 cntr_id);
 int ice_alloc_fd_guar_item(struct ice_hw *hw, u16 *cntr_id, u16 num_fltr);
@@ -218,6 +220,7 @@ int
 ice_fdir_get_gen_prgm_pkt(struct ice_hw *hw, struct ice_fdir_fltr *input,
                           u8 *pkt, bool frag, bool tun);
 int ice_get_fdir_cnt_all(struct ice_hw *hw);
+int ice_fdir_num_avail_fltr(struct ice_hw *hw, struct ice_vsi *vsi);
 bool ice_fdir_is_dup_fltr(struct ice_hw *hw, struct ice_fdir_fltr *input);
 bool ice_fdir_has_frag(enum ice_fltr_ptype flow);
 struct ice_fdir_fltr *
@@ -2400,10 +2400,10 @@ ice_get_recp_frm_fw(struct ice_hw *hw, struct ice_sw_recipe *recps, u8 rid,
 
         /* Propagate some data to the recipe database */
         recps[idx].priority = root_bufs.content.act_ctrl_fwd_priority;
-        recps[idx].need_pass_l2 = root_bufs.content.act_ctrl &
-                                  ICE_AQ_RECIPE_ACT_NEED_PASS_L2;
-        recps[idx].allow_pass_l2 = root_bufs.content.act_ctrl &
-                                   ICE_AQ_RECIPE_ACT_ALLOW_PASS_L2;
+        recps[idx].need_pass_l2 = !!(root_bufs.content.act_ctrl &
+                                     ICE_AQ_RECIPE_ACT_NEED_PASS_L2);
+        recps[idx].allow_pass_l2 = !!(root_bufs.content.act_ctrl &
+                                      ICE_AQ_RECIPE_ACT_ALLOW_PASS_L2);
         bitmap_zero(recps[idx].res_idxs, ICE_MAX_FV_WORDS);
         if (root_bufs.content.result_indx & ICE_AQ_RECIPE_RESULT_EN) {
                 set_bit(root_bufs.content.result_indx &
@@ -536,6 +536,8 @@ static void ice_vc_fdir_reset_cnt_all(struct ice_vf_fdir *fdir)
                 fdir->fdir_fltr_cnt[flow][0] = 0;
                 fdir->fdir_fltr_cnt[flow][1] = 0;
         }
+
+        fdir->fdir_fltr_cnt_total = 0;
 }
 
 /**
@@ -1560,6 +1562,7 @@ ice_vc_add_fdir_fltr_post(struct ice_vf *vf, struct ice_vf_fdir_ctx *ctx,
         resp->status = status;
         resp->flow_id = conf->flow_id;
         vf->fdir.fdir_fltr_cnt[conf->input.flow_type][is_tun]++;
+        vf->fdir.fdir_fltr_cnt_total++;
 
         ret = ice_vc_send_msg_to_vf(vf, ctx->v_opcode, v_ret,
                                     (u8 *)resp, len);
@@ -1624,6 +1627,7 @@ ice_vc_del_fdir_fltr_post(struct ice_vf *vf, struct ice_vf_fdir_ctx *ctx,
         resp->status = status;
         ice_vc_fdir_remove_entry(vf, conf, conf->flow_id);
         vf->fdir.fdir_fltr_cnt[conf->input.flow_type][is_tun]--;
+        vf->fdir.fdir_fltr_cnt_total--;
 
         ret = ice_vc_send_msg_to_vf(vf, ctx->v_opcode, v_ret,
                                     (u8 *)resp, len);
@@ -1790,6 +1794,7 @@ int ice_vc_add_fdir_fltr(struct ice_vf *vf, u8 *msg)
         struct virtchnl_fdir_add *stat = NULL;
         struct virtchnl_fdir_fltr_conf *conf;
         enum virtchnl_status_code v_ret;
+        struct ice_vsi *vf_vsi;
         struct device *dev;
         struct ice_pf *pf;
         int is_tun = 0;
@@ -1798,6 +1803,17 @@ int ice_vc_add_fdir_fltr(struct ice_vf *vf, u8 *msg)
 
         pf = vf->pf;
         dev = ice_pf_to_dev(pf);
+        vf_vsi = ice_get_vf_vsi(vf);
+
+#define ICE_VF_MAX_FDIR_FILTERS 128
+        if (!ice_fdir_num_avail_fltr(&pf->hw, vf_vsi) ||
+            vf->fdir.fdir_fltr_cnt_total >= ICE_VF_MAX_FDIR_FILTERS) {
+                v_ret = VIRTCHNL_STATUS_ERR_PARAM;
+                dev_err(dev, "Max number of FDIR filters for VF %d is reached\n",
+                        vf->vf_id);
+                goto err_exit;
+        }
+
         ret = ice_vc_fdir_param_check(vf, fltr->vsi_id);
         if (ret) {
                 v_ret = VIRTCHNL_STATUS_ERR_PARAM;
@@ -29,6 +29,7 @@ struct ice_vf_fdir_ctx {
 struct ice_vf_fdir {
         u16 fdir_fltr_cnt[ICE_FLTR_PTYPE_MAX][ICE_FD_HW_SEG_MAX];
         int prof_entry_cnt[ICE_FLTR_PTYPE_MAX][ICE_FD_HW_SEG_MAX];
+        u16 fdir_fltr_cnt_total;
         struct ice_fd_hw_prof **fdir_prof;
 
         struct idr fdir_rule_idr;
@@ -249,7 +249,7 @@
 #define REG_FE_GDM_RX_ETH_L1023_CNT_H(_n) (GDM_BASE(_n) + 0x2fc)
 
 #define REG_GDM2_CHN_RLS (GDM2_BASE + 0x20)
-#define MBI_RX_AGE_SEL_MASK GENMASK(18, 17)
+#define MBI_RX_AGE_SEL_MASK GENMASK(26, 25)
 #define MBI_TX_AGE_SEL_MASK GENMASK(18, 17)
 
 #define REG_GDM3_FWD_CFG GDM3_BASE
@@ -4223,8 +4223,6 @@ static int mtk_free_dev(struct mtk_eth *eth)
                 metadata_dst_free(eth->dsa_meta[i]);
         }
 
-        free_netdev(eth->dummy_dev);
-
         return 0;
 }
 
@@ -5090,6 +5088,7 @@ static void mtk_remove(struct platform_device *pdev)
         netif_napi_del(&eth->tx_napi);
         netif_napi_del(&eth->rx_napi);
         mtk_cleanup(eth);
+        free_netdev(eth->dummy_dev);
         mtk_mdio_cleanup(eth);
 }
@@ -977,7 +977,7 @@ static void dwmac4_set_mac_loopback(void __iomem *ioaddr, bool enable)
 }
 
 static void dwmac4_update_vlan_hash(struct mac_device_info *hw, u32 hash,
-                                    __le16 perfect_match, bool is_double)
+                                    u16 perfect_match, bool is_double)
 {
         void __iomem *ioaddr = hw->pcsr;
         u32 value;
@@ -615,7 +615,7 @@ static int dwxgmac2_rss_configure(struct mac_device_info *hw,
 }
 
 static void dwxgmac2_update_vlan_hash(struct mac_device_info *hw, u32 hash,
-                                      __le16 perfect_match, bool is_double)
+                                      u16 perfect_match, bool is_double)
 {
         void __iomem *ioaddr = hw->pcsr;
@@ -393,7 +393,7 @@ struct stmmac_ops {
                         struct stmmac_rss *cfg, u32 num_rxq);
         /* VLAN */
         void (*update_vlan_hash)(struct mac_device_info *hw, u32 hash,
-                                 __le16 perfect_match, bool is_double);
+                                 u16 perfect_match, bool is_double);
         void (*enable_vlan)(struct mac_device_info *hw, u32 type);
         void (*rx_hw_vlan)(struct mac_device_info *hw, struct dma_desc *rx_desc,
                            struct sk_buff *skb);
@@ -6641,7 +6641,7 @@ static u32 stmmac_vid_crc32_le(__le16 vid_le)
 static int stmmac_vlan_update(struct stmmac_priv *priv, bool is_double)
 {
         u32 crc, hash = 0;
-        __le16 pmatch = 0;
+        u16 pmatch = 0;
         int count = 0;
         u16 vid = 0;
 
@@ -6656,7 +6656,7 @@ static int stmmac_vlan_update(struct stmmac_priv *priv, bool is_double)
                 if (count > 2) /* VID = 0 always passes filter */
                         return -EOPNOTSUPP;
 
-                pmatch = cpu_to_le16(vid);
+                pmatch = vid;
                 hash = 0;
         }
@@ -1177,6 +1177,11 @@ static int tap_get_user_xdp(struct tap_queue *q, struct xdp_buff *xdp)
         struct sk_buff *skb;
         int err, depth;
 
+        if (unlikely(xdp->data_end - xdp->data < ETH_HLEN)) {
+                err = -EINVAL;
+                goto err;
+        }
+
         if (q->flags & IFF_VNET_HDR)
                 vnet_hdr_len = READ_ONCE(q->vnet_hdr_sz);
@@ -2455,6 +2455,9 @@ static int tun_xdp_one(struct tun_struct *tun,
         bool skb_xdp = false;
         struct page *page;
 
+        if (unlikely(datasize < ETH_HLEN))
+                return -EINVAL;
+
         xdp_prog = rcu_dereference(tun->xdp_prog);
         if (xdp_prog) {
                 if (gso->gso_type) {
@@ -41,6 +41,10 @@
  */
 #define XDP_UMEM_TX_SW_CSUM (1 << 1)
 
+/* Request to reserve tx_metadata_len bytes of per-chunk metadata.
+ */
+#define XDP_UMEM_TX_METADATA_LEN (1 << 2)
+
 struct sockaddr_xdp {
         __u16 sxdp_family;
         __u16 sxdp_flags;
@@ -9328,13 +9328,12 @@ static void perf_event_bpf_emit_ksymbols(struct bpf_prog *prog,
         bool unregister = type == PERF_BPF_EVENT_PROG_UNLOAD;
         int i;
 
-        if (prog->aux->func_cnt == 0) {
-                perf_event_ksymbol(PERF_RECORD_KSYMBOL_TYPE_BPF,
-                                   (u64)(unsigned long)prog->bpf_func,
-                                   prog->jited_len, unregister,
-                                   prog->aux->ksym.name);
-        } else {
-                for (i = 0; i < prog->aux->func_cnt; i++) {
-                        struct bpf_prog *subprog = prog->aux->func[i];
+        perf_event_ksymbol(PERF_RECORD_KSYMBOL_TYPE_BPF,
+                           (u64)(unsigned long)prog->bpf_func,
+                           prog->jited_len, unregister,
+                           prog->aux->ksym.name);
 
-                        perf_event_ksymbol(
+        for (i = 1; i < prog->aux->func_cnt; i++) {
+                struct bpf_prog *subprog = prog->aux->func[i];
+
+                perf_event_ksymbol(
@@ -9343,7 +9342,6 @@ static void perf_event_bpf_emit_ksymbols(struct bpf_prog *prog,
                         subprog->jited_len, unregister,
                         subprog->aux->ksym.name);
-                }
         }
 }
 
 void perf_event_bpf_event(struct bpf_prog *prog,
@@ -3548,13 +3548,20 @@ static int bpf_skb_net_grow(struct sk_buff *skb, u32 off, u32 len_diff,
         if (skb_is_gso(skb)) {
                 struct skb_shared_info *shinfo = skb_shinfo(skb);
 
-                /* Due to header grow, MSS needs to be downgraded. */
-                if (!(flags & BPF_F_ADJ_ROOM_FIXED_GSO))
-                        skb_decrease_gso_size(shinfo, len_diff);
-
                 /* Header must be checked, and gso_segs recomputed. */
                 shinfo->gso_type |= gso_type;
                 shinfo->gso_segs = 0;
+
+                /* Due to header growth, MSS needs to be downgraded.
+                 * There is a BUG_ON() when segmenting the frag_list with
+                 * head_frag true, so linearize the skb after downgrading
+                 * the MSS.
+                 */
+                if (!(flags & BPF_F_ADJ_ROOM_FIXED_GSO)) {
+                        skb_decrease_gso_size(shinfo, len_diff);
+                        if (shinfo->frag_list)
+                                return skb_linearize(skb);
+                }
         }
 
         return 0;
@@ -888,9 +888,10 @@ static int nla_put_nh_group(struct sk_buff *skb, struct nexthop *nh,
 
         p = nla_data(nla);
         for (i = 0; i < nhg->num_nh; ++i) {
-                p->id = nhg->nh_entries[i].nh->id;
-                p->weight = nhg->nh_entries[i].weight - 1;
-                p += 1;
+                *p++ = (struct nexthop_grp) {
+                        .id = nhg->nh_entries[i].nh->id,
+                        .weight = nhg->nh_entries[i].weight - 1,
+                };
         }
 
         if (nhg->resilient && nla_put_nh_group_res(skb, nhg))
@@ -1263,7 +1263,7 @@ void ip_rt_get_source(u8 *addr, struct sk_buff *skb, struct rtable *rt)
                 struct flowi4 fl4 = {
                         .daddr = iph->daddr,
                         .saddr = iph->saddr,
-                        .flowi4_tos = RT_TOS(iph->tos),
+                        .flowi4_tos = iph->tos & IPTOS_RT_MASK,
                         .flowi4_oif = rt->dst.dev->ifindex,
                         .flowi4_iif = skb->dev->ifindex,
                         .flowi4_mark = skb->mark,
@@ -6819,9 +6819,6 @@ tcp_rcv_state_process(struct sock *sk, struct sk_buff *skb)
                 tcp_fast_path_on(tp);
                 if (sk->sk_shutdown & SEND_SHUTDOWN)
                         tcp_shutdown(sk, SEND_SHUTDOWN);
-
-                if (sk->sk_socket)
-                        goto consume;
                 break;
 
         case TCP_FIN_WAIT1: {
@@ -441,14 +441,15 @@ int l2tp_session_register(struct l2tp_session *session,
         int err;
 
         spin_lock_bh(&tunnel->list_lock);
+        spin_lock_bh(&pn->l2tp_session_idr_lock);
+
         if (!tunnel->acpt_newsess) {
                 err = -ENODEV;
-                goto err_tlock;
+                goto out;
         }
 
         if (tunnel->version == L2TP_HDR_VER_3) {
                 session_key = session->session_id;
-                spin_lock_bh(&pn->l2tp_session_idr_lock);
                 err = idr_alloc_u32(&pn->l2tp_v3_session_idr, NULL,
                                     &session_key, session_key, GFP_ATOMIC);
                 /* IP encap expects session IDs to be globally unique, while
@@ -462,43 +463,36 @@ int l2tp_session_register(struct l2tp_session *session,
                         err = l2tp_session_collision_add(pn, session,
                                                          other_session);
                 }
-                spin_unlock_bh(&pn->l2tp_session_idr_lock);
         } else {
                 session_key = l2tp_v2_session_key(tunnel->tunnel_id,
                                                   session->session_id);
-                spin_lock_bh(&pn->l2tp_session_idr_lock);
                 err = idr_alloc_u32(&pn->l2tp_v2_session_idr, NULL,
                                     &session_key, session_key, GFP_ATOMIC);
-                spin_unlock_bh(&pn->l2tp_session_idr_lock);
         }
 
         if (err) {
                 if (err == -ENOSPC)
                         err = -EEXIST;
-                goto err_tlock;
+                goto out;
         }
 
         l2tp_tunnel_inc_refcount(tunnel);
-
         list_add(&session->list, &tunnel->session_list);
-        spin_unlock_bh(&tunnel->list_lock);
 
-        spin_lock_bh(&pn->l2tp_session_idr_lock);
         if (tunnel->version == L2TP_HDR_VER_3) {
                 if (!other_session)
                         idr_replace(&pn->l2tp_v3_session_idr, session, session_key);
         } else {
                 idr_replace(&pn->l2tp_v2_session_idr, session, session_key);
         }
+
+out:
         spin_unlock_bh(&pn->l2tp_session_idr_lock);
+        spin_unlock_bh(&tunnel->list_lock);
 
-        trace_register_session(session);
-
-        return 0;
-
-err_tlock:
-        spin_unlock_bh(&tunnel->list_lock);
+        if (!err)
+                trace_register_session(session);
 
         return err;
 }
 EXPORT_SYMBOL_GPL(l2tp_session_register);
@@ -1260,13 +1254,13 @@ static void l2tp_session_unhash(struct l2tp_session *session)
         struct l2tp_net *pn = l2tp_pernet(tunnel->l2tp_net);
         struct l2tp_session *removed = session;
 
-        /* Remove from the per-tunnel list */
         spin_lock_bh(&tunnel->list_lock);
+        spin_lock_bh(&pn->l2tp_session_idr_lock);
+
+        /* Remove from the per-tunnel list */
         list_del_init(&session->list);
-        spin_unlock_bh(&tunnel->list_lock);
 
         /* Remove from per-net IDR */
-        spin_lock_bh(&pn->l2tp_session_idr_lock);
         if (tunnel->version == L2TP_HDR_VER_3) {
                 if (hash_hashed(&session->hlist))
                         l2tp_session_collision_del(pn, session);
@@ -1280,7 +1274,9 @@ static void l2tp_session_unhash(struct l2tp_session *session)
                                     session_key);
         }
         WARN_ON_ONCE(removed && removed != session);
+
         spin_unlock_bh(&pn->l2tp_session_idr_lock);
+        spin_unlock_bh(&tunnel->list_lock);
 
         synchronize_rcu();
 }
@@ -1139,8 +1139,14 @@ bool nft_pipapo_avx2_lookup(const struct net *net, const struct nft_set *set,
         bool map_index;
         int i, ret = 0;
 
-        if (unlikely(!irq_fpu_usable()))
-                return nft_pipapo_lookup(net, set, key, ext);
+        local_bh_disable();
+
+        if (unlikely(!irq_fpu_usable())) {
+                bool fallback_res = nft_pipapo_lookup(net, set, key, ext);
+
+                local_bh_enable();
+                return fallback_res;
+        }
 
         m = rcu_dereference(priv->match);
 
@@ -1155,6 +1161,7 @@ bool nft_pipapo_avx2_lookup(const struct net *net, const struct nft_set *set,
         scratch = *raw_cpu_ptr(m->scratch);
         if (unlikely(!scratch)) {
                 kernel_fpu_end();
+                local_bh_enable();
                 return false;
         }
 
@@ -1235,6 +1242,7 @@ out:
         if (i % 2)
                 scratch->map_index = !map_index;
         kernel_fpu_end();
+        local_bh_enable();
 
         return ret >= 0;
 }
@@ -135,8 +135,11 @@ static int tipc_udp_addr2str(struct tipc_media_addr *a, char *buf, int size)
                 snprintf(buf, size, "%pI4:%u", &ua->ipv4, ntohs(ua->port));
         else if (ntohs(ua->proto) == ETH_P_IPV6)
                 snprintf(buf, size, "%pI6:%u", &ua->ipv6, ntohs(ua->port));
-        else
+        else {
                 pr_err("Invalid UDP media address\n");
+                return 1;
+        }
 
         return 0;
 }
@@ -2721,10 +2721,49 @@ static struct sk_buff *manage_oob(struct sk_buff *skb, struct sock *sk,
 
 static int unix_stream_read_skb(struct sock *sk, skb_read_actor_t recv_actor)
 {
+        struct unix_sock *u = unix_sk(sk);
+        struct sk_buff *skb;
+        int err;
+
         if (unlikely(READ_ONCE(sk->sk_state) != TCP_ESTABLISHED))
                 return -ENOTCONN;
 
-        return unix_read_skb(sk, recv_actor);
+        mutex_lock(&u->iolock);
+        skb = skb_recv_datagram(sk, MSG_DONTWAIT, &err);
+        mutex_unlock(&u->iolock);
+        if (!skb)
+                return err;
+
+#if IS_ENABLED(CONFIG_AF_UNIX_OOB)
+        if (unlikely(skb == READ_ONCE(u->oob_skb))) {
+                bool drop = false;
+
+                unix_state_lock(sk);
+
+                if (sock_flag(sk, SOCK_DEAD)) {
+                        unix_state_unlock(sk);
+                        kfree_skb(skb);
+                        return -ECONNRESET;
+                }
+
+                spin_lock(&sk->sk_receive_queue.lock);
+                if (likely(skb == u->oob_skb)) {
+                        WRITE_ONCE(u->oob_skb, NULL);
+                        drop = true;
+                }
+                spin_unlock(&sk->sk_receive_queue.lock);
+
+                unix_state_unlock(sk);
+
+                if (drop) {
+                        WARN_ON_ONCE(skb_unref(skb));
+                        kfree_skb(skb);
+                        return -EAGAIN;
+                }
+        }
+#endif
+
+        return recv_actor(sk, skb);
 }
 
 static int unix_stream_read_generic(struct unix_stream_read_state *state,
@@ -54,6 +54,9 @@ static int unix_bpf_recvmsg(struct sock *sk, struct msghdr *msg,
         struct sk_psock *psock;
         int copied;
 
+        if (flags & MSG_OOB)
+                return -EOPNOTSUPP;
+
         if (!len)
                 return 0;
 
@@ -151,6 +151,7 @@ static int xdp_umem_account_pages(struct xdp_umem *umem)
 #define XDP_UMEM_FLAGS_VALID ( \
                 XDP_UMEM_UNALIGNED_CHUNK_FLAG | \
                 XDP_UMEM_TX_SW_CSUM | \
+                XDP_UMEM_TX_METADATA_LEN | \
         0)
 
 static int xdp_umem_reg(struct xdp_umem *umem, struct xdp_umem_reg *mr)
@@ -204,8 +205,11 @@ static int xdp_umem_reg(struct xdp_umem *umem, struct xdp_umem_reg *mr)
         if (headroom >= chunk_size - XDP_PACKET_HEADROOM)
                 return -EINVAL;
 
-        if (mr->tx_metadata_len >= 256 || mr->tx_metadata_len % 8)
-                return -EINVAL;
+        if (mr->flags & XDP_UMEM_TX_METADATA_LEN) {
+                if (mr->tx_metadata_len >= 256 || mr->tx_metadata_len % 8)
+                        return -EINVAL;
+                umem->tx_metadata_len = mr->tx_metadata_len;
+        }
 
         umem->size = size;
         umem->headroom = headroom;
@@ -215,7 +219,6 @@ static int xdp_umem_reg(struct xdp_umem *umem, struct xdp_umem_reg *mr)
         umem->pgs = NULL;
         umem->user = NULL;
         umem->flags = mr->flags;
-        umem->tx_metadata_len = mr->tx_metadata_len;
 
         INIT_LIST_HEAD(&umem->xsk_dma_list);
         refcount_set(&umem->users, 1);
@@ -2489,7 +2489,7 @@ static int do_help(int argc, char **argv)
         " cgroup/connect_unix | cgroup/getpeername4 | cgroup/getpeername6 |\n"
         " cgroup/getpeername_unix | cgroup/getsockname4 | cgroup/getsockname6 |\n"
         " cgroup/getsockname_unix | cgroup/sendmsg4 | cgroup/sendmsg6 |\n"
-        " cgroup/sendmsg°unix | cgroup/recvmsg4 | cgroup/recvmsg6 | cgroup/recvmsg_unix |\n"
+        " cgroup/sendmsg_unix | cgroup/recvmsg4 | cgroup/recvmsg6 | cgroup/recvmsg_unix |\n"
         " cgroup/getsockopt | cgroup/setsockopt | cgroup/sock_release |\n"
         " struct_ops | fentry | fexit | freplace | sk_lookup }\n"
         " ATTACH_TYPE := { sk_msg_verdict | sk_skb_verdict | sk_skb_stream_verdict |\n"
@@ -704,7 +704,7 @@ static int sets_patch(struct object *obj)
                  * Make sure id is at the beginning of the pairs
                  * struct, otherwise the below qsort would not work.
                  */
-                BUILD_BUG_ON(set8->pairs != &set8->pairs[0].id);
+                BUILD_BUG_ON((u32 *)set8->pairs != &set8->pairs[0].id);
                 qsort(set8->pairs, set8->cnt, sizeof(set8->pairs[0]), cmp_id);
 
                 /*
@@ -41,6 +41,10 @@
  */
 #define XDP_UMEM_TX_SW_CSUM	(1 << 1)
 
+/* Request to reserve tx_metadata_len bytes of per-chunk metadata.
+ */
+#define XDP_UMEM_TX_METADATA_LEN	(1 << 2)
+
 struct sockaddr_xdp {
 	__u16 sxdp_family;
 	__u16 sxdp_flags;
@@ -1559,10 +1559,12 @@ static void btf_dump_emit_type_chain(struct btf_dump *d,
 			 * Clang for BPF target generates func_proto with no
 			 * args as a func_proto with a single void arg (e.g.,
 			 * `int (*f)(void)` vs just `int (*f)()`). We are
-			 * going to pretend there are no args for such case.
+			 * going to emit valid empty args (void) syntax for
+			 * such case. Similarly and conveniently, valid
+			 * no args case can be special-cased here as well.
 			 */
-			if (vlen == 1 && p->type == 0) {
-				btf_dump_printf(d, ")");
+			if (vlen == 0 || (vlen == 1 && p->type == 0)) {
+				btf_dump_printf(d, "void)");
 				return;
 			}
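The comment change above is the crux of the btf_dump fix: in C, `int (*f)()` historically declares a function with an unspecified argument list, while `int (*f)(void)` is a true empty prototype, so emitting `(void)` is the only unambiguous rendering. A standalone illustration (the typedef and function names are invented for this sketch):

```c
#include <assert.h>

/* An empty prototype: the compiler knows this takes no arguments. */
static int nullary(void)
{
	return 7;
}

/* Pre-C23, "()" leaves the argument list unspecified; "(void)" pins
 * it to exactly zero arguments. Both pointer types can hold &nullary.
 */
typedef int (*fn_unspec_t)();
typedef int (*fn_proto_t)(void);

static int call_both(void)
{
	fn_unspec_t u = nullary;
	fn_proto_t p = nullary;

	return u() + p();	/* 7 + 7 */
}
```

Only the `(void)` form lets the compiler reject a bogus call like `p(1, 2)`; with the old-style `()` declarator that mistake compiles silently, which is why the dumper's output was misleading.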
@@ -1,6 +1,5 @@
 bpf_cookie/multi_kprobe_attach_api # kprobe_multi_link_api_subtest:FAIL:fentry_raw_skel_load unexpected error: -3
 bpf_cookie/multi_kprobe_link_api # kprobe_multi_link_api_subtest:FAIL:fentry_raw_skel_load unexpected error: -3
-fexit_sleep # The test never returns. The remaining tests cannot start.
 kprobe_multi_bench_attach # needs CONFIG_FPROBE
 kprobe_multi_test # needs CONFIG_FPROBE
 module_attach # prog 'kprobe_multi': failed to auto-attach: -95
@@ -21,13 +21,13 @@ static int do_sleep(void *skel)
 }
 
 #define STACK_SIZE (1024 * 1024)
-static char child_stack[STACK_SIZE];
 
 void test_fexit_sleep(void)
 {
 	struct fexit_sleep_lskel *fexit_skel = NULL;
 	int wstatus, duration = 0;
 	pid_t cpid;
+	char *child_stack = NULL;
 	int err, fexit_cnt;
 
 	fexit_skel = fexit_sleep_lskel__open_and_load();
@@ -38,6 +38,11 @@ void test_fexit_sleep(void)
 	if (CHECK(err, "fexit_attach", "fexit attach failed: %d\n", err))
 		goto cleanup;
 
+	child_stack = mmap(NULL, STACK_SIZE, PROT_READ | PROT_WRITE, MAP_PRIVATE |
+			   MAP_ANONYMOUS | MAP_STACK, -1, 0);
+	if (!ASSERT_NEQ(child_stack, MAP_FAILED, "mmap"))
+		goto cleanup;
+
 	cpid = clone(do_sleep, child_stack + STACK_SIZE, CLONE_FILES | SIGCHLD, fexit_skel);
 	if (CHECK(cpid == -1, "clone", "%s\n", strerror(errno)))
 		goto cleanup;
@@ -78,5 +83,6 @@ void test_fexit_sleep(void)
 		goto cleanup;
 
 cleanup:
+	munmap(child_stack, STACK_SIZE);
 	fexit_sleep_lskel__destroy(fexit_skel);
 }
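The fexit_sleep fix above swaps a static BSS array for an mmap-ed, MAP_STACK-flagged region, so the child's clone() stack is a proper standalone mapping that can be torn down on cleanup. A stripped-down sketch of that allocation pattern (alloc_child_stack/free_child_stack are invented names; the selftest does this inline):

```c
#define _DEFAULT_SOURCE
#include <assert.h>
#include <stddef.h>
#include <sys/mman.h>

#define STACK_SIZE (1024 * 1024)

/* Allocate an anonymous mapping suitable for a thread/child stack;
 * MAP_STACK tells the kernel the region will back a stack.
 * Returns NULL on failure.
 */
static char *alloc_child_stack(void)
{
	char *stack = mmap(NULL, STACK_SIZE, PROT_READ | PROT_WRITE,
			   MAP_PRIVATE | MAP_ANONYMOUS | MAP_STACK, -1, 0);

	return stack == MAP_FAILED ? NULL : stack;
}

static void free_child_stack(char *stack)
{
	if (stack)
		munmap(stack, STACK_SIZE);
}
```

clone() is then passed `stack + STACK_SIZE`, since stacks grow downward on the architectures these selftests run on.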
@@ -29,6 +29,8 @@
 
 #include "sockmap_helpers.h"
 
+#define NO_FLAGS 0
+
 static void test_insert_invalid(struct test_sockmap_listen *skel __always_unused,
 				int family, int sotype, int mapfd)
 {
@@ -1376,7 +1378,8 @@ static void test_redir(struct test_sockmap_listen *skel, struct bpf_map *map,
 
 static void pairs_redir_to_connected(int cli0, int peer0, int cli1, int peer1,
 				     int sock_mapfd, int nop_mapfd,
-				     int verd_mapfd, enum redir_mode mode)
+				     int verd_mapfd, enum redir_mode mode,
+				     int send_flags)
 {
 	const char *log_prefix = redir_mode_str(mode);
 	unsigned int pass;
@@ -1396,12 +1399,11 @@ static void pairs_redir_to_connected(int cli0, int peer0, int cli1, int peer1,
 		return;
 	}
 
-	n = write(cli1, "a", 1);
-	if (n < 0)
-		FAIL_ERRNO("%s: write", log_prefix);
-	if (n == 0)
-		FAIL("%s: incomplete write", log_prefix);
-	if (n < 1)
+	/* Last byte is OOB data when send_flags has MSG_OOB bit set */
+	n = xsend(cli1, "ab", 2, send_flags);
+	if (n >= 0 && n < 2)
+		FAIL("%s: incomplete send", log_prefix);
+	if (n < 2)
 		return;
 
 	key = SK_PASS;
@@ -1416,6 +1418,25 @@ static void pairs_redir_to_connected(int cli0, int peer0, int cli1, int peer1,
 		FAIL_ERRNO("%s: recv_timeout", log_prefix);
 	if (n == 0)
 		FAIL("%s: incomplete recv", log_prefix);
+
+	if (send_flags & MSG_OOB) {
+		/* Check that we can't read OOB while in sockmap */
+		errno = 0;
+		n = recv(peer1, &b, 1, MSG_OOB | MSG_DONTWAIT);
+		if (n != -1 || errno != EOPNOTSUPP)
+			FAIL("%s: recv(MSG_OOB): expected EOPNOTSUPP: retval=%d errno=%d",
+			     log_prefix, n, errno);
+
+		/* Remove peer1 from sockmap */
+		xbpf_map_delete_elem(sock_mapfd, &(int){ 1 });
+
+		/* Check that OOB was dropped on redirect */
+		errno = 0;
+		n = recv(peer1, &b, 1, MSG_OOB | MSG_DONTWAIT);
+		if (n != -1 || errno != EINVAL)
+			FAIL("%s: recv(MSG_OOB): expected EINVAL: retval=%d errno=%d",
+			     log_prefix, n, errno);
+	}
 }
 
 static void unix_redir_to_connected(int sotype, int sock_mapfd,
@@ -1432,7 +1453,8 @@ static void unix_redir_to_connected(int sotype, int sock_mapfd,
 		goto close0;
 	c1 = sfd[0], p1 = sfd[1];
 
-	pairs_redir_to_connected(c0, p0, c1, p1, sock_mapfd, -1, verd_mapfd, mode);
+	pairs_redir_to_connected(c0, p0, c1, p1, sock_mapfd, -1, verd_mapfd,
+				 mode, NO_FLAGS);
 
 	xclose(c1);
 	xclose(p1);
@@ -1722,7 +1744,8 @@ static void udp_redir_to_connected(int family, int sock_mapfd, int verd_mapfd,
 	if (err)
 		goto close_cli0;
 
-	pairs_redir_to_connected(c0, p0, c1, p1, sock_mapfd, -1, verd_mapfd, mode);
+	pairs_redir_to_connected(c0, p0, c1, p1, sock_mapfd, -1, verd_mapfd,
+				 mode, NO_FLAGS);
 
 	xclose(c1);
 	xclose(p1);
@@ -1780,7 +1803,8 @@ static void inet_unix_redir_to_connected(int family, int type, int sock_mapfd,
 	if (err)
 		goto close;
 
-	pairs_redir_to_connected(c0, p0, c1, p1, sock_mapfd, -1, verd_mapfd, mode);
+	pairs_redir_to_connected(c0, p0, c1, p1, sock_mapfd, -1, verd_mapfd,
+				 mode, NO_FLAGS);
 
 	xclose(c1);
 	xclose(p1);
@@ -1815,10 +1839,9 @@ static void inet_unix_skb_redir_to_connected(struct test_sockmap_listen *skel,
 	xbpf_prog_detach2(verdict, sock_map, BPF_SK_SKB_VERDICT);
 }
 
-static void unix_inet_redir_to_connected(int family, int type,
-					 int sock_mapfd, int nop_mapfd,
-					 int verd_mapfd,
-					 enum redir_mode mode)
+static void unix_inet_redir_to_connected(int family, int type, int sock_mapfd,
+					 int nop_mapfd, int verd_mapfd,
+					 enum redir_mode mode, int send_flags)
 {
 	int c0, c1, p0, p1;
 	int sfd[2];
@@ -1828,19 +1851,18 @@ static void unix_inet_redir_to_connected(int family, int type,
 	if (err)
 		return;
 
-	if (socketpair(AF_UNIX, SOCK_DGRAM | SOCK_NONBLOCK, 0, sfd))
+	if (socketpair(AF_UNIX, type | SOCK_NONBLOCK, 0, sfd))
 		goto close_cli0;
 	c1 = sfd[0], p1 = sfd[1];
 
-	pairs_redir_to_connected(c0, p0, c1, p1,
-				 sock_mapfd, nop_mapfd, verd_mapfd, mode);
+	pairs_redir_to_connected(c0, p0, c1, p1, sock_mapfd, nop_mapfd,
+				 verd_mapfd, mode, send_flags);
 
 	xclose(c1);
 	xclose(p1);
 close_cli0:
 	xclose(c0);
 	xclose(p0);
-
 }
 
 static void unix_inet_skb_redir_to_connected(struct test_sockmap_listen *skel,
@@ -1859,31 +1881,42 @@ static void unix_inet_skb_redir_to_connected(struct test_sockmap_listen *skel,
 	skel->bss->test_ingress = false;
 	unix_inet_redir_to_connected(family, SOCK_DGRAM,
 				     sock_map, -1, verdict_map,
-				     REDIR_EGRESS);
+				     REDIR_EGRESS, NO_FLAGS);
 	unix_inet_redir_to_connected(family, SOCK_DGRAM,
 				     sock_map, -1, verdict_map,
-				     REDIR_EGRESS);
+				     REDIR_EGRESS, NO_FLAGS);
+
 	unix_inet_redir_to_connected(family, SOCK_DGRAM,
 				     sock_map, nop_map, verdict_map,
-				     REDIR_EGRESS);
+				     REDIR_EGRESS, NO_FLAGS);
 	unix_inet_redir_to_connected(family, SOCK_STREAM,
 				     sock_map, nop_map, verdict_map,
-				     REDIR_EGRESS);
+				     REDIR_EGRESS, NO_FLAGS);
+
+	/* MSG_OOB not supported by AF_UNIX SOCK_DGRAM */
+	unix_inet_redir_to_connected(family, SOCK_STREAM,
+				     sock_map, nop_map, verdict_map,
+				     REDIR_EGRESS, MSG_OOB);
+
 	skel->bss->test_ingress = true;
 	unix_inet_redir_to_connected(family, SOCK_DGRAM,
 				     sock_map, -1, verdict_map,
-				     REDIR_INGRESS);
+				     REDIR_INGRESS, NO_FLAGS);
 	unix_inet_redir_to_connected(family, SOCK_STREAM,
 				     sock_map, -1, verdict_map,
-				     REDIR_INGRESS);
+				     REDIR_INGRESS, NO_FLAGS);
+
 	unix_inet_redir_to_connected(family, SOCK_DGRAM,
 				     sock_map, nop_map, verdict_map,
-				     REDIR_INGRESS);
+				     REDIR_INGRESS, NO_FLAGS);
 	unix_inet_redir_to_connected(family, SOCK_STREAM,
 				     sock_map, nop_map, verdict_map,
-				     REDIR_INGRESS);
+				     REDIR_INGRESS, NO_FLAGS);
+
+	/* MSG_OOB not supported by AF_UNIX SOCK_DGRAM */
+	unix_inet_redir_to_connected(family, SOCK_STREAM,
+				     sock_map, nop_map, verdict_map,
+				     REDIR_INGRESS, MSG_OOB);
 
 	xbpf_prog_detach2(verdict, sock_map, BPF_SK_SKB_VERDICT);
 }
@@ -68,7 +68,8 @@ static int open_xsk(int ifindex, struct xsk *xsk)
 		.fill_size = XSK_RING_PROD__DEFAULT_NUM_DESCS,
 		.comp_size = XSK_RING_CONS__DEFAULT_NUM_DESCS,
 		.frame_size = XSK_UMEM__DEFAULT_FRAME_SIZE,
-		.flags = XDP_UMEM_UNALIGNED_CHUNK_FLAG | XDP_UMEM_TX_SW_CSUM,
+		.flags = XDP_UMEM_UNALIGNED_CHUNK_FLAG | XDP_UMEM_TX_SW_CSUM |
+			 XDP_UMEM_TX_METADATA_LEN,
 		.tx_metadata_len = sizeof(struct xsk_tx_metadata),
 	};
 	__u32 idx;
@@ -14,9 +14,9 @@ typedef int *ptr_arr_t[6];
 
 typedef int *ptr_multiarr_t[7][8][9][10];
 
-typedef int * (*fn_ptr_arr_t[11])();
+typedef int * (*fn_ptr_arr_t[11])(void);
 
-typedef int * (*fn_ptr_multiarr_t[12][13])();
+typedef int * (*fn_ptr_multiarr_t[12][13])(void);
 
 struct root_struct {
 	arr_t _1;
@@ -100,7 +100,7 @@ typedef void (*printf_fn_t)(const char *, ...);
  * `int -> char *` function and returns pointer to a char. Equivalent:
  *   typedef char * (*fn_input_t)(int);
  *   typedef char * (*fn_output_outer_t)(fn_input_t);
- *   typedef const fn_output_outer_t (* fn_output_inner_t)();
+ *   typedef const fn_output_outer_t (* fn_output_inner_t)(void);
  *   typedef const fn_output_inner_t fn_ptr_arr2_t[5];
  */
 /* ----- START-EXPECTED-OUTPUT ----- */
@@ -127,7 +127,7 @@ typedef void (* (*signal_t)(int, void (*)(int)))(int);
 
 typedef char * (*fn_ptr_arr1_t[10])(int **);
 
-typedef char * (* (* const fn_ptr_arr2_t[5])())(char * (*)(int));
+typedef char * (* (* const fn_ptr_arr2_t[5])(void))(char * (*)(int));
 
 struct struct_w_typedefs {
 	int_t a;
@@ -178,6 +178,22 @@ fdb_del()
 	check_err $? "Failed to remove a FDB entry of type ${type}"
 }
 
+check_fdb_n_learned_support()
+{
+	if ! ip link help bridge 2>&1 | grep -q "fdb_max_learned"; then
+		echo "SKIP: iproute2 too old, missing bridge max learned support"
+		exit $ksft_skip
+	fi
+
+	ip link add dev br0 type bridge
+	local learned=$(fdb_get_n_learned)
+	ip link del dev br0
+	if [ "$learned" == "null" ]; then
+		echo "SKIP: kernel too old; bridge fdb_n_learned feature not supported."
+		exit $ksft_skip
+	fi
+}
+
 check_accounting_one_type()
 {
 	local type=$1 is_counted=$2 overrides_learned=$3
@@ -274,6 +290,8 @@ check_limit()
 	done
 }
 
+check_fdb_n_learned_support
+
 trap cleanup EXIT
 
 setup_prepare