From: Stanislav Fomichev <sdf@google.com>
To: bpf@vger.kernel.org
Date: Mon, 12 Dec 2022 18:36:04 -0800
Message-ID: <20221213023605.737383-15-sdf@google.com>
In-Reply-To: <20221213023605.737383-1-sdf@google.com>
References: <20221213023605.737383-1-sdf@google.com>
Cc: ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org,
    martin.lau@linux.dev, song@kernel.org, yhs@fb.com,
    john.fastabend@gmail.com, kpsingh@kernel.org, sdf@google.com,
    haoluo@google.com, jolsa@kernel.org, Toke Høiland-Jørgensen,
    Saeed Mahameed, David Ahern, Jakub Kicinski, Willem de Bruijn,
    Jesper Dangaard Brouer, Anatoly Burakov, Alexander Lobakin,
    Magnus Karlsson,
    Maryam Tahhan, xdp-hints@xdp-project.net, netdev@vger.kernel.org
Subject: [xdp-hints] [PATCH bpf-next v4 14/15] net/mlx5e: Support RX XDP metadata
List-Id: XDP hardware hints design discussion

From: Toke Høiland-Jørgensen

Support the RX hash and timestamp metadata kfuncs. To do this, we need
to pass the cqe pointer in to the mlx5e_skb_from* functions so that it
can be retrieved from the XDP ctx.

Cc: Saeed Mahameed
Cc: John Fastabend
Cc: David Ahern
Cc: Martin KaFai Lau
Cc: Jakub Kicinski
Cc: Willem de Bruijn
Cc: Jesper Dangaard Brouer
Cc: Anatoly Burakov
Cc: Alexander Lobakin
Cc: Magnus Karlsson
Cc: Maryam Tahhan
Cc: xdp-hints@xdp-project.net
Cc: netdev@vger.kernel.org
Signed-off-by: Toke Høiland-Jørgensen
Signed-off-by: Stanislav Fomichev
---

A sketch of a BPF-side consumer of these kfuncs is appended after the
diff.

 drivers/net/ethernet/mellanox/mlx5/core/en.h  | 10 +++-
 .../net/ethernet/mellanox/mlx5/core/en/xdp.c  | 23 +++++++++
 .../net/ethernet/mellanox/mlx5/core/en/xdp.h  |  5 ++
 .../ethernet/mellanox/mlx5/core/en/xsk/rx.c   | 10 ++++
 .../ethernet/mellanox/mlx5/core/en/xsk/rx.h   |  2 +
 .../net/ethernet/mellanox/mlx5/core/en_main.c |  6 +++
 .../net/ethernet/mellanox/mlx5/core/en_rx.c   | 51 ++++++++++---------
 7 files changed, 81 insertions(+), 26 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
index be86a599df97..b3f7a3e5926c 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
@@ -627,10 +627,11 @@ struct mlx5e_rq;
 typedef void (*mlx5e_fp_handle_rx_cqe)(struct mlx5e_rq*, struct mlx5_cqe64*);
 typedef struct sk_buff *
 (*mlx5e_fp_skb_from_cqe_mpwrq)(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi,
-			       u16 cqe_bcnt, u32 head_offset, u32 page_idx);
+			       struct mlx5_cqe64 *cqe, u16 cqe_bcnt,
+			       u32 head_offset, u32 page_idx);
 typedef struct sk_buff *
 (*mlx5e_fp_skb_from_cqe)(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi,
-			 u32 cqe_bcnt);
+			 struct mlx5_cqe64 *cqe, u32 cqe_bcnt);
 typedef bool (*mlx5e_fp_post_rx_wqes)(struct mlx5e_rq *rq);
 typedef void (*mlx5e_fp_dealloc_wqe)(struct mlx5e_rq*, u16);
 typedef void (*mlx5e_fp_shampo_dealloc_hd)(struct mlx5e_rq*, u16, u16, bool);
@@ -1036,6 +1037,11 @@ int mlx5e_vlan_rx_kill_vid(struct net_device *dev, __always_unused __be16 proto,
 			   u16 vid);
 void mlx5e_timestamp_init(struct mlx5e_priv *priv);
 
+static inline bool mlx5e_rx_hw_stamp(struct hwtstamp_config *config)
+{
+	return config->rx_filter == HWTSTAMP_FILTER_ALL;
+}
+
 struct mlx5e_xsk_param;
 
 struct mlx5e_rq_param;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
index 31bb6806bf5d..d10d31e12ba2 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
@@ -156,6 +156,29 @@ mlx5e_xmit_xdp_buff(struct mlx5e_xdpsq *sq, struct mlx5e_rq *rq,
 	return true;
 }
 
+int mlx5e_xdp_rx_timestamp(const struct xdp_md *ctx, u64 *timestamp)
+{
+	const struct mlx5e_xdp_buff *_ctx = (void *)ctx;
+
+	if (unlikely(!mlx5e_rx_hw_stamp(_ctx->rq->tstamp)))
+		return -EOPNOTSUPP;
+
+	*timestamp = mlx5e_cqe_ts_to_ns(_ctx->rq->ptp_cyc2time,
+					_ctx->rq->clock, get_cqe_ts(_ctx->cqe));
+	return 0;
+}
+
+int mlx5e_xdp_rx_hash(const struct xdp_md *ctx, u32 *hash)
+{
+	const struct mlx5e_xdp_buff *_ctx = (void *)ctx;
+
+	if (unlikely(!(_ctx->xdp.rxq->dev->features & NETIF_F_RXHASH)))
+		return -EOPNOTSUPP;
+
+	*hash = be32_to_cpu(_ctx->cqe->rss_hash_result);
+	return 0;
+}
+
 /* returns true if packet was consumed by xdp */
 bool mlx5e_xdp_handle(struct mlx5e_rq *rq, struct page *page,
 		      struct bpf_prog *prog, struct mlx5e_xdp_buff *mxbuf)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h
index 389818bf6833..cb568c62aba0 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h
@@ -46,6 +46,8 @@
 
 struct mlx5e_xdp_buff {
 	struct xdp_buff xdp;
+	struct mlx5_cqe64 *cqe;
+	struct mlx5e_rq *rq;
 };
 
 struct mlx5e_xsk_param;
@@ -60,6 +62,9 @@ void mlx5e_xdp_rx_poll_complete(struct mlx5e_rq *rq);
 int mlx5e_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames,
 		   u32 flags);
 
+int mlx5e_xdp_rx_timestamp(const struct xdp_md *ctx, u64 *timestamp);
+int mlx5e_xdp_rx_hash(const struct xdp_md *ctx, u32 *hash);
+
 INDIRECT_CALLABLE_DECLARE(bool mlx5e_xmit_xdp_frame_mpwqe(struct mlx5e_xdpsq *sq,
							   struct mlx5e_xmit_data *xdptxd,
							   struct skb_shared_info *sinfo,
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c
index 9cff82d764e3..8bf3029abd3c 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c
@@ -49,6 +49,7 @@ int mlx5e_xsk_alloc_rx_mpwqe(struct mlx5e_rq *rq, u16 ix)
 			umr_wqe->inline_mtts[i] = (struct mlx5_mtt) {
 				.ptag = cpu_to_be64(addr | MLX5_EN_WR),
 			};
+			wi->alloc_units[i].mxbuf->rq = rq;
 		}
 	} else if (unlikely(rq->mpwqe.umr_mode == MLX5E_MPWRQ_UMR_MODE_UNALIGNED)) {
 		for (i = 0; i < batch; i++) {
@@ -58,6 +59,7 @@ int mlx5e_xsk_alloc_rx_mpwqe(struct mlx5e_rq *rq, u16 ix)
 				.key = rq->mkey_be,
 				.va = cpu_to_be64(addr),
 			};
+			wi->alloc_units[i].mxbuf->rq = rq;
 		}
 	} else if (likely(rq->mpwqe.umr_mode == MLX5E_MPWRQ_UMR_MODE_TRIPLE)) {
 		u32 mapping_size = 1 << (rq->mpwqe.page_shift - 2);
@@ -81,6 +83,7 @@ int mlx5e_xsk_alloc_rx_mpwqe(struct mlx5e_rq *rq, u16 ix)
 				.key = rq->mkey_be,
 				.va = cpu_to_be64(rq->wqe_overflow.addr),
 			};
+			wi->alloc_units[i].mxbuf->rq = rq;
 		}
 	} else {
 		__be32 pad_size = cpu_to_be32((1 << rq->mpwqe.page_shift) -
@@ -100,6 +103,7 @@ int mlx5e_xsk_alloc_rx_mpwqe(struct mlx5e_rq *rq, u16 ix)
 				.va = cpu_to_be64(rq->wqe_overflow.addr),
 				.bcount = pad_size,
 			};
+			wi->alloc_units[i].mxbuf->rq = rq;
 		}
 	}
 
@@ -230,6 +234,7 @@ static struct sk_buff *mlx5e_xsk_construct_skb(struct mlx5e_rq *rq, struct xdp_b
 
 struct sk_buff *mlx5e_xsk_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq,
						    struct mlx5e_mpw_info *wi,
+						    struct mlx5_cqe64 *cqe,
						    u16 cqe_bcnt,
						    u32 head_offset,
						    u32 page_idx)
@@ -250,6 +255,8 @@ struct sk_buff *mlx5e_xsk_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq,
 	 */
 	WARN_ON_ONCE(head_offset);
 
+	/* mxbuf->rq is set on allocation, but cqe is per-packet so set it here */
+	mxbuf->cqe = cqe;
 	xsk_buff_set_size(&mxbuf->xdp, cqe_bcnt);
 	xsk_buff_dma_sync_for_cpu(&mxbuf->xdp, rq->xsk_pool);
 	net_prefetch(mxbuf->xdp.data);
@@ -284,6 +291,7 @@ struct sk_buff *mlx5e_xsk_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq,
 
 struct sk_buff *mlx5e_xsk_skb_from_cqe_linear(struct mlx5e_rq *rq,
					      struct mlx5e_wqe_frag_info *wi,
+					      struct mlx5_cqe64 *cqe,
					      u32 cqe_bcnt)
 {
 	struct mlx5e_xdp_buff *mxbuf = wi->au->mxbuf;
@@ -296,6 +304,8 @@ struct sk_buff *mlx5e_xsk_skb_from_cqe_linear(struct mlx5e_rq *rq,
 	 */
 	WARN_ON_ONCE(wi->offset);
 
+	/* mxbuf->rq is set on allocation, but cqe is per-packet so set it here */
+	mxbuf->cqe = cqe;
 	xsk_buff_set_size(&mxbuf->xdp, cqe_bcnt);
 	xsk_buff_dma_sync_for_cpu(&mxbuf->xdp, rq->xsk_pool);
 	net_prefetch(mxbuf->xdp.data);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.h b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.h
index 087c943bd8e9..cefc0ef6105d 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.h
@@ -13,11 +13,13 @@ int mlx5e_xsk_alloc_rx_wqes_batched(struct mlx5e_rq *rq, u16 ix, int wqe_bulk);
 int mlx5e_xsk_alloc_rx_wqes(struct mlx5e_rq *rq, u16 ix, int wqe_bulk);
 struct sk_buff *mlx5e_xsk_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq,
						    struct mlx5e_mpw_info *wi,
+						    struct mlx5_cqe64 *cqe,
						    u16 cqe_bcnt,
						    u32 head_offset,
						    u32 page_idx);
 struct sk_buff *mlx5e_xsk_skb_from_cqe_linear(struct mlx5e_rq *rq,
					      struct mlx5e_wqe_frag_info *wi,
+					      struct mlx5_cqe64 *cqe,
					      u32 cqe_bcnt);
 
 #endif /* __MLX5_EN_XSK_RX_H__ */
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
index 217c8a478977..6395ab141273 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
@@ -4913,6 +4913,11 @@ const struct net_device_ops mlx5e_netdev_ops = {
 #endif
 };
 
+static const struct xdp_metadata_ops mlx5_xdp_metadata_ops = {
+	.xmo_rx_timestamp = mlx5e_xdp_rx_timestamp,
+	.xmo_rx_hash = mlx5e_xdp_rx_hash,
+};
+
 static u32 mlx5e_choose_lro_timeout(struct mlx5_core_dev *mdev, u32 wanted_timeout)
 {
 	int i;
@@ -5053,6 +5058,7 @@ static void mlx5e_build_nic_netdev(struct net_device *netdev)
 	SET_NETDEV_DEV(netdev, mdev->device);
 
 	netdev->netdev_ops = &mlx5e_netdev_ops;
+	netdev->xdp_metadata_ops = &mlx5_xdp_metadata_ops;
 
 	mlx5e_dcbnl_build_netdev(netdev);
 
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
index 9785b58ff3bc..cbdb9e6199a5 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
@@ -62,10 +62,12 @@
 
 static struct sk_buff *
 mlx5e_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi,
-				u16 cqe_bcnt, u32 head_offset, u32 page_idx);
+				struct mlx5_cqe64 *cqe, u16 cqe_bcnt, u32 head_offset,
+				u32 page_idx);
 static struct sk_buff *
 mlx5e_skb_from_cqe_mpwrq_nonlinear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi,
-				   u16 cqe_bcnt, u32 head_offset, u32 page_idx);
+				   struct mlx5_cqe64 *cqe, u16 cqe_bcnt, u32 head_offset,
+				   u32 page_idx);
 static void mlx5e_handle_rx_cqe(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe);
 static void mlx5e_handle_rx_cqe_mpwrq(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe);
 static void mlx5e_handle_rx_cqe_mpwrq_shampo(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe);
@@ -76,11 +78,6 @@ const struct mlx5e_rx_handlers mlx5e_rx_handlers_nic = {
 	.handle_rx_cqe_mpwqe_shampo = mlx5e_handle_rx_cqe_mpwrq_shampo,
 };
 
-static inline bool mlx5e_rx_hw_stamp(struct hwtstamp_config *config)
-{
-	return config->rx_filter == HWTSTAMP_FILTER_ALL;
-}
-
 static inline void mlx5e_read_cqe_slot(struct mlx5_cqwq *wq, u32 cqcc,
				       void *data)
 {
@@ -1564,16 +1561,19 @@ struct sk_buff *mlx5e_build_linear_skb(struct mlx5e_rq *rq, void *va,
 	return skb;
 }
 
-static void mlx5e_fill_xdp_buff(struct mlx5e_rq *rq, void *va, u16 headroom,
-				u32 len, struct mlx5e_xdp_buff *mxbuf)
+static void mlx5e_fill_xdp_buff(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe,
+				void *va, u16 headroom, u32 len,
+				struct mlx5e_xdp_buff *mxbuf)
 {
 	xdp_init_buff(&mxbuf->xdp, rq->buff.frame0_sz, &rq->xdp_rxq);
 	xdp_prepare_buff(&mxbuf->xdp, va, headroom, len, true);
+	mxbuf->cqe = cqe;
+	mxbuf->rq = rq;
 }
 
 static struct sk_buff *
 mlx5e_skb_from_cqe_linear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi,
-			  u32 cqe_bcnt)
+			  struct mlx5_cqe64 *cqe, u32 cqe_bcnt)
 {
 	union mlx5e_alloc_unit *au = wi->au;
 	u16 rx_headroom = rq->buff.headroom;
@@ -1598,7 +1598,7 @@ mlx5e_skb_from_cqe_linear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi,
 		struct mlx5e_xdp_buff mxbuf;
 
 		net_prefetchw(va); /* xdp_frame data area */
-		mlx5e_fill_xdp_buff(rq, va, rx_headroom, cqe_bcnt, &mxbuf);
+		mlx5e_fill_xdp_buff(rq, cqe, va, rx_headroom, cqe_bcnt, &mxbuf);
 		if (mlx5e_xdp_handle(rq, au->page, prog, &mxbuf))
 			return NULL; /* page/packet was consumed by XDP */
 
@@ -1619,7 +1619,7 @@ mlx5e_skb_from_cqe_linear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi,
 
 static struct sk_buff *
 mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi,
-			     u32 cqe_bcnt)
+			     struct mlx5_cqe64 *cqe, u32 cqe_bcnt)
 {
 	struct mlx5e_rq_frag_info *frag_info = &rq->wqe.info.arr[0];
 	struct mlx5e_wqe_frag_info *head_wi = wi;
@@ -1643,7 +1643,7 @@ mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi
 		net_prefetchw(va); /* xdp_frame data area */
 		net_prefetch(va + rx_headroom);
 
-		mlx5e_fill_xdp_buff(rq, va, rx_headroom, frag_consumed_bytes, &mxbuf);
+		mlx5e_fill_xdp_buff(rq, cqe, va, rx_headroom, frag_consumed_bytes, &mxbuf);
 		sinfo = xdp_get_shared_info_from_buff(&mxbuf.xdp);
 		truesize = 0;
 
@@ -1766,7 +1766,7 @@ static void mlx5e_handle_rx_cqe(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe)
			      mlx5e_skb_from_cqe_linear,
			      mlx5e_skb_from_cqe_nonlinear,
			      mlx5e_xsk_skb_from_cqe_linear,
-			      rq, wi, cqe_bcnt);
+			      rq, wi, cqe, cqe_bcnt);
 	if (!skb) {
 		/* probably for XDP */
 		if (__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags)) {
@@ -1819,7 +1819,7 @@ static void mlx5e_handle_rx_cqe_rep(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe)
 	skb = INDIRECT_CALL_2(rq->wqe.skb_from_cqe,
			      mlx5e_skb_from_cqe_linear,
			      mlx5e_skb_from_cqe_nonlinear,
-			      rq, wi, cqe_bcnt);
+			      rq, wi, cqe, cqe_bcnt);
 	if (!skb) {
 		/* probably for XDP */
 		if (__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags)) {
@@ -1878,7 +1878,7 @@ static void mlx5e_handle_rx_cqe_mpwrq_rep(struct mlx5e_rq *rq, struct mlx5_cqe64
 	skb = INDIRECT_CALL_2(rq->mpwqe.skb_from_cqe_mpwrq,
			      mlx5e_skb_from_cqe_mpwrq_linear,
			      mlx5e_skb_from_cqe_mpwrq_nonlinear,
-			      rq, wi, cqe_bcnt, head_offset, page_idx);
+			      rq, wi, cqe, cqe_bcnt, head_offset, page_idx);
 	if (!skb)
 		goto mpwrq_cqe_out;
 
@@ -1929,7 +1929,8 @@ mlx5e_fill_skb_data(struct sk_buff *skb, struct mlx5e_rq *rq,
 
 static struct sk_buff *
 mlx5e_skb_from_cqe_mpwrq_nonlinear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi,
-				   u16 cqe_bcnt, u32 head_offset, u32 page_idx)
+				   struct mlx5_cqe64 *cqe, u16 cqe_bcnt, u32 head_offset,
+				   u32 page_idx)
 {
 	union mlx5e_alloc_unit *au = &wi->alloc_units[page_idx];
 	u16 headlen = min_t(u16, MLX5E_RX_MAX_HEAD, cqe_bcnt);
@@ -1968,7 +1969,8 @@ mlx5e_skb_from_cqe_mpwrq_nonlinear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *w
 
 static struct sk_buff *
 mlx5e_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi,
-				u16 cqe_bcnt, u32 head_offset, u32 page_idx)
+				struct mlx5_cqe64 *cqe, u16 cqe_bcnt, u32 head_offset,
+				u32 page_idx)
 {
 	union mlx5e_alloc_unit *au = &wi->alloc_units[page_idx];
 	u16 rx_headroom = rq->buff.headroom;
@@ -1999,7 +2001,7 @@ mlx5e_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi,
 		struct mlx5e_xdp_buff mxbuf;
 
 		net_prefetchw(va); /* xdp_frame data area */
-		mlx5e_fill_xdp_buff(rq, va, rx_headroom, cqe_bcnt, &mxbuf);
+		mlx5e_fill_xdp_buff(rq, cqe, va, rx_headroom, cqe_bcnt, &mxbuf);
 		if (mlx5e_xdp_handle(rq, au->page, prog, &mxbuf)) {
 			if (__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags))
				__set_bit(page_idx, wi->xdp_xmit_bitmap); /* non-atomic */
@@ -2163,8 +2165,8 @@ static void mlx5e_handle_rx_cqe_mpwrq_shampo(struct mlx5e_rq *rq, struct mlx5_cq
 	if (likely(head_size))
 		*skb = mlx5e_skb_from_cqe_shampo(rq, wi, cqe, header_index);
 	else
-		*skb = mlx5e_skb_from_cqe_mpwrq_nonlinear(rq, wi, cqe_bcnt, data_offset,
-							  page_idx);
+		*skb = mlx5e_skb_from_cqe_mpwrq_nonlinear(rq, wi, cqe, cqe_bcnt,
+							  data_offset, page_idx);
 	if (unlikely(!*skb))
 		goto free_hd_entry;
 
@@ -2238,7 +2240,8 @@ static void mlx5e_handle_rx_cqe_mpwrq(struct mlx5e_rq *rq, struct mlx5_cqe64 *cq
			      mlx5e_skb_from_cqe_mpwrq_linear,
			      mlx5e_skb_from_cqe_mpwrq_nonlinear,
			      mlx5e_xsk_skb_from_cqe_mpwrq_linear,
-			      rq, wi, cqe_bcnt, head_offset, page_idx);
+			      rq, wi, cqe, cqe_bcnt, head_offset,
+			      page_idx);
 	if (!skb)
 		goto mpwrq_cqe_out;
 
@@ -2483,7 +2486,7 @@ static void mlx5i_handle_rx_cqe(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe)
 	skb = INDIRECT_CALL_2(rq->wqe.skb_from_cqe,
			      mlx5e_skb_from_cqe_linear,
			      mlx5e_skb_from_cqe_nonlinear,
-			      rq, wi, cqe_bcnt);
+			      rq, wi, cqe, cqe_bcnt);
 	if (!skb)
		goto wq_free_wqe;
 
@@ -2575,7 +2578,7 @@ static void mlx5e_trap_handle_rx_cqe(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe
 		goto free_wqe;
 	}
 
-	skb = mlx5e_skb_from_cqe_nonlinear(rq, wi, cqe_bcnt);
+	skb = mlx5e_skb_from_cqe_nonlinear(rq, wi, cqe, cqe_bcnt);
 	if (!skb)
 		goto free_wqe;
 
-- 
2.39.0.rc1.256.g54fd8350bd-goog
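
For reference, the sketch promised above: a minimal XDP program that
consumes the two hooks this patch wires up, via the
bpf_xdp_metadata_rx_timestamp() / bpf_xdp_metadata_rx_hash() kfuncs
introduced earlier in this series. The includes, section name and
program name are illustrative assumptions (the series' selftests use
vmlinux.h instead), not part of this patch, and it is untested:

/* SPDX-License-Identifier: GPL-2.0 */
/* Hypothetical consumer of the kfuncs mlx5e implements above. Assumes
 * the kfunc names/signatures from earlier patches in this series and
 * standard libbpf conventions.
 */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* Kernel-side kfuncs, resolved against BTF at load time. */
extern int bpf_xdp_metadata_rx_timestamp(const struct xdp_md *ctx,
					 __u64 *timestamp) __ksym;
extern int bpf_xdp_metadata_rx_hash(const struct xdp_md *ctx,
				    __u32 *hash) __ksym;

SEC("xdp")
int rx_meta(struct xdp_md *ctx)
{
	__u64 ts = 0;
	__u32 hash = 0;

	/* On mlx5e these calls land in mlx5e_xdp_rx_timestamp() and
	 * mlx5e_xdp_rx_hash() via xdp_metadata_ops; they return
	 * -EOPNOTSUPP when HW timestamping (HWTSTAMP_FILTER_ALL) or
	 * NETIF_F_RXHASH is not enabled on the device.
	 */
	if (!bpf_xdp_metadata_rx_timestamp(ctx, &ts))
		bpf_printk("rx hw timestamp: %llu ns", ts);
	if (!bpf_xdp_metadata_rx_hash(ctx, &hash))
		bpf_printk("rx hash: 0x%x", hash);

	return XDP_PASS;
}

char _license[] SEC("license") = "GPL";

With the device's rx_filter set to HWTSTAMP_FILTER_ALL and RX hashing
left enabled, both printks should fire; otherwise the kfuncs return
-EOPNOTSUPP and the packet is simply passed. Note that the program
presumably has to be loaded bound to the target device so the kfuncs
can be resolved against it; see the cover letter and selftests in this
series for the exact loading flow.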