From: Stanislav Fomichev <sdf@google.com>
Date: Thu, 19 Jan 2023 14:15:34 -0800
Message-ID: <20230119221536.3349901-16-sdf@google.com>
In-Reply-To: <20230119221536.3349901-1-sdf@google.com>
References: <20230119221536.3349901-1-sdf@google.com>
To: bpf@vger.kernel.org
Cc: ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org,
    martin.lau@linux.dev, song@kernel.org, yhs@fb.com,
    john.fastabend@gmail.com, kpsingh@kernel.org, sdf@google.com,
    haoluo@google.com, jolsa@kernel.org, Toke Høiland-Jørgensen,
    Tariq Toukan, Saeed Mahameed, David Ahern, Jakub Kicinski,
    Willem de Bruijn, Jesper Dangaard Brouer, Anatoly Burakov,
    Alexander Lobakin,
    Magnus Karlsson, Maryam Tahhan, xdp-hints@xdp-project.net,
    netdev@vger.kernel.org
Subject: [xdp-hints] [PATCH bpf-next v8 15/17] net/mlx5e: Introduce wrapper for xdp_buff

From: Toke Høiland-Jørgensen

Preparation for implementing HW metadata kfuncs: wrap the struct xdp_buff
instances used on the mlx5e RX paths in a new struct mlx5e_xdp_buff, so
that driver-private per-packet state can later be carried alongside the
xdp_buff. No functional change.

Cc: Tariq Toukan
Cc: Saeed Mahameed
Cc: John Fastabend
Cc: David Ahern
Cc: Martin KaFai Lau
Cc: Jakub Kicinski
Cc: Willem de Bruijn
Cc: Jesper Dangaard Brouer
Cc: Anatoly Burakov
Cc: Alexander Lobakin
Cc: Magnus Karlsson
Cc: Maryam Tahhan
Cc: xdp-hints@xdp-project.net
Cc: netdev@vger.kernel.org
Signed-off-by: Toke Høiland-Jørgensen
Signed-off-by: Stanislav Fomichev <sdf@google.com>
---
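A note on the trick this patch relies on (illustration only, not part of
the change): the cast in xsk_buff_to_mxbuf() below is valid because struct
mlx5e_xdp_buff embeds struct xdp_buff as its first member, so a pointer to
the wrapper and a pointer to the embedded xdp_buff share the same address.
A minimal userspace sketch of the same pattern; the struct definitions are
simplified stand-ins, not the kernel types:

	#include <assert.h>

	struct xdp_buff { void *data; };	/* stand-in */

	struct mlx5e_xdp_buff {
		struct xdp_buff xdp;		/* must stay the first member */
		void *priv;			/* hypothetical private field */
	};

	/* same shape as the driver's xsk_buff_to_mxbuf() */
	static struct mlx5e_xdp_buff *to_mxbuf(struct xdp_buff *xdp)
	{
		return (struct mlx5e_xdp_buff *)xdp;
	}

	int main(void)
	{
		struct mlx5e_xdp_buff mxbuf;

		/* the embedded xdp_buff converts back to its wrapper */
		assert(to_mxbuf(&mxbuf.xdp) == &mxbuf);
		return 0;
	}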
 .../net/ethernet/mellanox/mlx5/core/en/xdp.c  |  3 +-
 .../net/ethernet/mellanox/mlx5/core/en/xdp.h  |  6 +-
 .../ethernet/mellanox/mlx5/core/en/xsk/rx.c   | 33 +++++++----
 .../net/ethernet/mellanox/mlx5/core/en_rx.c   | 58 +++++++++----------
 4 files changed, 57 insertions(+), 43 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
index 20507ef2f956..31bb6806bf5d 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
@@ -158,8 +158,9 @@ mlx5e_xmit_xdp_buff(struct mlx5e_xdpsq *sq, struct mlx5e_rq *rq,
 
 /* returns true if packet was consumed by xdp */
 bool mlx5e_xdp_handle(struct mlx5e_rq *rq, struct page *page,
-		      struct bpf_prog *prog, struct xdp_buff *xdp)
+		      struct bpf_prog *prog, struct mlx5e_xdp_buff *mxbuf)
 {
+	struct xdp_buff *xdp = &mxbuf->xdp;
 	u32 act;
 	int err;
 
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h
index bc2d9034af5b..389818bf6833 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h
@@ -44,10 +44,14 @@
 	(MLX5E_XDP_INLINE_WQE_MAX_DS_CNT * MLX5_SEND_WQE_DS - \
 	 sizeof(struct mlx5_wqe_inline_seg))
 
+struct mlx5e_xdp_buff {
+	struct xdp_buff xdp;
+};
+
 struct mlx5e_xsk_param;
 int mlx5e_xdp_max_mtu(struct mlx5e_params *params, struct mlx5e_xsk_param *xsk);
 bool mlx5e_xdp_handle(struct mlx5e_rq *rq, struct page *page,
-		      struct bpf_prog *prog, struct xdp_buff *xdp);
+		      struct bpf_prog *prog, struct mlx5e_xdp_buff *mlctx);
 void mlx5e_xdp_mpwqe_complete(struct mlx5e_xdpsq *sq);
 bool mlx5e_poll_xdpsq_cq(struct mlx5e_cq *cq);
 void mlx5e_free_xdpsq_descs(struct mlx5e_xdpsq *sq);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c
index c91b54d9ff27..08d4e5c30b40 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c
@@ -8,6 +8,14 @@
 
 /* RX data path */
 
+static struct mlx5e_xdp_buff *xsk_buff_to_mxbuf(struct xdp_buff *xdp)
+{
+	/* mlx5e_xdp_buff shares its layout with xdp_buff_xsk
+	 * and private mlx5e_xdp_buff fields fall into xdp_buff_xsk->cb
+	 */
+	return (struct mlx5e_xdp_buff *)xdp;
+}
+
 int mlx5e_xsk_alloc_rx_mpwqe(struct mlx5e_rq *rq, u16 ix)
 {
 	struct mlx5e_mpw_info *wi = mlx5e_get_mpw_info(rq, ix);
@@ -22,6 +30,7 @@ int mlx5e_xsk_alloc_rx_mpwqe(struct mlx5e_rq *rq, u16 ix)
 		goto err;
 
 	BUILD_BUG_ON(sizeof(wi->alloc_units[0]) != sizeof(wi->alloc_units[0].xsk));
+	XSK_CHECK_PRIV_TYPE(struct mlx5e_xdp_buff);
 	batch = xsk_buff_alloc_batch(rq->xsk_pool, (struct xdp_buff **)wi->alloc_units,
 				     rq->mpwqe.pages_per_wqe);
 
@@ -233,7 +242,7 @@ struct sk_buff *mlx5e_xsk_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq,
 						    u32 head_offset,
 						    u32 page_idx)
 {
-	struct xdp_buff *xdp = wi->alloc_units[page_idx].xsk;
+	struct mlx5e_xdp_buff *mxbuf = xsk_buff_to_mxbuf(wi->alloc_units[page_idx].xsk);
 	struct bpf_prog *prog;
 
 	/* Check packet size. Note LRO doesn't use linear SKB */
@@ -249,9 +258,9 @@ struct sk_buff *mlx5e_xsk_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq,
 	 */
 	WARN_ON_ONCE(head_offset);
 
-	xsk_buff_set_size(xdp, cqe_bcnt);
-	xsk_buff_dma_sync_for_cpu(xdp, rq->xsk_pool);
-	net_prefetch(xdp->data);
+	xsk_buff_set_size(&mxbuf->xdp, cqe_bcnt);
+	xsk_buff_dma_sync_for_cpu(&mxbuf->xdp, rq->xsk_pool);
+	net_prefetch(mxbuf->xdp.data);
 
 	/* Possible flows:
 	 * - XDP_REDIRECT to XSKMAP:
@@ -269,7 +278,7 @@ struct sk_buff *mlx5e_xsk_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq,
 	 */
 
 	prog = rcu_dereference(rq->xdp_prog);
-	if (likely(prog && mlx5e_xdp_handle(rq, NULL, prog, xdp))) {
+	if (likely(prog && mlx5e_xdp_handle(rq, NULL, prog, mxbuf))) {
 		if (likely(__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags)))
 			__set_bit(page_idx, wi->xdp_xmit_bitmap); /* non-atomic */
 		return NULL; /* page/packet was consumed by XDP */
@@ -278,14 +287,14 @@ struct sk_buff *mlx5e_xsk_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq,
 	/* XDP_PASS: copy the data from the UMEM to a new SKB and reuse the
 	 * frame. On SKB allocation failure, NULL is returned.
 	 */
-	return mlx5e_xsk_construct_skb(rq, xdp);
+	return mlx5e_xsk_construct_skb(rq, &mxbuf->xdp);
 }
 
 struct sk_buff *mlx5e_xsk_skb_from_cqe_linear(struct mlx5e_rq *rq,
 					      struct mlx5e_wqe_frag_info *wi,
 					      u32 cqe_bcnt)
 {
-	struct xdp_buff *xdp = wi->au->xsk;
+	struct mlx5e_xdp_buff *mxbuf = xsk_buff_to_mxbuf(wi->au->xsk);
 	struct bpf_prog *prog;
 
 	/* wi->offset is not used in this function, because xdp->data and the
@@ -295,17 +304,17 @@ struct sk_buff *mlx5e_xsk_skb_from_cqe_linear(struct mlx5e_rq *rq,
 	 */
 	WARN_ON_ONCE(wi->offset);
 
-	xsk_buff_set_size(xdp, cqe_bcnt);
-	xsk_buff_dma_sync_for_cpu(xdp, rq->xsk_pool);
-	net_prefetch(xdp->data);
+	xsk_buff_set_size(&mxbuf->xdp, cqe_bcnt);
+	xsk_buff_dma_sync_for_cpu(&mxbuf->xdp, rq->xsk_pool);
+	net_prefetch(mxbuf->xdp.data);
 
 	prog = rcu_dereference(rq->xdp_prog);
-	if (likely(prog && mlx5e_xdp_handle(rq, NULL, prog, xdp)))
+	if (likely(prog && mlx5e_xdp_handle(rq, NULL, prog, mxbuf)))
 		return NULL; /* page/packet was consumed by XDP */
 
 	/* XDP_PASS: copy the data from the UMEM to a new SKB. The frame reuse
 	 * will be handled by mlx5e_free_rx_wqe.
 	 * On SKB allocation failure, NULL is returned.
 	 */
-	return mlx5e_xsk_construct_skb(rq, xdp);
+	return mlx5e_xsk_construct_skb(rq, &mxbuf->xdp);
 }
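On the XSK path the driver never allocates the wrapper itself: the xdp_buff
handed out by the pool is the head of a struct xdp_buff_xsk, and any private
mlx5e_xdp_buff fields have to fit into that struct's cb scratch area. That
is what the XSK_CHECK_PRIV_TYPE() line added above enforces at build time.
A compile-time sketch of the idea (the xdp_buff_xsk layout and sizes here
are made up for illustration, not the real definitions):

	#include <stddef.h>

	struct xdp_buff { void *data; };	/* stand-in */

	struct xdp_buff_xsk {			/* stand-in layout */
		struct xdp_buff xdp;
		char cb[24];			/* hypothetical scratch size */
	};

	struct mlx5e_xdp_buff {
		struct xdp_buff xdp;
	};

	/* mirrors XSK_CHECK_PRIV_TYPE(): the wrapper must not spill past cb[] */
	_Static_assert(sizeof(struct mlx5e_xdp_buff) <=
		       offsetof(struct xdp_buff_xsk, cb) +
		       sizeof(((struct xdp_buff_xsk *)0)->cb),
		       "mlx5e_xdp_buff must fit in xdp_buff_xsk");

	int main(void) { return 0; }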
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
index c8820ab22169..c6810ca75530 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
@@ -1575,11 +1575,11 @@ struct sk_buff *mlx5e_build_linear_skb(struct mlx5e_rq *rq, void *va,
 	return skb;
 }
 
-static void mlx5e_fill_xdp_buff(struct mlx5e_rq *rq, void *va, u16 headroom,
-				u32 len, struct xdp_buff *xdp)
+static void mlx5e_fill_mxbuf(struct mlx5e_rq *rq, void *va, u16 headroom,
+			     u32 len, struct mlx5e_xdp_buff *mxbuf)
 {
-	xdp_init_buff(xdp, rq->buff.frame0_sz, &rq->xdp_rxq);
-	xdp_prepare_buff(xdp, va, headroom, len, true);
+	xdp_init_buff(&mxbuf->xdp, rq->buff.frame0_sz, &rq->xdp_rxq);
+	xdp_prepare_buff(&mxbuf->xdp, va, headroom, len, true);
 }
 
 static struct sk_buff *
@@ -1606,16 +1606,16 @@ mlx5e_skb_from_cqe_linear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi,
 
 	prog = rcu_dereference(rq->xdp_prog);
 	if (prog) {
-		struct xdp_buff xdp;
+		struct mlx5e_xdp_buff mxbuf;
 
 		net_prefetchw(va); /* xdp_frame data area */
-		mlx5e_fill_xdp_buff(rq, va, rx_headroom, cqe_bcnt, &xdp);
-		if (mlx5e_xdp_handle(rq, au->page, prog, &xdp))
+		mlx5e_fill_mxbuf(rq, va, rx_headroom, cqe_bcnt, &mxbuf);
+		if (mlx5e_xdp_handle(rq, au->page, prog, &mxbuf))
 			return NULL; /* page/packet was consumed by XDP */
 
-		rx_headroom = xdp.data - xdp.data_hard_start;
-		metasize = xdp.data - xdp.data_meta;
-		cqe_bcnt = xdp.data_end - xdp.data;
+		rx_headroom = mxbuf.xdp.data - mxbuf.xdp.data_hard_start;
+		metasize = mxbuf.xdp.data - mxbuf.xdp.data_meta;
+		cqe_bcnt = mxbuf.xdp.data_end - mxbuf.xdp.data;
 	}
 	frag_size = MLX5_SKB_FRAG_SZ(rx_headroom + cqe_bcnt);
 	skb = mlx5e_build_linear_skb(rq, va, frag_size, rx_headroom, cqe_bcnt, metasize);
@@ -1637,9 +1637,9 @@ mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi
 	union mlx5e_alloc_unit *au = wi->au;
 	u16 rx_headroom = rq->buff.headroom;
 	struct skb_shared_info *sinfo;
+	struct mlx5e_xdp_buff mxbuf;
 	u32 frag_consumed_bytes;
 	struct bpf_prog *prog;
-	struct xdp_buff xdp;
 	struct sk_buff *skb;
 	dma_addr_t addr;
 	u32 truesize;
@@ -1654,8 +1654,8 @@ mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi
 	net_prefetchw(va); /* xdp_frame data area */
 	net_prefetch(va + rx_headroom);
 
-	mlx5e_fill_xdp_buff(rq, va, rx_headroom, frag_consumed_bytes, &xdp);
-	sinfo = xdp_get_shared_info_from_buff(&xdp);
+	mlx5e_fill_mxbuf(rq, va, rx_headroom, frag_consumed_bytes, &mxbuf);
+	sinfo = xdp_get_shared_info_from_buff(&mxbuf.xdp);
 	truesize = 0;
 
 	cqe_bcnt -= frag_consumed_bytes;
@@ -1673,13 +1673,13 @@ mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi
 		dma_sync_single_for_cpu(rq->pdev, addr + wi->offset,
 					frag_consumed_bytes, rq->buff.map_dir);
 
-		if (!xdp_buff_has_frags(&xdp)) {
+		if (!xdp_buff_has_frags(&mxbuf.xdp)) {
 			/* Init on the first fragment to avoid cold cache access
 			 * when possible.
			 */
 			sinfo->nr_frags = 0;
 			sinfo->xdp_frags_size = 0;
-			xdp_buff_set_frags_flag(&xdp);
+			xdp_buff_set_frags_flag(&mxbuf.xdp);
 		}
 
 		frag = &sinfo->frags[sinfo->nr_frags++];
@@ -1688,7 +1688,7 @@ mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi
 		skb_frag_size_set(frag, frag_consumed_bytes);
 
 		if (page_is_pfmemalloc(au->page))
-			xdp_buff_set_frag_pfmemalloc(&xdp);
+			xdp_buff_set_frag_pfmemalloc(&mxbuf.xdp);
 
 		sinfo->xdp_frags_size += frag_consumed_bytes;
 		truesize += frag_info->frag_stride;
@@ -1701,7 +1701,7 @@ mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi
 	au = head_wi->au;
 
 	prog = rcu_dereference(rq->xdp_prog);
-	if (prog && mlx5e_xdp_handle(rq, au->page, prog, &xdp)) {
+	if (prog && mlx5e_xdp_handle(rq, au->page, prog, &mxbuf)) {
 		if (test_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags)) {
 			int i;
 
@@ -1711,22 +1711,22 @@ mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi
 		return NULL; /* page/packet was consumed by XDP */
 	}
 
-	skb = mlx5e_build_linear_skb(rq, xdp.data_hard_start, rq->buff.frame0_sz,
-				     xdp.data - xdp.data_hard_start,
-				     xdp.data_end - xdp.data,
-				     xdp.data - xdp.data_meta);
+	skb = mlx5e_build_linear_skb(rq, mxbuf.xdp.data_hard_start, rq->buff.frame0_sz,
+				     mxbuf.xdp.data - mxbuf.xdp.data_hard_start,
+				     mxbuf.xdp.data_end - mxbuf.xdp.data,
+				     mxbuf.xdp.data - mxbuf.xdp.data_meta);
 	if (unlikely(!skb))
 		return NULL;
 
 	page_ref_inc(au->page);
 
-	if (unlikely(xdp_buff_has_frags(&xdp))) {
+	if (unlikely(xdp_buff_has_frags(&mxbuf.xdp))) {
 		int i;
 
 		/* sinfo->nr_frags is reset by build_skb, calculate again. */
 		xdp_update_skb_shared_info(skb, wi - head_wi - 1, sinfo->xdp_frags_size,
 					   truesize,
-					   xdp_buff_is_frag_pfmemalloc(&xdp));
+					   xdp_buff_is_frag_pfmemalloc(&mxbuf.xdp));
 
 		for (i = 0; i < sinfo->nr_frags; i++) {
 			skb_frag_t *frag = &sinfo->frags[i];
@@ -2007,19 +2007,19 @@ mlx5e_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi,
 
 	prog = rcu_dereference(rq->xdp_prog);
 	if (prog) {
-		struct xdp_buff xdp;
+		struct mlx5e_xdp_buff mxbuf;
 
 		net_prefetchw(va); /* xdp_frame data area */
-		mlx5e_fill_xdp_buff(rq, va, rx_headroom, cqe_bcnt, &xdp);
-		if (mlx5e_xdp_handle(rq, au->page, prog, &xdp)) {
+		mlx5e_fill_mxbuf(rq, va, rx_headroom, cqe_bcnt, &mxbuf);
+		if (mlx5e_xdp_handle(rq, au->page, prog, &mxbuf)) {
 			if (__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags))
 				__set_bit(page_idx, wi->xdp_xmit_bitmap); /* non-atomic */
 			return NULL; /* page/packet was consumed by XDP */
 		}
 
-		rx_headroom = xdp.data - xdp.data_hard_start;
-		metasize = xdp.data - xdp.data_meta;
-		cqe_bcnt = xdp.data_end - xdp.data;
+		rx_headroom = mxbuf.xdp.data - mxbuf.xdp.data_hard_start;
+		metasize = mxbuf.xdp.data - mxbuf.xdp.data_meta;
+		cqe_bcnt = mxbuf.xdp.data_end - mxbuf.xdp.data;
 	}
 	frag_size = MLX5_SKB_FRAG_SZ(rx_headroom + cqe_bcnt);
 	skb = mlx5e_build_linear_skb(rq, va, frag_size, rx_headroom, cqe_bcnt, metasize);
-- 
2.39.0.246.g2a6d74b583-goog