From mboxrd@z Thu Jan 1 00:00:00 1970
From: Stanislav Fomichev
Date: Tue, 20 Dec 2022 14:20:41 -0800
Subject: [xdp-hints] [PATCH bpf-next v5 15/17] net/mlx5e: Introduce wrapper for xdp_buff
Message-ID: <20221220222043.3348718-16-sdf@google.com>
In-Reply-To: <20221220222043.3348718-1-sdf@google.com>
References: <20221220222043.3348718-1-sdf@google.com>
X-Mailer: git-send-email 2.39.0.314.g84b9a713c41-goog
To: bpf@vger.kernel.org
Cc: ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org, martin.lau@linux.dev, song@kernel.org, yhs@fb.com, john.fastabend@gmail.com, kpsingh@kernel.org, sdf@google.com, haoluo@google.com, jolsa@kernel.org, Toke Høiland-Jørgensen, Saeed Mahameed, David Ahern, Jakub Kicinski, Willem de Bruijn, Jesper Dangaard Brouer, Anatoly Burakov, Alexander Lobakin, Magnus Karlsson, Maryam Tahhan, xdp-hints@xdp-project.net, netdev@vger.kernel.org
List-Id: XDP hardware hints design discussion

From: Toke Høiland-Jørgensen

Preparation for
implementing HW metadata kfuncs. No functional change.

Cc: Saeed Mahameed
Cc: John Fastabend
Cc: David Ahern
Cc: Martin KaFai Lau
Cc: Jakub Kicinski
Cc: Willem de Bruijn
Cc: Jesper Dangaard Brouer
Cc: Anatoly Burakov
Cc: Alexander Lobakin
Cc: Magnus Karlsson
Cc: Maryam Tahhan
Cc: xdp-hints@xdp-project.net
Cc: netdev@vger.kernel.org
Signed-off-by: Toke Høiland-Jørgensen
Signed-off-by: Stanislav Fomichev
---
 drivers/net/ethernet/mellanox/mlx5/core/en.h  |  1 +
 .../net/ethernet/mellanox/mlx5/core/en/xdp.c  |  3 +-
 .../net/ethernet/mellanox/mlx5/core/en/xdp.h  |  6 +-
 .../ethernet/mellanox/mlx5/core/en/xsk/rx.c   | 25 +++++----
 .../net/ethernet/mellanox/mlx5/core/en_rx.c   | 56 +++++++++----------
 5 files changed, 49 insertions(+), 42 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
index 2d77fb8a8a01..af663978d1b4 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
@@ -469,6 +469,7 @@ struct mlx5e_txqsq {
 union mlx5e_alloc_unit {
 	struct page *page;
 	struct xdp_buff *xsk;
+	struct mlx5e_xdp_buff *mxbuf;
 };
 
 /* XDP packets can be transmitted in different ways. On completion, we need to
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
index 20507ef2f956..31bb6806bf5d 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
@@ -158,8 +158,9 @@ mlx5e_xmit_xdp_buff(struct mlx5e_xdpsq *sq, struct mlx5e_rq *rq,
 
 /* returns true if packet was consumed by xdp */
 bool mlx5e_xdp_handle(struct mlx5e_rq *rq, struct page *page,
-		      struct bpf_prog *prog, struct xdp_buff *xdp)
+		      struct bpf_prog *prog, struct mlx5e_xdp_buff *mxbuf)
 {
+	struct xdp_buff *xdp = &mxbuf->xdp;
 	u32 act;
 	int err;
 
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h
index bc2d9034af5b..389818bf6833 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h
@@ -44,10 +44,14 @@
 	(MLX5E_XDP_INLINE_WQE_MAX_DS_CNT * MLX5_SEND_WQE_DS - \
 	 sizeof(struct mlx5_wqe_inline_seg))
 
+struct mlx5e_xdp_buff {
+	struct xdp_buff xdp;
+};
+
 struct mlx5e_xsk_param;
 int mlx5e_xdp_max_mtu(struct mlx5e_params *params, struct mlx5e_xsk_param *xsk);
 bool mlx5e_xdp_handle(struct mlx5e_rq *rq, struct page *page,
-		      struct bpf_prog *prog, struct xdp_buff *xdp);
+		      struct bpf_prog *prog, struct mlx5e_xdp_buff *mlctx);
 void mlx5e_xdp_mpwqe_complete(struct mlx5e_xdpsq *sq);
 bool mlx5e_poll_xdpsq_cq(struct mlx5e_cq *cq);
 void mlx5e_free_xdpsq_descs(struct mlx5e_xdpsq *sq);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c
index c91b54d9ff27..9cff82d764e3 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c
@@ -22,6 +22,7 @@ int mlx5e_xsk_alloc_rx_mpwqe(struct mlx5e_rq *rq, u16 ix)
 		goto err;
 
 	BUILD_BUG_ON(sizeof(wi->alloc_units[0]) != sizeof(wi->alloc_units[0].xsk));
+	XSK_CHECK_PRIV_TYPE(struct mlx5e_xdp_buff);
 	batch = xsk_buff_alloc_batch(rq->xsk_pool, (struct xdp_buff **)wi->alloc_units,
 				     rq->mpwqe.pages_per_wqe);
 
@@ -233,7 +234,7 @@ struct sk_buff *mlx5e_xsk_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq,
 						    u32 head_offset,
 						    u32 page_idx)
 {
-	struct xdp_buff *xdp = wi->alloc_units[page_idx].xsk;
+	struct mlx5e_xdp_buff *mxbuf = wi->alloc_units[page_idx].mxbuf;
 	struct bpf_prog *prog;
 
 	/* Check packet size. Note LRO doesn't use linear SKB */
@@ -249,9 +250,9 @@ struct sk_buff *mlx5e_xsk_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq,
 	 */
 	WARN_ON_ONCE(head_offset);
 
-	xsk_buff_set_size(xdp, cqe_bcnt);
-	xsk_buff_dma_sync_for_cpu(xdp, rq->xsk_pool);
-	net_prefetch(xdp->data);
+	xsk_buff_set_size(&mxbuf->xdp, cqe_bcnt);
+	xsk_buff_dma_sync_for_cpu(&mxbuf->xdp, rq->xsk_pool);
+	net_prefetch(mxbuf->xdp.data);
 
 	/* Possible flows:
 	 * - XDP_REDIRECT to XSKMAP:
@@ -269,7 +270,7 @@ struct sk_buff *mlx5e_xsk_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq,
 	 */
 
 	prog = rcu_dereference(rq->xdp_prog);
-	if (likely(prog && mlx5e_xdp_handle(rq, NULL, prog, xdp))) {
+	if (likely(prog && mlx5e_xdp_handle(rq, NULL, prog, mxbuf))) {
 		if (likely(__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags)))
 			__set_bit(page_idx, wi->xdp_xmit_bitmap); /* non-atomic */
 		return NULL; /* page/packet was consumed by XDP */
@@ -278,14 +279,14 @@ struct sk_buff *mlx5e_xsk_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq,
 	/* XDP_PASS: copy the data from the UMEM to a new SKB and reuse the
 	 * frame. On SKB allocation failure, NULL is returned.
 	 */
-	return mlx5e_xsk_construct_skb(rq, xdp);
+	return mlx5e_xsk_construct_skb(rq, &mxbuf->xdp);
 }
 
 struct sk_buff *mlx5e_xsk_skb_from_cqe_linear(struct mlx5e_rq *rq,
 					      struct mlx5e_wqe_frag_info *wi,
 					      u32 cqe_bcnt)
 {
-	struct xdp_buff *xdp = wi->au->xsk;
+	struct mlx5e_xdp_buff *mxbuf = wi->au->mxbuf;
 	struct bpf_prog *prog;
 
 	/* wi->offset is not used in this function, because xdp->data and the
@@ -295,17 +296,17 @@ struct sk_buff *mlx5e_xsk_skb_from_cqe_linear(struct mlx5e_rq *rq,
 	 */
 	WARN_ON_ONCE(wi->offset);
 
-	xsk_buff_set_size(xdp, cqe_bcnt);
-	xsk_buff_dma_sync_for_cpu(xdp, rq->xsk_pool);
-	net_prefetch(xdp->data);
+	xsk_buff_set_size(&mxbuf->xdp, cqe_bcnt);
+	xsk_buff_dma_sync_for_cpu(&mxbuf->xdp, rq->xsk_pool);
+	net_prefetch(mxbuf->xdp.data);
 
 	prog = rcu_dereference(rq->xdp_prog);
-	if (likely(prog && mlx5e_xdp_handle(rq, NULL, prog, xdp)))
+	if (likely(prog && mlx5e_xdp_handle(rq, NULL, prog, mxbuf)))
 		return NULL; /* page/packet was consumed by XDP */
 
 	/* XDP_PASS: copy the data from the UMEM to a new SKB. The frame reuse
 	 * will be handled by mlx5e_free_rx_wqe.
 	 * On SKB allocation failure, NULL is returned.
 	 */
-	return mlx5e_xsk_construct_skb(rq, xdp);
+	return mlx5e_xsk_construct_skb(rq, &mxbuf->xdp);
 }
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
index c8820ab22169..c8a2b26de36e 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
@@ -1576,10 +1576,10 @@ struct sk_buff *mlx5e_build_linear_skb(struct mlx5e_rq *rq, void *va,
 }
 
 static void mlx5e_fill_xdp_buff(struct mlx5e_rq *rq, void *va, u16 headroom,
-				u32 len, struct xdp_buff *xdp)
+				u32 len, struct mlx5e_xdp_buff *mxbuf)
 {
-	xdp_init_buff(xdp, rq->buff.frame0_sz, &rq->xdp_rxq);
-	xdp_prepare_buff(xdp, va, headroom, len, true);
+	xdp_init_buff(&mxbuf->xdp, rq->buff.frame0_sz, &rq->xdp_rxq);
+	xdp_prepare_buff(&mxbuf->xdp, va, headroom, len, true);
 }
 
 static struct sk_buff *
@@ -1606,16 +1606,16 @@ mlx5e_skb_from_cqe_linear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi,
 
 	prog = rcu_dereference(rq->xdp_prog);
 	if (prog) {
-		struct xdp_buff xdp;
+		struct mlx5e_xdp_buff mxbuf;
 
 		net_prefetchw(va); /* xdp_frame data area */
-		mlx5e_fill_xdp_buff(rq, va, rx_headroom, cqe_bcnt, &xdp);
-		if (mlx5e_xdp_handle(rq, au->page, prog, &xdp))
+		mlx5e_fill_xdp_buff(rq, va, rx_headroom, cqe_bcnt, &mxbuf);
+		if (mlx5e_xdp_handle(rq, au->page, prog, &mxbuf))
 			return NULL; /* page/packet was consumed by XDP */
 
-		rx_headroom = xdp.data - xdp.data_hard_start;
-		metasize = xdp.data - xdp.data_meta;
-		cqe_bcnt = xdp.data_end - xdp.data;
+		rx_headroom = mxbuf.xdp.data - mxbuf.xdp.data_hard_start;
+		metasize = mxbuf.xdp.data - mxbuf.xdp.data_meta;
+		cqe_bcnt = mxbuf.xdp.data_end - mxbuf.xdp.data;
 	}
 	frag_size = MLX5_SKB_FRAG_SZ(rx_headroom + cqe_bcnt);
 	skb = mlx5e_build_linear_skb(rq, va, frag_size, rx_headroom, cqe_bcnt, metasize);
@@ -1637,9 +1637,9 @@ mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi
 	union mlx5e_alloc_unit *au = wi->au;
 	u16 rx_headroom = rq->buff.headroom;
 	struct skb_shared_info *sinfo;
+	struct mlx5e_xdp_buff mxbuf;
 	u32 frag_consumed_bytes;
 	struct bpf_prog *prog;
-	struct xdp_buff xdp;
 	struct sk_buff *skb;
 	dma_addr_t addr;
 	u32 truesize;
@@ -1654,8 +1654,8 @@ mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi
 	net_prefetchw(va); /* xdp_frame data area */
 	net_prefetch(va + rx_headroom);
 
-	mlx5e_fill_xdp_buff(rq, va, rx_headroom, frag_consumed_bytes, &xdp);
-	sinfo = xdp_get_shared_info_from_buff(&xdp);
+	mlx5e_fill_xdp_buff(rq, va, rx_headroom, frag_consumed_bytes, &mxbuf);
+	sinfo = xdp_get_shared_info_from_buff(&mxbuf.xdp);
 	truesize = 0;
 
 	cqe_bcnt -= frag_consumed_bytes;
@@ -1673,13 +1673,13 @@ mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi
 		dma_sync_single_for_cpu(rq->pdev, addr + wi->offset,
 					frag_consumed_bytes, rq->buff.map_dir);
 
-		if (!xdp_buff_has_frags(&xdp)) {
+		if (!xdp_buff_has_frags(&mxbuf.xdp)) {
 			/* Init on the first fragment to avoid cold cache access
 			 * when possible.
 			 */
 			sinfo->nr_frags = 0;
 			sinfo->xdp_frags_size = 0;
-			xdp_buff_set_frags_flag(&xdp);
+			xdp_buff_set_frags_flag(&mxbuf.xdp);
 		}
 
 		frag = &sinfo->frags[sinfo->nr_frags++];
@@ -1688,7 +1688,7 @@ mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi
 		skb_frag_size_set(frag, frag_consumed_bytes);
 
 		if (page_is_pfmemalloc(au->page))
-			xdp_buff_set_frag_pfmemalloc(&xdp);
+			xdp_buff_set_frag_pfmemalloc(&mxbuf.xdp);
 
 		sinfo->xdp_frags_size += frag_consumed_bytes;
 		truesize += frag_info->frag_stride;
@@ -1701,7 +1701,7 @@ mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi
 	au = head_wi->au;
 
 	prog = rcu_dereference(rq->xdp_prog);
-	if (prog && mlx5e_xdp_handle(rq, au->page, prog, &xdp)) {
+	if (prog && mlx5e_xdp_handle(rq, au->page, prog, &mxbuf)) {
 		if (test_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags)) {
 			int i;
 
@@ -1711,22 +1711,22 @@ mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi
 		return NULL; /* page/packet was consumed by XDP */
 	}
 
-	skb = mlx5e_build_linear_skb(rq, xdp.data_hard_start, rq->buff.frame0_sz,
-				     xdp.data - xdp.data_hard_start,
-				     xdp.data_end - xdp.data,
-				     xdp.data - xdp.data_meta);
+	skb = mlx5e_build_linear_skb(rq, mxbuf.xdp.data_hard_start, rq->buff.frame0_sz,
+				     mxbuf.xdp.data - mxbuf.xdp.data_hard_start,
+				     mxbuf.xdp.data_end - mxbuf.xdp.data,
+				     mxbuf.xdp.data - mxbuf.xdp.data_meta);
 	if (unlikely(!skb))
 		return NULL;
 
 	page_ref_inc(au->page);
 
-	if (unlikely(xdp_buff_has_frags(&xdp))) {
+	if (unlikely(xdp_buff_has_frags(&mxbuf.xdp))) {
 		int i;
 
 		/* sinfo->nr_frags is reset by build_skb, calculate again. */
 		xdp_update_skb_shared_info(skb, wi - head_wi - 1,
 					   sinfo->xdp_frags_size, truesize,
-					   xdp_buff_is_frag_pfmemalloc(&xdp));
+					   xdp_buff_is_frag_pfmemalloc(&mxbuf.xdp));
 
 		for (i = 0; i < sinfo->nr_frags; i++) {
 			skb_frag_t *frag = &sinfo->frags[i];
@@ -2007,19 +2007,19 @@ mlx5e_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi,
 
 	prog = rcu_dereference(rq->xdp_prog);
 	if (prog) {
-		struct xdp_buff xdp;
+		struct mlx5e_xdp_buff mxbuf;
 
 		net_prefetchw(va); /* xdp_frame data area */
-		mlx5e_fill_xdp_buff(rq, va, rx_headroom, cqe_bcnt, &xdp);
-		if (mlx5e_xdp_handle(rq, au->page, prog, &xdp)) {
+		mlx5e_fill_xdp_buff(rq, va, rx_headroom, cqe_bcnt, &mxbuf);
+		if (mlx5e_xdp_handle(rq, au->page, prog, &mxbuf)) {
 			if (__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags))
 				__set_bit(page_idx, wi->xdp_xmit_bitmap); /* non-atomic */
 			return NULL; /* page/packet was consumed by XDP */
 		}
 
-		rx_headroom = xdp.data - xdp.data_hard_start;
-		metasize = xdp.data - xdp.data_meta;
-		cqe_bcnt = xdp.data_end - xdp.data;
+		rx_headroom = mxbuf.xdp.data - mxbuf.xdp.data_hard_start;
+		metasize = mxbuf.xdp.data - mxbuf.xdp.data_meta;
+		cqe_bcnt = mxbuf.xdp.data_end - mxbuf.xdp.data;
 	}
 	frag_size = MLX5_SKB_FRAG_SZ(rx_headroom + cqe_bcnt);
 	skb = mlx5e_build_linear_skb(rq, va, frag_size, rx_headroom, cqe_bcnt, metasize);
-- 
2.39.0.314.g84b9a713c41-goog