From: Toke Høiland-Jørgensen <toke@redhat.com>
To: Stanislav Fomichev, Tariq Toukan
In-Reply-To: <87k01rfojm.fsf@toke.dk>
References: <20230112003230.3779451-1-sdf@google.com>
 <20230112003230.3779451-16-sdf@google.com> <87k01rfojm.fsf@toke.dk>
Date: Thu, 12 Jan 2023 22:55:05 +0100
Message-ID: <87h6wvfmfa.fsf@toke.dk>
Cc:
 bpf@vger.kernel.org, ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org,
 martin.lau@linux.dev, song@kernel.org, yhs@fb.com, john.fastabend@gmail.com,
 kpsingh@kernel.org, haoluo@google.com, jolsa@kernel.org, Tariq Toukan,
 Saeed Mahameed, David Ahern, Jakub Kicinski, Willem de Bruijn,
 Jesper Dangaard Brouer, Anatoly Burakov, Alexander Lobakin,
 Magnus Karlsson, Maryam Tahhan, xdp-hints@xdp-project.net,
 netdev@vger.kernel.org
Subject: [xdp-hints] Re: [PATCH bpf-next v7 15/17] net/mlx5e: Introduce wrapper for xdp_buff
List-Id: XDP hardware hints design discussion

Toke Høiland-Jørgensen writes:

> Stanislav Fomichev writes:
>
>> On Thu, Jan 12, 2023 at 12:07 AM Tariq Toukan wrote:
>>>
>>>
>>>
>>> On 12/01/2023 2:32, Stanislav Fomichev wrote:
>>> > From: Toke Høiland-Jørgensen
>>> >
>>> > Preparation for implementing HW metadata kfuncs. No functional change.
>>> >
>>> > Cc: Tariq Toukan
>>> > Cc: Saeed Mahameed
>>> > Cc: John Fastabend
>>> > Cc: David Ahern
>>> > Cc: Martin KaFai Lau
>>> > Cc: Jakub Kicinski
>>> > Cc: Willem de Bruijn
>>> > Cc: Jesper Dangaard Brouer
>>> > Cc: Anatoly Burakov
>>> > Cc: Alexander Lobakin
>>> > Cc: Magnus Karlsson
>>> > Cc: Maryam Tahhan
>>> > Cc: xdp-hints@xdp-project.net
>>> > Cc: netdev@vger.kernel.org
>>> > Signed-off-by: Toke Høiland-Jørgensen
>>> > Signed-off-by: Stanislav Fomichev
>>> > ---
>>> >  drivers/net/ethernet/mellanox/mlx5/core/en.h  |  1 +
>>> >  .../net/ethernet/mellanox/mlx5/core/en/xdp.c  |  3 +-
>>> >  .../net/ethernet/mellanox/mlx5/core/en/xdp.h  |  6 +-
>>> >  .../ethernet/mellanox/mlx5/core/en/xsk/rx.c   | 25 ++++----
>>> >  .../net/ethernet/mellanox/mlx5/core/en_rx.c   | 58 +++++++++----------
>>> >  5 files changed, 50 insertions(+), 43 deletions(-)
>>> >
>>> > diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
>>> > index 2d77fb8a8a01..af663978d1b4 100644
>>> > --- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
>>> > +++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
>>> > @@ -469,6 +469,7 @@ struct mlx5e_txqsq {
>>> >  union mlx5e_alloc_unit {
>>> >  	struct page *page;
>>> >  	struct xdp_buff *xsk;
>>> > +	struct mlx5e_xdp_buff *mxbuf;
>>>
>>> In XSK files below you mix usage of both alloc_units[page_idx].mxbuf and
>>> alloc_units[page_idx].xsk, while both fields share the memory of a union.
>>>
>>> As struct mlx5e_xdp_buff wraps struct xdp_buff, I think that you just
>>> need to change the existing xsk field type from struct xdp_buff *xsk
>>> into struct mlx5e_xdp_buff *xsk and align the usage.
>>
>> Hmmm, good point. I'm actually not sure how it works currently.
>> mlx5e_alloc_unit.mxbuf doesn't seem to be initialized anywhere? Toke,
>> am I missing something?
>
> It's initialised piecemeal in different places; but yeah, we're mixing
> things a bit...
>
>> I'm thinking about something like this:
>>
>> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h
>> b/drivers/net/ethernet/mellanox/mlx5/core/en.h
>> index af663978d1b4..2d77fb8a8a01 100644
>> --- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
>> +++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
>> @@ -469,7 +469,6 @@ struct mlx5e_txqsq {
>>  union mlx5e_alloc_unit {
>>  	struct page *page;
>>  	struct xdp_buff *xsk;
>> -	struct mlx5e_xdp_buff *mxbuf;
>>  };
>
> Hmm, for consistency with the non-XSK path we should rather go the other
> direction and lose the xsk member, moving everything to mxbuf? Let me
> give that a shot...

Something like the below?

-Toke

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
index 6de02d8aeab8..cb9cdb6421c5 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
@@ -468,7 +468,6 @@ struct mlx5e_txqsq {
 
 union mlx5e_alloc_unit {
 	struct page *page;
-	struct xdp_buff *xsk;
 	struct mlx5e_xdp_buff *mxbuf;
 };
 
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h
index cb568c62aba0..95694a25ec31 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h
@@ -33,6 +33,7 @@
 #define __MLX5_EN_XDP_H__
 
 #include
+#include
 
 #include "en.h"
 #include "en/txrx.h"
@@ -112,6 +113,21 @@ static inline void mlx5e_xmit_xdp_doorbell(struct mlx5e_xdpsq *sq)
 	}
 }
 
+static inline struct mlx5e_xdp_buff *mlx5e_xsk_buff_alloc(struct xsk_buff_pool *pool)
+{
+	return (struct mlx5e_xdp_buff *)xsk_buff_alloc(pool);
+}
+
+static inline void mlx5e_xsk_buff_free(struct mlx5e_xdp_buff *mxbuf)
+{
+	xsk_buff_free(&mxbuf->xdp);
+}
+
+static inline dma_addr_t mlx5e_xsk_buff_xdp_get_frame_dma(struct mlx5e_xdp_buff *mxbuf)
+{
+	return xsk_buff_xdp_get_frame_dma(&mxbuf->xdp);
+}
+
 /* Enable inline WQEs to shift some load from a congested HCA (HW) to
  * a less congested cpu (SW).
  */
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c
index 8bf3029abd3c..1f166dbb7f22 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c
@@ -3,7 +3,6 @@
 
 #include "rx.h"
 #include "en/xdp.h"
-#include
 #include
 
 /* RX data path */
@@ -21,7 +20,7 @@ int mlx5e_xsk_alloc_rx_mpwqe(struct mlx5e_rq *rq, u16 ix)
 	if (unlikely(!xsk_buff_can_alloc(rq->xsk_pool, rq->mpwqe.pages_per_wqe)))
 		goto err;
 
-	BUILD_BUG_ON(sizeof(wi->alloc_units[0]) != sizeof(wi->alloc_units[0].xsk));
+	BUILD_BUG_ON(sizeof(wi->alloc_units[0]) != sizeof(wi->alloc_units[0].mxbuf));
 	XSK_CHECK_PRIV_TYPE(struct mlx5e_xdp_buff);
 	batch = xsk_buff_alloc_batch(rq->xsk_pool, (struct xdp_buff **)wi->alloc_units,
 				     rq->mpwqe.pages_per_wqe);
@@ -33,8 +32,8 @@ int mlx5e_xsk_alloc_rx_mpwqe(struct mlx5e_rq *rq, u16 ix)
 	 * the first error, which will mean there are no more valid descriptors.
 	 */
 	for (; batch < rq->mpwqe.pages_per_wqe; batch++) {
-		wi->alloc_units[batch].xsk = xsk_buff_alloc(rq->xsk_pool);
-		if (unlikely(!wi->alloc_units[batch].xsk))
+		wi->alloc_units[batch].mxbuf = mlx5e_xsk_buff_alloc(rq->xsk_pool);
+		if (unlikely(!wi->alloc_units[batch].mxbuf))
 			goto err_reuse_batch;
 	}
 
@@ -44,7 +43,7 @@ int mlx5e_xsk_alloc_rx_mpwqe(struct mlx5e_rq *rq, u16 ix)
 
 	if (likely(rq->mpwqe.umr_mode == MLX5E_MPWRQ_UMR_MODE_ALIGNED)) {
 		for (i = 0; i < batch; i++) {
-			dma_addr_t addr = xsk_buff_xdp_get_frame_dma(wi->alloc_units[i].xsk);
+			dma_addr_t addr = mlx5e_xsk_buff_xdp_get_frame_dma(wi->alloc_units[i].mxbuf);
 
 			umr_wqe->inline_mtts[i] = (struct mlx5_mtt) {
 				.ptag = cpu_to_be64(addr | MLX5_EN_WR),
@@ -53,7 +52,7 @@ int mlx5e_xsk_alloc_rx_mpwqe(struct mlx5e_rq *rq, u16 ix)
 		}
 	} else if (unlikely(rq->mpwqe.umr_mode == MLX5E_MPWRQ_UMR_MODE_UNALIGNED)) {
 		for (i = 0; i < batch; i++) {
-			dma_addr_t addr = xsk_buff_xdp_get_frame_dma(wi->alloc_units[i].xsk);
+			dma_addr_t addr = mlx5e_xsk_buff_xdp_get_frame_dma(wi->alloc_units[i].mxbuf);
 
 			umr_wqe->inline_ksms[i] = (struct mlx5_ksm) {
 				.key = rq->mkey_be,
@@ -65,7 +64,7 @@ int mlx5e_xsk_alloc_rx_mpwqe(struct mlx5e_rq *rq, u16 ix)
 		u32 mapping_size = 1 << (rq->mpwqe.page_shift - 2);
 
 		for (i = 0; i < batch; i++) {
-			dma_addr_t addr = xsk_buff_xdp_get_frame_dma(wi->alloc_units[i].xsk);
+			dma_addr_t addr = mlx5e_xsk_buff_xdp_get_frame_dma(wi->alloc_units[i].mxbuf);
 
 			umr_wqe->inline_ksms[i << 2] = (struct mlx5_ksm) {
 				.key = rq->mkey_be,
@@ -91,7 +90,7 @@ int mlx5e_xsk_alloc_rx_mpwqe(struct mlx5e_rq *rq, u16 ix)
 		__be32 frame_size = cpu_to_be32(rq->xsk_pool->chunk_size);
 
 		for (i = 0; i < batch; i++) {
-			dma_addr_t addr = xsk_buff_xdp_get_frame_dma(wi->alloc_units[i].xsk);
+			dma_addr_t addr = mlx5e_xsk_buff_xdp_get_frame_dma(wi->alloc_units[i].mxbuf);
 
 			umr_wqe->inline_klms[i << 1] = (struct mlx5_klm) {
 				.key = rq->mkey_be,
@@ -137,7 +136,7 @@ int mlx5e_xsk_alloc_rx_mpwqe(struct mlx5e_rq *rq, u16 ix)
 
 err_reuse_batch:
 	while (--batch >= 0)
-		xsk_buff_free(wi->alloc_units[batch].xsk);
+		mlx5e_xsk_buff_free(wi->alloc_units[batch].mxbuf);
 
 err:
 	rq->stats->buff_alloc_err++;
@@ -156,7 +155,7 @@ int mlx5e_xsk_alloc_rx_wqes_batched(struct mlx5e_rq *rq, u16 ix, int wqe_bulk)
 	 * allocate XDP buffers straight into alloc_units.
 	 */
 	BUILD_BUG_ON(sizeof(rq->wqe.alloc_units[0]) !=
-		     sizeof(rq->wqe.alloc_units[0].xsk));
+		     sizeof(rq->wqe.alloc_units[0].mxbuf));
 	buffs = (struct xdp_buff **)rq->wqe.alloc_units;
 	contig = mlx5_wq_cyc_get_size(wq) - ix;
 	if (wqe_bulk <= contig) {
@@ -177,8 +176,9 @@ int mlx5e_xsk_alloc_rx_wqes_batched(struct mlx5e_rq *rq, u16 ix, int wqe_bulk)
 		/* Assumes log_num_frags == 0. */
 		frag = &rq->wqe.frags[j];
 
-		addr = xsk_buff_xdp_get_frame_dma(frag->au->xsk);
+		addr = mlx5e_xsk_buff_xdp_get_frame_dma(frag->au->mxbuf);
 		wqe->data[0].addr = cpu_to_be64(addr + rq->buff.headroom);
+		frag->au->mxbuf->rq = rq;
 	}
 
 	return alloc;
@@ -199,12 +199,13 @@ int mlx5e_xsk_alloc_rx_wqes(struct mlx5e_rq *rq, u16 ix, int wqe_bulk)
 		/* Assumes log_num_frags == 0. */
 		frag = &rq->wqe.frags[j];
 
-		frag->au->xsk = xsk_buff_alloc(rq->xsk_pool);
-		if (unlikely(!frag->au->xsk))
+		frag->au->mxbuf = mlx5e_xsk_buff_alloc(rq->xsk_pool);
+		if (unlikely(!frag->au->mxbuf))
 			return i;
 
-		addr = xsk_buff_xdp_get_frame_dma(frag->au->xsk);
+		addr = mlx5e_xsk_buff_xdp_get_frame_dma(frag->au->mxbuf);
 		wqe->data[0].addr = cpu_to_be64(addr + rq->buff.headroom);
+		frag->au->mxbuf->rq = rq;
 	}
 
 	return wqe_bulk;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
index 7b08653be000..4313165709cb 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
@@ -41,7 +41,6 @@
 #include
 #include
 #include
-#include
 #include "en.h"
 #include "en/txrx.h"
 #include "en_tc.h"
@@ -434,7 +433,7 @@ static inline void mlx5e_free_rx_wqe(struct mlx5e_rq *rq,
 		 * put into the Reuse Ring, because there is no way to return
 		 * the page to the userspace when the interface goes down.
 		 */
-		xsk_buff_free(wi->au->xsk);
+		mlx5e_xsk_buff_free(wi->au->mxbuf);
 		return;
 	}
 
@@ -515,7 +514,7 @@ mlx5e_free_rx_mpwqe(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi, bool recycle
 		 */
 		for (i = 0; i < rq->mpwqe.pages_per_wqe; i++)
 			if (no_xdp_xmit || !test_bit(i, wi->xdp_xmit_bitmap))
-				xsk_buff_free(alloc_units[i].xsk);
+				mlx5e_xsk_buff_free(alloc_units[i].mxbuf);
 	} else {
 		for (i = 0; i < rq->mpwqe.pages_per_wqe; i++)
 			if (no_xdp_xmit || !test_bit(i, wi->xdp_xmit_bitmap))