From: Tariq Toukan <ttoukan.linux@gmail.com>
Date: Thu, 8 Dec 2022 08:11:40 +0200
To: Stanislav Fomichev, bpf@vger.kernel.org
Message-ID: <8d5f451a-c49b-1abc-6573-71831aa09739@gmail.com>
In-Reply-To: <20221206024554.3826186-8-sdf@google.com>
References: <20221206024554.3826186-1-sdf@google.com> <20221206024554.3826186-8-sdf@google.com>
Cc: ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org, martin.lau@linux.dev, song@kernel.org, yhs@fb.com, john.fastabend@gmail.com,
 kpsingh@kernel.org, haoluo@google.com, jolsa@kernel.org, Tariq Toukan, David Ahern, Jakub Kicinski, Willem de Bruijn, Jesper Dangaard Brouer, Anatoly Burakov, Alexander Lobakin, Magnus Karlsson, Maryam Tahhan, xdp-hints@xdp-project.net, netdev@vger.kernel.org
Subject: [xdp-hints] Re: [PATCH bpf-next v3 07/12] mlx4: Introduce mlx4_xdp_buff wrapper for xdp_buff
List-Id: XDP hardware hints design discussion

On 12/6/2022 4:45 AM, Stanislav Fomichev wrote:
> No functional changes. Boilerplate to allow stuffing more data after xdp_buff.
>
> Cc: Tariq Toukan
> Cc: John Fastabend
> Cc: David Ahern
> Cc: Martin KaFai Lau
> Cc: Jakub Kicinski
> Cc: Willem de Bruijn
> Cc: Jesper Dangaard Brouer
> Cc: Anatoly Burakov
> Cc: Alexander Lobakin
> Cc: Magnus Karlsson
> Cc: Maryam Tahhan
> Cc: xdp-hints@xdp-project.net
> Cc: netdev@vger.kernel.org
> Signed-off-by: Stanislav Fomichev
> ---
>  drivers/net/ethernet/mellanox/mlx4/en_rx.c | 26 +++++++++++++---------
>  1 file changed, 15 insertions(+), 11 deletions(-)
>
> diff --git a/drivers/net/ethernet/mellanox/mlx4/en_rx.c b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
> index 8f762fc170b3..9c114fc723e3 100644
> --- a/drivers/net/ethernet/mellanox/mlx4/en_rx.c
> +++ b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
> @@ -661,9 +661,14 @@ static int check_csum(struct mlx4_cqe *cqe, struct sk_buff *skb, void *va,
>  #define MLX4_CQE_STATUS_IP_ANY (MLX4_CQE_STATUS_IPV4)
>  #endif
>
> +struct mlx4_xdp_buff {
> +    struct xdp_buff xdp;
> +};
> +

Prefer name with 'en', struct mlx4_en_xdp_buff.

>  int mlx4_en_process_rx_cq(struct net_device *dev, struct mlx4_en_cq *cq, int budget)
>  {
>      struct mlx4_en_priv *priv = netdev_priv(dev);
> +    struct mlx4_xdp_buff mxbuf = {};
>      int factor = priv->cqe_factor;
>      struct mlx4_en_rx_ring *ring;
>      struct bpf_prog *xdp_prog;
> @@ -671,7 +676,6 @@ int mlx4_en_process_rx_cq(struct net_device *dev, struct mlx4_en_cq *cq, int bud
>      bool doorbell_pending;
>      bool xdp_redir_flush;
>      struct mlx4_cqe *cqe;
> -    struct xdp_buff xdp;
>      int polled = 0;
>      int index;
>
> @@ -681,7 +685,7 @@ int mlx4_en_process_rx_cq(struct net_device *dev, struct mlx4_en_cq *cq, int bud
>      ring = priv->rx_ring[cq_ring];
>
>      xdp_prog = rcu_dereference_bh(ring->xdp_prog);
> -    xdp_init_buff(&xdp, priv->frag_info[0].frag_stride, &ring->xdp_rxq);
> +    xdp_init_buff(&mxbuf.xdp, priv->frag_info[0].frag_stride, &ring->xdp_rxq);
>      doorbell_pending = false;
>      xdp_redir_flush = false;
>
> @@ -776,24 +780,24 @@ int mlx4_en_process_rx_cq(struct net_device *dev, struct mlx4_en_cq *cq, int bud
>                          priv->frag_info[0].frag_size,
>                          DMA_FROM_DEVICE);
>
> -            xdp_prepare_buff(&xdp, va - frags[0].page_offset,
> +            xdp_prepare_buff(&mxbuf.xdp, va - frags[0].page_offset,
>                       frags[0].page_offset, length, false);
> -            orig_data = xdp.data;
> +            orig_data = mxbuf.xdp.data;
>
> -            act = bpf_prog_run_xdp(xdp_prog, &xdp);
> +            act = bpf_prog_run_xdp(xdp_prog, &mxbuf.xdp);
>
> -            length = xdp.data_end - xdp.data;
> -            if (xdp.data != orig_data) {
> -                frags[0].page_offset = xdp.data -
> -                    xdp.data_hard_start;
> -                va = xdp.data;
> +            length = mxbuf.xdp.data_end - mxbuf.xdp.data;
> +            if (mxbuf.xdp.data != orig_data) {
> +                frags[0].page_offset = mxbuf.xdp.data -
> +                    mxbuf.xdp.data_hard_start;
> +                va = mxbuf.xdp.data;
>             }
>
>             switch (act) {
>             case XDP_PASS:
>                 break;
>             case XDP_REDIRECT:
> -                if (likely(!xdp_do_redirect(dev, &xdp, xdp_prog))) {
> +                if (likely(!xdp_do_redirect(dev, &mxbuf.xdp, xdp_prog))) {
>                      ring->xdp_redirect++;
>                      xdp_redir_flush = true;
>                      frags[0].page = NULL;
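
To spell out why the name matters to me: my understanding is that this wrapper exists so later patches can park extra per-packet context next to the xdp_buff and recover it with container_of(). A minimal sketch of that pattern with the 'en' naming I'm suggesting (the cqe field and the helper below are only illustrative, not something this patch adds):

/* Illustrative sketch only -- not part of this patch. */
struct mlx4_en_xdp_buff {
    struct xdp_buff xdp;    /* embedded buff; keeping it first also lets a plain cast work */
    struct mlx4_cqe *cqe;   /* hypothetical extra per-packet context */
};

/* Code that is handed only the xdp_buff pointer (e.g. a metadata helper)
 * can get back to the driver-private wrapper:
 */
static inline struct mlx4_en_xdp_buff *
mlx4_en_xdp_buff_from_xdp(struct xdp_buff *xdp)
{
    return container_of(xdp, struct mlx4_en_xdp_buff, xdp);
}

With that shape the RX loop only ever passes &mxbuf.xdp around, exactly as this diff already does, and the name stays aligned with the rest of the mlx4_en_* structures.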