From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 13 Dec 2022 10:56:10 +0200
From: Tariq Toukan
To: Stanislav Fomichev, bpf@vger.kernel.org
Cc: ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org,
 martin.lau@linux.dev, song@kernel.org, yhs@fb.com, john.fastabend@gmail.com,
 kpsingh@kernel.org, haoluo@google.com, jolsa@kernel.org, Tariq Toukan,
 David Ahern, Jakub Kicinski, Willem de Bruijn, Jesper Dangaard Brouer,
 Anatoly Burakov, Alexander Lobakin, Magnus Karlsson, Maryam Tahhan,
 xdp-hints@xdp-project.net, netdev@vger.kernel.org
Subject: [xdp-hints] Re: [PATCH bpf-next v4 10/15] net/mlx4_en: Introduce wrapper for xdp_buff
In-Reply-To: <20221213023605.737383-11-sdf@google.com>
References: <20221213023605.737383-1-sdf@google.com>
 <20221213023605.737383-11-sdf@google.com>
List-Id: XDP hardware hints design discussion

On 12/13/2022 4:36 AM, Stanislav Fomichev wrote:
> No functional changes. Boilerplate to allow stuffing more data after xdp_buff.
>
> Cc: Tariq Toukan
> Cc: John Fastabend
> Cc: David Ahern
> Cc: Martin KaFai Lau
> Cc: Jakub Kicinski
> Cc: Willem de Bruijn
> Cc: Jesper Dangaard Brouer
> Cc: Anatoly Burakov
> Cc: Alexander Lobakin
> Cc: Magnus Karlsson
> Cc: Maryam Tahhan
> Cc: xdp-hints@xdp-project.net
> Cc: netdev@vger.kernel.org
> Signed-off-by: Stanislav Fomichev
> ---
>  drivers/net/ethernet/mellanox/mlx4/en_rx.c | 26 +++++++++++++++-----------
>  1 file changed, 15 insertions(+), 11 deletions(-)
>
> diff --git a/drivers/net/ethernet/mellanox/mlx4/en_rx.c b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
> index 8f762fc170b3..014a80af2813 100644
> --- a/drivers/net/ethernet/mellanox/mlx4/en_rx.c
> +++ b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
> @@ -661,9 +661,14 @@ static int check_csum(struct mlx4_cqe *cqe, struct sk_buff *skb, void *va,
>  #define MLX4_CQE_STATUS_IP_ANY (MLX4_CQE_STATUS_IPV4)
>  #endif
>
> +struct mlx4_en_xdp_buff {
> +	struct xdp_buff xdp;
> +};
> +
>  int mlx4_en_process_rx_cq(struct net_device *dev, struct mlx4_en_cq *cq, int budget)
>  {
>  	struct mlx4_en_priv *priv = netdev_priv(dev);
> +	struct mlx4_en_xdp_buff mxbuf = {};
>  	int factor = priv->cqe_factor;
>  	struct mlx4_en_rx_ring *ring;
>  	struct bpf_prog *xdp_prog;
> @@ -671,7 +676,6 @@ int mlx4_en_process_rx_cq(struct net_device *dev, struct mlx4_en_cq *cq, int bud
>  	bool doorbell_pending;
>  	bool xdp_redir_flush;
>  	struct mlx4_cqe *cqe;
> -	struct xdp_buff xdp;
>  	int polled = 0;
>  	int index;
>
> @@ -681,7 +685,7 @@ int mlx4_en_process_rx_cq(struct net_device *dev, struct mlx4_en_cq *cq, int bud
>  	ring = priv->rx_ring[cq_ring];
>
>  	xdp_prog = rcu_dereference_bh(ring->xdp_prog);
> -	xdp_init_buff(&xdp, priv->frag_info[0].frag_stride, &ring->xdp_rxq);
> +	xdp_init_buff(&mxbuf.xdp, priv->frag_info[0].frag_stride, &ring->xdp_rxq);
>  	doorbell_pending = false;
>  	xdp_redir_flush = false;
>
> @@ -776,24 +780,24 @@ int mlx4_en_process_rx_cq(struct net_device *dev, struct mlx4_en_cq *cq, int bud
>  						      priv->frag_info[0].frag_size,
>  						      DMA_FROM_DEVICE);
>
> -			xdp_prepare_buff(&xdp, va - frags[0].page_offset,
> +			xdp_prepare_buff(&mxbuf.xdp, va - frags[0].page_offset,
>  					 frags[0].page_offset, length, false);
> -			orig_data = xdp.data;
> +			orig_data = mxbuf.xdp.data;
>
> -			act = bpf_prog_run_xdp(xdp_prog, &xdp);
> +			act = bpf_prog_run_xdp(xdp_prog, &mxbuf.xdp);
>
> -			length = xdp.data_end - xdp.data;
> -			if (xdp.data != orig_data) {
> -				frags[0].page_offset = xdp.data -
> -					xdp.data_hard_start;
> -				va = xdp.data;
> +			length = mxbuf.xdp.data_end - mxbuf.xdp.data;
> +			if (mxbuf.xdp.data != orig_data) {
> +				frags[0].page_offset = mxbuf.xdp.data -
> +					mxbuf.xdp.data_hard_start;
> +				va = mxbuf.xdp.data;
>  			}
>
>  			switch (act) {
>  			case XDP_PASS:
>  				break;
>  			case XDP_REDIRECT:
> -				if (likely(!xdp_do_redirect(dev, &xdp, xdp_prog))) {
> +				if (likely(!xdp_do_redirect(dev, &mxbuf.xdp, xdp_prog))) {
>  					ring->xdp_redirect++;
>  					xdp_redir_flush = true;
>  					frags[0].page = NULL;

Thanks for your patches.

Reviewed-by: Tariq Toukan
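
The value of such a wrapper is that extra per-packet driver state can sit next to the xdp_buff and later be recovered from the xdp_buff pointer with container_of(). The minimal sketch below illustrates that pattern only; the cqe/mdev fields and the to_mlx4_en_xdp_buff() helper are hypothetical additions for illustration and are not part of this patch.

/*
 * Sketch only: how a driver could stuff more data after the xdp_buff
 * and get the wrapper back from the xdp_buff pointer it handed to XDP.
 * The cqe/mdev members and the helper are hypothetical; this patch
 * only introduces the wrapper with the xdp_buff member.
 */
#include <linux/container_of.h>
#include <net/xdp.h>

struct mlx4_cqe;	/* mlx4 device types, defined in the driver headers */
struct mlx4_en_dev;

struct mlx4_en_xdp_buff {
	struct xdp_buff xdp;
	/* Hypothetical extra state carried alongside the xdp_buff: */
	struct mlx4_cqe *cqe;
	struct mlx4_en_dev *mdev;
};

/* Recover the wrapper from the &mxbuf.xdp pointer passed around the RX path. */
static inline struct mlx4_en_xdp_buff *to_mlx4_en_xdp_buff(struct xdp_buff *xdp)
{
	return container_of(xdp, struct mlx4_en_xdp_buff, xdp);
}

With such a helper, the RX path could, for example, fill mxbuf.cqe before calling bpf_prog_run_xdp(xdp_prog, &mxbuf.xdp) and read it back via to_mlx4_en_xdp_buff() wherever only the xdp_buff pointer is available.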