From mboxrd@z Thu Jan 1 00:00:00 1970
From: Magnus Karlsson <magnus.karlsson@gmail.com>
Date: Tue, 17 Oct 2023 18:37:07 +0200
To: Maciej Fijalkowski
Cc: Larysa Zaremba, bpf@vger.kernel.org, ast@kernel.org, daniel@iogearbox.net,
 andrii@kernel.org, martin.lau@linux.dev, song@kernel.org, yhs@fb.com,
 john.fastabend@gmail.com, kpsingh@kernel.org, sdf@google.com,
 haoluo@google.com, jolsa@kernel.org, David Ahern, Jakub Kicinski,
 Willem de Bruijn, Jesper Dangaard Brouer, Anatoly Burakov,
 Alexander Lobakin, Maryam Tahhan, xdp-hints@xdp-project.net,
 netdev@vger.kernel.org, Alexei Starovoitov, Simon Horman, Tariq Toukan,
 Saeed Mahameed, magnus.karlsson@intel.com
References: <20231012170524.21085-1-larysa.zaremba@intel.com>
 <20231012170524.21085-8-larysa.zaremba@intel.com>
Subject: [xdp-hints] Re: [PATCH bpf-next v6 07/18] ice: Support XDP hints in AF_XDP ZC mode
List-Id: XDP hardware hints design discussion

On Tue, 17 Oct 2023 at 18:13, Maciej Fijalkowski wrote:
>
> On Thu, Oct 12, 2023 at 07:05:13PM +0200, Larysa Zaremba wrote:
> > In AF_XDP ZC, xdp_buff is not stored on ring,
> > instead it is provided by xsk_buff_pool.
> > Space for metadata sources right after such buffers was already reserved
> > in commit 94ecc5ca4dbf ("xsk: Add cb area to struct xdp_buff_xsk").
> > This makes the implementation rather straightforward.
> >
> > Update AF_XDP ZC packet processing to support XDP hints.
> >
> > Signed-off-by: Larysa Zaremba
> > ---
> >  drivers/net/ethernet/intel/ice/ice_xsk.c | 34 ++++++++++++++++++++++--
> >  1 file changed, 32 insertions(+), 2 deletions(-)
> >
> > diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
> > index ef778b8e6d1b..6ca620b2fbdd 100644
> > --- a/drivers/net/ethernet/intel/ice/ice_xsk.c
> > +++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
> > @@ -752,22 +752,51 @@ static int ice_xmit_xdp_tx_zc(struct xdp_buff *xdp,
> >  	return ICE_XDP_CONSUMED;
> >  }
> >
> > +/**
> > + * ice_prepare_pkt_ctx_zc - Prepare packet context for XDP hints
> > + * @xdp: xdp_buff used as input to the XDP program
> > + * @eop_desc: End of packet descriptor
> > + * @rx_ring: Rx ring with packet context
> > + *
> > + * In regular XDP, xdp_buff is placed inside the ring structure,
> > + * just before the packet context, so the latter can be accessed
> > + * with xdp_buff address only at all times, but in ZC mode,
> > + * xdp_buffs come from the pool, so we need to reinitialize
> > + * context for every packet.
> > + *
> > + * We can safely convert xdp_buff_xsk to ice_xdp_buff,
> > + * because there are XSK_PRIV_MAX bytes reserved in xdp_buff_xsk
> > + * right after xdp_buff, for our private use.
> > + * XSK_CHECK_PRIV_TYPE() ensures we do not go above the limit.
> > + */
> > +static void ice_prepare_pkt_ctx_zc(struct xdp_buff *xdp,
> > +				   union ice_32b_rx_flex_desc *eop_desc,
> > +				   struct ice_rx_ring *rx_ring)
> > +{
> > +	XSK_CHECK_PRIV_TYPE(struct ice_xdp_buff);
> > +	((struct ice_xdp_buff *)xdp)->pkt_ctx = rx_ring->pkt_ctx;
>
> I will be loud thinking over here, but this could be set in
> ice_fill_rx_descs(), while grabbing xdp_buffs from xsk_pool, should
> minimize the performance overhead.
>
> But then again you address that with static branch in later patch.
>
> OTOH, I was thinking that we could come with xsk_buff_pool API that would
> let drivers assign this at setup time. Similar what is being done with dma
> mappings.
>
> Magnus, do you think it is worth the hassle? Thoughts?

I would measure the overhead of the current assignment and if it is
significant (incurs a cache miss for example), then why not try out your
idea. Usually good not to have to touch things when not needed.

> Or should we advise any other driver that support hints to mimic static
> branch solution?
> > +	ice_xdp_meta_set_desc(xdp, eop_desc);
> > +}
> > +
> >  /**
> >   * ice_run_xdp_zc - Executes an XDP program in zero-copy path
> >   * @rx_ring: Rx ring
> >   * @xdp: xdp_buff used as input to the XDP program
> >   * @xdp_prog: XDP program to run
> >   * @xdp_ring: ring to be used for XDP_TX action
> >   * @rx_desc: packet descriptor
> >   *
> >   * Returns any of ICE_XDP_{PASS, CONSUMED, TX, REDIR}
> >   */
> >  static int
> >  ice_run_xdp_zc(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp,
> > -	       struct bpf_prog *xdp_prog, struct ice_tx_ring *xdp_ring)
> > +	       struct bpf_prog *xdp_prog, struct ice_tx_ring *xdp_ring,
> > +	       union ice_32b_rx_flex_desc *rx_desc)
> >  {
> >  	int err, result = ICE_XDP_PASS;
> >  	u32 act;
> >
> > +	ice_prepare_pkt_ctx_zc(xdp, rx_desc, rx_ring);
> >  	act = bpf_prog_run_xdp(xdp_prog, xdp);
> >
> >  	if (likely(act == XDP_REDIRECT)) {
> > @@ -907,7 +936,8 @@ int ice_clean_rx_irq_zc(struct ice_rx_ring *rx_ring, int budget)
> >  		if (ice_is_non_eop(rx_ring, rx_desc))
> >  			continue;
> >
> > -		xdp_res = ice_run_xdp_zc(rx_ring, first, xdp_prog, xdp_ring);
> > +		xdp_res = ice_run_xdp_zc(rx_ring, first, xdp_prog, xdp_ring,
> > +					 rx_desc);
> >  		if (likely(xdp_res & (ICE_XDP_TX | ICE_XDP_REDIR))) {
> >  			xdp_xmit |= xdp_res;
> >  		} else if (xdp_res == ICE_XDP_EXIT) {
> > --
> > 2.41.0
> >