XDP hardware hints discussion mail archive
From: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
To: Larysa Zaremba <larysa.zaremba@intel.com>
Cc: bpf@vger.kernel.org, ast@kernel.org, daniel@iogearbox.net,
	andrii@kernel.org, martin.lau@linux.dev, song@kernel.org,
	yhs@fb.com, john.fastabend@gmail.com, kpsingh@kernel.org,
	sdf@google.com, haoluo@google.com, jolsa@kernel.org,
	David Ahern <dsahern@gmail.com>, Jakub Kicinski <kuba@kernel.org>,
	Willem de Bruijn <willemb@google.com>,
	Jesper Dangaard Brouer <hawk@kernel.org>,
	Anatoly Burakov <anatoly.burakov@intel.com>,
	Alexander Lobakin <alexandr.lobakin@intel.com>,
	Magnus Karlsson <magnus.karlsson@gmail.com>,
	Maryam Tahhan <mtahhan@redhat.com>,
	xdp-hints@xdp-project.net, netdev@vger.kernel.org,
	Willem de Bruijn <willemdebruijn.kernel@gmail.com>,
	Alexei Starovoitov <alexei.starovoitov@gmail.com>,
	Simon Horman <simon.horman@corigine.com>,
	Tariq Toukan <tariqt@mellanox.com>,
	Saeed Mahameed <saeedm@mellanox.com>,
	magnus.karlsson@intel.com
Subject: [xdp-hints] Re: [PATCH bpf-next v6 07/18] ice: Support XDP hints in AF_XDP ZC mode
Date: Tue, 17 Oct 2023 18:13:30 +0200
Message-ID: <ZS6yqqMZD1mojQNr@boxer>
In-Reply-To: <20231012170524.21085-8-larysa.zaremba@intel.com>

On Thu, Oct 12, 2023 at 07:05:13PM +0200, Larysa Zaremba wrote:
> In AF_XDP ZC, the xdp_buff is not stored on the ring; instead, it is
> provided by xsk_buff_pool.
> Space for metadata sources right after such buffers was already reserved
> in commit 94ecc5ca4dbf ("xsk: Add cb area to struct xdp_buff_xsk").
> This makes the implementation rather straightforward.
> 
> Update AF_XDP ZC packet processing to support XDP hints.
> 
> Signed-off-by: Larysa Zaremba <larysa.zaremba@intel.com>
> ---
>  drivers/net/ethernet/intel/ice/ice_xsk.c | 34 ++++++++++++++++++++++--
>  1 file changed, 32 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
> index ef778b8e6d1b..6ca620b2fbdd 100644
> --- a/drivers/net/ethernet/intel/ice/ice_xsk.c
> +++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
> @@ -752,22 +752,51 @@ static int ice_xmit_xdp_tx_zc(struct xdp_buff *xdp,
>  	return ICE_XDP_CONSUMED;
>  }
>  
> +/**
> + * ice_prepare_pkt_ctx_zc - Prepare packet context for XDP hints
> + * @xdp: xdp_buff used as input to the XDP program
> + * @eop_desc: End of packet descriptor
> + * @rx_ring: Rx ring with packet context
> + *
> + * In regular XDP, the xdp_buff is placed inside the ring structure,
> + * just before the packet context, so the latter can always be reached
> + * from the xdp_buff address alone. In ZC mode, however, xdp_buffs
> + * come from the pool, so the context has to be reinitialized for
> + * every packet.
> + *
> + * We can safely convert xdp_buff_xsk to ice_xdp_buff,
> + * because there are XSK_PRIV_MAX bytes reserved in xdp_buff_xsk
> + * right after xdp_buff, for our private use.
> + * XSK_CHECK_PRIV_TYPE() ensures we do not go above the limit.
> + */
> +static void ice_prepare_pkt_ctx_zc(struct xdp_buff *xdp,
> +				   union ice_32b_rx_flex_desc *eop_desc,
> +				   struct ice_rx_ring *rx_ring)
> +{
> +	XSK_CHECK_PRIV_TYPE(struct ice_xdp_buff);
> +	((struct ice_xdp_buff *)xdp)->pkt_ctx = rx_ring->pkt_ctx;

I will be thinking out loud over here, but this could be set in
ice_fill_rx_descs(), while grabbing xdp_buffs from the xsk_pool; that
should minimize the performance overhead.

But then again, you address that with a static branch in a later patch.

OTOH, I was thinking that we could come up with an xsk_buff_pool API that
would let drivers assign this at setup time, similar to what is being done
with dma mappings; a rough sketch of what I mean follows below.

Magnus, do you think it is worth the hassle? Thoughts?

Or should we advise any other driver that supports hints to mimic the
static branch solution?
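
To make that last idea a bit more concrete, here is a minimal sketch of
the kind of setup-time hook I have in mind. None of the helpers below
exist today (xsk_pool_set_priv_init() and its callback type are made up
for illustration), so treat this as a sketch only:

	/* Driver callback, run once per buffer at pool setup time,
	 * e.g. next to the existing xsk_pool_dma_map() call, instead
	 * of once per packet in the hot path.
	 */
	static void ice_xsk_buff_init(struct xdp_buff *xdp, void *priv)
	{
		struct ice_rx_ring *rx_ring = priv;

		/* same assignment as in ice_prepare_pkt_ctx_zc() above */
		((struct ice_xdp_buff *)xdp)->pkt_ctx = rx_ring->pkt_ctx;
	}

	/* hypothetical registration, done once when the pool is enabled */
	xsk_pool_set_priv_init(rx_ring->xsk_pool, ice_xsk_buff_init, rx_ring);

ice_xdp_meta_set_desc() would of course still have to run per packet,
since the descriptor changes with every frame.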

> +	ice_xdp_meta_set_desc(xdp, eop_desc);
> +}
> +
>  /**
>   * ice_run_xdp_zc - Executes an XDP program in zero-copy path
>   * @rx_ring: Rx ring
>   * @xdp: xdp_buff used as input to the XDP program
>   * @xdp_prog: XDP program to run
>   * @xdp_ring: ring to be used for XDP_TX action
> + * @rx_desc: packet descriptor
>   *
>   * Returns any of ICE_XDP_{PASS, CONSUMED, TX, REDIR}
>   */
>  static int
>  ice_run_xdp_zc(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp,
> -	       struct bpf_prog *xdp_prog, struct ice_tx_ring *xdp_ring)
> +	       struct bpf_prog *xdp_prog, struct ice_tx_ring *xdp_ring,
> +	       union ice_32b_rx_flex_desc *rx_desc)
>  {
>  	int err, result = ICE_XDP_PASS;
>  	u32 act;
>  
> +	ice_prepare_pkt_ctx_zc(xdp, rx_desc, rx_ring);
>  	act = bpf_prog_run_xdp(xdp_prog, xdp);
>  
>  	if (likely(act == XDP_REDIRECT)) {
> @@ -907,7 +936,8 @@ int ice_clean_rx_irq_zc(struct ice_rx_ring *rx_ring, int budget)
>  		if (ice_is_non_eop(rx_ring, rx_desc))
>  			continue;
>  
> -		xdp_res = ice_run_xdp_zc(rx_ring, first, xdp_prog, xdp_ring);
> +		xdp_res = ice_run_xdp_zc(rx_ring, first, xdp_prog, xdp_ring,
> +					 rx_desc);
>  		if (likely(xdp_res & (ICE_XDP_TX | ICE_XDP_REDIR))) {
>  			xdp_xmit |= xdp_res;
>  		} else if (xdp_res == ICE_XDP_EXIT) {
> -- 
> 2.41.0
> 
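For reference, the cb area arrangement that the kernel-doc above relies on
looks roughly like this (paraphrased from include/net/xsk_buff_pool.h and
the ice headers in this series; fields are trimmed and names approximated,
so check the tree for the exact definitions):

	#define XSK_PRIV_MAX 24		/* value at the time of writing */

	struct xdp_buff_xsk {
		struct xdp_buff xdp;	/* must stay first */
		u8 cb[XSK_PRIV_MAX];	/* driver-private scratch area */
		/* dma address, pool pointer, list node, ... */
	};

	/* compile-time guard that a driver's private struct fits into
	 * xdp_buff plus the cb[] area
	 */
	#define XSK_CHECK_PRIV_TYPE(t) \
		BUILD_BUG_ON(sizeof(t) > offsetofend(struct xdp_buff_xsk, cb))

	struct ice_xdp_buff {
		struct xdp_buff xdp_buff;	/* overlays xdp_buff_xsk::xdp */
		const union ice_32b_rx_flex_desc *eop_desc;
		struct ice_pkt_ctx pkt_ctx;	/* lands in the cb[] area */
	};

So casting an xdp_buff that came from the pool to struct ice_xdp_buff only
touches memory that xdp_buff_xsk already owns, which is why the cast in
ice_prepare_pkt_ctx_zc() is safe.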


Thread overview: 38+ messages
2023-10-12 17:05 [xdp-hints] [PATCH bpf-next v6 00/18] XDP metadata via kfuncs for ice + VLAN hint Larysa Zaremba
2023-10-12 17:05 ` [xdp-hints] [PATCH bpf-next v6 01/18] ice: make RX hash reading code more reusable Larysa Zaremba
2023-10-12 17:05 ` [xdp-hints] [PATCH bpf-next v6 02/18] ice: make RX HW timestamp " Larysa Zaremba
2023-10-12 17:05 ` [xdp-hints] [PATCH bpf-next v6 03/18] ice: Make ptype internal to descriptor info processing Larysa Zaremba
2023-10-12 17:05 ` [xdp-hints] [PATCH bpf-next v6 04/18] ice: Introduce ice_xdp_buff Larysa Zaremba
2023-10-12 17:05 ` [xdp-hints] [PATCH bpf-next v6 05/18] ice: Support HW timestamp hint Larysa Zaremba
2023-10-12 17:05 ` [xdp-hints] [PATCH bpf-next v6 06/18] ice: Support RX hash XDP hint Larysa Zaremba
2023-10-20 15:27   ` [xdp-hints] " Maciej Fijalkowski
2023-10-23 10:04     ` Larysa Zaremba
2023-10-12 17:05 ` [xdp-hints] [PATCH bpf-next v6 07/18] ice: Support XDP hints in AF_XDP ZC mode Larysa Zaremba
2023-10-17 16:13   ` Maciej Fijalkowski [this message]
2023-10-17 16:37     ` [xdp-hints] " Magnus Karlsson
2023-10-17 16:45       ` Maciej Fijalkowski
2023-10-17 17:03         ` Larysa Zaremba
2023-10-18 10:43           ` Maciej Fijalkowski
2023-10-20 15:29   ` Maciej Fijalkowski
2023-10-23  9:37     ` Larysa Zaremba
2023-10-12 17:05 ` [xdp-hints] [PATCH bpf-next v6 08/18] xdp: Add VLAN tag hint Larysa Zaremba
2023-10-18 23:59   ` [xdp-hints] " Jakub Kicinski
2023-10-19  8:05     ` Larysa Zaremba
2023-10-12 17:05 ` [xdp-hints] [PATCH bpf-next v6 09/18] ice: Implement " Larysa Zaremba
2023-10-12 17:05 ` [xdp-hints] [PATCH bpf-next v6 10/18] ice: use VLAN proto from ring packet context in skb path Larysa Zaremba
2023-10-12 17:05 ` [xdp-hints] [PATCH bpf-next v6 11/18] ice: put XDP meta sources assignment under a static key condition Larysa Zaremba
2023-10-20 16:32   ` [xdp-hints] " Maciej Fijalkowski
2023-10-23  9:35     ` Larysa Zaremba
2023-10-28 19:55       ` Maciej Fijalkowski
2023-10-31 14:22         ` Larysa Zaremba
2023-11-02 13:23           ` Maciej Fijalkowski
2023-11-02 13:48             ` Larysa Zaremba
2023-10-31 17:32         ` Larysa Zaremba
2023-10-12 17:05 ` [xdp-hints] [PATCH bpf-next v6 12/18] veth: Implement VLAN tag XDP hint Larysa Zaremba
2023-10-12 17:05 ` [xdp-hints] [PATCH bpf-next v6 13/18] net: make vlan_get_tag() return -ENODATA instead of -EINVAL Larysa Zaremba
2023-10-12 17:05 ` [xdp-hints] [PATCH bpf-next v6 14/18] mlx5: implement VLAN tag XDP hint Larysa Zaremba
2023-10-17 12:40   ` [xdp-hints] " Tariq Toukan
2023-10-12 17:05 ` [xdp-hints] [PATCH bpf-next v6 15/18] selftests/bpf: Allow VLAN packets in xdp_hw_metadata Larysa Zaremba
2023-10-12 17:05 ` [xdp-hints] [PATCH bpf-next v6 16/18] selftests/bpf: Add flags and VLAN hint to xdp_hw_metadata Larysa Zaremba
2023-10-12 17:05 ` [xdp-hints] [PATCH bpf-next v6 17/18] selftests/bpf: Use AF_INET for TX in xdp_metadata Larysa Zaremba
2023-10-12 17:05 ` [xdp-hints] [PATCH bpf-next v6 18/18] selftests/bpf: Check VLAN tag and proto " Larysa Zaremba
