From: Toke Høiland-Jørgensen <toke@toke.dk>
To: Florian Kauer, xdp-hints@xdp-project.net, xdp-newbies@vger.kernel.org
Date: Tue, 13 Feb 2024 14:00:31 +0100
Message-ID: <87o7ckfrio.fsf@toke.dk>
Subject: [xdp-hints] Re: XDP Redirect and TX Metadata

Florian Kauer writes:

> On 12.02.24 14:41, Toke Høiland-Jørgensen wrote:
>> Florian Kauer writes:
>>
>>> Hi,
>>> I am currently implementing an eBPF for redirecting from one physical
>>> interface to another. So basically loading the following at enp8s0:
>>>
>>> SEC("prog")
>>> int xdp_redirect(struct xdp_md *ctx) {
>>>         /* ... */
>>>         return bpf_redirect(3 /* enp5s0 */, 0);
>>> }
>>>
>>> I made three observations that I would like to discuss with you:
>>>
>>> 1. The redirection only works when I ALSO load some eBPF at the egress
>>> interface (enp5s0). It can be just
>>>
>>> SEC("prog")
>>> int xdp_pass(struct xdp_md *ctx) {
>>>         return XDP_PASS;
>>> }
>>>
>>> but there has to be at least something. Otherwise, only xdp_redirect is
>>> called, but xdp_devmap_xmit is not. It seems somewhat reasonable that
>>> the interface where the traffic is redirected to also needs to have the
>>> XDP functionality initialized somehow, but it was unexpected at first.
>>> I tested it with an i225-IT (igc driver) and an 82576 (igb driver).
>>> So, is this a bug or a feature?
>>
>> I personally consider it a bug, but all the Intel drivers work this way,
>> unfortunately. There was some discussion around making the XDP feature
>> bits read-write, making it possible to enable XDP via ethtool instead of
>> having to load a dummy XDP program. But no patches have materialised yet.
>
> I see, thanks! So at least it is expected behavior for now.
> How do other non-Intel drivers handle this?

I believe Mellanox drivers have some kind of global switch that can
completely disable XDP, but if it's enabled (which it is by default)
everything works, including redirect. Other drivers just have XDP
features always enabled.
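For completeness, attaching such a dummy program with libbpf can look
roughly like the sketch below. This is only an illustration, not code
from the thread: the object file name (xdp_pass.o) and the error
handling are assumptions, and bpf_xdp_attach() assumes a reasonably
recent libbpf (older code used bpf_set_link_xdp_fd() instead).

        /* Sketch: attach the dummy xdp_pass program above to the egress
         * interface so the driver initialises its XDP resources. */
        #include <bpf/libbpf.h>
        #include <net/if.h>
        #include <stdio.h>

        int main(void)
        {
                struct bpf_object *obj;
                struct bpf_program *prog;
                int ifindex = if_nametoindex("enp5s0");

                if (!ifindex)
                        return 1;

                /* Open and load the object containing the xdp_pass program */
                obj = bpf_object__open_file("xdp_pass.o", NULL);
                if (!obj || bpf_object__load(obj))
                        return 1;

                prog = bpf_object__find_program_by_name(obj, "xdp_pass");
                if (!prog)
                        return 1;

                /* Attach it to the egress interface */
                if (bpf_xdp_attach(ifindex, bpf_program__fd(prog), 0, NULL)) {
                        fprintf(stderr, "failed to attach XDP program\n");
                        return 1;
                }

                return 0;
        }

With iproute2, something like 'ip link set dev enp5s0 xdp obj
xdp_pass.o' should achieve the same, since iproute2 looks for the
"prog" section by default.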
>>> 2. For the RX side, the metadata is documented as "XDP RX Metadata"
>>> (https://docs.kernel.org/networking/xdp-rx-metadata.html), while for
>>> TX it is "AF_XDP TX Metadata"
>>> (https://www.kernel.org/doc/html/next/networking/xsk-tx-metadata.html).
>>> That seems to imply that TX metadata only works for AF_XDP, but not
>>> for direct redirection. Is there a reason for that?
>>
>> Well, IIRC, AF_XDP was the most pressing use case, and no one has gotten
>> around to extending this to the regular XDP forwarding path yet.
>
> Ok, that is fine. I had the fear that there is some fundamental problem
> that prevents implementing this.
>
>>> 3. At least for the igc, the egress queue is currently selected by
>>> using the smp_processor_id.
>>> (https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/net/ethernet/intel/igc/igc_main.c?h=v6.8-rc4#n2453)
>>> For our application, I would like to define the queue on a per-packet
>>> basis via the eBPF. This would allow steering the traffic to the
>>> correct queue when using TAPRIO full hardware offload. Do you see any
>>> problem with introducing a new metadata field to define the egress
>>> queue?
>>
>> Well, a couple :)
>>
>> 1. We'd have to find agreement across drivers for a numbering scheme to
>> refer to queues.
>
> Good point! At least we already refer to queues in the MQPRIO qdisc
> ( queues count1@offset1 count2@offset2 ... ).
> There might be different alternatives (like using the traffic class)
> for this IF we want to implement this ...

Oh, plenty of options; the tricky bit is agreeing on one, and figuring
out what the right kernel abstraction is. For instance, in the regular
networking stack, the concept of a queue is exposed from the driver
into the core stack, but for XDP it isn't.

>> 2. Selecting queues based on the CPU index the way it's done now means
>> we guarantee that the same queue will only be served from one CPU,
>> which means we don't have to do any locking; that helps tremendously
>> with performance. Drivers handle the case where there are more CPUs
>> than queues a bit differently, but the ones that do handle it
>> generally use a lock (with the associated performance overhead).
>
> ... but this will likely completely prevent implementing this in the
> straightforward way. You are right, we do not want the CPUs to
> constantly fight for access to the same queues for every packet.
>
>> As a workaround, you can use a cpumap to steer packets to specific CPUs
>> and perform the egress redirect inside the cpumap instead of directly
>> on RX. Requires a bit of knowledge of the hardware configuration, but
>> it may be enough for what you're trying to do.
>
> So I really like this approach on first glance since it prevents the
> issue you describe above.
>
> However, as you write, it is very hardware-dependent and also depends on
> how exactly the driver handles the CPU -> Queue mapping internally.
> I have the feeling that the mapping CPU % Queue Number -> Queue as it is
> implemented at the moment might neither be stable over time nor across
> different drivers, even if it is the most likely one.

No, the application would have to figure that out. FWIW I looked at this
for other reasons at some point and didn't find any drivers that did
something different than using the CPU number (with or without the
modulus operation). So in practice I think using the CPU ID as a proxy
for queue number will work just fine on most hardware...
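To make the cpumap workaround a bit more concrete, a rough sketch could
look like the below. This is not tested code from the thread: the map
size, the hard-coded CPU index and the egress ifindex (3, i.e. enp5s0
as in the example above) are all assumptions, a real program would pick
the CPU per packet from its headers or traffic class, and the
"xdp/cpumap" section name assumes a recent libbpf.

        #include <linux/bpf.h>
        #include <bpf/bpf_helpers.h>

        /* CPUMAP whose entries are populated from userspace with a
         * struct bpf_cpumap_val (queue size + fd of the cpumap program). */
        struct {
                __uint(type, BPF_MAP_TYPE_CPUMAP);
                __uint(max_entries, 8);
                __type(key, __u32);
                __type(value, struct bpf_cpumap_val);
        } cpu_map SEC(".maps");

        /* Runs on the target CPU for packets steered through cpu_map;
         * with current drivers this also selects the matching TX queue. */
        SEC("xdp/cpumap")
        int xdp_cpumap_redirect(struct xdp_md *ctx)
        {
                return bpf_redirect(3 /* enp5s0 */, 0);
        }

        /* Attached to the RX interface (enp8s0 in the example above). */
        SEC("xdp")
        int xdp_steer(struct xdp_md *ctx)
        {
                /* Placeholder: derive the target CPU (and thus,
                 * indirectly, the egress queue) per packet. */
                __u32 cpu = 0;

                return bpf_redirect_map(&cpu_map, cpu, 0);
        }

        char _license[] SEC("license") = "GPL";

Userspace then fills each cpu_map slot with a qsize and the fd of
xdp_cpumap_redirect in bpf_prog.fd, so the egress redirect runs on the
CPU (and hence, in practice, the queue) you picked.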
> What do you think about maybe exporting an interface (e.g. via
> ethtool) to define the mapping of CPU -> Queue?

Well, this would require ethtool to know about those queues, which means
defining a driver<->stack concept of queues for XDP. There was some
attempt at doing this some years ago, but it never went anywhere,
unfortunately. I personally think doing something like this would be
worthwhile, but it's a decidedly non-trivial undertaking :)

-Toke