From mboxrd@z Thu Jan 1 00:00:00 1970
From: Toke Høiland-Jørgensen <toke@redhat.com>
To: Alexander Lobakin, Lorenzo Bianconi, Daniel Xu
Cc: Alexander Lobakin, Alexei Starovoitov, Daniel Borkmann,
 Andrii Nakryiko, Larysa Zaremba, Michal Swiatkowski,
 Jesper Dangaard Brouer, Björn Töpel, Magnus Karlsson,
 Maciej Fijalkowski, Jonathan Lemon, Lorenzo Bianconi, David Miller,
 Eric Dumazet, Jakub Kicinski, Paolo Abeni, Jesse Brandeburg,
 John Fastabend, Yajun Deng, Willem de Bruijn, bpf@vger.kernel.org,
 netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
 xdp-hints@xdp-project.net
Subject: [xdp-hints] Re: [PATCH RFC bpf-next 32/52] bpf, cpumap: switch to GRO from netif_receive_skb_list()
Date: Tue, 13 Aug 2024 16:54:21 +0200
Message-ID: <874j7oean6.fsf@toke.dk>
References: <20220628194812.1453059-1-alexandr.lobakin@intel.com>
 <20220628194812.1453059-33-alexandr.lobakin@intel.com>
 <54aab7ec-80e9-44fd-8249-fe0cabda0393@intel.com>
List-Id: XDP hardware hints design discussion
X-Clacks-Overhead: GNU Terry Pratchett

Alexander Lobakin writes:

> From: Alexander Lobakin
> Date: Thu, 8 Aug 2024 13:57:00 +0200
>
>> From: Lorenzo Bianconi
>> Date: Thu, 8 Aug 2024 06:54:06 +0200
>>
>>>> Hi Alexander,
>>>>
>>>> On Tue, Jun 28, 2022, at 12:47 PM, Alexander Lobakin wrote:
>>>>> cpumap has its own BH context based on a kthread, with a sane
>>>>> batch size of 8 frames per cycle.
>>>>>
>>>>> GRO can be used on its own; adjust the cpumap calls to the upper
>>>>> stack to use the GRO API instead of netif_receive_skb_list(),
>>>>> which processes skbs in batches but doesn't involve the GRO
>>>>> layer at all.
>>>>>
>>>>> It is most beneficial when the NIC the frames come from is XDP
>>>>> generic metadata-enabled, but in plenty of tests GRO performs
>>>>> better than listified receive even though it has to calculate
>>>>> full frame checksums on the CPU.
>>>>>
>>>>> As GRO passes the skbs to the upper stack in batches of
>>>>> @gro_normal_batch, i.e. 8 by default, and @skb->dev points to
>>>>> the device the frame comes from, it is enough to disable the GRO
>>>>> netdev feature on it to completely restore the original
>>>>> behaviour: untouched frames will be bulked and passed to the
>>>>> upper stack in batches of 8, as they were with
>>>>> netif_receive_skb_list().
>>>>>
>>>>> Signed-off-by: Alexander Lobakin
>>>>> ---
>>>>>  kernel/bpf/cpumap.c | 43 ++++++++++++++++++++++++++++++++++++++-----
>>>>>  1 file changed, 38 insertions(+), 5 deletions(-)
>>>>>
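To make the shape of this change concrete: cpumap currently bulks the
skbs from each batch into a list and hands them to the stack in one go,
and the patch feeds them through the GRO engine instead. A minimal
sketch of the two paths (my own illustration, not the actual patch; it
assumes a cpumap-owned napi_struct, which is what such a patch has to
add, and elides the final flush of the GRO deferred list):

	struct napi_struct napi;	/* assumed cpumap-owned instance */
	struct sk_buff *skbs[CPUMAP_BATCH];	/* CPUMAP_BATCH == 8 */
	LIST_HEAD(list);
	int i, n;	/* n = number of skbs built from this batch */

	/* current path: list-ified receive, the GRO layer is never
	 * involved
	 */
	for (i = 0; i < n; i++)
		list_add_tail(&skbs[i]->list, &list);
	netif_receive_skb_list(&list);

	/* sketched GRO path: run each skb through the GRO engine, then
	 * flush so nothing stays stranded when the batch ends;
	 * aggregated skbs reach the stack in batches of
	 * @gro_normal_batch
	 */
	for (i = 0; i < n; i++)
		napi_gro_receive(&napi, skbs[i]);
	napi_gro_flush(&napi, false);

And, as the commit message says, "ethtool -K <dev> gro off" on the
originating device makes the GRO path leave frames untouched, restoring
the old bulk-by-8 behaviour.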
>>>> AFAICT the cpumap + GRO is a good standalone improvement. I think
>>>> cpumap is still missing this.
>>
>> The only concern about having GRO in cpumap without metadata from the
>> NIC descriptor was that when the checksum status is missing, GRO
>> calculates the checksum on the CPU, which is not really fast. But I
>> remember that sometimes GRO was faster despite that.
>>
>>>> I have a production use case for this now. We want to do some
>>>> intelligent RX steering, and I think GRO would help over list-ified
>>>> receive in some cases. We would prefer to steer in HW (and thus get
>>>> the existing GRO support), but not all our NICs support it, so we
>>>> need a software fallback.
>>>>
>>>> Are you still interested in merging the cpumap + GRO patches?
>>
>> For sure, I can revive this part. I was planning to get back to this
>> branch, pick the patches which were not related to XDP hints, and
>> send them separately.
>>
>>> Hi Daniel and Alex,
>>>
>>> Recently I worked on a PoC to add GRO support to the cpumap codebase:
>>>
>>> - https://github.com/LorenzoBianconi/bpf-next/commit/a4b8264d5000ecf016da5a2dd9ac302deaf38b3e
>>>   Here I added GRO support to cpumap through gro_cells.
>>>
>>> - https://github.com/LorenzoBianconi/bpf-next/commit/da6cb32a4674aa72401c7414c9a8a0775ef41a55
>>>   Here I added GRO support to cpumap through the threaded-NAPI APIs
>>>   (with some changes to them).
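For readers who haven't seen it: gro_cells is an existing helper for
exactly this "GRO outside a driver" situation. Each CPU gets a small
cell with its own NAPI instance, and skbs queued to a cell get run
through napi_gro_receive() from its poll function. A minimal sketch of
the pattern (the cpumap_cells placement and surrounding names are made
up for illustration; this is not Lorenzo's actual commit):

	#include <net/gro_cells.h>

	static struct gro_cells cpumap_cells;	/* hypothetical location */

	/* setup: allocates one NAPI-backed gro_cell per CPU */
	int err = gro_cells_init(&cpumap_cells, dev);

	/* per packet, e.g. from the cpumap kthread: queue the skb on
	 * this CPU's cell and schedule its NAPI; the cell's poll
	 * function GROs the skb before it hits the upper stack
	 */
	gro_cells_receive(&cpumap_cells, skb);

	/* teardown */
	gro_cells_destroy(&cpumap_cells);

One thing to note is that gro_cells_init() takes a backing net_device,
and cpumap frames can come from any device, so which dev to anchor the
cells to is a design decision the PoC has to make.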
>> Hmm, when I was testing it, adding a whole NAPI to cpumap was sort of
>> overkill; that's why I separated the GRO structure from &napi_struct.
>>
>> Let me maybe find some free time; I would then test all 3 solutions
>> (mine, gro_cells, threaded NAPI) and pick/send the best?
>>
>>> Please note that I have not run any performance tests so far, just
>>> verified that it does not crash (I was planning to resume this work
>>> soon). Please let me know if it works for you.
>
> I did tests on both the threaded NAPI for cpumap and my old
> implementation with a traffic generator, with the following results
> (in Kpps):
>
>            direct Rx  direct GRO  cpumap  cpumap GRO
> baseline     2900        5800      2700   2700 (N/A)
> threaded                           2300      4000
> old GRO                            2300      4000
>
> IOW,
>
> 1. There is no difference in perf between Lorenzo's threaded-NAPI GRO
>    implementation and my old implementation, but Lorenzo's is also a
>    very nice cleanup: it switches cpumap to threaded NAPI completely,
>    and the final diffstat even removes more lines than it adds, while
>    mine adds a bunch of lines and refactors a couple hundred. So I'd
>    go with his variant.
>
> 2. After switching to NAPI, the performance without GRO decreases (2.3
>    Mpps vs 2.7 Mpps), but after enabling GRO the perf increases hugely
>    (4 Mpps vs 2.7 Mpps), even though the CPU needs to compute checksums
>    manually.

One question for this: IIUC, the benefit of GRO varies with the traffic
mix, depending on how much the GRO logic can actually aggregate. So did
you test the pathological case as well (spraying packets over so many
flows that there is basically no aggregation taking place)? Just to
make sure we don't accidentally screw up performance in that case while
optimising for the aggregating case :)

-Toke