Conversation

@icing (Contributor) commented Oct 17, 2025

When using recvmsg/recvmmsg with GSO enabled, we may receive many packets in a single message buffer. Instead of invoking recv_cb for each of them, add the GSO size to the callback and pass it the complete message, so the callback can iterate over the segments itself.

@icing icing added the HTTP/3 h3 or quic related label Oct 17, 2025
@icing icing force-pushed the recvmmsg-improvements branch from 409d015 to 6374740 on October 17, 2025 13:24
@icing icing requested a review from bagder October 17, 2025 15:21
@bagder bagder closed this in 5cefb45 Oct 17, 2025
@icing (Contributor, Author) commented Oct 17, 2025

@tatsuhiro-t we now see, in our testing of 50 parallel downloads of 100MB each (with an Apache backend), that nghttpx runs at 100% CPU and curl at about 67% on Debian sid.

The download speed then depends on the machine. My old one reaches ~500 MB/s and @bagder's beast ~1000 MB/s. The CPU ratios stay the same.

Just an FYI on what we are currently seeing with your nice work. Thanks!

Clarification: that is on localhost, but the MTU stays at 14xx bytes. I imagine that a third of the nghttpx CPU goes into the backend fetch.

@tatsuhiro-t (Contributor) commented:

Your curl work does a brilliant job here, great work.
Yeah, basically a proxy does two things, acting as both client and server, so it roughly does 2x the work and CPU usage tends to be higher. I made some optimizations in the current nghttp2 master, so hopefully the next nghttp2 release will improve the situation a little. Building nghttpx, including ngtcp2 and nghttp3, with LTO enabled also boosts RPS a little (~10%).



3 participants