What's the point of not supporting the TLS changes? A lot of the HTTP/3 holdup in other libraries has been the TLS situation, so not supporting that means you're getting basically minimal value for the work you're putting in.
Can you elaborate for those of us who aren't up to speed on the TLS + HTTP/3 situation? Is there a problem somewhere?
Also - are people still doing TLS in their app directly? Modern setups often terminate TLS at the gateway/edge/ingress instead of at the app level. If you use something like k8s, you can even do m2m TLS within your cluster via sidecars - with your app knowing absolutely nothing about TLS.
As defense in depth becomes more and more important, in-app TLS is becoming more, not less common. Especially as Zero Trust Network Access (ZTNA) is being mandated by the US federal government for contracts, the idea that you would terminate TLS at the edge and send unencrypted network traffic inside the server network is becoming a thing of the past.
This remains true even inside of a Kubernetes cluster. You shouldn't trust the network there any more than you should trust your enterprise network. I'm less sure about the implications of sending unencrypted traffic between a container and its sidecar, but certainly pods should be talking to each other over TLS.
The sidecar and the main container run in the same network namespace. They can reach each other over the loopback interface. It's "safe".
However, I'm also of the opinion you should just be mounting TLS certs in your container and using the TLS stack of whatever language you're using directly instead. It's a lot simpler.
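Roughly like this in Node, assuming the usual k8s TLS secret mount (the paths and port here are just placeholders):

    // Sketch: certs mounted into the container, TLS handled by the app's own stack
    import { readFileSync } from "node:fs";
    import { createServer } from "node:https";

    const server = createServer(
      {
        key: readFileSync("/etc/tls/tls.key"),   // assumed mount path
        cert: readFileSync("/etc/tls/tls.crt"),  // assumed mount path
      },
      (req, res) => {
        res.writeHead(200);
        res.end("ok\n");
      }
    );

    server.listen(8443); // placeholder port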
> but certainly pods should be talking to each other over TLS
They do under that scheme. TLS is terminated at the gateway, but k8s/sidecars handle m2m TLS. This provides some advantages for automating short-lived certs, makes deployments simpler, etc., and helps your pods remain unaware they are pods (kind of the holy grail of "cloud").
A lot of your edge/serverless stuff will be similar from my understanding.
How about on one machine internally, for example, using NGINX to handle HTTPS then doing an HTTP proxy pass to another process on localhost?
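Something like this, say (a minimal sketch, all names and paths made up):

    # nginx terminates TLS, then proxies plain HTTP to a local process
    server {
        listen 443 ssl;
        server_name example.com;

        ssl_certificate     /etc/nginx/certs/example.com.crt;
        ssl_certificate_key /etc/nginx/certs/example.com.key;

        location / {
            # this hop is unencrypted, but never leaves the machine
            proxy_pass http://127.0.0.1:8080;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-Proto https;
        }
    }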
It's up to you how much you trust the traffic on that machine and how you've set up access rights etc. In principle, a process with the right capabilities could snoop on the unencrypted traffic but might not be able to snoop on encrypted traffic. However, given how common local privilege escalation bugs are, if an attacker's process is running on the same system (especially one with enough privileges to capture network traffic), you have probably already lost.
One of QUIC's more controversial decisions is that it has no unencrypted mode, and nobody has braved the backlash to propose one, despite obvious use cases.
Awesome. We use a Java connector for one of our services; it would be interesting to see if this speeds it up by keeping that connection state up.
I've been running a few h3 sites with Nginx 1.25 for a few months; no problems so far, and it's nice that when I switch from WiFi to cellular it keeps the connection. We have a use case for that, so I've been trying to get it into production.
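For reference, the server block is roughly this on 1.25 (domain and cert paths are placeholders):

    server {
        # QUIC/HTTP3 listener (UDP) alongside the usual TCP+TLS listener
        listen 443 quic reuseport;
        listen 443 ssl;

        server_name example.com;
        ssl_certificate     /etc/nginx/certs/example.com.crt;
        ssl_certificate_key /etc/nginx/certs/example.com.key;

        http3 on;

        # advertise HTTP/3 so clients know they can switch to QUIC
        add_header Alt-Svc 'h3=":443"; ma=86400' always;
    }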
Node is starting to land WebTransport, intending it to be the JS API for QUIC. Still a rung down from HTTP/3, but I'm happy to see things inching along. https://github.com/nodejs/node/pull/52628
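For anyone who hasn't played with it, the client side of WebTransport looks roughly like this (the endpoint is made up); as I understand it, the Node PR is about exposing that same interface:

    // Connect over HTTP/3; one QUIC connection carries streams and datagrams
    const transport = new WebTransport("https://example.com:4433/wt");
    await transport.ready;

    // Unreliable, unordered QUIC datagrams
    const dgramWriter = transport.datagrams.writable.getWriter();
    await dgramWriter.write(new TextEncoder().encode("ping"));

    // A reliable bidirectional QUIC stream
    const stream = await transport.createBidirectionalStream();
    const writer = stream.writable.getWriter();
    await writer.write(new TextEncoder().encode("hello"));
    await writer.close();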
I hope this also makes it over to the JAX-RS client.
Not doing this would be weird. Still good to see.
[flagged]
Snark aside - the interesting thing here is that we have basically fixed all the issues.
HTTP/2 fixed all the issues it could while still keeping TCP as the transport layer. HTTP/3 fixes all the issues you can fix if you are allowed to change the transport layer.
There are no more layers to fix.
> There are no more layers to fix.
Or rather the next layers suffer too much protocol ossification to do anything about them. Replacing UDP isn't viable because too much network gear refuses to route anything but TCP and UDP. And the layer below that is upgrading at a pace comparable to our fight against climate change, with IPv6's 30th birthday coming up in two years.
Why would you want to replace UDP? It's just 8 bytes of overhead, around 0.5% of a typical packet's payload, with basically no guarantees or algorithms in the OS to handle. As simple as possible.
While it's true that there is severe ossification at those layers, changing them would have about zero impact on HTTP.
TCP isn't "ossified" it's stable.
Yup. We've finally been able to consolidate the redundant flow/congestion control in the HTTP multiplexing and TCP layers while solving the head-of-line blocking problems. It only took 15 years (2009 was SPDY)!
HTTP has been around since 1991, 33 years ago. We really pushed things faster in the second half of HTTP's life so far.
The layer to fix is not below, it's above. Web apps are ridiculously heavy and advertising / surveillance traffic has been the real driver for HTTP/3.
That doesn't make sense. HTTP/2 and HTTP/3 have largely been about better multiplexing and congestion control, particularly to reduce latency. Most ad-tech (at least historically) was served from separate domains over separate connections, so it shouldn't really get much benefit from HTTP/2 and 3.
The snark is because the "issues" are misattributed and HTTP does not need to be replaced.
TCP is fine. HTTP is fine. We need to stop pretending HTTP/2 and HTTP/3 are "upgrades" and just call them separate protocols. Maybe someday Chrome will be able to load from QUIC://news.ycombinator.com
QUIC is a lot better than TCP, but it still has plenty of room for improvement. However, I do not know enough about HTTP to know whether any of those improvements would be relevant for the HTTP use case, or only for other transport use cases.