
This works for some websites, not all. And often it results in significant loading times, because you're loading tons of unnecessary data. In many cases it is much better to send requests more frequently, similarly to how a server side rendered website would behave. Instead of fetching 10,000 products, just fetch a page of products. Much faster. Have endpoints for filters and searches, also paginated. This can be fast if you do it right. Since you're only sending minimal amounts of data everything is fast. The server can handle lots of requests and clients don't need a powerful connection to have a good experience.
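
Roughly what I have in mind, sketched in TypeScript. The /api/products endpoint, its query parameters, and the page shape are all made up for illustration, not a real API:

  // Minimal sketch: fetch one page of products instead of the whole catalogue.
  interface ProductPage {
    items: { id: string; name: string; price: number }[];
    page: number;
    totalPages: number;
  }

  async function fetchProducts(page: number, search = "", pageSize = 50): Promise<ProductPage> {
    const params = new URLSearchParams({
      page: String(page),
      pageSize: String(pageSize),
      search,
    });
    const res = await fetch(`/api/products?${params}`);
    if (!res.ok) throw new Error(`Request failed: ${res.status}`);
    // A page of 50 products is a few kilobytes, not the whole catalogue.
    return res.json();
  }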

Network requests themselves are not slow. Even a computer with a bad connection and a long distance to the server will probably see less than a 500 ms round trip, more like 50 ms to a few hundred at most. Anything beyond that is not the client, it's the server. If you make the backend slow then it'll be slow. If you make the request really large then the bad connection will struggle.

It's also worth mentioning that I would much rather deliver a good service to most users than make everyone's experience worse just for the sake of allowing someone to load the page and continue using it offline. Most websites don't make much sense to use offline anyway. You need to send some requests; the best approach is simply to make them fast and small.





> In many cases it is much better to send requests more frequently, similarly to how a server side rendered website would behave. Instead of fetching 10,000 products, just fetch a page of products. Much faster. Have endpoints for filters and searches, also paginated. This can be fast if you do it right. Since you're only sending minimal amounts of data everything is fast.

This is actually a great example of what I mentioned elsewhere about how people seem to have forgotten how to make a SPA responsive. These are both simpler implementations, but not really the best choice for user interaction. A better solution is to take the paginated version and pre-cache what the user might do next: When the results are loaded, return page 1 and 2, display page 1, and cache page 2 so it can be displayed immediately if clicked on. If they do, display it immediately and silently request and cache page 3 in the background, etc etc. This keeps the SPA responsive with few to no loading spinners if they take the happy path, and because it's happening in the background you can automatically do retries for a flaky connection without bothering the user.
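
A rough TypeScript sketch of that prefetch-and-cache idea. The endpoint, page shape, and retry policy are invented for illustration; treat it as the idea, not the implementation:

  interface ProductPage {
    items: unknown[];
    page: number;
    totalPages: number;
  }

  async function fetchProducts(page: number): Promise<ProductPage> {
    const res = await fetch(`/api/products?page=${page}`);
    if (!res.ok) throw new Error(`Request failed: ${res.status}`);
    return res.json();
  }

  // Background retries: a flaky connection costs a delay, not an error dialog.
  async function fetchWithRetry(page: number, attempts = 3): Promise<ProductPage> {
    for (let attempt = 1; ; attempt++) {
      try {
        return await fetchProducts(page);
      } catch (err) {
        if (attempt >= attempts) throw err;
        await new Promise((resolve) => setTimeout(resolve, 500 * attempt));
      }
    }
  }

  const pageCache = new Map<number, ProductPage>();

  async function showPage(page: number, render: (p: ProductPage) => void): Promise<void> {
    // If a previous background fetch already cached this page, no spinner at all.
    const current = pageCache.get(page) ?? (await fetchWithRetry(page));
    pageCache.set(page, current);
    render(current);

    // Quietly prefetch the next page; a failure here just means a spinner later.
    if (page + 1 <= current.totalPages && !pageCache.has(page + 1)) {
      fetchWithRetry(page + 1)
        .then((next) => pageCache.set(page + 1, next))
        .catch(() => {});
    }
  }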

This was how Gmail and Google Maps blew people's minds when they were first released: by moving data to the frontend and pushing server requests into the background, the user could keep working without interruption while updates happened behind the scenes.


Yeah you can do that, but it's not necessary. You'll have to figure out a clever way to first wait for the data the user actually needs on the first load, then silently cache the other stuff in the background. Then you'll have the best of both worlds, but it complicates the application. It might make sense for Gmail and Google Maps, but there are a lot of web apps for which it does not make sense to put the work into this.

> Network requests themselves are not slow. Even a computer with a bad connection and long distance will probably have less than 500ms round trip. More like 50 to a few hundred at most. Anything beyond that is not the client, it's the server.

Not if the client is, e.g., constantly moving between cell towers, or right near the end of their range, a situation that frequently happens to me on road trips. Some combination of dropped packets and link-layer reconnections can make a roundtrip take 2 seconds at best, and upwards of 10 seconds at worst.

I don't at all disagree that too many tiny requests is the cause of many slow websites, nor that many SPAs have that issue. But it isn't a defining feature of the SPA model, and nothing's stopping you from thoughtfully batching the requests you do make.

What I mainly dislike is the idea of saving a bit of client effort at the cost of more roundtrips. E.g., one can write an SSR site where every form click takes a roundtrip for validation, and also rejects inputs until it gets a response. Many search forms in particular are guilty of this, and also run on an overloaded server. Bonus points if a few filter changes are enough to hit a 429.

That is to say, SSR makes sense for websites with little interaction, such as HN or old Reddit, which still run great on high-latency connections. But I get the sense it's being pushed into having the server respond to every minor redraw, which can easily drive up the number of roundtrips.

Personally, having learned web development only a few years ago, my impression is that roundtrips are nearly the costliest thing there is. A browser can do quite a lot in the span of 100,000 μs. Yet very few people seem to care about what's going over the wire. If done well, the SPA model seems to offer a great way to reduce this cost, but it's been tainted by the proliferation of overly massive JS blobs.

I guess the moral of the story is "people can write poorly-written websites in any rendering model, and there's no panacea except for careful optimization". Though I still don't get how JS blobs got so big in the first place.


> Not if the client is, e.g., constantly moving between cell towers, or right near the end of their range, a situation that frequently happens to me on road trips.

Right, but how often is a user both using my website and on a road trip with bad coverage? In the grand scheme of things, not very often. I also think this depends on what the round trip is for. Maybe the 10 s round trip is simply because it's a rather large request.

> I don't at all disagree that too many tiny requests is the cause of many slow websites

That's not really what I was saying, though I don't disagree with it. If you're sending multiple small requests then there are two ways to go about it: You can send all of them at the same time, then wait for responses and handle them as they come back. The other option is to send a request, wait for a response, then send the next etc. The latter option causes slowness, because now you're stacking round trips on top of one another. The former option can be completely fine.
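
A quick sketch of the difference, with made-up endpoints (illustrative only, not any particular API):

  // Fired one after another, the round trips stack.
  async function loadSerially() {
    const user = await fetch("/api/user").then((r) => r.json());
    const cart = await fetch("/api/cart").then((r) => r.json());
    const recs = await fetch("/api/recommendations").then((r) => r.json());
    return { user, cart, recs }; // latency ≈ 3 × round trip + server time
  }

  // Fired together, you pay roughly one round trip of latency.
  async function loadInParallel() {
    const [user, cart, recs] = await Promise.all([
      fetch("/api/user").then((r) => r.json()),
      fetch("/api/cart").then((r) => r.json()),
      fetch("/api/recommendations").then((r) => r.json()),
    ]);
    return { user, cart, recs }; // latency ≈ 1 × round trip + server time
  }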

But I'm not saying the client should be sending lots of requests. I'm saying they should get the data they need rather than all the data they could possibly need. This can be done in one request that returns a few kilobytes of data; a single TCP packet can in theory carry up to 64 KB, which is easily enough space to do useful stuff. For example, the front page of HN is about 8 KB, and it loads fast.

I'm also not saying you should use SSR. I do think that SSR is a great way to build websites, but my previous comment was specifically about SPAs. You don't have to send requests for every little thing - you can validate forms on the frontend in both SPAs and SSR.
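
A small sketch of what I mean by validating on the frontend; the form id, field name, and rules are invented:

  // Reject obviously bad input locally, before spending a round trip on it.
  function validateEmail(value: string): string | null {
    if (value.trim() === "") return "Email is required";
    if (!value.includes("@")) return "That doesn't look like an email address";
    return null;
  }

  const form = document.querySelector<HTMLFormElement>("#signup-form");
  form?.addEventListener("submit", (event) => {
    const input = form.querySelector<HTMLInputElement>("input[name=email]");
    const error = input ? validateEmail(input.value) : "Email field missing";
    if (error) {
      event.preventDefault(); // no request goes out for input we know is bad
      alert(error);
    }
    // The server should still re-validate on submit; the client check only saves round trips.
  });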

Round trips are costly, but not that costly. A lot of round trips are unavoidable; what I'm saying is that you should avoid making them slower by sending too much data, and avoid stacking them serially.



