If you have an international user base, look for a provider with a multi-redundant network connected to key Internet exchanges, tho in fairness most good data centres will be. The other thing you can do is on the hardware side, with things like SSDs and enough RAM.
Here's ioping on two servers, one with SSDs and the other with standard HDDs (each configured in a RAID array with the same number of drives):
SSD server:
ioping -RD -w 10 .
--- . (ext4 /dev/md2) ioping statistics ---
112.1 k requests completed in 9.48 s, 437.9 MiB read, 11.8 k iops, 46.2 MiB/s
generated 112.1 k requests in 10.0 s, 437.9 MiB, 11.2 k iops, 43.8 MiB/s
min/avg/max/mdev = 53.2 us / 84.6 us / 6.10 ms / 45.5 us
HDD server:
ioping -RD -w 10 .
--- . (ext4 /dev/md2) ioping statistics ---
11.8 k requests completed in 9.92 s, 46.1 MiB read, 1.19 k iops, 4.65 MiB/s
generated 11.8 k requests in 10.0 s, 46.1 MiB, 1.18 k iops, 4.61 MiB/s
min/avg/max/mdev = 147.7 us / 840.6 us / 163.3 ms / 2.71 ms
However, standard SSDs and enough RAM are fine for most applications. What will be interesting (for me at least) is to see how they differ with a LiveView app (which I can’t wait to try!)
It will depend on their infrastructure - most well-connected countries should be more than fine tho (and for those with poor infrastructure, the experience will most likely be typical of what they’re already used to).
Hence the usual stuff of minimizing how much you send, and especially how many separate requests are made. It’s also not hard to replicate copies around: I tend to make things static, and the dynamic parts are regenerated out of band so those updates can be deployed everywhere. Few things of mine actually require a database outside of the “generator” server. I’m a huge fan of static site generation on the fly. ^.^
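To make that concrete, here's a minimal sketch of the out-of-band regeneration idea in Elixir (the `Regenerator` module and the page/render pairs are hypothetical, just for illustration): a periodic process re-renders the dynamic parts to static files, which the web server then serves like any other static asset.

```elixir
defmodule Regenerator do
  # Hypothetical sketch: periodically re-render dynamic pages to
  # static files, out of band from the request path, so the web
  # server only ever serves plain static assets.
  use GenServer

  @interval :timer.minutes(5)
  @out_dir "/var/www/site"

  # `pages` is a list of {relative_path, zero-arity render fun} pairs.
  def start_link(pages), do: GenServer.start_link(__MODULE__, pages)

  @impl true
  def init(pages) do
    schedule()
    {:ok, pages}
  end

  @impl true
  def handle_info(:regenerate, pages) do
    # Write to a temp file and rename, so readers never see a
    # half-written page.
    for {path, render_fun} <- pages do
      tmp = Path.join(@out_dir, path <> ".tmp")
      File.write!(tmp, render_fun.())
      File.rename!(tmp, Path.join(@out_dir, path))
    end

    schedule()
    {:noreply, pages}
  end

  defp schedule, do: Process.send_after(self(), :regenerate, @interval)
end
```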
Lots of sites in the real world are not like that, especially ones in the US or other countries not subject to the GDPR, which load marketing/tracking stuff like crazy and easily reach 200-300 requests in the network tab.
For example, the current forum makes 90 requests and takes 4.7 seconds to load, and I am in the UK (near the server, I guess):
It (the forum homepage) takes 156ms to load here as a guest and 223ms logged in with Safari (and keep in mind the forum is an SPA, so heavier than many sites). With Firefox it takes 526ms, and with Chrome 304ms.
You need to remember that not everyone has a high-spec computer and super-fast broadband. I am always amazed how many devs forget such a basic reality check.
Yes that’s true (and in these cases those load times are likely to be typical of similar sites).
For the same reason, I am not a huge fan of SPAs, and it’s why I think LiveView, with the first page server-rendered, is a great thing. What’s it like visiting a LiveView site on your connection?
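For anyone who hasn't seen one, here's a minimal LiveView sketch (the module and markup are mine, just for illustration, not from any site mentioned here): the first request is rendered to plain HTML on the server, so the first paint doesn't wait on a JavaScript bundle, and only afterwards does the client upgrade to a WebSocket and receive small diffs.

```elixir
defmodule MyAppWeb.ClockLive do
  # Minimal LiveView sketch: the initial render is plain server-side
  # HTML (fast first paint); updates then arrive as small diffs over
  # a WebSocket instead of full page loads.
  use Phoenix.LiveView

  def mount(_params, _session, socket) do
    # connected?/1 is false during the initial HTTP render and true
    # once the WebSocket is up, so the timer only runs on the live socket.
    if connected?(socket), do: :timer.send_interval(1_000, :tick)
    {:ok, assign(socket, now: DateTime.utc_now())}
  end

  def handle_info(:tick, socket) do
    {:noreply, assign(socket, now: DateTime.utc_now())}
  end

  def render(assigns) do
    ~H"""
    <p>Server time: <%= @now %></p>
    """
  end
end
```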
From what I understand, the server of this forum is hosted in Ireland. Now, imagine someone visiting this forum from the US west coast, Brazil, Australia, Asia, or other faraway locations, on connections similar to mine or even slower. Their user experience will not be great, for sure.
Bear in mind that I am not bashing this forum. I am just using it as an example of a common problem we face on the internet, because devs simply forget that the rest of the world doesn’t have the same conditions they have.
I don’t know of any off the top of my head that are hosted professionally (just the demo sites people have posted using things like free tiers or small cloud-based hosts).
Could be a great topic for a thread on EF: “List of sites running LiveView in production”.
I also refuse to host ad-infested websites that require that stuff, though; mine are always free, with no ads. I pay for everything out of pocket from my own day job, and I host over 40 sites for various people as long as they abide by my very few rules, no ads being one of them (they can take money via Patreon and such, I don’t restrict that, but no tracking stuff). I’ve worked on all the sites to make them lightweight, I compile the JavaScript and CSS into minimal bundles, etc… etc…
I do host a couple of Discourse forums, and they are not nearly as lightweight as I’d want (we really need a Discourse clone made in Rust; this heavy stuff for no reason is sooooooo annoying, and I’d have made one by now if I’d had the time). But they run on dedicated hardware, and even with as many connections as Discourse opens, it’s HTTP/2, and a page load still only totals around 1-2 MB (I just tested on one of mine). Not much I can do about Discourse until something better comes out.
I have a gigabit network with 12ms latency to my servers that are 1500 miles away and yet this site took 9.6s to load here, lol.
EDIT: Just tested: 160 Mbit/s download to the desktop I’m on now, but it’s going through an old router that’s constraining its speed (I’ve been too cheap to replace it with a new one), with 12ms unloaded latency and 54ms loaded latency.
This just confirms what I keep telling devs all my life… pick up a very old laptop you have at home, go for a bus or train ride, and try out what you build, then tell me if it is still fast. “Oh damn, I need to work on it, but on my shiny high-tech laptop and office/home broadband it works like a charm!”
Strong static typing. I “enjoy” erlang and elixir more (erlang for syntax, elixir for metaprogramming, both for the actor model), but the lack of strong static (especially static) typing really makes my code a lot buggier in them than in something like Rust, where if it compiles then it’s generally bug-free. Gleam is interesting, but it’s also missing the vast Rust library ecosystem.
Yeah, Discourse is extremely connection-heavy, which is “fine’ish” on HTTP/2 but still inefficient, because it doesn’t preload assets via HTTP/2; rather, a lot of requests are serialized, issued only after earlier things run and ask for more stuff… If Discourse actually used HTTP/2 preloading effectively, it would make a world of difference.
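As a sketch of what effective preloading could look like on an Elixir stack (the plug and asset paths here are hypothetical, not how Discourse works): sending `Link: rel=preload` headers on the initial HTML response tells the browser to fetch critical assets immediately, instead of waiting for the page's own JavaScript to request them.

```elixir
defmodule MyAppWeb.Plugs.Preload do
  # Hypothetical sketch: advertise critical assets via Link preload
  # headers on the HTML response so the browser can start fetching
  # them right away (some HTTP/2 servers also use this header to
  # trigger server push).
  import Plug.Conn

  @assets [
    {"/assets/app.css", "style"},
    {"/assets/app.js", "script"}
  ]

  def init(opts), do: opts

  def call(conn, _opts) do
    link =
      @assets
      |> Enum.map(fn {path, kind} -> "<#{path}>; rel=preload; as=#{kind}" end)
      |> Enum.join(", ")

    put_resp_header(conn, "link", link)
  end
end
```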
Heh, yeah I find erlang more readable than elixir… by far… ^.^;
Not that either is hard at all; the complexity differences between the languages are minor compared to the concepts they actually supply.