I asked this same question on Reddit and got zero engagement, so perhaps Lemmy has people who care more about their hardware.

I recently decided to use some of the tools provided by Mr Salter (netburn), and I have to ask the community: would you rather see multi-client stress tests (4K streaming, VoIP, web browsing) run against a wireless router, or are single-client iperf tests good enough? Bear in mind that pretty much all publications that still test their devices (most don’t) rely on the single-client method.
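
To give a concrete idea of what I mean by “multi-client”, here is a rough sketch of the kind of harness I have in mind: a controller box SSHes into a few wireless clients and has each of them run iperf3 against a wired server at the same time. Everything here (hostnames, addresses, ports) is a placeholder, and it assumes passwordless SSH plus one iperf3 server instance listening per port on the wired box.

```python
# Rough multi-client harness sketch (illustrative only).
# Assumes: passwordless SSH to each wireless client, iperf3 installed
# everywhere, and one iperf3 server instance per port already running on
# the wired box, e.g. "iperf3 -s -p 5201", "iperf3 -s -p 5202", ...
import subprocess

LAN_SERVER = "192.168.1.10"                              # placeholder wired server
CLIENTS = ["client1", "client2", "client3", "client4"]   # placeholder SSH hosts
DURATION = 60                                            # seconds per run

procs = []
for i, host in enumerate(CLIENTS):
    port = 5201 + i                                      # one server instance per port
    cmd = ["ssh", host,
           f"iperf3 -c {LAN_SERVER} -p {port} -t {DURATION} -J"]
    procs.append((host, subprocess.Popen(cmd, stdout=subprocess.PIPE)))

for host, proc in procs:
    out, _ = proc.communicate()
    print(f"=== {host} ===")
    print(out.decode()[:200], "...")                     # start of the JSON report
```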

  • Puzzle_Sluts_4Ever@lemmy.world

    For a consumer/household: This is almost entirely unnecessary. Basically any halfway competent name-brand router/firewall will have no problems with this. You are more likely to see issues coming from your wifi network (which is probably part of your “router”), but that is also incredibly situational, depending on your environment (how many neighbors, etc.). But LAN->WAN NAT is a “solved problem”, as it were, and you mostly just want to stress test that speed-wise.

    For enterprise/hotels? Yeah, that is when you are going to have issues with too many clients. And the answer to that is almost always “buy enterprise hardware” rather than “figure out which netgear router I can tape to the ceiling”

    More data is always fun (unless you are the one collecting it…) but I just don’t see much benefit from this. And most of the suggestions in this thread are really just ISP tests.

    • SamB@lemmy.worldOP

      Oh, I agree wholeheartedly that collecting the data is not that much fun, especially since yes, I will have to do it. But I think users may benefit from seeing whether non-enterprise wireless routers can accomplish a certain task. For example, can that expensive Netgear router actually handle four client devices streaming 4K at the same time? What if we add browsing into the mix? The point of this thread was to get an idea of whether it’s actually worth running these tests (which take quite a bit of time) and whether people are interested in seeing this type of data on the web.

      • Puzzle_Sluts_4Ever@lemmy.world

        For example, can that expensive Netgear router actually handle four client devices streaming 4K at the same time?

        Can your ISP? If so, yes. Because ~25 Mbps * 4 is not a lot of data. And the NAT for four clients mapped to the same firewall/router is pretty trivial. And no, adding “browsing” is not going to be an issue.

        Again, NAT is easy. And it happens on every single packet (big ol’ asterisk on this, but not the venue to get into the specifics), regardless of whether it is one client or two. So what matters is the number of packets per second that can be processed, which these speed tests already cover (albeit somewhat obfuscated, because most people don’t understand the network layers).
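
        To put rough numbers on it: four 25 Mbps streams at a typical 1500-byte MTU come out to well under ten thousand packets per second, which is nothing for any halfway decent router. A quick back-of-the-envelope check:

        ```python
        # Back-of-the-envelope packets-per-second estimate (illustrative only).
        streams = 4
        per_stream_mbps = 25          # rough 4K stream bitrate
        mtu_bytes = 1500              # typical Ethernet MTU

        total_bps = streams * per_stream_mbps * 1_000_000
        pps = total_bps / (mtu_bytes * 8)
        print(f"{streams} x {per_stream_mbps} Mbps ~= {total_bps / 1e6:.0f} Mbps "
              f"~= {pps:,.0f} packets/s at a {mtu_bytes}-byte MTU")
        # -> roughly 8,300 packets/s; even a saturated gigabit link at the
        #    same MTU is only ~83,000 packets/s
        ```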

        And in the enterprise case? That is mostly about whether you can run a mesh network, what signal coverage you have, and the total number of clients (and packets) that need to be processed per second. Which… you are either a complete sicko who wouldn’t be watching reviews online or you are just going to buy a Ubiquiti or Omada setup.

        • SamB@lemmy.worldOP

          Can your ISP? If so, yes. Because ~25 Mbps * 4 is not a lot of data. And the NAT for four clients mapped to the same firewall/router is pretty trivial. And no, adding “browsing” is not going to be an issue.

          On paper it is not a lot of data, but adding more clients requesting 25 Mbps continuously, plus some spontaneous but intensive web browsing, can lead to latency spikes. And then the user no longer gets a good streaming/browsing experience. I’ve even seen it on an expensive (by consumer networking standards) router such as the GT-AX6000.
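
          That latency under load is exactly what I would like to capture in the tests. A rough sketch of the measurement side, assuming the load (streams plus browsing) is already running and using a placeholder gateway address as the ping target:

          ```python
          # Latency-under-load sketch (illustrative): sample ping while the other
          # clients stream/browse, then report the spread rather than just the average.
          import re
          import statistics
          import subprocess

          TARGET = "192.168.1.1"    # placeholder LAN gateway/router address
          SAMPLES = 100

          # Linux ping: -c count, -i interval in seconds (0.2 is the non-root minimum)
          out = subprocess.run(["ping", "-c", str(SAMPLES), "-i", "0.2", TARGET],
                               capture_output=True, text=True).stdout

          rtts = sorted(float(m.group(1)) for m in re.finditer(r"time=([\d.]+) ms", out))
          print(f"median: {statistics.median(rtts):.1f} ms")
          print(f"p95:    {rtts[int(len(rtts) * 0.95) - 1]:.1f} ms")
          print(f"max:    {rtts[-1]:.1f} ms")
          ```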

          So what matters is the number of packets per second that can be processed, which these speed tests already cover (albeit somewhat obfuscated, because most people don’t understand the network layers).

          I am just trying to better understand this stuff, so I have to ask: wouldn’t it be better to see how long it takes a client device to accomplish a certain task, rather than just glancing at the average Mbps in a graph? That’s what most publications are showing.

          • Puzzle_Sluts_4Ever@lemmy.world

            Those “lag spikes” are almost always a result of content servers, load balancing, and possibly even client system resources. Or even just a crappy modem.

            A quick overview of an average LAN is:

            • Clients connect to access points. This might be wifi, it might be an ethernet cable
            • Those access points connect to Network Switches
            • Those Switches connect to something that handles routing of packets. This is literally a router, but that term is overloaded so note the lowercase ‘r’
            • Eventually, everything connects to a Firewall which uses NAT to make it look like everything is one giant computer. This gets a bit weird when you move on to ipv6 but that breaks every bit of software at this point so whatever.
            • Your Firewall then connects to a Modem which then connects to the fat internet tubes.

            A “Router” generally handles everything up to the Modem. For a consumer/household, this is fine. Because no matter how many streams of Frasier you have running on your desktop, you aren’t actually generating that much traffic. Even a server isn’t going to generate that much traffic (unless you are speccing it out specifically with multiple NICs and so forth). Hell, your OS is more likely to fall over before you make any decent Router break a sweat. Your crappy Netgear can likely handle a LOT more than the god awful piece of crap modem Comcast rents out.

            The difference between consumer and enterprise is how many computers are involved. Because yes, if you add enough clients you can stress things. But in that case, we are talking closer to hundreds of clients than three or four kids who are on their phone and their tablet at the same time. Which, again, you are either a network sicko building a bespoke solution or you just pay the Ubiquiti tax.

            (And, as an aside, it usually isn’t even the network hardware that falls over in hotels. It is their captive portal. And it very much violates a lot of the terms of staying at a hotel and can be considered “hacking” for legal reasons but… if you know the trick to forcing a reboot you can usually fix the network for the entire hotel for the next couple days).

            And if we then consider the WAN (internet), the usual path these days is to then connect to a Content Delivery Network (CDN) that is effectively a bunch of small relatively local servers that mirror other parts of the internet. Cloudflare is probably the most famous. And a lot of those “Prove you are a human” checks are kind of masking the fetching of data (which doubles as a way to protect against DDOS attacks). This is almost definitely where those “lag spikes” came from, not your hardware.

            I am just trying to better understand this stuff, so I have to ask: wouldn’t it be better to see how long it takes a client device to accomplish a certain task, rather than just glancing at the average Mbps in a graph? That’s what most publications are showing.

            That is an incredibly user specific review. That doesn’t necessarily make it bad, but “how fast can I download an episode of Frasier” provides a subset of the amount of information you get from “what speeds does this router support?”

            But it sounds like what you want is a “review” of a full network (hardware) stack. And… that is again, not something you can get online. That is what you literally pay someone to come over and check out your building for. Because your wifi? That is going to be heavily impacted by where you place the access point, what you have in your walls, etc. Same with your modem (almost always a piece of crap) and even how many times the comcast tech spliced off your coax before it even gets to the box.

            Because, for a router? What matters is a controlled-ish environment and then how many packets it can process per second. And just measuring average speed over a large file transfer is probably the best way to get that as it normalizes all the CDN and stack shenanigans.
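
            And that measurement is trivial to script yourself; a minimal sketch, assuming a big test file served over HTTP from a wired box on the LAN (the URL is a placeholder):

            ```python
            # Average throughput over one large transfer (illustrative sketch).
            import time
            import urllib.request

            URL = "http://192.168.1.10/testfile.bin"   # placeholder file on a LAN server
            CHUNK = 1 << 20                            # read 1 MiB at a time

            start = time.monotonic()
            total = 0
            with urllib.request.urlopen(URL) as resp:
                while True:
                    chunk = resp.read(CHUNK)
                    if not chunk:
                        break
                    total += len(chunk)
            elapsed = time.monotonic() - start

            print(f"{total / 1e6:.0f} MB in {elapsed:.1f} s "
                  f"= {total * 8 / elapsed / 1e6:.0f} Mbps average")
            ```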

            Adding “more clients” mostly just lets you find out how your traffic is being routed to said CDN while not providing much more data than just a sustained high speed transfer would.


            In case it is not obvious, I am definitely a home networking sicko who decided an enterprise-level solution was the cost-effective way to set up a mesh network for wifi coverage (and… it will come out cheaper when I upgrade my access points in a few years). I’ve never found a need to test “number of clients” because I know that even with my complete mess of a DHCP table, it is nothing. What I do do is local transfers of large files between clients on different setups. So I will connect my laptop to the wifi and download some files from my NAS. Then I’ll do the same with a wired connection to a few different Switches. And then I’ll just have Steam download a game or two to make sure my modem isn’t a piece of crap.

            • SamB@lemmy.worldOP

              You’re talking about real-world scenarios, but I am just trying to simulate something that resembles general real-life conditions. So no CDN, and the modem and even the ISP don’t matter in this particular scenario. You have written a phenomenal response, so I am sorry to ask for some more of your time, but please check out this article: smallnetbuilder.com/wireless/wireless-reviews/2x2-ac-access-point-roundup-part-2/ This is pretty much what I am trying to accomplish, and it does seem that the APs can be stressed by fewer client devices than expected. Or maybe, again, there’s something that I am missing.

              • Puzzle_Sluts_4Ever@lemmy.world

                What you linked to is literally someone doing the kind of survey you pay a professional for (or do yourself). It is multiple clients running literal stress tests. Because yes, those are designed to represent website requests… except they are done near constantly for five minutes. That will never happen in the real world, between caching of resources and people generally wanting to at least look at a web page before loading the next one. And it mostly boils down to “packets per second”, but in a way that provides much less data in terms of what was actually being tested. It is simulating an enterprise network load in a manner that is very prone to quirks of the hardware (they even mention their wifi dongles weren’t properly supported in Linux) while drawing conclusions that are actually pretty suspect (the idea of needing to refresh the page because of errors CAN happen but is generally unlikely due to cached resources and the resiliency of codecs for media streaming. Most of the time, those “the page didn’t load right” moments are the CDN).

                Same with the roaming tests and the like. Yes, it is nice looking data but mostly it boils down to being INCREDIBLY situational and, honestly, not useful unless you live in that dude’s office.

                I don’t know that site very well. But, to me, this looks like a lot of data spam that can be summarized as “If you are dealing with enterprise level traffic, get an enterprise solution” while also having a LOT of affiliate links to buy the hardware.

                In a lot of ways, this reminds me of the computer hardware review channels. The better ones just play a suite of games and give you data from that because that conveys most of the useful information while being a realistic scenario. Gamers Nexus deserves an extra shout out (as they almost always do) for actually explaining why each game was used and reminding people of things like “Hitman 3 is a good bloatware test” because of the quirks of those games. But there are the ones who want to flood the consumer with nonsense data because it overloads their brains while coming to the same conclusion but being more “authoritative”. It is one of the reasons I actually love that when Jays Two Cents does a stress test they repeatedly emphasize “This will never happen to your computer in reality. We are doing this to stress test our cooling solution”.

                In fact, I would go so far as to say that reviews like this becoming ubiquitous would actually make the product space worse. We have already seen it happen. When people started discovering that mesh networks exist, there was a lot of interest. And many tech channels (I don’t want to JUST call out LTT but… I am gonna call out LTT because they always do this bullshit) reviewed enterprise equipment, particularly Ubiquiti. And that more or less led to the idea that you either buy a shitty netgear router for your dorm or you buy an enterprise solution. Which means there is no product that is good for 95% of consumers anymore. You either have trash or really expensive overkill (although, I AM a fan of TP-Link’s Omada approach as that is very much built out of consumer grade hardware at the low end). Because nobody needs feature X if they aren’t running a hotel but… are you really going to buy something that scores lower because it doesn’t have it?

                • SamB@lemmy.worldOP

                  I understand perfectly where you’re coming from, and I would love to find some way to objectively test wireless networking hardware that can be easily replicated by pretty much any other site. The Octoscope tools (now assimilated by Spirent) may be the closest, since we get to put the wireless AP/router in a box and then simulate the conditions we want. But for those that don’t have fat wallets, I guess these open-source tools are good enough, even if the results are heavily subjective. I know that people don’t like “good enough”, but at this point in time, even the fine edge between realistic and unrealistic is better than nothing.

                  And funny thing, WiFi 6 is not really that much better than WiFi 5, unless some very specific conditions are met - I’ve seen it in testing. So yes, I know what hype and advertising can do…

                  Gamers Nexus is my favorite when it comes to PC hardware as well, and I would love to see them try their hand at testing wireless networking hardware. Who knows, maybe they’ll create a standard for testing these non-enterprise wireless routers.

            • Reliant1087@lemmy.world

              Well, I have a mini PC running VyOS as a router, and the only time I’ve seen it even break a sweat is when I have to run OpenVPN or WireGuard with lots of throughput, which is probably because of the encryption involved.

              I’m curious about how the wireless access point part of the network works. I have no problem saturating my bandwidth on wired connections, but on wireless I do get choking when, say, 3-4 devices try to stream 4K.

  • Reliant1087@lemmy.world

    Multi-client tests seem better, honestly. I end up running 3-4 iperfs from different clients to a wired server to see how the bandwidth chokes. I wonder how it would be if one of the clients were running the iperf server as well.

    Real-life workloads like 4K streaming and VoIP with multiple clients seem much more realistic and representative.

    • SamB@lemmy.worldOP

      Seriously, Lemmy is the best. A few minutes in and people are already answering questions. The concept behind the multi-client tests is to SSH into the server and then simulate whatever type of traffic one wants. I have already done it on a couple of wireless routers and it worked great, or at least I hope so. Iperf can’t really accomplish this as far as I know; it’s only one instance on a single client at a time. Netburn seems to be the best tool so far, while keeping things free and open-source.
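
      For reference, the closest I could get with plain iperf3 is starting one server instance per port on the wired box, so each client can run its own test at the same time; a rough sketch of that server side (port numbers are just examples):

      ```python
      # Closest plain-iperf3 workaround I know of (sketch): one server instance
      # per port, so several clients can test simultaneously against the wired box.
      import subprocess

      BASE_PORT = 5201
      NUM_CLIENTS = 4    # placeholder: one port per simultaneous client

      servers = [subprocess.Popen(["iperf3", "-s", "-p", str(BASE_PORT + i)])
                 for i in range(NUM_CLIENTS)]
      print("Listening on ports",
            ", ".join(str(BASE_PORT + i) for i in range(NUM_CLIENTS)))

      # Each client then runs something like: iperf3 -c <server-ip> -p <its-port> -t 60
      try:
          for s in servers:
              s.wait()
      except KeyboardInterrupt:
          for s in servers:
              s.terminate()
      ```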

  • Devion@lemmy.world

    I recently ran into an issue with my home network where I suspected that the current wifi router (3-point mesh) couldn’t handle all the clients simultaneously. Not in terms of throughput, but just in keeping all the devices online in the first place.

    I have at minimum 30 clients online all the time, up to a max of 40 or something, depending on who’s home or what is active. (Went a bit overboard with home automation and stuff.)

    I was getting random disconnects or stalling wifi on some of the devices. The coverage was fine, so I figured it was just the number of wifi signals that was overwhelming the AP.

    Point of the story: I was disappointed that absolutely no review/benchmark ever pays attention to this kind of upper limit that seems to exist in practice. It’s all range and speed, but never about the maximum number of active clients.

    I’ve replaced the mesh network and everything is fine now. But I had to trial and error that shit…

    • SamB@lemmy.worldOP

      It isn’t really about the maximum number of client devices; it’s about what they do, what standard they use, how far away they are, and the interference (!). This is why it’s pretty much impossible to put a number on it and say: hey, this TP-Link router will handle 30 client devices, while this Asus router goes up to 100… In a sense, a multi-client stress test kind of addresses this issue, but it kind of doesn’t, because it’s extremely dependent on the conditions that the tester has in their lab/office/home.

      One thing to check in a review could be attenuation, as a better factor than distance (that way, you can reproduce the result in your house, even if just with a single-client test).
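
      For the rough mapping between the two, the idealized free-space path-loss formula gives a feel for how distance translates into attenuation (real walls and interference add on top of it); a quick sketch:

      ```python
      # Idealized free-space path loss (illustrative): a way to translate
      # "distance" into an attenuation figure that a reader could reproduce
      # with a programmable attenuator instead of pacing out meters.
      import math

      def fspl_db(distance_m: float, freq_hz: float) -> float:
          # FSPL(dB) = 20*log10(d) + 20*log10(f) - 147.55  (d in meters, f in Hz)
          return 20 * math.log10(distance_m) + 20 * math.log10(freq_hz) - 147.55

      for d in (1, 5, 10, 20):
          print(f"{d:>2} m @ 2.4 GHz: {fspl_db(d, 2.4e9):5.1f} dB   "
                f"@ 5 GHz: {fspl_db(d, 5e9):5.1f} dB")
      ```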

  • jet@hackertalks.com

    I’d totally be interested in this sort of testing methodology being published. Maybe in a wiki?

    Getting comparable numbers for bufferbloat and queuing would be great for commercial routers. Of course, you would want to compare against enterprise solutions so that people know where on the spectrum they’re landing.
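
    The usual quick check I have in mind is comparing idle latency to latency while the link is saturated; a rough sketch of that delta, with a placeholder gateway address and assuming you kick off a saturating transfer (e.g. an iperf3 run) between the two passes:

    ```python
    # Rough bufferbloat check (sketch): idle latency vs latency under load.
    import re
    import statistics
    import subprocess

    TARGET = "192.168.1.1"    # placeholder gateway address

    def avg_ping_ms(count: int = 20) -> float:
        out = subprocess.run(["ping", "-c", str(count), TARGET],
                             capture_output=True, text=True).stdout
        rtts = [float(m.group(1)) for m in re.finditer(r"time=([\d.]+) ms", out)]
        return statistics.mean(rtts)

    idle = avg_ping_ms()
    input("Start the saturating transfer now, then press Enter...")
    loaded = avg_ping_ms()
    print(f"idle: {idle:.1f} ms, loaded: {loaded:.1f} ms, "
          f"bloat: +{loaded - idle:.1f} ms")
    ```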

    Full disclosure: I roll my own GL.iNet OpenWrt router and I enforce different queues for QoS separation… i.e. downloading and streaming shouldn’t interfere with VoIP calls and gaming.

  • d3Xt3r@lemmy.world

    Not really, to be honest. I’d rather see how compatible a particular router is with popular open-source firmware, how frequently updates are delivered, etc.

    For instance, Asuswrt-Merlin is pretty good firmware for ASUS routers, but the stable updates are irregular - the last stable update as of now was two months ago, which to me is unacceptable considering there have been critical vulnerabilities in ASUS routers. Given how malware and botnets are increasingly targeting routers these days, it’s imperative that updates get delivered at least once a month, with an out-of-band policy for critical vulnerabilities.

  • TimeMuncher@lemmy.world

    One client downloading a torrent, a second client watching 1080p60 video from YT, and clients 3 & 4 transferring a few TB of data over the LAN.

  • A Mouse@midwest.social

    When I run iPerf tests I almost always use multiple clients because of exactly what another comment said: it’s more realistic and a better representation.

    • SamB@lemmy.worldOP

      So you basically spawn multiple instances of iperf3 and then connect all clients to a single server (using the same port)? What do you think about checking the latency experienced at the client level while various tests are running at the same time?

      • A Mouse@midwest.social

        Yeah, I spawn multiple instances of iperf3. Checking the latency would be a very useful metric; it should give a good indication of what the connection will be like under load.