• 0 Posts
  • 62 Comments
Joined 10 months ago
Cake day: January 26th, 2025


  • It’s literally the same chip designers, production facilities and software. Every product built on sub-5nm silicon competes for the same manufacturing capacity (fab time at TSMC in Taiwan), and all Nvidia GPUs share large parts of their software stack.

    The silicon fab producing the latest Blackwell AI chips is the same fab producing the latest consumer silicon for AMD, Apple, Intel and Nvidia alike. (Let’s ignore the fabs making memory for now.) Internally, I assume Nvidia has shifted substantial resources from the consumer-oriented parts of the company to the B2B-oriented parts, severely reducing consumer focus.

    And then there’s the intentional price inflation and market segmentation. Cheap consumer GPUs that are a bit too efficient at LLM inference would compete with Nvidia’s datacenter offerings. The amount of consumer-grade silicon used for AI inference is already staggering, and Nvidia is actively holding back that market segment.

  • Your numbers are old. If you are building today with anyone so much as mentioning AI, you might as well consider 100 kW/rack as “normal”. An off-the-shelf server CPU today runs at 500 W, and you usually have two of them per server, along with memory, storage and networking. With old-school 1U pizza boxes, that’s basically 100 kW/rack. If you start adding GPUs, just double or quadruple the power density right off the bat. Of course, assume everything is direct liquid cooled.
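
    A back-of-the-envelope version of that arithmetic in Python; the non-CPU overhead per server is my own assumption for illustration, not a vendor figure:

```python
# Rough rack power estimate for dense 1U dual-socket servers.
# The 1400 W of non-CPU draw (memory, storage, NICs, fans, PSU
# losses) is an assumed figure for illustration only.

CPU_WATTS = 500          # modern off-the-shelf server CPU
CPUS_PER_SERVER = 2
OVERHEAD_WATTS = 1400    # assumption: memory, storage, network, fans, PSU losses
SERVERS_PER_RACK = 42    # 1U "pizza boxes" filling a 42U rack

server_watts = CPU_WATTS * CPUS_PER_SERVER + OVERHEAD_WATTS
rack_kw = server_watts * SERVERS_PER_RACK / 1000
print(f"{server_watts} W/server -> {rack_kw:.0f} kW/rack")  # 2400 W/server -> ~101 kW/rack
```

    Adding GPUs on top of that doubles or quadruples the per-server figure, which is what pushes racks past the point where air cooling is practical.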


  • enumerator4829@sh.itjust.works to Linux@lemmy.ml · *Permanently Deleted* · 7 months ago

    What I’ve seen of RustDesk so far is that it’s absolutely not even close to the options available for X. It replaces TeamViewer, not thin clients.

    You would need the following to get viability in my eyes:

    • Multiple users per server (~50 users)
    • Enterprise SSO authentication, working Kerberos on the desktop
    • Good and easily deployable native clients for Windows, Linux and Mac, plus an HTML5 client
    • Performant headless software rendered desktops
    • GPU acceleration possible but not required
    • Clustering, HA control plane, load balancing
    • Configuration management available

    This isn’t even an edge case. Current and upcoming regulations on information security drag the entire industry this way. Medical, research, defence, banking - basically every regulated landscape gets easier to work in when going down this route. Close to zero worries about endpoint security. Microsoft is working hard on this. It’s easy to do with X. And the best thing on Wayland is RustDesk? As stated earlier, these issues were brought up and discarded as FUD in 2008, and here we are.

    Wayland isn’t a better replacement, after 15 years it’s still not a replacement. The Wayland implementations certainly haven’t been rushed, but the architecture was. At this point, fucking Arcan will be viable before Wayland.


  • Exactly my point. The issues people consider “solved” with Wayland today will be solved in production in 3-5 years.

    People are still running RHEL 7, and Wayland in RHEL 9 isn’t that polished. In 4-5 years, when RHEL 10 lands, it might start to be usable. Oh right, then we need another few years for vendors to port the garbage software that’s absolutely mission critical and barely works on Xorg - it sure as fuck won’t work in XWayland. I’m betting several large RHEL clients will either remain on RHEL 8 far past EOL or just switch to alternative distros.

    Basically, Xorg might be dead, but in some (paying commercial) contexts, Wayland won’t be a viable option within the next 5-10 years.



  • Please note that the nominal FLOP/s figures from both Nvidia and Huawei are kinda bullshit. The precision we run at greatly affects that number. Nvidia’s marketing nowadays refers to fp4 tensor operations, whereas traditionally FLOP/s are measured with fp64 matrix-matrix multiplication. That’s a lot more bits per FLOP.
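
    To make the “bits per FLOP” point concrete, here is a small sketch; the fp4 throughput figure below is hypothetical, purely for illustration:

```python
# Why "nominal FLOP/s" is not one number: the precision used changes
# how many bits each operand carries and which execution units are
# being counted. The 20_000 PFLOP/s figure is hypothetical.

FP4_BITS = 4    # what fp4 tensor-op marketing numbers are quoted at
FP64_BITS = 64  # what FLOP/s traditionally meant (fp64 matrix multiply)

nominal_fp4_pflops = 20_000  # hypothetical marketing number at fp4

ratio = FP64_BITS // FP4_BITS
print(f"An fp64 operand carries {ratio}x the bits of an fp4 operand,")
print(f"so {nominal_fp4_pflops} PFLOP/s at fp4 says nothing about the")
print("machine's fp64 matmul rate, which runs on separate, smaller pipelines.")
```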

    Also, that GPU-GPU bandwidth is kinda shit compared to Nvidia’s marketing numbers, if I’m parsing correctly (NVLink is 18x 10GB/s links per GPU, big ’B’ as in GB). I might be reading the numbers incorrectly, but anyway. How (and if) they manage multi-GPU cache coherency will be interesting to see. Both Nvidia and AMD have (to varying degrees) cache coherency in those settings. Developer experience matters…
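
    A quick sanity check on those link numbers, taking the comment’s per-link figures at face value; the comparison point is Nvidia’s published aggregate NVLink bandwidth for the H100 generation:

```python
# Aggregate GPU-to-GPU bandwidth from per-link figures. The 18 links at
# 10 GB/s each are the figures from the comment above; the ~900 GB/s
# reference is Nvidia's published aggregate NVLink bandwidth for H100.

links_per_gpu = 18
gb_per_s_per_link = 10  # big 'B': gigabytes per second, not gigabits

aggregate_gb_s = links_per_gpu * gb_per_s_per_link
print(f"{aggregate_gb_s} GB/s aggregate per GPU")    # 180 GB/s
print("vs ~900 GB/s quoted for H100-class NVLink")
```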

    Now, the really interesting things are power draw, density and price. Power draw and price obviously influence TCO. On 7nm, I guess the power bill won’t be very fun to read, but that’s just a guess. The density influences network options - are DAC cables viable at all, or is it (more expensive) optics all the way?


  • There is actually less to ’xkill’. It nukes the X window from orbit in a very violent manner, and the owning process(-tree) will usually just instantly curl up and die.

    The main benefit is that it doesn’t actually signal the process; it only severs the client’s X connection and nukes the window. As such, you can get rid of windows belonging to otherwise unkillable processes (zombies, etc.).

    Also, it’s fun. Just don’t miss the window and accidentally kill your WM. (Beat that, Wayland.)
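
    For the curious, the mechanism behind xkill is the X KillClient request. A minimal sketch using python-xlib (assumed installed); the window id below is hypothetical - in practice xkill obtains it by letting you click a window:

```python
# Requires python-xlib (pip install python-xlib) - an assumption,
# not something the comment above uses.
from Xlib import display

WINDOW_ID = 0x3a00007  # hypothetical window id, e.g. taken from `xwininfo`

d = display.Display()
win = d.create_resource_object('window', WINDOW_ID)

# KillClient severs the owning client's X connection. No signal is sent
# to the process; it typically exits on its own because Xlib's default
# IO error handler calls exit() when the connection drops.
win.kill_client()
d.sync()
```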


  • enumerator4829@sh.itjust.works to Linux@lemmy.ml · *Permanently Deleted* · 7 months ago

    Now consider that most enterprises are about five years behind that. It takes a few years before what’s available in Fedora trickles down to RHEL, and a few more years before it’s rolled out to clients. Ubuntu is on a similar timeline.

    The fixes you got two years ago might be rolled out in three years in those places. Oh, and these are the people forking over much of the money for the Wayland development efforts. The current state of Wayland, if you pay for it, is kinda meh.


  • enumerator4829@sh.itjust.works to Linux@lemmy.ml · *Permanently Deleted* · 7 months ago

    I’ll bite. It’s getting better, but still a long way to go.

    • No commercially viable remote desktop or thin client solutions. I’m not talking about just VNC - take a look at, for example, ThinLinc to see what I’m looking for: a complete solution. (Also, it took like ten rough years before basic unencrypted single-user VNC was available at all.) Free multimillion-dollar business idea right here, folks!
    • Related to the above point - software-rendered Wayland is painful. To experience this yourself, install any distro in VirtualBox or VMware or whatever and compare the usability of an Xorg DE (with compositing turned off) against the same Wayland DE. Just look at the click-to-photon latency and weep. I’ve seen X11 perform better with VNC over WAN.
    • “We don’t need network transparency, VNC will save us.” See points above.
    • “Every frame is perfect” went just as well as could be expected - there is a reason VSYNC is an option in games and professional graphics applications. Thanks, Valve.
    • I’m assuming wlroots still won’t work on Nvidia, and that the Gnome/KDE implementations are still a hodgepodge, and that Nvidia will still ask me to install the supported Xorg drivers. If I’m wrong, it only took a decade or so to get a desktop working on hardware from the dominant GPU vendor. (Tangentially related - historically the only vendor with product lines specifically for serving GPU-accelerated desktops to thin clients)
    • After over a decade of struggles, we can finally (mostly) share our screens in Zoom. Or so I’m told.

    But what do I know, I’ve only deployed and managed desktop Linux for a few thousand people. People were screaming about these design flaws back in 2008 when this all started. The criticisms above were known and dismissed as FUD, and here we are. A few architectural changes back then, and we could have finished this migration a decade faster. Just imagine - screen sharing during the pandemic!

    As an example, see Arcan, a small research project with an impressively large subset of features from both X11 and Wayland (including working screen sharing, network transparency and a functioning security model). I wouldn’t use it in production, but if it were more than one guy in a basement working on it, it would probably become very usable fairly fast, compared to the decade and a half that Red Hat and friends have poured into Wayland thus far. Using a good architecture from the start would have done wonders. And Wayland isn’t even close to a good architecture. It’s just what we have to work with now.

    Hopefully Xorg can die at some point, a decade or so from now. I’m just glad I don’t work with desktops anymore; the swap to Wayland will be painful for a lot of organisations.