• sunzu@kbin.run · 3 months ago

    405B ain’t running local unless you got a proper setup that is enterprise grade lol

    I think 70B is possible but I haven’t found anyone confirming it yet.

    Also, I’d like to know the specs of whoever did it.
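
    A rough back-of-envelope for the weight memory alone, as a sketch (the bytes-per-parameter figures are approximations, and real usage adds KV cache and runtime overhead on top):

      # Crude weight-memory estimate: parameters x bytes per parameter.
      # The bytes-per-param values below are illustrative, not exact.
      def weight_gb(params_billion: float, bytes_per_param: float) -> float:
          # params_billion * 1e9 params * bytes, divided by 1e9 bytes per GB
          return params_billion * bytes_per_param

      for size in (405, 70, 8):
          fp16 = weight_gb(size, 2.0)    # FP16/BF16 weights
          q4 = weight_gb(size, 0.55)     # ~4-bit quantization incl. overhead
          print(f"{size}B: ~{fp16:.0f} GB at FP16, ~{q4:.0f} GB at 4-bit")

    That puts 405B at roughly 810 GB before any quantization, while a 4-bit 70B lands somewhere around 40 GB, which is why one is enterprise territory and the other is borderline home-server material.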

      • bizarroland@fedia.io · 3 months ago

        I have a home server with 140 gigs of RAM; it was surprisingly cheap. It’s an HP Z6 with the Xeon Gold 6146 processor.

        I found a seller who was selling it with a low-spec Silver Xeon and 16 gigs of RAM for like 250 bucks.

        Found the processor upgrade for about $120 and spent another $150 on 128 GB of second-hand ECC DDR4.

        I think the total cost was something like $700 after throwing a couple of 8 TB hard drives in.

        I’ve also placed an Nvidia 4070 in it, which I got doing some horse trading.

        How close am I on the specs to being able to run the 70B version?

        • BaroqueInMind@lemmy.one · 3 months ago

          What’s the bus speed of the RAM? You might run it just fine but still be bottlenecked there.
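
          For the CPU/RAM side, a crude ceiling is memory bandwidth divided by the bytes that have to stream per token; a sketch with illustrative bandwidth numbers, not measurements:

            # Crude upper bound: each generated token streams the active weights once,
            # so tokens/s can't exceed bandwidth / model size (real throughput is lower).
            def max_tokens_per_s(bandwidth_gb_s: float, model_gb: float) -> float:
                return bandwidth_gb_s / model_gb

            # Illustrative numbers, not benchmarks:
            print(max_tokens_per_s(100.0, 40.0))   # ~2.5 tok/s: multi-channel DDR4 vs a 4-bit 70B
            print(max_tokens_per_s(500.0, 40.0))   # ~12.5 tok/s: GPU-class bandwidth for comparison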

            • BaroqueInMind@lemmy.one · 3 months ago

              With 144 GB of total RAM, you should be able to run any CPU-intensive software.

              LLMs mostly use GPU VRAM though, so it doesn’t matter much how much system RAM you have: VRAM is what xformers and the tensor kernels prioritize and have ultimately been optimized to use over the CPU and system RAM.

      • sunzu@kbin.run · 3 months ago

        I’m gonna add some RAM in the hope I can split the original 70B between GPU and RAM. 8B is great as it is.

        Looks like it should be possible; not sure how big a performance hit offloading to RAM will cause. Fafo.
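
        One way to guess the GPU/RAM split before downloading anything is to compare per-layer size against usable VRAM; a sketch with ballpark numbers (the layer count is for Llama-3-70B, the quantized size is approximate):

          # Estimate how many transformer layers of a quantized 70B fit in VRAM.
          N_LAYERS = 80                  # Llama-3-70B has 80 transformer layers
          MODEL_GB_Q4 = 40.0             # ~4-bit quantized weights, ballpark
          PER_LAYER_GB = MODEL_GB_Q4 / N_LAYERS

          def layers_on_gpu(vram_gb: float, reserve_gb: float = 2.0) -> int:
              usable = max(vram_gb - reserve_gb, 0.0)   # keep headroom for KV cache etc.
              return min(N_LAYERS, int(usable / PER_LAYER_GB))

          print(layers_on_gpu(12.0))   # a 12 GB card: roughly 20 of 80 layers on the GPU
          print(layers_on_gpu(48.0))   # 48 GB of VRAM: the whole quantized model fits

        Whatever doesn’t fit runs from system RAM, so throughput drifts toward that bandwidth ceiling; that’s the fafo part.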

    • raldone01@lemmy.world · 3 months ago

      I regularly run Llama 3 70B unquantized on two P40s and the CPU at around 7 tokens/s. It’s usable but not very fast.

        • raldone01@lemmy.world · 3 months ago

          My specs because you asked:

          CPU: Intel(R) Xeon(R) E5-2699 v3 (72) @ 3.60 GHz
          GPU 1: NVIDIA Tesla P40 [Discrete]
          GPU 2: NVIDIA Tesla P40 [Discrete]
          GPU 3: Matrox Electronics Systems Ltd. MGA G200EH
          Memory: 66.75 GiB / 251.75 GiB (27%)
          Swap: 75.50 MiB / 40.00 GiB (0%)
          
            • raldone01@lemmy.world · 3 months ago

              Each card has 24 GB, so 48 GB of VRAM total. I use Ollama; it fills whatever VRAM is available on both cards and runs the rest on the CPU cores.
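
              If you want a number for how a given split performs, Ollama’s generate endpoint reports eval_count and eval_duration, so tokens/s is easy to compute; a sketch assuming a local server on the default port and a pulled llama3:70b tag:

                import requests

                # Ask the local Ollama server for a completion and compute tokens/s
                # from the eval stats in the (non-streaming) response.
                resp = requests.post(
                    "http://localhost:11434/api/generate",
                    json={
                        "model": "llama3:70b",
                        "prompt": "Explain memory bandwidth in one paragraph.",
                        "stream": False,
                        # "options": {"num_gpu": 60},  # optionally cap layers offloaded to VRAM
                    },
                    timeout=600,
                ).json()

                tokens = resp["eval_count"]
                seconds = resp["eval_duration"] / 1e9   # eval_duration is in nanoseconds
                print(f"{tokens} tokens in {seconds:.1f} s -> {tokens / seconds:.1f} tok/s")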

        • raldone01@lemmy.world · 3 months ago

          What are you asking exactly?

          What do you want to run? I assume you have a 24GB GPU and 64GB host RAM?