• ch00f@lemmy.world · 20 hours ago

    Well, not off to a great start.

    To be clear, I think getting an LLM to run locally at all is super cool, but saying “go self-hosted” sort of glosses over the fact that getting a local LLM to do anything close to what ChatGPT can do is a very expensive hobby.

    • lexiw@lemmy.world · 15 hours ago

      I agree, it is a very expensive hobby, and it only gets decent in the 30-80B parameter range. However, the model you are using should not perform that badly; it sounds like you might be hitting a config issue. Would you mind sharing the CLI command you use to run it?

      • ch00f@lemmy.world · 5 hours ago

        Thanks for taking the time.

        So I’m not using a CLI. I’ve got the intelanalytics/ipex-llm-inference-cpp-xpu image running and hosting LLMs to be used by a separate open-webui container. I originally set it up with Deepseek-R1:latest per the tutorial to get the results above. This was straight out of the box with no tweaks.

        The interface offers some control settings (screenshot below). Is that what you’re talking about?

        • lexiw@lemmy.world · 4 hours ago

          Those values are most of what I was looking for. An LLM just predicts the next token (for simplicity, a word). It does this by assigning a probability to every candidate word and then picking one at random, weighted by those probabilities. For the sentence “a cat sat” it might produce “on: 0.6”, “down: 0.2”, and so on; 0.6 just means 60%, and all the values add up to 1 (100%).

          The candidate list can be as large as the model’s whole vocabulary, so you usually trim it. You can pick only from the top 10 candidates, which you control with top_k, or discard every word whose probability falls below 20% of the best candidate’s, which you control with min_p. And finally, when one token has a big probability followed by tokens with very low probabilities, you may want to squash those probabilities closer together, lowering the high ones and raising the low ones. You control this with the temperature parameter, where 0.1 squashes very little and 1 squashes a lot. In layman’s terms this is the amount of creativity of your model: 0 is none, 1 is a lot, 2 is mentally insane.
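
          If it helps, here is a rough Python sketch of how those three knobs interact. It is just an illustration of the idea, not code from any particular inference library; the vocabulary and scores are made up.

          ```python
          import numpy as np

          def sample_next_token(logits, temperature=0.8, top_k=10, min_p=0.2):
              """Pick one token id from raw model scores (logits).

              temperature: <1 sharpens the distribution, >1 flattens it.
              top_k:       keep only the k most likely candidates.
              min_p:       drop candidates below min_p * (probability of the best one).
              """
              # Temperature: scale scores before turning them into probabilities.
              probs = np.exp(logits / temperature)
              probs /= probs.sum()

              # top_k: zero out everything outside the k most probable tokens.
              if top_k and top_k < len(probs):
                  cutoff = np.sort(probs)[-top_k]
                  probs[probs < cutoff] = 0.0

              # min_p: drop tokens much less likely than the best remaining one.
              probs[probs < min_p * probs.max()] = 0.0

              # Renormalise and draw one token at random, weighted by probability.
              probs /= probs.sum()
              return np.random.choice(len(probs), p=probs)

          # Toy example: a 4-word "vocabulary" after the prompt "a cat sat".
          vocab = ["on", "down", "quietly", "banana"]
          logits = np.array([2.0, 1.0, 0.2, -3.0])   # made-up scores
          print(vocab[sample_next_token(logits)])
          ```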

          Now, without knowing your hardware or why you need Docker, it is hard to suggest a tool for running LLMs. I am not familiar with the image you are using, but it seems to be unmaintained and likely lacks the features a modern LLM needs to work properly. For consumer-grade hardware and personal use, the best tool these days is llama.cpp, usually through a newbie-friendly wrapper like LM Studio, which supports other backends as well and provides much more than just a UI for downloading and running models. My advice is to download it and start there (it will download the right backend for you, so there is no need to install anything else manually).
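
          And since you already have open-webui pointed at a local server: both LM Studio and llama.cpp’s llama-server expose an OpenAI-compatible endpoint, so once either one is running you can talk to it from anything that speaks that API. A minimal sketch, assuming a server on localhost:1234 with a model already loaded (the model name and prompt are placeholders):

          ```python
          # Minimal client for a local OpenAI-compatible server (LM Studio or llama-server).
          from openai import OpenAI

          # Local servers usually ignore the API key, but the client requires one.
          client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")

          reply = client.chat.completions.create(
              model="local-model",   # placeholder: use whatever model id your server reports
              messages=[{"role": "user", "content": "Explain top_k in one sentence."}],
              temperature=0.7,       # same knob discussed above
          )
          print(reply.choices[0].message.content)
          ```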