• andyburke@fedia.io · 1 year ago

    There are a lot of open source LLMs being developed, ones you can run at home on your own data.

      • LainTrain@lemmy.dbzer0.com · 1 year ago

        What would be the threshold for them to “take off”? It’s all already out there, no?

                • LainTrain@lemmy.dbzer0.com · 1 year ago

                  Honestly, I think speed is something I don’t care too much about with models, because even something like ChatGPT will be slower than Google for most things, and if a task is complex enough to be a good use case for an LLM, speed is unlikely to be the primary bottleneck.

                  My gf’s private chatbot right now is a combination of Mistral 7B with a custom finetune, and she directs some queries to ChatGPT if I ask (I got free tokens way back, might as well burn through them).
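The hybrid setup described above (a local Mistral 7B finetune by default, forwarding to ChatGPT only on request) can be sketched roughly as below. This is a hypothetical illustration, not the commenter's actual code: the `@gpt ` prefix, the `route_query` function, and the model stubs are all assumptions standing in for a real llama.cpp endpoint and the OpenAI API.

```python
# Hypothetical routing sketch: the local model handles everything
# unless the user explicitly prefixes the query to send it to ChatGPT.

def route_query(query: str, local_model, remote_model) -> str:
    """Forward to the remote model only when explicitly requested."""
    marker = "@gpt "
    if query.lower().startswith(marker):
        # Strip the marker and spend the (free) OpenAI tokens.
        return remote_model(query[len(marker):])
    return local_model(query)

# Stubs standing in for a local Mistral 7B finetune and the ChatGPT API.
local = lambda q: f"[mistral] {q}"
remote = lambda q: f"[chatgpt] {q}"

print(route_query("hello", local, remote))       # [mistral] hello
print(route_query("@gpt hello", local, remote))  # [chatgpt] hello
```

In practice the stubs would be replaced by, e.g., a call into a llama.cpp server for the local side and an OpenAI chat-completions call for the remote side; the routing logic itself stays this simple.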

                  How much of an improvement is Mixtral over Mistral in practice?

                  • just another dev@lemmy.my-box.dev · 1 year ago

                    SillyTavern, by any chance?

                    And I’d say the difference between Mistral and Mixtral is pretty big for general usage; it feels like a next-generation model.