Basically a deer with a human face. Despite probably being some sort of magical nature spirit, his interests are primarily in technology and politics and science fiction.

Spent many years on Reddit before joining the Threadiverse as well.

  • 0 Posts
  • 1.25K Comments
Joined 2 years ago
Cake day: March 3rd, 2024

  • I’ve found Qwen3-30B-A3B-Thinking-2507 to be the best all-around “do stuff for me” model that fits on my hardware. I’ve mostly been using it for analyzing and summarizing documents I’ve got on my local hard drive: meeting transcripts, books, and so forth. It’s done surprisingly well on those transcripts; I daresay its summaries tease out patterns that a human wouldn’t have had an easy time spotting. (A rough sketch of that summarization workflow is at the end of this comment.)

    When it comes to creative writing I mix it up with Llama-3.3-70B-Instruct to enrich the text; using multiple models helps keep the output from becoming repetitive and too recognizable in style. (That two-model pass is sketched at the end of this comment as well.)

    I’ve got Qwen3-Coder-30B-A3B-Instruct kicking around as a programming assistant, but while it’s competent at its job I’ve been finding that the big online models do better (unsurprisingly), so I use those more. Perhaps if I were focusing on code analysis and cleanup I’d be using that one instead, but when it comes to writing big new classes or applications in one swoop it pays to go with the best right off the bat. Maybe once the IDEs get a little better at integrating LLMs, it might catch up.

    I’ve been using Ollama as the framework for running them; it’s got a nice simple API, and since it runs in the background it claims and releases memory as demand comes and goes. I used to use KoboldCPP, but I had to manually start and stop it a lot and that got tedious.
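
    For the summarization workflow mentioned above, here’s a minimal sketch of what a call through Ollama’s HTTP API can look like. It assumes Ollama is serving on its default port (11434); the model tag and file name are placeholders rather than anything from the original comments, so check `ollama list` for the real tag on your machine.

    ```python
    # Sketch: summarize a local transcript through Ollama's /api/chat endpoint.
    # Assumptions: Ollama is running on localhost:11434 and MODEL matches a tag
    # you've actually pulled; the tag below is a placeholder.
    import json
    import urllib.request

    OLLAMA_URL = "http://localhost:11434/api/chat"
    MODEL = "qwen3:30b-a3b-thinking-2507"  # placeholder tag

    def summarize(path: str) -> str:
        with open(path, encoding="utf-8") as f:
            transcript = f.read()

        payload = {
            "model": MODEL,
            "messages": [
                {"role": "system",
                 "content": "Summarize this document. Call out recurring themes and action items."},
                {"role": "user", "content": transcript},
            ],
            "stream": False,      # return one JSON object instead of a token stream
            "keep_alive": "10m",  # let Ollama unload the model after ~10 idle minutes
        }
        req = urllib.request.Request(
            OLLAMA_URL,
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["message"]["content"]

    if __name__ == "__main__":
        print(summarize("meeting-transcript.txt"))  # hypothetical file name
    ```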
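
    And a sketch of the two-model creative-writing pass: draft with one model, then have a second model rework the text so no single style dominates. Same assumptions as above; the model tags and prompts are placeholders, not the author’s actual setup.

    ```python
    # Sketch: draft with one local model, then revise with a second so the prose
    # doesn't settle into one recognizable style. Model tags are placeholders.
    import json
    import urllib.request

    OLLAMA_URL = "http://localhost:11434/api/generate"

    def generate(model: str, prompt: str) -> str:
        payload = {"model": model, "prompt": prompt, "stream": False}
        req = urllib.request.Request(
            OLLAMA_URL,
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["response"]

    draft = generate("qwen3:30b-a3b",
                     "Write a short scene: a ranger finds an abandoned observatory.")
    revised = generate("llama3.3:70b",
                       "Rewrite this scene in richer prose, keeping the events intact:\n\n" + draft)
    print(revised)
    ```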




  • > In order to make that assumption you have to first assume that they know qualitatively what is better and what is worse, that they have the appropriate skills or opportunity necessary to choose to opt in or opt out, and that they are making their decision on what tools to use based on which one is better or worse.

    > I don’t think you can make any of those assumptions. In fact I think you can assume the opposite.

    Isn’t that what you yourself are doing, right now?

    > The average person does not choose their tools based on what is the most effective at producing the correct truth but instead on which one is the most usable, user friendly, convenient, generally accepted, and relatively inexpensive.

    Yes, because people have more than one single criterion for determining whether a tool is “better.”

    If there were a machine that would always give me a thorough, well-researched answer to any question I put to it, but it did so by tattooing the answer onto my face with a rusty nail, I think I would not use that machine. I would prefer to use a different machine even if its answers were not as well-researched.

    But I wasn’t trying to present an argument for which is “better” in the first place, I should note. I’m just pointing out that AI isn’t going to “go away.” A huge number of people want to use AI. You may not personally want to, and that’s fine, but other people do and that’s also fine.



  • OpenAI has an enormous debt burden from having developed this tech in the first place. If OpenAI went bankrupt, the models would be sold off to companies that didn’t have that burden, so I doubt they’d “go away.”

    As I mentioned elsewhere in this thread, I use local LLMs on my own personal computer, and the cost of actually running inference is negligible.





  • It’s so near zero it makes no difference. It’s not a noticeable factor in my decision about whether to use it for any given task.

    Training a brand-new model is expensive, but once the model has been created it’s cheap to run. If OpenAI went bankrupt tomorrow and shut down, the models it had trained would just be sold off to other companies, and they’d run them instead, free from the debt burden OpenAI accrued from the research and training costs that went into producing them. That’s actually a fairly common pattern for first-movers: they spend a lot of money blazing the trail, and then other companies follow along afterwards and eat their lunch.