• Zos_Kia@lemmynsfw.com · 5 months ago

    You cannot in all seriousness use an LLM as a research tool. That is explicitly not what it is useful for. An LLM’s latent space is like a person’s memory: sure, there is some accurate data in there, but also a lot of “misremembered” or “misinterpreted” facts, and some bullshit.

    Think of it instead as a reasoning engine. Provide it some data you have researched yourself and ask it to aggregate or summarize it, and you’ll get great results. But asking it to “do the research for you” is plain stupid. If you’re going to query a probabilistic machine for accurate information, you’d be better off rolling dice.
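    To make that concrete, here’s a minimal sketch of the “summarize what I give you” pattern, assuming the OpenAI Python client; the model name and the documents are placeholders, not a prescription:

    ```python
    # Grounded summarization: the model only aggregates text we supply,
    # rather than being trusted to "do the research" from its latent space.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Placeholder sources you gathered and verified yourself.
    documents = [
        "Source A: ... your researched excerpt ...",
        "Source B: ... your researched excerpt ...",
    ]

    prompt = (
        "Summarize the key points of the following sources. "
        "Use only the text provided; do not add outside facts.\n\n"
        + "\n\n".join(documents)
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )

    print(response.choices[0].message.content)
    ```

    The instruction to use only the supplied text is what keeps the task in aggregation territory instead of recall, which is exactly where these models fall over.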

    • SpaceNoodle@lemmy.world · 5 months ago

      Exactly my point, except that the word “reasoning” is far too generous: it implies there is some way for it to guarantee that its logic is sound, rather than merely producing something that resembles legible text.

      • Zos_Kia@lemmynsfw.com · 5 months ago

        I don’t understand. Have you ever worked an office job? Most humans have no way to guarantee their logic is sound, yet they are the ones who do all of the reasoning on Earth. Why would you have higher standards for a machine?