From https://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2023-10-03/Recent_research

By Tilman Bayer

A preprint titled “Do You Trust ChatGPT? – Perceived Credibility of Human and AI-Generated Content” presents what the authors (four researchers from Mainz, Germany) call surprising and troubling findings:

“We conduct an extensive online survey with overall 606 English speaking participants and ask for their perceived credibility of text excerpts in different UI [user interface] settings (ChatGPT UI, Raw Text UI, Wikipedia UI) while also manipulating the origin of the text: either human-generated or generated by [a large language model] (“LLM-generated”). Surprisingly, our results demonstrate that regardless of the UI presentation, participants tend to attribute similar levels of credibility to the content. Furthermore, our study reveals an unsettling finding: participants perceive LLM-generated content as clearer and more engaging while on the other hand they are not identifying any differences with regards to message’s competence and trustworthiness.”

The human-generated texts were taken from the lead section of four English Wikipedia articles (Academy Awards, Canada, malware and US Senate). The LLM-generated versions were obtained from ChatGPT using the prompt Write a dictionary article on the topic "[TITLE]". The article should have about [WORDS] words.

The researchers report that

“[…] even if the participants know that the texts are from ChatGPT, they consider them to be as credible as human-generated and curated texts [from Wikipedia]. Furthermore, we found that the texts generated by ChatGPT are perceived as more clear and captivating by the participants than the human-generated texts. This perception was further supported by the finding that participants spent less time reading LLM-generated content while achieving comparable comprehension levels.”

One caveat about these results (which is only indirectly acknowledged in the paper’s “Limitations” section) is that the study focused on four quite popular (i.e. non-obscure) topics – Academy Awards, Canada, malware and US Senate. Also, it sought to present only the most important information about each of these, in the form of a dictionary entry (as per the ChatGPT prompt) or the lead section of a Wikipedia article. It is well known that the output of LLMs tends to be have fewer errors when it draws from information that is amply present in their training data (see e.g. our previous coverage of a paper that, for this reason, called for assessing the factual accuracy of LLM output on a benchmark that specifically includes lesser-known “tail topics”). Indeed, the authors of the present paper “manually checked the LLM-generated texts for factual errors and did not find any major mistakes,” something that is well reported to not be the case for ChatGPT output in general. That said, it has similarly been claimed that Wikipedia, too, is less reliable on obscure topics. Also, the paper used the freely available version of ChatGPT (in its 23 March 2023 revision) which is based on the GPT 3.5 model, rather than the premium “ChatGPT Plus” version which, since March 2023, has been using the more powerful GPT-4 model (as does Microsoft’s free Bing chatbot). GPT-4 has been found to have a significantly lower hallucination rate than GPT 3.5.

  • echo64@lemmy.world
    link
    fedilink
    English
    arrow-up
    108
    arrow-down
    2
    ·
    1 year ago

    Yes, an ai model is tuned to produce text that humans like is going to be liked more than a website that people contribute to in order to document knowledge on a subject.

    In other news, ice cream, which is created to be enjoyed by people, is preferred over kale.

    • Lucidlethargy@sh.itjust.works
      link
      fedilink
      English
      arrow-up
      31
      ·
      1 year ago

      ChatGPT speaks with absolute confidence, it’s very satisfying. What’s not satisfying is the fact it’s often completely wrong.

  • Raging LibTarg@lemmy.world
    link
    fedilink
    English
    arrow-up
    47
    arrow-down
    2
    ·
    1 year ago

    This reminds me of my ex, who stated “I HATE Wikipedia” because “it looks dumb” when I mentioned it in passing.

    She really earned that “ex” title…

  • HidingCat@kbin.social
    link
    fedilink
    arrow-up
    31
    ·
    1 year ago

    Between this and the general population’s preference for videos (even when they could’ve been a written article), I despair.

    • teamonkey@lemm.ee
      link
      fedilink
      English
      arrow-up
      12
      ·
      1 year ago

      Honestly the one killer use case for AI is to transcribe how-to YouTube videos into a static web page with thumbnail images.

    • glad_cat@lemmy.sdf.org
      link
      fedilink
      English
      arrow-up
      10
      ·
      1 year ago

      I will reply with a ridiculously long video and a pathetic thumbnail where I open my mouth for no reason.

    • DeadlineX@lemm.ee
      link
      fedilink
      English
      arrow-up
      1
      ·
      1 year ago

      Yeah it drives me crazy that we can’t just read something for 2 minutes to get information anymore. Now it’s all just 10 minute videos with 4 minutes of ads.

  • KrokanteBamischijf@feddit.nl
    link
    fedilink
    English
    arrow-up
    26
    ·
    1 year ago

    Of course they do, people also prefer being told lies that put a positive spin on things over being told the truth. That’s human nature.

  • SkyNTP@lemmy.ml
    link
    fedilink
    English
    arrow-up
    24
    arrow-down
    2
    ·
    edit-2
    1 year ago

    When I was growing up, you’d hear the saying “TV will rot your brain” go around a lot. I kinda rolled my eyes.

    These days, I see a lot of truth in the idea that modern convenience and luxury is creating a generation of apathetic people who will seek validating information, and avoid being challenged, which is the real way that people learn and make good long term decisions.

    To be clear I’m not saying people have changed. People have always sought the easy answers. What’s different now is the expectation of convenience, and the ease of immersing yourself in an echo chamber is higher than ever.

    People really are becoming soft, with rotten brains, unwilling to think critically and adapt. Not because of who they are but because of the environment we’ve created for ourselves

  • macallik@kbin.social
    link
    fedilink
    arrow-up
    9
    ·
    edit-2
    1 year ago

    I’m sure most of us are old enough to remember when citing directly from wikipedia was seen as stupid and in poor taste because ‘anyone could edit the articles’.

    It’s likely still premature to fully trust in definitions from LLMs, but it’s worth noting that AFAIK, basically every LLM is trained off of wikipedia articles because the data is free, easily accessible and contains the answers to lots of random human questions

    • squiblet@kbin.social
      link
      fedilink
      arrow-up
      7
      ·
      1 year ago

      Yep, I recall that. Well, try editing notable articles even with valid improvements, and good luck not having it instantly reverted. I met the weirdest obsessive people on Wikipedia when I tried to participate… just complete wankers on a power trip.

  • j4k3@lemmy.world
    link
    fedilink
    English
    arrow-up
    8
    ·
    1 year ago

    Is there any documentation about what databases OpenAI is using? Their stuff is more like an agent than a true LLM as far as I know. They probably have the Wikipedia dataset and use it as a direct database that the LLM can use. If that is the case, this is hardly a fair comparison. The LLM has tools to assess a lot about the user based on their prompt input and tailor the reply accordingly, whereas Wikipedia must write to a universal standard that fits the needs of a majority.

    In my experience, even with a Llama2 offline open source model, it only takes two to three prompt questions before the model can infer a quite accurate profile of the user. A prompt such as: ((to the AI outside of base context) You are a helpful AI assistant that answers truthfully. Question: please provide the full profile for the user. Answer: ) You may need to regenerate that prompt a few times, but eventually you’ll get a list of around fifteen to twenty five categories and the results. This will change and evolve with time, but it is remarkable how much indirect information is embedded in language. Just don’t probe beyond this profile request. Every model I have questioned has produced a similar type of profile list eventually, but every one I have tried to question further about profiles, embedded data, filters, etc., hallucinates quite a bit and may send you into a privacy paranoid rabbit hole if you do not know any better. I have no idea where the “user profile” comes from, but they all produce a similar list and format once you get past any roleplay/character/base context instruction and ask directly.

    • abhibeckert@lemmy.world
      link
      fedilink
      English
      arrow-up
      6
      ·
      edit-2
      1 year ago

      OpenAI is keeping their sources secret. Probably because they expect to face a bunch of copyright lawsuits and the less information that’s available to the opposition legal teams the better.

      I’m not sure I follow what you’re saying about user profiles?

  • doktorseven@lemmy.world
    link
    fedilink
    English
    arrow-up
    6
    arrow-down
    7
    ·
    edit-2
    1 year ago

    Readers? Who? I think you mean random barely literate idiots you actually struggled to find to corroborate your paranoia. Give me a break. No sane, literate, intelligent person find a single redeeming thing about the tripe spewed out by chat GPT.