• Deckweiss@lemmy.world · edited · 1 year ago

    I don’t understand. Are there places where using ChatGPT for papers is illegal?

    The state where I live explicitly allows it. Only plagiarism is prohibited. But having ChatGPT formulate the results of your scientific work, correct the grammar, improve the style, etc. doesn’t bother anybody.

    If you use ChatGPT you should still read over the output, because it can say something wrong about your results, and you should run a plagiarism tool on it, because it can plagiarize unintentionally. So what’s the big deal?
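
    For what it’s worth, the plagiarism-tool step doesn’t have to be a black box. Here’s a minimal sketch of the underlying idea (verbatim n-gram overlap) in Python; the texts and the 5% threshold are placeholders, and real tools do far more (paraphrase detection, a huge reference corpus):

    ```python
    # Toy check: flag word 5-grams that a draft shares verbatim with references.
    def ngrams(text, n=5):
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    def overlap_ratio(draft, references, n=5):
        draft_grams = ngrams(draft, n)
        if not draft_grams:
            return 0.0
        ref_grams = set().union(*(ngrams(r, n) for r in references))
        return len(draft_grams & ref_grams) / len(draft_grams)

    draft = "..."         # your ChatGPT-assisted text goes here
    references = ["..."]  # the sources you want to compare against
    if overlap_ratio(draft, references) > 0.05:  # arbitrary threshold
        print("Noticeable verbatim overlap - review before submitting.")
    ```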

    • alienanimals@lemmy.world · 1 year ago

      It’s not a big deal. People are just upset that kids have more tools/resources than they did. They would prefer kids wrote on paper with pencil and didn’t use calculators or any of the other tools they will have available to them in the workforce.

      • Phanatik@kbin.social · 1 year ago

        There’s a difference between using ChatGPT to help you write a paper and having ChatGPT write the paper for you. One constitutes plagiarism, which schools and universities are strongly against.

        The problem is being able to differentiate between a paper that’s been written by a human (which may or may not have been written with ChatGPT’s assistance) and a paper entirely written by ChatGPT and presented as a student’s own work.

        I want to strongly stress that the latter situation is plagiarism. The argument doesn’t even depend on the plagiarism that ChatGPT itself commits. The definition of plagiarism is simple: ChatGPT wrote a paper, you the student did not, and you are presenting ChatGPT’s paper as your own; ergo, plagiarism.

          • olmec@lemm.ee · 1 year ago

            Correct me if I am wrong about current teaching methods, but I feel like the way you outlined things is how school is already taught. Calculators were “banned” until about 6th grade, because we were learning the rules of math. Sure, we could give calculators to 3rd graders, but they would learn that 2 + 2 = 4 because the calculator said so, not because they worked it out. Calculators were allowed once you got into geometry and algebra, where the actual calculation is merely a mechanism for the logical thinking you are learning. Computing the value of 5/7 is trivial next to finding the value of X that makes Y = 0.
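
            (To make that last point concrete, with a toy equation of my own rather than anything from a curriculum: for y = 7x − 5, the algebra is isolating x; the calculator only does the final division.)

            ```python
            from fractions import Fraction

            # Toy example: for y = 7x - 5, setting y = 0 and isolating x
            # is the skill; evaluating 5/7 is the calculator's trivial last step.
            a, b = 7, -5          # y = a*x + b
            x = Fraction(-b, a)   # algebra: x = -b/a
            print(x, float(x))    # 5/7 0.7142857142857143
            ```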

            I am not close to the education sector, but I imagine LLMs are going to be used similarly; we just don’t have the best way laid out yet. I can totally see a scenario where, in 2030, students have to write and edit their own papers until they reach grade 6 or so. Then, rather than writing a paper which tests all your language arts skills, you will proofread three different papers written by an LLM, with a hyper focus on one skill set. One week it may be active vs. passive voice, another using gerunds correctly. Just like with math and the calculator, you will move beyond learning the mechanics of reading and writing and focus on composing thoughts in a clear manner. This doesn’t seem like a reach; we just don’t have curriculum ready to take advantage of it yet.
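
            Generating those skill-focused exercises is easy to sketch even today. Assuming the openai Python client, something like this could produce a passage that deliberately misuses one skill for students to find and fix (the model name and prompt are just placeholders, not a real curriculum):

            ```python
            from openai import OpenAI

            client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

            def make_exercise(skill, topic):
                # Ask for a short passage that misuses exactly one skill,
                # so the student can hunt down and correct the errors.
                prompt = (f"Write a 120-word paragraph about {topic} that repeatedly "
                          f"misuses {skill}. Do not mark or explain the errors.")
                resp = client.chat.completions.create(
                    model="gpt-4o-mini",  # placeholder model
                    messages=[{"role": "user", "content": prompt}],
                )
                return resp.choices[0].message.content

            print(make_exercise("passive voice where active voice would be clearer",
                                "the water cycle"))
            ```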

            • LukeMedia@lemmy.world · 1 year ago

              The experience I had was that they were banned until absolutely necessary. I remember in some of my first algebra classes we did many repetitive equations by hand that, in a real-world scenario, would be done with a calculator. I understand when and how it’s necessary to teach without assistive tools first, so the fundamental understanding of a subject isn’t left out; I’m not of the opinion that should change.

              The way schools treat this stuff likely varies region to region, even school to school. I’m mainly speaking from my own experience: I don’t feel I got enough education about properly using assistive tools until later in high school, and even then there were a lot of things I found important that I had to learn on my own.

              Honestly, I think there were definitely students for whom this was the right pace. I think this is one of the symptoms of needing to teach a classroom in a way that everybody can keep up with. Those who understand the material quicker end up getting frustrated, spending time on repetition instead of continued learning. That’s not to say I have a viable solution, nor can I confirm this is a symptom of that.

        • RiikkaTheIcePrincess@kbin.social · 1 year ago

          There’s a difference between using ChatGPT to help you write a paper and having ChatGPT write the paper for you.

          Yeah, one is what many “AI” fans insist is what’s happening, and the other is what people actually do because humans are lazy, intellectually dishonest piles of crap. “Just a little GPT,” they say. “I don’t see a problem, we’ll all just use it in moderation,” they say. Then somehow we only see more garbage full of errors; we get BS numbers, references to studies or legal cases or anything else that simply don’t exist, images of people with extra rows of teeth and hands where feet should be, gibberish non-text where text could obviously be… maybe we’ll even get ads injected into everything because why not screw up our already shitty world even more?

          So now people have this “tool” they think is simultaneously smarter and more creative than humans at all of the things humans have historically claimed make them better than not only machines but other animals, but is also “just a tool” that they’re only going to use a little bit, to help out but not replace. They’ll trust this tool to be smarter than they are, which it will, arguably impressively, turn out not to be. They’ll expect everyone else to accept the costs this incurs, from environmental damage due to running the damn things to social, scientific, economic, and other harms caused by everything being generated by “hallucinating” “AI” that’s incapable of thinking.

          It’s all very tiring.

          (And now I’m probably going to get more crap for both things I’ve said and things I haven’t, because people are intellectually lazy/dishonest and can’t take criticism. Even more tiring! Bleh.)

          • Phanatik@kbin.social · 1 year ago

            Everything you’ve said I agree with wholeheartedly. This kind of corner-cutting isn’t good for us as a species. When you eliminate the struggle involved in developing skills, it cheapens whatever you’ve produced. Just soulless garbage, and it’ll proliferate the most in art spaces.

            The first thing that happened was Microsoft implementing ChatGPT into Windows as part of their Copilot feature. It can now use your activity on your PC as data points, and the next step is sure as shit going to be an integration with Bing Ads. I know this because Microsoft presented it to our company.

            I distrusted it then and I want it to burn now.

      • BraveLittleToaster@lemm.ee · 1 year ago

        Teachers when I was little: “You won’t always have a calculator with you.” And here I am with a device in my pocket 24/7 that’s more powerful than what sent astronauts to the moon.

        • LukeMedia@lemmy.world · 1 year ago

          Fun fact for you: many credit-card/debit-card chips alone are comparably powerful to the computers that sent us to the moon.

          It’s mentioned a bit in this short article about how EMV chips are made. This summary of compute power does come from a company that manufactures EMV chips, so there is bias present.
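
          Back-of-the-envelope, with loudly made-up numbers on the card side (the AGC figure is the commonly cited ballpark):

          ```python
          # Rough comparison; every figure here is an approximation or assumption.
          agc_ips = 40_000      # Apollo Guidance Computer: tens of KIPS
          emv_hz = 25_000_000   # assumed secure-element clock: tens of MHz
          emv_ips = emv_hz      # crude one-instruction-per-cycle guess
          print(f"~{emv_ips / agc_ips:.0f}x the AGC's instruction rate")  # ~625x
          ```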

    • kirklennon@kbin.social · 1 year ago

      Why should someone bother to read something if you couldn’t be bothered to write it in the first place? And how can they judge the quality of your writing if it’s not your writing?

      • Deckweiss@lemmy.world · 1 year ago

        Science isn’t about writing. It is about finding new data through the scientific process and communicating it to other humans.

        If a tool helps you do any of it better, faster or more efficiently, that tool should be used.

        But I agree with your sentiment when it comes to, for example, creative writing.

        • sab@kbin.social · edited · 1 year ago

          Science is also creative writing. We do research and write up the results in something that is an original product. Something new is created; it’s creative.

          An LLM is just reiterative. A researcher might feel like they’re producing something, but they’re really just reiterating. Even if the product is better than what they would have produced themselves, it is still worth less, as it is not original and will not make a contribution that hasn’t been made already.

          And for a lot of researchers, the writing and the thinking blend into each other. Outsource the writing, and you’re crippling the thinking.

        • Laticauda@lemmy.ca · 1 year ago

          Science is about thinking. If you’re not the one writing your own research, you’re not the one actually thinking about it and conceptualizing it. The act of writing a research paper is just as important as the paper itself.

      • agent_flounder@lemmy.world · 1 year ago

        To me this question hints at the seismic paradigm shift that comes from generative AI.

        I struggle to wrap my head around it, and part of me just wants to give up on everything. But… we now have to wrestle with questions like:

        What is art and do humans matter in the process of creating it? Whether novels, graphic arts, plays, whatever else?

        What is the purpose of writing?

        What if anything is still missing from generative writing versus human writing?

        Is the difference between human intelligence and generative AI just a question of scale and complexity?

        Now or in the future, can human experience be simulated by a generative AI via training on works produced by humans with human experience?

        If an AI can now or some day create a novel that is meaningful or moving to readers, with all the hallmarks of a literary masterwork, is it still of value? Does it matter who/what wrote it?

        Can an AI have novel ideas and insights? Is it a question of technology? If so, what is so special about humans?

        Do humans need to think if AI one day can do it for us and even do it better than we can?

        Is there any point in existing if we aren’t needed to create, think, generate ideas and insights? If our intellect is superfluous?

        If human relationships conducted in text and video can be simulated on one end by a sufficiently complex AI, to fool the human, is it really a friendship?

        Are we all just essentially biological machines and our bonds simply functions of electrochemical interactions, instincts, and brain patterns?

        I’m late to the game on all this stuff. I’m sure many have wrestled with a lot of this. But I also think maybe generative AI will force far more of us to confront some of these things.

    • gullible@kbin.social · 1 year ago

      I don’t think people are arguing against minor corrections, just wholesale plagiarism via AI. The big deal is wholesale plagiarism via AI. Your argument is as reasonable as it is adjacent to the issue, which is to say completely.

    • TropicalDingdong@lemmy.world · 1 year ago

      If you use ChatGPT you should still read over the output, because it can say something wrong about your results, and you should run a plagiarism tool on it, because it can plagiarize unintentionally. So what’s the big deal?

      There isn’t one. Not that I can see.

      • Jesusaurus@lemmy.world · 1 year ago

        At least within a higher-level education environment, the problem is who does the critical thinking. If you just offload a complex question to ChatGPT and submit the result, you don’t learn anything. One of the purposes of paper-based exercises is to get students thinking about topics and understanding concepts so they can apply them to other areas.

        • TropicalDingdong@lemmy.world · 1 year ago

          You are considering it from a student perspective. I’m considering it from a writing and communication/publishing perspective. I’m a scientist, I think a decent one, but I’m only a proficient writer and I don’t want to be a good one. It’s just not where I want to put my professional focus. However, you cannot advance as a scientist without being a “good” writer (and I don’t just mean proficient). I get to offload all kinds of shit to ChatGPT. I’m even working on some stuff where I can dump in a folder of papers and have it go through and statistically review all of them, to give me a good idea of what the landscape I’m working in looks like.
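
          In rough shape, a toy version of that folder-of-papers step might look like the following. It assumes the papers are already extracted to plain text, and the field names and model are placeholders rather than my actual pipeline:

          ```python
          import csv, json, pathlib
          from openai import OpenAI

          client = OpenAI()
          FIELDS = ["title", "method", "study_region", "reported_accuracy"]  # made up

          def extract(text):
              # Ask the model for one structured row per paper.
              resp = client.chat.completions.create(
                  model="gpt-4o-mini",  # placeholder model
                  messages=[{"role": "user",
                             "content": f"Return a JSON object with keys {FIELDS} "
                                        f"for this paper:\n\n{text[:20000]}"}],
                  response_format={"type": "json_object"},
              )
              return json.loads(resp.choices[0].message.content)

          with open("landscape.csv", "w", newline="") as f:
              writer = csv.DictWriter(f, fieldnames=FIELDS)
              writer.writeheader()
              for path in pathlib.Path("papers_txt").glob("*.txt"):
                  row = extract(path.read_text())
                  writer.writerow({k: row.get(k, "") for k in FIELDS})
          ```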

          Things are changing ridiculously fast. But if you are still relying on writing as your pedagogy, you’re leaving a generation of students behind. They will not be able to keep up with people who directly incorporate AI into their workflows.

          • KingRandomGuy@lemmy.world · 1 year ago

            I’m curious what field you’re in. I’m in computer vision and ML, and most conferences have clauses saying not to use ChatGPT or other LLM tools. However, most of the folks I work with see no issue with using LLMs to assist with sentence structure, wording, etc., but they generally don’t approve of using LLMs to write accuracy-critical sections (such as the background or results) outside of things like rewording.

            I suspect part of the reason conferences are hesitant to allow LLM usage has to do with copyright, since that’s still somewhat of a gray area in the US AFAIK.

            • TropicalDingdong@lemmy.world · 1 year ago

              I work in remote sensing, AI, and feature detection, though almost exclusively for private industry, generally in the natural-hazard and climate-mitigation space.

              Lately, I’ve been using it to summarize big batches of publications into tables that I can then analyze statistically (because the LLMs don’t always get it right). I don’t have the time to read like that, so it helps me build an understanding of a space without having to actually read it all.
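
              The “analyze statistically” part is then ordinary dataframe work, and it doubles as a sanity check on the extraction. A sketch, reusing the made-up columns from the pipeline sketch in my earlier comment:

              ```python
              import pandas as pd

              df = pd.read_csv("landscape.csv")
              # The LLM extraction isn't always right, so coerce and filter first.
              df["reported_accuracy"] = pd.to_numeric(df["reported_accuracy"],
                                                      errors="coerce")
              df = df[df["reported_accuracy"].between(0, 1)]  # drop garbled rows
              print(df.groupby("method")["reported_accuracy"]
                      .agg(["count", "mean", "std"]))
              ```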

              I think the hand-wringing is largely that. I’m not sure it’s going to matter in six months to a year. We’re at the inflection point (like pre-AlphaGo) where it’s clear that AI can do this thing that was thought to be solely the domain of humans. But it doesn’t necessarily do it better than the best of us. We know how this goes, though. It will surpass us, and likely by a preposterous margin. Pandora’s box is wide open. There’s no closing it up.