• 1 Post
  • 109 Comments
Joined 10 months ago
Cake day: September 13th, 2024





  • If you are making them aware that they will fail by not reading the documentation, then it's surprising they would continue to put that off. Using ChatGPT is different from only being able to use ChatGPT. Then again, I was a kid once and kind of get it. Maybe banning it is the better option, as you say.

    I thought it was scary enough when instructors would do “locked down” timed tests with short-answer/essay questions. I can't imagine students thinking they'd be fine using ChatGPT for material they'll need to demonstrate in practice.

    I wonder if the dropout rate for colleges will increase due to stuff like this, or if students are majoring in more technical fields because of LLM overconfidence.

    Thanks for your responses!



  • Oh, interesting that they wouldn't need or want to hide that. When I use it, I interpret every line of code and decide if it's appropriate. If that would be too time-consuming, then I wouldn't use an LLM. I would never deviate from the assignment criteria or the material covered by deferring to some obscure methodology used by an LLM.

    So I personally don't think it's been bad for my education, but I did complete a lot of my education before LLMs were a thing.

    Don't you guys test the students in ways that punish the laziness? I know you are just a TA, but do you think the class could be better about that? Some classes I've taken were terrible quality and all but encouraged laziness, and other classes were perfectly capable of cutting through the bullshit.





  • So they are moving away from general models and specializing them for tasks as certain kinds of AI agents.

    Queries will probably be handled by agents defined within a narrow domain, and those agents will probably be much less prone to error.

    I think it's a good next step. Expecting general intelligence to arise out of LLMs just by training larger models is obviously a highly criticized idea on Lemmy, and this move supports the apparent limitations of that approach.

    If you think about it, assigning special “thinking” steps to AI models makes less sense for a general model and much more sense for well-defined scopes.

    We will probably curate these scopes very thoroughly over time, and people will start trusting the accuracy of the answers as the design approaches become more tailored.

    When we have many effective tailored agents for specialized tasks, we may be able to chain those agents together into compound agents that can reliably carry out many tasks, the way we expected from AI in the first place.
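    Very roughly, and purely as a hypothetical sketch (none of these names refer to a real framework or API), the chaining idea could look something like this:

```python
# Hypothetical sketch: narrow "specialist" agents chained into a compound agent.
# The Agent.run callables here are toy stand-ins for calls to task-tuned models.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    name: str
    scope: str                     # the narrow domain this agent is trusted on
    run: Callable[[str], str]      # stand-in for a task-tuned model call

def make_compound_agent(agents: list[Agent]) -> Callable[[list[tuple[str, str]]], str]:
    """Route each (scope, query) step to the matching specialist and feed the
    previous step's output forward as context for the next one."""
    by_scope = {a.scope: a for a in agents}

    def run_pipeline(steps: list[tuple[str, str]]) -> str:
        context = ""
        for scope, query in steps:
            agent = by_scope[scope]  # fail loudly if no specialist covers this scope
            context = agent.run(f"{query}\n\ncontext:\n{context}")
        return context

    return run_pipeline

# Toy specialists (obviously not real LLM calls).
sql_agent = Agent("sql", "database", lambda q: f"SELECT ... -- derived from: {q[:40]}")
summary_agent = Agent("summarizer", "summary", lambda q: f"Plain-language summary of: {q[:60]}")

compound = make_compound_agent([sql_agent, summary_agent])
print(compound([
    ("database", "monthly active users by region"),
    ("summary", "explain the result for a non-technical audience"),
]))
```

    The point is just that each specialist stays inside a scope it is reliable on, and the compound agent only does the routing and context passing.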


  • It's all about who lets themselves play into the extremes we are inundated with.

    Be careful not to apply the same logic to all women/leftists, as if they all think men are rapists. That's obviously not true, just like their claim about you is obviously not true.

    Defuse the polarization with understanding.




  • tee9000@lemmy.world to politics@lemmy.world · *Permanently Deleted* · 8 months ago

    An actual opportunity to talk about an issue rather than reinforce my own beliefs. Don't mind if I do…

    Don't you think this is more of an artifact of personalized newsfeeds, and those separated viewpoints clashing with other personalized newsfeeds?

    Every opinion exists, and the internet allows opinions to be magnified without a basis in people's true opinions. We need to learn how to navigate and develop our social beliefs without fixating on whose opinion is perceived as the majority or shown the most in the media… and that includes overreacting to polarized opinions and validating them.

    Real people's opinions are different from the popularity of memeified opinions. When people talk to each other, there is understanding. When people arm themselves with memes instead of forming their own opinions, we end up parroting ideas that might be manufactured, so we have no choice but to reject meme ideologies unless their merit is unavoidable.