
  • To be clear, I’m not trying to make the argument that it can only produce exactly what it’s seen; I recognize that this argument is frankly overstated in the media. (The interviews with Adam Conover are great examples: he’s not wrong per se, but he oversimplifies things to the point that I think a lot of people misunderstand what’s being discussed.)

    The ability to recombine what it’s seen in different ways as an emergent property is interesting and provocative, but isn’t really what OP is asking about.

    A better example of how LLMs can be useful in research like what OP described would be asking one to coalesce information from multiple existing studies about what properties correlate with superconductivity, in order to help accelerate research in collaboration with actual materials scientists. This is all research that could be done without LLMs, or even without ML, but having a general way to parse and filter these kinds of documents is still incredibly powerful, and will be a sort of force multiplier for these researchers going forward.
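    As a toy illustration of that kind of parse-and-filter step (entirely made-up snippets and keywords, and deliberately no ML at all, just to show the baseline an LLM would be improving on):

```python
# Toy sketch: filter study snippets for superconductivity-related language.
# The snippets and keyword list below are hypothetical, for illustration only.
snippets = [
    "Cuprate samples showed zero resistance below 93 K.",
    "The polymer degraded under prolonged UV exposure.",
    "Hydride phases exhibited a Meissner effect under high pressure.",
]
keywords = {"resistance", "meissner", "critical temperature"}

def relevant(text):
    # Case-insensitive keyword match; an LLM would handle paraphrase and
    # context far better than this literal substring check.
    lowered = text.lower()
    return any(k in lowered for k in keywords)

hits = [s for s in snippets if relevant(s)]
for s in hits:
    print(s)
```

    The point of the comparison: this brittle keyword filter misses paraphrases entirely, which is exactly the gap a language model helps close.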

    My favorite example of the limitations of LLMs is to ask one to coin a new word, then Google that word. In practice it almost never produces a combination of letters it hasn’t effectively indexed from its training data, and it has no index for words it hasn’t seen. It might be able to create a new meaning for a word that it’s seen, but that isn’t necessarily the same thing.
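    A toy analogy for that point: a word-level Markov chain is, by construction, only able to emit words from its training vocabulary. (Real LLMs generate from subword tokens, so the analogy is loose, but it makes the “no index for unseen words” idea concrete.)

```python
import random

# Toy word-level Markov chain over a tiny training text.
# By construction it can only ever emit words that appear in `text`.
text = "the model can only repeat the words the model has seen"
words = text.split()

# Build a next-word table from adjacent word pairs.
table = {}
for a, b in zip(words, words[1:]):
    table.setdefault(a, []).append(b)

def generate(start, n, seed=0):
    rng = random.Random(seed)  # seeded for reproducibility
    out = [start]
    for _ in range(n):
        nxt = table.get(out[-1])
        if not nxt:  # dead end: word with no recorded successor
            break
        out.append(rng.choice(nxt))
    return out

print(" ".join(generate("the", 8)))
```

    Every word the generator emits is guaranteed to come from the training text; it can recombine them into new sequences, but it cannot coin a new word.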


  • It’s important to be clear what kind of actual system you’re using when you say “AI”.

    If you’re talking about something like ChatGPT, you’re using an LLM, or “Large Language Model”. Its goal is to produce something that reasonably looks like a human wrote it. It has reviewed a ridiculous amount of human text, and has a metric assload of weights associating the relationships between these words.
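    A crude sketch of what “weights associating words” means (hypothetical sentences; real LLMs learn dense neural weights, not co-occurrence counts, but the association idea is the same):

```python
from collections import Counter
from itertools import combinations

# Toy association "weights": count how often word pairs co-occur in a
# sentence. Illustrative only -- actual LLMs learn these relationships
# as learned parameters, not explicit counts.
sentences = [
    "cuprates are superconductors",
    "superconductors show zero resistance",
    "cuprates show zero resistance",
]
weights = Counter()
for s in sentences:
    for a, b in combinations(sorted(set(s.split())), 2):
        weights[(a, b)] += 1

# "zero" and "resistance" co-occur in two sentences, so that pair
# carries more weight than pairs seen only once.
print(weights[("resistance", "zero")])
```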

    If the LLM sees your question and associates a particular compound with superconductors, it’s because it’s seen these things related in other writings (directly or indirectly) or at least sees the relationship as plausible.

    It’s important not to ascribe more intent to what you’re seeing than actually exists. It can’t understand what a superconductor is or how materials can achieve that state; it’s just really good at relaying related words in a convincing manner.

    That’s not to say it isn’t cool or useful, or that ML (machine learning) can’t be used to help find answers to these kinds of questions.