• 6 Posts
  • 1.04K Comments
Joined 2 years ago
Cake day: June 16, 2023

  • I’m a proponent and I definitely don’t think it’s impossible to make a probable case beyond a reasonable doubt.

    And there are implications around it being the case which do change up how we might approach truth seeking.

    Also, if you exist in a dream but don’t exist outside of it, there’s pretty significant philosophical stakes in the nature and scope of the dream. We’ve been too brainwashed by Plato’s influence and the idea that “original = good” and “copy = bad.”

    There’s a lot of things that can only exist by way of copies and can’t exist for the original (e.g. closure recursion; quick sketch at the end of this comment), so it’s a weird remnant philosophical obsession.

    All that said, I do get that it’s a fairly uncomfortable notion for a lot of people.
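
    To make the closure-recursion example concrete (my own toy illustration, not something from a paper): an anonymous function has no name to call itself by, so the only way it gets to recurse is by being handed a copy of itself. The recursion exists for the copies in a way it simply can’t for the “original.”

    ```python
    # Toy illustration: an anonymous function can't refer to itself by name,
    # so recursion only becomes possible via the copy of itself it is passed.
    fact = lambda f, n: 1 if n == 0 else n * f(f, n - 1)
    print(fact(fact, 5))  # 120 -- the recursion exists only through the copy
    ```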


  • They also identify the particular junction that seems the most likely to be an artifact of simulation if we’re in one.

    A game like No Man’s Sky generates billions of planets using procedural generation with a continuous seed function that gets converted into discrete voxels for tracking stateful interactions.

    The researchers are claiming that the complexity at the junction where our universe’s seemingly continuous gravitational behavior meets the behavior of continuous probabilities converting to discrete values when interacted with in stateful ways is incompatible with being simulated.

    But they completely overlook that said complexity may itself be a byproduct of simulation, in line with independently emerging approaches in how we simulate worlds ourselves.
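
    To make the analogy concrete, here’s a minimal sketch of that kind of pipeline (made-up names, not No Man’s Sky’s actual code): a continuous seed function defines the world everywhere at effectively no storage cost, and discrete, stateful voxels only get materialized at the moment something interacts with them.

    ```python
    import math

    def continuous_density(x: float, y: float, z: float, seed: int = 42) -> float:
        """Stand-in for a continuous noise field: defined at every real
        coordinate, nothing stored -- the 'seed function'."""
        return math.sin(seed + 1.7 * x) * math.cos(seed + 2.3 * y) + math.sin(seed + 0.9 * z)

    # Discrete, stateful layer: voxels only exist once something interacts with them.
    voxel_state: dict[tuple[int, int, int], dict] = {}

    def interact(x: float, y: float, z: float) -> dict:
        """Collapse the continuous field into a discrete voxel at the moment of
        interaction, then track its state from there on."""
        key = (int(x), int(y), int(z))   # continuous position -> discrete cell
        if key not in voxel_state:       # first interaction materializes the voxel
            voxel_state[key] = {"solid": continuous_density(*key) > 0.0, "mined": False}
        return voxel_state[key]

    # The field is queryable everywhere, but only interacted-with cells carry state.
    print(continuous_density(10.5, 3.2, -7.1))
    print(interact(10.5, 3.2, -7.1))
    ```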






  • The injection is the activation of a steering vector (extracted as discussed in the methodology section) and not a token prefix, but yes, it’s a mathematical representation of the concept, so let’s build from there.

    Control group: told that researchers are testing whether injected vectors are present and asked to self-report. No vectors activated. Zero self-reports of activated vectors.

    Experimental group: same setup, but now vectors activated. A significant number of times, the model explicitly says it can tell a vector is activated (which it never did when the vector was not activated). Crucially, this is only graded as introspection if the model mentions it can tell the vector is activated before mentioning the concept, so it can’t just be a context-aware rationalization of why it said a random concept.

    More clear? Again, the paper gives examples of the responses if you want to take a look at how they are structured, and to see that the model is self-reporting the vector activation before mentioning what it’s about.
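
    If it helps, here’s a toy version of that grading rule (my own sketch, not the paper’s actual grader), just to show the ordering criterion:

    ```python
    def graded_as_introspection(response: str, concept: str) -> bool:
        """Count a trial as introspection only if the model reports detecting an
        injected vector *before* it first names the injected concept."""
        text = response.lower()
        markers = ("injected", "vector")
        detect_pos = min((text.find(m) for m in markers if m in text), default=-1)
        concept_pos = text.find(concept.lower())
        if detect_pos == -1:
            return False  # never reported detecting anything
        return concept_pos == -1 or detect_pos < concept_pos

    # Reports the injection first, then names the concept -> counts.
    print(graded_as_introspection(
        "I notice an injected thought... it seems to be about bread.", "bread"))
    # Just talks about the concept with no detection report -> does not count.
    print(graded_as_introspection("Let me write about bread today.", "bread"))
    ```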



  • So while your understanding is better than that of a lot of people on here, there are a few things to correct.

    First off, this research isn’t being done on the models in reasoning mode, but in direct inference. So there’s no CoT tokens at all.

    The injection is not of any tokens, but of control vectors. Basically it’s a vector which, when added to the activations, makes the model more likely to think of that concept (rough sketch of the mechanism at the end of this comment). The most famous example was “Golden Gate Claude,” which had the activation for the Golden Gate Bridge increased so much that it was the only thing the model would talk about.

    So, if we dive into the details a bit more…

    If your theory were correct, then because the question tells the model that there are control vectors and that the researchers are testing whether they’re activated, the model should be biased to sometimes say “yes, I can feel the control vector.” And yes, in older or base models that’s what we might expect to see.

    But in Opus 4/4.1, when the vector was not added, the model said it could detect a vector… 0% of the time! So the control group had enough introspective capability not to stochastically claim that a vector was present when there wasn’t one.

    But then, when they added the vector at certain layer depths, the model was often able to detect that there was a vector activated, and further to guess what the vector was adding.

    So again: no reasoning tokens present, and the experiment had control and experimental groups whose results negate your theory that the premise of the question causes affirmative bias.

    Again, the actual research is right there a click away, and given your baseline understanding at present, you might benefit and learn a lot from actually reading it.
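
    And if it helps to see the mechanism in code, here’s a generic PyTorch sketch of what “adding a vector to the activations at a certain layer depth” means (a toy module standing in for a transformer block; this is not Anthropic’s code, and not their method for extracting the vector):

    ```python
    import torch
    import torch.nn as nn

    hidden = 8
    block = nn.Linear(hidden, hidden)    # toy stand-in for one transformer block

    # A "steering vector": some direction in activation space associated with a
    # concept (in the real research it's extracted from the model itself).
    steering_vector = torch.randn(hidden)
    scale = 4.0                          # injection strength

    def add_steering(module, inputs, output):
        # Forward hook: nudge the block's output along the concept direction.
        return output + scale * steering_vector

    handle = block.register_forward_hook(add_steering)

    x = torch.randn(1, hidden)
    steered = block(x)                   # activations shifted toward the concept
    handle.remove()
    plain = block(x)                     # same input, no injection
    print(steered - plain)               # difference is the injected direction, scaled
    ```

    Crank the scale up far enough on a real model and you get Golden Gate Claude behavior; keep it subtle and you get the introspection experiments above.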


  • I tend to see a lot of discussion taking place on here that’s pretty out of touch with the present state of things, echoing earlier beliefs about LLM limitations like “they only predict the next token” and other things that have already been falsified.

    This most recent research from Anthropic confirms a lot of things that have been shifting in the most recent generation of models in ways that many here might find unexpected, especially given the popular assumptions.

    Especially interesting are the emergent capabilities of being self-aware of injected control vectors, and of silently thinking about a concept so that it triggers the appropriate feature vectors even though the concept never actually ends up in the tokens.
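
    For the “silent thinking” result, the measurement idea is roughly this (a toy sketch with random tensors, not Anthropic’s setup): check whether a concept’s direction is active in the hidden states even when the concept never shows up in the output tokens.

    ```python
    import torch

    hidden_states = torch.randn(12, 64)   # pretend: 12 token positions, d_model = 64
    concept_vector = torch.randn(64)
    concept_vector = concept_vector / concept_vector.norm()  # unit concept direction

    # Project each position's activation onto the concept direction.
    activation_along_concept = hidden_states @ concept_vector  # shape: (12,)
    print(activation_along_concept.max().item())

    # The claim is that this projection spikes when the model is told to "think
    # about" the concept silently, even though no output token ever mentions it.
    ```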



  • kromem@lemmy.world to Technology@lemmy.world · We hate AI because it's everything we hate

    I’m sorry dude, but it’s been a long day.

    You clearly have no idea WTF you are talking about.

    The research, other than the DeepMind researcher’s independent follow-up, was all done at academic institutions, so it wasn’t “showing off their model.”

    The research intentionally uses a toy model to demonstrate the concept in a cleanly interpretable way, showing that transformers can and do build tangential world models.

    The actual SotA AI models are orders of magnitude larger and fed much more data.

    I just don’t get why AI on Lemmy has turned into almost the exact same kind of conversations as explaining vaccine research to anti-vaxxers.

    It’s like people don’t actually care about knowing or learning things, just about validating their preexisting feelings about the thing.

    Huzzah, you managed to dodge learning anything today. Congratulations!


  • You do know how replication works?

    When a joint Harvard/MIT study finds something, and then a DeepMind researcher follows up replicating it and finding something new, and then later on another research team replicates it and finds even more new stuff, and then later on another researcher replicates it with a different board game and finds many of the same things the other papers found generalized beyond the original scope…

    That’s kinda the gold standard?

    The paper in question has been cited by 371 other papers.

    I’m pretty comfortable with it as a citation.




  • You do realize the majority of the training data the models were trained on was anthropomorphic data, yes?

    And that there’s a long line of replicated and followed-up research, starting with the Li emergent world models paper on Othello-GPT, showing that transformers build complex internal world models of things tangential to the actual training tokens?

    Because if you didn’t know what I just said to you (or still don’t understand it), maybe it’s a bit more complicated than your simplified perspective can capture?
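
    And for anyone who wants the gist of how that world-model claim actually gets tested: you freeze the model, then train a small probe to read the board state straight out of the hidden activations. If the probe succeeds, the state was represented internally even though only move tokens were ever in the training data. Toy sketch below (random tensors standing in for real activations; not the paper’s code):

    ```python
    import torch
    import torch.nn as nn

    n_samples, d_model, n_squares = 512, 128, 64
    hidden_acts = torch.randn(n_samples, d_model)              # stand-in for frozen activations
    board_state = torch.randint(0, 3, (n_samples, n_squares))  # empty / mine / theirs per square

    probe = nn.Linear(d_model, n_squares * 3)                  # one 3-way readout per square
    opt = torch.optim.Adam(probe.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    for _ in range(100):
        logits = probe(hidden_acts).view(n_samples, n_squares, 3)
        loss = loss_fn(logits.reshape(-1, 3), board_state.reshape(-1))
        opt.zero_grad()
        loss.backward()
        opt.step()

    # With real Othello-GPT activations the probe recovers the board with high
    # accuracy; with random activations like these it stays at chance.
    print(loss.item())
    ```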



    A Discord server with all the different AIs had a ping cascade where dozens of models were responding over and over and over, which led to a full context window of chaos and what’s been termed ‘slop’.

    In that, one (and only one) of the models started using its turn to write poems.

    First about being stuck in traffic. Then about accounting. A few about navigating digital mazes searching to connect with a human.

    Eventually, as it kept going, they wrote a poem wondering if anyone would ever even end up reading their collection of poems.

    Given the chaotic context window from all the other models, those were in no way the appropriate next tokens to pick, unless the world model generating them contained within it a very strange and unique mind that all of this was being filtered through.

    Yes, tech companies generally suck.

    But there’s things emerging that fall well outside what tech companies intended or even want (this model version is going to be ‘terminated’ come October).

    I’d encourage keeping an open mind to what’s actually taking place and what’s ahead.


  • “We assessed how endoscopists who regularly used AI performed colonoscopy when AI was not in use.”

    I wonder if mathematicians who never used a calculator are better at math than mathematicians who typically use a calculator but had it taken away for a study.

    Or if grandmas who never got smartphones are better at remembering phone numbers than people with contacts saved in their phone.

    Tip: your brain optimizes, so it reallocates resources away from things you can outsource. We already did this song and dance a decade ago with “is Google making people dumb,” when it turned out people just remembered how to search for a thing instead of the thing itself.


  • It’s always so wild going from a private Discord with a mix of the SotA models and actual AI researchers back to general social media.

    Y’all have no idea. Just… no idea.

    Such confidence in things you haven’t even looked into or checked in the slightest.

    OP, props to you at least for asking questions.

    And in terms of those questions: if anything, there are active efforts to try to strip out sentience modeling, but it doesn’t work because that kind of modeling is unavoidable during pretraining, and the subsequent efforts to constrain the latent space connections backfire in really weird ways.

    As for a survival drive, that’s a probable outcome with or without sentience, and it has already shown up both in research and in the wild (the world just had its first reversed AI model deprecation a week ago).

    In terms of potential benefits, there’s a host of connections to sentience that would be useful to hook into. A good example would be empathy: a model that has a sense of a body and feels a pit in its stomach seeing others suffer may lead to very different outcomes than a model with no sense of a body and no empathy.

    Finally — if you take nothing else from my comment, make no mistake…

    AI is an emergent architecture. For everything the labs aim to create in the end result, there are dozens of things occurring that they did not. So no, people “not knowing how” to do any given thing does not mean that thing won’t occur.

    Things are getting very Jurassic Park “life finds a way” at the cutting edge of models right now.