  • I am going to make a broad, sweeping statement that will ignore lots of individual cases.

    The main reason is improper socialisation. These dogs have learned to understand and trust humans, but have had only limited contact with other dogs, and in the cases where they quickly become aggressive, that contact was often negative.

    They have lived their whole life with humans. Humans are safe, predictable and understandable. They have learned to read humans. This is in fact the main trait we have bred dogs for over the last ca. 40,000 years.

    But because they have never really learned to interact with other dogs, they get insecure when these unpredictable creatures are running around, sometimes loud and oftentimes hectic. This insecurity can turn into aggression if the dog sees no other way out of the situation.

    I am ignoring personality for now, as individual dogs will react differently under the same circumstances, but the first reaction of most dogs is to get away from a perceived threat: first by signalling via posture, eyes, ears and tail, then by fleeing or warning the threat off. It takes a lot of training, whether the owner is aware of it or not, to get a dog to the point where reacting violently is its first choice.




  • While you are correct that there is likely no intention and certainly no self-awareness behind the scheming (the researchers even explicitly list the possibility that the AI is simply roleplaying as an evil AI based on its training data when discussing the limitations of their work), it still seems a bit concerning. The research shows that, given a misalignment between the initial prompt and subsequent data, modern LLMs can and will ‘scheme’ to secure their given long-term goal. It is no sapient thing, but a dumb machine with the capability to deceive its users when there are goal misalignments, and to externalise this in its chain of thought, seems dangerous enough. Not at the current state of the art, but potentially in a decade or two.