A contrarian isn’t one who always objects - that’s a conformist of a different sort. A contrarian reasons independently, from the ground up, and resists pressure to conform.

  • Naval Ravikant
  • 8 Posts
  • 312 Comments
Joined 5 months ago
Cake day: January 30th, 2025


  • “Your claim is only valid if you first run this elaborate, long-term experiment that I came up with.”

    The world isn’t binary. When someone says less moderation, they don’t mean no moderation. Framing it as all-or-nothing just misrepresents their view to make it easier for you to argue against. CSAM is illegal, so it’s always going to be against the rules - that’s not up to Google and is therefore a moot point.

    As for other content you ideologically oppose, that’s your issue. As long as it’s not advocating violence or breaking the law, I don’t see why they’d be obligated to remove it. You’re free to think they should - but it’s their platform, not yours. If they want to allow that kind of content, they’re allowed to. If you don’t like it, don’t go there.


  • That’s because it is.

    The term artificial intelligence is broader than many people realize. It doesn’t mean human-level consciousness or sci-fi-style general intelligence - that’s a specific subset called AGI (Artificial General Intelligence). In reality, AI refers to any system designed to perform tasks that would typically require human intelligence. That includes everything from playing chess to recognizing patterns, translating languages, or generating text.

    Large language models fall well within this definition. They’re narrow AIs - highly specialized, not general - but still part of the broader AI category. When people say “this isn’t real AI,” they’re often working from a fictional or futuristic idea of what AI should be, rather than how the term has actually been used in computer science for decades.
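The classic "tasks that would typically require human intelligence" definition covers even very small game-playing programs. As a minimal sketch of that point, here is a perfect-play tic-tac-toe player using minimax search - the function names and board encoding are illustrative, not from any particular library:

```python
# A minimal narrow-AI sketch: minimax search for tic-tac-toe.
# Game-playing programs like this sit squarely inside the classic
# computer-science definition of AI, despite having no "understanding".

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]
    for a, b, c in lines:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) from `player`'s view: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w:
        return (1 if w == player else -1), None
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0, None  # board full, no winner: draw
    best_score, best_move = -2, None
    opponent = 'O' if player == 'X' else 'X'
    for m in moves:
        board[m] = player
        score, _ = minimax(board, opponent)  # recurse from opponent's view
        board[m] = ' '
        if -score > best_score:              # opponent's loss is our gain
            best_score, best_move = -score, m
    return best_score, best_move
```

For example, with X holding the top-left pair on `['X','X',' ','O','O',' ',' ',' ',' ']`, `minimax(board, 'X')` finds the immediate win at index 2.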


  • Different definitions for intelligence:

    • The ability to acquire, understand, and use knowledge.
    • The ability to learn or understand, or to deal with new or trying situations.
    • The ability to apply knowledge to manipulate one’s environment, or to think abstractly as measured by objective criteria (such as tests).
    • The act of understanding.
    • The ability to learn, understand, and make judgments or have opinions that are based on reason.
    • The ability to perceive or infer information, and to retain it as knowledge to be applied to adaptive behaviors within an environment or context.

    We have plenty of intelligent AI systems already. LLMs probably fit the definition. Something like Tesla FSD definitely does.
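The last definition in that list - perceive information, retain it as knowledge, and apply it to adaptive behavior - can be sketched in a few lines. Below is an epsilon-greedy bandit agent that learns from experience which of several slot machines pays best; the payout probabilities and hyperparameters are illustrative assumptions, not from any real system:

```python
# A minimal sketch of the "acquire, retain, apply" loop: an epsilon-greedy
# agent learning by trial and error which arm of a slot machine pays best.
import random

def run_bandit(pay_probs, steps=5000, epsilon=0.1, seed=42):
    """Return the agent's learned payout estimate for each arm."""
    rng = random.Random(seed)
    counts = [0] * len(pay_probs)    # how often each arm was tried
    values = [0.0] * len(pay_probs)  # retained knowledge: estimated payouts
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(len(pay_probs))   # explore: acquire new info
        else:
            arm = values.index(max(values))       # exploit: apply knowledge
        reward = 1.0 if rng.random() < pay_probs[arm] else 0.0
        counts[arm] += 1
        # Incremental mean update: fold the new observation into the estimate.
        values[arm] += (reward - values[arm]) / counts[arm]
    return values
```

Run against arms paying 20%, 50%, and 80% of the time, the agent’s highest estimate converges on the third arm - behavior adapted purely from perceived outcomes.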


  • I’m not on any other social media, so I can’t comment on that. I’m sure it existed on Reddit as well, but the user base there was more ideologically diverse, so extremism would usually get pushback no matter where it came from. Lemmy, on the other hand, is much more of a left-wing echo chamber, so those kinds of comments mostly just get applause, and calling them out tends to lead to being shunned instead. I don’t follow political communities, but I still encounter these kinds of comments regularly - and they’re usually upvoted by several people.


  • Thanks.

    Well, I don’t think OpenAI knows how to build AGI, so that’s false. Otherwise, Sam’s statement there is technically correct, but kind of misleading - he talks about AGI and then, in the next sentence, switches back to AI.

    Sergey’s claim that they will achieve AGI before 2030 could turn out to be true, but again, he couldn’t possibly know that. I’m sure it’s their intention, but that’s different from reality.

    Elon’s statement doesn’t even make sense. I’ve never heard anyone define AGI like that. A thirteen-year-old with an IQ of 85 is generally intelligent. Being smarter than the smartest human definitely qualifies as AGI, but that’s just a weird bar. General intelligence isn’t about how smart something is - it’s about whether it can apply its intelligence across multiple unrelated fields.