
  • Quick recap for future historians:

    • For a really brief part of its history, humanity tried to give kindness a go. A half-hearted attempt at best, but there were things like DEI programs, attempting to create a gentler, more accepting world for everyone. At the very least, appearing human to the people they managed was seen as a good attribute for Leaders.

    • Some people felt that their God-given right to be assholes to everyone was being taken away (it’s right there in the Bible: be a jerk to your neighbor, take away his job and f##k his wife).

    • Assholes came back in full force, with a vengeance. Not that they had ever disappeared, but now they relished the opportunity to be openly mean for no reason again. Once again, True Leaders were judged by their ability to drain every drop of blood from their employees and take their still-beating hearts as an offering to the Almighty Shareholders.


  • I get what you mean and it’s a fair point. But I would still go with Meta as the most immediate threat in a war with the US.

    Ignorant as I am, my understanding is that the phone manufacturer has a level of control over the way Android works, so it wouldn’t be as easy for Google to access any individual Samsung or Xiaomi phone as it is for Meta with WhatsApp: an app they fully control, with permissions to use (way too many) phone features, regardless of brand.

    Plus, getting both Google and Apple to cooperate and coordinate sounds harder to me than just going to one company that is basically controlled by a single person.


  • They are basically at war with the US, and there is this piece of US tech that nearly everyone is carrying around, one that can access their communications, precise location, microphone and camera.

    It’s also owned by a company, Meta, that has a history of being used as a tool to manipulate public opinion. I have no particular sympathy for Iran’s leadership, but I can understand why they would advise that (and I don’t think WhatsApp is the only way for people to communicate with the outside world).


  • I can’t tell if it’s “the true cause” of the massive tech layoffs because I know jack shit about US tax law, but it does make more sense than every company realising at the same time that they over-hired, or becoming instant believers in AI-driven productivity.

    The only part that doesn’t make sense to me is why hide this from employees. Countless all-hands with uncomfortable CTOs spitting badly rehearsed BS about why 20% of their team was suddenly let go, or why project Y, top of last year’s strategic priorities, was unceremoniously cancelled, instead of a simple “R&D is no longer deductible, so it costs us much more now”.

    I would not necessarily be happier about being laid off, but this would at least be an explanation I feel I’d truly be able to accept.


  • Machine learning has existed for many years now. The issue is with these funding-hungry new companies taking their LLMs, repackaging them as “AI”, and attributing every ML win ever to “AI”.

    ML programs designed and trained specifically to identify tumors in medical imaging have become good diagnostic tools. But if you read in the news that “AI helps cure cancer”, it makes it sound like it was a lone researcher who spent a few minutes engineering the right prompt for Copilot.

    Yes, a specifically designed and finely tuned ML program can now beat the best human chess player, but calling it “AI” and bundling it together with the latest Gemini or Claude iteration’s “reasoning capabilities” is intentionally misleading. That’s why articles like this one are needed. ML is a useful tool, but far from the “super-human general intelligence” that is meant to replace half of human workers by the power of wishful prompting.


  • I agree. I was almost skipping it because of the title, but the article is nuanced and has some very good reflections on topics other than AI. Every piece of technical progress is a tradeoff. The article mentions cars to get to the grocery store, and how there are advantages in walking that we give up when always using a car. Are cars in general a stupid and useless technology? No, but we need to be aware of where the tradeoffs are. And eventually most of these tradeoffs are economic in nature.

    By industrializing the production of carpets we might have lost some of our collective ability to produce those hand-made masterpieces of old, but we get to buy ok-looking carpets for cheap.

    By reducing and industrializing the production of text content, our mastery of language is declining, but we get to read a lot of not-very-good content for free. This pre-dates AI, btw, as can be seen from standardized tests in schools everywhere.

    The new thing about GenAI, though, is that it upends the promise that technology was going to do the grueling, boring work for us and free up time for the creative things that give us joy. I feel the roles have reversed: even when I have to write an email or a piece of code, AI does the creative part and I’m the glorified proofreader and corrector.


  • “Basically, model collapse happens when the training data no longer matches real-world data”

    I’m more concerned about LLMs collapsing the whole idea of “real-world”.

    I’m not a machine learning expert, but I do get the basic concept of training a model and then evaluating its output against real data (see the toy sketch below). But the whole thing rests on the idea that you have a model trained on relatively small samples of the real world, and a big, clearly distinct “real world” against which to check the model’s performance.

    If LLMs have already ingested basically all the information in the “real world”, and their output is so pervasive that you can’t easily tell what’s true from what’s AI-generated slop, then “how do we train our models now” is not my main concern.

    As an example, take the judges who found made-up cases in filings because the lawyers had used an LLM. What happens if those made-up cases get referenced in several other places, including some legal textbooks used in law schools? Don’t they become part of the “real world”?
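
    As a toy illustration of that collapse dynamic (my own sketch, not anything from the article): fit a trivial “model” to data, then train each new generation only on the previous model’s output. With no fresh real-world data coming in, the estimate drifts and the tails of the original distribution get forgotten.

    ```python
    import random
    import statistics

    # Toy sketch, not from the article: the "model" is just a fitted Gaussian.
    # Each generation is trained only on samples drawn from the previous
    # generation's model, i.e. on synthetic data instead of the real world.

    random.seed(42)
    data = [random.gauss(0.0, 1.0) for _ in range(50)]  # the original "real world"

    for gen in range(51):
        mu = statistics.fmean(data)
        sigma = statistics.stdev(data)
        if gen % 10 == 0:
            print(f"gen {gen:2d}: mu={mu:+.3f} sigma={sigma:.3f}")
        # Next generation: model output only, no fresh real data.
        data = [random.gauss(mu, sigma) for _ in range(50)]
    ```

    The fitted spread tends to drift toward zero across generations: rare values stop being generated, so they stop being learned, and each refit sees slightly thinner tails than the distribution it sampled from.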


  • I tried reading the paper. There is a free preprint version on arXiv. This page (from the article linked by OP) also links to the code they used and to the data they ultimately compressed.

    While most of the theory is above my head, the basic intuition is that compression improves if you have some level of “understanding”, or higher-level context, of the data you are compressing. And LLMs are generally better at that than conventional compression algorithms.

    As an example: if you recognize a sequence of letters as the first chapter of the book Moby-Dick, you can transmit that information far more efficiently than a compression algorithm can. “The first chapter of Moby-Dick”; there … I just did it.
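
    To make that intuition a bit more concrete, here’s a minimal sketch (mine, not the paper’s code) of why a better predictive model compresses better: under arithmetic coding, a symbol with probability p costs about -log2(p) bits, so a model that has “read” the book pays almost nothing to encode it, while a clueless model pays full price. The 10,000-word vocabulary and the 0.9 probability below are made-up stand-ins for illustration.

    ```python
    import math

    def bits_needed(tokens, prob_model):
        """Ideal arithmetic-coding cost in bits: -log2(p) summed over tokens."""
        total = 0.0
        for i, tok in enumerate(tokens):
            p = prob_model(tokens[:i]).get(tok, 1e-9)  # tiny floor for unseen tokens
            total += -math.log2(p)
        return total

    text = "call me ishmael some years ago".split()

    # A clueless model: every word is uniform over a hypothetical
    # 10,000-word vocabulary.
    def uniform(history):
        return {w: 1.0 / 10_000 for w in text}

    # A model that has "read" the book and confidently predicts each
    # next word (0.9 is an arbitrary stand-in for a learned probability).
    def informed(history):
        return {text[len(history)]: 0.9}

    print(f"uniform:  {bits_needed(text, uniform):6.1f} bits")   # ~ 79.7
    print(f"informed: {bits_needed(text, informed):6.1f} bits")  # ~ 0.9
    ```

    Same trick as “the first chapter of Moby-Dick”: shared context lets you spend bits only on whatever the receiver’s model can’t already predict.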