• 2 Posts
  • 983 Comments
Joined 2 years ago
Cake day: June 9th, 2023


  • It’s about as narrowly targeted a chant as you can get.

    It’s not about Jews. It’s not about Israelis. It’s specifically about the army of Israel. If that’s not narrowly targeted enough, what is acceptable? “Down with the members of the IDF who intentionally target civilians, but not those members of the IDF who are willing to risk a court-martial to make sure they only attack valid military targets”? That doesn’t make a very good chant.



  • The other thing that most people don’t focus on is how we train LLMs.

    We’re basically building something like a spider-tailed viper. A spider-tailed viper is a snake with a growth on its tail that looks a lot like a spider. It wiggles the tail around so it looks like a spider, convincing birds they’ve found a snack, and when a bird gets close enough the snake strikes and eats it.

    Now, I’m not saying we’re building something that is designed to kill us. But, I am saying that we’re putting enormous effort into building something that can fool us into thinking it’s intelligent. We’re not trying to build something that can actually do intelligent things. We’re instead trying to build something that mimics intelligence.

    What we’re effectively doing is looking at this thing that mimics a spider, and trying harder and harder to tweak its design so that it looks more and more realistic. What’s crazy about that is that we’re not building this to fool a predator so that we’re not in danger. We’re not doing it to fool prey, so we can catch and eat them more easily. We’re doing it so we can fool ourselves.

    It’s like if, instead of a spider-tailed snake, a snake evolved a bird-like tail, and evolution kept tweaking the design so that the tail was more and more likely to fool the snake so it would bite its own tail. Except, evolution doesn’t work like that because a snake that ignored actual prey and instead insisted on attacking its own tail would be an evolutionary dead end. Only a truly stupid species like humans would intentionally design something that wasn’t intelligent but mimicked intelligence well enough that other humans preferred it to actual information and knowledge.




  • All other things being equal, it would save a lot of lives to replace every human driver with a Waymo car right now. They’re already significantly better than the average driver.

    But, there are a few caveats. One is that so far they’ve only ever driven under relatively easy conditions. They don’t do any highway driving, and they’ve never driven in snow. Another is that because they all share one “mind”, we don’t know if there are failure modes that would affect every car at once. Every human driver is different, so their failures are independent; at the same time, every human shares more or less the same common sense. If a human sees a 100 km/h or 60 mph speed limit on a narrow, twisty, suburban street with poor visibility, most of them are probably going to assume it was a mistake and won’t actually try to drive 100 km/h. We don’t know if a robo-vehicle will make that judgment. AFAIK nobody has found a way to emulate “common sense”. They might also freak out during an eclipse because they’ve never been trained for that kind of lighting. Or they might try to drive at normal speeds when visibility is obscured by forest fire smoke.
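
    The speed-limit example can be sketched as a toy plausibility check. Everything here (the function name, road types, and thresholds) is invented for illustration; it’s not Waymo’s actual logic, just the kind of “common sense” fallback a human driver applies automatically:

    ```python
    # Toy sketch of a "common sense" speed-limit check. All thresholds and
    # road-type names are invented for illustration only.
    def sane_speed_kmh(posted_kmh: float, road_type: str) -> float:
        # Rough upper bounds a cautious human driver might assume.
        typical_max = {
            "narrow_suburban": 50,  # twisty, poor visibility
            "arterial": 80,
            "highway": 120,
        }
        cap = typical_max.get(road_type, 60)
        # A human seeing 100 km/h posted on a narrow suburban street would
        # likely treat the sign as a mistake and drive to the road instead.
        return min(posted_kmh, cap)

    print(sane_speed_kmh(100, "narrow_suburban"))  # 50, not the posted 100
    print(sane_speed_kmh(100, "highway"))          # 100, the sign is plausible
    ```

    The point isn’t that this check is hard to write; it’s that a human applies hundreds of such checks without anyone ever writing them down.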

    There’s also the side effects of replacing millions of drivers with robo-cars. What will it do to people who drive for a living? Should Google/Waymo be paying most of the cost of retraining them? Paying their bills until they can find a new job? What will it do to cities? Will it mean that we no longer need parking lots because cars come and drop people off and then head off to take care of someone else? Or will it mean empty cars roaming the city causing gridlock and making it hell for pedestrians and bikers? Will people now want to live in the city because they don’t need to pay for parking and can get a car easily whenever they need one? Or will people now want to live even farther out into the suburbs / rural areas because they don’t need to drive and can work in the car on the way into the city?

    Personally, I’m hopeful. I think they could make cities better. But, who knows. We should move slowly until we figure things out.


  • a silicon valley AI project to put transit workers out of work

    Silicon Valley doesn’t have objectives like “putting transit workers out of work”. They only care about growth and profit.

    In this case, the potential for growth is replacing every driver, not merely targeting transit workers. If they can do that, it would mean millions fewer cars on the road, and millions fewer cars being produced. Great for the environment, but yeah, some people might lose their jobs. But, other new jobs might be created.

    The original car boom also destroyed all kinds of jobs: farriers, stable hands, grooms, riding instructors, equine veterinarians, horse trainers, etc. But, should we have held back technology so those jobs were still around today? We’d still have streets absolutely covered in horse poop, and horses regularly dying in the street, along with all the resulting disease. Would that be a better world? I don’t think so.

    It’s another project to get AI money and destroy labor rights.

    Waymo obviously uses a form of AI, but they’ve been around a lot longer than the current AI / LLM boom. It’s 16 years old as a Google project, 21 years old if you consider the original Stanford team. As for destroying labour rights, sure, every capitalist company wants weaker labour rights. But, that includes the car companies making normal human-driven cars, it includes the companies manufacturing city buses and trains. There’s nothing special about Waymo / Google in that regard.

    Sure, strengthening labour rights would be a good idea, but I don’t think it really has anything to do with Waymo. But, sure, we should organize and unionize Google if that’s at all possible.

    Transit is incredibly underfunded and misregulated in California/the USA

    Sure. That has nothing to do with Waymo though.

    robotaxis are a criminal misinvestment in resources.

    Misinvestment by whom? Google? What should Google be investing in instead?


    AFAIK they’re about as safe as SawStop table saws. There has only ever been one collision involving a Waymo car that resulted in a serious injury. A driver in another car, fleeing from police, sideswiped two cars, went onto the sidewalk and hit two pedestrians. One of the cars that was hit was a Waymo car, and its passenger was injured. Obviously this wasn’t Waymo’s fault, but it was included in their list of 25 crashes with injuries, and it was the only one involving a serious injury.

    Of the rest, 17 involved the Waymo car being rear-ended. 3 involved another car running a red light and hitting the Waymo car. 2 were sideswipes caused by the other driver. 2 were vehicles turning left across the path of the Waymo car, one a bike, one a car. One was a Waymo car turning left and being hit on the passenger side. It’s possible that a few of these cases involving a collision between a vehicle turning and a vehicle going straight could be at least partially blamed on the Waymo car. But, based on the descriptions of the crashes it certainly wasn’t making an obvious error.
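
    As a quick arithmetic check, the category counts above sum to the 25 crashes on Waymo’s injury list:

    ```python
    # Tally of the injury-crash categories described above.
    crashes = {
        "Waymo rear-ended": 17,
        "other car ran a red light": 3,
        "sideswipe by the other driver": 2,
        "other vehicle turned left across Waymo's path": 2,
        "Waymo turning left, hit on passenger side": 1,
    }
    print(sum(crashes.values()))  # 25
    ```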

    IMO it would be hard to argue that the cars aren’t already significantly safer than the average driver. There are still plenty of bugs to be ironed out, but for the most part they don’t seem to be safety-related bugs.

    If the math were simple and every Waymo car on the road meant one human driver off the road with no other consequences or costs, it would be a no-brainer to start replacing human drivers with Waymo’s tech. But, of course, nothing is ever that simple.

    Source: https://www.understandingai.org/p/human-drivers-are-to-blame-for-most