Eh, I have a few things from Kickstarter that were successful. Exploding Kittens is probably the most successful one of all the ones I own.
Isn’t Umbraco the one that struggled with requests for pages that didn’t exist, taking several seconds to serve the PageNotFound page and causing very high CPU load in the meantime? Like, an issue they had for years?
Somehow I don’t have great faith in that solution, but perhaps it’s improved in recent years.
RFCs aren’t really law, you know. They can deviate; it just means less compatibility.
You do get the advantage of easy and above all fast placement.
Not sure how this would work out. There’s pros and cons I suppose.
If producing an AGI is intractable, why does the human meat-brain exist?
Ah, but here we have to get pedantic a little bit: producing an AGI through current known methods is intractable.
The human brain is extremely complex and we still don’t fully know how it works. We don’t know if the way we learn is really analogous to how these AIs learn. We don’t really know if the way we think is analogous to how computers “think”.
There’s also another argument to be made: that an AGI matching the currently agreed-upon definition is impossible. And I mean that in the broadest sense, i.e. humans don’t fit the definition either. If that’s true, then an AI could perhaps be trained in a tractable amount of time, but this would upend our understanding of human consciousness (perhaps justifiably so). Maybe we’re overestimating how special we are.
And then there’s the argument that you already mentioned: it is intractable, but 60 million years, spread over trillions of creatures, is long enough. That also suggests that AGI is really hard, and that creating one really isn’t “around the corner” as some enthusiasts claim. For any practical AGI we’d have to finish training in maybe a couple of years, not millions of years.
And maybe we develop some quantum computing breakthrough that gets us where we need to be. Who knows?
This is a gross misrepresentation of the study.
That’s as shortsighted as the “I think there is a world market for maybe five computers” quote, or the worry that NYC would be buried under mountains of horse poop before cars were invented.
That’s not their argument. They’re saying that they can prove that machine learning cannot lead to AGI in the foreseeable future.
Maybe transformers aren’t the path to AGI, but there’s no reason to think we can’t achieve it in general unless you’re religious.
They’re not talking about achieving it in general, they only claim that no known techniques can bring it about in the near future, as the AI-hype people claim. Again, they prove this.
That’s a silly argument. It sets up a strawman and knocks it down. Just because you create a model and prove something in it, doesn’t mean it has any relationship to the real world.
That’s not what they did. They set up an extremely optimistic scenario in which someone creates an AGI through known methods (e.g. a computer with limitless memory, infinite and perfect training data, sampling without any bias, current techniques being able to eventually create AGI, the AGI only having to be slightly better than random chance rather than perfect, etc…), and then present a computational proof that even this scenario contradicts established complexity results.
Basically, if you can train an AGI through currently known methods, then you have an algorithm that can solve the Perfect-vs-Chance problem in polynomial time. There’s a technical explanation in the paper that I’m not going to try and rehash since it’s been too long since I worked on computational proofs, but it seems to check out. But this is a contradiction, as we have proof, hard mathematical proof, that no polynomial-time algorithm for that problem can exist; it’s NP-hard. Therefore, learning an AGI that way must also be NP-hard. And because every known AI learning method is tractable, it cannot possibly lead to AGI. It’s not a strawman, it’s a hard proof of why it’s impossible, like proving that pi has infinitely many decimals or something.
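Very roughly, and in my own loose notation rather than anything taken from the paper, the shape of the contradiction is:

```latex
% Loose sketch of the shape of the argument, in my own notation, not the paper's.
\begin{align*}
&\text{Assume: some known-methods learner } L \text{ produces an AGI-level model in polynomial time.}\\
&\text{The paper's reduction: } \textsc{Perfect-vs-Chance} \le_p \text{``train an AGI with } L\text{''}\\
&\Rightarrow \textsc{Perfect-vs-Chance} \text{ becomes solvable in polynomial time.}\\
&\text{But } \textsc{Perfect-vs-Chance} \text{ is provably intractable (NP-hard).}\\
&\Rightarrow \text{Contradiction: no such tractable } L \text{ exists, i.e. learning an AGI this way is itself NP-hard.}
\end{align*}
```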
Ergo, anyone who claims that AGI is around the corner either means “a good AI that can demonstrate some but not all human behaviour” or is bullshitting. We could literally burn up the entire planet for fuel to train an AI and we’d still not end up with an AGI. We need some other breakthrough, e.g. significant advancements in quantum computing perhaps, to even hope to begin work on an AGI. And again, the authors don’t offer a thought experiment, they provide a computational proof for this.
This article was amended on 14 September 2023 to add an update to the subheading. As the Guardian reported on 12 September 2023, following the publication of this article, Walter Isaacson retracted the claim in his biography of Elon Musk that the SpaceX CEO had secretly told engineers to switch off Starlink coverage of the Crimean coast.
IIRC Musk didn’t switch it off; it wasn’t turned on in the first place, and Musk refused to turn it on when the Ukrainian military requested it.
Musk is a shithead but not for this reason.
https://github.com/cheeaun/phanpy?tab=readme-ov-file#easy-way
It seems to be quite literally just a download-and-run kind of deal. Seems pretty trivial.
I won’t pretend I understand all the math and the notation they use, but the abstract/conclusions seem clear enough.
I’d argue what they’re presenting here isn’t the LLM actually “reasoning”. I don’t think the paper really claims that the AI does either.
I think the CoT process they describe here is somewhat analogous to a very advanced version of prompting an LLM with something like “Answer like a subject matter expert” and finding that it improves the quality of the answer.
They basically help break the problem into smaller steps and get the LLM to answer smaller questions based on those smaller steps. This likely also helps the AI because it was trained on these explained steps, or on smaller problems that it might string together.
I think it mostly helps to transform the prompt into something that is easier for an LLM to respond accurately to. And because each substep is less complex, the LLM has an easier time as well. But the mechanism to break down a problem is quite rigid and not something trainable.
It’s super cool tech, don’t get me wrong. But I wouldn’t say the AI is really “reasoning” here. It’s being prompted in a really clever way to increase the answer quality.
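To make concrete what I mean by “prompted in a really clever way”, here’s a toy sketch of that kind of scaffolding (my own illustration, not anything from the paper; `ask_llm()` is a hypothetical stand-in for whatever LLM API you’d actually call):

```python
# Toy illustration of chain-of-thought-style scaffolding.
# ask_llm() is a hypothetical stand-in for whatever LLM API you'd actually call.

def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM call (e.g. an HTTP request to some API)."""
    raise NotImplementedError

def answer_with_steps(question: str) -> str:
    # 1. Have the model decompose the problem into smaller sub-questions.
    plan = ask_llm(f"Break this question into a numbered list of smaller steps:\n{question}")

    # 2. Answer each sub-question on its own, feeding earlier answers back in.
    worked_steps = ""
    for step in plan.splitlines():
        if not step.strip():
            continue
        worked_steps += ask_llm(
            f"Question: {question}\nWork so far:\n{worked_steps}\nNow answer this step: {step}"
        ) + "\n"

    # 3. Ask for a final answer based on the worked steps.
    return ask_llm(
        f"Question: {question}\nUsing these worked steps:\n{worked_steps}\nGive the final answer."
    )
```

The point being: the decompose-then-recombine logic lives in ordinary code outside the model; the model just answers each (easier) sub-prompt.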
For a month, we’d talk about aliens. Then, we’d talk about Trump or Musk’s latest shit takes about aliens.
It’s not a direct response.
First off, the video is pure speculation; the author doesn’t really know how it works either (or at least doesn’t claim to). They have a reasonable grasp of how it works, but what they believe it implies may not be correct.
Second, the way O1 seems to work is that it generates a ton of less-than-ideal answers and picks the best one. It might then rerun that step until it reaches a sufficient answer (as the video says).
The problem with this is that you still have an LLM evaluating each answer based on essentially word prediction, and the entire “reasoning” process is happening outside any LLM; its thinking process is not learned, but “hardcoded”.
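To illustrate what I mean by “hardcoded”, here’s a speculative sketch of what such a loop might look like (purely my guess at the shape of it, with made-up `ask_llm()`/`score_answer()` helpers, not OpenAI’s actual implementation):

```python
# Speculative sketch of a generate-and-select loop; NOT OpenAI's actual implementation.
# ask_llm() and score_answer() are hypothetical stand-ins for LLM API calls.

def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM call."""
    raise NotImplementedError

def score_answer(question: str, answer: str) -> float:
    """Have the same kind of LLM rate an answer; underneath it's still word prediction."""
    reply = ask_llm(f"Rate this answer to '{question}' from 0 to 10. Reply with a number only.\n{answer}")
    try:
        return float(reply.strip().split()[0])
    except (ValueError, IndexError):
        return 0.0

def best_of_n(question: str, n: int = 8, rounds: int = 3) -> str:
    best = ask_llm(question)
    for _ in range(rounds):
        # Generate a batch of candidate answers, keep whichever the evaluator scores highest.
        candidates = [ask_llm(f"{question}\nImprove on this draft:\n{best}") for _ in range(n)]
        best = max(candidates + [best], key=lambda a: score_answer(question, a))
    return best
```

The generate/score/repeat loop there is plain orchestration code; only the individual completions come from the model.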
We know that chaining LLMs like this can give better answers. But I’d argue this isn’t reasoning. Reasoning requires a direct understanding of the domain, which ChatGPT simply doesn’t have. This is clearly evident when you ask it questions using terminology that appears in multiple domains; it has a tendency to mix them up, which you wouldn’t do if you truly understood what the words mean. It is possible to get a semblance of understanding of a domain into an LLM, but not in a generalised way.
It’s also evident from the fact that these AIs are apparently unable to come up with “new knowledge”. They’re not able to infer new patterns or theories; they can only “use” what is already given to them. An AI like this would never be able to come up with E=mc² if it hadn’t been fed information about that formula before. Its LLM evaluator would dismiss any of the “ideas” that might come close to it because it has never seen them before; ergo they are unlikely to be true/correct.
Don’t get me wrong, an AI like this may still be quite useful w.r.t. information it has been fed. I see the utility in this, and the tech is cool. But it’s still a very, very far cry from AGI.
This is true, but it’s specifically not what LLMs are doing here. It may come to some very limited, very specific reasoning about some words, but there’s no “general reasoning” going on.
It’s not always true that appeals must go to some “higher court”. In some countries appeals may end up before the same judge, or another judge from the same court.
The US army does indeed, and they would be valid military targets. People working for the EPA, perhaps not so much. Hezbollah however is structured towards support for the militant arm, as the Lebanese government handles civilian tasks.
And I’m sure Islamic State and the Taliban have non-combatant elements too.
I don’t mind Israel defending against militant groups that fire rockets into Israel. I do mind them carpet-bombing civilian populations. This pager-thing seems to have the hallmarks of an operation that manages to cripple Hezbollah with a minimal loss of life and even fairly low civilian casualties. I much prefer Israel do this over the alternatives.
There’s thousands of Hezbollah militants as well. We don’t know yet exactly how targeted the attack was.
Regardless, “only” 9 people have died so far. Thousands were wounded, but that’s much better than land mines would’ve been. This attack was extraordinarily targeted, and while civilians were hurt, they’re likely to be hurt less severely than the militants and unlikely to be among the dead. Every civilian death is a tragedy, but Hezbollah and Israel are in an armed conflict; some civilian deaths are unavoidable. I much prefer Israel do this than the indiscriminate bombing of Gaza.
9/11 targeted and killed civilians. This attack largely struck Hezbollah militants, who are in open hostilities with Israel. Doing things this way is far better than the seemingly indiscriminate bombing in Gaza.
It’s less antisemitic though. Please don’t conflate Jewish people with Israel, it’s caused enough problems as it is.
I can see this having some advantages over two-fold phones. The unfolded screen has a better aspect ratio, there’s no need for a separate “back” screen, and each fold only carries one screen, allowing the whole thing to be thinner.
Price is an issue of course, as well as it having HarmonyOS instead of Android (less app compatibility).
Sure, but merely linking to a page isn’t reusing the content. If said content was being embedded, rehashed or otherwise shown then a compensation would be fair. But merely linking to a page should absolutely be free. That’s a massively important cornerstone of the internet that shouldn’t be compromised on.
Linking directs traffic, which the website itself can monetize; it shouldn’t require additional fees on top.