

This article didn’t even go into the service disruptions Perplexity has had over the past couple of weeks. In short, the best thing Perplexity offers is access to multiple models at once, but frequently when you try to select one for a specific thread, it will throw an error and quietly route your prompt to a backup model instead.
Perplexity is calling it a technical issue, but it looks more like throttling, especially considering that API access to Claude is expensive and Claude is the model having the most “technical issues.” I would have already gone elsewhere if my sub weren’t free, and if it stays this bad, I might end up leaving anyway.


And the problem with Reddit, especially in certain language-learning communities, is that you’ll get a hallucination rate higher than current LLMs, because learners either overestimate their knowledge or sound off just to show off.
I don’t recommend LLM use for absolute beginners at a language, but once they have a semester or two (or the equivalent) under their belt, instant access to an answer that’s right most of the time is invaluable. Just get to the point first where you can start to recognize “maybe that’s not quite right…,” and check sources. And definitely check in with natives as much as possible.