• 15 Posts
  • 1.48K Comments
Joined 2 years ago
Cake day: March 22nd, 2024






  • …no? From my perspective, that’s like saying:

    Democrats and police worked to protect the slavers and stop the rescue of slaves from out of the province.

    When it was really the city, or at most the state-level government, getting in trouble. I know China isn’t federalized to the extent the US is, and it’s technically true since all government is the CCP, but still.

    It sounds like you are implicating national officials when there’s no mention in the source.


  • Party officials and police worked to protect the slavers…

    Seems like local government. Right?

    Concealed camera revealed that the local police refused to take action to rescue the slaves. Later the reporters were allowed into the illegal brickyards with the company of the local police. Concealed camera showed the police keeping them from rescuing children who were not from Henan which showed obvious local government protection for the illegal brickyards.

    As the scandal received immediate media attention, it also caught the eyes of the major party and state leaders, including CCP General Secretary Hu Jintao and Chinese Premier Wen Jiabao. Governor Yu Youjun of Shanxi province offered an unprecedented self-criticism, took responsibility, and tendered his resignation on 30 August. He was replaced by Meng Xuenong, an official who had been sacked as Beijing mayor after the SARS outbreak.[8]



  • Sometimes. As a tool, not an outsourced human, oracle, or some transcendent companion con artists like Altman are trying to sell.

    See how grounded this interview is, from a company with a model trained on peanuts compared to ChatGPT, and that takes even less to run:

    …In 2025, with the launch of Manus and Claude Code, we realized that coding and agentic functions are more useful. They contribute more economically and significantly improve people’s efficiency. We are no longer putting simple chat at the top of our priorities. Instead, we are exploring more on the coding side and the agent side. We observe the trend and do many experiments on it.

    https://www.chinatalk.media/p/the-zai-playbook

    They talk about how the next release will be very small/lightweight, and more task focused. How important gaining efficiency through architecture (not scaling up) is now. They even touch on how their own models are starting to be useful utilities in their workflows, and specifically not miraculous worker replacements.











  • vLLM is a bit better with parallelization. All the KV cache sits in a single “pool”, and it uses as many slots as will fit. If it gets a bunch of short requests, it runs many in parallel. If it gets a long-context request, it kinda just does that one.

    You still have to specify a maximum context though, and it is best to set that as low as possible.

    …The catch is it’s quite VRAM-inefficient. But it can split over multiple cards reasonably well, better than llama.cpp can, depending on your PCIe speeds.
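    To make the above concrete, here's a minimal sketch of a vLLM server launch. The flags are real vLLM CLI options; the model name is just a placeholder, and the numbers are assumptions you'd tune for your own cards:

```shell
# Sketch of a vLLM server launch (model name and numbers are placeholders).
# --max-model-len caps the context, so the shared KV-cache "pool" fits more
#   parallel slots; set it as low as your workload allows.
# --tensor-parallel-size splits the model across multiple GPUs.
# --max-num-seqs bounds how many requests get batched in parallel.
vllm serve Qwen/Qwen2.5-7B-Instruct \
  --max-model-len 16384 \
  --gpu-memory-utilization 0.90 \
  --tensor-parallel-size 2 \
  --max-num-seqs 64
```

    The point is the trade-off: the lower you set `--max-model-len`, the more short requests vLLM can serve in parallel out of the same VRAM pool.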

    You might try TabbyAPI with exl2 quants as well. It’s very good with parallel calls, though I’m not sure how well it supports MI50s.


    Another thing to tweak is batch size. If you are actually making a bunch of 47K-context calls, you can increase the prompt-processing batch size a lot to load the MI50 better and get it to process the prompt faster.


    EDIT: Also, now that I think about it, I’m pretty sure Ollama is really dumb with parallelization. Does it even support paged-attention batching?

    The llama.cpp server should be much better, e.g. it uses less VRAM for each of the “slots” it can utilize.
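    For comparison, a minimal sketch of a llama.cpp server launch with parallel slots. These are real `llama-server` flags; the model path and sizes are placeholders to tune for your setup:

```shell
# Sketch of a llama.cpp server launch (model path and sizes are placeholders).
# -c is the TOTAL context, divided across the -np parallel slots
#   (16384 / 4 = 4096 tokens per slot here).
# -b / -ub raise the prompt-processing batch sizes, which helps keep a GPU
#   like the MI50 busy on long-prompt calls.
# -ngl offloads as many layers as possible to the GPU.
llama-server -m ./model.gguf \
  -c 16384 -np 4 \
  -b 2048 -ub 512 \
  -ngl 99
```

    Unlike vLLM's single shared pool, each slot here gets a fixed slice of the context, so pick `-np` and `-c` to match how long your typical requests actually are.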