• Blaster M@lemmy.world
    3 months ago

    As a general rule of thumb, you need about 1 GB of memory per 1B parameters at 8-bit precision, so you’re looking at about 405 GB to hold the full 405B-parameter model.

    Quantization can compress it down to 1/2 or 1/4 of that (roughly 4-bit or 2-bit weights), but it “makes the model stupider” as a result.
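
    The arithmetic behind that rule of thumb is just parameters × bytes per parameter. A rough sketch (weight memory only; it ignores KV cache, activations, and runtime overhead, which add more on top):

    ```python
    def model_size_gb(params_billions: float, bits_per_param: int) -> float:
        """Approximate weight memory in GB: params * (bits / 8) bytes.
        1B params at 8 bits is ~1 GB, which is where the rule of thumb comes from."""
        return params_billions * (bits_per_param / 8)

    # 405B-parameter model at a few common precisions
    for bits in (16, 8, 4):
        print(f"405B @ {bits}-bit: ~{model_size_gb(405, bits):.0f} GB")
    ```

    Note the numbers use decimal GB (10^9 bytes); real checkpoints also carry some format overhead, so treat these as lower bounds.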