• ch00f@lemmy.world · 6 hours ago

    I’ll give that a shot.

I’m running it in Docker because it lives on a headless server alongside a boatload of other services. Ideally whatever I use will be accessible over the network — something like the sketch below.
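
Roughly what I have in mind (untested; the `server-intel` image tag is my reading of the llama.cpp Docker docs, and the model path and port are placeholders):

```sh
# Sketch only: assumes the upstream SYCL server image and standard
# llama-server flags. --device /dev/dri passes the Intel GPU into the
# container; binding to 0.0.0.0 makes the HTTP API reachable over the LAN.
docker run -d --name llama \
  --device /dev/dri \
  -v /path/to/models:/models \
  -p 8080:8080 \
  ghcr.io/ggml-org/llama.cpp:server-intel \
  -m /models/model.gguf \
  --host 0.0.0.0 --port 8080 \
  -ngl 99
```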

I think at the time I started, not everything supported Intel cards, but it looks like llama-cli now has support for Intel GPUs. I’ll give it a shot. Thanks!
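
For a quick sanity check that the SYCL build actually sees the card, I’m picturing something like this (also untested; I’m assuming the `light-intel` image’s entrypoint is llama-cli, and the model path is a placeholder):

```sh
# One-shot generation to confirm the Intel GPU is picked up:
# -ngl 99 offloads all layers, -p is the prompt, -n caps the output tokens.
docker run --rm --device /dev/dri \
  -v /path/to/models:/models \
  ghcr.io/ggml-org/llama.cpp:light-intel \
  -m /models/model.gguf -ngl 99 -p "Hello" -n 32
```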