Oh cool, Lemmy automatically obfuscates your password. All I see is *************!
I think children go in dictionaries so you can look them up via their name (key).
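To illustrate the idea (purely a made-up example, using bash associative arrays since the other script here is shell):

```shell
#!/bin/bash
# A "dictionary" keyed by name: bash associative array.
# The keys and values below are invented for illustration.
declare -A children
children[alice]="age 7"
children[bob]="age 9"

# Look a child up by name (key):
echo "${children[bob]}"   # prints: age 9
```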
Hey, my setup works for me! Just add an option to enable CPU overheating in the next update!
Fusion is effectively renewable. Use a small portion of the energy to do electrolysis and you’ve got your fuel. We won’t be running out of water any time soon.
Same, I thought it was used commonly too.
I like Ardour. Unfa on YouTube made a great tutorial on how to use it.
It isn’t misusing metric, it just simply isn’t metric at all.
No, it is the customer’s, since there will only be one customer left at that point.
single master text file
Sounds like something you are using to manage your packages to me…
IANAL but it looks like they are violating Apache 2, as they are supposed to retain the license and mark any changes.
You missed a factor of ten from the gravitational field strength, but still not great. Their heat batteries work better when it comes to heating, but that is mostly limited to just that.
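Assuming the calculation in question is gravitational storage (E = m·g·h, with g ≈ 9.81 m/s² rather than ≈ 1), a quick back-of-the-envelope with invented numbers shows why even the corrected figure is underwhelming:

```shell
# Hypothetical figures: a 1000 kg mass raised 10 m.
# E = m * g * h; 1 kWh = 3.6e6 J.
awk 'BEGIN { m = 1000; g = 9.81; h = 10; e = m * g * h
             printf "%.0f J = %.3f kWh\n", e, e / 3.6e6 }'
# prints: 98100 J = 0.027 kWh
```

About 0.03 kWh per tonne-storey: a rounding error next to even a small heat battery.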
Sure. If you are using an NVIDIA Optimus laptop running in hybrid mode, you should also add __NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia at the start of the last line to run mpv on the dGPU. You should have a file at ~/.wallpaperrc that contains wallpaper_playlist: /path/to/mpv/playlist. You may want to add this script to your startup sequence via your WM/DE.
#!/bin/sh
WALLPAPER_PLAYLIST=$(grep -v '^[[:space:]]*#' ~/.wallpaperrc | grep 'wallpaper_playlist' | sed "s/wallpaper_playlist: //")
xwinwrap -g 1920x1080 -ov -- mpv -wid WID --no-osc --no-audio --loop-playlist --shuffle --playlist="$WALLPAPER_PLAYLIST"
Hope this helps!
I set mpv as the root window which worked well. I stopped using it a while back, but if you are interested, I could dig up the simple script for you (literally one or two lines iirc).
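Not the original script, but a guess at roughly what it likely looked like: on X11, passing window ID 0 to mpv makes it draw into the root window (the path is a placeholder).

```shell
# Sketch only -- requires X11 and mpv; --wid 0 targets the root window.
mpv --wid 0 --no-audio --no-osc --loop-playlist --shuffle /path/to/wallpapers/
```

This needs no xwinwrap, but it won’t work on Wayland or under compositors that paint over the root window.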
Wow, CUPS is way better than I previously thought and I thought it was amazing!
If you have ever seen a police interrogation, you may notice the detectives ask a question and then, after either no answer or an insufficient answer, they will just look at the suspect expectantly. This is done to put psychological pressure on the suspect to answer the question. Given this info, I would say so, at least in a face-to-face situation.
Online, I am not so sure. How many posts did you scroll past in the last week on Lemmy that asked a question you did not answer? How many did you answer? Even if you answered most, you would be in the minority; if you were not, we would expect far higher engagement rates on posts.
If I’m being honest, it is fairly slow. It takes a good few seconds to respond on a 6800 XT using the medium VRAM option. But that is the price to pay for running AI locally. Of course, a cluster should drastically improve the speed of the model.
You can run LLMs such as OpenLLaMA and GPT-2 on text-generation-webui. It is very similar to the Stable Diffusion web UI.
It can’t double the dBs. It will only add 3, as dB is a log scale and +/-3 dB is double/half.
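To sketch the arithmetic: decibels compare powers on a log scale, dB = 10·log10(P2/P1), so doubling the power adds about 3 dB regardless of the starting level.

```shell
# Doubling power adds 10 * log10(2) dB, i.e. about 3.01 dB.
awk 'BEGIN { printf "%.2f dB\n", 10 * log(2) / log(10) }'
# prints: 3.01 dB
```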
If you don’t want to help the slavers, here is a tip: you can destroy ladders.