

That doesn’t sound great. What benefits do you see in mirroring this behavior?
Well, guess I can’t deny such compelling evidence
As much of a prick as this guy is, I don’t think that’s true. The Behind the Bastards episode on him couldn’t substantiate it, at least
Yup, that’s what I was alluding to. While it may no longer be the case for transistors, they did manage to take 50-odd years to get there. Push that trend line from the figure out 50 years, heh (not saying you should; 5 seems much more conservative)
Take a look at Nvidia’s pace wrt Moore’s law (of FLOPS): https://netrouting.com/nvidia-surpassing-moores-law-gpu-innovation/
Or like looking at the early days of semiconductors and extrapolating that CPU speed will double every 18 months …smh these people
I don’t really follow your logic. How else would you propose to shape the audio in a way that is not “just an effect”?
Your analogy to real life doesn’t take into account that the audio source itself is moving, so there is an extra variable beyond just the stereo signal, which is what spatial audio is modelling
And your muffling example sounds a bit oversimplified, maybe? My understanding is that the spatial effect is produced by phase-shifting the L/R signals slightly
Finally, why not go further? “I don’t listen to speaker audio because it’s all just effects and mirages meant to sound like a real sound; what, only 2^16 discrete positions the diaphragm can be in?” :p
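To sketch what I mean by the phase/delay shaping: a crude interaural-time-difference model just delays one channel by a fraction of a millisecond, and your brain reads that as direction. Toy numpy example, my own numbers, not any particular product’s actual DSP:

```python
import numpy as np

def apply_itd(mono, sample_rate=44100, itd_seconds=0.0006):
    """Pan a mono signal by delaying the left channel relative to the
    right — a crude interaural-time-difference model (~0.6 ms is
    roughly the maximum ITD for a human-sized head)."""
    delay = int(round(itd_seconds * sample_rate))
    left = np.concatenate([np.zeros(delay), mono])   # arrives later
    right = np.concatenate([mono, np.zeros(delay)])  # arrives first
    return np.stack([left, right], axis=1)           # shape (n + delay, 2)

# 10 ms of a 1 kHz test tone, panned toward the right ear
t = np.linspace(0, 0.01, 441, endpoint=False)
stereo = apply_itd(np.sin(2 * np.pi * 1000 * t))
```

Real spatial audio layers a lot more on top (head-related transfer functions, level differences, head tracking), but the delay/phase trick is the core of the direction illusion.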
There is a huge difference though: one is making hardware and the other is copying books into your training pipeline
The copy occurs in the dataset preparation.
Privacy-preserving federated learning is a thing: essentially you train a local model and send the weight updates back to Google rather than the data itself. But it’s also early days, so who knows what vulnerabilities may exist
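Roughly the idea, as a toy federated-averaging sketch in numpy (my own made-up linear-regression setup, not Google’s actual implementation): each client trains locally and only the weight delta leaves the device, never the raw data.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, steps=10):
    """Run a few local gradient steps and return only the weight delta."""
    w = weights.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)  # linear-regression gradient
        w -= lr * grad
    return w - weights  # this delta is what gets sent to the server

rng = np.random.default_rng(0)
global_w = np.zeros(3)
# four clients, each with private data that never leaves the "device"
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(4)]

# Server aggregates the deltas; it never sees any client's X or y
deltas = [local_update(global_w, X, y) for X, y in clients]
global_w += np.mean(deltas, axis=0)
```

(The known attack surface is that weight updates can still leak information about the training data, which is the “who knows what vulnerabilities” part.)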
You could try Dexed, a Yamaha DX7 clone: https://github.com/asb2m10/dexed/releases
You want rebase instead. Merge just creates useless commits and makes the diffs harder to comprehend (all changes are shown at once, whereas with rebase you fix the conflicts in the commit where they happened)
Then instead of your branch-of-branch strat, you just rebase onto main daily and you’re golden when it comes time to PR
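The daily routine looks something like this (branch names are illustrative):

```shell
# Replay your feature commits on top of the latest main instead of merging
git checkout feature
git rebase main
# Conflicts surface in the commit that caused them; fix each one, then:
#   git add -A && git rebase --continue
# Rewritten history needs a (safe) force push:
#   git push --force-with-lease
```

`--force-with-lease` refuses to overwrite the remote branch if someone else pushed to it in the meantime, which makes the force push much less scary.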
Hmm, not so sure. He produced a digital signal whose spectrogram happened to be an image, and then played that digital signal to a bird. Dunno if an analogue spectrogram even makes sense as a concept. The only analogue part of the chain would be the bird’s vocalisations, right?