Why is that?
Honestly, latency/performance stuff. As in: how do VST synths ensure that they’ll synthesize in time to keep up with the audio buffer, depending on user hardware. I’m asking because I’ve seen/heard countless VST synths fail at this and sound like a clicky mess, and I feel like if I understood how it’s handled in code it would make more sense to me.
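For a sense of what that looks like in code: the short version is that the host calls the plugin with a fixed-size buffer and the plugin has until the next buffer is due to fill it, otherwise the driver underruns and you hear the clicks. This isn't the actual VST3/JUCE API, just a minimal sketch of that contract using python-sounddevice as a stand-in for the host's process callback; the sine "synth", buffer size, and sample rate are placeholders.

```python
# Sketch of the real-time contract a synth has to satisfy. The host/driver
# asks for `frames` samples at a time; at 44.1 kHz with a 256-sample buffer
# the callback has roughly 256 / 44100 ≈ 5.8 ms to fill the buffer. If
# synthesis takes longer, the driver underruns -> audible clicks.
import numpy as np
import sounddevice as sd

SAMPLE_RATE = 44100
BLOCK_SIZE = 256   # samples per callback; smaller = lower latency but tighter deadline
FREQ = 440.0       # plain sine stands in for the actual synth voice

phase = 0  # running sample counter so the sine stays continuous across buffers

def callback(outdata, frames, time_info, status):
    global phase
    if status.output_underflow:
        # A previous block missed its deadline: the driver had nothing to
        # play, which is exactly what produces the clicky mess.
        print("underrun!")
    t = (np.arange(frames) + phase) / SAMPLE_RATE
    outdata[:, 0] = 0.2 * np.sin(2 * np.pi * FREQ * t)
    phase += frames

with sd.OutputStream(samplerate=SAMPLE_RATE, blocksize=BLOCK_SIZE,
                     channels=1, callback=callback):
    sd.sleep(2000)  # play for 2 seconds
```

The hardware dependence comes in through the buffer size and sample rate the user picks: halve the buffer and the deadline halves too, so a synth that keeps up on one machine can start dropping out on a slower one.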
Spectrogram*
That’s actually an accurate description of what is happening: an audio file turned into a 2D image, with the x axis being time, the y axis being frequency, and color being amplitude.
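A minimal sketch of how that image gets built from the samples, in plain numpy (the function name and window/hop parameters are just for illustration): chop the signal into short overlapping windows, FFT each one, and stack the magnitudes into a 2D array where columns are time, rows are frequency, and values are amplitude.

```python
import numpy as np

def spectrogram(signal, sample_rate, win_size=1024, hop=256):
    window = np.hanning(win_size)
    frames = []
    for start in range(0, len(signal) - win_size, hop):
        chunk = signal[start:start + win_size] * window
        frames.append(np.abs(np.fft.rfft(chunk)))  # magnitude per frequency bin
    # shape (freq_bins, time_frames): y = frequency, x = time, value = amplitude
    S = np.array(frames).T
    freqs = np.fft.rfftfreq(win_size, d=1.0 / sample_rate)
    times = np.arange(len(frames)) * hop / sample_rate
    return S, freqs, times

# e.g. a 2-second 440 Hz tone shows up as one horizontal line at 440 on the y axis
sr = 22050
t = np.linspace(0, 2, 2 * sr, endpoint=False)
S, freqs, times = spectrogram(np.sin(2 * np.pi * 440 * t), sr)
```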
I get where you’re coming from, but I also think it’s fair to say archaeologists have at least some insight into what happens to glass over long periods of time. Hopefully Microsoft has consulted with them.
Good luck lmao