Ex-Redditor. I have big autism, big sad-all-the-time, and weird math energy.

Interests

  • extreme metal
  • audio engineering
  • electrical engineering
  • math
  • programming
  • anarchism

Dislikes

  • proprietary software
  • advertisements
  • paywalls
  • capitalism
  • bigotry
  • people who defend the above

Cake day: June 17th, 2023



  • because all of them can also be said of DC electrical current.

    I mean I can’t and wouldn’t force you to think a certain way, but that premise is false, and I thought I demonstrated as much in the previous comment.

    What I can add is that actual “DC current”, e.g. that delivered by a physical, nearly-constant current source that turned on at some point in time and ostensibly will be turned off before the heat death of the universe, does have an AC component! At the very least, it will turn on and off, which is a variation in time. When we design circuits for “DC current” (or voltage), we make the assumption that the AC component is too small to be considered, and thus we just pretend that we have an ideal DC current.

    So when we talk about DC current with any kind of precision, we really mean the constant part of the current waveform, which is equal to the average value of the signal. Blowing, as a set of related signals across all its media, is not a constant signal. A recording demonstrates this, and the requirement for sound to have a nonzero frequency also rules out the possibility of a DC sound.
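
    If it helps to see that in numbers, here’s a quick numpy sketch (my own toy example, nothing from the original thread) that splits a noisy signal into its DC part (the mean) and its AC part (everything else):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    fs = 48_000                          # sample rate in Hz (arbitrary choice)

    # Crude stand-in for a blowing sound: broadband noise plus a tiny offset.
    signal = 0.01 + 0.2 * rng.standard_normal(fs)

    dc = signal.mean()                   # the "DC value": the average of the waveform
    ac = signal - dc                     # everything left over is the AC component

    print(f"DC component: {dc:+.5f}")       # tiny, near the 0.01 offset
    print(f"AC RMS:       {ac.std():.5f}")  # ~0.2, i.e. the actual "sound"
    ```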

    Now I know that analogies are loose comparisons, and if your analogy aids your understanding then more power to you, but I genuinely cannot find any way that they are analogous.


  • [Air] being blown out of your mouth is similar to DC (direct current) and that it’s a continuous wave of air with frequency zero.

    Nope. You can’t have sound without a vibration. A vibration of zero frequency is constant for all time. When you blow air, you get a bunch of nonzero-frequency noise from the actual movement of air. Even if you could somehow blow a perfectly DC (0 Hz) wave, the fact that you started at some point in time mathematically implies that there are higher frequencies in the signal. [1]
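
    For the numerically inclined, here’s a rough sketch of that last claim: take a “perfect 0 Hz” signal that merely switches on partway through the recording and look at its spectrum. (The numbers are arbitrary; the point is only which bins come out nonzero.)

    ```python
    import numpy as np

    fs = 1000                            # sample rate in Hz, arbitrary
    t = np.arange(2 * fs) / fs           # two seconds of time points
    x = np.where(t >= 1.0, 1.0, 0.0)     # "perfect DC" that switches on at t = 1 s

    spectrum = np.abs(np.fft.rfft(x)) / len(x)

    # If x were truly 0 Hz for all time, every bin but the first would be zero.
    # The turn-on edge smears energy across nonzero frequencies instead:
    print(f"0 Hz bin:    {spectrum[0]:.4f}")   # ~0.5, the average value
    print(f"0.5 Hz bin:  {spectrum[1]:.4f}")   # ~0.32, definitely not zero
    print(f"10.5 Hz bin: {spectrum[21]:.4f}")  # ~0.015, still not zero
    ```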

    To convince yourself of this, record an audio clip of yourself blowing into a microphone. Any mic will do, just don’t overload it. [2] Open up the audio file in Audacity, Ardour, or any other audio program that can display waveforms. The waveform will be oscillating quite a bit.
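
    If you’d rather poke at numbers than squint at a waveform display, something like this does the same sanity check (the filename blow.wav is hypothetical, and scipy is just one convenient way to read a WAV file):

    ```python
    import numpy as np
    from scipy.io import wavfile

    # "blow.wav" is a stand-in name for your own recording.
    fs, data = wavfile.read("blow.wav")
    data = data.astype(np.float64)
    if data.ndim > 1:
        data = data.mean(axis=1)         # fold stereo down to mono

    print(f"mean (the DC part): {data.mean():.2f}")
    print(f"min / max:          {data.min():.0f} / {data.max():.0f}")
    print(f"std (the AC part):  {data.std():.2f}")
    # A truly constant signal would have min == max and std == 0.
    # A real blowing recording swings wildly around a near-zero mean.
    ```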

    This also indicates that approximating sound as a constant waveform is not a good engineering decision. As a hobbyist audio programmer and electrical engineering major, it would make my life a lot easier if blowing sounds were constant, because then I could do away with frequency analysis and digital filtering, which are so easy to screw up. We would simply sample the constant audio waveform once, in whatever medium [3] it is constant, and be done.

    [1] I actually had a much more detailed post in mind where I discussed Fourier series, Fourier transforms, and the exact definitions of DC values in electrical engineering, but unfortunately Jerboa ate the comment before I could submit it. Oh well, I can’t be mad since the app is so early in its lifecycle. If you need any help navigating any of the above, feel free to comment. I can also point you to more rigorous references if you need some reading material.

    [2] Really, I mean not to clip any element in the signal chain. All digital audio devices have a maximum representable level. If the signal has a bunch of flat tops, like it was going to keep going higher or lower and then some jerk clipped off the highest and lowest points with scissors, you’ve clipped the signal. This is especially important for blowing because it (intentionally) moves a lot more air than ordinary talking, so try to physically back away from the microphone when you blow. Technically you can damage a microphone by blowing at it, but you probably can’t blow hard enough to break it. It’s mostly a signal integrity issue.
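
    If you want to check a recording for those scissored-off flat tops programmatically, here’s a crude sketch (it assumes 16-bit samples, and the threshold and run length are my guesses, not any standard):

    ```python
    import numpy as np

    def looks_clipped(samples, full_scale=32767, run_length=4):
        """Flag runs of consecutive samples pinned at (or near) full scale."""
        pinned = np.abs(samples) >= full_scale - 1
        run = 0
        for p in pinned:                 # a few pinned samples in a row = flat top
            run = run + 1 if p else 0
            if run >= run_length:
                return True
        return False

    # Example: a sine wave "cut with scissors" at the 16-bit limits.
    t = np.arange(48_000) / 48_000
    wave = np.clip(40_000 * np.sin(2 * np.pi * 440 * t), -32767, 32767)
    print(looks_clipped(wave))           # True: the peaks are flattened
    ```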

    [3] I have been using the word “waveform” rather loosely. The sound is physically propagated through space as related waves in pressure and particle velocity. Microphones typically respond to changes in pressure, which are converted into an analog voltage waveform.

    Now the pressure waveform exists over time, but also over space. Mathematically, this expresses the fact that a sound might be louder or quieter depending on where in space you are relative to the sound’s source. If the electrical system is competently designed, the variation of the voltage across space should be negligible. This expresses the reality that audio played through headphones sounds the same regardless of where the player is located relative to the headphones, so long as all the wires are connected correctly.

    Ideally, once you have the pressure at a point, or more realistically an average over a small region of space, the reading is converted to a voltage that is directly proportional to the pressure waveform. In reality, there are going to be some nonlinearities, but the hope is that the voltage waveform stays as close to the original as possible under reasonable restrictions on frequency content and signal size, e.g. that the signal isn’t too fast or too big.
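
    As a toy illustration of “proportional, with some nonlinearities”: the tanh saturation below is a made-up stand-in, not how any particular microphone actually behaves, but it shows why small signals come through proportionally while big ones get squashed.

    ```python
    import numpy as np

    def mic_voltage(pressure, sensitivity=0.05, v_max=1.0):
        """Toy model: linear gain feeding a gentle, made-up saturation."""
        return v_max * np.tanh(sensitivity * pressure / v_max)

    quiet = mic_voltage(np.array([0.1, 0.2]))     # small signals: ~proportional
    loud = mic_voltage(np.array([50.0, 100.0]))   # big signals: squashed
    print(quiet)   # ~[0.005, 0.010], doubling pressure doubles voltage
    print(loud)    # ~[0.987, 1.000], both crushed toward the 1 V ceiling
    ```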

    Furthermore, the analog waveform needs to be sampled. This generates a new waveform that only exists at discrete points in time. Then, because computers have a finite number of storage bits, the sampled waveform is quantized, or forced into one of a discrete set of values. This is the digital waveform seen in Audacity or a similar program. Finally, your computer has to reverse that whole process so it can send a voltage signal to the headphones, which in turn generate the pressure variations that reach your ears.
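
    Here’s a minimal sketch of that sample-then-quantize step (the parameters are arbitrary, and I’m skipping the anti-aliasing filters and dither that real converters need):

    ```python
    import numpy as np

    fs = 8_000                                    # sample rate, arbitrary
    bits = 8                                      # storage budget per sample
    levels = 2 ** bits

    t = np.arange(fs) / fs                        # sampling: discrete time points
    analog = 0.8 * np.sin(2 * np.pi * 440 * t)    # stand-in for the analog voltage

    # Quantization: force each sample onto one of 2**bits evenly spaced values.
    codes = np.round((analog + 1.0) / 2.0 * (levels - 1)).astype(np.uint8)
    digital = codes / (levels - 1) * 2.0 - 1.0    # what playback reconstructs

    # The error is at most half a quantization step: (2/255)/2 ≈ 0.0039.
    print(f"worst-case error: {np.abs(analog - digital).max():.5f}")
    ```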

    We can use the term “audio waveform” interchangeably for all of these things, including the digital ones, because they carry (approximately; ideally exactly) the same information. This is not some hand-wavy term; information theory posits that the amount of information a signal carries can be quantified. The hand-wavy explanation, however, is that all of these waveforms are simply different ways to represent the same thing. For the purposes of classifying signals, sound signals should share common properties despite living in different media.